Why Big-O Matters at Google
Every Google interviewer will ask: "What’s the time and space complexity?" Not sometimes — every single time. If you can’t analyze complexity, you can’t pass the interview, period.
Big-O notation describes how an algorithm’s resource usage (time or space) scales as the input size grows. It’s not about measuring exact milliseconds — it’s about understanding the growth rate.
The Key Insight: We Care About the Worst Case
When Google asks for complexity, they usually want worst-case analysis unless stated otherwise. Why? Because real systems must handle edge cases.
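To make "worst case" concrete, here is a minimal illustrative sketch (the function name is ours, not from any interview pool): linear search finds a target in one step in the best case but scans everything in the worst case, and the worst case is the number you quote.

```python
# Linear search: best case O(1) (target is first), worst case O(n) (target absent).
def linear_search(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = [3, 1, 4, 1, 5, 9]
print(linear_search(data, 3))   # 0  (best case: found immediately)
print(linear_search(data, 7))   # -1 (worst case: scanned all n elements)
```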
| Notation | Name | Example | Input doubles → Time… |
|---|---|---|---|
| O(1) | Constant | Array access by index | Stays the same |
| O(log n) | Logarithmic | Binary search | +1 step |
| O(n) | Linear | Linear search | Doubles |
| O(n log n) | Linearithmic | Merge sort, Timsort | Slightly > doubles |
| O(n²) | Quadratic | Nested loops (brute force) | Quadruples |
| O(2ⁿ) | Exponential | Recursive Fibonacci (naive) | Becomes impossible |
| O(n!) | Factorial | Generating all permutations | Heat death of universe |
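A quick way to internalize the last column is to compute it. This illustrative snippet just prints step counts for two values of n:

```python
import math

# Sanity-check the table: what happens when n doubles?
for n in (1_000, 2_000):
    print(f"n={n:>5}  log2(n)={math.log2(n):5.2f}  "
          f"n*log2(n)={n * math.log2(n):>9,.0f}  n^2={n**2:>9,}")
# Doubling n adds ~1 to log2(n), slightly more than doubles n*log2(n),
# and quadruples n^2.
```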
How to Analyze Time Complexity
Rule 1: Drop Constants and Lower-Order Terms
```python
# This function does roughly 2n + 2 operations.
# Big-O: O(n) — we drop the constant factor and the additive term.
def example_linear(items):
    total = 0                  # O(1)
    for item in items:         # Loop runs n times
        total += item          # O(1) per iteration → n × O(1) = O(n)
    for item in items:         # Another n iterations
        total += item * 2      # O(1) per iteration → O(n)
    return total + 42          # O(1)

# Total: O(1) + O(n) + O(n) + O(1) = O(2n + 2) = O(n)
```
Rule 2: Nested Loops Multiply
```python
# Two nested loops over n → O(n) × O(n) = O(n²)
def example_quadratic(matrix):
    rows = len(matrix)
    cols = len(matrix[0])
    total = 0
    for row in range(rows):            # O(rows)
        for col in range(cols):        # O(cols) — runs for EACH row
            total += matrix[row][col]  # O(1)
    return total

# Total: O(rows × cols). If rows ≈ cols ≈ n, then O(n²).
```
Rule 3: Sequential Blocks Add
```python
# Two sequential blocks: O(n) + O(n²) = O(n²)
# The dominant (largest) term wins.
def example_mixed(items):
    # Block 1: O(n)
    for item in items:
        print(item)

    # Block 2: O(n²)
    for i in range(len(items)):
        for j in range(len(items)):
            print(items[i], items[j])

# Total: O(n) + O(n²) = O(n²) — the n² dominates
```
Rule 4: Different Inputs Get Different Variables
```python
# ⚠️ Common misconception: "nested loops = O(n²)"
# WRONG if the loops iterate over DIFFERENT collections!
def print_pairs(list_a, list_b):
    for a in list_a:      # O(a) where a = len(list_a)
        for b in list_b:  # O(b) where b = len(list_b)
            print(a, b)

# Total: O(a × b) — NOT O(n²)!
# If list_a has 1000 items and list_b has 5, that's 5,000 operations, not 1,000,000.
```
Rule 5: Logarithmic = Halving the Problem
```python
# Each iteration cuts the problem in half → O(log n)
# log₂(1,000,000) ≈ 20 — only 20 steps for a million elements!
def binary_search(sorted_list, target):
    left, right = 0, len(sorted_list) - 1  # O(1) setup
    while left <= right:                   # Runs at most ~log₂(n) times
        mid = (left + right) // 2          # O(1) — integer arithmetic
        if sorted_list[mid] == target:     # O(1) — array access
            return mid
        elif sorted_list[mid] < target:    # O(1)
            left = mid + 1                 # Cut search space in half
        else:
            right = mid - 1                # Cut search space in half
    return -1

# Total: O(log n) time, O(1) space
```
How to Analyze Space Complexity
Space complexity measures additional memory used by the algorithm (not counting the input).
```python
# O(1) space — uses a fixed number of variables regardless of input size
def find_max(items):
    max_val = items[0]
    for item in items:
        if item > max_val:
            max_val = item
    return max_val

# Only one extra variable (max_val), no matter how big the input.
```

```python
# O(n) space — creates a data structure proportional to input size
def reverse_list(items):
    reversed_items = []           # New list: grows to O(n) space
    for item in reversed(items):  # reversed() is O(1) — it's an iterator
        reversed_items.append(item)
    return reversed_items

# The reversed_items list grows to size n → O(n) space.
```

```python
# O(n) space — recursion uses the call stack!
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

# Each recursive call adds a frame to the call stack.
# Maximum depth = n → O(n) space even though we don't create any data structures.
```
> **Recursion space:** Every recursive call uses stack space. If a function recurses to depth d, it uses O(d) space for the call stack, even if it allocates no data structures of its own.
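When stack depth is a concern, the standard fix is to convert the recursion into a loop. As a sketch, an iterative factorial keeps the O(n) time but drops the space to O(1):

```python
# O(n) time, O(1) space — one accumulator, no call-stack growth.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_iterative(5))  # 120
```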
Amortized Complexity
Some operations are expensive occasionally but cheap on average. The classic example: list.append().
```python
# Python lists are dynamic arrays that grow geometrically when full
# (the classic textbook model doubles the capacity).
# Most appends are O(1) — just write to the next slot.
# Occasionally, the array is full and must resize:
#   1. Allocate a new array of ~2× the size
#   2. Copy all n elements → O(n) for this ONE append
#   3. Future appends are O(1) until the next resize
#
# Averaged over n appends: total copy work = n + n/2 + n/4 + ... ≈ 2n
# Per operation: 2n / n = O(1) amortized.
items = []
for i in range(1000):
    items.append(i)  # O(1) amortized per call, O(n) total
```
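To see the resize pattern directly, here is a toy doubling array (the `DynamicArray` class is illustrative, not how CPython actually implements lists) that counts its resizes and copies:

```python
# A minimal dynamic array: doubling capacity makes appends O(1) amortized.
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.resizes = 0
        self.copies = 0  # total elements copied across all resizes

    def append(self, value):
        if self.size == self.capacity:
            self.capacity *= 2             # double, don't grow by 1
            new_data = [None] * self.capacity
            for i in range(self.size):     # O(n) copy, but rare
                new_data[i] = self.data[i]
            self.data = new_data
            self.resizes += 1
            self.copies += self.size
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
print(arr.resizes)  # 10   — resizes at sizes 1, 2, 4, ..., 512
print(arr.copies)   # 1023 — total copy work stays proportional to n
```

Growing by a constant amount instead of doubling would make the total copy work quadratic; the geometric growth is what makes the average O(1).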
Common Complexity Patterns to Recognize
| Pattern | Time | Example |
|---|---|---|
| Single loop | O(n) | Linear search, sum of array |
| Two nested loops (same input) | O(n²) | Brute-force two sum |
| Nested loops (different inputs) | O(a × b) | Comparing two lists |
| Divide and conquer | O(n log n) | Merge sort, quicksort (average) |
| Binary search on sorted data | O(log n) | Finding element in sorted array |
| Hash map lookups in a loop | O(n) | Two sum with hash map |
| BFS / DFS on a graph | O(V + E) | Number of islands, shortest path |
| Sorting + iteration | O(n log n) | Many "two-pointer after sort" problems |
| Recursive with 2 branches | O(2ⁿ) | Naive Fibonacci, brute-force subsets |
| DP with memoization | O(states × work_per_state) | Fibonacci O(n), coin change O(n × m) |
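As one worked instance of the "hash map lookups in a loop" pattern, here is a sketch of two sum with a hash map:

```python
# Two sum with a hash map: O(n) time, O(n) space.
# The brute-force nested loops would be O(n²) time, O(1) space.
def two_sum(nums, target):
    seen = {}  # value -> index
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:  # O(1) average-case dict lookup
            return [seen[complement], i]
        seen[num] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```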
What to Say in a Google Interview
> **Script for explaining complexity:** state the worst-case time complexity and point at what drives it ("the sort dominates, so O(n log n)"), state the space complexity including any recursion stack, then name the tradeoff you made ("I spent O(n) extra space on a hash map to get O(1) lookups").
Mental Model: Time-Space Tradeoff
In almost every algorithm problem, you can trade space for time:
- Brute force: minimal space, maximal time (nested loops)
- Hash map: use O(n) extra space to get O(1) lookups
- Sorting: O(n log n) preprocessing to enable binary search later
- Memoization: cache computed results to avoid recomputation
Google wants to see you discuss this tradeoff explicitly. Don’t just jump to the optimal solution — show you understand why the tradeoff exists.
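Fibonacci shows the tradeoff in miniature. A sketch comparing the naive O(2ⁿ) recursion with a memoized version that spends O(n) cache space to get O(n) time (using `functools.lru_cache`):

```python
from functools import lru_cache

# Naive: O(2ⁿ) time, recomputing the same subproblems exponentially often.
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized: O(n) time, O(n) space; the cache is the space-for-time trade.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, computed instantly;
                     # fib_naive(50) would take hours
```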