
Algorithm Time Complexity (Big O)

Calculate real-world execution times for common Big-O complexity classes as input size scales. Compare O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ), and O(n!) at any input size.


Inputs: Elements N = 1.00e+4 (minimum 1); operations per second.
Note: At N = 10,000, exponential classes such as O(2^N) overflow standard numeric types, and their projected runtimes exceed the lifespan of the universe.

Real-World Execution Times (N = 10,000)

Constant: O(1) | 10.0 ns | Array lookups, hash maps
Logarithmic: O(log N) | 132.9 ns | Binary tree search
Linear: O(N) | 100.0 µs | Standard iterations, filters
Log-Linear: O(N log N) | 1.3 ms | Merge Sort, Quick Sort
Quadratic: O(N²) | 1.00 sec | Nested loops, Bubble Sort
Exponential: O(2^N) | Memory overflow / exceeds universe lifespan | Brute-forcing cryptography

Quick Answer: What do the Big-O complexity classes mean in practice?

Big-O notation describes how an algorithm's runtime scales with input size n. The class hierarchy from fastest to slowest growth: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!). Real-world impact: sorting 1,000,000 elements with O(n log n) Merge Sort takes ∼20 ms at 10⁹ operations/sec; with O(n²) Bubble Sort it takes ∼17 minutes — a 50,000× difference from a single class step. At n = 10,000,000 the gap grows to roughly 430,000×. This is why Big-O class determines the feasibility of an algorithm at scale, not just its speed — O(2ⁿ) stops being practical around n ≈ 60 on a single core, and even the fastest supercomputers push that limit up by only a few dozen elements.
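
The arithmetic behind these figures is just operation count divided by operations per second. The sketch below is a minimal illustration, not the calculator's actual code; the 10⁹ ops/sec default and the overflow cutoff are assumptions. It reproduces the values in the reference table further down and works in log space so O(2ⁿ) and O(n!) never overflow.

```python
import math

def estimated_seconds(n: int, ops_per_sec: float = 1e9) -> dict:
    """Estimate wall-clock time for common complexity classes at input size n.

    Works in log10 space so the explosive classes (2^n, n!) never overflow;
    anything above ~10^300 operations is reported as infinite.
    """
    log10_ops = {
        "O(1)":       0.0,
        "O(log n)":   math.log10(math.log2(n)),
        "O(n)":       math.log10(n),
        "O(n log n)": math.log10(n * math.log2(n)),
        "O(n^2)":     2 * math.log10(n),
        "O(2^n)":     n * math.log10(2),
        "O(n!)":      math.lgamma(n + 1) / math.log(10),  # log10(n!) via the gamma function
    }
    return {cls: 10 ** (lg - math.log10(ops_per_sec)) if lg < 300 else math.inf
            for cls, lg in log10_ops.items()}

for cls, secs in estimated_seconds(1_000_000).items():
    label = f"{secs:.3e} s" if math.isfinite(secs) else "effectively never"
    print(f"{cls:>10}: {label}")
```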

Big-O Complexity Class Reference

Operation counts at n = 1,000,000, assuming 10⁹ operations/second (a single modern CPU core). Exponential and factorial classes are listed for reference only — they are computationally infeasible at this scale and only practical for very small n.

Complexity Class | Operations at n = 1M | Time at 10⁹ ops/s | Example Algorithms
O(1) | 1 | 1 nanosecond | Hash table lookup, array index access, stack push/pop
O(log n) | 20 | 20 nanoseconds | Binary search, balanced BST lookup, heap operations
O(n) | 1,000,000 | 1 millisecond | Linear search, single array traversal, counting sort
O(n log n) | 20,000,000 | 20 milliseconds | Merge Sort, Timsort, Heapsort, FFT
O(n²) | 10^12 | 17 minutes | Bubble Sort, Insertion Sort, naive string matching
O(n³) | 10^18 | 31.7 years | Naive matrix multiplication, Floyd-Warshall (all-pairs shortest path)
O(2ⁿ) | 10^301,030 | Cosmologically impossible | Brute-force TSP, power set enumeration, naive subset sum
O(n!) | 10^5,565,709 | Cosmologically impossible | Permutation enumeration, brute-force traveling salesman
Big-O ignores constant factors. In practice, an O(n log n) algorithm with a large constant can be slower than an O(n²) algorithm for small n (e.g., Insertion Sort outperforms Merge Sort for n < 32 and is used as the base case in Timsort for exactly this reason). At n = 1,000,000 the gap between complexity classes dwarfs any 10× constant factor; at n = 100, that constant factor can decide which algorithm wins.
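
To see the small-n crossover concretely, here is a rough timing sketch; results vary by machine and interpreter, and the crossover point of roughly 32 to 64 elements is an observation rather than a guarantee.

```python
import random
import timeit

def insertion_sort(a):
    """Textbook O(n^2) insertion sort on a copy of the input."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Textbook O(n log n) top-down merge sort."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step costs O(n)
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

for n in (16, 64, 256):
    data = [random.random() for _ in range(n)]
    reps = 20_000 // n
    t_ins = timeit.timeit(lambda: insertion_sort(data), number=reps) / reps
    t_mrg = timeit.timeit(lambda: merge_sort(data), number=reps) / reps
    print(f"n={n:4d}  insertion: {t_ins * 1e6:7.1f} µs   merge: {t_mrg * 1e6:7.1f} µs")
```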

Common Algorithm Complexities by Type

Algorithm | Best Case | Average Case | Worst Case | Space
Binary Search | O(1) | O(log n) | O(log n) | O(1)
Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n)
Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n)
Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1)
Insertion Sort | O(n) | O(n²) | O(n²) | O(1)
Hash Table (lookup) | O(1) | O(1) | O(n)* | O(n)
DFS / BFS (graph) | O(V+E) | O(V+E) | O(V+E) | O(V)
Dijkstra's (binary heap) | O((V+E) log V) | O((V+E) log V) | O((V+E) log V) | O(V)
*Hash table worst case O(n) occurs only when all keys collide into the same bucket (degenerate case). Well-implemented hash tables with a good hash function achieve O(1) amortized. V = vertices, E = edges in graph algorithms.

Pro Tips & Common Big-O Mistakes

Do This

  • Profile before optimizing — measure the actual bottleneck before changing algorithms. Most programs spend 90% of their time in 10% of the code. A function with O(n²) complexity that's called once on n=100 is not worth replacing with O(n log n). The same function called n times in an outer loop is O(n³) total and absolutely worth addressing. Use a profiler (Python cProfile, Java JProfiler, Go pprof) to find where actual wall-clock time is spent, then apply algorithmic improvements to those confirmed hotspots; a minimal cProfile sketch follows this list.
  • Consider both time AND space complexity together. Replacing an O(n²) time algorithm with an O(n log n) one by memoizing O(n) intermediate results is only beneficial if you have the memory. In embedded systems (microcontrollers with 256 KB RAM) or when processing datasets larger than RAM, space complexity often dominates the design decision. Always ask: does the O(n) space cost of my faster algorithm fit in the target environment?
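
A minimal cProfile sketch, assuming CPython; the two functions are stand-ins for whatever your real hotspot candidates are.

```python
import cProfile
import pstats
from collections import Counter

def slow_pairs(data):
    """Deliberately O(n^2): compares every pair of elements."""
    return sum(1 for i in range(len(data)) for j in range(len(data)) if data[i] == data[j])

def fast_counts(data):
    """O(n) replacement: counts duplicates with a hash map."""
    return sum(c * c for c in Counter(data).values())

data = list(range(1_000)) * 2

profiler = cProfile.Profile()
profiler.enable()
slow_pairs(data)    # suspected hotspot
fast_counts(data)
profiler.disable()

# Sort by cumulative time and print the worst offenders; slow_pairs dominates.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```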

Avoid This

  • Don't assume O(n log n) is always better than O(n²) for your specific n. For n < 32, Insertion Sort (O(n²)) consistently outperforms Merge Sort (O(n log n)) due to cache locality and lower constant factors. This is so well known that Python's built-in sort (Timsort) falls back to a binary Insertion Sort for runs shorter than 64 elements. Java's Arrays.sort() uses Dual-Pivot Quicksort for primitives rather than Merge Sort for the same reason. Big-O class tells you asymptotic behavior; real runtime for small n depends on constants and cache behavior.
  • Don't mistake amortized complexity for per-operation complexity. Dynamic array append (Python list.append, C++ std::vector::push_back) is O(1) amortized — but individual appends that trigger a resize are O(n). If you are appending to a list inside a loop and the resize happens to coincide with a performance-sensitive moment (e.g., a real-time deadline), O(1) amortized does not protect you. In hard real-time systems (robotics, audio processing, avionics), amortized O(1) structures can be inappropriate; you need guaranteed O(1) worst-case data structures instead (ring buffers, fixed-capacity arrays); see the ring-buffer sketch below.
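
A simplified sketch of the guaranteed-O(1) alternative mentioned above: a fixed-capacity ring buffer that never reallocates. Real-time code would typically do this in C or Rust with preallocated memory; the Python version here only illustrates the idea.

```python
class RingBuffer:
    """Fixed-capacity FIFO queue: every operation is worst-case O(1), no resizing ever."""

    def __init__(self, capacity: int):
        self._buf = [None] * capacity   # storage allocated once, up front
        self._head = 0                  # index of the oldest element
        self._size = 0

    def push(self, item) -> bool:
        """Append an item; returns False instead of resizing when full."""
        if self._size == len(self._buf):
            return False
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item
        self._size += 1
        return True

    def pop(self):
        """Remove and return the oldest item; raises if empty."""
        if self._size == 0:
            raise IndexError("pop from empty RingBuffer")
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item

rb = RingBuffer(4)
for x in range(5):
    print("pushed", x, rb.push(x))   # fifth push reports False rather than reallocating
print(rb.pop(), rb.pop())            # 0 1
```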

Frequently Asked Questions

What is the difference between O (Big-O), Ω (Omega), and Θ (Theta) notation?

O (Big-O) is an upper bound: f(n) = O(g(n)) means f(n) grows no faster than g(n). It is the worst-case ceiling. You can say O(n²) about an O(n) algorithm and be technically correct (n grows no faster than n²) but not useful. Ω (Big-Omega) is a lower bound: f(n) = Ω(g(n)) means f(n) grows at least as fast as g(n). Comparison-based sorting has Ω(n log n) lower bound — no comparison sort can do better in the worst case. Θ (Theta) is a tight bound: f(n) = Θ(g(n)) means f(n) grows at the same rate as g(n) (both upper and lower bounded). Merge Sort is Θ(n log n) — it's always O(n log n) and always Ω(n log n) regardless of input. In casual software engineering usage, people often say “Big-O” when they mean Θ (tight bound). On interview problems and in research contexts, be precise: say O when you mean upper bound only, Θ when the bound is tight.

Why is O(n log n) the practical lower bound for all comparison-based sorting?

Any comparison-based sorting algorithm must distinguish between n! possible orderings of n elements. Each comparison produces 1 bit of information (this element is greater or less than that element). To distinguish n! outcomes, you need at least log₂(n!) comparisons — by Stirling's approximation, log₂(n!) ≈ n log₂(n). Therefore, no comparison-based sort can do better than Ω(n log n) comparisons in the worst case. This is a mathematical impossibility, not an engineering limitation. You can beat O(n log n) only by not using comparisons: Counting Sort, Radix Sort, and Bucket Sort are O(n) because they exploit properties of the key distribution (integer keys, limited range) rather than comparisons. Python's sorted() and Java's Arrays.sort() cannot beat this bound because they are comparison-based and must work for any data type with a comparison operator.
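
A minimal Counting Sort sketch shows how a non-comparison sort sidesteps the bound; it assumes small non-negative integer keys, and its O(n + k) cost is only attractive when the key range k is comparable to n.

```python
def counting_sort(values, max_value):
    """O(n + k) sort for non-negative integers in [0, max_value]; no element comparisons."""
    counts = [0] * (max_value + 1)
    for v in values:                  # O(n): tally each key
        counts[v] += 1
    out = []
    for key, c in enumerate(counts):  # O(k): emit each key `c` times
        out.extend([key] * c)
    return out

print(counting_sort([5, 3, 8, 3, 0, 5], max_value=8))   # [0, 3, 3, 5, 5, 8]
```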

How do I calculate the Big-O of a nested loop or recursive function?

For nested loops: multiply the complexities. A loop from 0 to n containing a loop from 0 to n is O(n × n) = O(n²). If the inner loop runs from 0 to i (not to n), the total is roughly n²/2 iterations, which is still O(n²) (constants dropped). For a loop within a loop within a loop, all running to n: O(n³). For sequential code blocks: add complexities, then drop lower-order terms: O(n²) + O(n log n) + O(n) = O(n²). For recursive functions: write the recurrence relation and apply the Master Theorem. Merge Sort: T(n) = 2T(n/2) + O(n) (splits into 2 halves, merging costs O(n)). Master Theorem: a = 2, b = 2, f(n) = Θ(n), log₂(2) = 1, so f(n) = Θ(n¹) → Case 2 (equal) → T(n) = O(n log n). Binary Search: T(n) = T(n/2) + O(1) → a = 1, b = 2, log₂(1) = 0 → T(n) = O(log n).
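
A quick empirical check of both rules follows; the function names are illustrative, and the point is that the measured counts track n²/2 and n·log₂(n) respectively.

```python
import math

def triangular_loop_ops(n):
    """Inner loop runs 0..i-1, so total iterations are n(n-1)/2, i.e. Theta(n^2)."""
    ops = 0
    for i in range(n):
        for _ in range(i):
            ops += 1
    return ops

def merge_sort_ops(n):
    """Counts the work in T(n) = 2T(n/2) + n, the Merge Sort recurrence."""
    if n <= 1:
        return 0
    half = n // 2
    return merge_sort_ops(half) + merge_sort_ops(n - half) + n

for n in (1_000, 2_000, 4_000):
    print(f"n={n}: nested loop = {triangular_loop_ops(n):>9} (≈ n²/2 = {n * n // 2}), "
          f"merge sort = {merge_sort_ops(n):>6} (≈ n·log₂n = {int(n * math.log2(n))})")
```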

What is P vs NP and how does it relate to Big-O complexity?

P (Polynomial time) is the class of problems solvable in O(nᵏ) for some constant k. All problems in the complexity table above with polynomial complexity are in P. NP (Nondeterministic Polynomial) is the class of problems where a proposed solution can be verified in polynomial time, even if finding the solution may not be feasible in polynomial time. The Traveling Salesman Problem (TSP), Graph Coloring, and Boolean Satisfiability (SAT) are NP problems. P vs NP asks whether every problem whose solution can be quickly verified can also be quickly solved. If P = NP, all NP problems have polynomial-time algorithms (most cryptography would collapse). If P ≠ NP (believed by virtually all computer scientists), some problems are fundamentally hard to solve but easy to verify. The P vs NP question is one of the seven Millennium Prize Problems with a $1 million prize — unsolved since 1971. In practice: if you find an NP problem in your code (typical signs: the optimal solution requires trying all possible combinations), use approximation algorithms, heuristics, or dynamic programming to find near-optimal solutions in polynomial time.
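
For example, a nearest-neighbor heuristic for TSP runs in polynomial time at the cost of optimality; the sketch below uses made-up coordinates and gives no approximation guarantee.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: O(n^2) work instead of the O(n!) exhaustive search."""
    unvisited = set(range(1, len(points)))
    tour = [0]                                  # start arbitrarily at city 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 5), (4, 1), (6, 6), (2, 2)]     # hypothetical coordinates
tour = nearest_neighbor_tour(cities)
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour:", tour, " length:", round(length, 2))
```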
