
Introduction to Big-O Notation

Big-O notation is a mathematical way to describe the performance or complexity of an algorithm in terms of how it scales with the input size n. It provides an upper bound on the growth rate of an algorithm's time or space requirements, typically in the worst-case scenario, which makes it a convenient yardstick for comparing algorithms.

By ignoring constants and lower-order terms, Big-O keeps only the dominant factor that affects performance as n becomes large.
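
Formally (this is the standard textbook definition, spelled out here for reference rather than taken from the chapter text), saying that f(n) is O(g(n)) means that beyond some threshold input size, f is bounded above by a constant multiple of g:

```latex
% Standard formal definition of Big-O:
% f(n) = O(g(n)) when, past some threshold n_0, f is bounded by a constant multiple of g.
f(n) = O\bigl(g(n)\bigr)
  \iff
  \exists\, c > 0,\ \exists\, n_0 \ge 0 :\quad
  f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
```

The constant c in this definition is exactly why multiplicative constants are dropped: 3n + 5 and n are both O(n), because any constant factor can be absorbed into c.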

Key Points:

  • Purpose: Measures efficiency, either as time complexity (execution time) or space complexity (memory usage).
  • Focus: Describes worst-case performance unless specified otherwise.
  • Notation: Expressed as O(f(n)), where f(n) is a function describing the upper bound.
  • Why ignore constants?: Because as n grows, large-scale trends matter more than small-scale differences (see the sketch after this list).
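
As a quick illustration of that last point, here is a minimal Python sketch (the function name two_passes is made up for this example): a function that walks a list twice does roughly 2n + 1 steps, yet its Big-O class is still O(n), because the constant factor 2 and the extra +1 vanish as n grows.

```python
def two_passes(items):
    """Does roughly 2n + 1 'steps', but is still O(n): the constant 2 is dropped."""
    total = 0
    for x in items:      # first pass: n steps
        total += x
    count = 0
    for x in items:      # second pass: another n steps
        count += 1
    return total, count  # about 2n + 1 steps overall -> O(n)
```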

📌 Example

  • A simple loop from 1 to n → O(n) (linear time).
  • Nested loops, each running n times → O(n²) (quadratic time). Both are sketched in code below.
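
A minimal Python sketch of these two shapes (the function names sum_linear and count_pairs_quadratic are illustrative, not from the text):

```python
def sum_linear(items):
    """One pass over the input: the loop body runs n times -> O(n)."""
    total = 0
    for x in items:
        total += x
    return total


def count_pairs_quadratic(items):
    """Two nested passes: the inner body runs n * n times -> O(n^2)."""
    pairs = 0
    for a in items:
        for b in items:
            pairs += 1
    return pairs
```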

Key takeaway:
Big-O notation helps you predict scalability, not actual execution time. An algorithm with a worse Big-O complexity can even be faster for small inputs, but the one with the lower complexity will usually win as n grows large (see the sketch below).
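
To make that concrete, here is a small hypothetical comparison (the step counts 100·n and n² are invented constants purely for illustration): an O(n) algorithm with a large constant factor does more work than an O(n²) algorithm when n is small, but far less once n is large.

```python
def linear_steps(n):
    # Hypothetical O(n) algorithm with a large constant factor: 100 steps per item.
    return 100 * n

def quadratic_steps(n):
    # Hypothetical O(n^2) algorithm with a small constant factor: 1 step per pair.
    return n * n

for n in (10, 100, 1_000, 10_000):
    print(n, linear_steps(n), quadratic_steps(n))
# n = 10:     linear 1,000 steps vs quadratic 100       -> quadratic "wins" for small n
# n = 100:    both do 10,000 steps                      -> crossover point (100*n = n^2)
# n = 1,000:  linear 100,000 vs quadratic 1,000,000     -> linear pulls ahead
# n = 10,000: linear 1,000,000 vs quadratic 100,000,000 -> linear wins decisively
```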

Next: Common Big-O Complexities