March 20, 2026 · 3 min read

Big-O Notation Explained With Real Code Examples

What is Big-O Notation?

In computer science, Big-O notation is the mathematical language we use to describe how the runtime of an algorithm grows as the input size ($n$) gets arbitrarily large. We don’t care if an algorithm takes exactly 4.2 seconds on your laptop. We care about what happens when the input size goes from 1,000 to 1,000,000. Does the time double? Does it quadruple? Does it explode entirely?

Let’s drop the formal limits and dive into what these time complexities actually look like in everyday code.

$O(1)$ – Constant Time

An algorithm is $O(1)$ if its runtime does not depend on the size of the input. Whether you give it an array of 10 items or 10 billion items, it takes the exact same number of steps.

def get_first_element(arr):
    # This always takes 1 operation, regardless of len(arr)
    return arr[0]

Accessing an array by its index, or looking up a key in a well-distributed Hash Map / Dictionary, are classic examples of $O(1)$ time complexity.
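The hash-map case can be sketched with a plain Python dictionary (the `user_ages` data here is just made up for illustration):

```python
user_ages = {"alice": 30, "bob": 25, "carol": 41}

def get_age(ages, name):
    # One hash computation + one bucket lookup, regardless of dict size.
    # Average-case O(1); rare hash collisions can degrade this.
    return ages.get(name)

get_age(user_ages, "bob")  # → 25
```

Whether the dictionary holds three entries or three million, the lookup does the same amount of work on average.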

$O(n)$ – Linear Time

An algorithm is $O(n)$ if its runtime grows directly in proportion to the input size. If you double the input, the runtime roughly doubles.

def find_max(arr):
    max_val = arr[0]
    for num in arr:
        if num > max_val:
            max_val = num
    return max_val

In the worst-case scenario, the loop must inspect every single element in the array exactly once. Therefore, the time scales linearly with $n$.

$O(n^2)$ – Quadratic Time

This is where things start to slow down. An algorithm is $O(n^2)$ if the runtime is proportional to the square of the input size. If you multiply the input by 10, the runtime increases by 100.

The hallmark of $O(n^2)$ algorithms is nested loops iterating over the same dataset.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]

For an array of size $n$, the outer loop runs $n$ times, and the inner loop runs up to $n - 1$ times (about $n/2$ on average, but Big-O drops constant factors). $n \times n = n^2$. Avoid quadratic algorithms for massive datasets!

$O(\log n)$ – Logarithmic Time

Logarithmic time is incredibly fast. An algorithm is $O(\log n)$ if the number of operations increases very slowly as $n$ grows. This usually happens when the algorithm cuts the problem size in half with every step.

The ultimate example is Binary Search. If you have a sorted array, you don’t need to check every element. You look at the middle. If your target is smaller, you discard the right half and repeat on the left half.

Even if you have an array of 1,000,000,000 elements, $\log_2(1,000,000,000) \approx 30$. It takes a maximum of 30 checks to find your answer!
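A minimal iterative version of binary search might look like this (it assumes `arr` is already sorted in ascending order):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target is bigger: discard the left half
        else:
            hi = mid - 1   # target is smaller: discard the right half
    return -1

binary_search([1, 3, 5, 7, 9], 7)  # → 3
```

Each pass through the loop halves the search space, which is exactly where the $\log_2 n$ bound comes from.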

$O(n \log n)$ – Linearithmic Time

This is the theoretical lower bound for comparison-based sorting algorithms: you cannot sort an arbitrary array using only greater-than/less-than comparisons faster than $O(n \log n)$ in the worst case.

Algorithms like Merge Sort and Heap Sort achieve this. They typically work by recursively dividing the array into halves (which provides the $\log n$ factor) and then iterating through the elements to merge them back together (which provides the $n$ factor).
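As a sketch, here is a simple (non-in-place) Merge Sort that makes both factors visible: the recursive halving supplies the $\log n$ levels, and the merge step does $O(n)$ work per level:

```python
def merge_sort(arr):
    # Base case: arrays of 0 or 1 elements are already sorted
    if len(arr) <= 1:
        return arr

    # Divide: split in half and sort each side (log n levels of halving)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Conquer: merge the two sorted halves (O(n) work per level)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side may have leftovers
    merged.extend(right[j:])
    return merged

merge_sort([5, 2, 9, 1, 7])  # → [1, 2, 5, 7, 9]
```

Production sorts (like Python's built-in Timsort) are more sophisticated, but they share this same $O(n \log n)$ worst-case shape.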

Conclusion

Understanding Big-O notation is the first step toward writing scalable software. When you write a function, always ask yourself: “How many nested loops do I have?” and “Does the problem size shrink at each step?” By identifying the bottlenecks, you can optimize your code to handle millions of users with ease.
