Tech Tuesday: Growth Behavior (Big O Notation)

Today’s Tech Tuesday will let you in on a bit of computer science secret language. Sometimes, when discussing a solution to a problem, an engineer will say something like “that’s too slow – it’s O of n squared” (that’s the letter “O,” not a zero). What does that mean? Well, it is a shorthand way of describing the growth behavior of the time or space required by an algorithm as n (the number of things we are processing) grows large. More formally it is known as “Big O Notation.”

As we have seen in the last few posts on computational complexity, for small n the differences between algorithms may be relatively small, but that can change dramatically as n grows. The mysterious-sounding Big O notation is a relatively simple way of formalizing the idea that the relevant growth behavior of an algorithm’s cost is captured entirely by its dominant term. Let me explain using an example.

Suppose algorithm A takes 1,000 steps (a constant) and algorithm B takes n + 20 steps. For small n, say n = 20, B will be much faster than A. But for any n > 980, algorithm B will be slower and increasingly so. For n = 1 million, B is already one thousand times slower than A. What Big O Notation does is give us a way of expressing that A and B belong to different growth classes or growth orders (hence the “O” as in order). In this example, A belongs to O(1) which is constant “growth” and B belongs to O(n) which is linear growth.
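To make the crossover concrete, here is a small sketch in Python. The two functions below are not real algorithms – they just model the step counts from the example (1,000 steps for A, n + 20 steps for B) so we can see where B stops being the faster choice:

```python
def steps_a(n):
    """Step count for algorithm A: a constant 1,000 steps, i.e. O(1)."""
    return 1000

def steps_b(n):
    """Step count for algorithm B: n + 20 steps, i.e. O(n)."""
    return n + 20

# B wins for small n, they tie at n = 980, and A wins for everything after.
for n in [20, 980, 981, 1_000_000]:
    winner = "B" if steps_b(n) < steps_a(n) else ("tie" if steps_b(n) == steps_a(n) else "A")
    print(f"n = {n:>9}: A takes {steps_a(n):>9}, B takes {steps_b(n):>9} -> {winner}")
```

Running this shows B taking just 40 steps at n = 20 but over a million at n = 1,000,000, while A sits at 1,000 throughout.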

We write O(1) and O(n) as if they were functions themselves, but they are better thought of as sets of all functions that share the same growth order. For instance, n + 20, 3n - 5 and 17n all share the same growth order O(n). Wait, what? 17n would seem to grow a lot faster than n + 20, no? 17 times as fast, to be precise. But this is not what matters. Why? Because it still grows way more slowly than, say, O(n^2) as n grows large (that’s n squared).
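One way to see why the constant factor drops out: ask what happens when n doubles. Every function in O(n) doubles, regardless of its constant, while n squared quadruples. A quick numeric check (the function names here are just labels for the example):

```python
# Constant factors affect absolute speed, but not how cost scales with n.
funcs = {
    "n + 20": lambda n: n + 20,   # O(n)
    "17n":    lambda n: 17 * n,   # O(n), just 17x more work per item
    "n^2":    lambda n: n * n,    # O(n^2)
}

# Ratio of cost at n = 2 million to cost at n = 1 million:
for name, f in funcs.items():
    print(f"{name:>6}: {f(2_000_000) / f(1_000_000):.4f}")
```

Both linear functions come out at (essentially) 2, while n^2 comes out at exactly 4 – doubling n doubles linear work but quadruples quadratic work, and that gap keeps compounding.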

One relatively easy way to think about the difference is that if your computer gets 17 times faster, then an algorithm that used to take 17n steps becomes as fast as one that took n steps, for any value of n. But if your computer gets 17 times faster, then n^2 / 17 will still be way more steps than n for large n. This is captured in the formal definition of Big O notation, which says roughly that f(n) ∈ O(g(n)) if there exist two positive values M and N such that for every n > N, f(n) ≤ M·g(n).
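We can turn that definition into a small spot-checker. A program can only test finitely many values of n, so this is evidence for a choice of M and N rather than a proof, but it makes the definition tangible:

```python
def witnesses_big_o(f, g, M, N, upto=100_000):
    """Spot-check the Big O definition: does f(n) <= M * g(n)
    hold for every n with N < n < upto? (Evidence, not proof.)"""
    return all(f(n) <= M * g(n) for n in range(N + 1, upto))

# 17n is in O(n): the witnesses M = 17, N = 1 work.
assert witnesses_big_o(lambda n: 17 * n, lambda n: n, M=17, N=1)

# n^2 is NOT in O(n): these witnesses fail (and in fact no M, N can work,
# since n^2 / n = n grows without bound).
assert not witnesses_big_o(lambda n: n * n, lambda n: n, M=17, N=1)
```

Note the asymmetry: to show membership you only need to exhibit one working pair (M, N); to show non-membership you need an argument that no pair can work, which a finite check alone cannot give you.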

So what is the growth order of an algorithm that takes n^2 + 20n + 1000 steps? Well, it turns out to be O(n^2). It is pretty easy to see why. For any n > 20, the n^2 term will exceed the 20n term. At n = 20, the 20n + 1000 terms add up to 1,400 while n^2 is only 400, but the lower-order terms fall behind quickly. Setting M = 4 does the trick: 4n^2 - (n^2 + 20n + 1000) = 3n^2 - 20n - 1000, which is non-negative once n ≥ 22. So for every n ≥ 22, n^2 + 20n + 1000 ≤ 4n^2.
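A quick numeric spot-check of that bound, including the value just below the threshold where it still fails:

```python
def f(n):
    """The step count from the example: n^2 + 20n + 1000."""
    return n * n + 20 * n + 1000

# The bound f(n) <= 4n^2 fails just below the threshold...
assert f(21) > 4 * 21 * 21   # 1861 > 1764

# ...and holds from n = 22 onward (spot-checked up to 100,000).
assert all(f(n) <= 4 * n * n for n in range(22, 100_000))
```

So M = 4 with N = 21 are valid witnesses for f(n) ∈ O(n^2) under the definition above.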

That’s why I said earlier that what Big O notation does is reduce everything to the dominant term. Here n^2 is the dominant term, and the 20n and the constant don’t really matter as n gets large. There are many different growth orders and we will encounter a variety of them as Tech Tuesday goes on. While understanding order of growth is super important, you should keep in mind that for some practical problems you may encounter, n may in fact be known to be small and stay small. In that case things like large constants may matter (more on that soon).

#tech tuesday #complexity