Big-O notation: many people talk about it, but what is it? It answers the question of how fast the code in front of you is. You have an algorithm that is supposed to solve a task, all well and good, but how long does it take? And more importantly, how does it behave with a lot of data?
Google, for example, needs just 0.33 seconds to return 68,300,000 results for the search query "c++". This shows how good the performance of the search engine is. And that is exactly our keyword: performance. When we talk about performance in code, we mean how long the code takes to solve a task. You can also say time complexity. So the question is: how much effort does the code need to solve the problem?
A small example from the Linked Lists post, where we want to insert something at the end. There we have a next and a prev pointer, as well as head and tail. Does it make sense to walk through the whole list from head just to add an element at the end?
Short answer: no, that's nonsense. Since we have tail and a prev pointer, it makes far more sense to start directly at tail and add the element there. Why? Simply because we don't have to walk through the whole list of n elements; we can skip that part and go straight to the operation. In short: work smarter, not harder.
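Here is a minimal sketch of that idea, assuming a doubly linked list with head/tail and next/prev pointers like in the Linked Lists post (the names here are illustrative, not the exact code from that post):

struct Node
{
    int data;
    Node* next = nullptr;
    Node* prev = nullptr;
};

struct List
{
    Node* head = nullptr;
    Node* tail = nullptr;

    // O(1): we jump straight to tail instead of walking all n elements from head.
    void pushBack(int value)
    {
        Node* node = new Node{value};
        if(tail == nullptr)     // empty list: the new node is head and tail at once
        {
            head = tail = node;
            return;
        }
        node->prev = tail;
        tail->next = node;
        tail = node;
    }
};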
If we look at the diagram, we can see what types of efforts exist. Basically, everything greater than O(n) is in the bad zone, and everything equal to or less than O(n) is in the good zone.
There are a few rules regarding the notation. These are the following:
- Efforts that consist of one or more operations and are executed only once count as constant -> O(1).
- Constants are omitted, because n is considered to be very large -> O(2n) = O(n).
- The largest effort is always taken as the total effort -> complexities like O(n² + n) become O(n²).
- Strive to reduce the complexity, or ideally even reach O(1).
- Always consider the worst case.
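To see these rules in action, here is a small made-up function (purely illustrative, not from any of the later examples) with the effort of each part noted as a comment:

void rulesDemo(int n)
{
    int sum = 0;                   // one operation, executed once -> O(1)

    for(int i = 0; i < n; i++)     // a single loop up to n -> O(n)
        sum += i;

    for(int i = 0; i < n; i++)     // two nested loops up to n -> O(n^2)
        for(int j = 0; j < n; j++)
            sum += i * j;

    // Total: O(1) + O(n) + O(n^2) -> only the largest effort counts -> O(n^2)
}

Now let's go through the individual efforts one by one, starting with the simplest case: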
#include <iostream>

void foo()
{
    std::cout << "bar" << std::endl;
}
Here we see a function with a single operation that is executed only once. The effort here is O(1).
#include <iostream>

void foo(int n)
{
    for(int i = 0; i < n; i++)
    {
        std::cout << "bar" << std::endl;
    }
}
Here we have built in a loop that runs up to n. So here the effort is O(n).
#include <iostream>

void foo(int n)
{
    // i must start at 1: doubling 0 would never get anywhere
    for(int i = 1; i < n; i = i * 2)
    {
        std::cout << "bar" << std::endl;
    }
}
Here we have almost the same function, only that we now double the counter per run (note that i has to start at 1, otherwise the doubling would loop forever on 0). The counter reaches n after about log₂(n) doublings, so the effort here is O(log n).
#include <iostream>

void foo(int n)
{
    int m = n / 2;
    for(int i = 0; i < n; i++)
    {
        for(int j = 0; j < m; j++)
        {
            std::cout << "bar" << std::endl;
        }
    }
}
Here we now have 2 loops, one that runs up to n and one that runs up to m. This results in an effort of O(n · m). Suppose m = n/2. Wouldn't that be O(n · n/2) = O(n²/2), and since constants are omitted, simply O(n²)? Yup, it would be.
#include <iostream>

void foo(int n)
{
    for(int i = 0; i < n; i++)
    {
        for(int j = 0; j < n; j++)
        {
            std::cout << "bar" << std::endl;
        }
    }
}
Here we have code with the effort O(n²), because both loops run up to n and are nested in each other.
#include <iostream>

void foo(int n)
{
    for(int i = 0; i < n; i++)
    {
        for(int j = 0; j < n; j++)
        {
            for(int k = 0; k < n; k++)
            {
                std::cout << "bar" << std::endl;
            }
        }
    }
}
Here we have a cubic effort. In this code there are 3 loops nested inside each other, and all three run up to n. So the effort here is O(n³).
We have an effort of O(2ⁿ) with, for example, the game Towers of Hanoi or with password attacks (brute force).
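Towers of Hanoi makes the exponential growth easy to see: to move n discs, you have to move n - 1 discs twice, plus one disc in between. A minimal recursive sketch (the function and peg names are mine, not from a previous post):

#include <iostream>

// Moves n discs from peg 'from' to peg 'to', using 'via' as a helper peg.
// Each call spawns two calls with n - 1 discs, so the number of
// moves is 2^n - 1 -> effort O(2^n).
void hanoi(int n, char from, char to, char via)
{
    if(n == 0)
        return;
    hanoi(n - 1, from, via, to);
    std::cout << "disc " << n << ": " << from << " -> " << to << std::endl;
    hanoi(n - 1, via, to, from);
}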
We also have an effort of O(2ⁿ) if we calculate the Fibonacci numbers recursively, because every call spawns two further calls, so the number of calls roughly doubles with each level.
int fib(int num)
{
    if(num <= 1)
        return num;
    return fib(num - 2) + fib(num - 1);
}
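As a small outlook on the rule "strive to reduce the complexity": the recursive Fibonacci recomputes the same values over and over. If we cache each result once it is known (memoization), the effort drops from O(2ⁿ) to O(n). A minimal sketch of that idea (the cache handling is mine):

#include <vector>

// Same Fibonacci, but every value is computed only once -> O(n).
// cache[i] == -1 means "not computed yet".
int fibMemo(int num, std::vector<int>& cache)
{
    if(num <= 1)
        return num;
    if(cache[num] != -1)
        return cache[num];
    return cache[num] = fibMemo(num - 2, cache) + fibMemo(num - 1, cache);
}

// Usage: std::vector<int> cache(n + 1, -1); then call fibMemo(n, cache).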
I hope I could give you a little insight into the topic 😄