# Big O notation ("order of" notation)

A way to describe how much time and memory a piece of code requires as its input grows.


## Why does this method exist?

Because it gives us a simple function relating running time to input size.

**Pros**: a simple function that describes the efficiency of code. **Cons**: a little hard to create and understand. This method comes in handy when building big data analysis projects, where efficiency is a main concern.

**Common big O time complexities:**

- O(1): constant time; in code, a single statement
- O(n): linear time; one loop over an input of length n
- O(n²): quadratic time; a nested loop in the code
- O(n^k): polynomial time; algorithm examples: quicksort and insertion sort (both O(n²) in the worst case)
- O(log n): logarithmic time; algorithm example: binary search
- O(n log n): linearithmic time; examples: merge sort, heapsort
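The classes above can be sketched with small Python functions; the function names here are just illustrative, not from any library.

```python
def constant(items):
    # O(1): one step, regardless of input size
    return items[0]

def linear(items):
    # O(n): one pass over the input
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    # O(n^2): a nested loop over the same input
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def logarithmic(sorted_items, target):
    # O(log n): binary search halves the search range each step
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Note how the shape of the code hints at the complexity: no loop, one loop, nested loops, or a loop that shrinks the problem by half each iteration.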

**How does it work?**

O(n) reads like f(x): the O stands for "order of", and n is the length of the input to a given function. When calculating big O we look at the operations being measured and determine the following:

- First, go through the code line by line and establish whether each line is O(1), O(n), etc., using the worst-case scenario.
- Add up the big O of every line to get the *final big O*. It should look something like O(10 + 3n + n²), depending on your algorithm.
- Lastly, keep only the fastest-growing term of n. In our example: O(n²).
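The steps above can be walked through on a toy function; this is a minimal sketch with made-up names, annotated line by line.

```python
def sum_and_pairs(items):
    n = len(items)            # O(1)
    total = 0                 # O(1)
    for x in items:           # O(n): one loop
        total += x
    count = 0                 # O(1)
    for a in items:           # O(n^2): nested loop
        for b in items:
            count += 1
    # Adding up: O(3 + n + n^2); keep the biggest term -> O(n^2)
    return total, count
```

Here the nested loop dominates, so the constants and the lone O(n) loop vanish from the final answer.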

**TIP:** there is one more use of big O when measuring a program's efficiency: you can also use it to describe memory usage (space complexity). It works just like time complexity, but here n refers to the amount of data stored while the program is running.
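As a quick illustration of space complexity (the function names are hypothetical), compare two ways of working with n values: one keeps every result in memory, the other keeps only a running total.

```python
def squares_list(n):
    # O(n) space: stores all n results while the program runs
    return [i * i for i in range(n)]

def sum_of_squares(n):
    # O(1) space: a single accumulator, no matter how large n is
    total = 0
    for i in range(n):
        total += i * i
    return total
```

Both take O(n) time, but their memory footprints differ: the first grows with the input, the second stays constant.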