Algorithm Complexity Simplified

Algorithm complexity is a favorite interview topic at FAANG companies and is vital for choosing among the many available solutions to a single problem. If you go by textbook definitions and concepts alone, you will be left scratching your head in most cases. This blog post presents algorithm complexity in its simplest form, enough for real-life coding challenges and interviews.

  • The concepts explained here might not help you crack competitive examinations like GATE, which focus on mathematical definitions rather than practical usage. But they will simplify things enough to make those concepts easy to understand.
  • The prerequisites for this blog are an understanding of various sorting algorithms, arrays, data structures (stacks, queues, binary trees), and recursion.

Why Algorithm Complexity?

What is Algorithm Complexity?

Big Omega, Big O, and Big Theta

Drop the Constant and Non-Dominant terms

1.  public void calculateSumAndProduct(int nums[]) {
2.      long sum = 0;
3.      long product = 1;
4.      for (int num : nums) {
5.          sum += num;
6.          product *= num;
7.      }
9.      System.out.println(sum);
10.     System.out.println(product);
11. }

Every time the function calculateSumAndProduct executes, lines #2, #3, #9, and #10 will run exactly once, while lines #5 and #6 will run N times (where N is the size of the array “nums”). Based on this, the complexity is O(2N + 4). But as we discussed earlier, the goal of complexity analysis is to determine the rate of increase; we gain little from being exact, as seen from the rate-of-increase graphs below for O(N) and O(2N + 4). Hence, in Big O, constant terms are dropped, making the complexity of the above code O(N).

Runtime Comparison of O(N) and O(2N+4)

Similarly, the complexity of the non-dominant term is dropped. For example, O(N² + N) will become O(N²).

Runtime Comparison of O(N²) and O(N²+N)
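As a concrete sketch (illustrative only, not from the original post), a function with a doubly nested loop followed by a single loop performs N² + N iterations in total, which we still call O(N²):

```java
public class DominantTerm {
    // Counts the iterations performed: N² from the nested loops,
    // plus N from the final single pass → N² + N total, i.e. O(N²).
    static long countIterations(int[] nums) {
        long iterations = 0;
        for (int i = 0; i < nums.length; i++) {       // runs N times
            for (int j = 0; j < nums.length; j++) {   // runs N times per i → N²
                iterations++;
            }
        }
        for (int i = 0; i < nums.length; i++) {       // runs N more times
            iterations++;
        }
        return iterations;                            // N² + N
    }
}
```

For N = 10 this returns 110; as N grows, the N² term dominates the N term completely.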

While we drop Constant and Non-Dominant Terms, remember that the complexity cannot be reduced further when statements execute a different number of times. For example, the below code’s complexity will be O(N + M), as we cannot be sure what the values of N and M (the sizes of nums1 and nums2, respectively) will be. It’s as good as comparing mangoes with oranges.

public void calculateSumAndProduct(int nums1[], int nums2[]) {
    int sum = 0;
    int product = 1;
    for (int num : nums1) {
        sum += num;
        product *= num;
    }
    for (int num : nums2) {
        sum += num;
        product *= num;
    }
    System.out.println(sum);
    System.out.println(product);
}

Best, Worst, and Expected Case Complexity

Consider searching for a number in an unsorted array by scanning it from the first index:

  1. Best Case: When the number we are searching for is found at the first index of the array, the runtime becomes O(1).
  2. Worst Case: When the number we are searching for is at the last index of the array (or not present at all), the runtime becomes O(N).
  3. Expected (Average) Case: In real life, we are rarely too lucky or too unlucky, so the Expected Case is the most practical one. On average, the number is found halfway through the array, so the runtime is O(N/2) = O(N).
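The three cases above can be seen in a plain linear search; this sketch (class and method names included) is illustrative and not part of the original post:

```java
public class LinearSearch {
    // Returns the index of target in nums, or -1 if it is absent.
    static int indexOf(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            if (nums[i] == target) {
                return i; // best case: target at index 0 → O(1)
            }
        }
        return -1;        // worst case: scanned all N elements → O(N)
    }
}
```

Calling it with the target at index 0 exits after one comparison, while a missing target forces a full scan of all N elements.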

In most cases, the worst and expected case complexities are the same (as seen in the above example as well).


Constant Runtime

public int sum(int a, int b) {
    int c = a + b;
    return c;
}

The complexity of the code is O(2) as there are two instructions. Since we drop constant terms, such complexity is represented as O(1), which is also the best possible runtime complexity for an algorithm. More such examples are accessing an array element by index, checking whether a number is even or odd, looking up a value in a Map (on average), etc.
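The other O(1) examples mentioned above can be sketched as follows (the class and method names are illustrative, not from the original post):

```java
import java.util.Map;

public class ConstantTimeOps {
    // A single bitwise check, independent of the magnitude of n → O(1)
    static boolean isEven(int n) {
        return (n & 1) == 0;
    }

    // Array indexing is a direct offset computation → O(1)
    static int elementAt(int[] nums, int index) {
        return nums[index];
    }

    // HashMap lookup is O(1) on average (not in the worst case)
    static Integer lookup(Map<String, Integer> map, String key) {
        return map.get(key);
    }
}
```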

O(N) Runtime

public void print(int nums[]) {
    for (int num : nums) {
        System.out.println(num);
    }
}

O(N²) Runtime

public void print(int nums[][]) {
    for (int i = 0; i < nums.length; i++) {
        for (int j = 0; j < nums[i].length; j++) {
            System.out.println(nums[i][j]);
        }
    }
}

And similarly, if there are 3 nested loops, the complexity is O(N³), and so on.

O(log N) and O(N log N)

Binary Search halves the input on every iteration

Searching for a single number in a sorted array with Binary Search halves the input on every iteration, so a single search runs in O(log N). Now suppose we need to search for N numbers in the given sorted array using Binary Search. In that case, the complexity will be O(N log N), as we perform an O(log N) search for each of the N numbers. Typically, Divide and Conquer sorting algorithms, such as Quick Sort and Merge Sort, are O(N log N) runtime algorithms, which is also the best runtime for a comparison-based sorting algorithm.
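A minimal iterative Binary Search sketch (the class and method names are assumptions) shows why each iteration halves the remaining range:

```java
public class BinarySearch {
    // Returns the index of target in the ascending-sorted array nums, or -1.
    static int indexOf(int[] nums, int target) {
        int lo = 0, hi = nums.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (nums[mid] == target) {
                return mid;
            } else if (nums[mid] < target) {
                lo = mid + 1;             // discard the left half
            } else {
                hi = mid - 1;             // discard the right half
            }
        }
        return -1; // halving the range each pass gives O(log N) iterations
    }
}
```

Since the search range shrinks from N to N/2 to N/4 and so on, the loop runs at most log₂N + 1 times.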

O(2^N) Runtime

public int fibonacci(int N) {
    if (N <= 1) {
        return 1;
    }
    return fibonacci(N-1) + fibonacci(N-2);
}

The program will be executed as follows.

Fibonacci function calls after every execution

The complexity of the code is O(2^(N-1)) = O(2^N). Such complexities can be generalized as O(branches^depth): here, every call branches into two recursive calls, and the recursion tree is roughly N levels deep.
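To make the O(branches^depth) growth tangible, the sketch below wraps the same fibonacci in a call counter; the counter field is an illustrative addition, not part of the original example:

```java
public class FibCalls {
    static long calls; // counts how many times fibonacci is invoked

    static int fibonacci(int N) {
        calls++;
        if (N <= 1) {
            return 1;
        }
        // Two branches per call, recursion roughly N levels deep → ~2^N calls
        return fibonacci(N - 1) + fibonacci(N - 2);
    }
}
```

Computing fibonacci(5) already takes 15 calls, and the count roughly doubles each time N increases by one.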

Cheatsheet for Runtimes Comparison

Comparison of all runtimes

O(1) < O(log N) < O(N) < O(N log N) < O(N²) < O(2^N)

Appendix 1: Recursion

Note: Recursion is a complex topic in itself, and it’s not possible to cover it in a paragraph. I’ll be covering this topic in detail in another blog.