Algorithm Complexity Simplified

  • The concepts explained here might not help you crack competitive examinations like GATE, as those focus on mathematical definitions rather than practical usage. This post instead aims to simplify the concepts so they are easy to understand.
  • The prerequisites for this blog are an understanding of sorting algorithms, arrays, basic data structures (stack, queue, binary tree), and recursion.

Why Algorithm Complexity?

To understand the need for Algorithm Complexity, let's say we're buying a car. To simplify things, assume we've already fixed a budget and shortlisted two cars. Both vehicles share the same feature set and price; which one will you pick? In most cases, you'll go for the more fuel-efficient car, fuel efficiency being one of the performance criteria. Similarly, Algorithm Complexity represents an algorithm's performance criteria and helps us pick from the various available solutions.

What is Algorithm Complexity?

Algorithm Complexity essentially represents the rate of increase in time (taken by the algorithm) as the input size increases. For example, how much time does a web page take to load when 5 users are using it versus 5 million at the same time? And yes, this relates to the functions we all learned in school, also known as Asymptotic Functions (see this article for a refresher). Therefore, Algorithm Complexity is expressed in Asymptotic Notations: Big Omega, Big O, and Big Theta.
Note: An algorithm has 2 complexities: Time and Space. We're not going to talk about Space complexity here, as it's rarely a constraint now that high-memory hardware (hosts) is widely and cheaply available. It used to matter when memory was still measured in MBs and we were still using floppies.

Big Omega, Big O, and Big Theta

Big O (or simply 'O') describes an upper bound on time. Meaning, the algorithm will never take more than the Big O time. It can take less (on specific input values) but never more. For example, searching for the number 5 in the array {1, 2, 3, 4, 5} takes 5 iterations, while searching for the number 1 takes just 1 iteration. Since Big O describes the upper bound, the time complexity of searching a number in an array of size N is O(N).

Big Omega (or simply 'Ω') describes a lower bound on time. Meaning, the algorithm will never take less than the Big Ω time. It can take more (on specific input values) but never less. In the array example above, the complexity is Ω(1), as searching for a number won't take less than 1 iteration.
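
To make both bounds concrete, here's a minimal linear-search sketch (the method name indexOf is just for illustration). Finding the target at the first index is the Ω(1) case; scanning the whole array is the O(N) case.

public int indexOf(int nums[], int target) {
    for (int i = 0; i < nums.length; i++) {
        if (nums[i] == target) {
            return i; // found early: as few as 1 iteration, the Ω(1) case
        }
    }
    return -1; // scanned all N elements: the O(N) upper bound
}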

Big Theta (or simply 'Θ') describes a tight bound on time. Meaning, the algorithm takes exactly Big Θ time, no less and no more (give or take a constant). For example, if a web page is always displayed within 5 milliseconds when the number of users is between 50 and 60, the complexity is Θ(55) with +/- 5 users.
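
Here's a minimal sketch of a genuinely tight bound (the method name arraySum is illustrative): the loop always runs exactly N times regardless of the values, so the best and worst cases coincide and the runtime is Θ(N).

public long arraySum(int nums[]) {
    long total = 0;
    for (int num : nums) { // always exactly N iterations, whatever the values
        total += num;
    }
    return total; // best case == worst case, hence Θ(N)
}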

Moving forward, algorithm complexity will be expressed only in Big O, as the industry doesn't care much about Big Omega or Big Theta. But why? From a practical point of view, Big Omega and Big Theta don't provide much value. For example, "the website will display the web page in 1 millisecond at best", or "the website will display the web page in 5 milliseconds if the number of users is between 50 and 60". Both of these are special cases, and what we care about is the practical scenario: how the time grows as the traffic increases.

Drop the Constant and Non-Dominant terms

Let’s take the below code as an example and determine its complexity.

public void calculateSumAndProduct(int nums[]) {
    long sum = 0;
    long product = 1;
    for (int num : nums) {
        sum += num;
        product *= num;
    }
    System.out.println(sum);
    System.out.println(product);
}
The loop performs 2 instructions (one addition and one multiplication) per iteration, so for an array of size N the instruction count is of the order 2N plus a constant for the work outside the loop. Constants don't affect the growth rate, so we drop them: O(2N+1) simplifies to O(N). Non-dominant terms are dropped for the same reason: in O(N²+N), the N² term dominates as N grows, so it simplifies to O(N²). The charts below compare the runtimes with and without the dropped terms.

[Chart: Runtime comparison of O(N) and O(2N+1)]

[Chart: Runtime comparison of O(N²) and O(N²+N)]

Now consider the same calculation over two arrays:
public void calculateSumAndProduct(int nums1[], int nums2[]) {
    int sum = 0;
    int product = 1;
    for (int num : nums1) {
        sum += num;
        product *= num;
    }
    for (int num : nums2) {
        sum += num;
        product *= num;
    }
    System.out.println(sum);
    System.out.println(product);
}

Here the two loops run one after the other, not nested, so for array sizes A and B the instruction count is of the order 2A + 2B plus a constant, which simplifies to O(A + B). If both arrays have size N, that's O(2N) = O(N) after dropping the constant.

Best, Worst, and Expected Case Complexity

The Big O runtimes are categorized as Best, Worst, and Expected case. Let's understand this with the example of searching for a number in an array (like the linear-search sketch earlier).

  1. Best Case: When the number we are searching for is found at the first index of the array, the runtime is O(1).
  2. Worst Case: When the number we are searching for is at the last index of the array (or not present at all), the runtime is O(N).
  3. Expected (Average) Case: In real life, we rarely get too lucky or too unlucky, so the Expected Case is the most practical one. On average, the number is found around the middle, and the runtime is O(N/2) = O(N) once the constant is dropped.

Runtimes

Constant Runtime

An algorithm is said to run in constant time if it executes a fixed number of instructions irrespective of the input size. For example, see the code block below:

public int sum(int a, int b) {
    int c = a + b; // same two instructions no matter what a and b are: O(1)
    return c;
}

O(N) Runtime

An algorithm runs in O(N) when its number of instructions grows linearly with the input, usually because of a loop. For example, the code below executes its loop body exactly N times, where N is the array's size:

public void print(int nums[]) {
    for (int num : nums) { // loop body executes once per element: N iterations
        System.out.println(num);
    }
}

O(N²) or O(N^k)

When an algorithm has nested blocks of code, usually nested loops, the runtimes multiply. For example, the code below executes O(N²) instructions for an N×N matrix; k levels of nesting would give O(N^k).

public void print(int nums[][]) {
    for (int i = 0; i < nums.length; i++) {          // outer loop: N iterations
        for (int j = 0; j < nums[i].length; j++) {   // inner loop: N iterations per outer pass
            System.out.println(nums[i][j]);          // executes N * N = N² times in total
        }
    }
}

O(log N) and O(N log N)

Divide and Conquer algorithms usually have logarithmic runtimes. For example, searching for a number in a sorted array using the Binary Search algorithm takes O(log N) time: the search range is halved on every iteration until the number is found or the range cannot be divided further. Similarly, sorting algorithms such as Merge Sort, which halve the input and then do O(N) work at each level, run in O(N log N).

[Figure: Binary Search halving the INPUT on every iteration]
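
Here's a minimal iterative Binary Search sketch (assuming the array is sorted in ascending order); each pass of the loop halves the remaining range, which is where the O(log N) comes from.

public int binarySearch(int nums[], int target) {
    int low = 0;
    int high = nums.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // midpoint, written to avoid int overflow
        if (nums[mid] == target) {
            return mid;
        } else if (nums[mid] < target) {
            low = mid + 1;  // discard the left half
        } else {
            high = mid - 1; // discard the right half
        }
    }
    return -1; // range exhausted after about log N halvings
}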

O(2^N)

These usually come from recursive algorithms (see Appendix 1 for more details) in which each call spawns two further calls, so the number of calls roughly doubles with every increment of N, giving on the order of 2^N executions for an input N. For example, finding the Nth Fibonacci number:

public int fibonacci(int N) {
    if (N <= 1) {
        return 1; // base case
    }
    // each call spawns two more calls, roughly doubling the work per level: O(2^N)
    return fibonacci(N - 1) + fibonacci(N - 2);
}
[Figure: Fibonacci function calls after every execution]

Cheatsheet for Runtimes Comparison

[Chart: Comparison of all runtimes]

Ordered from best to worst: O(1) < O(log N) < O(N) < O(N log N) < O(N²) < O(2^N).

Appendix 1: Recursion

A function that calls itself is known as a recursive function. Problems that are unwieldy to solve iteratively can be broken down into smaller subproblems through recursion; iterating through a Binary Tree is a classic example. The iterative approach requires maintaining previous states explicitly in a Stack/Queue, while the recursive approach leverages the platform's call stack and keeps the code clean.
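
As an illustration, here's a minimal recursive in-order traversal (the Node class is assumed for the example). There is no explicit Stack: each pending call simply waits on the platform's call stack.

class Node {
    int value;
    Node left, right;
}

public void printInOrder(Node root) {
    if (root == null) {
        return;                      // base case: empty subtree
    }
    printInOrder(root.left);         // the call stack remembers where to resume
    System.out.println(root.value);
    printInOrder(root.right);
}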
