
The Big O Notation - Explained

Published Jul 11, 2022

Introduction

When developing algorithms, ensuring their correctness is typically the first priority. But what happens if an algorithm is theoretically correct yet becomes useless because of the time required to run it? Or what if the amount of space it needs is so substantial that the machine might run out of memory?

In this article, we will talk about time and space complexity in the context of algorithm analysis. We will examine the Big O notation, which is the most widely used notation for describing the efficiency of algorithms.

We'll also talk about how the Big O, Big Theta, and Big Omega notations differ from one another, and we'll work through a few practical examples to show how time and space complexity can actually be determined.

In computer science, the Big O notation is used to represent the time complexity of an algorithm: the number of operations the algorithm needs to perform in order to complete its task. Big O is simply a way of expressing that complexity in a general, machine-independent way.

Big O vs Big Theta vs Big Omega

Big O, Big Theta, and Big Omega are all ways of bounding the growth rate of an algorithm's running time. Big O is an upper bound: the running time grows no faster than the stated function. Big Omega is a lower bound: the running time grows at least as fast as the stated function. Big Theta is a tight bound: the running time is bounded both above and below by the stated function. For example, merge sort runs in Θ(n log n) time, because its running time is both O(n log n) and Ω(n log n).
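For readers who want the formal statements, the three bounds are conventionally defined as follows (a standard textbook formulation, stated here for reference):

\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0, \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0, \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)).
\end{aligned}
\]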

Best Case vs Worst Case vs Expected Case

When considering the running time of an algorithm, we often think in terms of the best case, worst case, and expected case. The best case covers the inputs for which the algorithm runs fastest, the worst case covers the inputs for which it runs slowest, and the expected case describes the average running time over typical inputs.

The best case and worst case can be determined by analyzing the algorithm. The expected case can be determined analytically, by averaging over a distribution of inputs, or estimated empirically by running the algorithm on a large number of randomly generated inputs and averaging the running times.

The best case is important when the algorithm will be used on inputs that are known to be easy. For example, consider the sorting algorithm quicksort. The best case for quicksort occurs when every pivot splits the array into two roughly equal halves. In that case, quicksort runs in O(n log n) time, which matches the lower bound for comparison-based sorting.

The worst case is important when the algorithm will be used on inputs that are known to be difficult. Again consider quicksort: with a naive pivot choice, such as always picking the first or last element, the worst case is an array that is already sorted or sorted in reverse order. On such input, every pivot produces a maximally unbalanced split, and quicksort runs in O(n^2) time, far slower than its O(n log n) average.
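To make this concrete, here is a minimal quicksort sketch in JavaScript. It uses the simplest possible pivot rule (always take the last element), which is exactly what makes sorted or reverse-sorted input the worst case; production implementations choose pivots more carefully.

// Minimal quicksort sketch (last-element pivot, not in-place).
// Balanced splits give O(n log n); sorted or reverse-sorted input
// makes one partition empty at every step, degrading to O(n^2).
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1];
  const left = [];
  const right = [];
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] < pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return [...quicksort(left), pivot, ...quicksort(right)];
}

console.log(quicksort([5, 3, 8, 1, 9, 2])); // [1, 2, 3, 5, 8, 9]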

The expected case is important when the inputs to the algorithm are unknown. For example, consider the problem of finding a needle in a haystack. The best case is when the needle is right at the top of the haystack, and the worst case is when it is at the very bottom. But on average the needle will be somewhere in the middle, and in practice the expected case is often the most useful one to consider when analyzing an algorithm.
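The needle-in-a-haystack analogy maps directly onto linear search. Here is a minimal sketch in JavaScript (the function and variable names are mine, chosen to match the analogy):

// Linear search: best case O(1) (needle is the first element),
// worst case O(n) (needle is last or absent),
// expected case about n/2 comparisons on random input, which is still O(n).
function findNeedle(haystack, needle) {
  for (let i = 0; i < haystack.length; i++) {
    if (haystack[i] === needle) return i; // found it
  }
  return -1; // not in the haystack
}

console.log(findNeedle(["hay", "hay", "needle", "hay"], "needle")); // 2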

Space Complexity

In addition to time complexity, space complexity is also a significant factor to consider. It is a measure of the amount of memory an algorithm uses, usually expressed as a function of the size of the input. Space complexity can be thought of as another measure of an algorithm's efficiency, since it is often closely related to time complexity.

Calculating Space Complexity with Big O Notation

In this basic example, we will calculate the space complexity of an algorithm using Big O notation.

Let's say we have an algorithm that requires the following steps:

  1. Allocate an array of size n.
  2. Initialize all elements of the array to 0.
  3. Loop through the array and set the value of each element to its index value.

The space complexity of this algorithm is O(n), because we allocate an array of n elements; the loop itself only needs a constant amount of extra space, so the total memory used is proportional to n.
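Expressed as JavaScript, those three steps might look like this (the function name is just for illustration):

// Allocates one array of n elements -> O(n) space.
// The loop reuses that array plus one counter, so it adds only O(1) extra.
function fillWithIndices(n) {
  const arr = new Array(n).fill(0); // steps 1 and 2: allocate and zero-initialize
  for (let i = 0; i < n; i++) {
    arr[i] = i; // step 3: store each element's index
  }
  return arr;
}

console.log(fillWithIndices(5)); // [0, 1, 2, 3, 4]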

Calculating Time Complexity with Big O Notation

Now let's take a closer look at time complexity.

Consider the following JavaScript code:

function foo(n) {
  // The loop body executes exactly n times.
  for (let i = 0; i < n; i++) {
    console.log("Hello, world!");
  }
}

The time complexity of this code is O(n), or linear time. This is because the number of operations (in this case, console.log calls) is directly proportional to the size of the input, n. If the input size is doubled, the number of operations is also doubled.
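For contrast, consider a hypothetical variant with a nested loop; doubling the input size now quadruples the number of operations:

// The inner statement runs n * n times -> O(n^2), quadratic time.
function fooSquared(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log("Hello, world!");
    }
  }
}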

In conclusion, Big O notation is a powerful tool that allows us to compare the time and space complexity of algorithms. By understanding how to calculate the time and space complexity of an algorithm, we can design more efficient algorithms that run faster and use less memory.

Thanks for reading.
