What is Big O Notation: Explaining with Simple Examples

2024-05-02

Written by: Juri Breslauer

Big O Notation

Big O notation is a mathematical notation that describes how an algorithm's running time, or the amount of memory it uses, grows with the size of the input. There is also “little o” notation, which gives a stricter upper bound on an algorithm's complexity (the function must grow strictly slower than the bound), but it is often harder to establish than Big O. In practice, Big O notation is used far more frequently because it is simpler to analyze and gives a sufficiently accurate measure of complexity in most cases.
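To make that distinction concrete, here is a short sketch of the standard formal definitions (f stands for the measured cost, g for the bound, and c and n₀ are the usual constants; none of these symbols appear elsewhere in this post):

```latex
% Standard definitions of Big O and little o (f = measured cost, g = bound).
\begin{align*}
  % Big O: f grows no faster than g, up to some constant factor, for large enough n.
  f(n) = O(g(n)) &\iff \exists\, c > 0,\ \exists\, n_0 :\ |f(n)| \le c\,|g(n)| \ \text{for all } n \ge n_0 \\
  % little o: f grows strictly slower than g; the bound must hold for every constant c.
  f(n) = o(g(n)) &\iff \forall\, c > 0,\ \exists\, n_0 :\ |f(n)| < c\,|g(n)| \ \text{for all } n \ge n_0
\end{align*}
```

For example, 3n² + 5n = O(n²), because a single constant (say c = 4) eventually dominates it, while 5n = o(n²), because n² outgrows 5n no matter how small the constant c is chosen.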

The Purpose of Big O Notation

Understanding this notation allows you to compare algorithms by how their cost grows with input size and to predict how a solution will scale before you run it.
