2024-05-02
Written by: Juri Breslauer
Big O notation is a mathematical notation that describes how an algorithm's running time or memory usage grows as the size of its input grows. There is also "little o" notation, which expresses a strict upper bound: informally, f(n) = O(g(n)) means f grows no faster than a constant multiple of g for large n, while f(n) = o(g(n)) means f grows strictly slower than g. Little o is usually harder to establish, so in practice Big O is used far more often: it is simpler to analyze and gives a sufficiently accurate measure of complexity in most cases.
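To make this concrete, here is a minimal Python sketch (the function names are illustrative, not from any particular library) contrasting an O(n) linear scan with an O(n²) pairwise check. Doubling the input roughly doubles the work in the first case and roughly quadruples it in the second.

```python
from typing import List


def contains(values: List[int], target: int) -> bool:
    """O(n): a single pass over the input, so work grows linearly with len(values)."""
    for v in values:
        if v == target:
            return True
    return False


def has_duplicate_pair(values: List[int]) -> bool:
    """O(n^2): compares every pair of elements, so doubling the input
    roughly quadruples the number of comparisons."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False


if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9]
    print(contains(data, 5))          # True, found in one pass
    print(has_duplicate_pair(data))   # True, 1 appears twice
```

Note that Big O describes how the work scales, not the absolute speed: for small inputs the quadratic function may well finish faster than a linear one with a large constant factor.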
Understanding this notation allows you to: