Big-Oh Notation
- We will typically measure the computational efficiency of an algorithm as the number of basic operations it performs as a function of its input length.
- That is, we study a function `T(n)`, where `n` is the input length and `T(n)` is the number of steps the algorithm takes on inputs of length `n` (see the counting sketch after this list).
- We don't want our analysis to be overly dependent on the low-level implementation and representation of the algorithm, so we introduce Big-Oh notation, which suppresses constant factors and lower-order terms.
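As a concrete illustration of counting basic operations, here is a minimal sketch (our own example, not from the notes) that counts the comparisons made by a naive linear search; in the worst case it performs `T(n) = n` comparisons on an input of length `n`.

```python
def linear_search(xs, target):
    """Naive linear search; returns (index of target or -1, comparisons made).

    Worst case (target absent): one comparison per element,
    so T(n) = n basic operations on an input of length n.
    """
    steps = 0
    for i, x in enumerate(xs):
        steps += 1          # count one basic operation (a comparison)
        if x == target:
            return i, steps
    return -1, steps

# Worst case: searching for an absent element makes exactly n comparisons.
for n in [10, 100, 1000]:
    _, steps = linear_search(list(range(n)), -1)
    print(n, steps)         # prints n, n
```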
Definition. Let `f, g` be two functions from `NN` to `NN`. Then:
1. `f = O(g)` if there exists a constant `c` such that `f(n) le c cdot g(n)` for every sufficiently large `n`;
2. `f = Omega(g)` if `g = O(f)`;
3. `f = Theta(g)` if `f = O(g)` and `f = Omega(g)`;
4. `f = o(g)` if for every `epsilon > 0`, `f(n) le epsilon cdot g(n)` for every sufficiently large `n`;
5. `f = omega(g)` if `g = o(f)`.
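To see how the `o(*)` definition is applied, here is a short worked instance (an illustrative example of ours, simpler than the exercise below): proving that `n = o(n^2)`.

```latex
% Claim: n = o(n^2).
% Given any \epsilon > 0, we must exhibit n_0 such that
% n \le \epsilon n^2 for all n \ge n_0.
% For n \ge 1, dividing by n shows n \le \epsilon n^2 is
% equivalent to 1 \le \epsilon n, i.e. n \ge 1/\epsilon.
% Hence n_0 = \lceil 1/\epsilon \rceil works:
\[
  \forall \epsilon > 0:\quad
  n \ge \lceil 1/\epsilon \rceil
  \;\Longrightarrow\;
  n \le \epsilon\, n^2 .
\]
```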
- As an example, you should work out formally that if `f(n) = 100n log n` and `g(n) = n^2`, then `f = O(g)`, `g = Omega(f)`, `f = o(g)`, and `g = omega(f)`.
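The sketch below is a numerical sanity check (not a proof, and our own illustration) of the `f = o(g)` claim: the ratio `f(n)/g(n) = 100 log(n)/n` tends to 0, so it eventually drops below any fixed `epsilon`.

```python
import math

def f(n):
    return 100 * n * math.log(n)    # f(n) = 100 n log n (natural log;
                                    # the base only changes the constant)

def g(n):
    return n ** 2                   # g(n) = n^2

# The ratio f(n)/g(n) = 100 log(n)/n tends to 0, consistent with f = o(g):
# for any epsilon > 0 the ratio eventually stays below epsilon.
for n in [10, 10**3, 10**6, 10**9]:
    print(f"n = {n:>10}: f(n)/g(n) = {f(n) / g(n):.6f}")
```

Note that at `n = 10` the ratio is about 23, i.e. larger than 1; the definitions only require the inequalities to hold for sufficiently large `n`.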