Introduction
I’m talking about the (re)definition given by Donald Knuth, who changed the notation used by Hardy and Littlewood in analytic number theory to make it more useful for computational complexity theory.
Definition
Publication: BIG OMICRON AND BIG OMEGA AND BIG THETA (Donald E. Knuth, 1976)
“After discussing this problem with people for several years, I have come to the conclusion that the following definitions will prove to be most useful for computer scientists:
O(f(n)) denotes the set of all g(n) such that there exist positive constants C and n₀ with |g(n)| ≤ Cf(n) for all n ≥ n₀.
Ω(f(n)) denotes the set of all g(n) such that there exist positive constants C and n₀ with g(n) ≥ Cf(n) for all n ≥ n₀.
Θ(f(n)) denotes the set of all g(n) such that there exist positive constants C, C′, and n₀ with Cf(n) ≤ g(n) ≤ C′f(n) for all n ≥ n₀.”
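As a quick worked example (my own, not from the paper): g(n) = 3n² + 5n belongs to Θ(n²), because with C = 3, C′ = 4, and n₀ = 5 we have 3n² ≤ 3n² + 5n ≤ 4n² for all n ≥ 5. The same upper constant shows g(n) ∈ O(n²), and the lower one shows g(n) ∈ Ω(n²).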
Common misuses
While searching the web, I have seen these definitions misused in two common ways.
- Nearly everyone uses O(f(n)) to describe computational complexity, when Θ(f(n)) would often give much more precise information about the behavior of g(n).
- In many posts, people wrongly claim that O(f(n)) is the worst case, Ω(f(n)) the best case, and Θ(f(n)) the average case of an algorithm’s computational complexity. In reality, the worst-case, best-case, and average-case running times are different functions of the input size, and each of them can be bounded with O, Ω, or Θ (see the sketch after this list).
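To illustrate both points, here is a minimal Python sketch of my own (not from the paper): a linear search whose best-case running time is Θ(1) and whose worst-case running time is Θ(n). “Worst case” and “best case” say which inputs we analyze; O, Ω, and Θ say how tightly we bound the resulting function.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):   # at most len(items) iterations
        if value == target:
            return i                    # best case: target is first, 1 comparison -> Θ(1)
    return -1                           # worst case: target absent, n comparisons -> Θ(n)

print(linear_search([4, 8, 15, 16], 15))  # -> 2

# Saying "linear search is O(n)" is correct but only an upper bound;
# saying "its worst case is Θ(n)" pins the growth rate down exactly.
# Likewise the best case is Θ(1): best, worst, and average case are
# different functions of n, and each can be bounded with O, Ω, or Θ.
```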