Tuesday 4 November 2014

Week 8: Time Complexity

            The post in Week 7 talks about algorithms being linear, quadratic or cubic.  Such descriptions are neither precise nor fancy enough to keep computer scientists satisfied.  So we have this thing called the time complexity of an algorithm, which describes how the algorithm's runtime grows as its input size grows.  The two duplicate-checkers sketched below show the difference in practice.
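
            To make that concrete, here is a quick sketch in Python (the function names and the duplicate-checking task are my own illustration, not something from the course): two ways to test whether a list contains a duplicate, one whose number of steps grows linearly with the length of the list and one whose steps grow quadratically.

    def has_duplicate_quadratic(lst):
        # Compare every pair of items: about n * (n - 1) / 2
        # comparisons for a list of n items, so quadratic growth.
        for i in range(len(lst)):
            for j in range(i + 1, len(lst)):
                if lst[i] == lst[j]:
                    return True
        return False

    def has_duplicate_linear(lst):
        # Remember each item in a set: one pass over the list,
        # so the number of steps grows linearly with n.
        seen = set()
        for item in lst:
            if item in seen:
                return True
            seen.add(item)
        return False

Roughly speaking, doubling the length of the list doubles the work done by the second function but quadruples the work done by the first.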

Here Comes the Notation!


            Time complexity is described in big-O notation.  It looks something like this.  Let $f: N \to R^{\geq 0}$ be a sequence that describes the runtime of an algorithm $P$ and $g: N \to R^{\geq 0}$ be a sequence that describes some comparison runtime (e.g. linear, quadratic, cubic).  If we say that $f \in O(g)$, then we mean that:

\[\exists c \in R^+, \exists B \in N, (\forall n \in N, n \geq B \implies f(n) \leq cg(n))\]

Intuitively, what these little symbols say is that once the input is large enough, the runtime of $P$ is bounded above by a constant multiple of $g$; that is, $P$ performs at least as well as $g$, up to a constant factor.  Likewise, if we say that $f \in \Omega (g)$, then we mean that:

\[\exists c \in R^+, \exists B \in N, (\forall n \in N, n \geq B \implies f(n) \geq cg(n))\]

Again, this is just a fancy way of saying that once the input is large enough, the runtime of $P$ is bounded below by a constant multiple of $g$; that is, $P$ performs no better than $g$, up to a constant factor.
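
            To see the definitions in action, here is a small worked example of my own (not from lecture).  Take $f(n) = 3n^2 + 5n$ and $g(n) = n^2$.  Whenever $n \geq 5$ we have $5n \leq n^2$, so

\[\forall n \in N, n \geq 5 \implies 3n^2 + 5n \leq 3n^2 + n^2 = 4n^2,\]

and the choices $c = 4$ and $B = 5$ witness $f \in O(g)$.  In the other direction, $3n^2 + 5n \geq 3n^2$ for every $n$, so $c = 3$ and $B = 0$ witness $f \in \Omega(g)$.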

A Fun Problem


            Let $f: N \to R^{\geq 0}$ be a polynomial sequence of degree $m$ and $g: N \to R^{\geq 0}$ be a polynomial sequence of degree $n$ with $m, n \in R$.  Find all $m, n$ such that $f \in O(g)$ and $(f \times g) \in \Omega(f)$.
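
            (A hint, in case you want one, under my reading of the problem: for such sequences, $f \in O(g)$ exactly when $m \leq n$, and the product $f \times g$ is a polynomial sequence of degree $m + n$, so the whole question comes down to comparing degrees.)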
