4.4. Complexity of Divide-and-Conquer Algorithms
We have seen several examples of divide-and-conquer algorithms. As their main trait is dividing the input into several parts and solving the problem recursively, one should be able to analyse their complexity via the recurrence relations method (Section Complexity of Simple Recursive Algorithms). The only thing we need is a little twist:
4.4.1. Changing variable in recurrence relations
Consider the binary search program:
let rec binary_search arr k =
  let rec rank lo hi =
    if hi <= lo
    then
      (* The range is empty: the key is not in the array *)
      None
    else
      (* The range is non-empty: examine its middle element *)
      let mid = lo + (hi - lo) / 2 in
      if fst arr.(mid) = k
      then Some (arr.(mid))
      else if fst arr.(mid) < k
      then rank (mid + 1) hi
      else rank lo mid
  in
  rank 0 (Array.length arr)
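For instance, for a small array of key-value pairs sorted by their keys, we have:

let a = [| (1, "one"); (3, "three"); (7, "seven"); (10, "ten") |]

let r1 = binary_search a 7    (* Some (7, "seven") *)
let r2 = binary_search a 4    (* None *)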
Its complexity can be described by the following recurrence relation, depending on the size \(n\) of the input array arr:

\[
t(0) = 0, \qquad t(n) = t\left(n / 2\right) + c \ \text{for}\ n \geq 1,
\]

where the complexity of returning a value is taken to be negligible (and, hence, 0), and \(c\) is the complexity of computing the middle of the range and dereferencing elements of the array.
Let us notice that this is not a first-order recurrence relation of the kind we are used to seeing, as the input size is divided by two rather than decreased by 1, as in the previous examples. In order to reduce the problem to one we already know how to solve, we make a change of variable, assuming, for the time being, that the size \(n\) is a power of 2. Specifically, we take \(n = 2^k\) for some \(k\). We can then rewrite the relation above as follows, for a function \(f(k) = t(2^k) = t(n)\):

\[
f(k) = t(2^k) = t(2^{k - 1}) + c = f(k - 1) + c, \qquad f(0) = t(1) = t(0) + c = c.
\]
Therefore, by changing the variable, we obtain:

\[
f(0) = c, \qquad f(k) = f(k - 1) + c \ \text{for}\ k \geq 1.
\]
This is a familiar form that can be solved by the method of differences: unfolding gives \(f(k) = f(k - 1) + c = f(k - 2) + 2c = \ldots = f(0) + k \cdot c\), hence \(f(k) = c \cdot (k + 1)\). And since \(f(k) = t(2^k)\), and also \(k = \log_2 n\), we get

\[
t(n) = c \cdot (\log_2 n + 1) \in O(\log n).
\]
Remember, however, that this has been derived under the assumption that \(n = 2^k\), and the result is not guaranteed for other \(n\). The overall asymptotic guarantee, in the big-O sense, can, however, be obtained via the following theorem.
Theorem
If \(f(n) \in O(g(n))\) for \(n\) being a power of 2, then \(f(n) \in O(g(n))\) for arbitrary \(n\) if the following two conditions hold:
1. \(g\) is non-decreasing for all \(n > n_0\), for some fixed \(n_0\), and
2. \(g(2n) \in O(g(n))\), i.e., \(g\) is smooth (does not grow too fast).
The first condition of the theorem holds for most functions of interest and means that \(g\) is "predictable": it will not suddenly start decreasing in between the "checkpoints", which are the powers of two. The second condition states that in between those checkpoints the function does not grow too fast, so on the segment \([2^k ... 2^{k+1}]\) one can still use it for asymptotic approximation. Combinations of polynomials and logarithms are smooth, but this is not the case for exponential functions and the factorial (for instance, \(2^{2n} = 4^n \notin O(2^n)\)).
Getting back to our example with binary search and its complexity \(t(n) \in O(\log n)\) for powers of 2, we can assert that
- \(\log n\) is growing monotonically, that is, it’s non-decreasing.
- \(\log (2n) = \log 2 + \log n \in O(\log n)\), therefore the function is smooth.
As a result, we can conclude that the time demand of binary search is within \(O(\log n)\) unconditionally.
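To observe this logarithmic behaviour in practice, we can instrument the search with a counter of recursive calls. The function binary_search_count below is a small instrumented variant of binary_search written just for this experiment (it is not part of the original code); the printed counts grow roughly as \(\log_2 n\):

let binary_search_count arr k =
  let steps = ref 0 in
  let rec rank lo hi =
    incr steps;
    if hi <= lo
    then None
    else
      let mid = lo + (hi - lo) / 2 in
      if fst arr.(mid) = k
      then Some (arr.(mid))
      else if fst arr.(mid) < k
      then rank (mid + 1) hi
      else rank lo mid
  in
  let res = rank 0 (Array.length arr) in
  (res, !steps)

let () =
  List.iter (fun n ->
      (* Keys 0 .. n - 1, already sorted; search for the largest key *)
      let arr = Array.init n (fun i -> (i, i)) in
      let (_, steps) = binary_search_count arr (n - 1) in
      Printf.printf "n = %7d, steps = %d\n" n steps)
    [1_000; 10_000; 100_000; 1_000_000]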
4.4.2. Complexity of Merge Sort
Recall the code of merge sort:
let rec merge_sort arr =
  let rec sort a =
    let lo = 0 in
    let hi = Array.length a in
    if hi - lo <= 1 then ()
    else
      let mid = lo + (hi - lo) / 2 in
      (* Copy the two halves into fresh arrays *)
      let from1 = copy_array a lo mid in
      let from2 = copy_array a mid hi in
      (* Sort the halves recursively *)
      sort from1; sort from2;
      (* Merge the sorted halves back into a *)
      merge from1 from2 a lo hi
  in
  sort arr
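The code above relies on the helper functions copy_array and merge defined earlier in the course. For reference, here are minimal sketches consistent with how the helpers are used here (they are not necessarily identical to the original definitions); both clearly perform a linear amount of work:

(* Copy the slice [lo, hi) of array a into a fresh array *)
let copy_array a lo hi =
  Array.sub a lo (hi - lo)

(* Merge sorted arrays from1 and from2 into the slice [lo, hi) of dest,
   using OCaml's polymorphic comparison *)
let merge from1 from2 dest lo hi =
  let i = ref 0 in
  let j = ref 0 in
  for k = lo to hi - 1 do
    if !i >= Array.length from1
    then begin dest.(k) <- from2.(!j); incr j end
    else if !j >= Array.length from2
    then begin dest.(k) <- from1.(!i); incr i end
    else if from1.(!i) <= from2.(!j)
    then begin dest.(k) <- from1.(!i); incr i end
    else begin dest.(k) <- from2.(!j); incr j end
  done

With such helpers, the copying and the merging passes account for the linear terms in the analysis below.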
Notice that the complexity \(t(n)\) of the internal sort is composed of the following components:
- Computing the middle of the array (constant \(d\)),
- Copying the array into two sub-arrays (\(c_1 \cdot n\)),
- Merging two sorted sub-arrays of half the size each (\(c_2 \cdot n\)), and
- Running two recursive sorting calls (\(2 \cdot t(n/2)\)).
Therefore, merging some constants (taking \(c = c_1 + c_2\)), we can write its recurrence relation as:

\[
t(n) = 2 \cdot t(n / 2) + c \cdot n + d,
\]

with \(t(1)\) and \(t(0)\) being constants.
We can now solve it by means of changing the variable \(n = 2^k\), taking \(f(k) = t(2^k)\):

\[
f(k) = t(2^k) = 2 \cdot t(2^{k - 1}) + c \cdot 2^k + d.
\]

Therefore

\[
f(k) = 2 \cdot f(k - 1) + c \cdot 2^k + d.
\]
We can solve this first-order inhomogeneous recurrence relation by changing the function, taking \(f(k) = 2^k \cdot g(k)\).
Repeating the steps from the previous lectures, we obtain:

\[
2^k \cdot g(k) = 2^k \cdot g(k - 1) + c \cdot 2^k + d,
\]

and by dividing both parts of the equation by \(2^k\), we obtain:

\[
g(k) = g(k - 1) + c + \frac{d}{2^k}.
\]
By the method of differences, we obtain

\[
g(k) = g(0) + c \cdot k + d \cdot \sum_{i = 1}^{k} \frac{1}{2^i} \leq g(0) + c \cdot k + d.
\]

The last term, \(d \cdot \sum_{i = 1}^{k} 2^{-i}\), is less than \(d\), hence it was approximated by \(d\).
We can now substitute back, obtaining

\[
f(k) = 2^k \cdot g(k) \leq 2^k \cdot \left(g(0) + c \cdot k + d\right),
\]

where \(g(0) = f(0) = t(1)\) is a constant. Recalling that \(t(n) = f(\log_2 n)\) and \(n = 2^k\), we obtain:

\[
t(n) \leq c \cdot n \log_2 n + (g(0) + d) \cdot n \in O(n \log n).
\]
As we have obtained \(t(n) \in O(n \log n)\) for the powers of two, we need to check the conditions of the theorem. Indeed, \(n \log n\) is a monotonically growing function. It is also not difficult to check that it is smooth: \(2n \log (2n) = 2n \log n + 2n \in O(n \log n)\). Hence the worst-case complexity of merge sort is in \(O(n \log n)\).
4.4.3. Complexity of Quicksort
Recall the code of Quicksort:
let quick_sort arr =
  let rec sort arr lo hi =
    if hi - lo <= 1 then ()
    else
      let mid = partition arr lo hi in
      sort arr lo mid;
      sort arr mid hi
  in
  sort arr 0 (Array.length arr)
The complexity \(t(n)\) of the internal sort is composed of the following components:
- Partitioning the array into two sub-arrays (\(c \cdot n\)), and
- Running two recursive sorting calls (\(2 \cdot t(n/2)\)).
Therefore, one can obtain a complexity in the class \(O(n \log n)\) for Quicksort by solving the following recurrence relation, similar to the one we have already solved for merge sort:

\[
t(n) = 2 \cdot t(n / 2) + c \cdot n.
\]
The second component is, however, a bit subtle, as it is not guaranteed that the input is divided into two halves (unlike in merge sort). The partitioning into two equal halves happens only if the chosen pivot, once put at its "right place", splits the array precisely in the middle. And, as we have seen from the examples before, this is not always the case.
However, remember that we have assumed that all keys in the array are randomly distributed. Therefore, it is highly unlikely that at each recursive call we will partition the array badly (e.g., into \(n - 1\) elements and 1 element). Proving that the average sorting time of Quicksort is still within \(O(n \log n)\) is beyond the scope of this lecture.
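To get a feeling for this average-case behaviour, we can count the comparisons performed on random inputs. The sketch below is only an illustration: it uses a simple Lomuto-style partition_count as a stand-in for the partition procedure defined earlier in the course, and it excludes the pivot position mid from the recursive calls:

let comparisons = ref 0

(* Partition the slice [lo, hi) around the pivot arr.(hi - 1) and
   return the pivot's final position (Lomuto scheme) *)
let partition_count arr lo hi =
  let swap i j =
    let tmp = arr.(i) in
    arr.(i) <- arr.(j);
    arr.(j) <- tmp
  in
  let pivot = arr.(hi - 1) in
  let i = ref lo in
  for j = lo to hi - 2 do
    incr comparisons;
    if arr.(j) <= pivot then begin swap !i j; incr i end
  done;
  swap !i (hi - 1);
  !i

let quick_sort_count arr =
  let rec sort lo hi =
    if hi - lo <= 1 then ()
    else begin
      let mid = partition_count arr lo hi in
      sort lo mid;
      sort (mid + 1) hi
    end
  in
  sort 0 (Array.length arr)

let () =
  List.iter (fun n ->
      comparisons := 0;
      let arr = Array.init n (fun _ -> Random.int 1_000_000) in
      quick_sort_count arr;
      Printf.printf "n = %6d, comparisons = %d\n" n !comparisons)
    [1_000; 10_000; 100_000]

On random inputs the printed counts grow roughly as \(n \log_2 n\); repeating the experiment on an already sorted array makes the contrast with the worst case discussed below quite visible.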
As a final remark, when asked about the worst-case complexity of Quicksort, one should be careful and state \(O(n \log n)\) only while specifying that it is the average-case complexity on uniformly distributed inputs. For the truly worst-case complexity, the recurrence relation is somewhat different (see Exercise 8):

\[
t(n) = t(n - 1) + c \cdot n.
\]
What do you think the complexity will be in this case?
4.4.4. The Master Theorem
Divide-and-conquer algorithms come in many shapes, and so far we have seen only one very specific (albeit, arguably, the most common) class: algorithms that divide their input into two parts. For the general case of analysing recursive algorithms, there is a widely used result, known as the Master Theorem or the Master Method for solving recurrence relations, which covers a larger class of algorithms. Here we provide the necessary definitions to formulate the theorem and give its statement.
Definition (Theta-notation)
The positive-valued function \(f(x) \in \Theta(g(x))\) if and only if there is a value \(x_0\) and constants \(c_1, c_2 > 0\), such that for all \(x \geq x_0\), \(c_1 \cdot g(x) \leq f(x) \leq c_2 \cdot g(x)\).
Definition (Omega-notation)
The positive-valued function \(f(x) \in \Omega(g(x))\) if and only if there is a value \(x_0\) and a constant \(c > 0\), such that for all \(x \geq x_0\), \(c \cdot g(x) \leq f(x)\).
As a mnemonic, one can think of:
- \(f(n) \in O(g(n))\) as “\(f \leq g\)”
- \(f(n) \in \Omega(g(n))\) as “\(f \geq g\)”, and
- \(f(n) \in \Theta(g(n))\) as “\(f = g\)”
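For example, \(3n^2 + 5n \in \Theta(n^2)\), while \(n \in O(n^2)\) but \(n \notin \Omega(n^2)\), and, conversely, \(n^3 \in \Omega(n^2)\) but \(n^3 \notin O(n^2)\).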
The following theorem serves as a "Swiss-army knife" for recurrence relations of the form \(T(n) = a \cdot T(n/b) + f(n)\), where \(a \geq 1\) and \(b > 1\) are constants, and \(f(n)\) is eventually non-decreasing.
Theorem (The Master Method for Solving Recurrences)
Let \(T(n) = aT(n/b) + f(n)\). Then \(T(n)\) has the following asymptotic behaviour:
- If \(f(n) \in O(n^{\log_b a - \varepsilon})\) for some \(\varepsilon > 0\), then \(T(n) \in \Theta(n^{\log_b a})\);
- If \(f(n) \in \Theta(n^{\log_b a})\), then \(T(n) \in \Theta(n^{\log_b a} \log n)\);
- If \(f(n) \in \Omega(n^{\log_b a + \varepsilon})\) for some \(\varepsilon > 0\), and if \(a \cdot f(n/b) \leq c \cdot f(n)\) for some constant \(c < 1\) and all sufficiently large \(n\), then \(T(n) \in \Theta(f(n))\).
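As a quick check, the recurrences analysed earlier in this section fall under the second case:
- For merge sort, \(T(n) = 2 \cdot T(n/2) + c \cdot n\): here \(a = 2\), \(b = 2\), \(\log_b a = 1\), and \(f(n) \in \Theta(n^1)\), so \(T(n) \in \Theta(n \log n)\).
- For binary search, \(T(n) = T(n/2) + c\): here \(a = 1\), \(b = 2\), \(\log_b a = 0\), and \(f(n) \in \Theta(n^0) = \Theta(1)\), so \(T(n) \in \Theta(\log n)\).
Both match the bounds obtained above via the change of variable.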
The proof of the Master Theorem as well as its advanced applications are beyond the scope of this course, and you are welcome to refer to the book Introduction to Algorithms by Cormen et al. for the details and examples.