
Master's method

Master's method is used to solve the recurrence equation of an algorithm that uses divide-and-conquer. Instead of using elaborate methods like the recursion tree or substitution, one can determine the complexity with a quick computation.

The generalized recurrence equation for an algorithm that uses divide-and-conquer technique takes the following form.

    T(1) = 1
    T(n) = aT(n/b) + n^k

To solve a problem of size n, the algorithm breaks it down into a sub-problems, each of size n/b, solves them individually and combines their solutions. Each sub-problem is solved recursively in the same fashion until its size reaches 1. The term n^k denotes the time taken to break the problem and merge the solutions, while T(n/b) denotes the time taken to solve one sub-problem.
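As a concrete instance (a sketch, not from the text): merge sort splits a list into a = 2 halves of size n/b = n/2 and merges them in linear time, so k = 1 and its recurrence is T(n) = 2T(n/2) + n.

```python
def merge_sort(xs):
    # Base case: a problem of size <= 1 is already solved (T(1) = 1).
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # a = 2 recursive calls on sub-problems of size n/b = n/2.
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merging takes linear time: this is the n^k term with k = 1.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```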

Let's try to apply substitution to solve this.

    T(n) = aT(n/b) + n^k
         = a[aT(n/b^2) + (n/b)^k] + n^k
         = a^2 T(n/b^2) + a(n/b)^k + n^k
         = a^3 T(n/b^3) + a^2 (n/b^2)^k + a(n/b)^k + n^k
         =     :
         =     :
         = a^(log_b n) T(1) + ... + a^3 (n/b^3)^k + a^2 (n/b^2)^k + a(n/b)^k + n^k
         = n^(log_b a) + n^k [a^(log_b n - 1)/b^((log_b n - 1)k) + ... + a^3/b^(3k) + a^2/b^(2k) + a/b^k + 1]
           (log rule: a^(log_b n) = n^(log_b a))       (log_b n terms in the bracket)
         = n^(log_b a) + n^k [(a/b^k)^(log_b n - 1) + ... + (a/b^k)^3 + (a/b^k)^2 + (a/b^k) + 1]

The final recurrence equation is

    T(n) = n^(log_b a) + n^k [(a/b^k)^(log_b n - 1) + ... + (a/b^k)^3 + (a/b^k)^2 + (a/b^k) + 1]

Looking at the above equation, we note that the right-hand side is the sum of two terms. There are 3 cases to consider here. Master's method states that:

Case 1: If n^(log_b a) > n^k, then T(n) ∈ Θ(n^(log_b a)) since n^(log_b a) will dominate. Stated otherwise, the cost of solving the subproblems dominates the cost of breaking the problem and combining the solutions of subproblems.

Case 2: If n^(log_b a) < n^k, then T(n) ∈ Θ(n^k) since n^k will dominate, i.e. the cost of breaking the problem and combining the solutions of the subproblems dominates the cost of solving the subproblems.

Case 3: If n^(log_b a) = n^k, then T(n) ∈ Θ(n^k · log n) since the bracket contributes log_b n terms.

Why did we append log n in case 3?

n^(log_b a) = n^k implies log_b a = k
This implies b^(log_b a) = b^k
This implies a^(log_b b) = b^k (applying the logarithmic power rule)
This implies a = b^k (since log_b b = 1)

Now consider the second term on the right-hand side of the recurrence equation: [(a/b^k)^(log_b n - 1) + ... + (a/b^k)^2 + (a/b^k) + 1]. Since a = b^k, the ratio a/b^k equals 1, so the bracket reduces to (1 + 1 + ... + 1) = log_b n (the sum of log_b n ones). Since log_b n = Θ(log n), the complexity is given by Θ(n^k · log n).
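This balance can be checked numerically. A small sketch, assuming the case-3 recurrence T(n) = 2T(n/2) + n (a = 2, b = 2, k = 1, so log_b a = k): for n a power of 2, the solution works out to exactly n·log2(n) + n, i.e. Θ(n^k · log n).

```python
def T(n):
    # Case 3 recurrence: a = 2, b = 2, k = 1, so log_b a = 1 = k.
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n = 2^p, T(n) equals n*p + n, i.e. n*log2(n) + n = Theta(n log n).
```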

Why didn't we append log n in case 2?

Working out in the same manner, we now have a < b^k.
This implies a/b^k < 1.
This further implies that the terms of the series a^2/b^(2k), a^3/b^(3k), ..., a^(log_b n - 1)/b^((log_b n - 1)k) keep shrinking. So, even though there are log_b n terms, they don't sum up to log_b n. Instead, the bracket is a geometric series with ratio less than 1, so its sum is bounded by the constant 1/(1 - a/b^k).
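The bounded-sum claim is easy to see numerically. A sketch with the illustrative ratio a/b^k = 1/2 (any ratio below 1 behaves the same way): the partial sums stay below the constant 1/(1 - 1/2) = 2 no matter how many terms are added.

```python
r = 0.5  # illustrative ratio a/b^k < 1 (case 2)
# Partial sums of 1 + r + r^2 + ... with increasingly many terms.
partials = [sum(r ** j for j in range(m)) for m in (5, 10, 20, 40)]
# Every partial sum stays below the constant 1/(1 - r) = 2,
# so adding more terms cannot produce a log n factor.
```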

Let's work out some examples. We assume T(1) = 1 always.

1) T(n) = 9T(n/3) + n

a = 9, b = 3 and k = 1
log_b a = log_3 9 = 2 (greater than k)
Hence, T(n) = Θ(n^2)

2) T(n) = T(2n/3) + 1

a = 1, b = 3/2 and k = 0
log_b a = log_(3/2) 1 = 0 (equal to k)
Hence, T(n) = Θ(log n)

3) T(n) = 3T(n/4) + n√n

a = 3, b = 4 and k = 1.5
log_b a = log_4 3 < 1 (less than k = 1.5)
Hence, T(n) = Θ(n^1.5)
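The three comparisons above can be mechanized. A sketch (the helper name master_case is my own) that classifies T(n) = aT(n/b) + n^k by comparing log_b a with k:

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + n^k; returns (case number, dominant exponent)."""
    e = math.log(a, b)      # log_b a
    if math.isclose(e, k):  # Case 3: Theta(n^k * log n)
        return (3, k)
    if e > k:               # Case 1: Theta(n^(log_b a))
        return (1, e)
    return (2, k)           # Case 2: Theta(n^k)
```

For the examples above, master_case(9, 3, 1) gives case 1 with exponent log_3 9 = 2, master_case(1, 1.5, 0) gives case 3, and master_case(3, 4, 1.5) gives case 2.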

Applicability of Master's method

Master's method can be applied to determine the complexity only if the recurrence equation is of the form T(n) = aT(n/b) + n^k. For instance, it cannot be used if we have aT(n-b) in place of aT(n/b), and/or if instead of n^k we have some function f(n) which is not a 'pure' polynomial.

Determining time complexity for problems solved using decrease-and-conquer

Master's method dealt with above gave a quick way to determine the complexity of problems solved using the divide-and-conquer approach, i.e. a problem is solved by breaking it into a sub-problems, each of size n/b. In this section, we try to obtain a similar formula for determining the complexity of problems solved using the decrease-and-conquer approach. A generalized recurrence equation for such a problem is given as follows:

    T(0) = T(1) = ... = T(b) = 1
    T(n) = aT(n-b) + n^k

Again, let's solve this using the substitution method.

  T(n) = aT(n-b) + n^k
       = a[aT(n-2b) + (n-b)^k] + n^k
       = a^2 T(n-2b) + a(n-b)^k + n^k
       = a^3 T(n-3b) + a^2 (n-2b)^k + a(n-b)^k + n^k
           :
           : (takes n/b steps till it hits T(1))
           :
       = a^(n/b) T(1) + a^(n/b - 1) (n - (n/b - 1)b)^k + ... + a^2 (n-2b)^k + a(n-b)^k + n^k
      <= a^(n/b) + a^(n/b - 1) n^k + ... + a^2 n^k + a n^k + n^k
       = a^(n/b) + n^k [a^(n/b - 1) + ... + a^2 + a + 1]
                   (n/b terms)
       = a^(n/b) + n^k (c·n)   (bounding the bracketed sum by c·n: when a = 1 the bracket is exactly n/b, so c = 1/b; when a > 1 it is a geometric sum of order a^(n/b), which the exponential first term absorbs)
       = a^(n/b) + c·n^(k+1)

We see that the closed form contains 2 terms. The first one, a^(n/b), is exponential while the second one is polynomial. The second one is polynomial because we started with a polynomial term n^k in the recurrence. If we were to consider a more generalized recurrence equation:

  T(n) = aT(n-b) + f(n)

where f(n) can be polynomial, logarithmic or exponential, the closed form equation can be determined as

  T(n) = a^(n/b) + n·f(n)

Here too, there are three cases to consider.

Case 1: If a^(n/b) > n·f(n), then T(n) = O(a^(n/b)).

Case 2: If a^(n/b) < n·f(n), then T(n) = O(n·f(n)).

Special Case: If a = 1, then a^(n/b) = 1. Hence, T(n) = O(n·f(n)).

Note: It is highly unlikely that f(n) is exponential. Usually it is a constant, logarithmic or at worst a polynomial. Case 2 is extremely rare.
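As a quick sanity check of the special case, take a hypothetical recurrence with a = 1, b = 1 and f(n) = n, i.e. T(n) = T(n-1) + n. The formula predicts O(n·f(n)) = O(n^2), and indeed the exact solution is n(n+1)/2, a sketch:

```python
def T(n):
    # a = 1, b = 1, f(n) = n: special case => T(n) = O(n * f(n)) = O(n^2).
    return 1 if n == 1 else T(n - 1) + n

# Unrolling gives 1 + 2 + ... + n = n(n+1)/2, which is Theta(n^2).
```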

Recall the recurrence equations that we had worked out earlier.

1. Computing factorial, max element and number of binary digits in an integer.

      T(n) = T(n-1) + c

Here a = 1 and f(n) is a constant (special case). Hence T(n) = O(n), same as the one we derived earlier.
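A sketch of this recurrence with an illustrative constant c = 3 (the value of c is my own choice); the solution grows linearly, confirming O(n):

```python
def T(n, c=3):
    # a = 1 and f(n) = c (constant): special case => T(n) = O(n * c) = O(n).
    return 1 if n == 1 else T(n - 1, c) + c

# Unrolling gives T(n) = c*(n - 1) + 1, which is linear in n.
```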

2. Tower of Hanoi

      T(n) = 2T(n-1) + 1

Here a = 2 and f(n) is a constant (case 1). Hence T(n) = O(2^n), same as we derived.
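This can be verified directly: unrolling T(n) = 2T(n-1) + 1 gives the well-known closed form 2^n - 1, matching the O(2^n) bound. A sketch:

```python
def hanoi_moves(n):
    # Tower of Hanoi recurrence: a = 2, b = 1, f(n) = 1 (case 1).
    return 1 if n == 1 else 2 * hanoi_moves(n - 1) + 1

# Closed form: hanoi_moves(n) == 2**n - 1.
```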

Note: Recursive algorithms generally fall into either the divide-and-conquer or the decrease-and-conquer category.