Recurrences
Many divide-and-conquer algorithms, which split a large task into smaller subtasks, solve those recursively, and merge the results, naturally lead to recurrence relations. Examples:
- Mergesort, a sorting algorithm, splits the array in half, recursively sorts each half, and does \( O(n) \) work to merge: \( T(n) = 2T(n/2) + n \). [Solution: \( T(n) = c \cdot n \log n \).]
- Binary search, which finds an element in a sorted array, explores only one half of the array at each step, performing a constant-time comparison: \( T(n) = T(n/2) + 1 \). [Solution: \( T(n) = c \cdot \log n \).]
- Strassen's matrix multiplication reduces matrix multiplication to \( 7 \) recursive calls on half-size matrices: \( T(n) = 7T(n/2) + cn^2 \). [Solution: \( T(n) = c \cdot n^{\log_2 7} \approx c \cdot n^{2.8074} \).]
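The three recurrences above can also be checked numerically. The sketch below evaluates each one directly for powers of two; the function names and base cases (\( T(1) = 0 \) for mergesort and binary search, \( T(1) = 1 \) for Strassen, \( c = 1 \) throughout) are illustrative assumptions, not part of the original definitions.

```python
from functools import lru_cache

# Each function evaluates one recurrence directly, memoized for speed.
# Base cases and c = 1 are assumptions made for this illustration.

@lru_cache(maxsize=None)
def merge_t(n):
    return 0 if n == 1 else 2 * merge_t(n // 2) + n       # T(n) = 2T(n/2) + n

@lru_cache(maxsize=None)
def search_t(n):
    return 0 if n == 1 else search_t(n // 2) + 1          # T(n) = T(n/2) + 1

@lru_cache(maxsize=None)
def strassen_t(n):
    return 1 if n == 1 else 7 * strassen_t(n // 2) + n * n  # T(n) = 7T(n/2) + n^2

# For n = 2^k the first two solutions are exact: T(n) = n*k and T(n) = k.
for k in range(1, 11):
    n = 2 ** k
    assert merge_t(n) == n * k
    assert search_t(n) == k

# For Strassen, doubling n should multiply T(n) by roughly 7,
# which is the signature of n^(log2 7) growth.
print(strassen_t(2048) / strassen_t(1024))  # close to 7
```

Evaluating a recurrence like this does not replace a proof, but it is a quick sanity check that a claimed closed form matches the recurrence.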
These recurrence relations can be solved using standard techniques, such as recursion trees, the substitution method, and the Master theorem, which you'll cover in the Analysis of Algorithms course.
The solution to a recurrence tells us the precise growth rate of an algorithm's running time, which helps us choose the fastest algorithm for a given problem.
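As a sketch of that last point: standard matrix multiplication performs roughly \( n^3 \) scalar multiplications, while Strassen's solved recurrence gives roughly \( n^{\log_2 7} \). Ignoring constant factors (a real comparison would need them), the ratio between the two growth rates widens as \( n \) grows:

```python
import math

# Compare the two solved growth rates, with constant factors ignored.
for n in (64, 256, 1024, 4096):
    naive = n ** 3                 # standard matrix multiplication
    strassen = n ** math.log2(7)   # Strassen's recurrence solution
    print(n, naive / strassen)     # the ratio grows with n
```

In practice Strassen's larger hidden constants mean it only wins past some crossover size, which is exactly the kind of question the solved recurrences let us ask.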