If you were given a full deck of cards and asked to search for a card (say, the Queen of Hearts), how would you do it? You could go about it several ways: start searching from the top of the deck, from the bottom of the deck, pick cards randomly... In the worst case, you'd have to shuffle through the entire deck (a whopping 52 cards) before reaching your goal. If I asked you to do this many times, and you were patient enough to oblige, you'd quickly grow tired of shuffling through the entire deck and look for ways to make your ordeal less monotonous. You'd put the cards in order, using your favorite sorting algorithm (for example, merge sort), with the value and suit of each card as the key.

Once the deck is sorted, then searching through becomes simpler. There are several ways to perform the search. One of the simplest to explain is the binary search, where you pick a card from the middle of the deck (dividing it into lower and upper halves), and compare it to the value you're searching. If you got lucky and it's the card you want, then you're all done. If the card you want is less than the card you just picked, then you look at the lower half. Otherwise, you look at the upper half.
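The halving procedure above can be sketched in a few lines of JavaScript (a minimal sketch, not the demo code itself):

```javascript
// Binary search over a sorted array of numbers.
// Returns the index of `target`, or -1 if it is not present.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2); // pick the middle "card"
    if (arr[mid] === target) {
      return mid;          // got lucky: found it
    } else if (target < arr[mid]) {
      hi = mid - 1;        // look at the lower half
    } else {
      lo = mid + 1;        // look at the upper half
    }
  }
  return -1; // searched the whole deck without finding it
}
```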

What's the computational complexity of this search? At each step, you effectively halve the number of cards you have to look at next: 52, 26, 13, 6, 3, 1. The number of search steps is thus at most $\lceil \log_2 52 \rceil$, or 6 steps. The overall complexity is thus $O(\log N)$, where $N$ is the number of elements to search.
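You can sanity-check that step count with a tiny helper (the name `stepsToOne` is just for illustration):

```javascript
// Count how many halvings it takes to get from n cards down to one.
function stepsToOne(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2); // discard half the remaining cards
    steps++;
  }
  return steps;
}
// 52 -> 26 -> 13 -> 6 -> 3 -> 1: five halvings,
// matching the worst case of about log2(52) ≈ 5.7 comparisons.
```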

The coding problem this time was to implement a binary search algorithm that searches a sorted array. It works mostly like what I described above, with the exception of how it handles elements with the same value. When the array contains more than one element of the same value (e.g. a messed-up deck with 2 Queens of Hearts), then the algorithm searches for the *first* occurrence of the element.
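One way to handle duplicates, sketched below, is to remember a match but keep narrowing toward the left instead of returning immediately (this is my own illustration of the idea, not the demo's exact code):

```javascript
// Binary search that returns the index of the *first* occurrence of
// `target` in a sorted array, or -1 if it is absent.
function binarySearchFirst(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  let result = -1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) {
      result = mid;   // remember this match...
      hi = mid - 1;   // ...but keep searching the lower half
    } else if (target < arr[mid]) {
      hi = mid - 1;
    } else {
      lo = mid + 1;
    }
  }
  return result;
}
```

The extra cost is modest: instead of stopping early on a lucky hit, the search always runs the full $O(\log N)$ steps.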

Here is a JavaScript demo (butt-ugly, but does what it's supposed to). The "<<" and ">>" arrows move to the previous and next search steps, respectively. At each step, the blue, orange, and red numbers mark the first, middle, and last indices of the current search range, respectively. A green element indicates the search has reached its goal and has terminated.
