The Misuse of Recursion

Recursion is an important and powerful concept in computer science. It's something that every programmer should have in their toolbox because the right application of recursion can produce elegant solutions to difficult problems. That's why I'm astounded that authors of programming books occasionally make some pretty blatant errors about recursion that only serve to confuse the reader and mislead the novice programmer. One case that I came across recently in HTML5: The Missing Manual was actually addressed to the advanced user in a "Power User's Clinic" sidebar:
This technique is useful for complex computational tasks that require recursion, like calculating the Fibonacci sequence.
There are two things wrong with this statement. Do you see them? If you already know recursion, then you probably do. If not, let's first take a little crash course in recursion to get you up to speed.

What is Recursion?


Recursion is actually quite a simple concept: a function that calls itself. That's it. If you don't want infinite recursion, you need some way to stop the function from calling itself every time it runs. To do that, you pass a slightly different value as a parameter to the function each time it's called, and when that parameter reaches a certain value, the function stops calling itself.

By way of example, let's implement addition using recursion. You would never do this in a real program because it's ridiculously inefficient, but it works as an easy example since everyone knows how addition works. Let's start with a function that takes two parameters 'a' and 'b', defined as Add(a,b). This function should return a result equal to a+b, but we're going to do it by calling the Add function again within itself.
Add(a, b) {
    return Add(a, b-1) + 1
}
Do you see what's going on there? The Add function calls itself with a new 'b' parameter that's one less than before, and adds one to the result before returning it. Unfortunately, this is not enough because the recursive calls will never end. If you want it to end, you have to check for a stop condition, commonly called the base case:
Add(a, b) {
    if (b == 0) return (a)
    otherwise return (Add(a, b-1) + 1)
}
In this case, we can use the identity a+0=a, so if 'b' is zero, return 'a'. This is all pseudo-code, but real code wouldn't look that much different. Now that we have a working recursive function, we can think about what happens with some real inputs. If you call Add(3, 3), what actually happens is Add gets called repeatedly as shown in the following sequence of operations to produce the result:
Add(3, 3)
= Add(3, 2) + 1
= (Add(3, 1) + 1) + 1
= ((Add(3, 0) + 1) + 1) + 1
= ((3 + 1) + 1) + 1
= (4 + 1) + 1
= 5 + 1
= 6
All we did was implement addition by counting up from 'a' the number of times specified by 'b'. The same principle could be used to implement multiplication with repeated addition or exponentiation with repeated multiplication. Now that we all know what recursion is, we can look at why that quote is so wrong.
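Here is the pseudo-code above translated into runnable Python, along with a multiply built on the same principle. This is a sketch for illustration only, assuming non-negative integer inputs; you'd never use these in a real program.

```python
def add(a, b):
    """Add two non-negative integers by recursion (illustration only)."""
    if b == 0:                        # stop condition: a + 0 = a
        return a
    return add(a, b - 1) + 1          # peel one off 'b', count it back in

def multiply(a, b):
    """Multiply by repeated addition, using the same recursive pattern."""
    if b == 0:                        # stop condition: a * 0 = 0
        return 0
    return add(multiply(a, b - 1), a) # a*b = a*(b-1) + a
```

Calling add(3, 3) walks through exactly the sequence of operations traced above and returns 6.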

What Computational Tasks Require Recursion?


It seems to be a common misconception in the software industry that there are certain complex problems that need to be solved with recursion. This is simply not true! Any problem that can be solved with recursion can be solved with iteration instead, and usually with better performance. Iteration is another programming concept where a task is repeatedly done in a loop. Any recursive algorithm can be rewritten as a loop, if necessary by managing an explicit stack yourself, so for any programming problem, recursion and iteration are interchangeable.
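To make that interchangeability concrete, here is the recursive Add from earlier rewritten as a loop, again as a sketch assuming a non-negative 'b':

```python
def add_iterative(a, b):
    # Same counting idea as the recursive version, but as a loop:
    # add 1 to the result 'b' times instead of recursing 'b' times.
    result = a
    for _ in range(b):
        result += 1
    return result
```

No stack of pending function calls builds up here, which is exactly where the performance difference comes from.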

Now there are certain problems that are solved more easily or elegantly with recursion. Sorting is one example of such a problem. Conceptually, sorting a list of numbers using recursion is quite straightforward. First you split the list of numbers in half and sort each half of the list. You keep splitting the lists until you get to the point where each list is only one number. Then you merge each pair of lists by selecting whichever number is smaller at the beginning of the two lists and moving it to the merged list until both original lists are empty. Once you have merged all of the split lists, you have a sorted list. This algorithm is fittingly called mergesort.
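The steps described above translate almost directly into code. Here is a minimal sketch of mergesort in Python:

```python
def mergesort(numbers):
    # A list of zero or one number is already sorted.
    if len(numbers) <= 1:
        return numbers
    # Split the list in half and sort each half recursively.
    mid = len(numbers) // 2
    left = mergesort(numbers[:mid])
    right = mergesort(numbers[mid:])
    # Merge: repeatedly move whichever front element is smaller.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the two halves may still have leftovers; append them.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Notice how closely the code follows the prose description: split, recurse, merge.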

Another type of problem that can easily be solved using recursion is any kind of graph traversal. A graph is made up of nodes and links connecting those nodes. To search a graph for something, you start at a node in the graph. Oftentimes there is a root node where you start, and then the graph is called a tree. If the node you're at contains what you're looking for, then you're done. Otherwise, follow each link from your current node and search that sub-graph. Pretty simple, right? If the graph can have cycles, then things get more complicated because you have to keep track of which nodes you've already visited, but the basic idea is really easy to understand.
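That search fits in a few lines of code. Here's a sketch in Python that keeps a set of visited nodes, so it also handles graphs with cycles; the graph representation (a dict mapping each node to a list of linked nodes) is my own choice for the example:

```python
def find(graph, node, target, visited=None):
    """Depth-first search for 'target' starting at 'node'.
    'graph' maps each node to a list of the nodes it links to."""
    if visited is None:
        visited = set()
    if node == target:      # this node contains what we're looking for
        return True
    visited.add(node)       # remember it so cycles don't loop forever
    # Follow each link from the current node and search that sub-graph.
    for neighbor in graph[node]:
        if neighbor not in visited and find(graph, neighbor, target, visited):
            return True
    return False
```

On a tree you could drop the visited set entirely, since there's only one path to each node.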

Both sorting and graph searching can be solved using iteration, but the algorithms are much uglier and harder to understand. Iteration does provide one benefit, though - performance. An iterative version of an algorithm is almost always faster than a recursive version because the recursive function calls incur some time and memory overhead. The exception is when you can get the recursive version into a form that uses tail recursion and the language supports tail-call optimization. If that's not an option and the problem size is too large, the recursive version could quickly overflow the call stack and crash your program.
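For reference, here's what a tail-recursive form of the earlier Add looks like. The trick is to carry the running result along in the parameters so the recursive call is the function's very last action. Python itself does not perform tail-call optimization, so this sketch is purely illustrative of the form a TCO-capable language (Scheme, for example) could optimize:

```python
def add_tail(a, b):
    # The recursive call is the final action with nothing left to do
    # afterward, so a language with tail-call optimization can reuse
    # the current stack frame instead of growing the stack.
    if b == 0:
        return a
    return add_tail(a + 1, b - 1)
```

Compare this with the original version, which still had to add 1 after each recursive call returned, so every frame had to stay on the stack.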

I'm not saying you should never use recursion. There are, in fact, many real situations where recursion is clearly better than iteration. The point is that you definitely need to understand the trade-offs when using it, and there is no problem that absolutely requires recursion.

Is Recursion Suitable For The Fibonacci Sequence?


If we revise the original questionable statement to say:
This technique is useful for complex computational tasks that lend themselves to recursion, like calculating the Fibonacci sequence.
Is it any better? Would you actually use recursion to calculate the Fibonacci sequence? The sequence is defined recursively, where given a number 'n',
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2)
Fibonacci(0) = 0
Fibonacci(1) = 1
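That definition translates directly into code. Here's the straightforward recursive version, sketched in Python:

```python
def fibonacci(n):
    # Base cases, straight from the definition.
    if n == 0:
        return 0
    if n == 1:
        return 1
    # Each call spawns two more calls, so the same Fibonacci
    # numbers get recomputed over and over as 'n' grows.
    return fibonacci(n - 1) + fibonacci(n - 2)
```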
But implementing this definition in code as a recursive function is expensive! Each call spawns two more calls, so the same Fibonacci numbers get recomputed over and over, and the number of calls grows exponentially with 'n'. That takes a lot of time and memory. A much faster and smaller algorithm would just start with two variables set to 0 and 1 and then add one to the other, back and forth, 'n' times. Here's the pseudo-code:
Fibonacci(n) {
   f0 = 0
   f1 = 1
   for (i from 2 to n) {
      if (i is even) f0 = f0 + f1
      otherwise f1 = f0 + f1
   }
   if (n is even) return f0
   otherwise return f1
}
This iterative form is almost as easy to understand, it takes a constant amount of memory, and it's way faster than the recursive form. I would never calculate a Fibonacci sequence using recursion when iteration is so much better.
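The pseudo-code above maps straight onto runnable Python:

```python
def fibonacci_iterative(n):
    # Two variables hold consecutive Fibonacci numbers; each loop
    # iteration adds one to the other, back and forth.
    f0 = 0
    f1 = 1
    for i in range(2, n + 1):
        if i % 2 == 0:
            f0 = f0 + f1
        else:
            f1 = f0 + f1
    return f0 if n % 2 == 0 else f1
```

Two integers of state, one pass through the loop, and no call stack to worry about.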

We Can Do Better


If calculating the Fibonacci sequence is a bad example of when to use recursion, what's a good example? It should be something obvious, something that immediately makes sense to most people as a problem you would actually use recursion to solve. I already brought up two problems that are better - sorting and searching - and both of them are pervasive in software development. Think about it. Most of what Google, Amazon, Facebook, and Twitter do can be boiled down to sorting and searching. Why not use those key problems as the examples of when to use recursion:
This technique is useful for complex computational tasks that lend themselves to recursion, like sorting or searching massive data sets.
I'm not sure whether these large companies use recursion in their algorithms, although it's pretty likely. Even though it is true that recursion runs into memory problems as the problem size grows, and iteration is generally faster when running on a single processor, everything changes when the problem size gets so large that you need to split it up and run the program on multiple processors. You see, most recursive sorting and searching algorithms use a divide-and-conquer approach. They automatically divide up the data set while processing it. That makes it quite easy to spin off chunks of data to be crunched on separate processors while keeping the basic algorithm simple.
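As a rough illustration of that idea, here's a toy sketch that hands the two halves of a list to separate workers using Python's concurrent.futures, then merges the results. Each worker just uses Python's built-in sort for its chunk; a real large-scale system would split the data across machines rather than threads, so take this only as a picture of the divide-and-conquer shape:

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    # Standard merge: repeatedly move whichever front element is smaller.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def parallel_sort(numbers):
    if len(numbers) <= 1:
        return numbers
    mid = len(numbers) // 2
    # The divide step hands each half of the data to its own worker.
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(sorted, numbers[:mid])
        right = pool.submit(sorted, numbers[mid:])
        return merge(left.result(), right.result())
```

The divide step is the same split that mergesort already performs; the only change is who does the work on each piece.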

It isn't that difficult to come up with much more compelling examples of why to use recursion. Using the Fibonacci sequence is lazy and misguided. It doesn't make sense to use recursion for it, and you're rarely going to need to calculate the Fibonacci sequence anyway. But everyone who uses software is exposed to sorting and searching, so those are the prime examples that need to be used when talking about recursion. Maybe recursion would be more widely understood if those who talked about it used real-world examples that mattered.