
The Ivory Tower of Software Development and Messy Reality

What is the best way to learn something that is complex enough and broad enough that you will spend a lifetime mastering it? Some people believe the way to learn something is to first study the fundamentals and proper techniques, and only then move on to doing or creating things. Other people take a different approach and introduce new students to something simple, fun, and engaging right away to capture their interest and imagination before getting into the more technical aspects of the subject.

It's a question of a purist approach or a pragmatic approach—the ivory tower or the open market. This dichotomy has a strong presence in software development, but it exists almost everywhere.

Learning an Instrument


One example of this purist-pragmatist divide occurs when learning to play an instrument. Let's say you wanted to play the piano. You wouldn't have to take lessons with many different teachers before you recognized competing beliefs about teaching methods.

The purist approach would involve learning all of the proper techniques first. New students would learn scales, chords, finger position, and a host of other fundamentals, and then they would drill and drill and drill until they could do these things perfectly without thinking. They would do all kinds of exercises completely outside the context of real music, and only when they were ready could they begin playing classical pieces. This is the ivory tower of perfection. Mistakes are eliminated quickly, bad habits are never allowed to take root, and the student is always on the straight and narrow path to musical virtuosity.

The pragmatic approach is a bit different. Instead of starting with technique, students start playing songs right away. The songs start out simple, to be sure, and proper technique is taught within the context of the songs that use those techniques. New skills are learned as needed to play new songs, and the focus is on playing real music. The belief is that the required skills will be learned in time; they don't all have to be mastered up front. Certain techniques may not be learned until much later, and students may spend more time in areas they enjoy. The students who want to become better musicians and challenge themselves will naturally put in the extra effort to master the more difficult techniques, because the music drives them to do it.

Which method would you prefer? They both have their merits, but I think more people are going to stick with the pragmatic approach because it's more fun and engaging. The purist approach quickly gets boring, and only the most dedicated (or most externally forced) students are going to get to the point where they can play actual music. Most people would give up on it before they got that far.

Programming Purity


Any sufficiently complex field will have this debate between the purists and the pragmatists about what is the best way to teach beginners coming into the field. The analog to the musical purist approach in the software domain is to have students learn all of the proper computer science theory first, before allowing them to write any real software. To a purist, the right computer science education would start with data structures and algorithms, preferably in Haskell or LISP (or both). It would quickly get into compiler theory and lambda calculus so that students would have a firm understanding of the most important parts of computer science.

I have no problem with languages like Haskell or LISP. I think they're great languages, and you'll learn a lot about programming by studying them. But as introductory languages for beginner programmers, they're terribly difficult and a huge barrier to entry.

The purist may claim that difficulty is a desirable trait in a language, and that one purpose of introductory programming courses should be to weed out students who don't grasp these all-important language features and computer science concepts. A computer science education should certify that a professional programmer knows how to write a compiler and an operating system, is proficient in the use of strongly typed languages, and has a firm understanding of closures and first-class functions.
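For readers who haven't met the jargon yet, here is a minimal sketch of the two concepts just mentioned. Python is used purely for readability; the essay names no implementation language, and these ideas are language-agnostic. A first-class function is one you can pass around and return like any other value; a closure is a function that captures variables from the scope where it was defined.

```python
def make_counter():
    count = 0

    def increment():
        # This inner function is a closure: it captures `count`
        # from the enclosing scope and keeps it alive between calls.
        nonlocal count
        count += 1
        return count

    # Functions are first-class values: we return one like any other value.
    return increment

counter = make_counter()
```

Each call to `make_counter()` produces an independent closure with its own captured `count`, so calling `counter()` repeatedly yields 1, 2, 3, and so on, while a second counter starts again from 1.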

Taking this line of thought to its logical conclusion is absurd. Do we really want to require aspiring programmers to learn all the theory first, to certify professional programmers against some arbitrary set of standards, and to allow only certified programmers to write software? What would we test for? How would we differentiate the programmers allowed to write compilers from those allowed to write payroll software for an HR department? How would we establish which level of the ivory tower each programmer could work in?

Not every programmer needs to know the same stuff to be effective at what they do, and the vast majority of programmers out there don't need to know pure computer science theory to do their work. Besides, thinking that it's even possible to structure a perfect learning path for programming and force every aspiring programmer to follow it is just silly. People don't learn how to program that way.

How People Really Learn to Program


How did you learn how to program? I bet you didn't start by studying memory safety and resource allocation. I've never heard anyone describe their early programming experiences that way. I started out programming in LOGO, making the turtle draw simple shapes on the computer screen. Then I started writing clones of simple games in QBasic. It was fun and exciting, and that is what hooked me in. I wanted to learn more because these early programming experiences were fascinating. I hear a lot of stories similar to mine, but never have I heard a programmer say they got hooked on programming the first time they learned about closures or type safety.

People become programmers for a wide variety of reasons. Not every programmer loves programming, but the best programmers almost certainly do. The desire to program is necessary for learning the more advanced topics of programming. It's too hard otherwise, and the programmers that don't love it are going to find ways around having to think about the hard topics. In the end, that's okay. They'll still have plenty of work to do without the hard stuff.

Those who do love programming will be driven to keep getting better, and as a result they will learn the proper techniques and best methods of programming. The fundamentals are hugely important, and one of the best ways to get better is to practice them and develop a deep understanding of them. Like the piano student who learns the necessary techniques while playing beautiful music, the best programmers will learn the fundamentals while making great software. If they need to learn a certain programming concept to make their software better or to become more effective as programmers, they'll do it, but the desire to program originated in those early experiences of writing simple programs to do cool things with a computer.

Creating cool programs in an accessible way should be the focus of any introduction to programming. It may be messy and impure, offending the sensibilities of the proprietors of the ivory tower, but it captures the imagination. And the imagination is a powerful force for learning complex subjects and doing awesome things.