
Knowing the Basics

We need to know an awful lot of stuff to be effective programmers. That isn't so incredible a claim, really. To be effective in any profession, you need to know a lot of stuff. But I don't know many other professions. I know how to be a programmer and what it entails, so that's what I'll focus on here. As programmers we need to know syntax, control flow, libraries, data structures, algorithms, design patterns, programming environments, paradigms, protocols, and good practices. That may be a long list, but it only scratches the surface.
If we need to know so much to be effective, how can it be done? What are the most important things to learn to get started programming and to continue climbing the learning curve from wherever we happen to be? We want to figure out what topics are going to give us the most bang for our buck, and learn those things well. The rest of the knowledge we need to get the job done can be paged in as needed.

Taking a Cue from Computer Architecture


One thing that has stuck with me since college came from my introductory computer architecture course. Near the beginning of the course we explored the various historical computer architectures, which were generally classified into two groups: CISC and RISC. CISC (Complex Instruction Set Computer) architectures tried to provide ready-made solutions at the programming interface, with instructions that had lots of different options and did large amounts of work per instruction. RISC (Reduced Instruction Set Computer) architectures instead provided a set of primitive instructions that could more easily be combined and reused in different ways to achieve the same goal with possibly more, but simpler, instructions.

Compiler writers found the RISC architectures much easier to work with because the simpler instructions could be combined in many more ways and used in a much wider variety of contexts than the more rigid CISC instructions. Computer architects also found RISC architectures easier to design for, and they were able to implement microarchitectural features like branch prediction, pipelining, and out-of-order execution more effectively. RISC architectures ended up being better on both sides of the hardware-software interface, and over time the CISC processors died out and RISC processors took over. Even Intel's dominant x86 architecture, which started out as a CISC architecture, became more RISC-like over time, both in the types of instructions exposed in assembly language and in how those instructions were translated into micro-ops for the pipeline to execute.
 
RISC primitives were found to be much more useful for a wider variety of problems than pre-packaged CISC solutions, and they often led to higher-performance code.

Primitives and Solutions in Programming Languages


How does choosing primitives over solutions translate to software engineering? To answer that question, let's compare two languages: Java and C. I used to know Java. I used it for a number of classes in college. (I know that doesn't mean I really knew Java. I only used it incidentally and superficially for some class projects.) I didn't experience the OO cathedral that Neil Sainsbury talked about in this post on why it's painful to work in Java for Android development, but I did get an early taste of it. If what he says is true, then the Java libraries have been built up into extensive frameworks that are trying to be solutions, and as was already noted, solutions have problems.

When solutions grow in an attempt to address more and more of a problem space, they necessarily become more generic. They become less applicable to any particular problem because they have to increase their surface area to cover the issues of every other problem in the same domain. All of the extra configuration and layers of architecture make the frameworks more complicated and harder to use. As the libraries keep expanding in both number and size, no programmer can keep all of the details in their head, and their ability to understand the big picture of how to use a language and its libraries starts to plummet. At some point it's worth questioning whether a framework is really adding value, or whether you are spending as much time understanding the framework as you would spend building a more targeted solution from simpler primitives.

In contrast, C libraries exhibit the approach of supplying primitives for the programmer to use as building blocks. Heavy frameworks are much rarer in the C community, so primitives are used more often to solve problems. Of course, primitives have a number of problems of their own. It may take longer to implement a solution from primitives if a framework exists that is a good fit for the problem at hand. If you blindly use primitives without checking what's available, you could be reinventing the wheel while recreating mistakes that have already been made and fixed in an available library, especially security or memory safety mistakes in the case of C.
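
To make the building-block idea concrete, here's a minimal sketch of composing two standard C library primitives, qsort and bsearch, into a targeted lookup. The user record and its fields are hypothetical names for illustration only, not part of any library.

    #include <stdio.h>
    #include <stdlib.h>

    /* A hypothetical record type; the names are only for illustration. */
    struct user {
        int  id;
        char name[32];
    };

    /* One comparison primitive shared by both qsort and bsearch. */
    static int compare_by_id(const void *a, const void *b)
    {
        const struct user *ua = a;
        const struct user *ub = b;
        return (ua->id > ub->id) - (ua->id < ub->id);
    }

    int main(void)
    {
        struct user users[] = {
            { 42, "alice" }, { 7, "bob" }, { 19, "carol" }
        };
        size_t count = sizeof users / sizeof users[0];

        /* Sort once with the qsort primitive... */
        qsort(users, count, sizeof users[0], compare_by_id);

        /* ...then look up a record by id with the bsearch primitive. */
        struct user key = { .id = 19 };
        struct user *found = bsearch(&key, users, count, sizeof users[0],
                                     compare_by_id);

        if (found)
            printf("found user %d: %s\n", found->id, found->name);
        return 0;
    }

Nothing here is a framework; it's two generic primitives and a comparison function glued together for exactly the problem at hand.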

Clearly, primitives aren't always the right choice when a better solution is available, but they do have a number of benefits worth considering. In C you can hold nearly everything about the language and its standard library in your head. Once you learn the language and libraries, you're done. It's possible to pull those primitives from your working memory and use them efficiently without digging through mountains of documentation or relying on your IDE to constantly provide hints. You know how things work, and you have total control over what's happening in your application. This level of control allows you to cut out unnecessary cruft and maintain a highly optimized code base.

Keep in mind that using primitives doesn't guarantee a clean, optimized code base, nor does using large frameworks prevent one. In either case clean code requires diligence and skill to achieve, but if a framework doesn't fit the problem, because it's too large or poorly targeted, then using primitives is more likely to lead to a better solution. Also note that I'm not picking on Java or C here. Both languages have their strong points and their drawbacks, and both work well for certain types of problems. They happen to exhibit certain tendencies in their libraries and frameworks that could as easily be shown by comparing C# and the .NET framework with Haskell, or Ruby and its Gems with LISP. Remember, the best language doesn't exist.

The point is that building up solutions from first principles can be better than trying to memorize a lot of heavy frameworks. Thoroughly learning the basics gives you a set of robust tools that you can use in a wide variety of situations. Knowing a certain amount of the basics is essential to being a good programmer, and the more you learn, the better you get. Learning a large framework is less beneficial because preferred frameworks change frequently, they're hard to memorize and understand completely, and worst of all, they don't teach you much about how to solve problems. They just give you the answers.

Building a Foundation


Okay, if it's so much more worthwhile to learn primitives, how do we learn them? In this post on what you need to know to function as a software engineer (read it if you have time; it's long, but entertaining), Steve Yegge talks about the astonishingly large number of rules you need to memorize, and accumulating that knowledge takes a long time. As a result, more experience generally yields better developers. But memorizing everything is hard. Gaining enough experience to be effective purely through memorization would take more than a lifetime. There must be a way to speed up the process. After all, we certainly know of great programmers who are only on their first lifetime.

We need ways of linking these bits of programming knowledge together so that we can remember a much smaller set of basic things and derive the rest. The best way that I've found to ingrain this knowledge in my brain is to study and practice the basics enough to have a deeper understanding of why they are true and how they work together to solve problems. Separate fundamental ideas can fit together to form a coherent overall view of programming that makes it all easier to keep in your head.

As an example, let's turn to mathematics, specifically the quadratic formula. It is certainly possible to memorize the quadratic formula, and a lot of people do. A lot of people also forget it, especially if they never have to use it in their day job. I'm not arguing that everyone needs to know the quadratic formula, but it is actually fairly straightforward to derive it from a more basic principle, namely completing the square.

If you work out a few example second-order polynomial problems by completing the square and solving for x, you'll begin to notice a pattern in how the coefficients are manipulated to get to the answer. Once you see the pattern, it shouldn't be too hard to derive the quadratic formula on your own. When you understand completing the square well enough to derive the quadratic formula, you no longer have to worry about memorizing it. Next time you can figure it out again. Ironically, because you have that deeper understanding, you're also more likely to simply remember it the next time you need it. The quadratic formula now rests on a solid foundation. It's a win-win.
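
For reference, here is the derivation sketched out in LaTeX notation, starting from the general quadratic ax^2 + bx + c = 0 (with a nonzero):

    \begin{aligned}
    ax^2 + bx + c &= 0 \\
    x^2 + \frac{b}{a}x &= -\frac{c}{a} \\
    x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 &= \left(\frac{b}{2a}\right)^2 - \frac{c}{a} \\
    \left(x + \frac{b}{2a}\right)^2 &= \frac{b^2 - 4ac}{4a^2} \\
    x + \frac{b}{2a} &= \pm\frac{\sqrt{b^2 - 4ac}}{2a} \\
    x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
    \end{aligned}

The only trick is the third line, where adding (b/2a)^2 to both sides completes the square on the left.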

Mathematics is filled with examples of deriving higher-order results from first principles. The entire field is built on that idea. In the same way, most of the ideas in programming can be built up from simpler primitives. If you deeply understand the fundamentals of object-oriented programming, you can easily derive design patterns and understand when and where to use them instead of over-applying them. If you deeply understand pointers and program memory, you can derive most simple data structures, and then simple data structures lead to the more advanced data structures. If you deeply understand control structures and recursion, you can derive many different algorithms and better understand how they work. Understanding how these ideas are built on primitives makes them easier to remember and use effectively.
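
As one small illustration of the data structure case, here is a minimal sketch of a stack derived from nothing more than a struct, a pointer, and malloc/free. The names are my own for illustration; this isn't from any particular library.

    #include <stdio.h>
    #include <stdlib.h>

    /* A stack built from two primitives: a struct holding a pointer
     * to the next node, and dynamic allocation with malloc/free. */
    struct node {
        int value;
        struct node *next;
    };

    static void push(struct node **top, int value)
    {
        struct node *n = malloc(sizeof *n);
        if (!n)
            exit(EXIT_FAILURE);  /* keep the sketch simple */
        n->value = value;
        n->next = *top;
        *top = n;
    }

    static int pop(struct node **top)
    {
        struct node *n = *top;   /* caller ensures the stack is non-empty */
        int value = n->value;
        *top = n->next;
        free(n);
        return value;
    }

    int main(void)
    {
        struct node *stack = NULL;
        push(&stack, 1);
        push(&stack, 2);
        push(&stack, 3);
        while (stack)
            printf("%d\n", pop(&stack));  /* prints 3, 2, 1 */
        return 0;
    }

From there, a queue or a singly linked list is a small variation on the same two primitives.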

The importance of fundamentals is probably why learning how to write a compiler is so instructive for programmers. Compilers use so many data structures, algorithms, and patterns that they're a one-stop shop for getting a deeper understanding of programming. While you're implementing a compiler, you're simultaneously learning how to use programming concepts effectively, exploring the underlying mechanics of a language at a deeper level, and discovering how the language interfaces with the lower-level machine, be it a virtual machine or a physical processor. You can't get more fundamental than that.
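
You don't need to write a full compiler to see why. Here's a rough sketch of the kind of exercise a compiler course starts with: a recursive-descent evaluator for arithmetic expressions, where each grammar rule becomes a function. It's an illustration only, with no error handling.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Grammar (one production per function):
     *   expr   -> term (('+' | '-') term)*
     *   term   -> factor (('*' | '/') factor)*
     *   factor -> NUMBER | '(' expr ')'
     */
    static const char *p;   /* cursor into the input string */

    static double expr(void);

    static void skip_spaces(void)
    {
        while (isspace((unsigned char)*p))
            p++;
    }

    static double factor(void)
    {
        skip_spaces();
        if (*p == '(') {
            p++;                     /* consume '(' */
            double v = expr();
            skip_spaces();
            if (*p == ')')
                p++;                 /* consume ')' */
            return v;
        }
        return strtod(p, (char **)&p);  /* NUMBER */
    }

    static double term(void)
    {
        double v = factor();
        for (;;) {
            skip_spaces();
            if (*p == '*')      { p++; v *= factor(); }
            else if (*p == '/') { p++; v /= factor(); }
            else                return v;
        }
    }

    static double expr(void)
    {
        double v = term();
        for (;;) {
            skip_spaces();
            if (*p == '+')      { p++; v += term(); }
            else if (*p == '-') { p++; v -= term(); }
            else                return v;
        }
    }

    int main(void)
    {
        p = "2 * (3 + 4) - 5";
        printf("%g\n", expr());   /* prints 9 */
        return 0;
    }

Replace the arithmetic with emitting instructions and the same structure becomes the front end of a toy compiler.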

The Basics Lead to Better Programmers


Programmers often complain that learning basic data structures and algorithms is pointless, that those things are never used in day-to-day programming, but you never know when some low-level principle that you learned will come in handy. That may sound flippant, but it happens all the time. The more fundamental principles you know, the easier it is to brainstorm ideas that are likely to work. You can pull from a larger store of flexible primitives that can be pieced together to form a solution tailored to the problem at hand, and you may not even realize that you're doing it.

Learning the basics trains your mind in abstract problem solving in a way that memorizing solutions won't. Sure, you don't need to know the fundamentals to get work done. Plenty of programmers put together working software from heavy frameworks and libraries that solve all of their problems for them. But those programmers probably don't understand the tradeoffs they're making or how the system they built really works.

A programmer who does know how their code works and why things were done the way they were is better equipped to handle the most difficult problems or the most complex features without making a total mess of the code base. Frameworks are certainly useful for solving a wide variety of design problems, but knowing the basics gives a programmer the wisdom to know when to use them and the confidence to roll a custom solution when necessary. To become a better programmer, start with knowing the basics.
