What Once Was Hard, Is Now Easy

Think back to something that you learned that was really hard, harder than most things you were learning at the same time. Maybe it was a difficult mathematical or computer science concept. Maybe it was a complicated procedure for managing resources in a program. Maybe it was an intricate set of architectural principles for designing software. Is it still hard now, or is it easy?

If the difficult concept is now something you use all the time, it's almost certainly much easier to understand than it used to be. You're familiar with how to use it and what to watch out for while using it. But even if you haven't used the concept since you learned it, it may now be much easier to pick up and use than you expect.

I've had multiple experiences where I learned something difficult, especially in a college course, and then set it aside for a while before revisiting it because I needed it for some project. To my surprise, I found that I understood the concept much better than I thought I would, and I could use it effectively without days of study to relearn it.

One case I remember in particular was the Digital Design Fundamentals course I took in college. I took it during my freshman year, and I distinctly remember thinking during the course that I was not understanding the material as well as I should have. Everything was new to me—binary and hexadecimal number systems, Karnaugh maps (extremely useful for boolean logic, by the way), Moore and Mealy state machines, and combinational logic design—and it felt like it was going over my head. I got a decent grade, but I finished the course thinking that I might need to study the material a bit more before it truly sank in.

A couple years later I picked up the textbook again to brush up on my digital fundamentals for a more advanced course, and lo and behold, I found the entire book to be super easy. I had been using most of the concepts all along in other courses without knowing it because they were, well, fundamental, so I already had a solid working knowledge of them with no need to revisit the textbook.

Other things that stick out as being really difficult when I first learned them but much easier now are pointers, recursion, and interrupts. (There are also things that will always be hard, especially the big two: naming, caching, and off-by-one errors.) Pointers and recursion are fundamental concepts that, once you understand them, will make you a much better programmer than you were before. You won't just be better; you'll be a different kind of programmer altogether, able to solve whole classes of problems much more easily and elegantly than you could before. Interrupts are also a fundamental concept, although not as useful for all types of programming. They are most applicable to embedded and system programming.

At first glance, pointers don't seem that complicated—a pointer is simply a variable that contains a memory address that refers to another variable—but to someone who has never seen or used them before, they can be mind-bending. For some reason adding a level of indirection to a variable confuses everything. Things get even more confusing when passing pointers as arguments to functions, using function pointers, and figuring out pointers to pointers. At some point, everything clicks, and you go from completely not understanding pointers to wondering why you thought they were so hard. They suddenly start to make perfect sense, and you never look back. Of course, pointers may still trip you up from time to time, but not because you don't understand them.

Recursion is similar to pointers in that it's fundamental to many programming problems, and programmers who haven't yet learned to think recursively are totally confused by it. A recursive solution to a problem can be created in three simple steps:
  1. Solve a trivially easy base case of the problem.
  2. Solve the current case of the problem by splitting it into a trivially easy part and a smaller version of the current case.
  3. Make sure that the smaller version of the current case will always reduce to the base case.
It sounds simple—and once you get it, it is—but for programmers new to recursion, it is incredibly easy to get lost in the details. Recursive problems are really hard to think about if you try to think them through in the iterative manner that most people are used to. You have to let that way of thinking go and trust that the combination of solving the base case and continually reducing the problem to the base case is actually going to work. It's not a normal way of thinking, but it is extremely powerful for certain classes of programming problems. Once you understand recursion, those problems become much easier to think about.

Interrupts add their own complexities to programming, and learning how to deal with those complexities can be a real struggle. A program with interrupts is actually a form of multi-threaded program, with all of the same issues that any multi-threaded program has, including deadlocks, synchronization, and memory consistency. Those threading issues never become trivial, even with experience, but even understanding the basics of interrupts is challenging at first. Interrupts can happen between any pair of instructions in your program, and that means machine instructions, not just the statements of your higher-level language. An interrupt can occur between the assembly instructions that make up one higher-level operation, so it could fire right in the middle of your count++ increment. Because of this behavior, you have to be much more careful about how you use variables that are shared between an interrupt service routine and the main program. Having a good understanding of how interrupts work is vital to embedded and systems programming, and it takes time to master.

I remember how hard it was to understand each of these concepts. I struggled with pointers. I wrestled with recursion. I wrangled with interrupts. None of them were easy at first, but now I use them often without breaking a sweat. I can think of plenty of other examples of difficult concepts, some I use regularly and others not so much. Because I've had good experiences with some hard things getting easier with time, I'm not afraid to pull out a concept that I haven't used in a long time to solve a gnarly problem. Even if it was a hard concept to learn, it's probably easy now.

This idea—what once was hard is now easy—has two major implications. First, when you are exposed to something new, it can feel overwhelming, and especially if you are trying to learn it purely by reading, it can feel impossible to fully understand and remember it. After using that concept to build something real, and struggling through all of the implementation details and reasons for doing things a certain way, you can come back to the original concept and find that it now seems trivially easy. Don't get discouraged when learning and things don't make sense right away. Sometimes all you need is more exposure and practice before everything starts falling into place.

Second, it is easy to forget that some concepts are difficult to learn and that you need to give yourself time. As you learn more things, more things are easy for you. You remember all of the things you can do that are easy, and you start to think that it's better to fall back on the skills you've already mastered than to learn a new, difficult concept. If you remember that the stuff you already know was once a real struggle to learn, then you may be more willing to struggle through another new concept, confident in the knowledge that this, too, will become easier with time. And don't think that this idea is limited to programming. It's true of everything in life that starts out hard. Once you know it, it's easy.