Tech Book Face Off: The Design of Everyday Things Vs. Designing Interfaces

Everyone should at least learn the basics of design. Knowing a little about design will open your eyes to how we interact with the consumer products and other man-made objects all around us. When something proves difficult or frustrating to use, you will have a better idea of what makes it such a poor design. When something is a pleasure to use, either because it is enjoyable in and of itself, or because it enables you to accomplish a task with ease, you will better appreciate why it is such a great design.

Here are two books that take very different approaches in describing the same fundamental principles of good design. The first, The Design of Everyday Things by Donald A. Norman, presents examples of common objects that we interact with every day of our lives - doors, appliances, phones, etc. - and explains what makes these things easy or difficult to use. The second, Designing Interfaces by Jenifer Tidwell, uses a catalog of software interface design patterns to showcase the same design principles and how they can be used to make well-designed software.

[Cover images: The Design of Everyday Things vs. Designing Interfaces]

The Design of Everyday Things


This book comes highly recommended by both Joel Spolsky and Jeff Atwood, and even though it does a great job of laying out the fundamental design principles behind the great products that we use in our daily lives, I couldn't help but be constantly annoyed by the delivery and tone of the book. Before getting into the annoyances, let's take a look at the benefits.

Donald systematically presents and describes four principles of good design: visibility, constraints, feedback, and affordances.
  • Visibility includes everything from visual to auditory to tactile stimuli, and not only involves making the user aware of certain aspects of the system, but also doing so in a way that the user will correctly interpret the operation or state of the system with little effort. 
  • Constraints should be used to limit the scope of the design and make the use of the object simple and obvious. 
  • Feedback gives the user a clear idea of what the system is doing and how it is reacting to the user's inputs or manipulations.
  • Affordances make it clear what the purposes of an object's features are, and how the object can be manipulated in a natural way to produce the desired result.
These principles will connect the physical model of the object with the mental model of the person using it. The better this mapping is between the physical and mental models, the better the user will be able to understand the system and use it productively.

The concept of affordances is an especially valuable contribution to the theory and practice of design. Common examples of this idea include handles that afford pulling, buttons that afford pushing, and knobs that afford turning. Being a father of two young children, I get to experience all kinds of other examples of this principle where the users, my children, have no preconceived notions about how an object is supposed to be used. In my day to day life, I am intimately aware that couch arms afford riding, crayons afford breaking and throwing, stairs afford falling, and wrapping paper tubes afford swinging - normally at a sibling. The next time you're designing a new feature for a product, think about how it would be used in the hands of a six-year-old, and you'll be getting close to its real affordances.

This is all good stuff, and there are many more essential concepts described in much more detail in the book, including things like "knowledge in the head or in the world" and "making the right thing easy and the wrong thing hard." There is a wealth of design wisdom here that should be learned and internalized, but the book is not without shortcomings that I found hard to overlook. They seriously impacted my enjoyment of the book and made me hope that there is a better treatment of the material somewhere else. Here is what I'm talking about.

I couldn't decide whether the tone of the book was more patronizing or more pandering to the reader. Donald constantly reminds you that the average person has great difficulty using the most basic of objects, especially doors. He would go on and on about poor door designs and how he and other people he meets are frustrated by not being able to open doors. You would think we were suffering from an impossible door epidemic. Now I am sure there are some terrible door designs out there, but in general that has not been my experience. If the book had focused on those bad designs and how to improve them using the presented design principles, that would have been better than trying to overgeneralize the state of door design and the people trapped behind it. Granted, he did explain how the principles applied to doors, but the explanation was lost amid the torrent of grievances.

Okay, enough about doors. Some of the other examples used were even more confusing. When he was talking about design constraints, he extolled the virtues of a LEGO police motorcycle and rider as a perfect example of how to use constraints to make a design easy to use. Test subjects could almost invariably assemble the toy without any picture, instruction manual, or help from the moderator. But why would you try to exhibit good use of design constraints by cherry-picking this one LEGO set from a type of toy whose main characteristic is freedom from design constraints, so that you can use your imagination and creativity? I'll bet if he had picked any LEGO set of moderate size, the test subjects would have had significant difficulty putting together the set's featured vehicle, even with the picture on the box. He should have at least picked something that was meant to be constrained as an example. As it is, this example was entirely distracting.

Later in the book, Donald reasoned that slow evolution of design was better than rapid iteration, and used the classic telephone with a separate handset and a cradle as an example. He argued that the cradle prongs protected the switch hook from being depressed and hanging up the call if the telephone happened to fall off the table. Really? That is the intended function of the prongs? How about to hold the handset on the cradle when the phone is hung up? That is what the prongs are doing 99% of the time. I think the fact that the prongs protect the switch hook from that rare fall is incidental, and would be a design feature by coincidence. I wonder what he thinks of the modern phones that are products of rapid iteration instead of slow evolution. The iPhone, in particular, is considered to be of exceptionally good design. Of course, there is the problem of butt-dialing.

One last thing that drove me crazy was the way Donald constantly referred to DOET or POET as if they were separate works that he was referencing. I always had to stop and think about what he meant before remembering, oh yes, that's Design Of Everyday Things, the book that I am, in fact, reading right now, or its previous title, Psychology Of Everyday Things. At best this was terribly annoying and jarring, and at worst it came off as pretentious and aloof. Why was that necessary?

Designing Interfaces


This book covers all of the main design principles that were laid out in The Design of Everyday Things, but in the context of a set of design patterns for software user interfaces. Jenifer tries to be comprehensive, and is fairly successful, covering everything from the ubiquitous Thumbnail Grid pattern to the exotic Data Brushing pattern and everything in between. Every chapter started off with some pertinent design tips for the pattern category being presented, and patterns were well organized into logical categories such as information architecture, navigation, layout, forms, etc.

I found the chapters on data presentation (chapter 7) and visual design (chapter 11) particularly interesting. The discussion about displaying data in a clear and user-friendly way at the beginning of the data presentation chapter was quite good. It emphasized making the visual presentation as obvious and understandable as possible, and provided great tips for doing so. For example, preattentive variables, the graphical characteristics of the displayed data, can be used to highlight important data points. Varying the color, texture, position, size, or shape of the data can all make it immediately apparent what the user should pay attention to.

The visual design chapter covered the basics of creating attractive layouts and provoking a desired emotional reaction by judiciously choosing design elements. I enjoyed seeing how color, typography, lines, space, texture, and images could be used to completely change the feel of a design without changing any of the content. I think this is an area of software design that every programmer could benefit from learning more about. Having a better understanding of how the look of software affects the user's experience has arguably become just as important as, if not more important than, the features and function of the software.

What I like most about this book is that it is pragmatic. It uses real-world examples to showcase the practical use of the design patterns presented, and offers concise, focused explanations of the principles behind the patterns. It makes for a great overview on the first read through, and then it can be used as a reference or an idea generator when you are actually designing software. The only issue I can see is that the design principles tend to take a back seat to the patterns because there are so many of the latter. But since this is a book about patterns, that was probably the right choice.

And the Winner Is...


Designing Interfaces, of course. I felt that it did a much better job of presenting the principles of good design without being distracting, annoying, or pretentious. Even though the focus was on the patterns, the principles came through clearly and were used to good effect when explaining the patterns. Because it is specific to software interfaces, it doesn't have the general applicability that The Design of Everyday Things does, but for designing software that doesn't matter much.

I really did want to like The Design of Everyday Things. The premise of developing good design principles in the context of everyday objects that we're all familiar with is intriguing. Some of the examples were actually entertaining, but the logic for including others doesn't hold up very well. By the end of the book I was getting preoccupied with disagreeing with the author instead of learning from him, and I found his perspectives on computers and the internet about as befuddling as the ones on doors and phones.

I would certainly appreciate a good book that focuses on design principles, one with better execution than The Design of Everyday Things. I'm sure that it was groundbreaking in its day, but I'm also sure that there are better alternatives available today. I even have a few on my list to try out. Until then, Designing Interfaces will do. Even though the focus is on patterns, it does a fine job of covering the relevant design principles as well.

7 Ways That Programming is Like Weight Lifting


I like to find ways to connect the different things that I enjoy doing in life, even if at first they seem to have nothing to do with each other. Certain good practices stand out more than others in any activity, and those practices can be applied more generally to other activities. What works well in one activity can be used to achieve new levels of skill in other pursuits. Take some time to appreciate this, and you'll be surprised at how many connections you can make between seemingly independent things. What different activities really have in common is you, the human being in the middle of it all.

The activity I'll explore today is weight lifting. I don't lift competitively, and my goal is to look fit and toned, not like the Incredible Hulk. I lift weights a couple times a week to stay strong and happy. Yes, happy. If I miss a workout I start to feel grouchy from the loss of physical activity. I sit at a computer all day and am otherwise relatively inactive, so my body and my mind need the physical exertion to keep from getting run down.

I have plenty of time to think during my workouts, and sometimes I ponder becoming a better programmer. As it turns out, a lot of the habits that help you progress in weight lifting can make you a better programmer, too. For instance:

Stick to a Schedule


You need a schedule. There is no getting around this fact in weight lifting. If you don't lift regularly, you're not going to get any stronger. Once you reach your desired level, you can maintain that level on two workouts a week, but until then, you need to be lifting three times per week, minimum. If you stick to a schedule, you'll be able to make fairly steady progress, especially in the beginning before your muscles build up a tolerance for the workout.

If life interrupts or you succumb to laziness and miss a few workouts, all is not lost. Getting back to where you were will take less time than the time you missed. As a rule of thumb, it takes about one workout for every week missed to recover your lost strength and start making progress again. But it's easy to keep skipping workouts once you've missed a few, so I wouldn't start down that road.

A continuous improvement schedule is a good habit to get into for programming as well. It may not be as essential as in weight lifting, but doing programming workouts a couple times a week to maintain your skills - or more often to improve your skills - is a great idea. Take some time to solve small algorithmic problems or write utility programs to keep your problem solving skills fresh and strong. Get a few books and explore whole new areas of programming to really make progress and learn new skills.
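
To make that concrete, here's the kind of five-minute kata I have in mind. This is a toy example of my own invention, nothing more:

#include <algorithm>
#include <iostream>
#include <string>

// Warm-up kata: reverse each word in a sentence while keeping the
// words in their original order.
std::string ReverseWords(std::string sentence) {
    size_t ixStart = 0;
    while (ixStart < sentence.size()) {
        size_t ixEnd = sentence.find(' ', ixStart);
        if (ixEnd == std::string::npos) ixEnd = sentence.size();
        std::reverse(sentence.begin() + ixStart, sentence.begin() + ixEnd);
        ixStart = ixEnd + 1;
    }
    return sentence;
}

int main() {
    // Prints "ehT kciuq nworb xof"
    std::cout << ReverseWords("The quick brown fox") << std::endl;
    return 0;
}

Ten minutes on something like this keeps the fundamentals from getting rusty between bigger projects.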

A Routine is Good, a Rut is Bad


If your workout routine is working for you and you enjoy it, then keep at it. If you're dreading the boredom of your workouts or feeling like you're stuck in a rut, then it's time to mix things up a bit. After doing the same routine for a couple months, your muscles will get used to it and stop responding to the stress you're putting on them. You need to change what you're doing to shock your muscles out of their comfort zone so they can keep growing. Do different exercises, use free weights instead of machines, do more repetitions with less weight, do fewer repetitions with more weight, or even change the order of the exercises to make your body work in a way that it's not used to.

Your mind works pretty much the same way. It adapts quickly to a particular task and gets bored easily. Find ways to challenge yourself to keep programming fresh and interesting. Explore new languages, libraries, and frameworks. Write algorithms in different languages and compare their performance and readability. Try writing code that is as understandable as possible without comments. There are all kinds of things you can do to mix it up and get out of a rut. The point is to do something different. You'll see things in a new way, and your programming skills will get stronger because of it.

No Pain, No Gain


There is no escaping it. Weight lifting is hard. When you're pushing yourself, every workout will get your blood pumping and your heart racing and leave you somewhere between tired and exhausted. You shouldn't push to the point of injury or extreme fatigue because then you're doing more harm than good, but a little exhaustion is a good thing. If you're feeling back to normal, or even energized, about an hour after the workout, that's right about where you want to be.

On a longer timescale, getting into shape will take years of work. It's not going to happen overnight, and you'll get as much out of it as the amount of work that you put into it. When you're first starting a workout routine, you're not going to be able to do much, and you're going to be sore for weeks until your muscles adjust. Once your muscles have learned to deal with the exertion, you'll have to keep pushing them to make progress. When you've reached a level of fitness that's satisfying, you'll still have to keep up the workouts to maintain the gains you've achieved.

You should expect no less of programming. It's also hard; it takes a great deal of time and effort to get better, and the amount of mental exertion you'll go through will be exhausting. Like weight lifting, you don't want to overdo it when programming. At some point you'll lose the ability to retain anything else without resting, and you'll make more mistakes than you fix, actually making negative progress. You need to push to make progress, but it needs to be sustainable. And like weight lifting, programming is going to take years to reach a decent proficiency, and decades of maintenance. If you want to stay at a certain level, you'll need to keep working at programming.

You Can do More With a Spotter


A workout partner is a wonderful thing to have. They are so much more than a pair of helping hands when you can't push out that final rep. A good workout partner will motivate you when you aren't feeling up to a challenging exercise. They will help keep you safe if you bite off more than you can chew. They offer constructive advice when you make mistakes or lose your form. They will keep you honest and hold you to your workout schedule when you want to take the day off. They will bring different ideas to the workouts, encourage you to try new things, and mix things up when they get boring. That pretty much sums up the advantages of pair programming and code reviews, too.

Don't Focus Only on Your Chest



If you only work your chest with bench presses, chest flies, and the like, you're going to plateau in a hurry. Having a strong back helps stabilize your torso so you can lift more weight safely. Strong arms, wrists, and hands do a lot during any upper body exercise, so they shouldn't be ignored, either. A strong core is even more important for stabilizing your body, providing power, and preventing back injury. And finally, don't forget about your legs. Chicken legs look ridiculous on anyone. Don't put a big barrel chest on those stilts. Build a nice strong foundation for your upper body.

The equivalent mistake in programming would be focusing on one specialized corner of the field. Don't do that. Branch out and develop your other programming muscles. If you're an embedded programmer, learn some web programming. If you write desktop applications, learn database design. Work on your core by learning some operating system design, compilers, microprocessor architecture, digital design, discrete math, or graph theory. Don't think of this as a definitive list, just some ideas that you can expand on. Find an area where you're weak and develop it. Learning new programming subjects will make you stronger in the areas you already know and bring you new ideas that were previously inaccessible or unknown.

Don't Forget the Little Muscles


Weight machines are very good at targeting specific muscle groups, but they take all of the variation out of the exercises. If you're not doing any free weight exercises and you don't do exercises that target smaller muscles like your rotator cuffs, wrists, ankles, and hip adductors, the neglect will likely lead to injury eventually. However, if you do develop these muscles, they will help stabilize your joints, making lifting easier and safer.

Most free weight exercises work a much wider range of muscles because your body has to work to control the weight, bringing these smaller muscle groups into play and strengthening them as well. The exercises become a positive feedback loop where the smaller muscles get stronger, which helps you lift more weight, which makes the bigger muscles stronger, which helps you lift more weight, which requires the smaller muscles to get stronger to control the heavier weight. Targeting the smaller muscles with special exercises can also help this process along.

In programming, your small muscle groups are the skills that support your day-to-day work: language features, sorting and searching algorithms, string processing, regular expressions, network protocols, IDEs, typing, etc. Don't neglect these things because you think you already know them or that practicing them is a waste of time. The stronger you are at them and the more automatic they become, the less mental energy you'll waste on them and the more you'll have available for the really complex programming challenges.
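
For instance, a quick drill for the regular expression muscle might look like this - an invented exercise, just to show the scale I mean:

#include <iostream>
#include <regex>
#include <string>

int main() {
    // Drill: pull every number out of a log line.
    std::string line = "read 512 bytes in 3 ms (2 retries)";
    std::regex number("\\d+");

    std::sregex_iterator it(line.begin(), line.end(), number), end;
    for (; it != end; ++it) {
        std::cout << it->str() << std::endl;  // Prints 512, then 3, then 2
    }
    return 0;
}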

You're Going to Have Good Days and Bad Days


Some days you won't feel like you can lift anything. You feel exhausted before you even begin, and it takes every ounce of will to get through the workout. This is normal. Keep in mind that tomorrow will be better, and do what you can. Other days you'll feel like Superman, and you can add extra weight and reps without breaking a sweat. Take full advantage of your temporary super powers.

Who knows why this happens. Maybe it's something you ate, or how you slept (or didn't), or the weather, or your biological cycle. Whatever it is, you're going to have good days and bad days, whether it's weight lifting or programming. Take the bad days in stride and don't get discouraged. Enjoy the good days when they happen, but don't let them go to your head. And the rest of the time, keep on keepin' on, and you'll keep getting stronger.

A Taxonomy of Code and Comments

Last week we had an excellent discussion in the C++ professionals group on LinkedIn (update: and now HN, as well) about my last post on trying to comment code less and make it more self-documenting. Thank you to everyone who contributed to the discussion. I really enjoyed reading everyone's perspective and debating the merits of commenting code. One commenter even pointed out the one line of code that I was still uncomfortable about in my example. I applaud such attention to detail when reading an article!

Forming Comment Camps


Over the course of the discussion I could see three distinct camps emerge that differed in what they thought of commenting based on the experiences they have had programming.
  • The No Comment camp had seen too many worthless or misleading comments in their day and recommended trying to use fewer comments and more self-documenting code. 
  • The Document! camp had found documentation comments extremely helpful when using APIs and stressed the importance of documenting interfaces and providing references for algorithms.
  • The Explain Intent camp had run into too much confusing and convoluted code in the past and either wished it had been better explained in comments or were thankful that it had been, as the case may be.
All of these ideas have merit, and there is much overlap and nuance involved in the arguments for and against each one. However, those arguments lie mostly in the grey area between these camps, where combinations of bad code and bad comments make things interesting and programmers' lives difficult.

My personal experience led me to the No Comment camp, as I showed with an example from the code base that I've been working on for the past year and a half. In it there were nine comments: six of them were worthless, one was attached to code that I eliminated, one was a question about whether the following code was necessary, and one was the answer to that question, stated poorly and in the wrong place. That's essentially 100% bad comments in a little more than 50 lines of code. Since I did move and reword the one poorly stated comment, I pruned nearly 90% of the worthless comments from the code and, I think, made it much more readable in the process.

Of course, this is a small section of code, but it is fairly representative of the code base I'm working on. I didn't have to look too hard to come up with an example. I grabbed the current code I was refactoring. Now that doesn't mean other code bases are the same. In fact, I am certain they are different, and that is where most of the differences between the opposing camps come from. The rest could be chalked up to differences of opinion and personal style.

The Comment-Code Taxonomy


These three camps can be put into a larger collection of comment-code types that make up a taxonomy. Let's think of code as being either Good, Bad, or Ugly. Good code is clean, self-explanatory, and self-documenting with meaningful variable and function names. Bad code is buggy, wasteful, or under-performing; it's not right, and it needs work. Ugly code is confusing or convoluted; it's working, but it makes you want to tear out your hair - or your eyes.

Comments can be categorized as Good, Bad, or Ugly, too. Good comments clearly explain the intent of the code and answer why the programmer chose to do things the way they did. They can also provide references and document interfaces for other programmers to easily use. Bad comments are the misleading or flat-out wrong comments that do more harm than good. Ugly comments are irrelevant or redundant because they restate what the code already says. We need a fourth category here for Nonexistent comments - pretty self-explanatory.
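
To make the comment categories concrete, here's the same line of code wearing each one. This is a contrived fragment, but the flavors should be familiar:

// Ugly: restates what the code already says.
// Increment the retry count.
++_cRetries;

// Bad: actively misleading - the count is retries, not seconds.
// Wait one more second before giving up.
++_cRetries;

// Good: answers why, which the code alone cannot.
// The first attempt usually fails while the link trains, so it
// shouldn't count against the retry limit.
++_cRetries;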

So this comment-code taxonomy can show where any particular piece of code falls within the landscape and how you could improve the code to get it to one of the three camps:



                     | Good Code          | Bad Code                            | Ugly Code
Good Comment         | Document! camp     | Fix the code                        | Explain Intent camp
Bad Comment          | Fix the comment    | Fix the code and the comment        | Fix the comment and maybe the code
Ugly Comment         | Remove the comment | Fix the code and remove the comment | Maybe fix the code; remove or improve the comment
Nonexistent Comment  | No Comment camp    | Fix the code                        | Fix the code or add a comment

My previous post focused on the reasons to work towards good code without comments, but the landscape is much more vast than that. There are excellent reasons to make any one of the camps the goal for the code you're working on. For the Document! camp, if the code is an interface that needs to be documented for your users, or the ideas came from somewhere else and should be credited, then good comments should be written for that code. Header files should be well-documented, and they tend to be the main place for these types of comments. However, comments in the implementation code can be an indication that a function is needed there instead, and the comments can be moved to the header file along with the function declaration.
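
As a sketch of what that kind of header documentation looks like - an invented interface, shown only for its shape:

// Runs one pass of the decimation filter over the samples that have
// accumulated since the last call.
//
// Must only be called after SamplesReady() returns true. Returns the
// number of result pairs produced, which is zero when the first stage
// does not yet have enough samples to run.
int RunFilterPass(int cChans);

The comment documents the contract - the precondition and the meaning of the return value - instead of restating what the implementation does.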

For the Explain Intent camp, there are numerous reasons why we would decide to put up with ugly code. Changes to the code base may be restricted. We may be coding around a bug in a library. We may be working under time constraints that don't allow for significant code refactoring. Or we may not be able to find that clean, self-documenting way to express the code that would preclude a comment. In those cases the comment should be there, and it should be good.

One last type of comment that falls somewhere in between ugly and nonexistent is the TODO comment. These comments are extremely useful for remembering what still needs to be done while you're in the middle of refactoring. I usually litter the code with TODO comments while initially working out what will be refactored and then make sure they're cleaned up at the end with a simple search through the code. They would be truly ugly if left in production code, so they should be nonexistent by the time you release.

Those are the special cases. As Don Norman addresses in The Design of Everyday Things, they may seem to be extremely frequent and are easy to remember because they are exceptional, but how often are we working on public interfaces or dealing with ugly code in frozen code bases? I'm sure some programmers do, but, even for them, not all the time. Otherwise, what are we doing spending all of our time staring at code we can't change? In all other cases - the common cases - I would still recommend doing what you can to make the code expressive enough to not need comments. Programming is an intricate puzzle, and we can use all the hints and guidance we can get when reading other people's code, or our own. But those hints don't need to be in ancillary comments when they can be directly embedded in the code.

Besides, if the code is so difficult to understand that a comment is necessary, what makes you think that the comment will be any more understandable than the code? I suppose it could happen, but I've actually never seen Ugly code with Good comments. Ugly code and Ugly comments both betray a lack of understanding, and they tend to stay together, if there are comments at all. If you've managed to express the intent adequately in a comment, many times the way to make the code better becomes blatantly obvious. That knowledge should be rolled back into the code instead of left hanging in a comment.

Make no mistake, writing good comments is hard - probably as hard as writing good code - because in both cases you have to clearly understand what you are trying to do and how you are expressing that in code. But why spend the time writing good comments when you could spend that time writing better code? It will improve both your programming skills and your code comprehension skills. It will stretch your abilities, and in the process, your mind, so that thinking in code becomes more natural over time. Self-documenting code should always be the goal. Comments are the exception when we fail to attain it.

Don't Comment Your Code - Write Better Code

Update: Somehow, I managed to remove the middle section of this post and most of the code example when I was updating labels, or adding a link, or something. It read like nonsense because of that, but I've rewritten the middle section to the best of my recollection. If you were confused before, it should make more sense now.

As I've gained programming experience, I've noticed a significant change in how I write code. I tend to write fewer and fewer comments, and the nature of them has changed. Where I used to explain what the code did and how it worked, I now leave those explanations up to the code itself. I try to more directly express what the code is doing, and in the rare cases where that is not sufficient, I may put a few words in a comment to explain why the code is doing what it's doing. If I find comments answering 'what' or 'how', I take that as an indicator that the code is not written well enough. Then I refactor the code to make the comment redundant and eliminate it.
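
A trivial before-and-after shows what I mean. The status register here is made up purely for illustration:

// Before: the comment answers 'what' because the code can't.
bool IsReady(unsigned status) {
    // Check that the transfer finished and the buffer is valid.
    return (status & 0x3) == 0x3;
}

// After: the names carry the 'what', and the comment disappears.
const unsigned kTransferDone = 0x1;
const unsigned kBufferValid  = 0x2;

bool IsTransferDoneAndBufferValid(unsigned status) {
    const unsigned fields = kTransferDone | kBufferValid;
    return (status & fields) == fields;
}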

I do this refactoring primarily because I hardly ever read comments. When I'm trying to understand a block of code, I focus on the code because that is what gets executed. The comments are a distraction, and I don't trust them. They very easily get out of sync with the code, and then they lead you astray instead of aiding your understanding.

I've come to think of comments as being like a writing device that I find more than mildly annoying - footnotes and end notes. I understand their use for citing references, but when an author feels the need to add explanations and anecdotes in footnotes that should have been in the main text, all it does is break up the flow of the text. If the footnotes were so important that they had to be included, they should have been integrated better with the main text. If they don't fit in the main text, then they should have been cut out completely.

The same reasoning applies to comments. Footnotes should not be a substitute for better writing, and comments should not be a substitute for better code. And like writing, better coding involves making the code more directly express what the programmer intends. This is not an easy thing to do, and it can take many drafts to reach a version of the code that does an adequate job of expressing its purpose. This process is a form of optimization that improves readability instead of performance, yet it is just as important as performance optimization because confusing code is a minefield of potential bugs and performance losses.

An Example of Bad Code Badly Commented


To make my reasoning a bit more concrete, here is one particularly messy method that I refactored recently, which illustrates how I go about making code more clear. Keep in mind that some months prior I had already done a fair amount of work assigning better variable and method names, but the code still went through significant changes before reaching a clear and concise result. As the method name implies, it runs a filter over a set of samples:

void CFilter::Run(void) {
   // Update stage 0 after EDMA writes into the Stage 0 buffer
   _rgStageInfo[0].SetWriteIndex(_pWrFirstStage);

   // Run filter stages
   int i = 0;
   int rgixFirOutput[MAX_ADCS] = {0};
   for( i = 0; i < _cStages; i++ ) {
       // Determine if there are enough Samples to produce at least 2 results
       int cResultPairs = _rgStageInfo[i].NumResultPairs();

       if ( 0 == cResultPairs ) break;

       bool fIsInternalStage = i+1 < _cStages;
       int dixFirOutput = 1;
       if (!fIsInternalStage) {
           if (_fDownConverting) dixFirOutput = 2;
       }

       // process all available samples in this stage
       for( ; cResultPairs > 0; cResultPairs-- ) {
           if ((i == 0) && _fDownConverting) {
               _rgStageInfo[i].DownConvert(_centerFrequency, _cChans);
        }

           int *pixFirOutput = _rgixFirOutput;
           Sample *pFirOutput = _rgFirOutput;
           Sample *pFirOutputAlt = _rgFirOutput + 1;
           if (fIsInternalStage) {
               GetFirOutputIndexes(rgixFirOutput, i+1);
               pixFirOutput = rgixFirOutput;
               pFirOutput = _rgStageInfo[i+1].GetBuffer();
               pFirOutputAlt = _rgStageInfo[i+1].GetBufferAlt();
           }

           _rgStageInfo[i].CalculateResultPair(fIsInternalStage, _cChans, 
                                               pixFirOutput, dixFirOutput, 
                                               pFirOutput, pFirOutputAlt);
       } // for (cResultPairs)
    } // for (all Stages)

    // Is this the final stage?
    if ( i == _cStages ) { 
       // Reset EDMA src addr to simulate a 4-word linear buffer src
       // then trigger EDMA to move filtered results to vInput buffers
       if ( _csHoldoff ) {
           _csHoldoff--;
       } else if (_fDownConverting) {
           CFft::EdmaTriggerAll(_mid, _rgStageInfo[_cStages].GetBuffer(),
                                _rgixFirOutput, _fDownConverting);
       } else {
            // This should be unnecessary since the parameter set will reload itself?
           EdmaSetSrc(_hEdmaFirOut, _rgStageInfo[_cStages].GetBuffer());
           EdmaSetChannel(_hEdmaFirOut);
       }
   }
}

Did your eyes glaze over? It's fairly confusing, and frankly, ugly code. There is so much going on here that doesn't have much to do with running the filter stages, and the comments are not at all helpful. But before getting into that, I should briefly explain the Hungarian notation being used. I use a variation of Apps Hungarian Notation to prefix variable names with information about the variables in shorthand. I was skeptical of Hungarian notation at first, but I've found that using prefixes that are actually meaningful in the context of the application is quite helpful for naming and understanding variables. Once you get used to the prefixes used in a given app, you no longer have to waste much time thinking of good variable names because, most of the time, they quickly come to mind. A lot of these conventions are the same across applications, but some are specific. Here are the ones that are relevant to this code:

'_' = member of a class
'c' = a count of something
'd' = a difference between two things (i.e. an offset)
's' = a data sample, specific to this DSP app
'f' = a boolean flag
'p' = a pointer
'h' = a handle to a system resource
'rg' = a range (i.e. an array)
'ix' = an index

These prefixes can be stacked, so 'cs' is a count of samples and 'rgix' is a range of indexes. Not all variables will lend themselves to this notation, but those variables normally have a specific purpose that's easily named. For example, _centerFrequency doesn't have a prefix except for the member designator, but it doesn't need one because it's clearly the center frequency that the sample stream is being down converted to.

An Intermediate Step on the Way to Something Better


Getting back to the problems with the code, the first line of the method shows the main problem with this whole piece of code. It is too detailed for the level of meaning that this code should convey. The method is running a filter, so it should clearly show how it runs the filter, not muck around with setting the write index of the first stage. The write index was optimized out, which I'll get to later, so this line and its accompanying comment were removed.

The next improvement comes from knowing that this code is only called when a pair of samples are available for processing, and only one pair of samples is ever available when the method is called. These constraints are not apparent in the comments. In fact, the comments lead the programmer to believe that any number of samples could be ready, even zero, but that is not the case. Moving the test for available samples to the end of the for loop and checking if enough samples have been accumulated to run the next stage makes more sense. Oh, and that redundant comment? Gone.

Let's move on to the inner for loop. As I said, this method is only called when two results will be generated for the first stage. Additionally, each subsequent stage can only generate a maximum of two results as well. That means the inner for loop will only run once. It's useless! And so is the redundant comment preceding it. They'll get axed.

Finally, there is a lot going on with a couple of FIR output buffers that is mucking up the inner for loop. All of those xxFirOutput variables are rather confusing. They are used to keep track of the stage buffers and the channel indexes within the stage buffers so that filter results get moved to the right place after each stage is processed. Instead of having this method keep track of all of this stuff, the stages themselves should keep track of it, and they should each have a pointer to the next stage so that they can coordinate the movement of their filter results. Moving the buffer handling code into the stage class simplifies the moved code, and reduces this Run() method to its fundamental operations:

void CFilter::Run(void) {
   // Run filter stages
   int i = 0;
   for( i = 0; i < _cStages; i++ ) {
       if ((i == 0) && _fDownConverting) {
           _rgStageInfo[i].DownConvert(_centerFrequency, _cChans);
       }


       _rgStageInfo[i].CalculateResultPair(_cChans);


       // TODO integrate this better and possibly use a while loop
       if ((i+1 < _cStages) && _rgStageInfo[i+1].DecSamplesUntilResult()) break;
   } // for (all Stages)


   // Is this the final stage?
   if ( i == _cStages ) {
       // Reset EDMA src addr to simulate a 4-word linear buffer src
       // then trigger EDMA to move filtered results to vInput buffers
       if ( _csHoldoff ) {
           _csHoldoff--;
       } else if (_fDownConverting) {
           CFft::EdmaTriggerAll(_mid, _rgStageInfo[_cStages].GetBuffer(),
                                _rgixFirOutput, _fDownConverting);
       } else {
            // This should be unnecessary since the parameter set will reload itself?
           EdmaSetSrc(_hEdmaFirOut, _rgStageInfo[_cStages].GetBuffer());
           EdmaSetChannel(_hEdmaFirOut);
       }
   }
}


That already looks much better. Now it's much clearer what the method is doing. For every filter stage, it does an optional down conversion if it's the first stage, it calculates a result pair, and it checks if enough samples have accumulated in the next stage to run it as well. If not, it breaks out of the for loop. If all stages have run, then the results are passed on to the next step of the process. But wait, look back at that down conversion code. It only runs in the first stage, and the first stage is guaranteed to run, so it can be moved above the for loop. Also, that first comment isn't saying anything useful, so we can get rid of it.

Next, notice that TODO comment? That is one kind of comment I don't hesitate to put in my code. It's there to remind me to go back to something that I might forget, but it is temporary. As soon as I finish the TODO task, I remove the comment. In this case the DecSamplesUntilResult() call can be moved inside the CalculateResultPair() call so that the latter call returns true if the next stage has enough samples to run. Then to convert the for loop to a while loop, we can use a pointer to the current filter stage instead of array accesses and put the CalculateResultPair() call inside the while loop condition. Then all we have to do inside the while loop is increment the stage info pointer to the next stage.

Finally, did you notice the question in the last comment? That's been there for quite a while, and I finally got around to answering it. The answer was actually right there in the previous comment, but it's not very clear. The reason the DMA source needs to be reset is that the destination size is bigger than the source size; if it weren't reset, the DMA source pointer would keep incrementing until the destination was full - right past the end of the buffer. The relevant comment was moved and reworded. The redundant comment before the test for the final stage was removed.

Better Code With a Single Relevant Comment


Now look at how much cleaner the final code is:

void CFilter::Run(void) {
   SStageInfo *pStageInfo = _rgStageInfo;

   if (_fDownConverting) pStageInfo->DownConvert(_centerFrequency, _cChans);

   while ( (pStageInfo != &_rgStageInfo[_cStages]) &&
           pStageInfo->CalculateResultPairForNextStage(_cChans) ) {
       ++pStageInfo;
   }

   if ( pStageInfo == &_rgStageInfo[_cStages] ) {
       if ( _csHoldoff ) {
           _csHoldoff--;
       } else if (_fDownConverting) {
           CFft::EdmaTriggerAll(_mid, pStageInfo->GetBuffer(),
                                _rgixFirOutput, _fDownConverting);
       } else {
           // Reset EDMA source address to simulate
           // a FIFO source to a larger sink buffer.
           EdmaSetSrc(_hEdmaFirOut, pStageInfo->GetBuffer());
           EdmaSetChannel(_hEdmaFirOut);
       }
   }
}

The code pretty much stands on its own now and clearly shows its intentions. The only surviving comment explains why the DMA source is getting reset because that's normally an odd thing to do. When I have to come back to this code six months from now, I'll be able to easily see what it does. I can read the code without having to slog through irrelevant or redundant comments or tedious details that should be handled at a lower level. Instead of becoming mired in byzantine logic, I can get on with the task at hand because the code's intent will be obvious. That is the goal of well-written code.


Follow Up: A Taxonomy of Code and Comments

Beware: Premature Optimization Can Happen at Any Time

I'll be the first to admit that I love optimization. No matter what type of code I'm writing, my mind will be constantly formulating and experimenting with alternative ways of achieving the design goals in the most efficient way possible and weighing the trade-offs. I would have a hard time choosing between optimizing and debugging as my favorite programming tasks. I know. That probably makes me weird, but honestly, I'm okay with that. Optimizing and debugging involve a kind of problem solving that I find extremely enjoyable during the process and satisfying when completed well.

Optimization can take many forms. Over the years I've learned to focus on the ones that yield more bang for the buck - architectural, data structure, algorithm, and readability optimizations - and avoid those that are more trouble than they're worth. These troublesome optimizations can generally be classified as premature optimizations and micro-optimizations - categories that are not mutually exclusive. It is all too common to see micro-optimizations that are done prematurely.

Before we go any further, some definitions are in order. Premature optimization is simply optimization that is done before it's known to be necessary, i.e. before you have actually measured the time consumed by the particular piece of code you want to optimize and found it to be a little CPU piggy relative to the rest of your program. Micro-optimization is twiddling with small code sections to try to beat the compiler at its job without making significant gains in performance, i.e. moving the deck chairs around on the Titanic.
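
A canonical example of a micro-optimization is hand-replacing arithmetic that the compiler already handles. The function here is contrived, but the pattern shows up in real code all the time:

// Micro-optimization: strength reduction by hand.
int ScaleByTenClever(int x) {
    return (x << 3) + (x << 1);  // 8x + 2x = 10x, allegedly faster
}

// What you should write instead: any modern compiler will emit the
// same (or better) instructions, and the intent is obvious.
int ScaleByTen(int x) {
    return x * 10;
}

If both versions compile to the same instructions, the 'clever' one bought you nothing but confusion.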

Avoiding these types of optimization does not give you the right to be sloppy. You should still be picking appropriate data structures and algorithms for the job at hand. Joe Duffy has a great writeup on ways that premature optimization has been used as an excuse for bad programming choices. Don't be yet another example of that.

Why Not to Optimize


There are a number of great reasons to avoid these bad optimizations, the most obvious being that you are likely wasting your time. If you are optimizing code that doesn't have timing constraints or already has good enough performance, those optimizations are worthless. If your optimizations are getting optimized out by the compiler or the compiler would have done the same thing anyway, your hard work would be all for naught. Instead of fighting on the compiler's turf, you could be spending your time optimizing at a higher level in an area of your program that will matter. Measure first, then optimize only where you need to.

When you optimize, you are committing to potentially more complicated code for the sake of potentially more performance. If the optimizations are done at a high level, that commitment is probably fine. The mental overhead of the optimizations is integrated into the architecture of the program, and so it is manageable and possibly simplifies the design instead of complicating it.

If the optimizations are at a low level, you are at risk of competing with the compiler or the libraries and frameworks or even the hardware you're using. After all, processors do all kinds of optimizations including branch prediction, out-of-order execution, and memory caching, and they are changing and improving all the time. Every time you upgrade your programming environment, you will have to remeasure your optimized code to make sure it still performs well. And besides, are your users all using the same environment that you are? Even changes to other parts of your program could change the assumptions that made the optimization work and nullify the performance gains. Any changes, both within and outside your control, could make the optimization obsolete, so it will have to constantly be tested and verified. Trust me, you want to avoid that rabbit hole.

Finally, you should be optimizing for readability over performance whenever you can. That may seem a little harsh, but in reality, readable code begets performant code. I can't begin to count the number of times I refactored a complicated section of code to be more readable, and only when I was finished simplifying did I see another way to refactor it to make it faster and use less memory. Oftentimes I could see that the original code was trying to be a performance optimization, but the complexity was getting in the way. It wasn't until I made the code comprehensible that I could make it performant, and without fail the newer, faster code was much easier to read and smaller. That's a win-win-win. The bottom line is that in almost all cases, you should optimize first for readability.
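
Here's that experience in miniature, with a contrived example: the readable rewrite is what exposes the algorithmic win.

#include <algorithm>
#include <vector>

// The original: nested index juggling obscures the fact that this is
// just a duplicate check, and an O(n^2) one at that.
bool HasDuplicateOriginal(const std::vector<int>& rg) {
    for (size_t i = 0; i + 1 < rg.size(); i++)
        for (size_t j = i + 1; j < rg.size(); j++)
            if (rg[i] == rg[j]) return true;
    return false;
}

// Once the intent is plain, the better algorithm becomes obvious:
// sort a copy and scan adjacent elements, O(n log n).
bool HasDuplicate(std::vector<int> rg) {
    std::sort(rg.begin(), rg.end());
    return std::adjacent_find(rg.begin(), rg.end()) != rg.end();
}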

When Measured Optimization Becomes Premature


Even though I try to follow these guidelines to the best of my ability, I still get caught by premature optimization sometimes. Last week was one of those times, and it brought out another reason to resist the urge to optimize until you are sure of how the program is working. But before getting into the code, here's a little background to hopefully make better sense of the example.

I'm writing embedded C++ code for a real-time application running on a TI DSP processor. A big part of what makes the real-time data processing possible for this application is the DMA (direct memory access) controller. The application has a number of memory buffers to stage the data so the processor can do calculations efficiently on a contiguous block of data before it's shuttled off to the next staging buffer. The DMA controller takes care of moving the data to the processor's internal memory and back out to external memory so that the processor is free to do program control and computation.

This DMA controller is packed with features to automate different types of memory transfers and kick off transfers from external events, but one of the more basic features is the ability to send a set of commands to the controller to initiate an immediate block transfer. The controller will go off and move the data and then interrupt the processor when it's done. Great!

The issue that I was dealing with was that sometimes there was nothing to do while waiting for the controller to move the data, so the processor had to wait for it to finish. It seemed like the controller was taking a fair amount of time, and I wanted to see if a plain old memcpy() call would be faster. Here's the little program I used to compare them:

void main() {
    int cs = 2000;
    Sample *pSrc = new Sample[cs];
    for (int i = 0; i < cs; i++) pSrc[i] = i;
    Sample *pDst = new Sample[cs];


    int cTests = 10;
    for (int j = 0; j < cTests; j++) {
       unsigned cb = (j+1)*cs/cTests*sizeof(Sample);
       unsigned tStart = CLK_gethtime();
       int id = DatCopy(pSrc, pDst, cb);
       DatWait(id);
       unsigned tDatCopy = CLK_gethtime();


       memcpy(pDst, pSrc, cb);
       unsigned tMemcpy = CLK_gethtime();


       UTL_logDebug2("Test %d copying %d bytes", j+1, cb);
       UTL_logDebug2("  DatCopy: %d, memcpy: %d", 
          tDatCopy - tStart, tMemcpy - tDatCopy);
    }
}

The DatCopy() function call sets up the DMA transfer of cb bytes from pSrc to pDst, and then the DatWait() call waits until the transfer is complete. The CLK_gethtime() call is a special function that returns a count of the number of processor clock cycles since reset, and the UTL_logDebug2() calls are special print statements that log the formatted strings to a memory buffer that can be viewed with an emulator.

The real application doesn't copy more than 4000 bytes at a time, so this loop measures the amount of time that the DMA and memcpy() take to copy from 400 to 4000 bytes. Here are the results I got:

[Graph: Data Copy Comparison]

So according to this measurement, memcpy() is always significantly faster than DatCopy(), and I should pretty much always use memcpy() unless there is some other computation that can be done after kicking off the transfer to hide the latency in DatCopy(), right? That's what I thought, too. It seemed pretty straightforward, and I figured it would be an easy performance gain. I was about to go change all my uses of DatCopy()-DatWait() to memcpy(), but I had a nagging feeling that this couldn't be right.

The Premature Optimization Bug


Because of the memory architecture of this processor, memcpy() has to use the DMA controller to do its copying. All memory operations go through the DMA controller, but memcpy() was doing the transfer in small pieces under program control while DatCopy() should have been using the controller directly to transfer the data in one big block. It shouldn't be possible for memcpy() to be faster. Indeed, it isn't.

There was a bug in the DatCopy() code. The problem was that DMA transfers are set up by default to link to a null transfer that tells the controller to do nothing. The null transfer can be replaced by another DMA transfer so that when one transfer completes, it starts the next transfer automatically. However, if the null transfer is left there, then the DMA controller flags it as a missed transfer and takes its sweet time getting back to tell the processor that it's finished.

Since all of these DatCopy() transfers are one-offs, they shouldn't link to another transfer. They should return immediately. Once I figured this out and flipped a controller bit to make the transfers static instead of linked, I got this instead:

[Graph: Corrected Data Copy Comparison]

Ah, the world is right again. DatCopy() is generally faster than memcpy(), as it should be, except for transfers less than about 1000 bytes because of the time needed to set up a DMA transfer. There's only one place in the application where the transfers were guaranteed to be that small and could improve throughput if they were faster, so that's the only place where I put in the memcpy() optimization. As a bonus, all the other transfers sped up because the DMA setup bug in DatCopy() was fixed.

If I hadn't listened to that nagging doubt, I would have changed the code in dozens of places in the mistaken belief that I was improving performance, when in fact I was papering over a performance bug. Don't fall victim to that kind of hastiness. If something doesn't seem right, think it through and make sure you're not optimizing prematurely. You could be trying to do the compiler's job. You could be committing to code maintenance that isn't worth it. You could be unnecessarily complicating your code. Or you could be ignoring bugs that are preventing real performance gains. Do your homework instead, and only do the optimizations that matter.