
Programming Is an Endeavor, Not a Race

It seems obvious that the field of programming is an endeavor and not a destination to be reached. The technology and best practices are always changing, sometimes at an alarming pace, and keeping up with that rate of advancement requires a constant commitment to personal development. That means that good programmers are always learning, yet I read internet discussions all the time that attempt to separate novice programmers from experienced ones and argue in favor of one or the other.

The advocates for the novice programmer appeal to the need for training, mentoring, and standards to help the novice programmer become productive, protect the team from mistakes the novice would make, and generally expedite the process of getting the novice to an expert level. The advocates for the experienced programmer argue that such measures waste the expert programmer's time with endless distractions and stifle their creativity with rigid coding guidelines. The novice programmers complain that the experts are aloof and won't give them the time of day, while the experienced programmers claim that the novices don't care about learning the best way of doing things or are incapable of comprehending their advanced code. The conclusion this camp draws is that teams should be made up of only the best programmers to be successful.

Certainly, I'm oversimplifying the ongoing argument here, but this is the general form that I see it take, time and again. I see a lot of validity to both sides of this argument, especially when considering that everyone has their own perspective coming into the debate, and their points are certainly true for their own experience. It seems to me that they are all right, and at the same time they are all wrong, because the framework of the argument is wrong.

The major flaw comes from assuming that programmers can be categorized as either novice or experienced and that there should be any tension between them. In reality, programmer skill is a continuum, and there is no definite point when someone transitions from a novice to an expert or to any intermediate skill levels that you could come up with. We all started out as novice programmers at one time, and now we are all at different points on our learning path.

New programmers are entering the field every year as people take their first programming class or discover a book or online tutorial that introduces them to programming. That will still be true 10, 20, or 30 years from now, and we need these new programmers because at least some of them will become tomorrow's great programmers. They won't be able to write great code to begin with, and some of them never will. But unless they write a lot of code, they will never learn how to write good code. Some of that code will likely make it into your code base, and yes, you will have to maintain it. That is part of your job as a more experienced programmer.

Another part of your job is to help teach other programmers. You may not feel like an expert programmer capable of bestowing your knowledge on others, but unless you are a beginner, you likely know some aspect of programming fairly well. If you are an expert, it is likely in specific areas, and you know of other areas that you'd like to work on. You're an expert and a novice at the same time. You should do what you can to spread the knowledge you have so that others can benefit from your expertise, and hopefully, you will benefit from others with expertise in areas where you are weak.

There is an additional benefit to teaching, and that is that while you are teaching, you are also learning. Having to explain something so that another programmer understands it will deepen your own understanding of the subject in a way that reading about it or having it explained to you or even doing it yourself will not achieve. Teaching something forces you to organize the topic in your own mind more fully and solidify concepts that you may have only superficially learned before.

Take a simple example, like how to choose when to use a for loop or a while loop. You may have been choosing one or the other in different coding contexts for years without thinking much about it. You know how they both work, and you can use them fluently. As soon as you have to explain to someone else why you would use each of them in different contexts, I guarantee that you will learn something new because you are forced to examine their pros and cons, their structure, and their idioms in ways that you probably haven't thought about before.
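To make that concrete, the trade-off might be summed up in Python something like this (both functions are invented purely for illustration): a for loop declares up front that you are visiting every element of a known collection, while a while loop runs until some evolving condition is met.

```python
# A for loop fits when the iteration space is known in advance:
# the loop structure itself communicates "do this once per element."
def total_price(prices):
    total = 0
    for price in prices:
        total += price
    return total

# A while loop fits when the number of iterations depends on a
# condition that changes as the loop runs, such as repeatedly
# halving a value until it drops below a limit.
def halvings_until_below(value, limit):
    count = 0
    while value >= limit:
        value /= 2
        count += 1
    return count
```

Articulating even this small distinction out loud is what forces you to notice the idioms you have been using on autopilot.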

Earlier I mentioned standards, and I think standards are a big point of contention for a lot of programmers. Some think they are necessary because standards will protect the code base from novice programmers that don't know what they're doing, and maybe those programmers will learn something about good programming by following standards, assuming they actually read and internalize them. Others hate standards because they feel that coding standards are draconian. They feel that standards condone coding without thinking and stifle the creativity of more experienced programmers so that they can't write the best code as the situation warrants. Do you see the inconsistency here? How can standards promote good programming but at the same time prevent good programming?

The problem is that most coding standards are created under the assumption that a few experienced programmers can codify their knowledge, impose it on novice programmers, and expect good code as a result. There's that false dichotomy of novice and expert programmers again. It is true that new programmers need to learn and understand good programming practices before potentially abusing more complex language features. Basically, we need to learn the rules before we can break them. How can we encourage that practice without prohibiting experienced programmers from using complex features in elegant ways because they are potentially dangerous in the wrong hands?

Then there is the related concern that novice programmers won't be able to understand code that uses more complex language features, so they should be restricted for that reason as well. That line of thinking ignores the fact that programmers can and likely will learn these features with time, and the more they are exposed to them, the faster that learning is likely to happen. A better way of implementing coding standards would be to make them as lightweight as possible with the emphasis on things like good naming conventions and unit testing. Then encourage a culture of asking questions and mentoring through practices like code reviews to help speed up the process of disseminating knowledge and improving everyone's programming skills.

For every programmer, there is a beginning, but there shouldn't be an end. There is no point you can reach where you know everything you need to know. Programming is a dynamic field, and there is always something new to learn. We are all at different points on that endless endeavor. We have the responsibility to learn when our skills are lacking and to teach others when the opportunity arises. The next time you're working with novices and their code, remember that you were a novice once, too. You could learn something by teaching them.

What Can Artists Teach Us About Programming?

Today I witnessed a remarkable feat. We had a guest speaker by the name of Ben Glenn at my daughter's Sunday School class. He gave a comedic and somewhat inspirational talk about appreciating and developing your gifts, but that's not what I want to talk about. I want to talk about what he did next. He turned on some music, put on a surgical mask, and began drawing with chalk on a large black sheet clamped to a standing wooden frame. He said the sheet was a plain old bed sheet from Wal-Mart, and it looked like it would fit a queen-sized bed.

As he jumped back and forth, spreading colored chalk over that bed sheet and sending clouds of dust into the air, I was mesmerized. Slowly at first, and then progressively faster, a scene appeared before us of a sunset on a shoreline with an island of mountains off in the distance and a lighthouse and palm trees framing the image in the foreground. Within about 20 minutes he had created a beautiful work of art, and we all got to see it come to life.

I hadn't brought a camera, so I can't show exactly what he drew for us, but it looked something like this print from his web store:

[Image: Ben Glenn print, "Castaway"]

I think the fact that he can create these images so quickly and in front of a live audience is incredible. I know it's not a unique talent, but it sure does take a major amount of dedication, focus, and practice. I appreciate that. And it was awesome to watch. If you ever have the opportunity to watch a great artist at work, take the time to really see what it takes to create a thing of beauty in real time. As I was watching the scene unfold, I couldn't help but think about how what he was doing with chalk and a canvas mirrors what programmers do with a programming language and a computer. Even though his is a physical medium and ours is a virtual one, how we create our masterpieces has many similarities, and we can learn a lot from watching a great artist in the act of creating.

Let's start at the beginning, with an artist's worst enemy: a blank canvas. Now Ben obviously had a pretty good idea of what he was going to draw. He's done this hundreds of times before, and he's probably not going to draw something completely new for the first time on stage in front of an audience. He has a design of a scene in mind, and he knows what he wants the scene to communicate to the audience. Let's assume he's taken care of all of those preconditions of design and requirements and we're looking solely at execution.

That blank canvas can be pretty scary. I know I've faced it plenty of times in the form of a blank file and an insistently blinking cursor. How to begin? Ben didn't hesitate. He grabbed a big stick of yellow chalk and immediately began throwing color up on that canvas in great strokes and swirls. Then he dropped the yellow and picked up some orange and did the same thing. Then he used some blue and some purple and some green, building a foundation for the scene he wanted to create. The blank canvas was the enemy, but he quickly beat it by putting up something real that he could work with and build on.

We can do the same in programming by writing something into that blank file as quickly as possible. Define a class and start filling it in. Don't try to think about every requirement and feature that class has to fulfill from the beginning. Trying to keep all of that in your head can paralyze you. Put up a skeleton first, something you can build on. Test Driven Development can help here because you can define the features and requirements as tests so you won't feel like you're forgetting any of them. The tests become part of the foundation of that class, and they give you something to write that should be easy to start with. Once you've filled in the class enough and all of your unit tests are passing, you know you're done.
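A minimal sketch of that workflow, using Python's standard unittest module (the ShoppingCart class and its requirements are hypothetical, invented for illustration): the tests capture the features up front, while the class starts as little more than a skeleton you can build on.

```python
import unittest

# Skeleton first: just enough structure to beat the blank file.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# The requirements live in the tests, so nothing is forgotten.
# When they all pass, the class is done.
class TestShoppingCart(unittest.TestCase):
    def test_empty_cart_totals_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("apple", 2)
        cart.add("bread", 3)
        self.assertEqual(cart.total(), 5)
```

The point isn't this particular class; it's that the tests give you something easy to write first, and the skeleton gives you something real to build on.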

Once the foundation was done, Ben started defining what would become the background with some additional detail. Here I started noticing that he used a wide variety of strokes. Some were finer and more precise to add definition to an element, while others were coarse and strong to get a lot of chalk on the canvas quickly and rough out a new element. He knew intimately well when to use different techniques, and his movements were always definite and confident. That confidence came from practice, probably thousands of hours of practice. He knew exactly how each stroke would add to the visual appearance of the scene, and he could produce each one without thinking because the motions were ingrained in his arms and hands from those hours of practice.

It was also obvious that he knew at a deep level how the different colors, strokes, and elements would interact on the canvas to produce a visually rich and stunning image. That knowledge likely came from intense study in addition to practice. We should strive for the same kind of understanding of our tools, languages, and frameworks, and experiment and practice with them until we can use them with the same elegance and ease.

As the scene developed, he would leave some elements unfinished, go to a different space to create another element, and then return to further define the earlier ones. He would move around the canvas, roughing in the sun, the planets, and the mountains at the same time so that he could use them as markers for each other and the other foreground elements yet to be created. They helped define the space and flow of the scene, but at first it was not clear what any of them were. He didn't need them to be completely refined and perfect right away; he only needed them to be there to add structure to the scene. Once everything was arranged and he went back and added detail, those nondescript elements quickly jumped to life.

The same type of process can be quite helpful in programming. First roughing in different classes and methods so that the whole program can hang together, before going back and filling in the details, is a great way to keep a project moving along without getting bogged down in the details too early.
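In code, that roughing-in might look like stub classes whose methods exist mostly to hold the structure together (a hypothetical Python sketch; every name here is invented for illustration). The stubs do almost nothing yet, but they let the whole program hang together and run while the details wait.

```python
# Rough in the shapes of the program first. Each stub marks an
# element to be refined later, like an unfinished patch of chalk
# that still gives the scene its structure.
class Scene:
    def __init__(self):
        self.elements = []

    def add(self, element):
        self.elements.append(element)

    def render(self):
        return [element.describe() for element in self.elements]

class Sun:
    def describe(self):
        # TODO: refine with color and position
        return "sun (rough)"

class Mountain:
    def describe(self):
        # TODO: refine with ridgeline detail
        return "mountain (rough)"
```

Because everything is wired together from the start, each stub can be refined in any order without the rest of the program falling apart.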

Another effective technique that he used when adding new elements to the scene was throwing stuff away. He would develop an area of the scene and then later come back and draw something else right over the top of it. If you only look at the final picture, you may not even know that originally, the colors of the sunset covered the entire canvas. The mountains and the lighthouse weren't drawn on empty space; they replaced the sunset behind them. The same is true of the palm tree covering the mountains behind it. He wasn't concerned with preserving every ounce of work that was done on any particular element, even though time was limited. Some of that first chalk that was laid down is never seen in the final picture. It was put on the canvas only to be thrown away, but it still served a purpose. It helped define the background, and since it was drawn in first, it helped make the background that can be seen in the final image look continuous and flow behind the foreground elements.

What should we take away from that technique? For one, our code is not sacred and we shouldn't worry so much if some, or even a lot, has to be thrown out, redone, or replaced. Even if some particular code is pitched, it helped define the program at one point, and hopefully the good effects of that code will endure even after the code itself is removed. Inevitably, some of the original code will remain to the end, just as some of those original chalk strokes are still there in Ben's finished picture.

One last lesson we can take from watching an artist at work is how to stop. Ben knew he had our attention for a limited time, so when he had developed the scene enough and all of the elements looked recognizable and evoked the right mood, he set down his chalk and took his applause. He had created a beautiful, moving image in less time than it takes to bake a pizza. I'm sure there are parts of it that he thought he could have done better, or things he wanted to change, but he also could have ruined it with endless tweaking while losing our engagement in the process. Instead, he shipped it to his customers, the audience. We could do worse than learning to do the same.

Ender's Game is an Understated Story of Uncertainty

I've spent nearly all of my free time over the past two years studying different aspects of software engineering that I had let languish after college. I've quite enjoyed picking up learning again after a long hiatus, but recently I've felt the need to take a small breather, and watching the occasional movie wasn't going to cut it this time. I haven't played a video game in the past two years, but I tend to get sucked into those to a point that they take up weeks of my life.

I haven't read any novels in that time, either, and I love reading fantasy novels. The last ones I read were The Lost Chronicles Trilogy from the excellent DragonLance series. I've read most of the series, and I've enjoyed it immensely. But for this little break, I wanted something a bit lighter. Since Ender's Game is coming out as a movie very soon, I thought I'd pick up the book and see if I liked a classic work of science fiction as much as I like fantasy. I read it in three days.

Three days might not sound fast, but it was for me, considering I only have a couple hours a night of free time. The first couple chapters were slow and awkward, as if Orson Scott Card didn't quite know how to begin. And the last chapter was a bit off, too, as if he didn't know how to end it, either. But the middle was excellent, and I couldn't put it down.

I had forgotten how nice it was to sit down with a good book and get entirely lost in another world, to set your imagination free and dream while you're awake. I especially liked Card's sparse descriptions. He stayed away from specifics about the battle room or the simulators or any other settings so that you could imagine it for yourself. If a scene was taking place in a corridor, you were free to build up the environment in your head in your own way on the fly. It helped put you right there in the middle of the story because you're not working to figure out what the author is trying to describe, but witnessing the unfolding of the events on the page.

I also thoroughly enjoyed reading a book that only showed you what was happening. There was no telling you what to think. This feature is common in novels. But it's something I had forgotten, and it's in stark contrast to all of the technical books I've been reading. I know they're not really comparable, but please, bear with me. Some of the best technical books are very good at showing with examples or anecdotes or what have you, but even the best ones still fall back on their own analysis to make their points. They end up telling you what you should think or do. "This is the best way to format your code, for these reasons," or "that is the proper way to build a web form because the user is going to want to do this."

In all fairness, it is essentially unavoidable. People expect the author of a technical book to give plenty of advice on the right way to do things, and they would criticize any book that didn't give a complete analysis of the material being presented. I would, too. After all, we read technical books to get an expert opinion on the topic at hand. But boy, is it refreshing to get away from all of that telling, telling, telling for a while. In Ender's Game there was none of that. The story showed you what happened. It took you inside Ender's mind and showed you his feelings and reactions to what he had to deal with. And then it left you to sort everything out for yourself, to draw your own conclusions.

Ender's Game covered a lot of ground. It dealt with bullying, social hierarchy, and violence. It dealt with loyalty, authority, and responsibility. It dealt with self-reliance, teamwork, and perseverance. It never beat you over the head with any of these themes, but it left you thinking about them. The more I thought about all of these themes and others, the more I thought that the overarching theme of the book was that nothing is certain.

I would like to explore the way the book presents this theme, but I don't want to give away any spoilers for anyone who hasn't read it yet. I hate having plot surprises spoiled for me because I love the thrill of experiencing them first hand, and I want to protect that experience for others as well, so forgive me for being somewhat vague. People who have read the book should know what I'm talking about.

Nothing is black and white in the book. Characters that appear to be evil end up doing things that could be considered good, and characters that appear to be good end up doing things that are incredibly evil, when viewed from a certain perspective. Sometimes the character is not even aware of the consequences of their actions. Nothing is as it seems. Everything depends on how you look at it. And events, motivations, and intentions can be interpreted multiple ways.

Once I found out what was really going on with the I.F. (because it should be obvious that all is not as it seems), I immediately began to wonder what would have happened had the course of events been different. Everyone was so focused on the current strategy and the tools they had developed that no one questioned whether another path could be taken or if there was an alternative explanation for past and current events. What if different decisions were made? It seemed to me that there was not a single, definite path to achieve the final goal, but numerous possibilities. The outcome that resulted was by no means necessary, but now that what was done was done, could the reconciliation at the end be adequate? That is left for the reader to decide ... and ponder.

That line of thought is where the questions lead out of the book and back to the real world. Coming from software engineering, where almost everything is supposed to have a definitive answer, or at least most people believe that they have the definitive answer on any number of topics, it is important to come back to reality every once in a while. The reality is that there is no right answer. There may be wrong answers and mediocre answers and possibly some good answers, but nothing definitive. The best that can be said is that you have an answer, one of many that could possibly work.

The point is to consider whether the methods, practices, or standards being advocated are indeed the final answer to the subject at hand, or are the good outcomes resulting from those practices due as much to luck or coincidence or pure randomness as to the processes that were followed. Does the programming language you use really matter that much? How about the design methodology, or the development environment? Is it possible that the positive results that were achieved in successful projects could have come about in any number of other ways, and that the most important factor might have been the people involved?

After all, software engineering is mostly a human endeavor. We are not so much dealing with the physical laws of nature as the continually evolving rules and constraints of human design. Software is built on top of hardware and other software designed by humans. The design constraints matter, and as they change over time, best practices will change with them. We are also continuously gaining new insights and information about how we can be more productive both as individuals and in teams. With new information comes new ideas and new ways of doing things, so we shouldn't hold on to old practices too tightly just because we've already committed to a certain course of action, like the I.F. in the book. The goal should be to always evaluate the current context for the best way of achieving the most positive outcome for all parties involved.

Tech Book Face Off: User Stories Applied Vs. Learning UML 2.0

Like many software engineers, not to mention people in general, I hate busy work. I tend to view most formal development processes as involving an inordinate amount of busy work. Granted, the reason for much of that work is to facilitate the building of a quality product that meets the customer's needs within a certain schedule and budget. But all too often, the processes put in place can put undue pressure on all of those goals because spending too much time in meetings on requirements documents and specifications can either distract the engineers from making real forward progress or sap their desire to build a great product.

How do we balance these opposing forces so that engineers stay engaged and productive building the right product for the customer using as lightweight a process as possible? Here are two books that offer complementary ideas for how to do it.

[Cover images: User Stories Applied: For Agile Software Development vs. Learning UML 2.0]

User Stories Applied: For Agile Software Development lays out a process for defining a product with plenty of customer involvement and enough documentation to ensure that the right thing is built, but no more. Learning UML 2.0 describes the Unified Modeling Language that can be used to design and document a complex software system (or any system, really) so that engineers and customers alike can visualize in more detail exactly what is being built. It may not seem like these two books are directly comparable since they are dealing with different aspects of the development process, but they have more in common than you would think. Let's look a little closer.

User Stories Applied: For Agile Software Development

A great deal has been written about Agile software development over the years. I've read a lot of bits and pieces, mostly on various blogs, but I had yet to read a detailed description of any part of it until picking up this book by Mike Cohn. I was pleased to find it a very easy and enjoyable read that reminded me of The Pragmatic Programmer. The book focuses on the aspect of Agile that defines the process for product definition and project management, and Mike does an excellent job of clearly explaining all of the ins and outs of user stories within the context of real projects.

What appeals to me most about user stories is that they make a concerted effort to address the realities of developers and customers and the project that sits between them. The fundamental reality is that no one knows what the right system that best achieves the user's goals will be until it is built. That means that in a working development process, the developers will need to change the design of the system as it is being built, and the customers will change their minds about what they want built. The fact that the final product is a moving target is not the fault of either the developers or the customers. It is the result of our inability to predict the future, and the better a development process accepts that reality, the better it will be able to manage the inevitable changes that will occur.

Requirements specifications try to address the product definition problem by documenting every minute detail of the system to be built so that both the developers and customers can sign off on the resulting contract with their expectations in full agreement. Then the developers only need to go off and build the system to the specifications and deliver it to the customer. But the results are hardly ever satisfactory because a requirements spec is not an adequate substitute for the working system that it is trying to represent.

Part of the problem is that specs can so easily be misinterpreted by anyone and everyone. Each stakeholder can interpret nearly any part of a spec to mean what they want it to mean because human language is ambiguous no matter how hard we try to make it otherwise. Disagreements can easily erupt over things that everyone thought were clear in their own minds when they were written down. I've experienced these arguments first-hand, and it always amazed me that there seemed to be no way to adequately phrase the offending requirement so that everyone understood it the same way.

Another problem with specs is that it's nearly impossible to get an overall understanding of a project from a 300+ page spec. Even a 100-page spec can be mind-numbingly boring to read and understand. Trying to process all of the details to form a complete mental image of the system is a nightmare. Beyond that, most people loathe writing specs in the first place, so if no one volunteers, some hapless developer will probably get stuck writing most of the spec while everyone else signs off on it without reading it. Then it will sit on a shelf or a file server until a disagreement needs to be resolved, at which point it is brought out and the above-mentioned arguments ensue because no one really understood the spec in the first place.

User stories attempt to solve these problems by focusing on the user's goals and encouraging communication between the developers and the customers. Product features are written down on note cards instead of in requirements documents, and they are described only briefly so that it is obvious that they encompass a discrete feature without being the final, detailed answer to the problem. The stories are there to encourage discussion instead of record keeping, and those discussions should happen as they are necessary, not all upfront.

Then the development process consists of a series of two- to four-week iterations, each implementing a subset of the stories. The team of developers and customers decides which stories will be done in each iteration, and during the planning of each iteration stories can be refined, split, added, and removed. As stories are further defined and implemented, details can be written on the back of the story card as tests that should be written, run, and passing before the story is considered complete.

The development process includes ways to estimate story schedules, plan software releases, and measure and monitor the project's progress. A concerted effort is made to defer implementation details and infrastructure building until absolutely necessary because features may change or get dropped, and then that work would be wasted. Everything is done in as lightweight and flexible a way as possible while giving the team the responsibility to build a product that, first and foremost, solves the user's problems.

This is only a very brief explanation, and there is much more covered in this completely accessible book. All in all, the development process resonated with me a great deal, and I would highly recommend User Stories Applied to anyone interested in getting the requirements monkey off their back.

Learning UML 2.0

Whereas User Stories Applied explained a development process, Learning UML 2.0 describes a modeling language. One thing the user story process leaves out is documentation of the system being built, which is something a requirements specification does provide, however suboptimally. One way to fill that gap, at least partially, is with UML diagrams that show how the system is connected together at various levels of abstraction.

Russ Miles gives a fine treatment of UML in this book, explaining clearly and concisely all of the intricacies of UML diagrams with a running blog content management system example. He did seem to lose the thread once when comparing communication and sequence diagrams with a goofy boxing match analogy, but otherwise he stuck to the task at hand and the book was a quick and easy read.

As for UML itself, it can model different aspects of a complex system using a wide array of diagrams: use cases, activity, class, object, sequence, communication, timing, composite structure, component, package, state machine, and deployment diagrams. These diagrams can describe a system in multiple levels of detail starting with throwaway sketches that only convey key points, moving into blueprints that form a more detailed specification of the system, and going all the way to a programming language that can completely model a system and generate code for multiple deployment environments.

I got the impression from the book that UML as a programming language (or executable model) has not really been achieved, yet. I would have to say that I can't see how an executable UML model would hold much of an advantage over a high-level dynamic language as a modeling language. Sure, it would be represented as a set of diagrams that would theoretically be easier to understand by less technical people, but in reality such a model would be so incredibly complex and unwieldy that it would be more trouble than it was worth. Not to mention the fact that drawing the model as a set of diagrams in that amount of detail would make the model terribly inflexible. Any changes to the system would be quite difficult to maintain in a massive drawing of interconnected symbols, even if there are ways to subdivide and group components. That's partly why digital hardware design moved from schematics to HDLs (hardware description languages) decades ago.

I've done a fair amount of modeling of ASICs (application-specific integrated circuits) using C++ and MATLAB as modeling languages because it was much faster to develop a working system that could be shown to the customer using these languages than using the normal ASIC design tools. And the resulting model was much more flexible than the final implementation that was done in schematics and Verilog, so when the customer requested changes, it was fairly straightforward to experiment with the model before committing to a particular design.

Even though the software models of these ASICs had a lot of value in rapid virtual prototyping and enabling better communication with the customer, we were always looking for more ways to use the models throughout the projects. One of the problems with models is that if there is no reason to keep them up-to-date with current design changes, they will quickly become out of sync with the real product. To prevent that from happening, and because it was easy to show that the output from the models was correct, we designed the models to generate test cases that could be used as input stimulus and output checks in simulations of the ASIC products.
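To make the idea concrete, here is a minimal sketch of that pattern: a software golden model computes the expected output for a given stimulus, and the stimulus/expected pairs are written out for a hardware simulation to replay and check. Everything here is illustrative — the 4-tap moving-average filter, the function names, and the vector file format are my own stand-ins, not the actual designs or tooling from these projects.

```python
# Hypothetical golden model: a 4-tap moving-average filter standing in
# for a real signal-processing block in the ASIC.

def golden_model(samples, taps=4):
    """Reference behavior the hardware must match."""
    out = []
    window = [0] * taps
    for s in samples:
        window = window[1:] + [s]      # shift in the new sample
        out.append(sum(window) // taps)  # integer math, as the hardware would do
    return out

def write_vectors(samples, path="vectors.txt"):
    """Emit stimulus/expected pairs for an HDL testbench to replay.

    Each line holds one input sample and the output the chip simulation
    must produce for it; a mismatch flags a design bug (or a model bug).
    """
    expected = golden_model(samples)
    with open(path, "w") as f:
        for stim, exp in zip(samples, expected):
            f.write(f"{stim} {exp}\n")

# Stimulus could come straight from the customer's provided input data.
write_vectors([0, 4, 8, 12, 16, 20, 24, 28])
```

Because the same model that the customer signed off on produces the expected outputs, the simulation checks close the loop between the agreed-upon behavior and the physical design.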

This feature proved invaluable for verifying our designs because we could sit down with the customer and our software model and make sure that the system was generating the desired output with their provided input stimulus. The model became the golden reference for the design. Then we could close the loop by verifying the physical design simulations against the model, so that everyone was confident that when the first silicon came back, the chips would behave correctly. We enjoyed a number of successes using this method.

When I try to see an equivalent method of using UML to model software systems, I end up drawing a blank. The reason C++ and MATLAB worked for modeling ASICs was that they were a sufficiently detailed abstraction that ran orders of magnitude faster than the simulations, but were easily modified to enable experimentation. UML doesn't have those same advantages over high-level languages. A sufficiently detailed UML abstraction would most likely run slower and be harder to develop in enough detail while being less flexible than a model written in Ruby or Python.

The bottom line is, why spend the time making UML diagrams a programming language when there are better alternatives? That is not to say that UML is not useful. I thought the class, object, sequence, and state machine diagrams were incredibly useful for designing the architecture of software systems. In fact, I've been using those types of diagrams, albeit more informally, throughout my career, and I'm sure most software engineers do as well. It's great to have a more formal treatment of these important design tools.

It's often difficult to wrap your head around a system, whether it's one you're looking at in code that you need to maintain, or one that you're actively developing. Drawing these types of diagrams and seeing a visual representation of the system can give you a much clearer understanding to work from. Those same diagrams can also be useful as documentation for the customer or for other developers who need to maintain the software in the future. But it's best to stop at the right level of detail, before the UML diagrams get so complex that they cease to be clearly recognizable as a visual representation of the system.

UML Augmented Stories

User stories are a great improvement over the dreary busy-work of requirements specifications. They encourage better communication with the customer, engage engineers more fully in the development process, and, by getting out of the way, help everyone be more productive in making a quality product that meets users' needs. They do leave a hole in the area of documentation, but UML can at least partially fill that gap while also providing value to the development process through clear visual models of the software system. If I had to choose between them, I think User Stories Applied will have a bigger impact on a team's productivity and results, but both books are easy reads with plenty of useful material. I highly recommend them both.