Too Young To Understand

I hated hearing, "You're too young to understand," when I was a teenager. Now that I've had some time to learn some things and then learn from my misunderstandings about those things, I've started to think that about myself—that I was too young to understand. I also notice that sentiment more now when others say it about themselves, probably because of my own introspection.

I don't think we should stop there, lamenting our past ignorance or even being pleased that we have come so far in our understanding of things since our youth. We still have much to learn, and years into the future we could look back on today as our ignorant youth. The more you know, the more you know you don't know, as the saying goes.

When I was first learning to program, I learned Pascal and then C++. Then I had to pick up Java for a few college courses, and it wasn't that difficult. I was starting to think that I had pretty much mastered programming. Sure, the other material in Computer Science was challenging, but I found the actual programming rather easy and figured that was all there was to it. This programming thing wasn't nearly as hard as they made it out to be. Boy was I wrong.

The reason I thought programming was so easy was that I didn't really understand the power and utility of the more abstract programming language constructs. Because I didn't understand them, I didn't use them and get comfortable with them. Things like C++ templates and Java reflection actually went right over my head at the time, and I didn't even know it.

Looking back, I can see how little I understood about programming. I can clearly remember how I thought about learning new languages back then, and my thought process was pretty simple. All I needed to do was learn the new syntax and keywords, and then I could go off and write awesome programs in the new language. I ended up writing a lot of programs in various languages that looked a lot like C++ programs. I didn't know how to think in any language other than C++.

One language in particular was SKILL, a scripting language for a very expensive IC CAD tool. SKILL is actually based on a LISP dialect. I'm not sure which one because it's been so long since I used it, but now I at least realize that it was a form of LISP. At the time I was using SKILL I had no idea, and I was totally clueless about LISP's importance and potential. I did write some cool layout generation macros and other utilities in SKILL, but I wonder what I could have done with it if I had understood what it really was.

Another example of being too young to understand what I was learning was the compilers course I took in college. We covered the main stages of a compiler and wrote our own compiler for a toy language that was a reduced form of Java. I could do all of the work for the course fairly easily, but I wasn't even aware of the fact that I didn't understand what I was doing. I didn't have any context for what I was learning. After learning many more languages and studying programming more extensively in recent years, I have a much deeper appreciation for language design, and I would love to learn more. I'll have to pursue that with all of the free time I have. Heh.

I can think up dozens more examples like these. There are all kinds of things that I thought I had learned well years ago, but now I know that I had only scratched the surface. Some things I've learned much more in depth, some things I want to learn more, and some things I've had to put aside. But one thing is true of all of them. There is no limit to the extent of knowledge you can attain in any area you choose to pursue. In five, ten, twenty years I will look back on what I think I know now and chuckle at my own ignorance. At least I hope I will.

It's fine to look back and think I was too young to understand, but the thought shouldn't end there. What that thought really means is that I was too young to understand things the way I understand them now. That will always be true, no matter what age I am. What is even more interesting is what I will learn in the future that will make my current ideas seem naive. Study, experience, and time will reveal what those ideas are, as long as I make the choice of what to pursue and put in the effort. I no longer hate the thought that there are things I don't know. I embrace it as an opportunity, and I look forward to the time when I'll think that my current self was too young to understand.

What's Past is Prologue

If you want to build something new, it's best to start with history. I am a firm believer in that. The best systems we have at our disposal today, software or otherwise, have evolved over years, decades, and centuries from simple beginnings. Rarely is a new system designed and built from scratch in a short time, and even then the successful ones borrow liberally from the successful systems that came before them.

Of Software Systems and Planets


A few months ago I read an article from David A. Dalrymple, whose opinion seemed to contradict these ideas when it comes to software systems. I found it rather curious, and it's been simmering in the back of my mind since then. I don't intend for this article to be directed at David or to discount the notion that we should strive to improve the systems we work with. I think many people share his viewpoint, and I offer this as another perspective on how computing history has developed, using some quotations from his article to guide the debate. The crux of his article is laid out relatively early (emphasis is David's):
This is the context in which the programming language (PL) and the operating system (OS) were invented. The year was 1955. Almost everything since then has been window dressing (so to speak). In this essay, I’m going to tell you my perspective on the PL and the OS, and the six other things since then which I consider significant improvements, which have made it into software practice, and which are neither algorithms nor data structures (but rather system concepts). Despite those and other incremental changes, to this day, we work exclusively within software environments which can definitely be considered programming languages and operating systems, in exactly the same sense as those phrases were used almost 60 years ago. My position is:
  • Frankly, this is backward, and we ought to admit it.
  • Most of this stuff was invented by people who had a lot less knowledge and experience with computing than we have accumulated today. All of it was invented by people: mortal, fallible humans like you and me who were just trying to make something work. With a solid historical perspective we can dare to do better. 
I have a problem with this line of thought. People coming from this position think that most of what we're using now is suboptimal because the people who created it didn't know what they were doing. The logical conclusion would be that, now that we have decades of experience with computing, we should throw out the old stuff and create whole new systems that do things better. The problem is that there is no guarantee that such an undertaking would result in anything substantially better than what we have now. Robust systems are not developed this way. They grow and evolve over time, as John Gall said:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
No other field of study or technology does this—throws out a large portion of what has been developed to do it again from scratch—because it does not work. Complex systems evolve from simple systems. Advanced ideas grow out of the extension and combination of established ideas. To get a different perspective on the development of ideas over time, let's take a moment and think about the planets.

In ancient times Ptolemy published the Almagest, laying out the motion of the planets in a geocentric universe where the sun, planets, and stars moved around the Earth on the surfaces of concentric spheres. This model held for many hundreds of years until 1543, when Copernicus published a heliocentric model of the universe that improved and simplified the predictions of planetary motion. It put the sun at the center of the universe and moved Earth out to the third orbiting planet.

Most people didn't give the heliocentric model much thought until Galileo built his own telescope and started making observations of the phases of Venus in 1610. The phases of Venus strongly disproved a pure geocentric model of the planets and paved the way, with much disagreement from the Church, for the heliocentric model. Kepler further developed the model with his three laws of planetary motion, showing that the planets actually trace out ellipses instead of circles, with defined properties of their motion around their orbits.

Isaac Newton unified the motion of the planets with the motion of objects on Earth through his law of universal gravitation and his laws of motion. His laws and Kepler's laws of planetary motion led to the discovery of Neptune when the orbit of Uranus was found to be irregular and not fully described by them. Urbain Le Verrier calculated where the new planet must be based on Uranus' orbit, and Johann Gottfried Galle found it right where Le Verrier predicted. This was a huge triumph for Kepler's and Newton's laws.

Even this model was not good enough to explain a peculiarity in the orbit of Mercury: the slow precession of its perihelion beyond what Newton's laws predict. Mercury is close enough to the Sun to experience relativistic effects of gravitation that we can observe with high-precision measurements. It wasn't until Einstein came up with his general theory of relativity that these deviations could be explained.

Over the course of centuries a great number of people developed and refined our model of the motion of the planets. At each step incremental changes were made that built on the work that came before. Indeed, it was Isaac Newton who said, "If I have seen further it is by standing on the shoulders of giants." If Newton were born today and grew up to be the same wickedly intelligent person that he was in his time, I have no doubt that he would completely grasp Einstein's work and the work of those who came after him, and he would be extending modern physics into new and unfathomable areas. The same is happening now with programming and computer science, but we are extending the solid foundation we have, not tearing it up and trying to start from scratch.

The people who invented our computing paradigms were geniuses, and to pass off their accomplishments as ignorant attempts at developing systems for the future is naive. People like Alan Turing and John von Neumann built an incredible foundation for the software systems we have today, and they also stood on the shoulders of giants who came before them. The systems we are using today have proven to be quite flexible and extensible. The fact that they're still around, yet much improved, should be evidence enough of that, and we will continue to evolve these systems in the future.

Text Vs. Graphics


Here is another peculiar stance in the article, this time on textual vs. graphical languages:
It is bizarre that we’re still expressing programs entirely with text 59 years later when the first interactive graphical display appeared 4 years later.
Language is the most natural, and yet most frustrating, way of expressing ideas that we have at our disposal. Language has been developing for tens of thousands of years. To think that we could suddenly replace it with something else in less than 60 years is bizarre. Writers have been struggling since words were invented to express their ideas through them. The best authors show that the written word is more than adequate. William Shakespeare, Ernest Hemingway, Stephen King, and countless other great authors have shown what language is capable of.

A picture may be worth a thousand words, but they are different words for every person who looks at it. Visual art by its very nature is subjective, so how are we supposed to develop a precise visual representation of a program without resorting to language semantics? Language may also be imprecise, but it can at least be made rigorous. Physics and mathematics have shown that to be true.

Using any of the few graphical languages out there for programs of significant size quickly shows how cumbersome they get. LabVIEW gets really messy really fast, and programs become incredibly rigid and hard to change. Don't even think about doing delegation or lambdas in it. MATLAB's Simulink is fairly similar and takes large numbers of symbols to represent relatively simple concepts. Digital hardware designers moved away from schematics in favor of textual HDLs as quickly as they could because of the overwhelming advantages of text for representing dense, complex hierarchical structures.

Most programming languages are text-based because text is superior to graphics for representing complex procedural and computational ideas.

Operating Systems


The author's views on operating systems were also confusing:
The bizzareness about operating systems is that we still accept unquestioningly that it’s a good idea to run multiple programs on a single computer with the conceit that they’re totally independent. Well-specified interfaces are great semantically for maintainability. But when it comes to what the machine is actually doing, why not just run one ordinary program and teach it new functions over time? Why persist for 50 years the fiction that every distinct function performed by a computer executes independently in its own little barren environment?
Is this not exactly what an OS already accomplishes? It's not the only feature of an OS, but an OS can certainly be thought of as "one ordinary program" that you teach "new functions over time." How have we not already achieved this?

Isolating programs running on top of the OS is a good thing. Dealing with programs that are aware of other programs in the system becomes mind-bending rather quickly. The nearly infinite additional complexity of this type of environment in the general case is something we rationally decided to avoid. We used to have systems that allowed programs to walk all over each other, not necessarily intentionally. One such system was called DOS. Do we really want to go back to something like that? I don't.

We now have preemptive multitasking and multi-core processors that run dozens of concurrent processes. In this kind of environment, virtualization is a key innovation that simplifies the programmer's task. Inherently concurrent languages like Erlang also offer a view into the future of programming, but one that builds on the past instead of discarding it.

Software Vs. Hardware


I don't have a problem with the advances that were chosen in the article. I think they were all extremely important and worthy of inclusion on the list. I don't think they were the only ones, though. David Wheeler has compiled an incredibly thorough and detailed list of the most important software innovations, and there are quite a few more than eight. Some happened well before 1955, and plenty happened after 1970. Discounting all other advances in programming as somehow being derivatives of the eight items picked for this article, or claiming they're substantially less important, is too reductionist for me. The author starts to wrap up with this:
I find that all the significant concepts in software systems were invented/discovered in the 15 years between 1955 and 1970. What have we been doing since then? Mostly making things faster, cheaper, more memory-consuming, smaller, cheaper, dramatically less efficient, more secure, and worryingly glitchy. And we’ve been rehashing the same ideas over and over again.
And then in the next paragraph he claims that "Hardware has made so much progress." What kind of progress? The same kind of progress that he denigrates software systems for making. The primary advance in hardware over the past six decades can be summed up in two words: Moore's Law. And that law is getting pretty close to hitting a wall, either physical or economic.

The funny thing about this comparison is that the same argument could be made about hardware that he's making about software. Most high-performance microprocessor features in designs today were invented between 1955 and 1970. Branch prediction was considered in the design of the IBM Stretch in the late 1950s. Caches were first proposed in 1962. Tomasulo developed his famous algorithm for out-of-order execution in 1967. And the CDC 6600, introduced in 1964, pioneered the parallel functional units and scoreboarding that would grow into superscalar execution.

In fact, the CDC 6600 was a goldmine of hardware advances. The designers were some of the first to attempt longer pipelined processors. The 6600 was the first computer to have a load-store architecture. It was also arguably the first RISC processor. Essentially all modern processors are derivatives of the CDC 6600, and all we've been doing since then is making hardware faster, cheaper, wider, and with bigger caches.

I don't really believe that, though. Hardware has come a tremendous way since 1970. It has been an iterative process with each new innovation building on previous successes, and software is the same way. Almost all modern languages may be able to trace their roots to FORTRAN, but Ruby, Python, C#, Erlang, etc. are most certainly not FORTRAN. That would be equivalent to saying that a Porsche Carrera GT is basically a Ford Model T because they both have four wheels, an engine, and you can get the Porsche in black. They are not the same car. We've made a few advances in automobiles since then.

Days of Future Past


This brings me to one of the closing statements of the article, and one of its main underlying ideas:
Reject the notion that one program talking to another should have to invoke some “input/output” API. You’re the human, and you own this machine. You get to say who talks to what when, why, and how if you please. All this software stuff we’re expected to deal with – files, sockets, function calls – was just invented by other mortal people, like you and I, without using any tools we don’t have the equivalent of fifty thousand of.
As engineers, I think we fall for this type of thinking all too often, but it's exactly backwards. We don't design better systems because we know so much more than those that came before us. We design better systems because we can stand on the shoulders of giants. The systems that have survived the test of time did so because they were designed by wickedly smart people that made the designs flexible and adaptive enough to evolve into what they are today.

We have plenty of equally smart people alive today, but why waste their time reinventing the wheel? The economist Mark Thoma once quipped, "I've learned that new economic thinking means reading old books." The same applies to us as software engineers. We can design better systems because those older systems exist and we can build on them. We can design better systems because of the knowledge we gain from the experience of those who have gone before us. We can design better systems when we take the best ideas and tools from our history and combine them in new and interesting ways.

We should stop worrying about whether or not we're designing the next big innovation or inventing the next paradigm shift in software systems. The people who achieved these things before us did so because they were solving real-world problems. They didn't know at the time how influential their discoveries would be, and neither will we know about ours. The best we can do is solve the problems at hand the best way we can. When we look back in 50 years, we'll be amazed at what we came up with.

Perspectives on the Internet

If I look back over the last couple decades, it's quite clear that I have progressively used the internet to accomplish more and more of the things I do in life. I'm dependent on it to do my daily work, I do a significant amount of my shopping online, and I spend a fair amount of my leisure time reading, researching, and playing on the web.

It's also clear that the internet has many orders of magnitude more function and utility than the insignificant things that I do with it. Its inputs from other people are vast. Its capacity is enormous. And its uses, both in number and power, are incomprehensible. In some ways the internet can be thought of as an extension of yourself, and in other ways it is its own organism with its own emergent behavior, completely outside of anyone's control. This connection between each of us as individuals and the intricate global network tying us closer together is a fascinating thing to contemplate.

Enhancing the Individual


From the perspective of the individual, the internet provides a massive amount of additional memory storage. If we think of the human brain as a memory hierarchy like the one in a computer, the fastest and smallest memory we have is short-term memory. It's generally accepted to hold only about 7±2 items, and only for a matter of seconds. That would be analogous to a processor's register set that's used for immediate processing and transfer of information.

The next level up in the human brain is also the last level contained within the brain—long-term memory. It's still relatively fast compared to how fast we can think, but it can be somewhat unreliable. Memories generally need to be recalled periodically or stored with strong emotions to be remembered for long periods of time. We generally need to go through many repetitions to learn things so that they will be reliably stored in long-term memory, and memories can get swapped out without any knowledge that they've been lost. This memory structure is a lot like the multilevel cache system in a processor, with each level storing more information for longer periods of time, but it takes longer to recall the information at higher levels.

After cache, a computer has a main memory store that contains more permanent information than the cache. It takes much longer to load information from this memory into the processor, but because it is so much larger than the caches, it is much more likely that the necessary information is there. Main memory has no analogue in the human brain, but we could think of it as all of the information we keep on hand about our lives in physical form: pictures, videos, notes, and other kinds of records.

The next level of storage in a computer is the hard drive. Until recently, there was no human equivalent to this level of storage. This is where the internet comes in, but it's much, much bigger than a single hard drive. It's more like a huge rack of hard drives—petabytes of information compared to the gigabytes of information in our physical records or the megabytes of information we remember in our own memories. It's an incredibly massive amount of information, and it takes a much longer time to find what you're looking for compared to recalling something from your own memory.
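To make the analogy concrete, here's a rough sketch of the mapping in code. The capacities and access times are ballpark orders of magnitude I'm assuming for illustration, not measurements of any particular brain or computer:

```python
# A rough sketch of the storage hierarchy described above.
# Capacities and access times are illustrative orders of magnitude,
# not measurements of any particular system.
hierarchy = [
    # (computer level,       human analogue,            capacity,      access time)
    ("registers",            "short-term memory",       "~1 KB",       "~1 ns"),
    ("caches",               "long-term memory",        "~1-32 MB",    "~1-30 ns"),
    ("main memory",          "notes, photos, records",  "~8-64 GB",    "~100 ns"),
    ("disk / the internet",  "the searchable web",      "TBs to PBs+", "ms to seconds"),
]

for level, analogue, capacity, access in hierarchy:
    print(f"{level:<20} {analogue:<26} {capacity:<14} {access}")
```

The point of the sketch is the shape of the curve: each level trades slower access for vastly more capacity, and the internet sits at the far end of that trade-off.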

This vast amount of information is also largely unknown to you because it's not your own memories, so you need a good way to search through all of it to find what you need. Enter Google. Those of us who grew up before the internet have had to learn new strategies for searching this huge store of information, and while we're pretty good at it, the newer generations may be much more well equipped to deal with this new tool because they're growing up using it.

People are changing the way they remember information, shifting from remembering the details about something to remembering where and how to find it on the internet. Becoming more dependent on the internet could be seen as a disadvantage, but it also enables much wider access to much more information, if we can only find it, filter out misinformation, and interpret the right information correctly to put it to good use. The potential benefits of the internet as an extension of our own memories are truly awesome.

Evolving to the Internet


While the impact of the internet on the individual is impressive, that is far from the only way things are changing. The internet and mobile computing are the most recent big advances in human communication, and that impacts how we advance as a civilization. The ability to communicate is fundamental to our development and technological progress.

Communication is composed of two things—a medium for storage of information and a method of transmission of that information. Every advance in communication has improved both storage and transmission in some way. Storage is improved by increasing its capacity and making it faster to access. Transmission is improved by increasing its availability and reach, increasing its bandwidth, and lowering its latency.

Developing a spoken language was one of the first dramatic improvements we made in communication. With a spoken language it became much easier to transfer ideas from one person to another, and we could more readily learn things from each other, like where the good hunting spots were or which plants were good to eat and which ones were not. The storage medium was the brain and transmission happened through voice, so both the amount of information and the number of people who could hear it were limited. Knowledge was passed from one tribe to the next and one generation to the next through stories and songs that could be easily remembered and recited.

Writing down our thoughts was a huge improvement over the oral tradition. Once we developed writing and drawing, we could put our thoughts down more permanently and the amount of information we could retain as a society went up dramatically. We could also make copies of that information and distribute it so that ideas had a much wider reach than they did before.

Written copies took a long time to produce, especially of longer works. An entire workforce of scholars and monks existed for the primary purpose of copying the Bible. The invention of the printing press completely changed that dynamic by making duplication cheap, fast, and consistent. Suddenly ideas could be distributed to broad segments of the population, and people could read them firsthand. All they had to do was learn how to read, a skill that most people didn't need previously. The printing press drastically increased people's access to information and shortened the connection between authors and their readership.

The telegraph, telephone, radio, and television all changed the medium of transmission from paper to electrical wires, increasing both the speed and reach of communication. Now ideas could be transmitted around the world nearly instantaneously. The telephone kept the connections one-to-one, but radio and television expanded that to one-to-many communication. The storage medium also improved with tape and film able to hold orders of magnitude more information than paper.

That brings us to the internet, which improved every aspect of communication. Storage capacities exploded with all of the hard drives in racks upon racks of servers, all connected together. Access times plummeted with all of the hardware and software developed to enable automated searching and retrieval of information on high-speed data lines. Transmission expanded to many-to-many connections. Anyone with access to the internet could both produce and consume information with infinite ease. Bandwidth continues to expand rapidly, and now transmission of text, audio, and video is widespread.

Mobile devices, wireless networks, and cell networks are improving communication even further by allowing you to access the internet wherever you go, as long as you can get a signal, of course. We are getting closer and closer to always being connected together, whether that's good or bad for us. We can choose to switch off, but the overall trend is that the human population is getting more connected for longer periods of time with higher bandwidth and lower latency. That isn't a new development with the internet, either. It has been happening incrementally with every advancement in communication.

A Global Network of People and Computers



It's interesting to step back and think about what the internet could be in an even broader sense. To do that, let's think first about atoms. Atoms communicate. They have storage that holds information such as their quantum state, their mass, and their velocity. They transmit information to other atoms through the electromagnetic, strong, weak, and gravitational forces. Everything that we experience in this universe is built upon this communication structure.

Now think about your own brain. It's made up of neurons that communicate. They store information chemically in their cell structure and structurally through connections to other neurons. They transmit information through electrical charge internally and through chemical processes from neuron to neuron. All of the thoughts and memories you have—your very consciousness—comes from the communication between neurons in your brain.

What does that mean for the internet? It's made up of massive amounts of storage, both in the form of hard drives and human brains, and trillions of incredibly fast connections between servers, devices, and human interfaces. Maybe the internet is already a form of global intelligence, but not the way it's normally portrayed in science fiction as a separate sentient artificial intelligence, whether for good or evil. Maybe the immense store of information and the dense web of connections that make up the internet, humans included, already form a higher-order intelligence beyond what the collection of individual people could manage alone.

Think about the emergent behaviors that are coming out of the internet. Social and political movements are happening almost daily now that are orchestrated through the internet. The global economy is utterly dependent on the internet to function and continue growing. Our culture is more strongly shaped by things that happen over the internet every year. Clay Shirky relates all kinds of these emergent behaviors in his books, and I'm sure we'll continue to see even more powerful examples in the future.

While the internet is made up of individual sentient beings and the artificial hardware and software they created, no individual has significant control over the internet or what happens through it. Sure, there are leaders that drive movements or social change, but even they don't have control over what will happen as a result of these movements.

The internet has other characteristics of living organisms in addition to this lack of central control. It's an extremely redundant system, and getting more so all the time. Taking out any individual element has essentially no effect on the functions of the system. Taking out major communication trunks or large server farms would cause problems, but individual components are expendable. It also has defense mechanisms to combat invasive attacks from viruses and worms. We think of these defenses as coming from the programmers fighting the viruses (and their creators), but the defenses are being incorporated into the system over time.

The internet heals itself when it suffers damage, both by repairing the damaged areas and routing around them through its redundant channels. Again, people do a lot of the repair work, but in this way of thinking, people are an integral part of the internet organism. We are part of what makes the entire system work, similar to how the interconnections of all of those individual neurons make your brain work.

As more people and more hardware are added, the system continues to grow and change, with new capabilities developing as it reaches new sizes. What will it become in the future? Will it develop a higher-order intelligence that supersedes our own? Maybe that's happened already and we don't recognize it because it's so diffuse and distributed. We experience our own local neighborhood of the internet, but no one can fully comprehend its global behaviors and impacts. We are building something unlike anything we have done before. What incredible developments does the future hold?

My First 220V Public Charging Experience

Nissan Leaf charging port

I've always charged my Nissan Leaf using the 110V trickle charger that comes with the car. Recently, through my own forgetfulness, I needed to use a 220V public charging station, and my impression of the experience is mixed. I didn't have any problems with finding and using a charging station. That was easy. But I was surprised by what it did to my range.

Before getting too far into it, let's back up to the night before. I was coming home from work, and pulled into the driveway with 20% charge left. I remember thinking that I had to plug the car in because it was unlikely that I would make it to work and back the next day on that little charge. Then I remembered that my wife and kids were away at violin camp (those lucky ducks), and I was the only one left to bring the mail in so I better do that. That first thought about charging flitted right out of my brain. I parked the car, walked down to get the mail, and walked right back up into the house, leaving the Leaf unplugged in the garage.

I kid you not, my first thought the next morning when I woke up was OH CRAP! I forgot to plug my car in! Why is it that you vividly remember important things when it's far too late to do anything about them? Anyway, I rushed out to the garage in my skivvies to check, and sure enough, the car was distinctly missing its umbilical cord.

As I was getting ready for the day, I ran over options in my head. I could attempt to make it to work and back on the charge left, but it would be tight. The Leaf tends to lose charge more slowly at the end of the range, and I could drive more conservatively and probably be fine. But I would be much more comfortable if I could charge up at work. Luckily, I had gotten a couple of ChargePoint cards with my new Leaf. I had never made the effort to sign up with ChargePoint when I had my previous Leaf, but the salesman tipped me off that MG&E was doing a study of EV owners so I could charge for free if I signed up for their program.

I checked on the ChargePoint.com site, and there were a couple charging stations in a parking garage within easy walking distance of the office. It was time to give public charging a try. It's not that I was against it; I just never had the need to use it before and charging at home is so much more convenient. After checking the website one more time to make sure the charging stations were available, I was on my way.

Finding the stations was easy, but the first one I found was located in a handicapped parking zone. I'm not sure EV drivers and handicapped drivers are that well correlated right now, so I'm a bit confused about the utility of that setup. Looking a little further up the ramp, I found another station. There was a Ford Fusion PHEV plugged into the 110V trickle charger, but the space next to it was free. I pulled in, plugged in the 220V cord, and swiped my card. The car started charging without a hitch. Cool beans, I thought. I'll come back during lunch and see how it's doing.

When I came back, the car had finished charging to 80%. The meter showed that it had charged for exactly 4 hours. With the 3.3 kW charger, that would have been 13.2 kWh of charge, which is a bit low for charging from 12% to 80% based on my charging log. Normally I get about 4.2% per kWh of charging, which means it should have taken 16.2 kWh to charge that much. Still, I hadn't expected the car to be done charging when I went to check on it, and I didn't think much of the discrepancy. I was quite pleased as I drove over to my normal parking spot by the office and finished out the afternoon at work.
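For the curious, here's the back-of-the-envelope math in a few lines of Python, using the figures quoted above:

```python
# Quick sanity check on the charging session described above.
charger_power_kw = 3.3            # the Leaf's onboard charger rating
charge_time_hours = 4.0           # time shown on the station's meter
energy_delivered = charger_power_kw * charge_time_hours    # 13.2 kWh

start_pct, end_pct = 12, 80       # state of charge before and after
pct_per_kwh = 4.2                 # typical gain per kWh from my charging log
energy_expected = (end_pct - start_pct) / pct_per_kwh      # about 16.2 kWh

print(f"delivered: {energy_delivered:.1f} kWh, expected: {energy_expected:.1f} kWh")
```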

On my drive home I noticed the charge level dropping faster than normal. After only three miles it had dropped 6%. That was a little disconcerting. By the time I had run some errands and returned home, it had dropped 23% in 15 miles. Somewhere in the neighborhood of 15-18% would have been much more typical for that distance. Indeed, I had driven the same route under the same conditions a couple days earlier and only dropped 16% charge on that trip. What was going on?

I decided not to charge that night and see what happened on my drive the next day. I still had 57% charge remaining, so I wasn't too worried that I would get stranded. As it turns out, the battery behaved pretty normally from then on, and I drove 40 miles on 38% of charge before charging up again with my trickle charger. I drove the same 15-mile route again at 80% charge, and this time dropped 20%—not great, but better. By the next charge things were totally back to normal.
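If you'd rather see that comparison as numbers, here's the charge used per mile for each of those trips, computed from the figures above:

```python
# Charge consumed per mile for the trips described above
# (percentages and distances are the ones quoted in the text).
trips = [
    ("same route, a couple days earlier",        16, 15),   # % used, miles
    ("drive home right after the 220V charge",   23, 15),
    ("next day, on the remaining charge",        38, 40),
    ("same route again, after trickle charging", 20, 15),
]

for label, pct_used, miles in trips:
    print(f"{label:<44} {pct_used / miles:.2f} %/mile")
```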

So what the heck happened to my battery in the 80% to 55% range of charge with the 220V Level 2 charger? I'd heard other Leaf owners claim that they lose charge faster at the top end of the range, and as they reached 50% and below, they could go more miles on the same decrease in charge. I always wondered why I didn't see something similar with my Leaf. My charge level has always decreased very linearly with miles until the very end, even the few times that I charged to 100%.

Here's what I think happens with the different chargers and the battery. You know how when you pour a beer from a tap with perfect pressure, you can easily fill the glass all the way up, getting beer within an eighth of an inch of the rim with just a small amount of head? It's beautiful. The charge from a trickle charger is like that. The charge flows into the battery at a slow enough rate that the lithium ions can be efficiently packed within the chemical structure of the electrodes, resulting in a nice, strongly charged battery over its full range.

Now think about what happens to a beer tap that has too much pressure in the lines. The beer pours too fast and gets churned up in the tap and the glass, resulting in lots of foamy beer with more empty space and less tasty beverage. The L2 charger is more like this because it's dumping charge into the battery much faster. The ions get churned up more, the battery heats up more, and the resulting charge is not as strong as with the trickle charger. You end up with a lot of head in your battery.

Of course, this is not really what's happening in the battery. The electrochemical process is a bit more complicated than that. It's an analogy, but a useful one. The charge at the top end of the range is definitely not as strong, or the battery is not as efficient in that range from an L2 charge. However you want to think about it, it's pretty clear that for the same energy usage, initially the charge level goes down faster when the battery is charged at 220V.

Having the public charging station available was great in this situation, but I wouldn't rely on L2 charging stations for daily charging needs. If you want to get the most out of your battery, you should be charging with the trickle charger whenever you can. It's better for your battery's health, and you'll go farther on a charge. I know I'll be sticking with the trickle charger for my Leaf. Happy charging!