
Are Computers Still a Bicycle for the Mind?

Steve Jobs had an enormous appreciation for the computer, believing it was the greatest human invention, and he commonly likened it to a bicycle for our minds. Here he is in one such explanation of this analogy:

Video: Steve Jobs explaining the bicycle for the mind analogy
He refined his delivery over the years, but the underlying analogy was always the same. The bicycle dramatically increases the efficiency of human locomotion, and likewise the computer dramatically increases the efficiency of human thought. That is still the case when computers, the Internet, and increasingly Artificial Intelligence and Machine Learning are used as tools to leverage our innate abilities to solve huge, complex problems. But they can also become other things for the mind that are not so useful. More and more, as computers proliferate, shrink in size, and become more convenient and ubiquitous, they stop being treated as a tool and start being treated as a toy, or simply as a distraction. Maybe computers are becoming less like a bicycle for the mind and more like something else.

Tech Book Face Off: Physics of the Impossible Vs. The Physics of Star Trek

It's been a while since I've done a Tech Book Face Off. The idea here is to review a couple of books together and compare and contrast their ways of explaining something I want to learn about. Sometimes both books are good, sometimes neither, but reading at least two books on a subject is a great way to get multiple perspectives on it. We learn different things from different teachers, so more than one point of view can be invaluable for learning about something deeply. In this Tech Book Face Off, I'm going more for future tech than modern tech—future tech in the nearly (or actually) science fiction sense. We have Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel by Michio Kaku and The Physics of Star Trek by Lawrence M. Krauss to look at. Both books are as much popular physics books as they are books on technology, but they each take different approaches to exploring the ideas about the technology of the far future. They were also both a blast to read, with fascinating discussions about what could be possible and what is, as far as we know, quite impossible.

Physics of the Impossible front cover VS. The Physics of Star Trek front cover

Tech Book Face Off: The Shallows Vs. Thinking, Fast and Slow

After my book review on Pragmatic Thinking and Learning and How the Brain Learns, I received a recommendation to read another book, The Shallows by Nicholas Carr. I decided to go with it (thanks +Helton Moraes), and I ended up pairing this book with another popular book on how the brain works and how we humans think, Thinking, Fast and Slow by Daniel Kahneman. Through these books I have a personal goal (it's good to have a goal when reading) of finding ways to regain control of my mind and hopefully improve my thought processes. Do these books help clear a path to that goal? Let's see.

The Shallows front cover VS. Thinking, Fast and Slow front cover

Less Friction Generates More Waste

Last week I explored how reducing friction can increase choice, and how too much choice is overwhelming, adding friction right back in and giving us the paradox of choice. Reducing friction can have another undesirable side effect: when things get easier, the amount of waste generated in a system increases.

This outcome may seem counterintuitive because in physical systems friction generates waste as heat, and reducing friction makes the system more efficient because less energy is lost in the form of heat. Less tangible systems like the economy or civilization as a whole don't work exactly like physical systems, though. When you look at how our civilization has progressed, we seem to generate more and more waste as we reduce the amount of friction in our lives. Will this trend continue, and how will we deal with it?

Finding Optimal Friction

In the last twenty years, the Internet and mobile devices have reduced or eliminated friction in numerous industries. Obviously the communication sector has been dramatically affected, including the telecom, music, television, and publishing industries. Now anyone with an Internet connection can put their stuff online for the world to see, and new players like Netflix have been able to challenge the big networks for our prime time hours.

The Internet has leveled the playing field across the communication industries, and it's now easier than ever for competing producers to get their products in front of customers. That's one way to look at the concept of friction in markets, from the perspective of producers. Another way to look at friction is from the consumer's perspective, and that friction has been dramatically reduced, as well. From having all of the world's information literally at your fingertips to being able to buy nearly anything at the click of a button and having it shipped to your door, the Internet has gone a long way in removing friction from consumers' lives.

However, not all industries or aspects of our lives have been affected equally by the Internet, and sectors like energy and transportation still have a lot of friction that could be reduced with the right advances in technology. Energy production and automobiles are ripe for a technological revolution.

Reducing friction isn't the be-all and end-all for making our lives easier, though. Reducing friction comes with its own cost, and I think we sometimes forget how high that cost can be. We can end up wasting more time and energy in a frictionless environment due to distraction and an overwhelming amount of choice. Finding the right balance means recognizing where too much friction is wasting our energy so that we can target those inefficiencies and realizing where too little friction is wasting our time so that we can avoid those time sinks. It's a constant struggle as we push forward with technology.

Leaf Mileage Update with 14 Months of Data

It's time for my twice-yearly update on my Nissan Leaf experience. I'm on my second Leaf, having owned a 2012 Leaf SL for two and a half years before trading it in for a 2013 Leaf S. I've written extensively about both Leafs already, so I won't repeat myself too much here. Check out the EV tag for all the gory details. Suffice it to say, I love the Leaf, and after having driven an EV for three and a half years, I can't imagine going back to an ICE car. The Leaf is fun, torque-y, quiet, and oh-so comfortable to drive.

On Range and Battery Degradation


The one major issue with the Leaf is the capacity of the battery, coupled with how long it takes to charge it back up. It has a range of about 85-100 miles on a full charge in the summer, depending on driving conditions, so I'm limited to the city and the immediately surrounding area unless I do careful planning and have a lot of time. Those stars have not yet aligned, but I do enjoy zipping around Madison and coming back home to charge up in my garage. It's the essence of convenience. We have a Prius for the longer and far less frequent trips we need to take beyond a 40 mile radius of our house.

Because EVs are still a new and interesting technology, I keep records of my driving and charging so I can plot things like the change in range over temperature, battery efficiency, and estimated battery degradation over time. To read about the methodology I use, take a look at the two-year update of my 2012 Leaf or the first report of my 2013 Leaf. Basically, I track driving temperature, battery state-of-charge (SOC), mileage, and kWh consumed at the wall outlet. I always trickle charge off of a 110V outlet through a P4460 Kill-A-Watt power meter so I know exactly how much electricity I've used to charge the car.

Since the main question to answer about the Leaf is what kind of range it gets, I use my data to estimate the range I could get on every charge. I scale up the miles I drove to what they would be if I charged the battery to 100% and drove the car until the battery died. This assumes the SOC is linear over the entire range, even though that doesn't seem to be exactly true. In my experience a 1% change in SOC will take you farther when the battery is mostly discharged than when it is mostly charged, but I don't have a good way to account for this, so I assume it's linear. Then I plot these estimated ranges against the average temperature for each discharge cycle, and I get the following plot for 14 months' worth of data:

Scatter plot of Leaf estimated range vs. average temperature
The range clearly has a significant dependence on temperature, dropping as low as 42 miles at sub-zero temperatures and reaching as high as 110 miles in perfect summer weather. I very rarely use the air conditioner or heater, so these ranges would be reduced if climate control were used on especially hot or cold days. In fact, the outlier at 86°F and 78 miles of range was a day when I drove the family 53 miles with the air conditioner running to keep them comfortable. It was also a trip that was about half freeway driving at 65 mph, which further reduced the range. (For a great set of charts on the Leaf's range dependence on driving speed, check out the range charts at MyNissanLeaf.com.)

I split the data between the first 8 months and the last 6 months so we can see how the range has changed over time. Trend lines are shown for both sets of data, with the few outliers (one in 2014H2 and two in 2015H1) ignored when calculating them. The two lines are practically indistinguishable at the warm end of the temperature range, even though those points are furthest apart in time, coming from the summer of 2014 and the start of summer in 2015. At the cold end, the 2014H2 range is actually a bit lower than the 2015H1 range even though those points happened closer together in time, likely because it was slightly colder at the end of 2014 than at the beginning of 2015. Overall, it appears that the battery has degraded a negligible amount in the past 14 months.

I do what I can to keep my battery as healthy as possible, since a healthy battery will have a longer range over a longer period of time. To take care of the battery, I generally follow these guidelines:
  • Charge to 80%.
  • Do not charge if SOC is at 80% or above.
  • Avoid hitting the low battery warning at 17%, the very low battery warning at 8%, and turtle mode at the very end of discharge.
  • No DC Quick Charging.
  • Reduce the number of charging cycles by not charging every night.
  • Store the battery at a lower SOC, if possible, by not charging every night and delaying a charge if I know I'm not driving the next day.
  • Limit the depth of discharge (DOD) by charging before a trip that would take the SOC below 20%.
Limiting the number of charging cycles and limiting the DOD are in direct conflict, so it's a balancing act. I'm not sure what the best trade-off is between charging cycles and DOD, but I tend to err on the side of shallower DOD. My average DOD over the last 14 months has been 51%, meaning I normally drive until around 30% SOC and then charge up to 80%. I've gone as deep as 71% and as shallow as 17%. The following histogram shows the distribution of DOD cycles that my Leaf has had:

Leaf DOD Distribution Chart
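If you're curious how those DOD numbers are computed, each cycle's DOD is just the difference between the SOC at the start and end of the discharge. Here's a minimal Ruby sketch with a few made-up cycles (chosen to hit the extremes I mentioned), not my actual spreadsheet:

# Depth of discharge (DOD) for each cycle is start SOC minus end SOC.
# These cycles are invented for illustration.
cycles = [
  { start_soc: 80, end_soc: 29 },  # a typical cycle: 51% DOD
  { start_soc: 80, end_soc: 9 },   # my deepest cycle: 71% DOD
  { start_soc: 97, end_soc: 80 },  # my shallowest cycle: 17% DOD
]

dods = cycles.map { |c| c[:start_soc] - c[:end_soc] }

# Bucket the DODs into 10%-wide bins to build the histogram above.
histogram = dods.group_by { |dod| (dod / 10) * 10 }.sort.to_h
histogram.each { |bin, values| puts "#{bin}-#{bin + 9}%: #{values.size}" }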

On Energy Efficiency


That leaves the Leaf's energy efficiency to look at. I measure the energy used at the wall outlet as well as keep a record of the on-board energy efficiency meter for each month of driving, so I can plot those over time. I can also calculate the charging efficiency from these two energy efficiency numbers, and all three series are plotted in the next chart:

Leaf Energy Efficiency bar graph

You can see that all three efficiencies got worse during the cold months of winter and have now recovered, with the Leaf's efficiency meter reporting well over 5 miles/kWh as the weather has warmed up. I even set a new monthly record of 5.5 miles/kWh this month as measured by the Leaf, or 4.5 miles/kWh from the Kill-A-Watt meter at the wall. Charging efficiency for trickle charging is also up over 80% with the warmer weather. I'm not sure what the oscillating behavior of the charging efficiency was about last year, but it seems to have gone away for now. It possibly has to do with the coarseness of the Leaf's efficiency values.

I'm getting fairly good efficiency numbers with the type of commute that I have through the city of Madison, and so far I've used 1,740 kWh of electricity to drive 6,644 miles. Since I pay $0.18 per kWh, that's $313.20 total, or $0.047 per mile that I pay to charge my car. That's the equivalent of paying $1.41 per gallon of gas for a 30mpg car or $0.94 per gallon for a 20mpg car to drive the same distance. That's pretty nice even with the higher than national average price I pay for electricity (to support wind power).
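For anyone who wants to check my math, here's the whole calculation as a few lines of Ruby, using the totals from my log:

# Cost per mile and equivalent gas prices from my charging log.
kwh_used = 1740      # kWh measured at the wall by the Kill-A-Watt
miles_driven = 6644
rate = 0.18          # dollars per kWh

total_cost = kwh_used * rate               # => 313.2
cost_per_mile = total_cost / miles_driven  # => ~0.047

# dollars/mile * miles/gallon = equivalent dollars/gallon
puts (cost_per_mile * 30).round(2)  # => 1.41 for a 30 mpg car
puts (cost_per_mile * 20).round(2)  # => 0.94 for a 20 mpg car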

Future EVs


The Leaf has been a great first EV experience for me, and I'm excited to see what the future holds for electric cars. So far the Nissan Leaf, Chevy Volt, and Tesla Model S have been the only practical EVs widely available, and they each serve different markets. The Leaf, being a pure EV with limited range, is a city commuter car. The Volt, with its gas generator, is the PHEV for people who need a full-range vehicle. The Model S is the EV for those lucky individuals who have $100k+ to burn on a car. Now the BMW i3 has entered the ring as well, and it's a combination of the other three cars: the electric range of the Leaf, the gas generator of the Volt, and some of the luxury of the Model S at a somewhat higher price than the Leaf or the Volt.

These EVs have made some significant advances over the past four years, and soon it looks like there will be some bigger leaps forward. Rumors are surfacing that Nissan will increase battery capacity in the Leaf 25% for the 2016 model year, and double it for the 2017 model year. Chevy is getting ready to release the all-electric Bolt with a 200 mile range, and they're increasing the battery capacity of the Volt as well. Tesla is getting close to releasing the Model X SUV, and the mass-market Model 3 with a 200-mile range and a $35k base price will follow, hopefully in 2018. The next couple years are going to be interesting for EVs with at least three affordable cars becoming available with a 200-mile driving range. Hopefully other manufacturers will get in the game, too, and we'll have even more options to choose from. That kind of range could be a game-changer for EVs. I can't wait.

A Microsecond in the Life of a Line of Code

We sit atop a large technology stack. Every layer in that stack is an abstraction of the layer below it that hides details and makes it easier to do more work with less effort. We have built up a great edifice of abstractions, and most people understand and work within a couple levels of this technology stack at most. No matter where you work in the stack, it's worthwhile to know something about what's going on below you.

Last week I talked about the importance of knowing the basics. Part of knowing the basics can be thought of as learning more about some of the layers of abstraction below your normal layer of expertise. I certainly cannot do justice to every layer of the computing tech stack, nor can I cover any layer in much detail, but it should still be fun to take a journey through the stack, stopping at each layer to see what goes on there.

We'll follow one line of code from a Ruby on Rails application and see how incredibly tall the tech stack has become. I only picked Rails as a starting point because it's one of the higher-level frameworks in a high-level language, and I happen to know it. Other choices exist, but it gives us something to focus on instead of talking in generalities all the way down. I'm also focusing on one line of code instead of following an operation like an HTTP request because it's fascinating that all of the actions on each of these layers are happening simultaneously, whereas an HTTP request has more sequential components to it. Maybe I can cover an HTTP request another time.

Let's get started on this incredible microsecond in the life of a line of code. With every level that we descend, things are getting more physical, and keep in mind all of these actions are happening at the same time on the same piece of silicon. Let's start peeling back the layers.

Ruby on Rails Application

Here we are on the top of the tech stack. Things are pretty cushy up here, and we can express an awful lot of work with only a few lines of code. For our example, we'll use this line of code:
respond_with @user
This piece of code sits in a controller that handles an HTTP POST request, takes the appropriate actions for the request, and renders a response to send back to the web browser. Because we're at such a high level of abstraction, this one line of code does a tremendous amount of work. To see what it does, we'll have to peel away a layer and look in the Rails framework.

Ruby on Rails Framework

The respond_with method call is a call into the Ruby on Rails framework, and here's what the source code looks like:
# File actionpack/lib/action_controller/metal/
# mime_responds.rb, line 390
    def respond_with(*resources, &block)
      if self.class.mimes_for_respond_to.empty?
        raise "In order to use respond_with, first you " + 
              "need to declare the formats your controller " + 
              "responds to in the class level."
      end

      if collector = retrieve_collector_from_mimes(&block)
        options = resources.size == 1 ? {} : 
                    resources.extract_options!
        options = options.clone
        options[:default_response] = collector.response
        (options.delete(:responder) || self.class.responder).
          call(self, resources, options)
      end
    end
First, the method checks that it knows which mime type to generate the response for, and if there is none, it raises an error. Next, it calls another method deeper inside the Rails framework that executes the block of code provided with the call to respond_with, if there is one, and returns an object that knows the response type. Finally, a call is made at the end to render the response from a template in the Rails application.

I'm glossing over a lot of stuff here, so if this explanation doesn't make sense, don't worry about it. I'm not trying to analyze Rails in depth, so we can just enjoy the view as we pass by. Many more method calls are happening inside the methods called from this method, and it's all thin layers of abstraction contained within Rails and the other gems that it depends on. Gems are collections of Ruby code that are packaged up into convenient chunks of functionality that can be included in other Ruby programs, and they could be considered a layer in and of themselves. We'll group all of those thin layers into one for the purposes of this discussion, otherwise this post will go on forever. The important thing to keep in mind is that a ton of stuff is going on at each level, and the sheer number of moving parts that are flying around as we continue down is astonishing.

Let's focus in on one line of code and move down to the next layer.

Ruby

This line looks fairly simple:
options = resources.size == 1 ? {} : resources.extract_options!
Ruby on Rails is written in a programming language called Ruby. (Bet you didn't already know that from the name, amiright?) The line of code we're looking at is a line of Ruby code. What does it do? It assigns a local variable called options one of two values depending on the size of resources. If the size is one, then options is an empty hash table, otherwise it gets assigned to the options that can be found in resources.

Ruby is made up of a set of thin layers of abstraction much like Rails, but with Ruby the layers are made up of the standard library and the core language features. The call to resources.size is actually a call to a standard library method for the Array class that is referred to as Array#size. Normally a language and its standard library go together, so we'll consider them one layer in the stack.

Ruby is an interpreted language, which means there is another program running, called the interpreter, that reads every line of code in a Ruby program file and figures out what it means and what it needs to do during run time. An interpreter can be implemented in a few different ways. In particular, Ruby has interpreters written in C (the MRI interpreter), Java (JRuby for the JVM), and Ruby (Rubinius, how's that for recursion). To make things a bit more interesting, let's look at the JRuby implementation.

JRuby Interpreter

The JRuby interpreter reads files of Ruby code, parses it, and executes instructions to do whatever the Ruby code says should be done. Since JRuby is written mostly in Java, it's compiled Java code that's doing the work of figuring out array sizes, deciding which values to use, and assigning hash tables to local variables in our line of code above. An interpreter has a ton of decisions to make, and a lot of code is being executed to make those higher-level Ruby statements happen.
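To get a feel for what an interpreter does, here's a toy tree-walking evaluator in Ruby. This is nowhere near JRuby's actual design, which parses to an intermediate representation and compiles to bytecode; it only shows the core idea of representing code as a tree of nodes and evaluating them:

# A toy tree-walking interpreter. Each node evaluates itself
# against an environment of known values.
Literal = Struct.new(:value) do
  def evaluate(env); value; end
end
Variable = Struct.new(:name) do
  def evaluate(env); env.fetch(name); end
end
Equals = Struct.new(:left, :right) do
  def evaluate(env); left.evaluate(env) == right.evaluate(env); end
end
Ternary = Struct.new(:cond, :if_true, :if_false) do
  def evaluate(env)
    cond.evaluate(env) ? if_true.evaluate(env) : if_false.evaluate(env)
  end
end

# A tree shaped like: resources.size == 1 ? {} : resources.extract_options!
ast = Ternary.new(
  Equals.new(Variable.new(:resources_size), Literal.new(1)),
  Literal.new({}),
  Variable.new(:extracted_options))

p ast.evaluate(resources_size: 1, extracted_options: { responder: nil })  # => {}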

What is essentially happening in the interpreter is that code in one language is getting translated into code in another language using a third language. In this case, Ruby code is translated into Java bytecode using Java. Java bytecode is compiled Java code that is similar to an assembly language. If we assume with our example line of Ruby code that the conditional assignment operator is implemented in the JRuby interpreter as a Java method with references for the empty hash and the options hash, then the method might look like this:
public Object conditionalAssign(int lhsCompare,
                                int rhsCompare,
                                Object trueObj,
                                Object falseObj) {
  if (lhsCompare == rhsCompare) {
    return trueObj;
  } else {
    return falseObj;
  }
}
This Java method is returning a reference to an object dependent on the outcome of the equality comparison. That's what we want for the Ruby conditional assignment, but the interpreter wouldn't be outputting Java code; it would be outputting Java bytecode. The equivalent Java bytecode for the conditionalAssign method is this:
0: iload_1
1: iload_2
2: if_icmpne     7
5: aload_3
6: areturn
7: aload_4
8: areturn
This Java bytecode runs on a virtual machine, which brings us to the next layer in our tech stack.

JVM

The JVM is a virtual machine that models a physical processor in software: it takes in instructions represented as Java bytecode and outputs the right machine code for the processor it's running on. The original idea with the JVM was that a software program could be compiled once for the JVM, and then it could run on any hardware that had the JVM implemented on it. This idea of write once, run anywhere didn't quite turn out as the JVM proponents hoped, but the JVM still has a lot of value because many languages can run on it and it runs on many hardware platforms.

Much like the JRuby interpreter, the JVM is doing a translation, this time from bytecode to assembly code, and the JVM itself is most likely written in yet another language, C++ in the case of HotSpot. If the JVM happened to be running on an x64 processor, it might emit assembly code that looks like this:
 .globl conditionalAssign
 .type conditionalAssign, @function
conditionalAssign:
.LFB0:
 .cfi_startproc
 pushq %rbp
 .cfi_def_cfa_offset 16
 .cfi_offset 6, -16
 movq %rsp, %rbp
 .cfi_def_cfa_register 6
 movl %edi, -4(%rbp)
 movl %esi, -8(%rbp)
 movq %rdx, -16(%rbp)
 movq %rcx, -24(%rbp)
 movl -4(%rbp), %eax
 cmpl -8(%rbp), %eax
 jne .L2
 movq -16(%rbp), %rax
 jmp .L3
.L2:
 movq -24(%rbp), %rax
.L3:
 popq %rbp
 .cfi_def_cfa 7, 8
 ret
 .cfi_endproc
I know, things are starting to get ugly, but we're definitely making our way deep into the stack now. This is the kind of code that the actual physical microprocessor understands, but before we get to the processor, the assembly code has to get from a hard disk onto that processor, and that requires memory.

The Memory Hierarchy

The memory hierarchy of a modern processor is deep and complicated. We'll assume there's enough memory so that the entire state of the program—all of its program code and data—can be held in it without paging anything to disk. That simplifies things a bit since we can ignore things like disk I/O, virtual memory, and the TLB (translation look-aside buffer). Those things are very important to modern processors, so remember that they exist, but we'll focus on the rest of the memory hierarchy.

Computer motherboard diagram

The main goal of memory is to get instructions and data to the processor as fast as possible to keep it well fed. If the processor doesn't have the next instruction or piece of data that it needs, it will stall, and that's wasted cycles. Let's focus on a single assembly instruction at this point, the jne .L2 instruction. This instruction is shorthand for jump-if-not-equal, and the target is the .L2 label. We'll get into what it does in the next layer. Right now we only need to know that the instruction isn't actually represented as text in memory. It's represented as a string of 1s and 0s called bits, and depending on the processor, instructions could be 16, 32, or 64 bits in length. Some processors even have variable-length instructions.

So the processor needs this jne instruction, and it's loaded into the memory. Normally it starts out in main program memory (DDR in the diagram above), which is very large but much slower than the processor. It then makes its way down the hierarchy to the L3, L2, and L1 caches on the processor. Each cache level is faster and smaller than the previous one, and depending on the processor, it may have fewer levels of caching. Each cache has a policy to decide whether to keep or remove instructions and data, and it has to keep track of whether cache lines have been written to or are stale. It all gets extremely complicated, but the basic idea is that the lowest level of cache should hold the instructions and data that are most often or most recently used. Ideally the L1 cache will always have the next instruction that the processor needs because it normally runs at or very near the processor's clock speed.
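To make the idea concrete, here's a toy model of a direct-mapped cache lookup in Ruby. Real caches are hardware, with set associativity, dirty bits, and smarter replacement policies, so treat this strictly as a sketch of the lookup logic:

LINE_SIZE = 64  # bytes per cache line
NUM_LINES = 8   # a comically small cache

cache = Array.new(NUM_LINES)  # each slot holds the tag of the cached block

def access(cache, address)
  index = (address / LINE_SIZE) % NUM_LINES  # which cache line to check
  tag = address / (LINE_SIZE * NUM_LINES)    # identifies the memory block
  if cache[index] == tag
    :hit    # the instruction or data is already in the cache
  else
    cache[index] = tag  # miss: fetch from the next level, evicting the old line
    :miss
  end
end

[0x1000, 0x1004, 0x1040, 0x1000, 0x3000, 0x1000].each do |addr|
  puts "0x#{addr.to_s(16)}: #{access(cache, addr)}"
end
# The access to 0x3000 conflicts with 0x1000's line, so 0x1000 misses again.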

The Microprocessor

Once this jne instruction reaches the processor, it needs to be executed. For that to happen, the processor needs to know what the instruction is and where its inputs and outputs are. The instruction is decoded to figure this out. The processor looks at the series of 1s and 0s of the jne instruction and decides that it needs to look at some flags that were set by the previous instruction, and if the equal flag is set to 0, it will jump to the target address specified by the label .L2.

Not all instructions are decoded into a single operation. Some instructions do more work than is reasonable to do all at once. These instructions are broken up into more bite-sized chunks of work, called microcode, before they are executed. Then all of these instructions are fed to the next layer of abstraction.
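As a loose illustration only (real microcode is proprietary, and these operation names are invented), the decode step maps simple instructions to a single operation and cracks complex ones into several:

# Hypothetical decode: instruction text in, list of micro-operations out.
def decode(instruction)
  case instruction
  when /\Ajne/  then [:branch_if_not_equal]                        # simple: one op
  when /\Apush/ then [:decrement_stack_pointer, :store_to_stack]   # cracked in two
  else [:unknown]
  end
end

p decode("jne .L2")     # => [:branch_if_not_equal]
p decode("pushq %rbp")  # => [:decrement_stack_pointer, :store_to_stack]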

The Processor Pipeline

Instructions are no longer executed on a processor in a single clock cycle unless we're talking about a very simple processor, like a 16-bit microcontroller. Instead, some small amount of work is done for each instruction on each clock cycle. Fetching operands, executing operations, writing results, and even the initial decode steps are part of the pipeline. Breaking the work up into these pipeline stages allows the clock speed to be faster because it's no longer limited by the longest instruction, but by the longest pipeline stage. Naturally, hardware architects try to keep pipeline stages well balanced.
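Here's a toy Ruby model of that idea: on each clock tick every instruction advances one stage, so a new instruction enters the pipeline before earlier ones have finished. Real pipelines also handle stalls, forwarding, and branch mispredictions, all of which this sketch ignores:

STAGES = [:fetch, :decode, :execute, :memory, :writeback]

program = %w[cmpl jne movq]        # instructions waiting to enter the pipeline
pipeline = Array.new(STAGES.size)  # pipeline[i] is the instruction in stage i

8.times do |cycle|
  pipeline.pop                     # the last stage retires its instruction
  pipeline.unshift(program.shift)  # the next instruction enters the fetch stage
  status = STAGES.zip(pipeline).map { |stage, insn| "#{stage}=#{insn || '-'}" }
  puts "cycle #{cycle}: #{status.join(' ')}"
end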

Intel Nehalem Microarchitecture
"Intel Nehalem arch" by Appaloosa - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Intel_Nehalem_arch.svg#mediaviewer/File:Intel_Nehalem_arch.svg

Modern processors also have stages that figure out if multiple instructions can be executed at once and feed them to multiple execution units. These dispatch stages can even decide to execute later instructions ahead of earlier ones if their dependencies are satisfied. In today's processors literally hundreds of instructions could be in flight at once and starting and finishing out of order. If you thought you knew what your code micro-optimizations were doing for performance, you might want to reconsider that position. The processor is way ahead of you.

Our original Rails code is completely unrecognizable at this point, but we're not done, yet.

The Pipeline Stage

A pipeline stage consists of some combinational logic that runs between clock cycles, and a set of memory elements, called flip-flops, that store the results of that logic for the next pipeline stage. The results are latched into the flip-flops on every clock edge.

Combinational logic can do all kinds of operations, including logical operations, arithmetic, and shifting. This logic is normally described using an HDL (Hardware Description Language) like Verilog, which resembles other programming languages but adds a concept of a clock, with statements executing in parallel between clock edges. A ton of stuff is happening at once in Verilog simulations because all combinational logic in the processor evaluates its inputs as soon as they change.

MIPS pipeline stages

But the processor isn't executing Verilog. The Verilog is synthesized into digital logic gates, and those are what make up combinational logic and our next layer of abstraction.

Digital Logic Gates

The basic digital logic gates are NAND, NOR, and NOT. NAND stands for NOT-AND and NOR stands for NOT-OR. Why aren't AND and OR gates fundamental? It turns out that transistor circuits that implement logic gates inherently invert their outputs from 0 to 1 and 1 to 0, so AND and OR gates require an extra NOT gate on the output to invert the result again.

Digital logic gates and truth tables

Many other logic gates can be built up from these three basic gates, and a digital synthesis library (used by the synthesizer to select gates to build up the pipeline stages) can consist of hundreds or even thousands of variations of logic gates. All of these gates need to be connected together to make a processor. Sometimes it's done by hand for regular structures like memories or ALUs (Arithmetic Logic Units), and for other blocks it's done with an automated place-and-route tool. Suffice it to say, this stuff gets complicated in a hurry.
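To see how other gates fall out of the fundamental ones, here's a quick Ruby sketch that builds NOT, AND, and OR entirely out of NAND:

# NAND outputs 0 only when both inputs are 1.
NAND = ->(a, b) { a == 1 && b == 1 ? 0 : 1 }

NOT = ->(a) { NAND.(a, a) }                 # tie both inputs together
AND = ->(a, b) { NOT.(NAND.(a, b)) }        # NAND followed by an inverter
OR  = ->(a, b) { NAND.(NOT.(a), NOT.(b)) }  # De Morgan: a OR b = NOT(NOT a AND NOT b)

[[0, 0], [0, 1], [1, 0], [1, 1]].each do |a, b|
  puts "#{a} #{b} | NAND=#{NAND.(a, b)} AND=#{AND.(a, b)} OR=#{OR.(a, b)}"
end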

We're now ready to leave the land of ones and zeros because we've finally made it to the transistor.

The Transistor

Digital logic gates are normally made up of CMOS (Complementary Metal-Oxide Semiconductor) transistors. These transistors come in two varieties, NMOS and PMOS. They both have three terminals: the source, drain, and gate. A fourth connection, referred to as the bulk connection, always exists below the transistor. If the gate voltage is at or above the source voltage for a PMOS, or at or below the source voltage for an NMOS, the transistor turns off. As the gate voltage moves far enough in the opposite direction, past the threshold voltage, the transistor turns on.

CMOS NAND gate

This schematic shows a NAND gate that is made up of four transistors. Two PMOS have their sources connected to Vdd (the digital supply voltage), and two NMOS are stacked together with the bottom one's source connected to ground. An input voltage at Vdd corresponds to a 1, and an input voltage at ground corresponds to a 0. In the case of the NAND gate, if either input is at ground, one of the NMOS will turn off and one of the PMOS will turn on, pulling the output up to Vdd. If both inputs are at Vdd, both NMOS turn on and both PMOS turn off, pulling the output down to ground. This behavior exactly matches what a NAND gate should do.

To understand how a transistor works, we need to descend another level.

Semiconductor Physics

A transistor is made up of a silicon substrate, a polysilicon gate sitting on top of a silicon oxide insulating layer, and two regions on either side of the gate that are doped (enriched) with ions that make a P-N junction. The doped regions are the source and drain of the transistor.

A P-N junction makes a diode where electrons will flow from the N-type side to the P-type side, but not the other way around. A PMOS needs to sit inside an extra well that is N-type so that the source and drain aren't shorted to the substrate and each other.

CMOS transistor cross sections

Considering just the NMOS, when the gate voltage is less than or equal to the source voltage (normally at ground), the source and drain are not connected and electrons can't flow from the source to the drain through the substrate. When the gate voltage rises enough above the source voltage, a channel forms between the source and drain, allowing electrons to flow and pulling down the drain voltage. The behavior of the PMOS is similar with the voltages reversed.

The Tower of Knowledge

There you have it. We've reached the bottom of the tech stack. This huge pile of technology and abstractions is sitting atop some basic materials and physics. When building an application in a high-level framework it is truly astounding how much other technology we depend on. The original respond_with method call in a Rails application is the tip of a very large iceberg.

Every layer in this stack is much more complex than I've described. You can read books, take courses, and spend years learning about the details of each layer and how to design within it. If you spend all of your time in one layer, it would probably be worthwhile to take a look at what's happening at least one or two layers below you to get a better understanding of how the things you depend on work. It's one way of learning the basics.

It may also be worth learning about the layer above where you work so that you can better appreciate the issues that designers in that layer have to deal with all the time. The better these layers of technology can interact and the better the abstractions become, the more progress we'll make designing new technologies. Knowledge is power, and there's an awful lot of knowledge to be had.

Better Mileage Data with the 2013 Nissan Leaf

I've now had a 2013 Nissan Leaf for nearly 8 months, and temperatures here in Madison, WI have gone below zero, so I have a nice amount of data to share on this newer model. The 2013 Leaf S replaced the 2012 Leaf SL I had previously, and it includes a state-of-charge (SOC) percentage display on the dash that was lacking in the older models. This SOC reading is a major improvement to the unreliable GOM (Guess-O-Meter, or miles-to-fully-discharged meter) that I used before in my data collection.

I haven't invested in any other kind of meter for measuring the Leaf's battery state because I wanted to treat the car more like a normal driver would. Measuring internal battery messages off of the car's CANbus was decidedly outside of normal driver behavior. I did purchase a P4460 Kill-A-Watt power meter for measuring the amount of electricity that is consumed by charging the car. The on-board energy efficiency meter doesn't take charging losses into account, and I wanted to know exactly how much electricity the car is using. I'll report on those numbers as well.

Before getting into the numbers, I will say that I still greatly enjoy driving the Leaf. The 80kW electric motor is nice and torque-y, with zippy performance for around town and pleasing acceleration when jumping on the freeway. The power is especially evident when scaling steep hills, as the Leaf tears up inclines as if they aren't even there. And the ride is always smooth and super quiet.

The handling so far this winter has been pretty good as well. The traction and stability control and the ABS all work when they need to, and the car's low center of gravity from the under-carriage mounted battery helps quite a bit, too. The one thing that could be improved in snow is the stock tires. They don't have the best traction, and the other safety systems have to compensate when the tires slip. I'll probably finish out the winter with them since the tread is still pretty new, but next winter I'm going to switch to snow tires. We're using winter tires on the Prius this year, and they've had an insignificant effect on mileage, so I expect the benefits of using them on the Leaf to greatly outweigh the minor range hit that I'll take.

Data Collection Methodology


Collecting data on the Leaf was fairly straightforward. After every charging cycle, I would log the date, the charge percentage, and the accumulated kWh on the Kill-A-Watt meter. I nearly always charge to 80% unless I know I'm going on a long drive the next day. I was under the impression that the lower charge level was better for the battery. I now hear, though, that new Leafs will no longer have the 80% setting option. I'm not sure if that's because Nissan is trying to avoid consumer confusion, or because there really is no negative impact to the battery when charging to 100%. It seems reasonable that the latter could be true as long as the battery isn't recharged until it gets below 80% SOC. The use case for a car battery is much different than for a laptop battery, where keeping it plugged in wears out the battery because it continually charges to 100% during use.

After each drive I keep a record of the %SOC, the odometer reading, and the outside temperature as reported on the dash. I won't charge every night if I don't need to, and I often bring the battery down to 10-20% before charging. I don't normally go below that since there isn't much useful range left for me to get to work and back again at that point. I have a new job with a shorter 16-mile round-trip commute through town instead of my old 23-mile round-trip commute on the beltline, so I can now go three or four days between charges in the summer. There's probably a trade-off between shallower depth of discharge and fewer charging cycles for better battery life, but I have no idea where the optimal point is, so I go for fewer charging cycles.

Once I have a good amount of data logged, I transfer it to Google Sheets to calculate range, average temperature, and miles/kWh. Estimating range is much easier with the %SOC numbers because all I have to do is subtract start and end odometer readings and divide by the %SOC used for those miles. I'm assuming that the miles/%SOC is linear for this calculation, but I'm not likely to push the limit to squeeze a few extra miles out of the battery at the end of the range, so assuming the same miles/%SOC over the entire range is acceptable to me. I'm still getting a reasonable estimate of range over many charging cycles.

I calculate an average temperature for each charging cycle by taking the average of temperatures for each driving segment weighted by miles driven in each segment. Temperature has a big effect on range, so getting an accurate value amidst big Midwest temperature swings is important. I'm sure that the temperature during charging also has an effect, but I don't know how I would estimate this effect without monitoring temperature during every charging cycle. I'm not set up to do that so I ignore that effect. Besides, charging temperature is heavily correlated with driving temperature, even though the Leaf is always charging in a garage. The garage is unheated so it's always a milder form of the outside environment.
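Here's the gist of those two calculations as a Ruby sketch, with one made-up charging cycle standing in for my spreadsheet rows:

# One discharge cycle: odometer and %SOC at the start and end, plus the
# miles and temperature for each drive in the cycle. Values are invented.
cycle = {
  start_odo: 12_000, end_odo: 12_048,
  start_soc: 80, end_soc: 20,
  drives: [{ miles: 30, temp: 70 }, { miles: 18, temp: 60 }]
}

miles = cycle[:end_odo] - cycle[:start_odo]
soc_used = cycle[:start_soc] - cycle[:end_soc]

# Scale up to a full 100% discharge, assuming miles per %SOC is linear.
estimated_range = miles * 100.0 / soc_used  # => 80.0 miles

# Average temperature, weighted by the miles driven in each segment.
average_temp = cycle[:drives].sum { |d| d[:miles] * d[:temp] } /
               cycle[:drives].sum { |d| d[:miles] }.to_f  # => 66.25°F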

The miles/kWh are calculated two ways. The car's measure of efficiency is recorded from the dash, and I let the meter run a measurement for a month before recording the value and resetting the meter. I also calculate the wall-to-wheels efficiency by dividing the miles travelled in a month by the kWh usage for that month from the Kill-A-Watt meter. I can then divide the wall-to-wheels value by the battery-to-wheels value to get an estimate of charging efficiency.
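The efficiency numbers work the same way. With invented values that happen to land near my typical results, the calculation looks like this:

# Monthly totals: miles driven, kWh measured at the outlet, and the
# battery-to-wheels miles/kWh reported by the Leaf's own meter.
miles_this_month = 500
wall_kwh = 125
battery_to_wheels = 5.0

wall_to_wheels = miles_this_month / wall_kwh.to_f         # => 4.0 miles/kWh
charging_efficiency = wall_to_wheels / battery_to_wheels  # => 0.8

puts "Charging efficiency: #{(charging_efficiency * 100).round}%"  # => 80%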

That's how I collect and massage the data, so let's take a look at what we've got.

What is the Real Range of a Leaf?


That is the most common question I get when I talk with people about the Leaf, and for good reason. Everyone knows EV ranges are limited right now, and the answer is it depends, of course. One thing it depends greatly on is temperature. Here's how much my Leaf's range changed with temperature in the last 8 months:

Scatter plot of 2013 Leaf estimated range vs. temperature
Clearly, temperature is the dominant effect, and the range is cut in half across this temperature span. While I was getting about 100 miles of range at 75°F, I am getting only about 50 miles at 0°F. Luckily, I don't have to drive far to work. This plot does include a mix of stop-and-go city driving and freeway driving at around 55 mph. I did take the Leaf on the interstate once at 65 mph, but only for a short while, and it doesn't noticeably show in the chart.

Two features to note in this chart are the outlier at 86°F and the wider variation in driving range at both ends of the temperature range, especially below 40°F. Regarding the outlier, this point happened to be a trip I took with the rest of the family to a high school graduation party. It was hot and raining so there was extra road resistance and I ran the A/C to keep everyone comfortable. The combination of extra weight, road resistance, and constant A/C resulted in about a 20% drop in range, which doesn't surprise me.

The wider variation at the high temperature range is likely due to there being more data points over a wider range of driving conditions. Then as the temperature dropped in the fall, the points followed a more linear curve into the colder temperatures of winter.

The wide variation at cold temperatures is probably due to a number of factors. Depending on temperature and humidity, I have to use the defroster more or less, and sometimes I use the heated seats (although the seats don't seem to impact range much). If there's snow on the roads, that adds resistance and lowers efficiency. Lower temperatures happen to coincide with less daylight and inclement weather, so I use the headlights more and usage varies a bit more depending on the weather. Since I have normal headlights instead of the LED headlights that come as an option, they use more battery power. Finally, traffic varies more in the winter, and if I'm stuck in traffic, that amplifies all of the other losses, resulting in even more variation at cold temperatures.

Despite all of these variations, I was amazed at how much more linear this data is than the data from my 2012 Leaf, which relied on the GOM to estimate range instead of the %SOC readout I have on the 2013 Leaf. Here is the scatter plot of the 2012 Leaf data for comparison:

Scatter plot of 2012 Leaf estimated range vs. temperature

This plot has many more data points, but it still looks like it has much more variation than the 2013 Leaf data does. It will also be interesting to see if the 2013 Leaf maintains its approximately 100 mile range next summer, since that seems to be better than the 2012 Leaf was while the lower temperature ranges are roughly equivalent. The regular headlights of the 2013 Leaf could be part of the reason for it not being more efficient than the 2012 Leaf in the winter.

Overall, I'm quite happy with the data I'm getting from the 2013 Leaf. I'm a bit less enthusiastic about the steep drop in range over temperature, but when I compare it to our Prius, the change in efficiency is not all that different. We normally get 55+ mpg from the Prius in the summer, but on one of those bitter cold winter days I only got 28 mpg. The big difference with the Prius is that its normal range is about 500 miles on a full tank. Cutting that range in half still leaves plenty of range to get where you need to go. When EVs have 300+ mile ranges on a charge, it won't be as big of a deal when the range drops in the winter.

How Much Does it Cost to Charge?


This is the second most common question I get about the Leaf. So far I've driven 3,811 miles and measured 983 kWh of electricity use from the wall. With an electricity usage rate of $0.18/kWh, it's cost me $177 to drive that 3,811 miles. If I compare that to a car that gets 30 mpg, it would be like paying $1.39 for a gallon of gas. The price of gas has dropped quite a bit, but it hasn't dropped quite that far. Also, paying for electricity has the advantage of being a relatively fixed rate. It doesn't change nearly as much as the price of gas, and gas prices have been much higher in the past and probably will be higher in the future.

Beyond the absolute cost of charging the Leaf, it's interesting to look at the charging efficiency. I always charge with the 110V trickle charger (except once) since I have plenty of time at night, and I've never had a problem finishing a charge before driving the next day. Using a Kill-A-Watt power meter at the wall outlet and the on-board energy efficiency meter in the Leaf, I can measure the wall-to-wheels and battery-to-wheels efficiency, respectively. After doing this for 8 months and grouping the data by month, I get the following results (September and October are combined because of a long vacation where the Leaf sat idle):

Leaf Energy Efficiency bar graph

The charging efficiency is easily calculated by dividing the wall-to-wheels efficiency by the battery-to-wheels efficiency, and it hovers around 80%, dropping slightly to 75% in November. I'm not quite sure why that happened. Another behavior that this chart shows is that the drop in energy efficiency does not fully explain the drop in range at lower temperatures. If that were the case, then the Leaf should have a range of about 80 miles in the winter, but I was averaging more like 60 miles for the last couple months. This discrepancy must mean that both the energy efficiency and the battery capacity drop with temperature. While the usable battery capacity is 19-20 kWh in the summer, it dropped to 15 kWh or less in the cold, accounting for about half of the range loss.
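Here's that reasoning as a back-of-the-envelope calculation, using round numbers close to my data (approximations only, since usable capacity can't be measured directly from the driver's seat):

# Usable capacity is roughly range divided by efficiency.
summer_range = 100.0; summer_miles_per_kwh = 5.2
winter_range = 60.0;  winter_miles_per_kwh = 4.0

summer_capacity = summer_range / summer_miles_per_kwh  # => ~19.2 kWh
winter_capacity = winter_range / winter_miles_per_kwh  # => ~15.0 kWh

# If only efficiency dropped and capacity held steady, the winter range
# would be much higher than what I actually saw.
expected_range = winter_miles_per_kwh * summer_capacity  # => ~77 miles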

Because of the range loss in the cold, the Leaf is definitely not the best car choice for everyone. If you live in a cold climate, you have to be careful to make sure you have enough range to get where you need to go or have a backup plan when the temperature drops too far. My commute is plenty short, so it works quite well for me. I love driving around in a smooth, fast, quiet car. I look forward to driving it every day, and I couldn't imagine going back to an ICE car willingly. Once battery capacity catches up with our needs, we'll be looking to get out of our Prius and into a longer-range EV. In the meantime, I'll be enjoying the Leaf and will continue to collect data to see how it performs over time. It will be interesting to see what next summer brings.

Are Patents Still Useful?

Patents used to have a well-defined role in the economy and product innovation. That is no longer the case in some technological fields, like software. Now it seems like patents have been usurped and distorted by large corporations to protect their monopolies and stifle innovations that might threaten their business. The small entrepreneur that was originally supposed to be protected is now at a major disadvantage when forced to compete with huge patent war chests and highly-paid corporate lawyers. The patent system is in dire need of change, especially because of how technology is advancing and how execution is trumping ideas in the online world of SaaS.

The Purpose of Patents


Originally, patents were an important way to protect patent holders from competitors that would steal their product ideas and put product clones on the market without needing to go through potentially expensive R&D efforts. Copying product ideas was a way to try to get a free lunch—easy for the thief, hard on the inventor, and quite damaging to the economy.

If the government didn't try to prevent this copying of ideas, then there would be no incentive for businesses to invest in R&D. Any business that did so would be spending an awful lot of money to develop a product that anyone could turn around and produce without the initial R&D costs, once the product was on the market and was reverse engineered.

With a patent system in place, the original inventor has some time to sell a product before others are allowed to copy it. The patent holder isn't given exclusive rights to produce the invention, but he or she can attempt to enforce the patent when someone has potentially violated it within the term of the patent (usually 20 years in the US). This system would seem to resolve the problem of unscrupulous competitors, but it creates its own set of problems.

The Problem with Patents


The most obvious problem with the patent system is that it created the devious monster known as the patent troll. Some people set up companies with a business model of licensing and litigation based on patents that they have no intention of designing into products themselves. They come up with ideas and file patents for them with the most general, wide-reaching claims they can get. Then they try to find small companies (and sometimes large companies) that may be violating those patents, and send the victims letters that threaten lawsuits if they don't pay licensing fees for the patents they are allegedly infringing on. The aggressor in this situation is the patent troll.

One example of a high-profile company engaging in these tactics is Rambus, Inc. They develop and license high-speed memory interfaces for microprocessors and DRAM, and they've spent plenty of time over the last 15 years in lawsuits with all kinds of semiconductor companies from Samsung to Micron to Nvidia. Instead of making memory products, they design and document memory interfaces and file lots of patents so they can then go after the companies that actually make real products.

The only reason this kind of business model can work is because it's not required to produce the invention to be issued a patent for it. If the original intent of patents was to protect the inventor from theft of her ideas until she had sufficient time to bring a product to market, then this use of patents to license technology instead of building it is a gross distortion of how patents should be used. It wastes resources in court and discourages companies from developing new technologies.

Another related problem with patents happens when companies build up huge war chests of patents and then threaten other companies with lawsuits for patent infringement whenever and wherever they can. The classic example of this problem is when IBM got Sun Microsystems to license some of their patents by threatening to sue over some weak patent infringement claims. It was thinly veiled extortion on the part of IBM's lawyers. Sun paid up because IBM had thousands of patents at their disposal, and IBM could easily outlast Sun in court.

Timothy B. Lee has a great article in the Washington Post on this problem of large patent holders, and he summarizes a reform proposal that may have made some progress on solving this problem. The article's a year old, but still quite relevant. If anything, patent war chests have gotten bigger and are doing more to stifle innovation.

Companies are in an arms race to build up a large enough patent portfolio to protect themselves from other companies, even if they have no intention of suing for patent infringement themselves. The danger is that once a company has this large collection of broad patents and a small competitor comes along, the temptation to sue to protect their business is too great.

Patent war chests end up making it very difficult for small companies to innovate because they don't have the resources to make sure they aren't violating any patents or to pay licensing fees and litigation costs if they do. This doesn't mean that small companies should be allowed to blatantly violate patents, but so many patents are either too generic, frivolous, or obvious, or they are already invalid due to prior art that was not considered at filing. There should be an easy, inexpensive way to cleanse the system of these worthless patents that only serve to entrench the old guard and restrict the progress of new ideas.


The Irrelevance of Patents


While arguably patents are still useful for companies that make physical products because of the development and manufacturing time it takes to bring them to market, software companies—especially SaaS companies—are under very different constraints. A modern software startup can go from square one to launching an internet service to an exponentially growing user base in a matter of months. The speed at which these companies need to advance to stay ahead of the competition makes the normal 20-year term of a patent look like an eternity.

In fact, many companies that make physical products can move nearly as fast as SaaS companies because of the rapid prototyping capabilities that are now available for circuit boards, plastics, and other types of hardware. If companies are depending on 15-year-old patents to protect their business instead of developing new technologies to compete in the here and now, they're probably not going to last much longer, and their patents are a frictional force on their competitors without actually improving competition. Maybe having a shorter term for patents that can be implemented rapidly, say 5 years, would go quite a way towards fixing this problem.

Another reason patents can be so restrictive is that many of them are obvious solutions to the problems you would encounter when developing a product. Especially with software, solutions that are novel and unique are quite rare. Most of the time you're building on top of a massive stack of other technologies, extending it a little more in obvious ways to solve your specific problem. Very few people are inventing new data structures and algorithms that would actually warrant a patent.

Finally, patents are ideas, and ideas are worthless. It doesn't matter how many patents a company holds. The real value of those patents is in the products that company builds and how well they execute in the market. Google, Amazon, Facebook, Twitter, and all the other big internet service companies probably have thousands upon thousands of patents, but at the end of the day, they didn't get where they are because of their patents. They got where they are because of how well they executed as businesses. So why do they have so many patents? They probably feel like they have to, to protect their investments of time and money in their ideas. In reality their patents may be irrelevant at best, and extra baggage that's holding them back at worst. Tesla decided to get rid of all this baggage, and hopefully help the entire auto industry develop EVs faster in the process, by open-sourcing all of their patents.

Exactly how much are these patent war chests holding back the advancement of technology, I wonder. Patents definitely had their uses in the past of protecting a company's ideas while they brought their products to market. Now that products are being developed so quickly, and standing still can be the death knell of a company, patents should change to reflect this new environment. If patents are still useful for software companies, they probably won't be for much longer.

How Would You Organize 180 Million Websites?

There are currently about 180 million active websites on the internet. Finding what you need is going to be a challenge. Finding the website that meets your needs exactly and gives you a great experience is even harder. Organizing and finding stuff on the web has become a massive industry, with Google, Facebook, and Twitter battling for your precious time to best give you what you're looking for.

It's an extremely hard problem, and Google's method works pretty well for me, so I was intrigued when I came across this post by Roy Pessis on how Google is killing the web. He laments how hard it is to find the awesome websites he comes across, and to recall them when you need them:
Every week I find at least one site that blows my mind. I get excited about how this service could evolve into something big, its potential to grow into a billion dollar business, and how it can change the face of the Internet.

But you won’t find these great sites on the first page of Google results—you might not find them on the first 10. As a result, these services, some of them genuinely life-changing, get lost in the dark recesses of the Internet. Even when you find these gems, you probably won’t think to access them the next time you log on. Their biggest challenge is finding a large enough audience to create a habit around their product.
It's a commendable goal to want to improve the web experience and connect people with the companies that can best help them with their needs. If a service could show me the websites that would most efficiently and effectively help me do what I want right now, that would be beyond excellent.

This article really got me thinking about how the web could be better, but then a funny thing happened. I got stuck on the sheer scale of the problem. There are not one but three main challenges to overcome, challenges that the big internet companies are attacking in various ways and already solving fairly well. Any new solution will have to do better on all three than the solutions currently out there, and that's a lot more difficult than convincing people that there should be a better way to find what they're looking for on the web.

How do you find exactly what you're looking for?


Finding the handful of websites that would best help you among the 180 million out there is hard enough, but doing it quickly, billions of times per day, for hundreds of millions of users is shockingly difficult. Every user's idea of what they're looking for has its own context. Different websites will align better with different users' needs, even when they deal with the same topics. Finding the best match for everyone has a significant amount of irreducible complexity.

Each of the major internet companies deals with this complexity in a different way. Google attempts to match people to websites with keyword search. They index the web, find the keywords you're looking for in text and links, and return a ranked list of results. The whole process is much more complicated than that, of course, but it's a logical way to look for something in such a massive amount of information.
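To make that concrete, here's a minimal sketch of the core data structure behind keyword search, an inverted index. The pages and the scoring are hypothetical stand-ins; a real search engine layers link analysis, personalization, and countless other signals on top of this basic idea.

```python
from collections import defaultdict

# Hypothetical toy corpus standing in for crawled web pages.
pages = {
    "bikes.example.com": "road bikes and mountain bikes for sale",
    "physics.example.com": "the physics of bicycles and balance",
    "news.example.com": "local news sports and weather",
}

# Build the inverted index: each word maps to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    """Rank pages by how many of the query's words they contain."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("mountain bikes"))  # ['bikes.example.com']
```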

Facebook takes a different tack. They figure you'll be interested in the same types of things your friends are interested in. You're likely to want to read or watch the things your friends find, create, and post, so your Facebook feed attempts to show you the posts from your friends that are most likely to interest you. This is not so much directed searching as finding what you're looking for through serendipity. You can find a lot of things you're interested in this way, but not likely what you're looking for right now.
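A rough sketch of that serendipity model might look like the following, with made-up affinity numbers; Facebook's actual ranking signals are far more elaborate, but the gist is that closer friends and fresher posts float to the top.

```python
# Hypothetical posts from friends: (friend, hours_old, text).
posts = [
    ("alice", 2, "Check out this woodworking site"),
    ("bob", 30, "My vacation photos"),
    ("alice", 50, "An old article on gardening"),
]

# Made-up affinity scores: how much you interact with each friend.
affinity = {"alice": 0.9, "bob": 0.4}

def feed_score(post):
    """Weight a post by friend closeness, decayed by its age."""
    friend, hours_old, _ = post
    return affinity[friend] / (1 + hours_old)

# Show the highest-scoring posts first.
for _, _, text in sorted(posts, key=feed_score, reverse=True):
    print(text)
```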

Twitter uses yet another approach. It's similar to Facebook in that you follow other people and see a feed of their posts, but it's much more transient, and you see all of the posts, as well as posts by others responding to them. Choosing who to follow based on what you want to see matters much more here. If you carefully select who you follow, you'll have a well-curated feed of highly relevant links, comments, and discussions related to your interests. You do have to put in the time, and like Facebook, you probably wouldn't look to Twitter as a resource for immediate problems. But you can find a lot of valuable stuff this way over time.
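The Twitter model is simpler to sketch because nothing is filtered out: merge the full timelines of everyone you follow, newest first. The usernames and timestamps here are invented.

```python
import heapq

# Hypothetical timelines, each already sorted newest-first by timestamp.
timelines = {
    "editor_friend": [(1700000500, "A new grammar pet peeve"),
                      (1700000100, "Weekly link roundup")],
    "physics_blog": [(1700000400, "Post on quantum teleportation")],
}

# Merge every followed timeline into one reverse-chronological feed.
feed = heapq.merge(*timelines.values(), key=lambda t: t[0], reverse=True)
for _, post in feed:
    print(post)
```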

Things aren't completely segmented along these lines, and each of these companies uses elements of the other approaches to help you find what you're looking for. Each of them provides a markedly different experience and makes different choices for the trade-offs involved. While none of them are perfect, they all get the job done fairly effectively, and each of them works better in certain situations.

How do you remember what you've already found?


Once you find something valuable on the web, you probably want to save it for later use. If you found it through Google, you may be able to use the same search terms to find it again the next time, if you can remember how you did it. It's even harder to find old stuff on Facebook, and it's nearly impossible on Twitter.

If you want to use the web like your desktop or tablet and store things for frequent use, then you need to "install" the websites you use most with bookmarks or a website like delicious.com. Personally, I use Firefox bookmarks, and they work pretty well. I keep them organized in folders, and I have access to them on any device that has Firefox installed. I can see how they don't scale well, though, and with hundreds of bookmarks, I'm starting to depend more on the search feature.

I don't know how to make bookmarks scale better, but desktops and tablets suffer from the same problem with installed apps. I know people who have installed 200+ apps on their smartphone and are in the same predicament. They can't find what they need when they need it. They need search. Having all of your apps on your desktop, just one click away, doesn't help if you can't find the ones you need in the sea of apps you never use. The desktop isn't really a solved problem. It's a different problem. Trying to make the web more like the desktop isn't going to solve any of the web's problems.

The real problem here is that once you get past a few dozen apps or bookmarks or whatever, it's hard to remember where you put them when you need them unless you've done a great job organizing them yourself. At a certain number of things, it's easier to resort to search. The web is way past that number, so the default is search.
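A minimal sketch of why search wins at that scale (the bookmark titles here are invented): instead of remembering which folder holds what, you match words against everything at once.

```python
bookmarks = [
    "Python documentation",
    "Recipe: sourdough starter",
    "Bike maintenance guide",
    # ...imagine a few hundred more entries.
]

def find(query, items):
    """Return items containing every word of the query, case-insensitively."""
    words = query.lower().split()
    return [item for item in items
            if all(word in item.lower() for word in words)]

print(find("bike guide", bookmarks))  # ['Bike maintenance guide']
```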

I find that I use search more on the desktop now because it works so well for the web. I reserve the prime real estate on my taskbar for the dozen programs I use the most, and similarly, I have fewer than a dozen pinned tabs in Firefox for my most-used websites. Keeping more things than that available at once just isn't useful.

How do the best sites get noticed?


I'm sure we've all had the same experience of finding an awesome website, and then wondering why it was so hard to find or why we didn't find it sooner. These websites should be easy to find, right? Everyone should be using them because they're so awesome! But everyone has a different idea of what makes a great website, and there are a lot of different interests out there.

The most popular websites gained their popularity over time, and lots of websites benefit from network effects; they become more useful as more people use them. Sites like Facebook, Twitter, Amazon, and Stack Overflow depend on the sheer volume of users to make the sites better. It takes time and effort to build a site from small beginnings, and a site with lots of potential is much different from a site with millions of users. Not every awesome website is going to make that transition.

There's also the issue of competing in a world where the power law rules. Ben Thompson has an excellent article on how newspapers are suffering from the availability of content:
Most of what I read is the best there is to read on any given subject. The trash is few and far between, and the average equally rare.
This, of course, is made possible by the Internet. No longer are my reading choices constrained by time and especially place.
This property applies to all websites, though. It's hard to get noticed unless you're the best because people don't have time to look at more than a few sources for any given topic. They're going to devote their precious time to the sites that are most likely to give them a good return on that investment. That typically means the popular sites get the traffic. To get popular, sites need great content and great promotion strategies, or they'll get lost in the sea of other sites.

Every once in a while a new channel comes along that allows new websites to promote themselves easily and get popular, but that only works until the new channel gets saturated. Facebook and Twitter are recent examples. It may seem like app stores are a good model for promoting websites because they've worked so well for smartphone and tablet apps. They've got reviews and ratings, and if you get promoted by Apple or Google, your app can really make it big. But there are still a lot of crappy apps out there and only a small number of great apps to find. iOS now has over 1.2 million apps, and Android has well over 1.3 million. At those numbers it's not much easier to get noticed in an app store than it is on the web, no matter what the app store is like.

I would absolutely love a better web browsing experience. I think everyone would. I would love to find the best sources on any given topic or task instantaneously, without any search effort. Who wouldn't? But who is judging what "best" is? My definition of best is almost guaranteed to be different from anyone else's for a large selection of things. Aggregating opinions through ratings can go a long way, but what about the websites that go unnoticed that might be perfect for me? I wouldn't know unless I tried all of the options, and I don't have the time or the inclination to do that for most things.

I'm willing to give up some choice to Google or Amazon in exchange for expediency and something that satisfies my needs, something that is good enough. Taking into account the magnitude of content being sifted through, the current browsing experience is more than good enough. I would welcome a better solution, but it's probably not going to replace the ones that are already out there. A new solution will have to make its own choices on the trade-offs, and it will have to first figure out how to organize those 180 million websites.