In my last post I wrote an
epic review of two books that delved into the nature of intelligence and the limits of computation:
Gödel, Escher, Bach and
The Annotated Turing. Both books sparked all kinds of new ideas about artificial intelligence (AI), especially
GEB. I tried to stick to the material in the books for the review, but now it's time to dig in and explore some of the ideas about AI that those books spawned. These ideas boil down to three main questions that define the scope of issues surrounding AI.
- What is the nature of intelligence?
- Is artificial general intelligence possible?
- What could happen if we create AGI?
These are big, complex questions that plenty of smart people are trying to answer for various applications. The specific reference to artificial
general intelligence is there to distinguish it from the numerous examples of artificial
narrow intelligence that we already have, such as chess programs, simulations, equation solvers, and search algorithms that outperform humans, but only at a narrowly defined task. AGI is a kind of intelligence that we have not yet achieved with computers.
The implications of these questions are fascinating. The answer to the first question will define how we would recognize intelligence and what we're aiming for with AGI. The answer to the second question is almost certainly yes, but there's much more behind it than a simple yes or no. The emergence of an AGI that satisfies the criteria from the first question would settle the second question in the affirmative. The answer to the third question is extremely hard to foresee, and the possibilities get gnarly when coupled with exponential growth. The actual outcome will most likely determine our future as a species. Heavy thoughts. We'll dig into the first question in this post and cover the other two in subsequent posts.