In surveys of AI “experts” on when we are going to get to human level intelligence in our AI systems, I am usually an outlier, predicting a timeline ten or twenty times longer than that of the second most pessimistic person surveyed. Others have a hard time believing that it is not right around the corner, given how much action we have seen in AI over the last decade.
Could I be completely wrong? I don’t think so (surprise!), and I have come up with an analogy that justifies my beliefs. Note, I started with the beliefs, and then found an analogy that works. But I think it actually captures why I am uneasy with the predictions of just two, or three, or seven decades, that abound for getting to human level intelligence. It’s a more sophisticated and detailed version of the story about how building longer and longer ladders will not get us to the Moon.
The analogy is to heavier than air flight.
All the language that follows is expressing that analogy.
Starting in 1956, our AI research got humans up in the air in hot-air balloons, in tethered kites, and in gliders. About a decade ago Deep Learning cracked the power-to-weight ratio of engines that let us get to heavier than air powered flight.
If we look at the arc of the 20th century, heavier than air flight transformed our world in major ways.
It had a major impact on war within its first two decades; four decades in, it had completely transformed how wars were fought, and that transformation continued for the rest of the century.
From the earliest days it changed supply chains for goods with high ratios of value to mass. First it was mail: the speed of sending long messages (telegraphs were good only for short ones), and then the speed of getting bank instructions, letters of credit, and receipts, all paper based, around the globe, transforming commerce into a global enterprise. Later it could be used for the supply chains of intrinsically high value goods, including digital devices, and even bluefin tuna.
It also transformed human mobility, giving rise to global business even in the PZ1 era. And it gave the richest parts of the world a new level of personal mobility and a new sort of leisure. Airplanes caused overcrowding at Incan citadels and the palaces of divine kings, steadily getting worse as more countries and larger populations reached the wealth threshold for personal air travel. Who knew? I doubt that this implication was on the mind of either Wilbur or Orville.
Note that for the bulk of the 20th century most heavier than air flight had a human pilot on board, and even today that is still true for the majority of flying vehicles. The same goes for AI. Humans will still be in the loop for AI applications, at one level or another, for many decades.
This analogy between AI and heavier than air flight embodies two things that I firmly believe about AI.
First, there is tremendous potential for our current version of AI. It will have enormous economic, cultural, and geopolitical impact. It will change life for ordinary people over the next many decades. It will create great riches for some. It will lead to different world views for many. Like heavier than air flight, it will transform our world in ways we cannot yet guess.
Second, we are not yet done with the technology by a long shot. The airplanes of 1910 (and the AI of today) look incredibly dangerous in hindsight; they were really only for daredevils without too much regard for fatal consequences, and no one but a complete nutcase flies a 1910-technology airplane any more. But airplanes did get better, and the technology underwent radical changes. No modern airplane engine works at all like those of 1903 or even 1920. Deep Learning, too, will be replaced over time. It will seem quaint, and barely viable in retrospect. And woefully energy inefficient.
But what about human level intelligence? Well, my friends, I am afraid that in this analogy it lies on the Moon. No matter how much we improve the basic architecture of our current AI over the next 100 years, we are just not going to get there. We’re going to need to invent something else. Rockets. To be sure, the technologies used in airplanes and rockets are not completely disjoint. But rockets are not just overdeveloped heavier than air flying machines. They use a different set of principles, equations, and methodologies.
And now the real kicker for this analogy. We don’t yet know how deep our gravity well is. We don’t yet know how to build human level intelligence. Our current approaches to AI (and neuroscience) might be so far off that in retrospect they will be seen as so wacky as to be classified as “not even wrong”. (For those who do not know the reference, that is a severe diss.)
Our gravity well? For a fixed density of its innards, the surface gravity of a planet is proportional to its radius (or diameter). If our Earth were twice its actual diameter, gravity would be twice what it is for us, and today’s chemical rockets probably wouldn’t have enough oomph to get people into orbit. Our space-faring aspirations would be much harder to achieve, even at the tiny deployed level we have so far, 117 years after our first manned heavier than air flights. But could it be worse than two times? Give Earth a 32,000 mile diameter rather than our 8,000 mile diameter and, gosh, chemical rockets might never be enough (and we would probably be much shorter and squatter…). That is how little we know today. We don’t know what the technology of human level intelligence is going to look like, so we can’t estimate how hard it is going to be to achieve. Two hundred years ago no one could have given a convincing argument on whether chemical powered rockets would get us to the Moon or not. We didn’t know the relationships between chemical reactions and Earth’s gravity that now give us the answer.
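To put rough numbers on that intuition, here is a minimal back-of-the-envelope sketch in Python. It assumes a uniform-density planet, the idealized Tsiolkovsky rocket equation, and a generous chemical exhaust velocity of about 4.5 km/s; those numbers are my illustrative assumptions, not serious astronautics. Since a planet’s mass grows as the cube of its radius, surface gravity (GM/r²) and escape velocity both scale linearly with radius at fixed density.

    # Back-of-the-envelope sketch of the "deeper gravity well" point above.
    # Assumes a uniform-density planet, the idealized Tsiolkovsky rocket
    # equation, and a generous chemical exhaust velocity of 4.5 km/s;
    # illustrative numbers only, not serious astronautics.
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    RHO = 5514.0         # Earth's mean density, kg/m^3
    R_EARTH = 6.371e6    # Earth's radius, m
    V_EXHAUST = 4500.0   # optimistic chemical exhaust velocity, m/s

    def escape_velocity(radius):
        # With M = (4/3) * pi * rho * r^3, both surface gravity and
        # escape velocity scale linearly with radius at fixed density.
        mass = (4.0 / 3.0) * math.pi * RHO * radius**3
        return math.sqrt(2.0 * G * mass / radius)

    for scale in (1, 2, 4):
        v_esc = escape_velocity(scale * R_EARTH)
        # Tsiolkovsky: delta_v = v_exhaust * ln(mass_ratio), so the mass
        # ratio needed grows exponentially with the required delta_v.
        mass_ratio = math.exp(v_esc / V_EXHAUST)
        print(f"{scale}x Earth diameter: v_esc = {v_esc / 1000:.1f} km/s, "
              f"single-stage mass ratio ~ {mass_ratio:,.0f}")

For our Earth this prints the familiar 11.2 km/s and a single-stage mass ratio of roughly 12; at twice the diameter the required ratio is already around 150, and at four times it is in the tens of thousands, which is the sense in which chemical rockets might never be enough.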
That human level intelligence up there on the Moon is going to be out of reach for a long, long time. We just do not know for how long, but we surely know that we are not going to fly there on winged aircraft.
1 Pre Zoom.
Nice analogy. You might enjoy my take on (very roughly) the same issue:
https://web.eecs.umich.edu/~kuipers/opinions/AI-progress.html
Thanks Ben. A good read. And a great set of references to the early overhype. They are very helpful!!
I think the analogy with flight, and its unpredictable effects on society, is good. Also the analogy with the moon rocket, requiring still a lot more development, is good. What is missing in the analogy is what lies in between. In the moon case that is empty space (not very interesting); in the AI case there is a lot of interesting stuff between where we are and human-level AI.
We will see these intermediates in the coming years: interesting bits of technology with interesting applications and interesting fragments of minds that give us further insights into what a mind is.
Ignoring the interesting space between here and human-level AI risks slipping towards magical thinking: that the mind is something with some magic vital force that can be achieved all in one go (getting over the gravity well) or not at all. If we instead think of the mind as purely mechanical, then we should expect to be able to construct interesting parts of that machine.
Perhaps a better analogy is the synthesis of organic molecules and bits of living systems, originally believed to be impossible without some vital force. Slowly and gradually the technology has progressed to building more complex bits of life. AI might develop more like this, rather than in a quantum leap to the moon.
The existence of interesting and/or useful intermediates can have negative and positive effects. Negative: encouraging the building of longer ladders. Positive: encouraging the building of components of the final moon rocket that have some use on their own in the interim.