Blog

How Much Things Can Change

rodneybrooks.com/how-much-things-can-change/

This post is about how much things can change in the world over a lifetime. I’m going to restrict my attention to science, though there are many parallels in technology, human rights, and social justice.

I was born in late 1954, so I am 65 years old. I figure I have, with some luck, another 30 years of active intellectual life. But if I look backward and forward within my family: I knew as an adult some of my grandparents who were born late in the nineteenth century, and I expect both that I will know some of my own grandchildren when they are adults, and that they will certainly live into the twenty-second century. My adult-to-adult interactions with members of my own direct genetic line will span five generations and well over two hundred years from the beginning to the end of their collective lives, from the nineteenth to the twenty-second century.

I’m going to show how shocking the changes have been in science throughout just my lifetime, how even more shocking the changes have been since my grandparents were born, and by induction speculate on how much more shock there will be during my grandchildren’s lifetimes. All of these are people I have known.

Not everything will change, but certainly some things that we treat as truth and obvious today will no longer seem that way by early next century. We can’t know exactly which of them will be discarded, but I will put up quite a few candidates. I would be shocked if at least some of them have not fallen by the wayside a century from now.

How Has Science Changed Since Relatives I Knew Were Born?

My oldest grandparent was born before the Michelson-Morley experiment of 1887 that established that the universe was not filled with aether, but rather with vacuum.

Even when all four of my grandparents were first alive, neither relativity nor quantum mechanics had been thought of. Atoms with a nucleus of protons and neutrons, surrounded by a cloud of electrons, were unknown. X-rays and other radiation had not been detected.

The Earth was thought to be just 20 to 40 million years old, and it wasn’t until after I was born that the current estimates were first published.

It was two months after my father was born that Edwin Hubble described galaxies and declared that we lived in one of many, in our case the Milky Way. While my father was young, the possibility that some elements might be fissile was discovered, along with the idea that some might be fusible. He was an adult, back from the war, when the big bang theory of the universe was first proposed, and it is only in the last 30 years that alternatives have largely been shouted down.

In my lifetime we started out with nine planets, went down to eight, and have now observed thousands of them in nearby star systems. Plate tectonics, which revealed that the continents have not been statically in place throughout history and explained both earthquakes and volcanoes, was first hypothesized after I was born.

Crick and Watson determined the structure of DNA just the year before I was born. I was a toddler when Crick hypothesized the DNA to RNA transcription and translation mechanism, in school when the first experimental results showing how it might work came in, and in college before it was mostly figured out. Then it was realized that most animal and plant DNA does not code for proteins, and so it was labeled junk DNA. Gradually other functions for that DNA have been discovered, and it is now called non-coding DNA. All its functions have still not been worked out.

I was in graduate school when it was figured out that the split of all life on Earth into prokaryotes (cells without a nucleus) and eukaryotes (cells with a nucleus) was inadequate. All animals, plants, and fungi, belong to the latter class. But in fact there are two very distinct sorts of prokaryotes, both single celled, the bacteria and the archaea. The latter were completely unknown until the first ones were found in 1977. Now the tree of life has archaea and eukaryotes branching off from a common point on a different branch than the bacteria. We are more closely related to the unknown archaea than we are to bacteria. We had completely missed a major type of living organism; the archaea on Earth have a combined mass of more than three times that of all animals. We were just plain unaware of them–they are predominantly located in deep subsurface environments, so admittedly they were not hiding in plain sight.

In just the last few years we have realized that human bodies contain ten times as many bacterial cells as cells with our own DNA in them, though the bacterial mass is only about 3% of our body weight. Before that we thought we were mostly us.

The physical structure of neurons and the way they connect with each other was first discovered when my grandparents were already teenagers and young adults. The rough way that neurons operate was elucidated in the decade before my birth, but the first paper that laid out a functional explanation of how neurons could operate in an ensemble did not appear until I was already in kindergarten (the “What the Frog’s Eye Tells the Frog’s Brain” paper). Half the cells in brains, the glia, were long thought to be physical support and suppliers of nutrients to the neurons, playing no direct role in what neural systems did. In the second half of the twentieth century we came to understand that they play a role in neurotransmission, and modulate all sorts of behavior of the neurons. There is still much to be learned. More recently, small molecules diffusing locally in the brain have been shown to also affect how neurons operate.

What is known in science about cosmology, physics, the mechanisms of life, and neuroscience has changed drastically since my grandparents were born, and has continued to change right up until today. Our scientific beliefs have not been static; they have constantly evolved.

Will science continue to change?

It seems entirely unlikely to me that my grandchildren will one day be able to say that up until they were born scientific ideas came and went with accepted truth regularly changing, but since they were born science has been very stable in its set of accepted truths.

Things will continue to change. Below I have put a few things that I think could change from now into the beginning of the next century. I am not saying that any particular one of these will be what changes. And I would be very surprised if more than half of these will be adopted. But I have selected the ideas that currently gnaw at me and do not feel as solid as some other ideas in science. Some will no doubt become more solid. But it will not surprise me so much if any individual one of these turns into accepted wisdom.

Cosmology:

  • There is no dark matter.
  • The Universe is not expanding.
  • The big bang was wrong.

Physics:

  • There is a big additional part of quantum mechanics to be understood.
  • String theory is bogus.
  • The many worlds interpretation is judged to be confused and is discarded.

Life:

  • We discover that there is a common ancestor to archaea, bacteria, and eukarya, a fourth domain of life, that still exists in some places on Earth–and it is clearly a predecessor to the three that we know about now in that it does not have the full modern DNA version of genetics, but instead is a mixture of DNA and RNA based genetics, or purely RNA, or perhaps purely PNA, and has a simplified ancestral coding scheme.
  • We discover life elsewhere in the solar system and it is clearly not related to life on Earth. It is different, with different components and mechanisms, and its abiogenesis was clearly independent of the one on Earth.
  • We detect life on a planet that we can observe in a nearby solar system.
  • We detect an unambiguously artificial signal from much further away.

Neuroscience:

  • We discover the principles of a rich control system in plants that is not based on neurons, but nevertheless explains the complex behaviors of plants that we can observe when we speed up videos of them operating in the world. So much for electro-centrism!
  • We move away from computational neuroscience with a new set of metaphors that turn out to have both better explanatory power and provide tools that explain otherwise obtuse aspects of neural systems.
  • We find not just a different metaphor, but actual new mechanisms in neural systems of which we have not previously been aware, and they become dominant in our understanding of the brain.

How to have impact in the world

If you want to be a famous scientist then figure one of these out. You will redirect major intellectual pursuits of mankind. But, it will be a long lonely road.


Peer Review

rodneybrooks.com/peer-review/

This blog is not peer reviewed at all.  I write it, I put it out there, and people read it or not. It is my little megaphone that I alone control.

But I don’t think anyone, or at least I hope that no-one, thinks that I am publishing scientific papers here. They are my opinion pieces, worthwhile only to readers who have found my previous opinions to have turned out to be right in some way.

There has been a lot of discussion recently about peer review. This post is to share some of my experiences with peer review, both as an author and as an editor, from three decades ago.

In my opinion peer review is far from perfect. But with determination, new and revolutionary ideas can get through the peer review process, though it may take some years. The problem is, of course, that most revolutionary ideas are wrong, so peer review tends to stomp hard on all of them. The alternative is to have everyone self publish, and that is what is happening with the arXiv distribution service. Papers are getting posted there with no intent of ever undergoing peer review, and so they are effectively getting published with no review. This can be seen as part of the problem of populism, where all self-proclaimed experts are listened to with equal authority, and so there is no longer any expertise.

My Experience with Peer Review as an Author

I have been struggling with a discomfort about where the herd has been headed in both Artificial Intelligence (AI) and neuroscience since the summer of 1984. This was a time between my first faculty job at Stanford and my long term faculty position at MIT. I am still concerned and I am busy writing a longish technical book on the subject–publishing something as a book gets around the need for full peer review, by the way…

When I got to MIT in the fall of 1984 I shifted my research based on my concerns. A year later I was ready to talk about what I was doing, and submitted a journal paper describing the technical idea and an initial implementation. Here is one of the two reviews.

It was encouraging, but both it and a second review recommended that the paper not be published. That would have been my first rejection.  However, the editor, George Bekey, decided to publish it anyway, and it appeared as:

Brooks, R. A. “A Robust Layered Control System for a Mobile Robot,” IEEE Journal of Robotics and Automation, Vol. 2, No. 1, March 1986, pp. 14–23; also MIT AI Memo 864, September 1985.

Google Scholar reports just under 12,000 citations of this paper, my most cited paper ever. The approach to controlling robots that it proposed, the subsumption architecture, led directly to the Roomba, a robot vacuum cleaner, which with over 30 million sold is the most produced robot ever. Furthermore, the control architecture was formalized over the years by a series of researchers, and its descendant, behavior trees, is now the basis for most video games. (Both Unity and Unreal use behavior trees to specify behavior.) The paper still has multi-billion-dollar impact every year.
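
To make the lineage concrete, here is a minimal behavior tree sketched in Python. This is purely illustrative, not code from any of the systems mentioned; the node names (Selector, Sequence, Action) follow common behavior-tree terminology, and the game scenario is invented.

```python
class Node:
    """Base class: tick() returns True on success, False on failure."""
    def tick(self):
        raise NotImplementedError

class Action(Node):
    """Leaf node wrapping a zero-argument callable."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence(Node):
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return all(c.tick() for c in self.children)

class Selector(Node):
    """Tries children in order, succeeding on the first that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return any(c.tick() for c in self.children)

# Hypothetical game character: attack if an enemy is visible,
# otherwise fall back to patrolling.
log = []
enemy_visible = False
tree = Selector(
    Sequence(Action(lambda: enemy_visible),
             Action(lambda: log.append("attack") or True)),
    Action(lambda: log.append("patrol") or True),
)
tree.tick()  # no enemy is visible, so the fallback runs and log == ["patrol"]
```

Game engines tick such a tree every frame; the priority ordering of a Selector's children plays a role loosely analogous to the layered suppression in the subsumption architecture.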

Most researchers who stray, believing the herd is wrong, end up heading off in their own wrong direction. I was extraordinarily lucky to choose a direction that has had incredible practical impact.

However, I was worried at a deeper intellectual level, and so almost simultaneously started writing about the philosophical underpinnings of research in AI, and how my approach differed. There the reviews were more brutal, as is shown in a review here:

This was a review of lab memo AIM-899, Achieving Artificial Intelligence through Building Robots, which I had submitted to a conference. This paper was the first place that I talked about the possibility of robot vacuum cleaners as an example of how the philosophical approach I was advocating could lead to new practical results.

The review may be a little hard to read in the image above. It says:

This paper is an extended, wandering complaint that the world does not view the author’s work as the salvation of mankind.

There is no scientific content here; little in the way of reasoned argument, as opposed to petulant assertions and non-sequiturs; and ample evidence of ignorance of the literature on these questions. The only philosopher cited is Dreyfus–but many of the issues raised have been treated more intelligibly by others (the chair definition problem etc. by Wittgenstein and many successors; the interpreted toy proscription by Searle; the modularity question by Fodor; the multiple behaviors ideas by Tinbergen; and the constructivist approach by Glymour (who calls it computational positivism). The argument about evolution leaks all over, and the discussion on abstraction indicates the author has little understanding of analytic thought and scientific investigation.

Ouch! This was like waving a red flag at a bull. I posted this and other negative reviews on my office door where they stayed for many years. By June of the next year I had added to it substantially, and removed the vacuum cleaner idea, but kept in all the things that the reviewer did not like, and provocatively retitled it Intelligence Without Representation. I submitted the paper to journals and got further rejections–more posts for my door. Eventually its fame had spread to the point that the Artificial Intelligence Journal, the mainstream journal of the field, published it unchanged (Artificial Intelligence Journal (47), 1991, pp. 139–159) and it now has 6,900 citations. I outlasted the criticism and got published.

That same year, at the major international conference IJCAI (the International Joint Conference on Artificial Intelligence), I was honored to win the Computers and Thought award, quite a surprise to me, and I think to just about everyone else. With that honor came an invitation to have a paper in the proceedings without the six page limit that applied to everyone else, and without the peer review process that applied to everyone else. My article was twenty-seven pages long, double column, a critical review article of the history of AI, also with a provocative and complementary title, Intelligence Without Reason (Proceedings of the 12th Int. Joint Conf. on Artificial Intelligence, Sydney, Australia, August 1991, pp. 569–595). It now has over 3,100 citations.

My three most cited papers were either rejected under peer review or accepted with no peer review. So I am not exactly a poster child for peer reviewed papers.

My Experience With Peer Review As an Editor

In 1987 I co-founded a journal, the International Journal of Computer Vision. It was published by Kluwer as a hardcopy journal for many years, but now it is run by Springer and is totally online. It is now in its 128th volume, and has had many hundreds of issues. I co-edited the first seven volumes which together had a total of twenty eight issues.

The journal has a very strong reputation and consistently ranks in the top handful of places to publish in computer vision, itself a very hot topic of research today.

As an editor I soon learned a lot of things.

  1. If a paper was purely theoretical with lots of equations and no experiments involving processing an image it was much more likely to get accepted than a paper which did have experimental results. I attributed this to people being unduly impressed by mathematics (I had a degree in pure mathematics and was not as easily impressed by equations and complex notation). I suspected that many times the reviewers did not fully read and understand the mathematics as many of them had very few comments about the contents of such papers. If, however, a paper had experiments with real images (and back then computers were so slow it was rarely more than a handful of images that had been processed), the same reviewers would pick apart the output, faulting it for not being as good as they thought it should be.
  2. I soon learned that one particular reviewer would always read the mathematics in detail, and would always find things to critique about the more mathematical papers. This seemed good. Real peer review. But soon I realized that he would always recommend rejection. No paper was ever up to his standard. Reject! There were other frequent rejecters, but none as dogmatic as this particular one.
  3. Likewise I found certain reviewers would always say accept. Now it was just a matter of me picking the right three referees for almost any paper and I could know whether the majority of reviewers would recommend acceptance or rejection before I had even sent the paper off to be reviewed. Not so good.
  4. I came to realize that the editor’s job was real, and it required me to deeply understand the topic of the paper, and the biases of the reviewers, and not to treat the referees as having the right to determine the fate of the paper themselves. As an editor I had to add judgement to the process at many steps along the way, and to strive for the process to improve the papers, but also to let in ideas that were new. I now came to understand George Bekey and his role in my paper from just a couple of years before.

Peer reviewing and editing is a lot more like one on one teaching than like processing the results of a multiple choice exam. When done right it is about coaxing the best out of scientists, encouraging new ideas to flourish, and helping the field to proceed.

The UPSHOT?

Those who think that peer review is inherently fair and accurate are wrong. Those who think that peer review necessarily suppresses their brilliant new ideas are wrong. It is much more than those two simple opposing tendencies.

Peer review grew up in a world where there were many fewer people engaging in science than today. Typically an editor would know everyone in the world who had contributed to the field in the past, and would have enough time to understand the ideas of each new entrant to the field as they started to submit papers. It relied on personal connections and deep and thoughtful understanding.

That has changed just due to the scale of the scientific endeavor today, and is no longer possible in that form.

There is a clamor for double blind anonymous review, in the belief that that produces a level playing field. While in some sense that is true, it also reduces the capacity for the nurturing of new ideas. Clamorers need to be careful what they wish for–metaphorically it reduces them to competing in a speed trial, rather than being appreciated for virtuosity. What they get in return for zeroing the risk of being rejected on the basis of their past history or which institution they are from is that they are condemned to forever aiming their papers at the middle of a field of mediocrity, with little chance for achieving greatness.

Another factor is the growth in the number of journals. Institutions, and sometimes whole countries, decide that the way to get a better name for themselves is to have a scientific journal, or thirty. They set them up and install as editor one of their local people who has no real understanding of the flow of ideas in the particular field at the global scale. Editing then becomes a mechanical process, with no understanding of the content of the papers or of the qualifications of the people asked to do the reviews. I know this to be true as I regularly get asked to review papers in fields in which I have absolutely no knowledge, by journal editors that I have never heard of, nor of their journal, nor its history. I have been invited to submit reviews that cannot possibly be good reviews. I must infer that other reviews may also not be very good.

I don’t have a solution, but I hope my observations here might be interesting to some.

What Networks Will Co-Evolve With AI and Robotics?

rodneybrooks.com/what-networks-will-co-evolve-with-ai-and-robotics/

Again and again in human history networks spanning physical geography have both enabled and been enabled by the very same innovations. Networks are the catalysts for the innovations and the innovations are the catalysts for the networks. This is autocatalysis at human civilization scale.

The Roman empire brought people within its expanding borders long distance trade, communication, peace, and stability. Key to this was the network of roads, many of which survive as the routes of modern transportation systems, and ports. And the stability of that network was made possible by the very things that the empire brought.

The Silk Road, a network of trade routes, enabled many civilizations that themselves supported the continued existence of those trade routes.

In the eighteenth century England’s network of canals enabled the delivery of raw materials, of coal for power, and of finished goods to ports, enabling the industrial revolution and the invention of factories. The canals were built on the wealth of the factory owners, who formed syndicates to build them.

Later the train network enhanced and replaced the canal network in England. And building a train network in the United States enabled large scale farming in the mid-west to have access to markets on the east coast, and later to ports on both coasts to make the US a major source of food. At the same time, a second network, the telegraph, was overlaid on the same physical route system, first to operate the train network itself and later to form the basis of new forms of communications.

As the later telephone networks were built, they ushered in an era in which commerce and general business, rather than farming, became the principal industry. And as business grew, the need for more extensive telephone networks with more available lines grew with it.

When Henry Ford started mass producing automobiles he realized that a network of roads was necessary for the masses to have somewhere to drive. And as there were more and more roads the demand for automobiles increased. As a side effect the roads came to replace much of the rail network for moving goods around the country.

The personal computer of the 1980s was not ubiquitous in ordinary households until it was coupled to the second generation data packet network, which had started out as a reliable communications network for the military and for sharing scarce computer resources in academia. The pull on network bandwidth led to rapid growth of the Internet, and that enabled the World Wide Web, a network of information overlaid on the data packet network, which gave a real impetus for more people to own their own computer.

As commerce started to be carried out on the Web, demand rose even more, and ultimately large data centers needed to be built as the backend of that commerce system. Then those data centers got offered to other businesses and cloud computing became a network of computational resources, on top of what had been a network for moving data from place to place.

Cloud computing enabled the large scale training needed for deep networks, a computational technique very vaguely inspired by the network of neurons in the brain. Deep networks are what many people call AI today. Those networks and their demands for computation for training are driving the growth of the cloud computing network, and a worldwide network of low-paid piece workers who label the data needed to drive the training, using the substrate of the Web to move the data around, and to get paid.

Are we at the end game for AI driving networks? Or when we can get past the very narrow capabilities of deep networks to new AI technologies will there be new networks that arise and are autocatalytic with the new AI?

And what about robotics?

The disruptions to world supply chains from COVID-19 are only just beginning to be seen–there will be turbulence in many areas later in 2020. The exceptionally lean supply chains we have lived with over the last few years (relying on a network of shipping routes that in turn rely on the standardization of shipping containers to grow and interact) are likely to feel pressure to get a little fatter. That is likely to increase the demand for robotics and automation in those supply chains, a phenomenon that we have already seen starting over the last few years.

Another lesson which may be drawn from the current pandemic is that more automation is needed in healthcare, as trained medical professionals have been pushed to their limits of endurance, besides being in personal mortal peril at times.

So what might be the new networks that arise over the next few years, demanded by the way we change automation, and supported by that very change?

Here are a few ideas, none of which seem particularly compelling at the moment, certainly not in comparison to Roman roads or to the Internet itself:

A commerce network of data sets, and of sets of weights for networks trained on large data sets.

A physical network of supply points, down to the city or county level, for major robot components: mobile bases for indoors or outdoors, legged, tracked, and wheeled; various sensor packages; human-robot interaction displays and sensors; and all sorts of arms with different characteristics. These could be assembled plug and play to produce appropriate robots as needed to respond to all sorts of emergency needs.

A network of smart sensors embedded in almost everything in our lives, which lives on top of the current Internet–this is already getting built and is called IoT (Internet of Things).

A new supply network of both partially finished goods (e.g., standard embedded processor boards) and materials (seventy eight different sorts of raw stock for 3D printers a generation or two out) so that much more manufacturing can be done closer to end customers, using automation and robots.

An automated distribution network down to the street corner level in cities, with short term storage units on the sidewalk (probably a little bigger than the green storage units that the United States Postal Service has on street corners throughout US cities). Automated vehicles would supply these, perhaps at off peak traffic times, and then smaller neighborhood sidewalk robots would distribute to individual houses, or people could come to pick up.

I’m not particularly proud of or happy with any of these ideas. But based on history over the last 2,000 plus years I am confident that some sort of new networks will soon arise. Please do not let my lack of imagination dissuade you from the idea that new forms of networks will be coming.

Predictions Scorecard, 2020 January 01

rodneybrooks.com/predictions-scorecard-2020-january-01/

On January 1st, 2018, I made predictions (here) about self driving cars, Artificial Intelligence and machine learning, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

As part of self certifying the seriousness of my predictions I promised to review them, as made on January 1st, 2018, every following January 1st for 32 years, the span of the predictions, to see how accurate they were.

On January 1st, 2019, I posted my first annual self appraisal of how well I did. This post, today, January 1st, 2020, is my second annual self appraisal of how well I did–I have 30 more annual appraisals ahead of me. I think in the two years since my predictions, there has been a general acceptance that certain things are not as imminent or as inevitable as the majority believed just then. So some of my predictions now look more like “of course”, rather than “really, that long in the future?” as they did then.

This is a boring update. Despite lots of hoopla in the press about self driving cars, Artificial Intelligence and machine learning, and the space industry, this last year, 2019, was not actually a year of big milestones. Not much that will matter in the long run actually happened in 2019.

Furthermore, this year’s summary indicates that so far none of my predictions have turned out to be too pessimistic. Overall I am getting worried that I was perhaps too optimistic, and had bought into the hype too much. There is only one dated prediction of mine that I am currently worried may have been too pessimistic–I won’t name it here as perhaps I will turn out to be right after all.

Repeat of Last Year’s Explanation of Annotations

As I said last year, I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below from last year’s update post, and have simply added a total of six comments to the fourth column. As with last year I have highlighted dates in column two where the time they refer to has arrived.

I tag each comment in the fourth column with a cyan colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019.

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML, meaning “Not In My Lifetime,” i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements when it can be determined already whether I was correct or not.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green, the same ones as last year, with no new ones in January 2020.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. No Tomato dates yet. But if something happens that I said NIML, for instance, then it would go Tomato, or if in 2020 something already had happened that I said NET 2021, then that too would have gone Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa).

In summary then: Green splashes mean I got things exactly right. Red means provably wrong, in that I was too pessimistic. And blueness means that I was overly optimistic.
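
The coloring rules can be captured in a few lines of code. This is my own sketch of the rules as stated above, not anything from the original posts; `happened` is the year the predicted event occurred (or None if it has not), and `now` is the year of the appraisal.

```python
# Sketch of the NET / BY / NIML coloring rules (illustrative reading
# of the rules in the text, hypothetical function and parameter names).

def score(kind, year, happened, now):
    """Return a color name for one prediction."""
    if kind == "NIML":
        # Tomato if the "not in my lifetime" event happened before 2050.
        return "Tomato" if happened is not None and happened < 2050 else "uncolored"
    if kind == "NET":
        if happened is not None and happened < year:
            return "Tomato"        # happened before the No-Earlier-Than date: too pessimistic
        if now >= year:
            return "LawnGreen"     # the NET date arrived with nothing happening earlier
        return "uncolored"
    if kind == "BY":
        if happened is not None and happened <= year:
            return "LawnGreen"     # happened by the promised date
        if now > year:
            return "DeepSkyBlue"   # BY date passed with no event: too optimistic
        return "uncolored"

# Examples matching the text: a NET 2019 prediction, unfulfilled in 2019,
# goes green; a NET 2021 event happening in 2020 would go Tomato.
score("NET", 2019, None, 2019)   # "LawnGreen"
score("NET", 2021, 2020, 2020)   # "Tomato"
```

(The later LightSkyBlue rule, for NETs that went green but still have not come to pass years later, is left out since the text says it may only be applied eventually.)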

So now, here are the updated tables.

Self Driving Cars

No predictions have yet been relevant for self driving cars, but I have augmented one comment from last year in this first table.  Also, see some comments right after this title.

| Prediction [Self Driving Cars] | Date | 2018 Comments | Updates |
|---|---|---|---|
| A flying car can be purchased by any US resident if they have enough money. | NET 2036 | There is a real possibility that this will not happen at all by 2050. | |
| Flying cars reach 0.01% of US total cars. | NET 2042 | That would be about 26,000 flying cars given today's total. | |
| Flying cars reach 0.1% of US total cars. | NIML | | |
| First dedicated lane where only cars in truly driverless mode are allowed on a public freeway. | NET 2021 | This is a bit like current day HOV lanes. My bet is the left most lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane. | |
| Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive. | NET 2024 | | |
| First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day. | NET 2022 | The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only. | 20190101 Although a few such services have been announced, every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service; they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely. 20200101 During 2019 Waymo started operating a "taxi service" in Chandler, Arizona, with no human driver in the vehicles. While this is a big step forward, see comments below for why this is not yet a driverless taxi service. |
| Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities. | NET 2025 | A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense. | |
| Such "taxi" service as above in 50 of the 100 biggest US cities. | NET 2028 | It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out. | |
| Dedicated driverless package delivery vehicles in very restricted geographies of a major US city. | NET 2023 | The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles. | |
| A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment. | NET 2023 | The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure. | |
| A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area. | NET 2032 | This is what Uber, Lyft, and conventional taxi services can do today. | |
| Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY. | NET 2035 | Unless parking and human drivers are banned from those areas before then. | |
| A major city bans parking and cars with drivers from a non-trivial portion of the city so that driverless cars have free rein in that area. | NET 2027 BY 2031 | This will be the starting point for a turning of the tide towards driverless cars. | |
| The majority of US cities have the majority of their downtown under such rules. | NET 2045 | | |
| Electric cars hit 30% of US car sales. | NET 2027 | | |
| Electric car sales in the US make up essentially 100% of the sales. | NET 2038 | | |
| Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph. | NIML | There might be some small demonstration projects, but they will be just that, not real, viable mass market services. | |
| First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked. | NIML | Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die. | |

Chandler is a suburb of Phoenix and is itself the 84th largest city in the US. With apologies to residents of Chandler, I do not think that it comes to mind as a major US city for most Americans. Furthermore, the service has so far not been open to the public, but instead started with just a few hundred people (out of a population of about one quarter of a million residents) who had previously been approved to use the service when there was a human safety driver on board. These riders are banned from talking about when things go wrong, so we really don't know how well the system works. Over 2019 the number of riders grew to 1,500 monthly users, and a total of about 100,000 rides. Recently there has been an announcement that a phone app will make the service available to more users.

BUT, while there is no human driver in the taxi there is a remote human safety driver for all rides, as detailed in this story. While the humans can monitor more than one vehicle at a time, obviously there is a scaling issue, and the taxis are not truly autonomous. Making them so would be a big step. Also, the taxis do not operate when it is raining, even though that would be a peak usage time for taxis in most cities.

So… no self driving taxi service yet, even in a relatively small city with a population density many times less than that of major US cities.

The last twelve months have seen a real shakeout in expectations for deployment of self driving cars. Companies are realizing that it is much harder than they had come to believe, and that there are many issues beyond simply “driving” that need to be addressed. I previously talked about some of those issues on this blog in January and June of 2017.

To illustrate how predictions have been slipping, here is a slide that I made for talks based on a snapshot of predictions about driverless cars from March 27, 2017. The web address still seems to give the same predictions with a couple more at the end that I couldn’t fit on my slide. In parentheses are the years the predictions were made, and in blue are the dates for when the innovation was predicted to happen.

Recently I added some arrows to this slide. The skinny red arrows point to dates that have passed without the prediction coming to pass. The fatter orange arrows point to cases where company executives have since come out with updated predictions that are later than the ones given here. E.g., in the fourth line from the bottom, the Daimler chairman had said in 2014 that fully autonomous vehicles could be ready by 2025. In November of 2019 the chairman announced a reality check on self driving cars, as one can see in numerous online stories. Here is the first paragraph of one report on his remarks:

Mercedes-Benz parent Daimler has taken a “reality check” on self-driving cars. Making autonomous vehicles safe has proven harder than originally thought, and Daimler is now questioning their future earnings potential, CEO Ola Kaellenius told Reuters and other media.

Other reports of the same story can be found here and here.

None of the original predictions have come to pass, and those still standing are getting rather sparse.

<rant>

At the same time, however, there have been more outrageously optimistic predictions made about fully self driving cars being just around the corner. I won’t name names, but on April 23rd of 2019, i.e., less than nine months ago, Elon Musk said that in 2020 Tesla would have “one million robo-taxis” on the road, and that they would be “significantly cheaper for riders than what Uber and Lyft cost today”. While I have no real opinion on the veracity of these predictions, they are what is technically called bullshit. Kai-Fu Lee and I had a little exchange on Twitter where we agreed that together we would eat all such Tesla robo-taxis on the road at the end of this year, 2020.

</rant>

Artificial Intelligence and Machine Learning

I had not predicted any big milestones for AI and machine learning for the current period, and indeed there were none achieved.

We have seen certain proponents be very proud of how much more compute they have, growing at many times what Moore’s Law at its best would provide. I think it is fair to say that the results of all that computing since 2012 are not very impressive when compared to what a single human brain, powered at just 20 Watts, has been able to achieve in the same time frame. One just has to look at someone whose 20th birthday is today, January 1st, 2020, and compare what they know and can achieve now to what they could do in 2012.

And there has even been a little backlash about the carbon footprint caused by training modern ML models. There are even tools and best practices for cutting down the carbon footprint of your ML research. People can argue about the details, but no one can deny that the energy usage is many orders of magnitude more than that used by the meat machine inside people’s heads, and that human performance remains far more impressive than any machine performance to date. People get fooled all the time by the slick marketing around each new achievement by the machine learning companies, but when you poke at the claims you see that the achievements are rather pathetic compared to human performance.

Without any retraining, make a Go-playing program compete against a human on a 25 by 25 board, or even an 18 by 18 board. Or change all the colors of the pixels in Quake III Arena, or change the screen resolution, and humans will adapt seamlessly while the ML trained systems will have to start from zero again.

While ML conference attendance has gone up by a factor of 20 or so, the results have not become correspondingly more powerful in terms of the impact they have on the real world.

Right after the Artificial Intelligence and machine learning table I have some links to back up its assertion that there are more blog posts pushing back on Deep Learning as being all we will need to get to human level (whatever that might mean) Artificial Intelligence.

| Prediction [AI and ML] | Date | 2018 Comments | Updates |
|---|---|---|---|
| Academic rumblings about the limits of Deep Learning. | BY 2017 | Oh, this is already happening... the pace will pick up. | 20190101 There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table. 20200101 Go back to last year's update to see them. |
| The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play. | BY 2018 | | 20190101 Likewise some technical press stories are linked below. 20200101 Go back to last year's update to see them. |
| The popular press starts having stories that the era of Deep Learning is over. | BY 2020 | | 20200101 We are seeing more and more opinion pieces by non-reporters saying this, but still not quite at the tipping point where reporters come out and say it. Axios and WIRED are getting close. |
| VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning". | NET 2021 | I am being a little cynical here, and of course there will be no way to know when things change exactly. | |
| Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. | NET 2023 BY 2027 | Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out. | |
| The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML. | NET 2022 | I wish, I really wish. | |
| Dexterous robot hands generally available. | NET 2030 BY 2040 (I hope!) | Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years. | |
| A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc. | Lab demo: NET 2026. Expensive product: NET 2030. Affordable product: NET 2035 | What is easy for humans is still very, very hard for robots. | |
| A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution. | NET 2028 | There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots. | |
| A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door. | Lab demo: NET 2025. Deployed systems: NET 2028 | | |
| A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns. | Lab demo: NET 2023. Deployed systems: NET 2025 | Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to widespread deployment. | |
| An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse. | NET 2030 | I will need a whole new blog post to explain this... | |
| A robot that seems as intelligent, as attentive, and as faithful as a dog. | NET 2048 | This is so much harder than most people imagine it to be; many think we are already there, but I say we are not at all there. | |
| A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans. | NIML | | |

There are outlets now for non-journalists, perhaps practitioners in a scientific field, to write position papers that get widely referenced in social media. These position papers are often forerunners of what the popular press will soon start reporting.

During 2019 we saw many, many well informed such position papers/blogposts. We have seen explanations of how machine learning has limitations on when it makes sense to be used, and that it may not be a universal silver bullet. There have been posts that deep learning may be hitting limits as it has no common sense. We have seen questions about the practical value of the results of deep learning on game playing, as game playing is precisely where we have massive amounts of completely relevant data; problems in the real world more commonly have very little data, and reasoning from other domains is imperative to figuring out how to make progress. And we have seen warnings that all the over-hype of machine and deep learning may lead to a new AI winter, when those tens of thousands of jolly conference attendees will no longer have grants and contracts to pay for travel to and attendance at their fiestas.

I am very concerned about what will happen when the current machine/deep learning bubble bursts. We have seen the bursting of hype bubbles decimate AI research before. The prospect that the bursting of the self driving car bubble will also have a negative impact on AI research worries me as well.

Space

There were no target dates that have been hit or missed in the last year in the space launch domain, but I have made a couple of update comments in the following table, and then follow it with details in the text below.

| Prediction [Space] | Date | 2018 Comments | Updates |
|---|---|---|---|
| Next launch of people (test pilots/engineers) on a sub-orbital flight by a private company. | BY 2018 | | 20190101 Virgin Galactic did this on December 13, 2018. 20200101 On February 22, 2019, Virgin Galactic had their second flight to space of their current vehicle, this time with three humans on board. As far as I can tell that is the only sub-orbital flight of humans in 2019. Blue Origin's New Shepard flew three times in 2019, but with no people aboard, as in all its flights so far. |
| A few handfuls of customers, paying for those flights. | NET 2020 | | |
| A regular sub weekly cadence of such flights. | NET 2022 BY 2026 | | |
| Regular paying customer orbital flights. | NET 2027 | Russia offered paid flights to the ISS, but there were only 8 such flights (7 different tourists). They are now suspended indefinitely. | |
| Next launch of people into orbit on a US booster. | NET 2019. BY 2021. BY 2022 (2 different companies) | Current schedule says 2018. | 20190101 It didn't happen in 2018. Now both SpaceX and Boeing say they will do it in 2019. 20200101 Both Boeing and SpaceX had major failures with their systems during 2019, though no humans were aboard in either case. So this goal was not achieved in 2019. Both companies are optimistic of getting it done in 2020, as they were for 2019. I'm sure it will happen eventually for both companies. |
| Two paying customers go on a loop around the Moon, launched on a Falcon Heavy. | NET 2020 | The most recent prediction has been 4th quarter 2018. That is not going to happen. | 20190101 I'm calling this one now, as SpaceX has revised their plans from a Falcon Heavy to their still developing BFR (or whatever it gets called), and predicts 2023. I.e., it has slipped 5 years in the last year. |
| Land cargo on Mars for humans to use at a later date. | NET 2026 | SpaceX has said by 2022. I think 2026 is optimistic but it might be pushed to happen as a statement that it can be done, rather than for a pressing practical reason. | |
| Humans on Mars make use of cargo previously landed there. | NET 2032 | Sorry, it is just going to take longer than everyone expects. | |
| First "permanent" human colony on Mars. | NET 2036 | It will be magical for the human race if this happens by then. It will truly inspire us all. | |
| Point to point transport on Earth in an hour or so (using a BF rocket). | NIML | This will not happen without some major new breakthrough of which we currently have no inkling. | |
| Regular service of Hyperloop between two cities. | NIML | I can't help but be reminded of when Chuck Yeager described the Mercury program as "Spam in a can". | |

During a ground test of the SpaceX Crew Dragon capsule, on April 20th, 2019, it exploded catastrophically. This delayed the SpaceX program so that no manned test could be done in 2019. SpaceX traced the problem to a valve failure when starting up the capsule abort engines, needed during launch if the booster rocket is undergoing failure. They currently have a test scheduled for early 2020 where these engines will be ignited during a launch so that the capsule can safely fly away from the launch vehicle.

In December of 2019 Boeing had a major test of its CST-100 Starliner capsule, and ended up with both a failure and a success for the mission. It was supposed to be the final unmanned test of the vehicle, and was planned to dock with the International Space Station (ISS) and then do a soft landing on the ground. It launched on December 20th and achieved orbit, but due to software failures it ended up in the wrong orbit and there was not enough fuel left to get it to the ISS. This was a major failure. On the other hand it achieved a major success in doing a soft landing in New Mexico on December 22nd.

Other Hype Magnets

I have not felt qualified to talk about the hype impact of both quantum computing and blockchain. Just at the end of 2019 there was a very interesting blog post by Scott Aaronson, a true expert and theoretical contributor to the field of quantum computing, on how to read announcements about quantum computing results. I recommend it.

Guest Post by Phillip Alvelda: Pondering the Empathy Gap

rodneybrooks.com/guest-post-by-phillip-alveda-pondering-the-empathy-gap/

[Phillip Alvelda is an old friend from MIT, and CEO of Brainworks.]

Pondering how to close what seems to be a rapidly widening empathy gap here in the U.S. and globally.

I used to just be resigned to the fact that many of my white friends who had never felt, or experienced discrimination directed at themselves seem incapable of seeing or recognizing implicit, or even explicit, bias directed at others. I didn’t used to think of these people as mean or racist…just oblivious through lack of direct experience.

But now, with a nation inflamed by our own government inciting and validating hatred and bigotry, with brown asylum seekers and children dying in mass US internment camps, and LGBTQ and women’s rights under mounting assault, the discrimination has literally turned lethal. And the empathy gap is enabling these crimes against humanity to continue and grow in the US now, just like the silent majority in Weimar Germany allowed the Jewish genocide to advance.

I’ve come to see supporters of this corrupt and criminal administration as increasingly complicit in the ongoing crimes. It is no longer just a matter of not seeing discrimination that doesn’t impact your family directly.

Trump supporters and anyone who supports any of his Republican enablers must now find some way to look past the growing reports of discrimination, minority voter suppression and gerrymandering, hate crimes, repression, the roll back of women’s and LGBTQ rights, a measurable biased justice system, mass internment camps, and now even the murder of the weak and vulnerable kidnapped children that commit no crime other than to follow our own ancestors to seek freedom and opportunity in the US….. This growing mass of willfully blind conservatives have abandoned fair morality, and are direct enablers of evil.

We are now in an era I never thought to see in the US, when government manufactured propaganda is purposely driving the dehumanization of women, LGBTQ people, and people of color. The US empathy gap is widening rapidly. How can we fight these dark divisive forces and narrow the gap, when our polarized society can’t even agree on measurable objective realities like the climate crisis?

Otherwise, I fear the U.S. is on a path to dissolve into at least two countries, divided along a border between those states who value empathy and seek an inclusive and pluralistic future society, and those who seek to retreat to tribal protectionism of historical rights for a shrinking privileged majority.

That this struggle rises now really baffles me. Consider the world’s obviously increasing wealth and abundance, with declining poverty and starvation and increasing access to virtually unlimited renewable energy. The need for tribal dominance to hoard resources is disappearing. The need for borders to protect resources that are no longer scarce is vanishing.

Just imagine if all of our military and arms spending, all of the money we spend enforcing borders and limiting access to food and medicine and energy and education were instead directed towards sharing this abundance!

Pluralism and empathy are clearly the answer. How can we get more people to realize this despite the onslaught of vitriol and tribal incitement from the likes of Fox News?

AGI Has Been Delayed

rodneybrooks.com/agi-has-been-delayed/

A very recent article follows in the footsteps of many others talking about how the promise of autonomous cars on roads is a little further off than many pundits have been predicting for the last few years. Readers of this blog will know that I have been saying this for over two years now. Such skepticism is now becoming the common wisdom.

In this new article at The Ringer, from May 16th, the author, Victor Luckerson, reports:

For Elon Musk, the driverless car is always right around the corner. At an investor day event last month focused on Tesla’s autonomous driving technology, the CEO predicted that his company would have a million cars on the road next year with self-driving hardware “at a reliability level that we would consider that no one needs to pay attention.” That means Level 5 autonomy, per the Society of Automotive Engineers, or a vehicle that can travel on any road at any time without human intervention. It’s a level of technological advancement I once compared to the Batmobile.

Musk has made these kinds of claims before. In 2015 he predicted that Teslas would have “complete autonomy” by 2017 and a regulatory green light a year later. In 2016 he said that a Tesla would be able to drive itself from Los Angeles to New York by 2017, a feat that still hasn’t happened. In 2017 he said people would be able to safely sleep in their fully autonomous Teslas in about two years. The future is now, but napping in the driver’s seat of a moving vehicle remains extremely dangerous.

When I saw someone tweeting that Musk’s comments meant that a million autonomous taxis would be on the road by 2020, I tweeted out the following:

Let’s count how many truly autonomous (no human safety driver) Tesla taxis (public chooses destination & pays) on regular streets (unrestricted human driven cars on the same streets) on December 31, 2020. It will not be a million. My prediction: zero. Count & retweet this then.

I think these three criteria need to be met before someone can say that we have autonomous taxis on the road.

The first challenge, no human safety driver, has not been met by a single experimental deployment of autonomous vehicles on public roads anywhere in the world. They all have safety humans in the vehicle. A few weeks ago I saw an autonomous shuttle trial along the paved beachside public walkways at the beach on which I grew up, in Glenelg, South Australia, where there were two “onboard stewards to ensure everything runs smoothly” along with eight passengers. Today’s demonstrations are just not autonomous. In fact in the article above Luckerson points out that Uber’s target is to have their safety drivers intervene only once every 13 miles, but they are way off that capability at this time. Again, hardly autonomous, even if they were to meet that goal. Imagine having the car that you are driving break down once every 13 miles; we expect better.

And if normal human beings can’t simply use these services (in Waymo’s Phoenix trial only 400 pre-approved people are allowed to try them out) and go anywhere that they can go in a current day taxi, then really the things deployed will not be autonomous taxis. They will be something else. Calling them taxis would be redefining what a taxi is. And if you can just redefine words on a whim there is really not much value to your words.

I am clearly skeptical about seeing autonomous cars on our roads in the next few years. In the long term I am enthusiastic. But I think it is going to take longer than most people think.

In response to my tweet above, Kai-Fu Lee, a very strong enthusiast about the potential for AI, and a large investor in Chinese AI companies, replied with:

If there are a million Tesla robo-taxis functioning on the road in 2020, I will eat them. Perhaps @rodneyabrooks will eat half with me?

I readily replied that I would be happy to share the feast!

Luckerson talks about how executives, in general, are backing off from their previous predictions about how close we might be to having truly autonomous vehicles on our roads.  Most interestingly he quotes Chris Urmson:

Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”

Now let’s take note of this. Chris Urmson was the leader of Google’s self-driving car project, which became Waymo around the time he left, and is the CEO of a very well funded self-driving start up. He says “30 to 50 years”. Chris Urmson has been a leader in the autonomous car world since before it entered mainstream consciousness. He has lived and breathed autonomous vehicles for over ten years. No grumpy old professor is he. He is a doer and a striver. If he says it is hard then we know that it is hard.

I happen to agree, but I want to use this reality check for another thread.

If we were to have AGI, Artificial General Intelligence, with human level capabilities, then certainly it ought to be able to drive a car, just like a person, if not better. Now a self driving car does not need to have general human level intelligence, but driving a car is certainly a lower bound on human level intelligence. Urmson, a strong proponent of self driving cars, says 30 to 50 years.

So what does that say about predictions that AGI is just around the corner? And what does it say about it being an existential threat to humanity any time soon? We have plenty of existential threats to humanity lining up to bash us in the short term, including climate change, plastics in the oceans, and a demographic inversion. If AGI is a long way off then we cannot say anything sensible today about what promises or threats it might provide, as we will need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we cannot yet predict.

Do people really say that AGI is just around the corner? Yes, they do…

Here is a press report on a conference on “Human Level AI” that was held in 2018. It reports that 37% of respondents to a survey at that conference said they expected human level AI to be around in 5 to 10 years. Now, I must say that looking through the conference site I see more large hats than cattle, but these are mostly people with paying corporate or academic jobs, and 37% of them think this.

Ray Kurzweil still maintains, in Martin Ford’s recent book, that we will see a human level intelligence by 2029; in the past he has claimed that we will have a singularity by then, as the intelligent machines will be so superior to human level intelligence that they will exponentially improve themselves (see my comments on belief in magic as one of the seven deadly sins in predicting the future of AI). Mercifully the average prediction of the 18 respondents for this particular survey was that AGI would show up around 2099. I may have skewed that average a little, as I was an outlier amongst the 18 people at the year 2200. In retrospect I wish I had said 2300, and that is the year I have been using in my recent talks.

And a survey taken by the Future of Life Institute (warning: that institute has a very dour view of the future of human life, worse than my concerns of a few paragraphs ago) says we are going to get AGI around 2050.

But that is the low end of when Urmson thinks we will have autonomous cars deployed. Suppose he is right about his range. And suppose I am right that autonomous driving is a lower bound on AGI, and I believe it is a very low bound. With these very defensible assumptions, the seemingly sober experts in Martin Ford’s new book are on average wildly optimistic about when AGI is going to show up.

AGI has been delayed.

 

A Better Lesson

rodneybrooks.com/a-better-lesson/

Just last week Rich Sutton published a very short blog post titled  The Bitter Lesson. I’m going to try to keep this review shorter than his post. Sutton is well known for his long and sustained contributions to reinforcement learning.

In his post he argues, using many good examples, that over the 70 year history of AI, more computation and less built in knowledge has always won out as the best way to build Artificial Intelligence systems. This resonates with a current mode of thinking among many of the newer entrants to AI: that it is better to design learning networks and put in massive amounts of computer power than to try to design a structure for computation that is specialized in any way for the task. I must say, however, that at a two day workshop on Deep Learning last week at the National Academy of Sciences, the latter idea was much more in vogue, something of a backlash against exactly what Sutton is arguing.

I think Sutton is wrong for a number of reasons.

  1. One of the most celebrated successes of Deep Learning is image labeling, using CNNs, Convolutional Neural Networks, but the very essence of CNNs is that the front end of the network is designed by humans to manage translational invariance, the idea that objects can appear anywhere in the frame. To have a Deep Learning network also have to learn that seems pedantic in the extreme, and it would drive up the computational costs of the learning by many orders of magnitude.
  2. There are other things in image labeling that suffer mightily because the current crop of CNNs do not have certain things built in that we know are important for human performance. E.g., color constancy. This is why the celebrated example of a traffic stop sign with some pieces of tape on it is seen as a 45 mph speed limit sign by a certain CNN trained for autonomous driving. No human makes that error because they know that stop signs are red, and speed limit signs are white. The CNN doesn’t know that, because the relationship between pixel color in the camera and the actual color of the object is a very complex relationship that does not get elucidated with the measly tens of millions of training images that the algorithms are trained on. Saying that in the future we will have viable training sets is shifting the human workload to creating massive training sets and encoding what we want the system to learn in the labels. This is just as much building knowledge in as it would be to directly build a color constancy stage. It is sleight of hand in moving the human intellectual work to somewhere else.
  3. In fact, for most machine learning problems today a human is needed to design a specific network architecture for the learning to proceed well. So again, rather than have the human build in specific knowledge we now expect the human to build the particular and appropriate network, and the particular training regime that will be used. Once again it is sleight of hand to say that AI succeeds without humans getting into the loop. Rather we are asking the humans to pour their intelligence into the algorithms in a different place and form.
  4. Massive data sets are not at all what humans need to learn things so something is missing. Today’s data sets can have billions of examples, where a human may only require a handful to learn the same thing. But worse, the amount of computation needed to train many of the networks we see today can only be furnished by very large companies with very large budgets, and so this push to make everything learnable is pushing the cost of AI outside that of individuals or even large university departments. That is not a sustainable model for getting further in intelligent systems. For some machine learning problems we are starting to see a significant carbon footprint due to the power consumed during the learning phase.
  5. Moore’s Law is slowing down, so that some computer architects are reporting the doubling time in amount of computation on a single chip is moving from one year to twenty years. Furthermore the breakdown of Dennard scaling back in 2006 means that the power consumption of machines goes up as they perform better, and so we cannot afford to put even the results of machine learning (let alone the actual learning) on many of our small robots–self driving cars require about 2,500 Watts of power for computation–a human brain only requires 20 Watts. So Sutton’s argument just makes this worse, and makes the use of AI and ML impractical.
  6. Computer architects are now trying to compensate for these problems by building special purpose chips for runtime use of trained networks. But they need to lock in the hardware to a particular network structure and capitalize on human analysis of what tricks can be played without changing the results of the computation, but with greatly reduced power budgets. This has two drawbacks. First it locks down hardware specific to particular solutions, so every time we have a new ML problem we will need to design new hardware. And second, it once again is simply shifting where human intelligence needs to be applied to make ML practical, not eliminating the need for humans to be involved in the design at all.
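The translational invariance in point 1 above is worth seeing mechanically: a convolution is *equivariant* to translation, meaning a shifted input produces an identically shifted feature map, so the network never has to relearn a pattern at every position. Here is a minimal pure-Python sketch of that property in 1-D; the signal and kernel values are arbitrary illustrative choices, not anything from a real network.

```python
def conv1d(signal, kernel):
    """Valid 1-D cross-correlation: slide the kernel along the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An "edge" pattern sitting somewhere in an otherwise flat signal.
signal = [0, 0, 0, 5, 5, 0, 0, 0, 0, 0]
kernel = [-1, 1]  # a toy edge-detecting kernel

# The same input shifted right by two positions.
shifted = [0, 0] + signal[:-2]

out1 = conv1d(signal, kernel)
out2 = conv1d(shifted, kernel)

# Equivariance: the feature map of the shifted input is just the shifted
# feature map of the original (up to the positions that fell off the edge).
assert out2[2:] == out1[:-2]
```

A fully connected layer has no such guarantee: it would have to be trained on the pattern at every position separately, which is exactly the extra computation the argument above objects to.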

So my take on Rich Sutton’s piece is that the lesson we should learn from the last seventy years of AI research is not at all that we should just use more computation and that always wins. Rather I think a better lesson to be learned is that we have to take into account the total cost of any solution, and that so far they have all required substantial amounts of human ingenuity. Saying that a particular solution style minimizes a particular sort of human ingenuity that is needed while not taking into account all the other places that it forces human ingenuity (and carbon footprint) to be expended is a terribly myopic view of the world.

This review, including this comment, is seventy six words shorter than Sutton’s post.

Predictions Scorecard, 2019 January 01

rodneybrooks.com/predictions-scorecard-2019-january-01/

On January 1st, 2018, I made predictions (here) about self driving cars, Artificial Intelligence and machine learning, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

So, today, January 1st, 2019, is my first annual self appraisal of how well I did. I’ll try to do this every year for 32 years, if I last that long.

I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below. I have changed the header of the third column in each case to “2018 Comments”, but left the comments exactly as they were, and added a fourth column titled “Updates”. In one case I fixed a typo (about self driving taxis in Cambridgeport and Greenwich Village) in the left most column. I have started highlighting the dates in column two where the time they refer to has arrived, and I am starting to put comments in the updates fourth column.

I will tag each comment in the fourth column with a cyan colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019.
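The yyyymmdd tag format is simple enough to generate mechanically; a trivial sketch, with the function name invented for illustration:

```python
from datetime import date

def comment_tag(d: date) -> str:
    """Render a date as the yyyymmdd tag used in the Updates column."""
    return d.strftime("%Y%m%d")

# The example from the text: June 3rd, 2019 becomes 20190603.
assert comment_tag(date(2019, 6, 3)) == "20190603"
```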

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML meaning “Not In My Lifetime, i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements when it can be determined already whether I was correct or not.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. No Tomato dates yet. But if something happens that I said NIML, for instance then it would go Tomato, or if in 2019 something already had happened that I said NET 2020, then that too would go Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa).

In summary then: Green splashes mean I got things exactly right. Red means provably wrong and that I was too pessimistic. And blueness will mean that I was overly optimistic.
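The coloring rules can be captured as a pair of small functions. This is just my restatement of the scheme described above as code; the function and constant names are made up for illustration, not part of any tooling behind the original post.

```python
GREEN, TOMATO, SKYBLUE, PENDING = "LawnGreen", "Tomato", "DeepSkyBlue", "uncolored"

def net_color(net_year, current_year, happened_year=None):
    """NET = 'no earlier than'. Tomato (too pessimistic) if the event happened
    before net_year; LawnGreen once net_year arrives with nothing having happened."""
    if happened_year is not None and happened_year < net_year:
        return TOMATO
    if current_year >= net_year:
        return GREEN
    return PENDING

def by_color(by_year, current_year, happened_year=None):
    """BY = 'by' that date. LawnGreen if the event happened in time;
    DeepSkyBlue (too optimistic) if the deadline passed without it happening."""
    if happened_year is not None and happened_year <= by_year:
        return GREEN
    if current_year > by_year:
        return SKYBLUE
    return PENDING

# Examples from the text: a BY 2018 that did happen in 2018 goes green,
# and a NET 2019 goes green once 2019 arrives with nothing having happened.
assert by_color(2018, 2019, happened_year=2018) == GREEN
assert net_color(2019, 2019) == GREEN
```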

So now, here are the updated tables. So far none of my predictions have been at all wrong–there is only one direction to go from here!

No predictions have yet been relevant for self driving cars, but I have added one comment in this first table.

Prediction [Self Driving Cars] | Date | 2018 Comments | Updates

A flying car can be purchased by any US resident if they have enough money. | NET 2036 | There is a real possibility that this will not happen at all by 2050. |
Flying cars reach 0.01% of US total cars. | NET 2042 | That would be about 26,000 flying cars given today's total. |
Flying cars reach 0.1% of US total cars. | NIML | |
First dedicated lane where only cars in truly driverless mode are allowed on a public freeway. | NET 2021 | This is a bit like current day HOV lanes. My bet is the left most lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane. |
Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive. | NET 2024 | |
First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day. | NET 2022 | The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only. | 20190101 Although a few such services have been announced, every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service--they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely.
Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities. | NET 2025 | A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense. |
Such "taxi" service as above in 50 of the 100 biggest US cities. | NET 2028 | It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out. |
Dedicated driverless package delivery vehicles in very restricted geographies of a major US city. | NET 2023 | The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles. |
A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment. | NET 2023 | The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure. |
A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area. | NET 2032 | This is what Uber, Lyft, and conventional taxi services can do today. |
Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY. | NET 2035 | Unless parking and human drivers are banned from those areas before then. |
A major city bans parking and cars with drivers from a non-trivial portion of a city so that driverless cars have free rein in that area. | NET 2027, BY 2031 | This will be the starting point for a turning of the tide towards driverless cars. |
The majority of US cities have the majority of their downtown under such rules. | NET 2045 | |
Electric cars hit 30% of US car sales. | NET 2027 | |
Electric car sales in the US make up essentially 100% of the sales. | NET 2038 | |
Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph. | NIML | There might be some small demonstration projects, but they will be just that, not real, viable mass market services. |
First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked. | NIML | Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die. |

Right after the Artificial Intelligence and machine learning table I have some links to back up my assertions.

Prediction [AI and ML] | Date | 2018 Comments | Updates

Academic rumblings about the limits of Deep Learning. | BY 2017 | Oh, this is already happening... the pace will pick up. | 20190101 There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table.
The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play. | BY 2018 | | 20190101 Likewise some technical press stories are linked below.
The popular press starts having stories that the era of Deep Learning is over. | BY 2020 | |
VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning". | NET 2021 | I am being a little cynical here, and of course there will be no way to know when things change exactly. |
Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. | NET 2023, BY 2027 | Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out. |
The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML. | NET 2022 | I wish, I really wish. |
Dexterous robot hands generally available. | NET 2030, BY 2040 (I hope!) | Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years. |
A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc. | Lab demo: NET 2026; Expensive product: NET 2030; Affordable product: NET 2035 | What is easy for humans is still very, very hard for robots. |
A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution. | NET 2028 | There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots. |
A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door. | Lab demo: NET 2025; Deployed systems: NET 2028 | |
A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns. | Lab demo: NET 2023; Deployed systems: 2025 | Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to widespread deployment. |
An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse. | NET 2030 | I will need a whole new blog post to explain this... |
A robot that seems as intelligent, as attentive, and as faithful, as a dog. | NET 2048 | This is so much harder than most people imagine it to be--many think we are already there; I say we are not at all there. |
A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans. | NIML | |

With regard to academic rumblings about deep learning, in 2017 there was a new cottage industry in attacking deep learning by constructing fake images for which a deep learning network gave high scores for ridiculous interpretations. These are known as adversarial attacks on deep learning, and some defenders counterclaim that such images will never arrive in practice.
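The mechanics of such attacks are easy to see on even the simplest possible model. For a linear scorer, nudging every input by a tiny amount in the direction of the sign of the corresponding weight moves the score as much as possible per unit of perturbation; that is the core idea behind gradient-sign attacks. A toy sketch with invented numbers, not an attack on any real network:

```python
def score(weights, x):
    """A linear 'classifier': positive score means class A, negative class B."""
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return (v > 0) - (v < 0)

def adversarial(weights, x, eps):
    """Perturb each input by +/-eps in the direction that raises the score."""
    return [xi + eps * sign(w) for w, xi in zip(weights, x)]

weights = [0.5, -1.0, 0.25, 0.8]   # illustrative weights
x = [0.1, 0.2, -0.3, 0.05]         # an input the model scores as class B

assert score(weights, x) < 0
x_adv = adversarial(weights, x, eps=0.4)
# A small, structured perturbation flips the classification to class A.
assert score(weights, x_adv) > 0
```

In a deep network the weight vector is replaced by the gradient of the score with respect to the image, but the principle is the same, which is why imperceptibly small pixel changes can produce absurd labels.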

But then in 2018 others found images that were completely natural that fooled particular deep learning networks. A group of researchers from Auburn University in Alabama showed how an otherwise well trained network can just completely misclassify objects with unusual orientations, in ways which no human would get wrong at all. Here are some examples:

We humans can see why or how a network might get the first one wrong for instance. It is a large yellow object across a snowy road. But other clues, like the size of the person standing in front of it immediately get us to understand that it is a school bus on its side across the road, and we are looking at its roof.

And here is a paper from researchers at York University and the University of Toronto (both in Toronto) with this abstract:

We showcase a family of common failures of state-of-the art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this “object transplanting”. Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena.

In all their images a human can easily see that an object (e.g., an elephant, say, and hence the very clever title of the paper, “The Elephant in the Room”) has been pasted on to a real scene, and both understand the real scene and identify the object pasted on. The deep learning network can often do neither.

Other academics took to more popular press outlets to express their concerns that the press was overhyping deep learning, and showing what the limits are in reality. There was a piece by Michael Jordan of UC Berkeley in Medium, an op-ed in the New York Times by Gary Marcus and Ernest Davis of NYU and a story on the limits of Google Translate in the Atlantic by Douglas Hofstadter of Indiana University at Bloomington.

As for stories in the technical press, there were many that sounded warning alarms about how deep learning was not necessarily going to be the greatest, most important technical breakthrough in the history of mankind. I must admit, however, that more than 99% of the popular press stories did lean towards that far fetched conclusion, especially in the headlines.

Here is PC Magazine talking about the limits in language understanding, Forbes magazine on the overhyping of deep learning. A national security newsletter quotes a Nobel prizewinner on AI:

Intuition, insight, and learning are no longer exclusive possessions of human beings: any large high-speed computer can be programed to exhibit them also.

This was said by Herb Simon in 1958. The newsletter goes on to warn that overhype is nothing new in AI and that it could well lead to another AI winter. Harvard Magazine reports on the dangers of applying an inadequate AI system to decision making about humans. And many, many outlets reported on an experimental Amazon recruiting tool that learned biases against women candidates from looking at how humans had evaluated CVs.

The press is not yet fully woke with regard to AI, and deep learning in particular, but there are signs and examples of wokeness showing up all over.


Developments in space were the most active for this first year, and fortunately both my optimism and pessimism were well placed and were each rewarded.

Prediction [Space] | Date | 2018 Comments | Updates

Next launch of people (test pilots/engineers) on a sub-orbital flight by a private company. | BY 2018 | | 20190101 Virgin Galactic did this on December 13, 2018.
A few handfuls of customers, paying for those flights. | NET 2020 | |
A regular sub weekly cadence of such flights. | NET 2022, BY 2026 | |
Regular paying customer orbital flights. | NET 2027 | Russia offered paid flights to the ISS, but there were only 8 such flights (7 different tourists). They are now suspended indefinitely. |
Next launch of people into orbit on a US booster. | NET 2019; BY 2021; BY 2022 (2 different companies) | Current schedule says 2018. | 20190101 It didn't happen in 2018. Now both SpaceX and Boeing say they will do it in 2019.
Two paying customers go on a loop around the Moon, launch on Falcon Heavy. | NET 2020 | The most recent prediction has been 4th quarter 2018. That is not going to happen. | 20190101 I'm calling this one now as SpaceX has revised their plans from a Falcon Heavy to their still developing BFR (or whatever it gets called), and predict 2023. I.e., it has slipped 5 years in the last year.
Land cargo on Mars for humans to use at a later date. | NET 2026 | SpaceX has said by 2022. I think 2026 is optimistic but it might be pushed to happen as a statement that it can be done, rather than for a pressing practical reason. |
Humans on Mars make use of cargo previously landed there. | NET 2032 | Sorry, it is just going to take longer than everyone expects. |
First "permanent" human colony on Mars. | NET 2036 | It will be magical for the human race if this happens by then. It will truly inspire us all. |
Point to point transport on Earth in an hour or so (using a BF rocket). | NIML | This will not happen without some major new breakthrough of which we currently have no inkling. |
Regular service of Hyperloop between two cities. | NIML | I can't help but be reminded of when Chuck Yeager described the Mercury program as "Spam in a can". |

 

[FoR&AI] Steps Toward Super Intelligence IV, Things to Work on Now

rodneybrooks.com/forai-steps-toward-super-intelligence-iv-things-to-work-on-now/

[This is the fourth part of a four part essay–here is Part I.]

We have been talking about building an Artificial General Intelligence agent, or even a Super Intelligence agent. How are we going to get there?  How are we going to get to ECW and SLP? What do researchers need to work on now?

In a little bit I’m going to introduce four pseudo goals, based on the capabilities and competences of children. That will be my fourth big list of things in these four parts of this essay.  Just to summarize so the numbers and lists don’t get too confusing here is what I have described and proposed over these four sub essays:

Part I: 4 previous approaches to AI
Part II: 2 new Turing Test replacements
Part III: 7 (of many) things that are currently hard for AI
Part IV: 4 ways to make immediate progress

But what should AI researchers actually work on now?

I think we need to work on architectures of intelligent beings, whether they live in the real world or in cyber space. And I think that we need to work on structured modules that will give the base compositional capabilities, ground everything in perception and action in the world, have useful spatial representations and manipulations, provide enough ability to react to the world on short time scales, and to adequately handle ambiguity across all these domains.

First let’s talk about architectures for intelligent beings.

Currently all AI systems operate within some sort of structure, but it is not the structure of something with ongoing existence. They operate as transactional programs that people run when they want something.

Consider AlphaGo, the program that beat the 18-time world Go champion, Lee Sedol, in March of 2016. The program had no idea that it was playing a game, that people exist, or that there is two dimensional territory in the real world–it didn’t know that a real world exists. So AlphaGo was very different from Lee Sedol who is a living, breathing human who takes care of his existence in the world.

I remember seeing someone comment at the time that Lee Sedol was supported by a cup of coffee. And AlphaGo was supported by 200 human engineers. They provisioned processors in the cloud for it to run on, managed software versions, fed AlphaGo the moves (Lee Sedol merely looked at the board with his own two eyes), played AlphaGo’s desired moves on the board, rebooted everything when necessary, and generally enabled AlphaGo to play at all. That is not a Super Intelligence, it is a super basket case.

So the very first thing we need is programs, whether they are embodied or not, that can take care of their own needs, understand the world in which they live (be it the cloud or the physical world) and ensure their ongoing existence. A Roomba does a little of this, finding its recharger when it is low on power, indicating to humans that it needs its dust bin emptied, and asking for help when it gets stuck. That is hardly the level of self sufficiency we need for ECW, but it is an indication of the sort of thing I mean.

Now about the structured modules that were the subject of my second point.

The seven examples I gave, in Part III, of things which are currently hard for Artificial Intelligence, are all good starting points. But they were just seven that I chose for illustrative purposes. There are a number of people who have been thinking about the issue, and they have come up with their own considered lists.

Some might argue, based on the great success of letting Deep Learning learn not only spoken words themselves but the feature detectors for early processing of phonemes that we are better off letting learning figure everything out. My point about color constancy is that it is not something that naturally arises from simply looking at online images. It comes about in the real world from natural evolution building mechanisms to compensate for the fact that objects don’t actually change their inherent color when the light impinging on them changes. That capability is an innate characteristic of evolved organisms whenever it matters to them. We are most likely to get there quicker if we build some of the important modules ahead of time.
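One crude approximation of the color constancy mechanism being described is the classic "gray world" correction: assume the scene averages out to gray and rescale each color channel accordingly, so that a red stop sign stays red under a yellowish illuminant. A minimal sketch of that built-in module, with invented pixel values:

```python
def gray_world(pixels):
    """Rescale each RGB channel so its mean matches the overall mean,
    crudely discounting the color cast of the illuminant."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]

# A scene lit by yellowish light: red and green channels are inflated.
scene = [(200, 180, 60), (100, 90, 30), (160, 150, 50)]
corrected = gray_world(scene)
```

Real visual systems do something far more sophisticated than this, of course, but the point stands: a few lines of built-in structure buy a capability that a network would otherwise have to discover from enormous numbers of labeled examples.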

And for the hard core learning fetishists here is a question to ask them. Would they prefer that their payroll department, their mortgage provider, or the Internal Revenue Service (the US income tax authority) use an Excel spreadsheet to calculate financial matters for them, or would they trust these parts of their lives to a trained Deep Learning network that had seen millions of examples of spreadsheets and encoded all that learning in weights in a network? You know what they are going to answer. When it comes to such a crunch even they will admit that learning from examples is not necessarily the best approach.

Gary Marcus, who I quoted along with Ernest Davis about common sense in Part III, has talked about his list of modules1 that are most important to build in. They are:

  • Representations of objects
  • Structured, algebraic representations
  • Operations over variables
  • A type-token distinction
  • A capacity to represent sets, locations, paths, trajectories, obstacles and enduring individuals
  • A way of representing the affordances of objects
  • Spatiotemporal contiguity
  • Causality
  • Translational invariance
  • Capacity for cost-benefit analysis

Others will have different explicit lists, but as long as people are working on innate modules that can be combined within a structure of some entity with an ongoing existence and its own ongoing projects, that can be combined within a system that perceives and acts on its world, and that can be combined within a system that is doing something real rather than a toy online demonstration, then progress will be being made.

And note, we have totally managed to avoid the question of consciousness. Whether either ECW or SLP need to be conscious in any way at all is, I think, an open question. And it will remain so as long as we have no understanding at all of consciousness. And we have none!

HOW WILL WE KNOW IF WE ARE GETTING THERE?

Alan Turing introduced The Imitation Game, in his 1950 paper Computing Machinery and Intelligence. His intent was, as he said in the very first sentence of the paper, to consider the question “Can Machines Think?”. He used the game as a rhetorical device to discuss objections to whether or not a machine could be capable of “thinking”. And while he did make a prediction of when a machine would be able to play the game (a 70% chance of fooling a human that the machine was a human in the year 2000), I don’t think that he meant the game as a benchmark for machine intelligence.

But the press, over the years, rather than real Artificial Intelligence researchers, picked up on this game and it became known as the Turing Test. For some, whether or not a machine could beat a human at this parlor game, became the acid test of progress in Artificial Intelligence. It was never a particularly good test, and so the big “tournaments” organized around it were largely ignored by serious researchers, and eventually pretty dumb chat bots that were not at all intelligent started to get crowned as the winners.

Meanwhile real researchers were competing in DARPA competitions such as the Grand Challenge, Urban Grand Challenge (which led directly to all the current work on self driving cars), and the Robot Challenge.

We could imagine tests or competitions being set up for how well an embodied and a disembodied Artificial Intelligence system perform at the ECW and SLP tasks. But I fear that like the Turing Test itself these new tests would get bastardized and gamed. I am content to see the market choose the best versions of ECW and SLP–unlike a pure chatterer that can game the Turing Test, I think such systems can have real economic value. So no tests or competitions for ECWs and SLPs.

I have never been a great fan of competitions for research domains as I have always felt that it leads to group think, and a lot of effort going into gaming the rules. And, I think that specific stated goals can lead to competitions being formed, even when none may have been intended, as in the case of the Turing Test.

Instead I am going to give four specific goals here. Each of them is couched in terms of the competences and capabilities of human children of certain ages.

  • The object recognition capabilities of a two year old.
  • The language understanding capabilities of a four year old.
  • The manual dexterity of a six year old.
  • The social understanding of an eight year old.

Like most people’s understanding of what is pornography or art there is no formal definition that I want to use to back up these goals. I mean them in the way that generally informed people would gauge the performance of an AI system after extended interaction with it, and assume that they would also have had extended interactions with children of the appropriate age.

These goals are not meant to be defined by “performance tests” that children or an AI system might take. They are meant as unambiguous levels of competence. The confusion between performance and competence was my third deadly sin in my recent post about the mistakes people make in understanding how far along we are with Artificial Intelligence.

If we are going to make real progress towards super, or just every day general, Artificial Intelligence then I think it is imperative that we concentrate on general competence in areas rather than flashy hype bait worthy performances.

Down with performance as a measure, I say, and up with the much fuzzier notion of competence as a measure of whether we are making progress.

So what sort of competence are we talking about for each of these four cases?

2 year old Object Recognition competence. A two year old already has color constancy, and can describe things by at least a few color words. But much more than this they can handle object classes, mapping what they see visually to function.

A two year old child can know that something is deliberately meant to function as a chair even if it is unlike any chair they have seen before. It can have a different number of legs, it can be made of different material, its legs can be shaped very oddly, it can even be a toy chair meant for dolls. A two year old child is not fazed by this at all. Despite its having no visual features in common with any other chair the child has ever seen before, the child can declare a new chair to be a chair. This is completely different from how a neural network is able to classify things visually.

But more than that, even, a child can see something that is not designed to function as a chair, and can assess whether the object or location can be used as a chair. They can see a rock and decide that it can be sat upon, or look for a better place where there is something that will functionally act as a seat back.

So two year old children have sophisticated understandings of classes of objects. Once, while I was giving a public talk, a mother felt compelled to leave with her small child who was making a bit of a noisy fuss. I called her back and asked her how old the child was. “Two” came the reply. Perfect for the part of the talk I was just getting to. Live, with the audience watching I made peace with the little girl and asked if she could come up on stage with me. Then I pulled out my key ring, telling the audience that this child would be able to recognize the class of a particular object that she had never seen before. Then I held up one key and asked the two year old girl what it was. She looked at me with puzzlement. Then said, with a little bit of scorn in her voice, “a key”, as though I was an idiot for not knowing what it was. The audience loved it, and the young girl was cheered by their enthusiastic reaction to her!

But wait, there is more! A two year old can do one-shot visual learning from multiple different sources. Suppose a two year old has never been exposed to a giraffe in any way at all. Then seeing just one of a hand drawn picture of a giraffe, a photo of a giraffe, a stuffed toy giraffe, a movie of a giraffe, or seeing one in person for just a few seconds, will forever lock the concept of a giraffe into that two year old’s mind. That child will forever be able to recognize a giraffe as a giraffe, whatever form it is represented in. Most people have never seen a live giraffe, and none have ever seen a live dinosaur, but they are easy for anyone to recognize.

Try that, Deep Learning.  One example, in one form!
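For contrast, here is a minimal sketch, in Python, of roughly how today's machine learning attempts one-shot recognition: metric learning, where the single example becomes a prototype in an embedding space and new images are matched by nearest neighbor. Everything here is an invented stand-in (the "encoder" is just a random projection, the vectors and threshold are made up), not any real trained system:

```python
import numpy as np

# Toy sketch of one-shot recognition via embeddings. A real system uses
# an encoder pretrained on millions of images, and still tends to fail
# when the giraffe switches form (photo -> drawing -> stuffed toy),
# which is exactly the cross-form generalization a two year old gets free.

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 256))        # stand-in for a pretrained encoder

def embed(image_vec):
    v = W @ image_vec
    return v / np.linalg.norm(v)      # unit-length embedding

# One-shot "learning": a single giraffe photo becomes the prototype.
giraffe_photo = rng.normal(size=256)
prototypes = {"giraffe": embed(giraffe_photo)}

def classify(image_vec, threshold=0.5):
    e = embed(image_vec)
    name = max(prototypes, key=lambda n: prototypes[n] @ e)
    return name if prototypes[name] @ e > threshold else "unknown"

print(classify(giraffe_photo))        # the photo matches its own prototype
# A hand drawn giraffe lies far away in raw pixel space, so the same
# system will typically report "unknown" -- the child does not.
giraffe_drawing = rng.normal(size=256)
print(classify(giraffe_drawing))
```

The brittleness is the point: the match only works when the new image lands near the stored example in the learned space, and nothing forces a drawing, a toy, and a photo of a giraffe to land near each other.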

4 year old Language Understanding competence. Most four year old children can not read or write, but they can certainly talk and listen. They well understand the give and take of vocal turn-taking, know when they are interrupting, and know when someone is interrupting them. They understand and use prosody to great effect, along with animation of their faces, heads and whole bodies. Likewise they read these same cues from other speakers, and make good use of both projecting and detecting gaze direction in conversations amongst multiple people, perhaps as side conversations occur.

Four year old children understand when they are in conversation with someone, and (usually) when that conversation has ended, or the participants have changed. If there are three or four people in a conversation they do not need to name who they are delivering remarks to, nor to hear their name at the beginning of an utterance in order to understand when a particular remark is directed at them–they use all the non-spoken parts of communication to make the right inferences.

All of this is very different from interacting with today’s speech agents, such as the Amazon Echo or Google Home. It is also different in that a four year old child can carry the context generated by many minutes of conversation. They can understand incomplete sentences, and can generate short meaningful interjections of just a word or two that make sense in context and push forward everyone’s mutual understanding.

A four year old child, like computer speech understanding systems after the last five years of remarkable Deep Learning driven progress, can pick out speech in noisy environments, tuning out background noise and concentrating on speech directed at them, or on just what they want to hear from another ongoing conversation not directed at them. They can handle strong accents that they have never heard before and still extract accurate meaning in discussions with another person.

They can deduce gender and age from the speech patterns of another, and they are finely attuned to someone they know speaking differently than usual. They can understand shouted, whispered, and sung speech. They themselves can sing, whisper and shout, and often do so appropriately.

And they are skilled in handling complex sentences. They understand many subtleties of tense, and they can talk in and understand hypotheticals. They can engage in and understand nonsense talk, and weave a pattern of understanding through it. They know when they are lying, and can work to hide that fact in their speech patterns.

They are so much more language capable than any of our AI systems, symbolic or neural.

6 year old Manual Dexterity competence. A six year old child, unless some super prodigy, is not able to play Chopin on the piano. But they are able to do remarkable feats of manipulation, with their still tiny hands, that no robot can do. When they see an object for the first time they fairly reliably estimate whether they can pick it up one handed, two handed, with two arms and their whole body (using their stomach or chest as an additional anchor region), or not at all. For a one handed grasp they preshape their hand as they reach towards the object, having decided ahead of time what sort of grasp they are going to use. I’m pretty sure that a six year old can do all these human grasps:

[I do not know the provenance of this image–I found it at a drawing web site here.] A six year old can turn on faucets, tie shoe laces, write legibly, open windows, raise and lower blinds if they are not too heavy, and they can use chopsticks in order to eat, even with non-rigid food. They are quite dexterous. With a little instruction they can cut vegetables, wipe down table tops, open and close food containers, open and close closets, and lift stacks of flat things into and out of those closets.
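That glance-level grasp-mode decision can be caricatured in a few lines. This is purely an illustrative toy; the features, names, and thresholds are all invented, and no real grasp planner is remotely this simple:

```python
# Toy caricature of the one handed / two handed / whole body / not at
# all decision a six year old makes at a glance. All thresholds are
# invented for illustration; they are not from any real system or study.

def choose_grasp(width_cm, mass_kg, hand_span_cm=12, one_hand_max_kg=0.5,
                 two_hand_max_kg=3.0, whole_body_max_kg=8.0):
    if width_cm <= hand_span_cm and mass_kg <= one_hand_max_kg:
        return "one handed"
    if mass_kg <= two_hand_max_kg:
        return "two handed"
    if mass_kg <= whole_body_max_kg:
        return "two arms and whole body"
    return "not at all"

print(choose_grasp(6, 0.2))    # a cup: one handed
print(choose_grasp(30, 2.0))   # a stack of books: two handed
print(choose_grasp(40, 6.0))   # a packed bag: two arms and whole body
print(choose_grasp(60, 20.0))  # a washing machine: not at all
```

The hard part, of course, is not the four-way choice but perceiving width, mass, rigidity, and graspable surfaces of a never-before-seen object in a fraction of a second, which is where robots fall down.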

Six year old children can manipulate their non-rigid clothes, fold them, though not as well as a skilled adult (I am not a skilled adult in this regard…), and manipulate them enough to put them on and off themselves, and their dolls.

Furthermore, they can safely pick up a cat and even a moderately sized dog, and often are quite adept and trustworthy at picking up their very young siblings. They can caress their grandparents.

They can wipe their bums without making a mess (most of the time).

ECW will most likely need to be able to do all these things, with scaled up masses (e.g., lifting or dressing a full sized adult which is beyond the strength capabilities of a six year old child).

We do not have any robots today that can do any of these things in the general case, where a robot is placed in a new environment with new instances of objects that it has not seen before.

Going after these levels of manipulation skill will result in robots backed by new forms of AI that can do the manual tasks that we expect of humans, and that will be necessary for giving care to other humans.

8 year old Social Understanding competence. By age eight children are able to articulate their own beliefs, desires, and intentions, at least about concrete things in the world. They are also able to understand that other people may have different beliefs, desires, and intentions, and when asked the right questions can articulate that too.

Furthermore, they can reason about what they believe versus what another person might believe, and articulate that divergence. A particular test for this is known as the “false-belief task”. There are many variations on this, but essentially what happens is that an experimenter lets a child watch as a person observes that Box A contains, say, a toy elephant, and that Box B is empty. That person leaves the room, and the experimenter then, in full sight of the child, moves the toy elephant to Box B. They then ask the child which box contains the toy elephant, and of course the child says Box B. But the crucial question is to ask the child where the person who left the room will look for the toy elephant when they are asked to find it after they have come back into the room. Once the child is old enough (and there are many experiments and variations here) they are able to tell the experimenter that the person will look in Box A, knowing that this is based on a belief the person holds which is now factually false.
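The bookkeeping behind the false-belief task can be sketched in a few lines: keep the true world state separate from the other person's (now stale) belief, and answer the crucial question from the latter. The names here are illustrative only; representing beliefs this explicitly is of course nothing like how a child does it:

```python
# Minimal sketch of belief tracking for the false-belief task.
world = {"toy_elephant": "Box A"}

# Sally sees the elephant in Box A, then leaves the room.
sally_belief = dict(world)

# The elephant is moved while Sally is away; her belief is NOT updated.
world["toy_elephant"] = "Box B"

def where_is_it():                 # the child's own (true) answer
    return world["toy_elephant"]

def where_will_sally_look():       # requires modeling Sally's belief
    return sally_belief["toy_elephant"]

print(where_is_it())            # Box B
print(where_will_sally_look())  # Box A
```

Passing the task means answering the second question from the stale copy, not the true state; young enough children answer both questions with "Box B".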

There is a vast literature on this and many other aspects of understanding other people, and also a vast literature on testing such knowledge, not only in very young children but also in chimpanzees, dogs, birds, and other animals, probing what they might understand–without the availability of language these experiments can be very hard to design.

And there are many many aspects of social understanding, including inferring a person’s desire or intent from their actions, and understanding why they may have those desires and intents. Some psychological disorders are manifestations of not being able to make such inferences. But in our normal social environment we assume a functional capability in many of these areas about others with whom we are interacting. We don’t feel the need to explain certain things to others as surely they will know from what they are observing. And we also observe the flow of knowledge ourselves and are able to make helpful suggestions as we see people acting in the world. We do this all the time, pointing to things, saying “over there”, or otherwise being helpful, even to complete strangers.

Social understanding is the juice that makes us humans into a coherent whole. And, we have versions of social understanding for our pets, but not for our plants. Eight year old children have enough of it for much of every day life.

Improvement in Competence will lead the way

These competencies of two, four, six, and eight year old children will all come into play for ECW and SLP. Without these competencies, our intelligent systems will never seem natural or as intelligent as us. With these competencies, whether they are implemented in ways copied from humans or not (birds vs airplanes) our intelligent systems will have a shot at appearing as intelligent as us. They are crucial for an Artificial General Intelligence system, or for anything that we will be willing to ascribe Super Intelligence to.

So, let’s make progress, real progress, not simple hype bait, on all four of these systems level goals. And then, for really the first time in sixty years, we will actually be part way towards machines with human level intelligence and competence.

In reality it will just be a small part of the way, and even less of the way towards Super Intelligence.

It turns out that constructing deities is really really hard. Even when they are in our own image.



1 “Innateness, AlphaZero, and Artificial Intelligence”, Gary Marcus, submitted to arXiv, January 2018.

[FoR&AI] Steps Toward Super Intelligence III, Hard Things Today

rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/

[This is the third part of a four part essay–here is Part I.]

If we are going to develop an Artificial Intelligence system as good as a human, an ECW or SLP say, from Part II of this essay, and if we want to get beyond that, we need to understand what current AI can hardly do at all. That will tell us where we need to put research effort, and where that will lead to progress towards our Super Intelligence.

The seven capabilities that I have selected below start out as concrete, but get fuzzier and fuzzier and more speculative as we proceed. It is relatively easy to see the things that are close to where we are today and can be recognized as things we need to work on. When those problems get more and more solved we will be living in a different intellectual world than we do today, dependent on the outcomes of that early work. So we can only speak with conviction about the short term problems where we might make progress.

And by short term, I mean the things we have already been working on for forty plus years, sometimes sixty years already.

And there are lots of other things in AI that are equally hard to do today. I just chose seven to give some range to my assertion that there is lots to do.

1. Real perception

Deep Learning brought fantastic advances to image labeling. Many people seem to think that computer vision is now a solved problem. But that is nowhere near the truth.

Below is a picture of Senator Tom Carper, ranking member of the U.S. Senate Committee on Environment and Public Works, at a committee hearing held on the morning of Wednesday June 13th, 2018, concerning the effect of emerging autonomous driving technologies on America’s roads and bridges.

He is showing what is now a well known particular failure of a particular Deep Learning trained vision system for an autonomous car. The stop sign on the left has a few carefully placed marks on it, made from white and black tape. The system no longer identifies it as a stop sign, but instead thinks that it is a forty five mile per hour speed limit sign. If you squint enough you can sort of see the essence of a “4” at the bottom of the “S” and the “T”, and sort of see the essence of a “5” at the bottom of the “O” and the “P”.

But really how could a vision system that is good enough to drive a car around some of the time ever get this so wrong? Stop signs are red! Speed limit signs are not red. Surely it can see the difference between signs that are red and signs that are not red?

Well, no. We think redness of a stop sign is an obvious salient feature because our vision systems have evolved to exhibit color constancy. Under different lighting conditions the same object in the world reflects different colored light–if we just zoom in on a pixel of something that “is red”, it may not have a red value in the image from the camera. Instead our vision system uses all sorts of cues, including detecting shadows, knowing things about what color a particular object “should” be, and local geometric relationships between measured colors in order for our brain to come up with a “detected color”. This may be very different from the color that we get from simply looking at the red/green/blue values of pixels in a camera image.

The data sets that are used to train Deep Learning systems do not have detailed color labels for little patches of the image. And the computations for color constancy are quite complex, so they are not something that the Deep Learning systems simply stumble upon.
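To make the pixel-versus-perceived distinction concrete, here is a sketch of the simplest classical color constancy heuristic, the "gray world" assumption: assume the scene averages to gray, attribute any overall color cast to the illuminant, and divide it out. This is far weaker than what human vision does (no shadow detection, no object knowledge, no local geometry), and the reflectance and illuminant numbers below are invented for illustration:

```python
import numpy as np

# Gray-world color constancy: estimate the illuminant as the scene's
# average color, then divide the cast out of every pixel.

illuminant = np.array([0.15, 0.5, 0.95])   # strong bluish light
surfaces = np.array([[0.9, 0.2, 0.2],      # stop-sign red
                     [0.2, 0.9, 0.2],      # green
                     [0.2, 0.2, 0.9],      # blue
                     [0.7, 0.7, 0.7]])     # gray; scene mean is gray
camera = surfaces * illuminant             # what the pixels record

def gray_world_correct(pixels):
    est = pixels.mean(axis=0)              # estimated illuminant cast
    return pixels * (est.mean() / est)     # divide the cast out

corrected = gray_world_correct(camera)
print(camera[0])     # raw "red sign" pixel: blue channel is largest
print(corrected[0])  # red channel dominates again after correction
```

Under the bluish light the raw "red" pixel is bluer than it is red, which is exactly why "just check if the sign is red" is not a computation a network gets for free from labeled images.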

Look at the synthetic image of a 5×5 checkerboard below, produced by Professor Ted Adelson at MIT. We can see, and say, that it is a checkerboard because it is made up of squares that alternate between black and white, or at least relatively darker and lighter. But wait, they are not squares in the image at all. They are squished. Our brain is extracting three dimensional structure from this two dimensional image, and guessing that it is really a flat plane of squares that is at a non-orthogonal angle to our line of sight–that explains the consistent pattern of squishing we see. But wait, there is more. Look closely at the two squished squares that are marked “same” in this image. One is surely black and one is surely white. Our brains will not let us see the truth, however, so I have done it for your brain.

Here I grabbed a little piece of image from the top (black) square on the left and the bottom (white) square in the middle.


In isolation neither is clearly black nor white. Our vision system sees a shadow being cast by the green cylinder and so lightens up our perception of the one we see as a white square. And it is surrounded by even darker pixels in the shadowed black squares, so that adds to the effect. The third patch above is from the black square between the two labeled as the same and is from the part of that square which falls in the shadow. If you still don’t believe me print out the image and then cover up all but the regions inside the two squares in question. They will then pop into being the same shade of grey.
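The effect can be caricatured numerically: perceived lightness tracks local contrast with the surround rather than the raw pixel value. The gray values below are invented, and the surround-subtraction is a crude cartoon of early vision, not a model of it:

```python
import numpy as np

# Two target squares with the IDENTICAL gray value, embedded in
# different surrounds. A crude surround-subtraction flips the sign of
# the response, which is the direction of the illusion: same pixels,
# opposite appearance.

target = 0.5                         # identical gray in both squares
dark_surround = np.full(8, 0.2)      # shadowed black squares around it
light_surround = np.full(8, 0.8)     # sunlit white squares around it

def perceived(center, surround):
    return center - surround.mean()  # sign of local contrast

print(perceived(target, dark_surround))   # > 0: looks light
print(perceived(target, light_surround))  # < 0: looks dark
```

Covering up the surrounds, as the text suggests with the printout, is exactly removing the context that drives this computation, so the two patches collapse to the same gray.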

For more examples like this see the blue (but red) strawberries from my post last year on what is it like to be a robot?.

This is just one of perhaps a hundred little (or big) tricks that our perceptual system has built for us over evolutionary time scales. Another one is extracting prosody from people’s voices, compensating automatically for background noise, our personal knowledge of that person and their speech patterns, and more generally from simply knowing their gender, age, what their native language is, and perhaps knowing where they grew up. It is effortless for us, but it is something that lets us operate in the world with other people, and limits the extent of our stupid social errors. Another is how we are able to estimate space from sound, even when listening over a monaural telephone channel–we can tell when someone is in a large empty building, when they are outside, when they are driving, when they are in wind, just from qualities of the sound as they speak. Yet another is how we can effortlessly recognize people from a picture of their face, less than 32 pixels on a side, including often a younger version of them that we never met, nor have seen in photos before. We are incredibly good at recognizing faces, and despite recent advances we are still better than our programs. The list goes on.

Until ECW and SLP have the same hundred or so tricks up their sleeves they are not going to understand the world in the way that we do, and, critically, they are not going to be able to relate to our world in the way that we do, so neither of them will be able to do their assigned tasks. They will come off as blatant doofuses. When doddering Rodney, struggling for a noun that he can’t retrieve, says to ECW “That red one, over there!” it will not do ECW much good unless it can map red to something that may not appear red at all in terms of pixels.

2. Real Manipulation

I can reach my hand into my pants pocket and pull out my car keys blindly and effortlessly. I am not letting a robot near my pants pocket any time soon.

Dexterous manipulation has turned out to be fiendishly hard, and making dexterous hands is no easier. People always ask me what it would take to make significant progress. If I knew that I would have tried it long ago.

Soon after I arrived at the Stanford Artificial Intelligence Laboratory in 1977 I started programming a couple of robot arms. Below is a picture of the “Gold Arm”, one of the two that I programmed, in a display case at one of the entrances to the Computer Science Department building at Stanford. Notice the “hand”, parallel fingers that slide together and apart. That was all we had for hands back then.

And below is a robot hand that my company was selling forty years later, in 2017. It is the same fundamental mechanical design: a ball screw moving the two fingers of a parallel jaw gripper together and apart, with some soft material on the inside of the fingers (it has fallen off one finger in the 1977 robot above). That is all we have now. Not much has happened practically with robot hands for the last four decades.

Beyond that, however, we can not make our robot hands perform anywhere near the tasks that a human can do. In fulfillment centers, the places that pack our orders for online commerce, the movement to a single location of all the items to be packed for a given order has been largely solved. Robots bring shelves full of different items to one location. But then a human has to pick the correct item off of each shelf, and a human has to pack them into a shipping box, deciding what packing material makes sense for the particular order. The picking and packing has not been solved by automation, despite an economic motivation as strong as that which once drove the quest to turn lead into gold, a motivation that is pushing lots of research into this area.

Even more so, the problem of manipulating floppy materials, like fabrics for apparel manufacture, or meat to be carved, or humans to be put to bed, has seen very little progress. Our robots just can not do this stuff. That is alright for SLP but a big problem for ECW.

By the way, I always grimace when I see a new robot hand being shown off by researchers, and rather than being on the end of a robot arm, the wrist of the robot hand is in the hands of a human who is moving the robot hand around. You have probably used a reach grabber, or seen someone else use one. Here is a random image of one that I grabbed (with my mouse!) off an e-commerce website:

If you have played around with one of these, with its simple plastic two fingers and only one grasping motion, you will have been much more dexterous than any robot hand in the history of robotics. So even with this simple gripper, and a human brain behind it, and with no sense of touch on the distal fingers, we get to see how far off we are with robot grasping and manipulation.

3. Read a Book

Humans communicate skills and knowledge through books and more recently through “how to” videos. Although you will find recent claims that various “robots”, or AI systems, can learn from a video or from reading a book, none of these demonstrations have the level of capability of a child, and the approaches people are taking are not likely to generalize to human level competence. We will come back to this point shortly.

But in the meantime, here is what an AI system would need to be able to do if it were to have human level competence at reading books in general. Or truly learn skills from watching a video.

Books are not written as mathematical proofs where all the steps are included. Actually mathematical proofs are not written that way either. We humans fill in countless steps as we read, incorporating our background knowledge into the understanding process.

Why does this work? It is because humans wrote the books, and implicitly know what background knowledge all human readers will have, so they write with that shared background assumed. Surely an AI system reading a book will need to have that same background.

“Hold on”, the machine learning “airplanes not birds” fanboys say! We should expect Super Intelligences to read books written for Super Intelligences, not those written for measly humans. But that claim, of course, has two problems. First, if it really is a Super Intelligence it should be able to understand what mere humans can understand. Second, we need to get there from here, so somehow we are going to have to bootstrap our Super progeny, and the ones writing the books for the really Super ones will first need to learn from books written for measly humans.

But now, back to this background knowledge. It is what we all know about the world and can expect one another to know about the world. For instance, I don’t feel the need to explain to you right now, dear reader, that the universe of intelligent readers and discussants of ideas on Earth at this moment are all members of the biological species Homo Sapiens. I figure you already know that.

This could be called “common sense” knowledge. It is necessary for so much of our (us humans) understanding of the world, and it is an assumed background in all communications between humans. Not only that, it is an enabler of how we make plans of action.

Two NYU professors, Ernest Davis (computer science) and Gary Marcus (psychology and neural science) have recently been highlighting just how much humans rely on common sense to understand the world, and what is missing from computers. Besides their recent opinion piece in the New York Times on Google Duplex they also had a long article2 about common sense in a popular computer science magazine. Here is the abstract:

Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin? These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning require the same kinds of real-world knowledge and reasoning abilities. For instance, if you see a six-foot-tall person holding a two-foot-tall person in his arms, and you are told they are father and son, you do not have to ask which is which. If you need to make a salad for dinner and are out of lettuce, you do not waste time considering improvising by taking a shirt out of the closet and cutting it up. If you read the text, “I stuck a pin in a carrot; when I pulled the pin out, it had a hole,” you need not consider the possibility “it” refers to the pin.

As they point out, so called “common sense” is important for even the most mundane tasks we wish our AI systems to do for us. It enables both Google and Bing to do this translation: “The telephone is working. The electrician is working.” in English becomes “Das Telefon funktioniert. Der Elektriker arbeitet.” in German. The two meanings of “working” in English need to be handled differently in German: an electrician works in one sense, whereas a telephone works in another sense. Without this common sense, somehow embedded in an AI system, it is not going to be able to truly understand a book. But this example is only a tiny one step version of common sense.
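That one common sense atom can be written down as a toy rule: the German verb depends on whether the subject is an agent (a person works, "arbeitet") or a device (a machine functions, "funktioniert"). The hard-coded animacy set below is an invented stand-in for the open-ended world knowledge a real translator needs, which is the whole point:

```python
# Toy word-sense rule for "is working" -> German. The real problem is
# that the ANIMATE set must cover the open-ended world, not three nouns.

ANIMATE = {"electrician", "doctor", "child"}

def translate_working(subject):
    return "arbeitet" if subject in ANIMATE else "funktioniert"

print(f"Das Telefon {translate_working('telephone')}.")
print(f"Der Elektriker {translate_working('electrician')}.")
```

One such rule is trivial; composing thousands of them consistently across a paragraph of context is where translation systems come apart.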

Correctly translating even 20 or 30 words can require a complex composition of little common sense atoms. Douglas Hofstadter pointed out in a recent Atlantic article places where things can in short order get just too complicated for Google translate, despite the progress that deep learning has enabled. In his examples it is context over many sentences that gets the systems into trouble. Humans handle these cases effortlessly. Even four year olds (see Part IV of this post).

He says, when comparing how he translates to how Google translates:

Google Translate is all about bypassing or circumventing the act of understanding language.

I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental “halo” has been realized—only when the elusive bubble of meaning is floating in my brain—do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising.

In the second paragraph he touches on the idea of gaining meaning from running simulations of scenes in his head. We will come back to this in the next item of hardness for AI.  And elsewhere in the article he even points out how when he is translating he uses Google search, a compositional method that Google translate does not have access to.

Common sense lets a program, or a human, prune away irrelevant considerations. A program may be able to exhaustively come up with many many options about what a phrase or a situation could mean, all the realms of possibility. What common sense can do is quickly reduce that large set to a much smaller set of plausibility, and beyond that narrow things down to those cases with significant probability. From possibility to plausibility to probability. When my kids were young they used to love to tease dad by arguing for possibilities as explanations for what was happening in the world, and tie me into knots as I tried to push back with plausibilities and probabilities. It was a great game.
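That funnel, from possibility to plausibility to probability, can be sketched as a filter-then-rank step. The candidate readings come from the Davis and Marcus pin-and-carrot example quoted above; the probabilities attached to them are invented for illustration:

```python
# Candidate referents for "it" in "I stuck a pin in a carrot; when I
# pulled the pin out, it had a hole." Numbers are made up.
possibilities = {
    "the carrot has a hole": 0.95,
    "the pin has a hole": 0.04,
    "the room has a hole": 0.01,
}

# Possibility -> plausibility: common sense discards the wildly unlikely.
plausible = {r: p for r, p in possibilities.items() if p > 0.02}

# Plausibility -> probability: rank what is left.
best = max(plausible, key=plausible.get)
print(best)  # -> the carrot has a hole
```

The sketch hides the real problem, of course: where those numbers come from. Assigning them requires exactly the world knowledge the section is about.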

This common sense has been a long standing goal for symbolic artificial intelligence. Recently the more rabid Deep Learners have claimed that their systems are able to learn aspects of common sense, and that is sometimes a little bit true. But unfortunately it does not come out in a way that is compositional–it usually requires a human to interpret the result of an image or a little movie that the network generates in order for the researchers to demonstrate that it is common sense. The onus, once again, is on the human interpreter. Without composition, it is not likely to be as useful or as robust as the human capabilities we see in quite small children.

The point here is that simply reading a book is very hard, and requires a lot of what many people have called “common sense”. How that common sense should be engendered in our AI systems is a complex question that we will return to in Part IV.

Now back to claims that we already have AI systems that can read books.

Not too long ago an AI program outperformed MIT undergraduates on the exam for Freshman calculus. Some might think that that means that soon AI programs will be doing better on more and more classes at MIT and that before too long we’ll have an AI program fulfilling the degree requirements at MIT. I am confident that it will take more than fifty years. Supremely confident, and not just because an MIT undergraduate degree requires that each student pass a swimming test. No, I am supremely confident on that time scale because the program, written by Jim Slagle1 for his PhD thesis with Marvin Minsky, outperformed MIT students in 1961. 1961! That is fifty seven years ago already. Mainframe computers back then were way less than what we have now in programmable light switches or in our car key fob. But an AI program could beat MIT undergraduates at calculus back then.

When you see an AI program touted as having done well on a Japanese college entrance exam, or passing a US 8th grade science test, please do not think that the AI is anywhere near human level and going to plow through the next few tests. Again this is one of the seven deadly sins: mistaking performance on a narrow task, taking the test, for competence at a general level. A human who passes those tests does it in a human way that means that they have a general competence around the topics in the test. The test was designed for humans, and inherent in the way it is designed, it extracts information about the competence of a human who took the test. And the test designers did not even have to think about it that way. It is just the way they know how to design tests. (Although we have seen how “teaching to the test” degrades that certainty even for human students, which is why any human testing regime eventually needs to get updated or changed completely.) But that test is not testing the same thing for an AI system. Just like a stop sign with a few pieces of tape on it may not look at all like a stop sign to a Deep Learning system that is supposed to drive your car.

At the same time the researchers, and their institutional press offices, are committing another of the seven deadly sins. They are trying to demonstrate that their system is able to “read” or “understand” by demonstrating performance on a human test (despite my argument above that the tests are not valid for machines), and then they claim victory and let the press grossly overgeneralize.

4. Diagnose and Repair Stuff

If ECW is going to be a useful elder care robot in a home it ought to be able to figure out when something has gone wrong with the house. At the very least it should be able to know which specialist to call to come and fix it. If all it can do is say “something is wrong, something is wrong, I don’t know what”, we will hardly think of it as Super Intelligent. At the very least it should be able to notice that the toilet is not flushing so the toilet repair person should be called. Or that a light bulb is out so that the handy person should be called. Or that there is no electricity at all in the house so that should be reported to the power company.

We have no robots that could begin to do these simple diagnosis tasks. In fact I don’t know of any robot that would realize when the roof had blown off a house that they were in and be able to report that fact. At best today we could expect a robot to detect that environmental conditions were anomalous and shut themselves down. But in reality I think it is more likely that they would continue trying to operate (as a Roomba might after it has run over a dog turd with its rapidly spinning brushes–bad…) and fail spectacularly.

But more than what we referred to as common sense in the previous section, it seems that when humans diagnose even simple problems they are running some sort of simulation of the world in their heads, looking at possibilities, plausibilities, and probabilities. It is not exactly the accurate 3D models that traditional robotics uses to predict the forces that will be felt as a robot arm moves along a particular trajectory (and thereby notice when it has hit something unexpected and the predictions are not borne out by the sensors). It is much sloppier than that, although geometry may often be involved. Nor is it the simulation as a 2D movie that some recent papers in Deep Learning suggest is the key; instead it is very compositional across domains. And it often uses metaphor. This simulation capability will be essential for ECW to provide full services as a trusted guardian of the home environment for an elderly person. And SLP will need such generic simulations to check out how its design for people flow will work in its design of the dialysis ward.
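A caricature of such simulation-based diagnosis, at its very simplest: predict what the sensors should report in a healthy house, compare with what is observed, and map the mismatch to a specialist. The fault table and sensor names below are invented, and vastly simpler than the cross-domain, metaphor-laden world model the text argues ECW would actually need:

```python
# Toy expected-vs-observed diagnosis. Real diagnosis needs a model rich
# enough to explain WHY readings diverge, not a three-row lookup table.

EXPECTED = {"mains_power": True,
            "toilet_flushes": True,
            "light_on_when_switched": True}

WHO_TO_CALL = {"mains_power": "power company",
               "toilet_flushes": "plumber",
               "light_on_when_switched": "handy person"}

def diagnose(observed):
    faults = [k for k, v in EXPECTED.items() if observed.get(k) != v]
    if "mains_power" in faults:    # no power also explains dead lights
        return "power company"
    return WHO_TO_CALL[faults[0]] if faults else "all fine"

print(diagnose({"mains_power": True, "toilet_flushes": False,
                "light_on_when_switched": True}))  # plumber
```

Even this toy needs one causal step (a power failure explains the dead lights, so don't call the handy person), and that kind of explanatory chaining, over an open-ended set of faults, is what no current robot has.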

Again, our AI systems and robots may not have to do things exactly the way we do them, but they will need to have the same general competence as, or more than, humans if we are going to think of them as being as smart as us.

Right now there are really no systems that have either common sense or this general purpose simulation capability. That is not to say that people have not worked on these problems for a long long time.  I was very impressed by a paper on this topic at the very first AI conference I ever went to, IJCAI 77 held at MIT in August 1977.  The paper was by Brian Funt, and was WHISPER: A Problem-Solving System Utilizing Diagrams and a Parallel Processing Retina. Funt was a post doc at Stanford with John McCarthy, the namer of Artificial Intelligence and the instigator of the foundational 1956 workshop at Dartmouth. And McCarthy’s first paper on “Programs with Common Sense” was written in 1958. We have known these problems are important for a long long time. People have made lots of worthwhile progress on them over the last few decades. They still remain hard and unsolved and not ready for prime time deployment in real products.

“But wait”, you say.  You have seen a news release about a robot building a piece of IKEA furniture. Surely that requires common sense and this general purpose simulation. Surely it is already solved and Super Intelligence is right around the corner. Again, don’t hold your breath–fifty years is a long time for a human to go without oxygen. When you see such a demo it is with a robot and a program that has been worked on by many graduate students for many months. The pieces were removed from the boxes by the graduate students (months ago). They have run the programs again, and again, and again, and finally may have one run where it puts some parts of the furniture together. The students were all there, all making sure everything went perfectly. This is completely different from what we might expect from ECW, taking delivery of some IKEA boxes at the door, carrying them inside (with no graduate students present), opening the boxes and taking out the famous IKEA instructions and reading them. And then putting the furniture together.

It would be very helpful if ECW could do these things. Any robot today put in this situation will fail dismally on many of the following steps (and remember, this is a robot in a house that the researchers have never seen).

  • realizing there is a delivery being made at the house
  • getting the stuff up any steps and inside
  • actually opening the boxes without knowing exactly what is inside and without damaging the parts
  • finding the instructions, and manipulating the paper to see each side of each page
  • understanding the instructions
  • planning out where to place the pieces so that they are available in the right order
  • manipulating two or three pieces at once when they need to be joined
  • finding and retrieving the right tools (screwdrivers, hammers to tap in wooden dowels)
  • doing that finely skilled manipulation

Not one of these subtasks can today be done by a robot in some unknown house with a never before seen piece of IKEA furniture, and without a team of graduate students having worked for months on the particular instance of that subtask in the particular environment.

When academic researchers say they have solved a problem, or demonstrated a robot capability, that is a long, long way from the level of performance we will expect from ECW.

Here is a little part of a short paper that just came out3 in the AAAI’s (Association for the Advancement of Artificial Intelligence) AI Magazine this summer, written by Alexander Kleiner about his transition from being an AI professor to working in AI systems that had to work in the real world, every day, every time.

After I left academe in 2014, I joined the technical organization at iRobot. I quickly learned how challenging it is to build deliberative robotic systems exposed to millions of individual homes. In contrast, the research results presented in papers (including mine) were mostly limited to a handful of environments that served as a proof of concept.

Academic demonstrations are important steps towards solving these problems. But they are demonstrations only. Brian Funt demonstrated a program that could imagine the next few seconds of the future, forty-one years ago, before computer graphics displays were commonplace (his 1977 paper uses line printer output of fixed width characters to produce diagrams). That was a good early step. But despite the decades of hard work we are still not there yet, by a long way.

5. Relating Human and Robot Behavior to Maps

As I pointed out in my  what is it like to be a robot? post, our home robots will be able to have a much richer set of sensors than we do. For instance they can have built in GPS, listen for Bluetooth and Wifi, and measure people’s breathing and heartbeat a room away4 by looking for subtle changes in Wifi signals propagating through the air.

Our self-driving cars (such as they are; none is yet really self-driving) rely heavily on GPS for navigation. But GPS now gets spoofed as a method of attack, and worse, some players may decide to bring down one or more GPS satellites in a state sponsored act of terrorism.

Things will be really bad for a while if GPS goes down. For one thing the electrical grid will need to be partitioned into much more local supplies as GPS is used to synchronize the phase of AC current in distant parts of the network. And humans will be lost quite a bit until paper maps once again get printed for all sorts of applications. E-commerce deliveries will be particularly badly hit for a while, as well as flight and boat navigation (early 747s had a window in the roof of the cockpit for celestial navigation across the Pacific; the US Naval Academy brought back into its curriculum navigation by the stars in 2016).

Whether it is spoofing, an attack on satellites, or just lousy reception, we would hope that our elder care robots, our ECWs, are not taken offline. They will be, unless they get much better at visual and other navigation without relying at all on hints from GPS. This will also enable them to work in rapidly changing environments where maps may not be consistent from one day to the next, nor necessarily be available.

But this is just the start. Maps, including terrain and 3D details, will be vital for ECW to be able to decide where it can get its owner to walk, travel in a wheelchair, or move within a bathroom. This capability is not so hard for current traditional robotics approaches. But for SLP, the Services Logistics Planner, it will need to be a lot more generic. It will need to relate 3D maps that it builds in its plans for a dialysis ward to how a hypothetical human patient, or a group of hypothetical staff and patients, will together and apart navigate around the planned environment. It will need to build simulations, by itself, with no human input, of how groups of humans might operate.

This capability, of projecting actions through imagined physical spaces is not too far off from what happens in video games. It does not seem as far away as all the other items in this blog post. It still requires some years of hard work to make systems which are robust, and which can be used with no human priming–that part is far away from any current academic demonstrations.

Furthermore, being able to run such simulations will probably contribute to aspects of “common sense”, but it all has to be much more compositional than the current graphics of video games, and much more able to run with both plausibility and probability, rather than just possibility.

This is not unlike the previous section on diagnosis and repair, and indeed there is much commonality. But here we are pushing deeper on relating the three dimensional aspects of the simulation to reality in the world. For ECW it will be the actual world as it is. For SLP it will be the world as it is designing it, for the future dialysis ward, and constraints will need to flow in both directions so that after a simulation, the failures to meet specifications or desired outcomes can be fed back into the system.

6. Write or Debug a Computer Program

OK, I admit I am having a little fun with this section, although it is illustrative of human capabilities and forms of intelligence. But feel free to skip it; it is long and a little technical.

Some of the alarmists about Super Intelligence worry that when we have it, it will be able to improve itself by rewriting its own code. And then it will exponentially grow smarter than us, and so, naturally, it will kill us all. I admit to finding that last part perplexing, but be that as it may.

You may have seen headlines like “Learning Software Learns to Write Learning Software”. No it doesn’t. In this particular case there was a fixed, human-written algorithm that went through a process of building a particular form of Deep Learning network, and a learning network that learned how to adjust the parameters of that algorithm, which ended up determining the size, connectivity, and number of layers. It didn’t write a single line of computer code.

So, how do we find our way through such a hyped up environment and how far away are we from AI systems which can read computer code, debug it, make it better, and write new computer code? Spoiler alert: about as far away as it is possible to be, like not even in the same galaxy, let alone as close as orbiting hundreds of millions of miles apart in the same solar system.

Each of today’s AI systems is many millions of lines of code. They have been written by many, many people through shared libraries, along with, for companies delivering AI based systems, perhaps a few million lines of custom and private code bases. They usually span many languages such as C, C++, Python, Javascript, Java, and others. The languages used often have only informal specifications, and in the last few years new languages have been introduced with alarming frequency and different versions of the languages have different semantics. It’s all a bit of a mess, to everyone except the programmers whose lives revolve around these details.

On top of this we have known since Turing introduced the halting problem in 1936 that it is not possible for computers to know certain rather straightforward things about how any given program might perform over all possible inputs. In 1967 Minsky warned that even for computers with relatively small amounts of memory (about what we expect in a current car key fob) figuring out some things about their programs would take longer than the life of the Universe, even with all the Universe doing the computing in parallel.

Humans are able to write programs with some small amount of assuredness that they will work by using heuristics in analyzing what the program might do. They use various models and experiments and mental simulations to prove to themselves that their program is doing what they want. This is different from proof.

When computers were first developed we needed computer software for them. We quickly went from programmers having to enter the numeric codes for each operation of the machine, to assemblers where there is a one to one correspondence between what the programmers write and that numeric code, but at least they get to write it in human readable text, like “ADD”. Then quickly after that came compilers, where the language expressed the computation as a higher level model of an abstract machine and the compiler turned that into the assembler language for any particular machine. There have been many attempts, really since the 1960s, to build AI systems which are a level above that, and can generate code in a computer language from a higher level description, in English say.

The reality is that these systems can only generate fairly generic code and have difficulty when complex logic is needed. The proponents of these systems will argue about how useful they are, but the reality is that the human doing the specifying has to move from specifying complex computer code to specifying complex mathematical relationships.

Real programmers tend to use spatial models and their “simulating the world” capabilities to reason through what code should be produced, and which cases should be handled in which way. Often they will write long lists of cases, in pseudo English so that they can keep track of things, and (if the later person who is to maintain the code is lucky) put that in comments around the code. And they use variable names and procedure names that are descriptive of what is going to be computed, even though that makes no difference to the compiler. For instance they might use StringPtr for a pointer to a string, where the compiler would have been just as happy if they had used M, say. Humans use the name to give themselves a little help in remembering what is what.

People have also attempted to write AI systems to debug programs, but they rarely try to understand the variable names, and simply treat them as anonymous symbols, just as the compiler does.

An upshot of this has been “formal” programming methods which require humans to write mathematical assertions about their code, so that automated systems can have a chance at understanding it. But this is even more painful than writing computer code, and even more buggy than regular computer code, and so it is hardly ever done.

So our Super Intelligence is going to deal with existing code bases, and some of the stuff in there will be quite ugly.

Just for fun I coded up a little library routine in C–I use a library routine with the exact same semantics in another language that I regularly program in. And then I got rid of all the semantics in the variable, procedure and type names. Here is the code.  It is really only one line. And, it compiles just fine using the GCC compiler and works completely correctly.

a*b(a*c) {a*d; a*e;
  for(d=NULL;c!=NULL;e=(a*)*c,*c=(a)d,d=c,c=e);return d;}

I sent it to two of my colleagues who are used to groveling around in build systems and open source code in libraries, asking if they could figure out what it was. I had made it a little hard by not giving them a definition of “a”.

They both figured out immediately that “a” must be a defined type. One replied that he had some clues, and started out drawing data structures and simulating the code, but then moved to experimenting by compiling it (after guessing at a definition for “a”) and writing a program that called it. He got lots of segment violations (i.e., the program kept crashing), but guessed that it was walking down a linked list. The second person said that he stared at the code and realized that “e” was a temporary variable whose use was wrapped around assignments of two others which suggested some value swapping going on. And that the end condition for the loop being when “c” became NULL, suggested to him that it was walking down a list “c”, but that list itself was getting destroyed. So he guessed it might be doing an in place list reversal, and was able to set up a simulation in his head and on paper of that and verify that it was the case.

When I gave each of them the equivalent and original form of the code with the  informative names (though I admit to a little bit of old fashioned use of equivalences in the type definition) restored, along with the type definition for “a”, now called “address”, they both said it was straightforward to simulate on paper and verify what was going on.

#define address unsigned long long int

address *reverse(address *list) {
 address *rev;
 address *temp;
 for(rev=NULL;list!=NULL;temp=(address *)*list,
                         *list=(address)rev,
                         rev=list,list=temp);
 return rev;}

The reality is that variable names and comments, though irrelevant to the actual operation of the code, are where a lot of the semantic explanation of what is going on is encoded. Simply looking at the code itself is unlikely to give enough information about how it is used. And if you look at the total system then any sort of reasoning process about it soon becomes intractable.

If anyone had already built an AI system which could understand either of the two versions of my procedure above it would be an unbelievably useful tool for  every programmer alive today. That is what makes me confident we have nothing that is close–it would be in everyone’s IDE (Integrated Development Environment) and programmer productivity would be through the roof.

But you might think my little exercise was a bit too hard for our poor Super Intelligence (the one its proponents think will be wanting to kill us all in just a few years–poor Super Intelligence). But really you should not underestimate how badly written are the code bases on which we all rely for our daily life to proceed in an ordered way.

So I did a different, second experiment, this time just on myself.

Here is a piece of code I just found on my Macintosh, under a directory named TextEdit, in a file named EncodingManager.m. I wasn’t sure what a file extension of “.m” meant in terms of language, but it looked like C code to me. I looked only at this single procedure within that file, nothing else at all, but I can tell a few things about it, and the general system of which it is part. Note that the only words here that are predefined in C are static, int, const, void, if, and return. Everything else must be defined somewhere else in the program, but I didn’t look for the definitions, just stared at this little piece of code in isolation. I guarantee that there is no AI program today which could deduce what I did, in just a few minutes, in the italic text following the code.

/* Sort using the equivalent Mac encoding as the major key. Secondary key is the actual encoding value, which works well enough. We treat Unicode encodings as special case, putting them at top of the list.
*/

static int encodingCompare(const void *firstPtr, const void *secondPtr) {
    CFStringEncoding first = *(CFStringEncoding *)firstPtr;
    CFStringEncoding second = *(CFStringEncoding *)secondPtr;
    CFStringEncoding macEncodingForFirst = CFStringGetMostCompatibleMacStringEncoding(first);
    CFStringEncoding macEncodingForSecond = CFStringGetMostCompatibleMacStringEncoding(second);
    if (first == second) return 0; // Should really never happen
    if (macEncodingForFirst == kCFStringEncodingUnicode || macEncodingForSecond == kCFStringEncodingUnicode) {
        if (macEncodingForSecond == macEncodingForFirst) return (first > second) ? 1 : -1; // Both Unicode; compare second order
        return (macEncodingForFirst == kCFStringEncodingUnicode) ? -1 : 1; // First is Unicode
    }
    if ((macEncodingForFirst > macEncodingForSecond) || ((macEncodingForFirst == macEncodingForSecond) && (first > second))) return 1;
    return -1;
}

First, the comment at the top is slightly misleading as this is not a sort routine; rather it is a predicate which is used by some sorting procedure to decide whether any two given elements are in the right order. It takes two arguments and returns either 1 or -1, depending on which order they should be in the sorted output from that sorting procedure which we haven’t seen yet. We have to figure out what those two possibilities mean. I know that TextEdit is a simple text file editor that runs on the Macintosh. It looks like there are a bunch of possible encodings for elements of strings inside TextEdit, and on the Macintosh there is a non-identical set of possible encodings. I’m guessing that TextEdit must run on other systems too! This particular predicate takes the encoding values for the general encodings and says which of the ones closest to each of them on the Macintosh is better to use. And it prefers encodings where only a single byte per character is used. The encodings themselves, both for the general case, and for the Macintosh, are represented by an integer. Based on the third sentence in the first comment, and on the return value where the comment is “First is Unicode”, it looks like this predicate returning -1 means its first argument should precede (i.e., appear closer to the “top of the list”–an inference I am making from “top” being used to refer to the end of a list that precedes all the other elements of the list; whether it is actually represented elsewhere as a classical list as in my first example of code above, or as a sorted array, is immaterial and this piece of code does not depend on that) the second argument in the sort; otherwise, if it returns 1, then the second argument should precede the first argument. If the integer for the Macintosh encoding is smaller, that means it should come first, and if the Macintosh encodings are equal, then whether the integer representing the general case encoding is smaller determines the order. All this is subject to single byte representations always winning out.

That is a lot of things to infer about what is actually a pretty short piece of code. But it is the sort of thing that makes it so that humans can build complex systems, in the way that all our current software is built.

It is the sort of thing that any Super Intelligence bent on self improvement through code level introspection is going to need in order to understand the code that has been cobbled together by humans to produce it. Without understanding its own code it will not be able to improve itself by rewriting its own code.

And we do not have any AI system which can understand even this tiny, tiny little bit of code from a simple text editor.

7. Bond With Humans

Now we get to the really speculative place, as this sort of thing has only been worked on in AI and robotics for around 25 years. Can humans interact with robots in a way in which they have true empathy for each other?

In the 1990s my PhD student Cynthia Breazeal used to ask whether we would want the then future robots in our homes to be “an appliance or a friend”. So far they have been appliances. For Cynthia’s PhD thesis (defended in the year 2000) she built a robot, Kismet, an embodied head, that could interact with people. She tested it with lab members who were familiar with robots and with dozens of volunteers who had no previous experience with robots, and certainly not with a social robot like Kismet.

I have put two videos (cameras were much lower resolution back then) from her PhD defense online.

In the first one Cynthia asked six members of our lab group to variously praise the robot, get its attention, prohibit the robot, and soothe the robot. As you can see, the robot has simple facial expressions, and head motions. Cynthia had mapped out an emotional space for the robot and had it express its emotion state with these parameters controlling how it moved its head, its ears and its eyelids. A largely independent system controlled the direction of its eyes, designed to look like human eyes, with cameras behind each retina–its gaze direction is both emotional and functional in that gaze direction determines what it can see. It also looked for people’s eyes and made eye contact when appropriate, while generally picking up on motions in its field of view, and sometimes attending to those motions, based on a model of how humans seem to do so at the preconscious level. In the video Kismet easily picks up on the somewhat exaggerated prosody in the humans’ voices, and responds appropriately.

In the second video, a naïve subject, i.e., one who had no previous knowledge of the robot, was asked to “talk to the robot”. He did not know that the robot did not understand English, but instead only detected when he was speaking along with detecting the prosody in his voice (and in fact it was much better tuned to prosody in women’s voices–you may have noticed that all the human participants in the previous video were women). Also he did not know that Kismet only uttered nonsense words made up of English language phonemes but not actual English words. Nevertheless he is able to have a somewhat coherent conversation with the robot. They take turns in speaking (as with all subjects he adjusts his delay to match the timing that Kismet needed so they would not speak over each other), and he successfully shows it his watch, in that it looks right at his watch when he says “I want to show you my watch”. It does this because instinctively he moves his hand to the center of its visual field and makes a motion towards the watch, tapping the face with his index finger. Kismet knows nothing about watches but does know to follow simple motions. Kismet also makes eye contact with him, follows his face, and when it loses his face, the subject re-engages it with a hand motion. And when he gets close to Kismet’s face and Kismet pulls back he says “Am I too close?”.

Note that when this work was done most computers ran at only about 200MHz, a tiny fraction of what they run at today, and with only about one thousandth of the RAM we expect on even our laptops today.

One of the key takeaways from Cynthia’s work was that with just a few simple behaviors the robot was able to engage humans in human like interactions. At the time this was the antithesis of symbolic Artificial Intelligence, which took the view that speech between humans was based on “speech acts” where one speaker is trying to convey meaning to another. That is the model that Amazon Echo and Google Home use today. Here it seemed that social interaction, involving speech, was built on top of lower level cues on interaction. And furthermore that a human would engage with a physical robot if there were some simple and consistent cues given by the robot.

This was definitely a behavior-based approach to human speech interaction.

But is it possible to get beyond this? Are the studies correct that try to show that people engage better with an embodied robot than with a disembodied graphics image, or with a listening/speaking cylinder in the corner of the room?

Let’s look at the interspecies interaction that people engage in more than any other.

This photo was in a commentary in the issue of Science that published a paper5 by Nagasawa et al, in 2015. The authors show that as oxytocin concentration rises for whatever reason in a dog or its owner, the one with the newly higher level engages more in making eye contact. And then the oxytocin level in the other individual (dog or human) rises. They get into a positive feedback loop of oxytocin levels mediated by the external behavior of each in making sustained eye contact.

Cynthia Breazeal did not monitor the oxytocin levels in her human subjects as they made sustained eye contact with Kismet, but even without measuring it I am quite sure that the oxytocin level did not rise in the robot. The authors of the dog paper suggest that in their evolution, while domesticated, dogs stumbled upon a way to hijack an interaction pattern that is important for human nurturing of their young.

So robots, and Kismet was a good start, could certainly be made to hijack that same pathway and perhaps others. It is not how cute they look, nor how similar they look to a human (Kismet is very clearly non-human); it is how easy it is to map their behaviors to ones for which we humans are primed.

Now here is a wacky thought. Over the last few years we have learned how many species of bacteria we carry in our gut (our microbiome), on our skin, and in our mouths. Recent studies suggest all sorts of effects of just which bacterial species we have, and how that influences and is influenced by sexual attraction and even non-sexual social compatibility. And there is evidence of transfer of bacterial species between people. What if part of our attraction to dogs is related to or moderated by transfer of bacteria between us and them? We do not yet know if that is the case. But if it is, it may doom our social relationships with robots from ever becoming as strong as with dogs. Or people. At least, that is, until we start producing biological replicants as our robots, and by then we will have plenty of other moral pickles to deal with.

With that, we move to the next installment of our quest to build Super Intelligence, Part IV, things to work on now.



1 “A Heuristic Program that Solves Symbolic Integration Problems in Freshman Calculus”, James R. Slagle, in Computers and Thought, Edward A. Feigenbaum and Julian Feldman, McGraw-Hill, New York, NY, 1963, 191–206, adapted from his 1961 PhD thesis in mathematics at MIT.

2 “Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence”, Ernest Davis and Gary Marcus, Communications of the ACM, (58)9, September 2015, 92–103.

3 “The Low-Cost Evolution of AI in Domestic Floor Cleaning Robots”, Alexander Kleiner, AI Magazine, Summer 2018, 89–90.

4 See Dina Katabi’s recent TED talk from 2018.

5 “Oxytocin-gaze positive loop and the coevolution of human-dog bonds”, Miho Nagasawa, Shouhei Mitsui, Shiori En, Nobuyo Ohtani, Mitsuaki Ohta, Yasuo Sakuma, Tatsushi Onaka, Kazutaka Mogi, and Takefumi Kikusui, Science, volume 348, 17th April, 2015, 333–336.