Rodney Brooks

Robots, AI, and other stuff

AI/ML Is Not Uniquely Powerful Enough To Need Controlling

rodneybrooks.com/aiml-is-not-uniquely-powerful-enough-to-need-controlling/

Note: This short post is intended as a counterpoint to some claims that are being made about the need to control AI research. I don’t directly refer to those claims. You can figure it out. 

When humans next land on the Moon it will be with the help of many, many Artificial Intelligence and Machine Learning systems.

Last time we got to the Moon and back without AI or ML.

I think this highlights the fact that current versions of AI and ML are just technologies. Different technologies can get to the same goal.

Some AI/ML researchers are making a big fuss about how their work needs to be regulated because it is uniquely powerful. I disagree that it is uniquely powerful. Current day AI and ML are nothing like the intelligence or learning possessed by biological systems. They are both very narrow slices of the whole system. They are not particularly powerful.

Modern day Prometheuses rely on all sorts of technologies. Neither AI nor ML gives them a particular leg up, despite how exciting those technologies might seem to current practitioners. It is the goal of a Prometheus that is important, not the particular technological tools that are used to achieve that goal.

Point 1: Swarms of killer drones could just as well be developed without any “AI”, using other technologies. We got to the Moon, and built precise cruise missiles, without any technologies that we would today call AI or ML¹. We can develop “slaughterbots” without using anything that practitioners today would call AI or ML. So banning AI or ML in weapons systems will not change outcomes. It is futile. If you don’t like the sorts of things those weapons systems do, then work to ban the things they do, not the particular and very fungible technologies that are just one of many ways to produce that behavior.

Earlier this week, on December 18th, Twitter user @ewschaetzle sent out a quote from H. P. Lovecraft from 1928, saying it “seems to capture the (misguided) fear that some have expressed toward AI”:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

I have not found the full quote elsewhere, but here is a partial version of it.

I like this quote a lot.

Three months ago, in a long essay blog post (and in a better-edited version in Technology Review), I pointed out seven common mistakes that people are making in predicting the future of AI, and by implication, the future of ML. In general they are vastly overestimating both its current power and how quickly it will develop.

Lovecraft’s words give a rationale for why this overestimation leads many otherwise sensible, and even brilliant, entrepreneurs, physicists, and others to say that AI in general is incredibly dangerous and that we must control its development. It is complex, and they get scared.

Point 2: If one wants to legislate control of “AI research or development” in some way, then one must believe that those rules or laws will change at least one person’s behavior in some way. Without some change in behavior there is no point to legislation or rules, beyond smug self-satisfaction that such laws or rules have been enacted. My question to those who say we should have these rules is: Show me one explicit change of behavior that you would like to see. Tell me who would have to do what differently than they currently are doing, and how that would impact the future. Tell me how it would make the world safer from AI.

So far I have not seen anyone suggest any explicit law or rule. All I have heard is “we must control it”. How? Let alone why?



¹ Someone on Twitter disagreed with my claim that we got to the Moon without ML by saying that Kalman filters, which were developed for navigation in the Apollo missions, use Bayesian statistics, and that therefore we did use ML to get to the Moon. That is a silly argument. ML today, and what the term ML refers to, is much, much more than Kalman filters, which were developed as state estimators, not as anything to do with learning from datasets. There is no pre-learned anything in using Kalman filters.
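To make that distinction concrete, here is a minimal sketch of a one-dimensional Kalman filter in Python. The noise values are made up purely for illustration (they are not anything from Apollo); the point is that every number comes from a hand-written model of the system, with no dataset and no training phase anywhere.

```python
# A minimal one-dimensional Kalman filter. Every parameter below is
# chosen by hand from a model of the system -- nothing is learned from
# a dataset. (The specific numbers are illustrative, not from Apollo.)
import random

def kalman_step(x_est, p_est, z, q=0.01, r=0.25):
    """One predict/update cycle for a scalar state.

    x_est, p_est: prior state estimate and its variance
    z:            a new noisy measurement
    q, r:         hand-chosen process and measurement noise variances
    """
    # Predict: the model says the state persists, plus process noise.
    x_pred, p_pred = x_est, p_est + q
    # Update: blend prediction and measurement, weighted by the Kalman
    # gain computed from the two variances.
    k = p_pred / (p_pred + r)
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

# Usage: track a constant true value of 1.0 through noisy readings.
x, p = 0.0, 1.0
for _ in range(100):
    z = 1.0 + random.gauss(0.0, 0.5)  # simulated noisy sensor
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # settles near 1.0, with no training phase at all
```

All of the filter’s “knowledge” sits in the hand-written prediction model and the two variance constants; a modern ML system would instead fit its parameters from recorded data, which is exactly the difference at issue.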
