AGI Denialism?

Olle Häggström has coined a new phrase: “superintelligence deniers”! This is clever because it instantly puts his opponents into the same camp as climate deniers or evolution deniers. But there is a curious asymmetry: while most climate scientists and most evolutionary biologists are not deniers, most AI researchers do indeed seem to be “deniers”!

This, however, says less about AI researchers and much more about Olle’s total misunderstanding of the position of most AI researchers.

Mainstream AI researchers are not at all “deniers” in any sense that Olle impugns. Explaining their position requires clarifying Olle’s misunderstanding. First, there is a very basic confusion about what the term “artificial general intelligence” (AGI) is supposed to mean, since there is no clear and unambiguous definition of what “intelligence” is. If “intelligence” means creating artificial agents that can reason and accomplish tasks as humans can, then AI researchers could hardly be called deniers, since that is exactly the main goal driving their research! AI research is very much in the tradition of Alan Turing, namely, practical, hard-headed engineering, eschewing vague philosophical ramblings. In the language Olle uses in the post above, AI researchers are very much Gallant, not Goofus; in fact, the overwhelming majority are explicit Bayesians in everything they do!

Almost no AI researcher is going to claim there is any obstacle in principle that makes it impossible for an artificial agent or robot to do something a human can. Unlike creationists, no AI researcher believes there is a special “soul” that only humans possess; they are very much materialists with both feet firmly on the ground. For the same reason, they also mostly eschew any discussion of “consciousness”, since that is again something hard to define and pin down, and is in any case pretty irrelevant to the actual problems they are trying to solve in their research.

Olle describes AI researchers as adopting a Goofus position: they reject the existence of a superintelligent agent because one cannot be produced on request. He claims a typical AI researcher’s claim is: “I haven’t been able to build a superintelligent agent, hence it’s impossible.” Such a claim is indeed as ridiculous as it sounds, but it is a total strawman, since no AI researcher makes it. What the researcher may have been trying to convey to Olle is that it is extremely hard to build an intelligent agent, and so far we are very, very far from it, notwithstanding all the impressive engineering advances of AI. One of the world’s foremost experts in robotics, Rodney Brooks, has written a long series of posts on this (which I had recommended to Olle long ago). See also Stanislas Dehaene’s recent book “How We Learn” for a perspective from a cognitive science point of view.

It may indeed be hard for a person outside the field to appreciate the struggles involved in building AI systems, and quite easy to misunderstand research achievements, as revealed, for example, by Olle’s reaction to the GPT-3 technology. Not surprisingly, it is mostly people who have never worked on any real engineering problem in AI who indulge in discussions of “superintelligence”!
