Tilting at Windmills?

Olle Häggström
Here Be Dragons
Oxford University Press 2016.

In his acclaimed book Reasons and Persons, the British philosopher Derek Parfit develops a theory of personal identity according to which a person at one instant is not the same person an instant later. The author of the book under review is a self-professed admirer of Parfit and seems intent on demonstrating the veracity of this theory by personal example. Many of us have known Olle Häggström as a world-famous probabilist working in the arcane area of percolation theory, but the author of this book, subtitled "Science, Technology and the Future of Humanity", seems to be somebody altogether different. Indeed, Olle has run a hyperactive blog for a number of years, engaging in polemics (sometimes bad-tempered) on issues ranging from religion and rationality in the early years, to climate change and, most recently, artificial intelligence (AI) and its consequences. This book could be seen as the culmination of that effort.

Personally, I welcome this metamorphosis. I have often been struck by how narrow and blinkered my fellow academics can be, focused on their next paper deadline and rarely deigning to look at other research areas or other disciplines, let alone thinking about the larger problems facing humanity. Doesn't scientific curiosity entail a natural interest in new scientific discoveries and technological breakthroughs? Shouldn't their training and skills in marshalling facts and exposing logical flaws be brought to bear on issues of larger consequence? If so, there can be no sharper mind for dissecting flawed arguments and marshalling evidence than Olle Häggström's, as anyone who knows him will readily testify.

The starting premise of the book is that while science and technology are usually regarded as positive forces of benefit to humanity as a whole, this is neither guaranteed nor always true. Indeed, they sometimes have the potential for catastrophic destruction; the most dramatic example that comes to mind is nuclear weapons. Thus it is worth the effort to think carefully about the consequences of scientific discoveries and technological inventions, and to steer them in beneficial directions. The book aims to chart emerging and future advances in science and technology and warn of possible dangers to humanity, just as ancient map makers warned explorers of dragons; hence the title of the book. How well does the book succeed in this goal?

I have to return a mixed verdict. To continue the theme of Parfitian identities, there seem to be several Olles who authored this book. Olle the hard-nosed scientist, who marshals evidence and dissects arguments rationally, receives my unqualified admiration. Then there is Olle whose angst over existential risk leads him to downgrade, or miss entirely, other (perhaps not existential) risks of utmost and pressing importance. Finally, there are Olle the logic puzzler ("The Doomsday Argument") and Olle the moral philosopher ("is" versus "ought"), who may be entertaining, depending on your inclination, but contribute little of substance to a discussion of specific advances and their risks. I will not comment further on the parts written by the latter two identities.

The best parts of the book for me are clearly Chapters 2 and 6. In Chapter 2, the author addresses the threat of climate change and does a superb job of surveying and summarizing the key points of the underlying scientific issues, including planetary mechanics (the Milankovitch cycles), radiative forcing (the Stefan-Boltzmann law) and, crucially, the various feedback cycles involving greenhouse gases (most importantly carbon dioxide). The discussion and schematic diagram on p. 22 summarize the cause-and-effect and feedback mechanisms very clearly. If you have a climate denier in the room, you need do no more than read out these few pages, which masterfully summarize the results of the most authoritative scientific studies, and send the denier slinking off with his tail between his legs. Later, in Chapter 10, the author also discusses the economics of climate change, with a clear account of the discounting mechanisms used by economists, of why the influential Stern Review uses a much lower discount rate than most mainstream economists, and of the consequences this has for policy recommendations. There is little to add to this discussion, except to recommend also reading the similarly accessible account in Nicholas Stern's recent call to arms, Why Are We Waiting? The author observes that the obvious and urgent action needed is to cut our greenhouse gas emissions, but is frustrated that progress has been very slow, attributing this to a lack of will on the part of politicians. This is surely a bit naive; the reason for slow progress is fundamentally a systemic problem tied to the structure of the economy and society we live in. The author has chosen not to get into this argument. Instead, in the last few sections, he considers more radical geo-engineering techno-fixes, such as spraying sulphur dioxide into the stratosphere to act as a shield against solar radiation. In the end, the effort devoted to these suggested techno-fixes is worth it, because the author argues convincingly that they do not offer a real long-term solution.
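The discounting point can be made concrete with a back-of-the-envelope sketch (my own illustration, not from the book): the present value of a damage D incurred T years from now at discount rate r is D/(1+r)^T, and the choice of r dominates everything else in the calculation. The two rates below are merely illustrative of the low-versus-mainstream divide.

```python
def present_value(damage: float, rate: float, years: int) -> float:
    """Discounted present value of a damage incurred `years` from now."""
    return damage / (1 + rate) ** years

# One trillion dollars of climate damage a century from now,
# under two illustrative discount rates:
low = present_value(1e12, 0.014, 100)         # Stern-style low rate (~1.4%)
mainstream = present_value(1e12, 0.043, 100)  # a more typical market-based rate

print(f"Low rate:        ${low / 1e9:.0f} billion")
print(f"Mainstream rate: ${mainstream / 1e9:.0f} billion")
```

With these numbers, the same trillion-dollar future damage is worth roughly $249 billion today under the low rate but only about $15 billion under the higher one, an order-of-magnitude swing that goes a long way towards explaining the policy disagreement.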

Chapter 6 is titled "What is Science?". There are two valuable themes discussed here. The first is the Popperian criterion of falsifiability that is often given as a defining feature of a scientific theory: that it makes predictions which can be tested empirically and which, if refuted, falsify the theory. Popper came to this criterion in response to certain arguments by the disciples of Freud and Marx, who always seemed able to retro-fit any bit of evidence to their theories. While recognizing that Popperian falsifiability is a valuable guide to critical thinking in science, the author gives some nice examples of how it can be too crude and rigid if applied mindlessly, and of how science has not adhered to it historically either. It is a pity that the book went to press before the recent high-profile controversy in physics generated by a letter in Nature ("Defend the Integrity of Physics", Dec. 2014) about the fact that string theory, multiverses and other theories in modern physics are untestable even in principle. A conference was organized recently (Dec. 2015) at which physicists and philosophers were brought together to discuss these issues and the nature of the scientific method. Perhaps the author may enjoy continuing the discussion in sections 6.3 and 6.4 in the light of these controversies in future editions. The philosopher Richard Dawid, who organized that conference, has a "No Alternatives Theorem" in support of these untestable theories that centrally involves (albeit with a very strange twist!) the other major theme discussed in this chapter, namely statistical methods: in particular statistical significance, the contentious p-values and Bayesian methodology. Statistical significance and p-values have generated no end of controversy recently:
while some journals reject papers simply because they don't include p-values, others have decided to ban p-values entirely from their pages! This reflects just how much confusion and misunderstanding surrounds the concept. In Section 6.6, the author gives a wonderfully clear primer on what p-values are and what they are not, using a simple coin-tossing example. He follows this up with a discussion of decision theory and, finally, the Bayesian approach, hinting at how all of these are crucially needed in making sense of the world. Here again, a lot more could be said: for example, the rather terse discussion of confidence intervals on page 161 could be expanded, and examples could be given from real-world settings of both use and misuse. The author himself was recently involved in highlighting elementary blunders in the statistical analysis of a recent study in Sweden. Finally, all of this is extremely important for the new discipline variously labelled "Data Science", "Big Data" or "The Fourth Paradigm", which is permeating all of modern science today, from high-throughput experiments in biology to medical image analysis and modern high-volume radio astronomy. This calls for a next book from the author, and I have a suggestion for how to make it an instant bestseller that he will find appealing: start each chapter with an xkcd cartoon, then explain the statistical principles behind it, and finally illustrate them with real-world examples.
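For readers without the book to hand, the flavour of such a coin-tossing primer can be conveyed in a few lines of code (my own toy example, not the author's): the p-value is the probability, under the null hypothesis of a fair coin, of an outcome at least as extreme as the one actually observed.

```python
from math import comb

def two_sided_p_value(n: int, k: int, p: float = 0.5) -> float:
    """Exact two-sided binomial p-value for observing k heads in n tosses
    of a coin with heads-probability p (the fair-coin null hypothesis)."""
    # Probability of each possible head count under the null
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Sum the probabilities of all outcomes at least as far
    # from the expected count n*p as the observed count k
    dev = abs(k - n * p)
    return sum(prob for i, prob in enumerate(pmf) if abs(i - n * p) >= dev)

# 60 heads in 100 tosses of a supposedly fair coin
print(round(two_sided_p_value(100, 60), 4))  # ≈ 0.0569
```

Sixty heads in a hundred tosses looks lopsided, yet the two-sided p-value comes out at about 0.057, just above the conventional 5% threshold; borderline results like this are exactly the kind that fuel the confusion the author dissects.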

Perhaps the threat that has exercised the author most in recent years is that of superintelligence: the emergence of computing machines that achieve a level of intelligence equalling or exceeding human intelligence. The threat is that should machines achieve such capabilities, they would endanger the existence of humans. This is no longer just the stuff of movies like The Terminator but has permeated both academic discourse and the popular media. The Oxford philosopher Nick Bostrom has written a densely argued book called Superintelligence, and a number of celebrity scientists such as Stephen Hawking and Frank Wilczek have written in the Independent warning of the dangers of AI.

This part of the book should be the one of most interest to someone like myself, since we have been building up Sweden's sole research group in core machine learning (ML) (though there are many groups applying ML in different domains). Like the author and many of my fellow computer scientists who consider ourselves the intellectual descendants of Alan Turing, I do not believe the human brain is the sole, or even close to the optimal, way of organizing matter to perform intelligent computation; rather, it is proof that it can be done by natural processes. Thus there is also no obstacle, in principle, to matter being organized to exceed human capabilities, and indeed to this being done by … humans. The so-called singularity argument, first enunciated by the British mathematician I.J. Good and later popularized by techno-visionaries like Vernor Vinge and Ray Kurzweil, goes along these lines: our machines get exponentially better (in hardware speed, for example), and so would eventually reach human capabilities, at which point there is an "intelligence explosion". If this is a possibility, then certainly there is a risk that such an intelligence could become inimical to human existence, even if this happens as an indirect side effect rather than by explicit design.

It is one thing to suggest a possibility and quite another to frame it as an object of specific scientific study. The author has engaged in a debate on his blog with David Sumpter, a respected applied mathematician who described the intelligence explosion and other futuristic arguments as "nonsense", by which he means, literally, not related to any sense data. The author goes over the exchange in section 6.5 and perhaps scores a technical win on points (though I would declare Sumpter the winner in spirit). He poses two hypotheses: (C1), that there will be an intelligence explosion in the not too distant future, and (C2), that there will be substantial climate change in the same time frame. Most rational people (which thus excludes climate deniers) would agree that (C2) is a valid question for scientific study, and he argues that, by the same criterion, so is (C1). However, he wisely adds a qualification at the end of the section: "since climate change is an incomparably more well developed science than that of a future intelligence explosion … (C2) stands on incomparably firmer ground than (C1)." Alas, he does not take this further and ask where exactly the "science of intelligence explosion" stands.

I submit that an essential prerequisite for being taken seriously on a subject is to engage deeply with its technical literature, as indeed the author has done with climate change in Chapter 2. One reason we both like Daniel Dennett, the philosopher of biology, is that he engages deeply with modern scientific advances in evolution and genetics. We should set the same standards for philosophers of AI. Here one is sorely disappointed by the almost total absence of engagement with the recent advances in machine learning and AI. The result is that secondary philosophical sources are quoted (p. 107), such as the claim by the philosopher David Chalmers that human-level AI will be possible, and that the most promising routes are via whole brain simulation and genetic algorithms. The first is particularly unfortunate, coming just as the much-touted European Flagship Human Brain Project has run aground in bitter controversy, with many prominent neuroscientists staging a revolt and denying that whole brain simulation, even if possible, will lead to any insights into human intelligence. The suggestion of genetic algorithms is also a blooper that betrays total ignorance of the techniques behind the recent dramatic breakthroughs in speech, image and vision technologies. Even if one didn't follow the proceedings of the premier venues where these results have been presented (for example, the NIPS, ICML, NAACL, ACL and CVPR conferences), one could hardly have missed the headlines splashed everywhere these days about "Deep Learning".

I am particularly surprised at this apparent total lack of awareness of the recent advances, because almost all of them are based on probabilistic techniques, which are the special forte of the author. Indeed, he is part of a large and prestigious research programme in the Mathematical Statistics Department at Chalmers, sponsored by the Wallenberg Foundation, whose central theme is precisely the new area of high-dimensional "Big Data" statistics that has played a major role in the recent advances in machine learning. (And there are also lesser mortals like us at the same university engaged in this research area.) This disconnect from the research frontiers of the subject seems to afflict the futurist field more generally: consider the survey by Müller and Bostrom quoted (p. 106) as representative of the views of AI researchers on when the singularity might be expected to occur. The survey is based on responses from attendees of a philosophy conference, the Artificial General Intelligence (AGI) conference and, strangest of all, a national conference of Greek scientists. It misses completely the premier venues of machine learning and AI research (NIPS, ICML, AAAI), where, amongst other leading research, the recent breakthrough results have been presented. As if a sensible poll of this sort were not hard enough to conduct, this one loses all credibility through its choice of respondents.

Thus, while I am happy to join the author and sign the open letter published by the Future of Life Institute (FLI) calling for research on how to make AI more robust and beneficial, a necessary starting point is to understand the basis and capabilities of current technologies. Otherwise it is hard to see how one could even begin to specify a concrete scientific programme along those lines. The author could do no better than to start with the review of ML in Science (July 17, 2015) by two of its most influential representatives, Michael Jordan and Tom Mitchell. I could also mention that our group at Chalmers successfully applied for grants from the FLI international programme, but I must admit frankly that it is somewhat far-fetched to see the connection to taming a futuristic unknown AI. I must also note here one of the rare inaccuracies in the book: on p. 231, the author claims that the FLI open letter calls for "moving ahead more slowly and cautiously in AI research". This would be absurd and foolish at the current juncture, and the letter in fact does not say this; rather, it calls for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial".

On the other hand, there is a very real, direct, well-understood and immediate threat to humanity at large posed by these "increasingly capable AI systems": the threat of automation. The author touches on this in section 4.4, mentioning the influential book The Second Machine Age by the MIT economists Erik Brynjolfsson and Andrew McAfee. This book and others like it, such as the well-argued Rise of the Robots by the IT professional Martin Ford (following his earlier, prescient The Lights in the Tunnel) and Machines of Loving Grace by the journalist John Markoff, give a range of examples of AI technologies at work in the everyday economy and society, showcasing both their capabilities and their consequences. Here is a case where the technology is well understood and the risks are clear and urgent, especially to western societies, with the effects likely to play out over the next 5-10 years. Various reports have estimated that up to 50% of jobs in western economies like the US and Sweden are in danger of being eliminated by automation over this period. There is an urgent need for scientists and technologists to engage with economists, social scientists and policy makers to confront this major disruptive challenge to entire economies and societies. This may not be a case of existential risk, but nevertheless I would put it far higher on the priority list of concerns to address than the danger of superintelligence, exactly the opposite of the prioritization the author apparently makes.

Another such example is synthetic biology, which the author does mention and recognize as a grave danger. But here again one wishes he had engaged more with the state-of-the-art technology, beyond mentioning the vague threat of pandemics resulting from synthetic viruses let loose by do-it-yourself crackpots. In particular, the CRISPR gene-editing technology is on the cusp of a revolution in medical genetics, with the potential both to cure genetic conditions like Huntington's disease or muscular dystrophy, and to enable social engineering that creates armies of blue-eyed super-intelligent babies. Once again, despite not being an existential threat, it deserves much greater attention in my ranking than nano "grey goo" and many of the others listed in Chapter 8. What explains the single-minded focus on existential risk, when the book was supposed to be more generally about science, technology and the future of humanity? Is the author a victim of too many sci-fi movies, or is it the well-known Swedish cultural trait of being obsessed with extreme safety?

What is the intended audience of the book? The voluminous references and the liberal sprinkling of footnotes on almost every page, with subtle qualifications to arguments, are the vestiges of Olle the careful mathematician, but they are, alas, not the best diet for a layperson. The person most likely to be drawn to the book is someone like the author himself, a combination of logic puzzler and dreamy philosopher. The author states that he was motivated to write this book by his experience of serving for six years on the Swedish Research Council (VR); in particular, he found their criterion of judging proposals solely on scientific quality, without any regard to the consequences of the research, very problematic. There are other major problems with the VR funding structure (another story for another time), but I hope the policy makers at VR do read the book, reflect on the important issues it raises, and consider how to implement funding policies that would "nudge" science in directions beneficial to society rather than the opposite. I also hope it will inspire academics more generally to think beyond their narrow specializations.

The scholarship and engagement of the book reflect an effort at a major career change by the author, and this is confirmed in footnote 542. Once again, this must be commended, since the author could easily have carried on along a standard "successful" path as a world-leading percolation theorist (and indeed I have heard another world-leading probabilist hurl infuriated expletives at his choosing not to). Percolation theory's loss is our gain. We need people like Olle, who combine sharpness of argument with vast scholarship and expository skill, to address the big issues. If I were granted a wish, it would be that we see more of Olle the hard-nosed scientist in the future than of the other identities, so that the important message of the book is not diluted but conveyed even more forcefully.

 
