“The Future of Intelligence”: Nobel Dialogues 2015

Intelligence, especially that of machines – which is the focus of this blog – seems to be the Zeitgeist. In the new Millennium novel, the plot centers on the murder of the famous Swedish scientist Frans Balder, whose research is about self-learning algorithms. Films like Her and Ex Machina may (still) be fiction, but driverless cars are already here.

It was thus entirely timely that the Nobel Week Dialogue 2015 should have focused on the “Future of Intelligence“. This inspired us to set up our blog (shamelessly hijacking the title) to talk about our group’s research, which has an increasing focus on machine learning (ML) algorithms and their applications, and to share our thoughts on ML developments around the world. We will discuss here both technical aspects of machine learning research and aspects touching on the wider context of philosophy, economics and societal issues.

After a short and electric dance sequence to a Monteverdi composition, the event got off to a rather disappointing start. First up was the futurist Ray Kurzweil, who lectured us about the difference between linear and exponential growth, and then showed graph after graph of Moore’s law illustrating exponential growth – in biology, nanotechnology and ICT. Next was 2001 Physics Nobel laureate Carl Wieman, who gave, by contrast, a sedate talk arguing that good learning habits in humans count for much more than IQ.

“What is Artificial Intelligence?“. After several attempts, many of which featured the word “complex”, the panel basically came around to the view expressed by a distinguished US judge who, when asked to define obscenity, replied: “I know it when I see it.” Last year’s Nobel prize winner for medicine, Edvard Moser, opined that neuroscience was currently undergoing a revolution in understanding the complex computational algorithms of the cortex, but that we were still only at the beginning. Barbara Grosz wanted AI to be seen as something to complement our abilities, not to replace them.

This segued nicely into the talk by IBM’s head honcho for cognitive computing, Guru Banavar, who talked about the changing nature of expertise in the future. Future experts will work together with smart machines serving as cognitive assistants, using them to rapidly sift through vast amounts of data, extract insights and reason with them. After a short clip showing the famous victory of IBM’s Watson system in the Jeopardy! competition, Banavar said that Watson is now being deployed on more complex and meaningful tasks such as curing cancer or reversing climate change. The formula is Humans + Machines, with the human strengths of intuition and judgement complemented by the machines’ strengths of massive search and calculating power. IBM’s cognitive computing vision for the future is a pervasive one in which every knowledge worker will work with a cognitive assistant on collaborative tasks in applications like medical imaging, oncology, genomic medicine, clinical trials, drug discovery, city planning and green growth. Guru was enthusiastic about us joining IBM’s cognitive computing university alliance and doing some projects together. IBM’s chief technologist for Sweden, Mikael Häglund, was very forthcoming about helping with student projects in cognitive computing – so, exciting times ahead for us!

“The future of human-computer interaction”. How should the cognitive assistants that Guru Banavar talked about interact with humans? Guru’s criterion was that the interaction should be natural, e.g. using natural language. But behind this lies a tough problem: the assistant should be able to understand the mental states of the human. An interesting question from the audience was whether computers should show any emotion when interacting with humans. Stuart Russell joked that the only two emotions computers currently display are swirling round and round waiting for a task to complete, or a linear progress bar saying “hey, I’m getting there”. Russell wondered if computers should deliberately hide their intelligence so as not to hurt our egos, as in the movie Her, where the computer would much rather hang out with other AIs. Grosz was strongly of the view that computers should not pretend to be human, as in Ex Machina, because that would be very misleading.

“Will AI change the world?”. Nobel Laureate Michael Levitt started the session somewhat facetiously by asking what one needs on a camping trip – a torch? The smartphone has it. Music? The smartphone has it. A compass? The smartphone has it … everything except the charger! The economist Joel Mokyr hailed the smartphone as an example of a real changer of the world, especially in places like Asia and Africa. Mokyr made the important point that, even as a mainstream economist trained in the ways of the market, he believed the market should not decide the path along which science and technology move. Personalized education and access to education were agreed to be a “killer app” of the future.

“The AI singularity: should we welcome it or fear it?”. Kurzweil the optimist pointed to the benefits for the big problems of biotech and nanotech. Russell urged caution about unintended consequences, reminding us of the story of the genie emerging from the lamp: you are allowed three wishes, and the last wish is often to undo the previous two! The physicist Max Tegmark agreed, saying it was too costly to learn from past mistakes with these new technologies; much better to be prepared and plan in advance. Russell ended the session with the position that computers would need to learn human values, giving the darkly funny example of a home assistant robot asked to fix dinner. Finding the fridge empty, the robot assistant proceeds to cook the cat! Tegmark gave another example in the same vein – ask your self-driving car to get you to the airport as fast as possible and you end up with an astronomical speeding fine. But, but … you implore the car, that’s not what I meant. “Yes, that is exactly what you requested,” said Tegmark, giving a masterly imitation of a robot voice.

“The Future of AI“. Participants were asked to give their favorite examples of where the future of AI was going. Russell was optimistic because of real progress in robotics, and predicted that when language understanding got better, today’s search engines like Google would be put in the shade. Levitt wondered if a $1,000 robot would soon be available to make his morning omelette. Tegmark recalled the previous hype around AI and its unattained claims, asking why this time was different. Grosz said systems already exist that make a real difference. Russell agreed, pointing out that while speech and language technologies were not perfect, they were very useful and had tangible economic value. He said this economic incentive was so strong that companies would pour in investment to corner that dominating advantage, dwarfing government efforts. He made the claim that one of the reasons he believes in AI today, as opposed to a few decades ago, is that it is now built on solid mathematical foundations. This was music to my ears – finally a reason to motivate our students to learn math! I asked him later if he thought the current ML craze, deep learning, had solid mathematical foundations. “Hmm … that’s an embarrassing one,” he replied. As his dream example of what AI would do in the future, Russell described an assistant helping a human digest vast amounts of information in biology, aggregating data and reading millions of papers to create a digestible consensus summary. This resonated very well with one of our projects on automatic summarization (possibly a topic for a future post). Another example, albeit with less economic incentive, would be a similar analysis of historical studies, aggregating information to study the evolution of language and the movement of peoples – yet another link to our own research in “culturomics” (another future post). Levitt wanted computers to be used to project future scenarios before politicians made decisions. Just what our Global Systems Science project is all about!

I would have liked to attend some of the parallel sessions, especially the one with the economists Robert Shiller and David Autor on the future of jobs, which is perhaps the most immediate challenge posed by the automated economy already upon us – recent studies in the US and in Sweden have estimated that up to 50% of all jobs may be displaced forever. What are the appropriate policy responses to this challenge? I hope to explore this question in a future post.

“AI, Art and Culture“. Almost at the end we had a special guest, the 2004 Physics Nobel laureate Frank Wilczek, speaking from a robot-like contraption on which his face appeared and which he seemed to be guiding remotely from his home in Boston! Wilczek writes about art and physics and the principles of beauty behind both in his recent book A Beautiful Question, and this was the theme of this session, which also featured last year’s medicine Nobel laureate May-Britt Moser and the artist Olafur Eliasson.

Overall, some of the conversations in the day were fun, though not as deep and sustained as one might have liked. But as a way to instill an interest in the subject amongst the general public, I think it succeeded admirably. And it inspired us to start this blog!
