The Economist on computer vision, deep learning, and job security

tl;dr

As a recovering1 devotee of The Economist, I occasionally keep up when I see interesting headlines from their articles on Twitter. Here’s a link to a recent article by The Economist, “Artificial Intelligence: Rise of the Machines”, aptly subtitled, “Artificial intelligence scares people—excessively so”.

I agree.2

To echo Tyler Cowen of Marginal Revolution: do read the whole thing.

et cetera

The Economist’s cover this week (left), which reminds me of a scene from The Matrix (right).


Today I came across a tweet that, ironically, caught my eye. I found the article to be a very thoughtfully crafted piece: an in-depth survey of (mostly) deep learning, covering how it works, what some of the recent breakthroughs have been (both theoretical and practical), what some of the current limitations are, and what some of the existing and potential applications are. If you’re looking for a short, non-technical primer, I’d highly recommend it.

The Economist is also on point in directly addressing some of the recent high-profile warning shots about AI, from Bill Gates, Stephen Hawking, and Elon Musk.

Nick Bostrom, a philosophy professor at Oxford, is also mentioned in the article. His book Superintelligence: Paths, Dangers, Strategies has been making the rounds since it was published last year, particularly in articles discussing the risks of what Bostrom calls “Superintelligent” AI.3

The Economist develops, in parallel, a layperson’s explanation of cutting-edge and bleeding-edge AI and an argument for why people are excessively scared of it. Fear of the unknown is a major contributor; you’ll note that Andrew Ng and other prominent AI researchers – the experts in the field – can’t stress enough how simple the state of the art really is (the following quote is from Ng):

I think the fears about “evil killer robots” are overblown. There’s a big difference between intelligence and sentience. Our software is becoming more intelligent, but that does not imply it is about to become sentient. … [I] think the hype about “evil killer robots” is an unnecessary distraction.

There’s more from Ng here and here (the following quote is from the latter):

“For those of us shipping AI technology, working to build these technologies now,” [Ng] told me [Alexis C. Madrigal], wearily, yesterday, “I don’t see any realistic path from the stuff we work on today—which is amazing and creating tons of value—but I don’t see any path for the software we write to turn evil.”

Additionally, I’d like to cite Andrej Karpathy, a CS PhD student at Stanford working at the forefront of computer vision and deep learning. This post of his succinctly (and humorously!) describes how far away we really are from anything resembling “intelligence”. It made me chuckle.

The Economist, Ng, and Karpathy are right. Moreover, Gates, Hawking, and Musk are right to be concerned, but the larger issue is that the prominence of their voices leads people to miss the subtler themes of their concerns entirely. The open letter that Bostrom, Hawking, and Musk have signed (which is repeatedly cited as a primary source in articles discussing Hawking’s and Musk’s deep concerns about AI), Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter, is neither alarmist nor reactionary. It’s forward-thinking. There’s even an attached (and very detailed) research priorities document. Furthermore, if you look at the list of signatories, you’ll notice a nontrivial contingent of AI experts, particularly deep learning experts, such as:

  • Yann LeCun
  • Geoff Hinton
  • Yoshua Bengio

Need I continue? They’re the holy trinity of deep learning. A few more prominent figures in AI:

  • Peter Norvig
  • Eric Horvitz
  • Chris Bishop
  • Oren Etzioni
  • Ilya Sutskever

One could go on. Needless to say, that’s an impressive list of signatories. No one knows better than they do what feats bleeding-edge AI is currently capable of performing. Given their names on this document, I have a few thoughts.

Gates, Hawking, and Musk are thoughtful individuals whose legitimate and well-founded concerns about AI have been mischaracterized and blown out of proportion to feed people’s fears of the unknown.4 Additionally, look at the thoughtful people who are both working on AI and concerned with the broader social, economic, political, and existential implications and risks of superintelligence – a list that reads like a who’s who in AI. It doesn’t paint a picture of researchers furiously working towards the singularity with the sole purpose of achieving technological breakthroughs along the way, which appears to be a common conception; it paints a pleasant picture of intentionality. That being said, I think it’s vital that this level of thoughtfulness trickles down to government and industry.

Thankfully, we’re at an interesting time when so many prominent researchers are working in industry.5 That’s not meant to be elitist. At a time when governments and corporations have so much power over our everyday lives, it’s critical that decisions made at those levels are the right mix of common and uncommon sense. My second biggest worry is that the number of people who know enough to be dangerous – i.e., folks who can deploy AI in governmental or commercial applications but who lack the foresight to consider broader and subtler issues – far exceeds the number of thoughtful experts. My biggest worry is that these legions of practitioners will be dangerous in their applications of AI whether or not they realize it.

Ergo, moving forward, let’s avoid ramping up people’s fear of the unknown by pitting prominent public intellectuals against researchers (and avoid doing so in a way that makes researchers’ work appear antithetical to our very humanity). Let’s be intentional with our progress and wise about implementation. Let’s celebrate what we’ve achieved. Finally, let’s be excited rather than fearful about what’s possible in the future. Fear can stop fruitful discussion in its tracks. Excitement elicits creative explosions in conjecturing what’s possible, what’s good, and what’s not: that provides necessary (though perhaps insufficient) inspiration and caution to nudge technological advancements away from the dark side.

Marginalia:

  1. In general, I am a long-time subscriber (and lover: I <3 The Economist), but I’ve been taking some time off, as I couldn’t bear to see my pile of unread issues accumulate in height and in dust.

  2. Crossing my fingers that my agreement is on the right side of history. 

  3. I would like to read that along with Global Catastrophic Risks. Last year Bostrom gave a fantastic talk on Superintelligence (as well as his work in general) at Town Hall Seattle.

  4. This seems relevant: known knowns, known unknowns, unknown knowns, and unknown unknowns. Note: this citation is not an endorsement of the utterer of the famous utterance. 

  5. Correct me if I’m wrong, but I believe that of the list above, only Yoshua Bengio works solely in academia.