To do financial analyses, design new products, or gauge markets going forward, companies will have to make room in all departments for AI. But it isn’t going to be easy. Even those with relevant backgrounds encounter difficulties. Here’s a real-world example:
“Whenever I saw another breakthrough in artificial intelligence or machine learning hit the press, I came back to the same question: How does it work? The curious thing to me was that I’d spent countless hours studying and practicing machine learning in academia and industry, and yet I still couldn’t consistently answer that question. Perhaps I didn’t know AI and machine learning as well as I should, I thought, or perhaps college courses didn’t teach us the right material. Most college courses on these topics usually just teach the building blocks behind these breakthroughs—not how these building blocks should be put together to do interesting things.”
Those are the puzzled musings of Sean Gerrish, a former engineering manager at Google with a Ph.D. in machine learning from Princeton University. Part of the problem was that new technology was running ahead of his training, and the new breakthroughs had to be reverse-engineered to be understood. So that’s what Gerrish did, and now he has produced an excellent book, How Smart Machines Think, which explains how smart machines do their thinking in a way that can be understood by a layperson.
A PRIMER
The problem with many introductory AI texts is that their pages are weighed down with advanced calculus and graduate-level mechanics. That, or they’re essentially theoretical.
Gerrish is an engineer, but he doesn’t write like one. He approaches every topic with a patient, step-by-step analysis of how the machine or process works. He generally avoids jargon, and when he does use a concept like the Kalman filter (a mathematical technique, used in GPS navigation, for estimating a vehicle’s position from noisy sensor readings), you get several paragraphs on what it does and how it does it, with examples you can understand. There are no page-long equations to show how it works in autonomous cars. And by the time he gets to the end of the book’s “How to Build an Autonomous Car” section, you not only know where the 20 or so sensors are built into your new car but also how they communicate with and serve the several levels of intelligence in the car’s brain.
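To give a flavor of the kind of mechanism Gerrish walks through, here is a minimal one-dimensional Kalman filter sketch in Python. It isn’t taken from the book; the function name, noise values, and driving scenario are illustrative assumptions. The idea is simply to blend a noisy prediction of where the car is with a noisy sensor reading, trusting each in proportion to its certainty.

```python
# Minimal 1-D Kalman filter sketch (illustrative values, not from the book).

def kalman_1d(estimate, estimate_var, measurement, measurement_var,
              motion=0.0, motion_var=0.0):
    # Predict: move the estimate forward and grow its uncertainty.
    predicted = estimate + motion
    predicted_var = estimate_var + motion_var

    # Update: the Kalman gain decides how much to trust the new measurement.
    gain = predicted_var / (predicted_var + measurement_var)
    new_estimate = predicted + gain * (measurement - predicted)
    new_var = (1 - gain) * predicted_var
    return new_estimate, new_var

# Hypothetical example: the car believes it is at 10.0 m (variance 4.0),
# drives 2.0 m, then a GPS-like sensor reads 12.5 m (variance 1.0).
position, variance = kalman_1d(10.0, 4.0, measurement=12.5, measurement_var=1.0,
                               motion=2.0, motion_var=0.5)
print(position, variance)  # estimate lands between the prediction (12.0) and the reading (12.5)
```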
A number of basic AI concepts are covered in the book, and several historic contests are relived. The descriptions of the $1 million DARPA Grand Challenge, which began in rough desert terrain and ran through several intermediary contests, and then of the DARPA Urban Challenge show how the autonomous car went from the first DARPA-sponsored race in the Mojave Desert to our commercial roadways in just three years. In the Urban Challenge, 50 robot cars strove for the best time navigating city-street traffic and parking lots on an old military base. At each stage of the story, Gerrish explains the contestants’ solutions that were ultimately built into today’s self-driving vehicles.
There’s also the story of the Netflix contest, which offered a $1 million reward to the team that could build the best movie-recommendation engine from the data the company collects on its viewers. The gaming world, from Atari to world-championship chess and Go, is also covered, along with the AI tools and procedures those games helped develop.
THEN, NOW, AND TOMORROW
The book begins with a description of a life-sized statue of a flute player that could actually play music on a real flute, the creation of the French mechanical genius Jacques de Vaucanson in 1737. The automaton inspires the book’s first instance of the oft-repeated question, “But how did it work?”
After 17 chapters pursuing the “how” question, the book arrives at a final speculation, “Where do we go next?” About the future, Gerrish explains, “The automata will invariably still follow programs—programs that will grow more and more complex, and it will become more difficult to discern what they’re doing, but it will always be possible to trace every action they perform back to a deterministic set of instructions.”
The beginning of this year is a good time for taking a closer look at Gerrish’s explanations of how the growing ranks of smart machines actually think. Waiting will only put us further behind.
January 2019