While that comes from the head of a tech company touting an emerging technology, Pichai is far from the only voice talking about the vast potential of AI. To understand where this partnership of humans and machines is taking us, it’s important to understand the differences between human and machine learning. Pedro Domingos, professor of computer science at the University of Washington, offers a concise outline of how humans and machines learn in The Knowledge Project Podcast episode “The Rise of the Machines.”
HUMAN LEARNING
Domingos says natural intelligence has three primary sources—evolution, experience, and culture. Like all animals, we inherit eons of trial-and-error learning that has shaped our species and that’s encoded in our DNA.
We also have the body of knowledge that comes from experience. What we learn and remember from simply living, we encode in our neurons. The third source is culture: what we learn from interacting with other people, from reading, and from similar activities.
And now we have a significant new source: Computers discovering knowledge from data. Domingos calls this source “every bit as momentous as the previous three were.” Computers can produce greater quantities of knowledge faster than the other three, and he concludes, “In the not-too-distant future, the vast majority of knowledge on Earth will be discovered and will be stored in computers.”
Machines not only discover knowledge but apply it as well. “In fact, both of those things,” Domingos assures us, “will generally be done in collaboration with human beings.” But some of it is already being done by machines alone. He cites hedge funds run entirely by computers, as well as a venture fund whose seven-member board includes an algorithm that casts the same single vote as each of its human colleagues.
MACHINE LEARNING
Machine learning has branched into what Domingos calls the “five tribes” of machine learning in his book The Master Algorithm.
The first tribe emulates the neural networks of the brain so that machines can perform human tasks like image recognition and natural language processing. The second tribe is trying to simulate evolution on computers, but instead of developing plants and animals, its goal is to evolve programs. Some of these efforts have produced electronic designs original enough to earn their own patents.
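The evolutionary idea the second tribe pursues can be sketched in a few lines. Below is a toy genetic algorithm that evolves a bit string toward an all-ones target; the fitness function, mutation rate, and population size are illustrative assumptions, not details from Domingos.

```python
import random

TARGET_LEN = 20  # length of the bit-string "genome" we evolve

def fitness(genome):
    """Count of 1-bits: higher means closer to the all-ones target."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with small probability, mimicking random variation."""
    return [1 - b if random.random() < rate else b for b in genome]

def crossover(a, b):
    """Combine two parent genomes at a random cut point."""
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LEN:
            break  # a perfect genome has evolved
        parents = population[: pop_size // 2]  # the fittest half survives
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

The same loop of variation, selection, and inheritance, applied to program fragments instead of bit strings, is how evolutionary methods produce novel designs.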
But because evolution relies on random variation and offers no guarantee of finding the best solution, most machine learning researchers prefer to work from first principles, like those pursued by the last three tribes.
Number three, Bayesian learning, begins with a set of competing hypotheses, each assigned a probability that quantifies how much you believe it. In the beginning, these are your prior probabilities. As you examine evidence, you update them: hypotheses that fit the data gain credibility, and those that don’t lose it.
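The update rule at the heart of the Bayesian tribe can be shown with a toy example. Here we weigh two hypotheses about a coin, fair or biased toward heads; the coins, priors, and flip sequence are made-up illustrations, not from the source.

```python
priors = {"fair": 0.5, "biased": 0.5}       # initial beliefs in each hypothesis
heads_prob = {"fair": 0.5, "biased": 0.8}   # chance of heads under each one

def update(beliefs, flip):
    """One Bayes step: new belief is proportional to prior times likelihood."""
    unnormalized = {
        h: p * (heads_prob[h] if flip == "H" else 1 - heads_prob[h])
        for h, p in beliefs.items()
    }
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = dict(priors)
for flip in "HHHTHHHH":        # evidence: mostly heads
    beliefs = update(beliefs, flip)
# After this evidence, belief shifts strongly toward the biased hypothesis.
```

Each flip nudges the probabilities, which is exactly the gradual evolution of belief the tribe is named for.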
Number four, symbolic learning, is inductive and proceeds the way scientists, mathematicians, and logicians work: you begin with data and a hypothesis, and you test the hypothesis against the data. When done by machines, this process is vastly faster than humans crafting formulas and syllogisms by hand.
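A minimal sketch of rule induction in this symbolic spirit: generalize a conjunctive rule from examples by keeping only the conditions every positive example shares (a simplified Find-S style learner; the weather data and attributes are invented for illustration).

```python
# Each example: a dict of attributes plus a True/False label
# for some concept, e.g. "good day to play outside".
examples = [
    ({"sky": "sunny", "wind": "weak"}, True),
    ({"sky": "sunny", "wind": "strong"}, True),
    ({"sky": "rainy", "wind": "weak"}, False),
]

def induce_rule(examples):
    """Keep only the attribute values shared by all positive examples."""
    rule = None
    for attrs, label in examples:
        if not label:
            continue                 # this simple learner ignores negatives
        if rule is None:
            rule = dict(attrs)       # start from the first positive example
        else:
            rule = {k: v for k, v in rule.items() if attrs.get(k) == v}
    return rule

rule = induce_rule(examples)   # -> {"sky": "sunny"}
```

Starting specific and dropping conditions that the data contradicts is the same hypothesize-and-test loop the tribe borrows from science, just executed mechanically.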
An ultimate goal of symbolic learning is to build a robot intelligence that can operate on its own. Domingos describes the artificially intelligent U.K. “robot scientist” Eve, designed to automate early-stage drug design. Eve discovered that a compound investigated for its anti-cancer properties is also effective against malaria.
Finally, learning by analogy retrieves situations from memory, drawn from our past, in order to understand new ones. A human doctor does this when recalling past diagnoses that resemble the case at hand.
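In code, learning by analogy often takes the form of nearest-neighbor retrieval: classify a new case by the most similar remembered one. The symptom vectors and labels below are toy data, not a real diagnostic system.

```python
# Past cases: each vector marks presence (1) or absence (0) of four symptoms.
past_cases = [
    ([1, 0, 1, 0], "flu"),
    ([0, 1, 0, 1], "allergy"),
    ([1, 1, 1, 0], "flu"),
]

def similarity(a, b):
    """Count matching symptom entries: a crude measure of analogy."""
    return sum(x == y for x, y in zip(a, b))

def diagnose(new_case):
    """Recall the most analogous past case and reuse its diagnosis."""
    _, label = max(past_cases, key=lambda case: similarity(case[0], new_case))
    return label

result = diagnose([1, 0, 1, 1])   # -> "flu" (closest to the first past case)
```

Nothing is generalized in advance; the "model" is simply the memory of past cases, consulted when a new situation arrives.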
All five of these learning systems are being explored in order to create the kind of machine intelligence that will design its own algorithms—for computers that program themselves.
Machine learning is a branch of AI, and the five tribes, each specializing in different learning techniques, will need to coordinate further in order to progress from our current artificial narrow intelligence to the next level of strong artificial general intelligence. That still seems a long way off, but Domingos isn’t alone in predicting its eventuality. Yann LeCun, vice president and chief AI scientist at Facebook, has echoed the same sentiment in almost the same words as Domingos. LeCun said, “Most of the knowledge in the world in the future is going to be extracted by machines and will reside in machines.”
September 2021