Computers now learn in supervised modes and, increasingly, can also teach themselves through unsupervised “deep learning.” According to Stanford’s AI Index Report, “In 2017 both IBM and Microsoft achieved performance within close range of ‘human parity’ speech recognition.”

Another landmark in 2017 was reached by an AI system trained on a data set of 129,450 clinical images of 2,032 different diseases. The system’s diagnostic skills were then pitted against those of 21 board-certified dermatologists. The comparison found the AI system “capable of classifying skin cancer at a level of competence comparable to the dermatologists.”


Since 1950, the Turing Test has been the most recognized test of machine intelligence. Essentially, it depends on the computer successfully masquerading as a person. Another test, rooted in a much older insight about computing, is more relevant when the skill in question is originality of thought.

In the 1830s, Charles Babbage built a Difference Engine (a mechanical calculator) and then designed an Analytical Engine (a general-purpose, decimal digital computing device). His collaborator, the mathematician Ada Lovelace, recognized the potential of this first computer, and she wrote a series of notes containing the first algorithms for its operation. Many historians therefore regard Lovelace as the first computer programmer.

In her notes, Lovelace discussed both the potential and the limits of this first computer, and her speculations provided the basis for the Lovelace Test. She wrote, “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

The Lovelace Test for machine originality involves three values: a, an artificial agent; h, its human creator; and o, an original concept. To pass the test, a must generate o without h being able to explain how the agent (computer or neural net) came up with o. To eliminate the possibility of a “lucky guess,” a must also be able to reproduce o on demand. Last month, we discussed an evolutionary algorithm by NASA engineer Jason Lohn that seems to pass this test. Lohn was given the task of designing a small spacecraft antenna. After hundreds of iterations of his program on an AI-enabled machine, the computer produced a design that resembled a bent paper clip. It worked beyond Lohn’s expectations, yet the engineer couldn’t explain why. The antenna flew on NASA’s successful Space Technology 5 mission.
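The kind of evolutionary search Lohn ran can be sketched in a few lines. Everything here is a stand-in: his real fitness score came from electromagnetic simulation of each candidate antenna, while this toy version just rewards a list of “bend angles” whose sum approaches an arbitrary target.

```python
import random

random.seed(42)  # deterministic run for the example

# Hypothetical stand-in fitness: higher (less negative) is better.
TARGET = 180.0

def fitness(angles):
    return -abs(sum(angles) - TARGET)

def mutate(angles, scale=10.0):
    child = list(angles)
    i = random.randrange(len(child))
    child[i] += random.uniform(-scale, scale)  # nudge one bend angle
    return child

def evolve(pop_size=30, genes=5, generations=200):
    # Start from a random population of candidate "antennas."
    pop = [[random.uniform(0.0, 90.0) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best candidates first
        survivors = pop[: pop_size // 2]      # truncation selection
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=fitness)

best = evolve()
```

The point of the sketch is the Lovelace-Test tension: the loop reliably reproduces good designs, yet nothing in it records *why* the winning shape works.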

In his book Thinking Machines, Luke Dormehl offers another example: EVE, “a so-called ‘robot scientist’ designed to automate drug discovery” at the University of Manchester’s Institute of Biotechnology. EVE’s job is to test new drugs and to “come up with hypotheses about what to test.” In doing so, it formulates theories to explain what it observes, then designs experiments to test those theories and carries them out.
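That hypothesize-test loop can be caricatured as closed-loop feedback. The “experiment” below is an invented stand-in function, not anything from the Manchester system; the hypothesis is a guess at a minimum effective dose, refined by each experimental result.

```python
# Toy closed-loop "robot scientist": propose a hypothesis, run an
# experiment, update the hypothesis from the result, repeat.
def run_experiment(dose):
    return dose >= 0.37  # hidden ground truth (invented for this sketch)

lo, hi = 0.0, 1.0
for _ in range(20):
    hypothesis = (lo + hi) / 2       # "the threshold dose is about here"
    if run_experiment(hypothesis):   # the dose worked, so the true
        hi = hypothesis              # threshold is at or below it
    else:
        lo = hypothesis              # otherwise it must be higher

print(round(hi, 2))  # converges on 0.37
```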

Ross King, EVE’s creator, explains, “If it were a human doing this work, it would certainly be considered creative because it’s based around formulating and testing.”


One of the projects for Watson, after the computer defeated the Jeopardy! champions, was to see if it could become a creative chef. So how do you teach an entity without teeth or taste buds to be innovative with foods? Well, IBM started by having it analyze about 9,000 existing recipes. Rob High, CTO at Watson Solutions, said, “It learned the differences between a salad and a sandwich, or quiche and a pasta dish. It also learned the difference between Vietnamese cooking and Southwestern styles. It figured out which flavors come out most prominently within all those types of dishes.”

Then High and his team added another data set—“the knowledge of taste chemistry.” That is the machine chef’s essential advantage. Watson combines what it knows about existing recipe combinations and compatibilities and then, “It goes through something like 6 quintillion permutations to find the chemical compounds, and the ingredients which contain those compounds, to make you one perfect meal. Quite often, they’re ones you would never expect.” Unexpected combinations might include bacon cider drinks or cherries with mushrooms—both cited by tasters as delightful, and both foreseen by a computer that understands “food synergy.” Chef Watson explains synergy this way: “Studies indicate that food sharing common chemical flavor compounds taste good together. Much of Western cooking seeks out these pairings, while some Asian cuisines focus instead on contrasts.”
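At its simplest, the shared-compound idea reduces to a set-intersection score. The ingredient chemistry below is invented purely for illustration; Watson’s real data set is vastly larger and its ranking far more sophisticated.

```python
from itertools import combinations

# Toy flavor-compound table (compound lists invented for illustration).
compounds = {
    "bacon":    {"maltol", "furaneol", "2-acetylpyrazine"},
    "cider":    {"maltol", "furaneol", "ethyl acetate"},
    "cherry":   {"benzaldehyde", "furaneol", "linalool"},
    "mushroom": {"benzaldehyde", "1-octen-3-ol"},
}

def synergy(a, b):
    """Score a pair of ingredients by how many flavor compounds they share."""
    return len(compounds[a] & compounds[b])

# Rank every pairing by shared-compound count, best first.
ranked = sorted(combinations(sorted(compounds), 2),
                key=lambda pair: synergy(*pair), reverse=True)
```

In this toy table, bacon and cider top the ranking because they share two compounds, echoing the article’s bacon-cider pairing; a real system would weigh such scores against many other signals.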

There is a Chef Watson cookbook, but you can judge the merits of #cognitivecooking at

About the Authors