5 PLUS 1

In his will, industrialist Alfred Nobel defined worthy contributions as important discoveries or inventions within the fields of physics, chemistry, physiology or medicine, and literature, as well as the “best work for fraternity between nations” (the Peace Prize). Over the last 121 years, only one new category has been added to the list: in 1968, the Sveriges Riksbank, Sweden’s central bank, established the Prize in Economic Sciences in Memory of Alfred Nobel.

This year, Nobel week began on October 3 with the awarding of the prize in medicine. October 4 was physics, October 5 chemistry, October 7 the Peace Prize, October 10 economics, and the belated literature prize followed on Thursday, October 13. The physics prize went to three researchers for their “discoveries of topological phase transitions and topological phases of matter”—mathematical descriptions of unusual phases of matter, such as those found in superconductors and thin magnetic films. Chemistry went to three researchers working in nanotechnology—designers of molecular machines. The medicine prize was given for discoveries in the field of autophagy, the way cells clean themselves up by digesting some of their own structures. The subject area for economics seemed more approachable—it was awarded to two individuals for their “contributions to contract theory.”

Once again, mixed in with the reviews of the current winners, there was a polite objection to the absence of a Nobel Prize in computing. An article in Deutsche Welle (www.dw.com), “Desperately seeking the Nobel Prize in computer science” by Zulfikar Abbany, complains about awards for some pretty abstract research while the tremors and upheavals caused by innovations like the cloud and mobile computing go unnoticed. Abbany concedes the obvious problem—the awards were born well before the dawn of computing—but then reminds us that the economics medal was instituted at a time when the digital revolution was already underway.


HARDWARE GETS NOTICED

Abbany acknowledges that computing hardware does get recognition at times. For instance, the research behind the current physics prize might provide a pathway to new electronics and quantum computing, an area of research actively pursued by companies like IBM and Google.

And there are others like Charles Kuen Kao, father of fiber-optic communications, who won a Nobel, as did the three inventors of the transistor back in 1956. The hardware scientists do get credit, medals, and monetary prizes. But what about the coders and other software innovators?

Technically, you can’t claim that their art/science also came too late, after the categories were drawn. The first generally recognized computer programmer was the poet Lord Byron’s daughter, mathematician Ada Lovelace, and she was living in London almost a century before the first Nobel Prizes were awarded. Lovelace wrote the first algorithm for Charles Babbage’s Analytical Engine, a mechanical computing device.

The long parade of programmers who followed have provided some rather significant benefits that they’ve “conferred on mankind” virtually anonymously. School children are taught about pioneers like Lewis and Clark, but what about the creators of the Mosaic browser, who opened up something wider than the Northwest Passage—the World Wide Web? Tools like spreadsheet technology are important to more than just those in the back office, but how many know about Dan Bricklin and Bob Frankston’s involvement in the late ’70s?

It might soon be too late to forge Nobel medals for human software innovators because we’re approaching a new era when the smartest computers are learning on their own and can actually write the software that they need without human intervention. It’s hard to imagine Google’s DeepMind machine delivering an acceptance speech in Stockholm.
