Machine learning is a form of artificial intelligence that allows machines to learn without being explicitly programmed for every task. Supervised machine learning uses labeled data and computational mathematics to make predictions, while unsupervised machine learning mines data to uncover previously unknown patterns in the data under study. A key feature of machine learning is that the machine adjusts its own models, in effect rewriting its code, as new data is introduced and analyzed.

When it comes to humans, people will do what you pay them to do. Part of a fraudster’s thought process includes rationalization, in which they convince themselves that an action they know to be wrong is acceptable. The two most common rationalizations are “I’m just borrowing the money” and “I don’t get paid what I deserve.” A fraudster rationalizes that the reward from stealing outweighs the risk of getting caught.

This incentive, in the context of machine learning, is contemplated in robot ethics, or roboethics for short. Whereas machine ethics is the discipline concerned with the moral behavior of artificially intelligent beings themselves, roboethics is concerned with the moral behavior of the people who design and create these machines.

Both Walt Disney and Enzo Ferrari have been credited with saying “If you can dream it, you can do it.” So if fraudsters can dream of designing an unethical learning machine, what’s to prevent them from doing it? Just like Richard Pryor’s character Gus Gorman in Superman III or Peter Gibbons in Office Space, a computer can be designed to commit fraud just as easily as it can be designed to fight fraud.

Let’s consider an example. As partners in Nefarious Computer Company, we decide to design and sell a system that provides automated internal auditing. But we want a reward beyond our sales revenue, so we build into the design instructions for the system to divert $0.50 a month from each of our clients and send it to us.

If our Nefarious system is skimming $0.50 a month, then auditors using tools such as Benford’s Law in audit software would spot the spike in transactions with a leading digit of 5 relative to its expected, fraud-free frequency (about 7.9%). But we at Nefarious are the ones designing the audit system, so we can simply instruct it to omit the $0.50 transactions from any results provided to clients.
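To make the detection side concrete, here is a minimal sketch, in Python, of a Benford’s Law first-digit test of the kind an audit package might run. It isn’t tied to any particular product; the list name ledger_amounts and the 1.5x flagging threshold are illustrative assumptions.

```python
import math
from collections import Counter

def benford_expected(digit: int) -> float:
    """Benford's Law: a leading digit d appears with frequency log10(1 + 1/d)."""
    return math.log10(1 + 1 / digit)

def leading_digit(amount: float) -> int:
    """First significant digit of a nonzero amount, e.g. 0.50 -> 5, 1234.56 -> 1."""
    return int(f"{abs(amount):e}"[0])

def benford_report(amounts) -> None:
    """Compare observed leading-digit frequencies against Benford's expectations."""
    digits = [leading_digit(a) for a in amounts if a]
    if not digits:
        return
    counts = Counter(digits)
    for d in range(1, 10):
        observed = counts[d] / len(digits)
        expected = benford_expected(d)
        # Flagging anything 1.5x above expectation is an illustrative cutoff,
        # not a professional auditing standard.
        flag = "  <-- investigate" if observed > 1.5 * expected else ""
        print(f"digit {d}: observed {observed:5.1%}, expected {expected:5.1%}{flag}")

# A ledger salted with thousands of $0.50 diversions makes digit 5 spike:
# benford_report(ledger_amounts)
```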

Sound too complicated? Not to Volkswagen. The company embedded a program in its eco-friendly diesel cars that reported ideal emissions whenever the car detected the conditions of an emissions test. In reality, those cars were emitting pollutants at levels up to 40 times the legal limit.
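What an auditor of the embedded software would look for is essentially a conditional branch keyed to “am I being tested?” signals. The sketch below is a deliberately simplified, hypothetical illustration of that structure; the sensor fields, the heuristic, and the compliant figure are all invented and do not reconstruct any manufacturer’s actual code.

```python
from dataclasses import dataclass

@dataclass
class SensorState:
    # Invented fields; a real engine controller reads dozens of signals.
    steering_angle_variance: float  # near zero when the car sits on a test dynamometer
    wheel_speed_kph: float
    coolant_temp_c: float

def looks_like_test_cycle(s: SensorState) -> bool:
    """Crude, hypothetical heuristic for "the car is on an emissions test rig"."""
    return s.steering_angle_variance < 0.1 and s.wheel_speed_kph < 120

def reported_nox(s: SensorState, measured_nox_mg_per_km: float) -> float:
    """The telltale pattern: compliant numbers on the test, raw numbers otherwise."""
    COMPLIANT_NOX = 60.0  # placeholder figure, not an actual regulatory limit
    return COMPLIANT_NOX if looks_like_test_cycle(s) else measured_nox_mg_per_km
```

In practice, the scheme came to light when researchers compared on-road measurements with the numbers produced on the test bench.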

Machines can even learn to lie if the incentives point that way. A 2009 experiment in Lausanne, Switzerland, ran a multigenerational test of machine learning software. The machines were set up to work together to earn a reward, but after several dozen iterations the computers began to lie to one another in order to increase their individual rewards rather than the reward of the group. It seems that computers, too, will do what you pay them to do.

In his 1942 story “Runaround,” author Isaac Asimov presented his Three Laws of Robotics:

1 “A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2 A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

Asimov later added a fourth law. Since lower numbers in his list carry a higher priority, he referred to this fourth law as the Zeroth Law:

0 A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Why do these laws, created 75 years ago within the science-fiction genre, matter? Since their introduction, they have pervaded both science fiction and the field of roboethics. Organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and the British Standards Institution (BSI) treat these laws as the conceptual ideal that should guide the practical activities they designate as best practices.

Best practices from these organizations call for policies in which the writers of computer code are held accountable for their work. How does this translate into practice? One approach is to have programmers add an identifying comment at the beginning of each block of code they write. An even more detailed, and more tedious, approach is to have programmers comment on every line of code they write. Both methods have weaknesses that can obscure the software audit trail. First, both rely on self-reporting. Second, fraudsters can change or insert code without reporting their presence or their changes, which may even lead to false accusations against the programmer who did self-report.

Code loggers provide more assurance than self-reporting, but if any employee has access to the log, its details can be altered. An employee with access could participate in hiding the fraud, or that employee’s password to the log could be stolen.

Every one of these governing organizations cautions that machine learning will produce situations that humans never anticipated. For example, machines made by different companies need to work together; even if no malicious code exists, conflicts between the machines may lead to violations of Asimov’s Laws. As another example, if a learning machine creates its own code, how is that process monitored and controlled?

As machines learn and hacking becomes more corporatized, Asimov’s words become even more relevant: “Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Machine learning and artificial intelligence don’t guarantee the elimination of fraud. Rather, they make its execution and detection more complex.



REPORTING’S NOT-SO-LITTLE WHITE LIES

The Volkswagen emissions scandal is the lead story in the United States, but in the E.U. other automakers, including GM, BMW, and Renault, have been called out as well. Automakers cite a minor clause in the E.U.’s emissions regulations that allows a car to violate emissions standards when unsafe operation or damage to the engine might otherwise occur. From a technology perspective, the villain is the “defeat device”: code embedded in an automobile’s software that produces acceptable results while the car is undergoing emissions testing and ignores the regulations otherwise. For example, GM’s defeat device allegedly reports acceptable emissions until the altitude rises above the range covered by the E.U.’s emissions tests. German reports estimate that these vehicles are emitting unchecked pollution into the atmosphere up to 80% of the time.

In the world of car commercials, it seems as if every car has won an award for something: best in class, highest safety standards, best fuel efficiency, and so on. But these awards are often based on results generated by the automobile’s defeat device. And if firms tout those results in commercials, they may well do the same elsewhere, such as in sustainability reporting. The Governance and Accountability Institute ranked the most frequently reported sustainability factors in the automotive industry. Looking just at the top 10, the results of the defeat device directly affect “Emissions, Effluents and Waste” reporting and can indirectly affect “Products and Services,” “Overall Environmental,” and “Customer Health and Safety.” Suddenly one little white lie becomes full-blown, pervasive fraud.
