Management accountants have the opportunity to harness the countless applications and benefits of AI. But, as Elon Musk warned at the March 2018 South by Southwest technology conference in Austin, Texas, “Mark my words, AI is far more dangerous than nukes…. It is capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” That may sound like hyperbole to most, but AI does present real concerns and risks for business, and it’s worth introducing the questions management accountants will likely face going forward.
JOB AUTOMATION
It must be recognized that automation will alter or replace certain types of roles, including white-collar professions such as medicine, law, and accounting. Many of the jobs being replaced are predictable, repetitive, and task-oriented, and they disproportionately employ minorities, young people, and women.
Who gains the economic value of these reduced costs based on job losses? What happens to those who lose their job, and who picks up the costs of retraining? Assuming that management (or owners) will gain the financial benefits of AI job loss, will this further exacerbate the wealth gap? What about the impact on communities where the jobs are lost?
PRIVACY
Criminals are training machines to hack systems or socially engineer human victims. The affected domains include physical security (e.g., weaponized consumer drones) and political security (e.g., privacy-eliminating surveillance, profiling, and repression, or targeted disinformation campaigns).
One example of the fluidity of these issues is the current dispute between Apple and Facebook over sharing individual users’ private data, with potentially billions of dollars in advertising revenue and new businesses at stake. Apple proposes that users make a conscious choice to opt in, whereas Facebook wants an opt-out model. The European Union and the U.S. state of California have the most extensive privacy regimes requiring consent to the sharing of data, but businesses worldwide will be affected by decisions made in this arena.
As part of preserving corporate assets, management accountants will often use security cameras. In the event of a loss, camera footage is frequently provided to law enforcement, which may use facial recognition to help identify those responsible. An independent assessment by the National Institute of Standards and Technology confirms that facial recognition is least accurate on darker-skinned females and 18-to-30-year-olds, with error rates about 28% higher than for lighter-skinned white males. Some U.S. states have proposed legislation to determine when and where drones and facial recognition may be used and who bears liability for their results.
DEEPFAKES
Deepfakes are synthetic media in which an existing image or video is replaced with a different likeness; they can also mimic logos, voices, and other characteristics. They’re used to extract information from, or change the opinions of, individuals and businesses. Deepfakes are a negative byproduct of deep learning, a branch of machine learning that applies neural networks to massive data sets to create the fakes.
These risks are huge for business and political enterprises. Deepfakes can be used to fool users into taking actions against the interests of individuals and businesses or into falling victim to potentially catastrophic fraud, including spoofed apps and software.
ALGORITHMIC BIAS
In her book Weapons of Math Destruction, Cathy O’Neil identifies harmful algorithms as “weapons of math destruction” by three characteristics: opacity, scale, and damage. Opacity means the model is invisible or inscrutable to the people it affects. Scale is the number of people the model or algorithm affects. Damage represents the societal costs that result from the mathematical model.
Examples of weapons of math destruction are endless, from hiring decisions, credit rates, criminal sentencing, and policing to housing loans, insurance ratings, and vendor selection. Management accountants who use these mathematical models must understand them at least well enough to explain them to others, and they should help ensure that their businesses understand the models they encounter and the risks involved, whether those models come from vendors or from within their own organizations.
Applications of AI and machine learning require continual vigilance and thorough safeguards against bias within the black box. Finance leaders would do well to make a regular practice of asking whether decisions are good not only for some individuals and the business in the short term, but also for all stakeholders and society in the long term. This means management accountants must evaluate what could happen at every turn and determine whether an action is worth the risks to the business, communities, and societies.
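One concrete way to probe a model’s black box is a simple bias audit: compare its outcomes across demographic groups. The sketch below is a minimal, hypothetical illustration (the data, group labels, and function names are invented for this example, not drawn from any real system); it computes per-group approval rates for a loan model and a disparate-impact ratio, a screening metric sometimes checked against the “four-fifths rule” of thumb.

```python
# Hypothetical bias-audit sketch: compare a model's loan-approval
# rates across demographic groups. All names and data are
# illustrative, not from any real system.

def approval_rates(decisions):
    """Compute the approval rate for each group.

    decisions: list of (group, approved) pairs, where approved
    is True or False.
    """
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest.

    A common rule of thumb (the "four-fifths rule") flags
    ratios below 0.8 for further review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, loan approved?)
sample = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(sample)         # group A: 0.75, group B: 0.25
ratio = disparate_impact_ratio(sample and rates)
print(rates, round(ratio, 2))          # ratio well below 0.8
```

A check like this doesn’t prove or disprove discrimination; it simply surfaces a disparity that a management accountant can then ask the model’s owners to explain.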
REPLACING HUMAN JUDGMENT
Can smart machines outthink humans? Eric Wang, senior director of AI at Turnitin, a plagiarism-checking company, states, “Many people think AI is smarter than people. But AI…is a mirror that reflects us to us and sometimes in very exaggerated ways.” AI strives for perfect judgment; yet who decides what that is?
Management accountants often must deal with the good, the bad, and the ugly as part of their roles, and they are trained to recognize these factors. It’s similar with AI: Management accountants must do a 360-degree examination of the technology and recognize both the benefits and the harm to businesses, individuals, communities, and societies.
MORE FROM SF ON ALGORITHMIC BIAS
“AI: New Risks and Rewards” by Mark A. Nickerson, CMA, CPA
“Machine Bias inside the Black Box” by Brad J. Monterio
“AI Isn’t Neutral” by Lorenzo Patelli, Ph.D.
“Data Bias and Diversity and Inclusion” by Richard Schaper, CMA, and Kenya Matsushita, CMA, CPA
September 2021