Bias in data analytics isn’t something new. It has merely reared its head again with the rise of more advanced technologies like AI, cognitive machine learning, intelligent systems, and several other developments that require data to perform their magic. Some data experts believe that data will never truly be without bias because the data life cycle—from data authoring to collection, storage, validation, analysis, and reporting or communication—involves humans in some way. We’re all prone to bias, even if it’s an unconscious bias.
Advanced algorithms are often created to sidestep these biases as mountains of data are processed inside the “black box,” or so some developers think. But as Chris Dowsett, head of marketing analytics and decision science at Instagram, stated in Towards Data Science (bit.ly/2I9jGik), “The fact that humans are both creators and users of data means there is an opportunity for bias to creep into the data life-cycle.”
In other words, whether authoring data or interpreting it once the analytics are completed, human interaction could introduce bias or cloud the subsequent insights that are used to drive decision making and business strategy, resulting in poor predictions and incorrect decisions. This is costly to the business and more costly to the individuals affected.
THE RISKS
As both chair emeritus of the IMA® (Institute of Management Accountants) Technology Solutions and Practices Committee and current chair of the IMA Diversity and Inclusion Committee, I’m sitting at the nexus point of this issue. Machine bias, as I call it, is a topic that concerns me greatly. And it should concern you as well.
Now more than ever, I feel at risk of becoming the victim of bias from sophisticated algorithms that, combined with AI capabilities, drive highly advanced analytics used to make decisions or predictions about me, including my future buying behaviors, job prospects, and insurance coverage.
A colleague of mine is a chief data scientist at one of the Big Four accounting firms, and we have spoken several times about the risks of machine bias and the new frontiers it has opened. Data analytics isn’t exactly unfamiliar territory for accounting and finance professionals, but the landscape in which they perform analytics has evolved significantly. Cloud-based, scalable, and cost-effective AI solutions are now available to organizations of all sizes, where previously only those with the deepest pockets could access them. This democratized access means that management accountants are able to use these tools to perform predictive and prescriptive analytics on structured and unstructured data sets in ways they couldn’t have done earlier in this decade.
Big Data and “little data” (more personalized data for each customer, member, and so on) alike are being scrutinized, analyzed, and visualized by accountants using these new technologies to find new correlations, patterns, and insights into the information. For some, these capabilities help them learn new and different things about their business, including strategic drivers—all good things. But they also add greater potential for bad things, such as bias or discrimination in the data analytics.
Adding to the complexity of what it means to be a well-functioning business in 2019 and beyond is the increased emphasis on best practices around diversity and inclusion (D&I). We already know that people can introduce bias into the data life cycle, which means the organization risks becoming nondiverse and exclusionary as a result. Layer on top of this the tools used in advanced analytics—the black box, which involves sophisticated algorithms embedded inside software solutions that crunch huge volumes of information to arrive at some conclusion—and the potential for bias gets amplified, especially if the algorithms are designed with bias built in, even if it’s unconscious or unintentional.
Regulators have worried for decades about the weaponized use of information by businesses against certain stakeholders or groups of individuals. Take, for example, the practice of redlining by mortgage lenders against people living in neighborhoods, or belonging to racial or ethnic groups, that the banks deemed too high of a credit risk. So in some ways, things haven’t really changed much. Or have they?
REGULATORY CATCH-UP
We often see regulators in catch-up mode, trying to get up to speed with the rest of the market on new technologies. For example, banking and capital markets regulators are just beginning to deploy AI and other new technologies that the corporate sector has already been using. The problem is that during this lag time, businesses get more sophisticated, the technologies advance further, and the potential for negative exploitation of the public could balloon, leaving regulators to devise a strategy to get up to speed as quickly as possible.
But regulators are beginning to express their concern over bias in technology as well. In its 2016 report, Big Data: A Tool for Inclusion or Exclusion? (bit.ly/2WAvkf2), the U.S. Federal Trade Commission (FTC) said, “We are in the era of big data. With a smartphone now in nearly every pocket, a computer in nearly every household, and an ever-increasing number of Internet-connected devices in the marketplace, the amount of consumer data flowing throughout the economy continues to increase rapidly.” Yes, Big Data is getting bigger.
The report continues: “The analysis of this data is often valuable to companies and to consumers, as it can guide the development of new products and services, predict the preferences of individuals, help tailor services and opportunities, and guide individualized marketing. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, particularly low-income and underserved populations.”
DANGERS OF BIG DATA
Therein lies the risk. The FTC report recognized that the use of Big Data encompasses a “wide range of analytics,” but the 2016 summary was more narrowly focused on the commercial use of consumer data and its impact on low-income and underserved populations. These aren’t the only populations that can be harmed by bias in the data, the software, or the resulting analytics. (See “Algorithmic Accountability” on p. 60 for more on the FTC’s activity.)
The FTC is but one of many groups studying the use of technology and data as drivers of bias in business, and there are other examples of how technology is being used to discriminate and exclude. “Predictive policing,” the practice of using computer algorithms to forecast where and when the next crime is likely to happen based on data from previous criminal activity, now applies new algorithms to gang warfare data. This AI is among the first to focus on gang-related violence, but some see the program as relying on inadequate inputs and as prone to human error. It doesn’t take a data scientist to figure out how this could lead to profiling and predictions that turn out to be wrong. We know from the headlines that the courts are full of cases where defendants are wrongfully targeted, so why potentially exacerbate the problem?
The University of Melbourne’s Biometric Mirror project reminds me of the bias-related controversies surrounding Apple’s face recognition capabilities in its newer phones. The Biometric Mirror uses AI to analyze faces against 14 characteristics, including age, race, and perceived level of attractiveness. To train the AI, the Melbourne researchers asked human volunteers to “judge” thousands of photos against these same characteristics, and the AI drew on that volunteer data when it analyzed the faces of people standing in front of the mirror. Given the subjectivity of the volunteers’ judgments, the training data was seen as having bias built in, and so was the output of the AI-driven Biometric Mirror. It’s easy to see how this might be misused in evaluating people for jobs or volunteer roles.
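A toy sketch can make the mechanics plain. In the hypothetical example below (invented data, not the Melbourne study’s), volunteer raters systematically score one group lower, and a model trained on those ratings simply learns and reproduces the skew; nothing in the math corrects for it.

```python
# Toy illustration with invented data: a model trained on subjective
# human ratings inherits whatever skew those ratings contain.

# Volunteer ratings of "perceived attractiveness" (1-10) for two groups.
# The raters, not the faces, systematically score group B lower.
ratings = [
    ("A", 8), ("A", 7), ("A", 9), ("A", 8),
    ("B", 5), ("B", 4), ("B", 6), ("B", 5),
]

# A minimal "model": predict the average rating observed for each group.
def train(data):
    sums, counts = {}, {}
    for group, score in data:
        sums[group] = sums.get(group, 0) + score
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

model = train(ratings)

# New faces from each group are scored by the trained model.
for group in ("A", "B"):
    print(f"Predicted score for a new group-{group} face: {model[group]:.1f}")
# Prints ~8.0 for group A and ~5.0 for group B: the raters' bias, now automated.
```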
There are probably many scenarios in which AI is used to eliminate bias or discriminatory practices. I’ll be the first to applaud them. But society’s Achilles’ heel will be in the situations in which the bias is perpetuated and decisions are made that negatively impact the public’s livelihood, well-being, or even safety. I’m happy that regulators are watching and learning, but that isn’t enough.
ACCOUNTANT ACCOUNTABILITY
The accounting profession has to understand AI and its potential positive—and negative—impacts. As keepers of the data and those who are partly responsible for good data governance, accountants should care whether their data, software tools, and analytics are introducing bias into the business. This means accountants need to keep pace with change and the evolution of AI and analytics technologies.
AI has already evolved past its infancy. AI is considered “first wave” when it follows clear, logic-based rules to arrive at a decision or recommendation, e.g., how AI is used in computerized chess. The “second wave” of AI uses sophisticated statistical learning to arrive at an answer, as seen in image-recognition systems. The “third wave” of AI is at the leading edge: it performs the duties of the second wave and also explains the logic or reasoning behind the decision it arrived at. In other words, it tells you that the image it sees is an airplane and why it thinks it’s an airplane. The potential positive applications to the accounting and finance functions are endless.
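To make the distinction concrete, here is a minimal, hypothetical sketch. The first function is first-wave logic: explicit, hard-coded rules. The second hints at the third-wave idea of returning the reasons along with the decision; its hand-written weights stand in for a trained statistical model, so treat it as an illustration rather than real machine learning.

```python
# First wave: explicit, logic-based rules (like the fixed rules of a chess program).
def first_wave_decision(income, debt):
    """Approve only if hard-coded thresholds are met."""
    return "approve" if income > 50_000 and debt < 10_000 else "deny"

# Third-wave flavor: report *why* the system decided, not just *what* it decided.
# The hand-written weights below stand in for a trained statistical model;
# this is an illustration, not real machine learning.
def explained_decision(income, debt):
    weights = {"income": 0.0004, "debt": -0.002}   # assumed, illustrative weights
    contributions = {
        "income": weights["income"] * income,
        "debt": weights["debt"] * debt,
    }
    score = sum(contributions.values())
    decision = "approve" if score > 10 else "deny"
    return decision, contributions                 # the decision plus its reasoning

print(first_wave_decision(60_000, 5_000))          # approve
decision, reasons = explained_decision(60_000, 5_000)
print(decision, reasons)                           # approve, with per-factor contributions
```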
But as the Biometric Mirror example shows us, there will always be the risk of misuse, bias, and discrimination in sophisticated software and AI algorithms. In my research, I came across an effort undertaken by IBM to reduce the risk of bias in algorithms, called the Supplier’s Declaration of Conformity (SDoC). A publicly available SDoC report is intended to make the use of AI in business safer, more transparent, and more fair. These reports show how algorithms performed on standardized tests covering various performance, fairness, and risk factors. It was unclear to me what standards or benchmarks were being used to measure performance, or whether any authority is even holding itself out as the arbiter of best-practice standards in this area. Although IBM has a great reputation as an innovator in technology, perhaps a more objective standard setter would make sense here.
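IBM’s actual SDoC format isn’t reproduced here, but the underlying idea of publishing standardized performance and fairness figures can be sketched with a simple, hypothetical example. The snippet below computes per-group selection rates from a model’s decisions and a disparate-impact ratio, with 0.80 used as a warning threshold in the spirit of the “four-fifths rule” from U.S. employment-selection guidance.

```python
# Sketch of an SDoC-style fairness summary, using invented model decisions.
# Each record: (demographic group, model's yes/no decision).
predictions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def selection_rate(preds, group):
    """Share of people in a group who received a 'yes' decision."""
    group_preds = [decision for g, decision in preds if g == group]
    return sum(group_preds) / len(group_preds)

rate_a = selection_rate(predictions, "A")   # 0.80
rate_b = selection_rate(predictions, "B")   # 0.40

# Disparate-impact ratio: the lower group's rate over the higher group's.
# Values below 0.80 are a common warning threshold (the "four-fifths rule"
# used in U.S. employment-selection guidance).
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate, group A: {rate_a:.0%}")
print(f"Selection rate, group B: {rate_b:.0%}")
flag = "  <-- below 0.80, flag for review" if ratio < 0.8 else ""
print(f"Disparate-impact ratio:  {ratio:.2f}{flag}")
```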
LOOKING AHEAD
So what does this mean? Management accountants can play a leading role to protect the public interest by ensuring that technologies and sophisticated analytics don’t introduce bias and discrimination into the decision-making process. This begins by understanding these technologies (what’s going on inside the black box) and where the potential for bias exists, including within the algorithms themselves. The human interaction in the data life cycle needs to be examined to minimize or eliminate the potential for bias there as well. Sound data governance strategies and policies will help focus attention there. Bias-free technologies and algorithms must be developed, and accountants who take the time to develop the data science and analytics competencies needed to accomplish this will rise above their peers in terms of value to their organizations. Accountants must have technology and AI competencies. This isn’t just a nice-to-have. They must understand how to eliminate the risk of bias within the black box and then steward the data toward bias-free insights, strategies, and decisions.
The FTC report recommends several ways to prevent bias and avoid legal and ethical risks under U.S. law:
- Review data sets as well as the underlying algorithms used in analytics to ensure that hidden biases aren’t having an unintended impact on certain stakeholder groups (a minimal sketch of such a review appears at the end of this article).
- Keep in mind that just because analysis of Big Data may find a correlation, it doesn’t necessarily mean that the correlation is meaningful to the business. You should balance the risks of using those results, especially where your policies could negatively affect certain stakeholders. It’s likely worthwhile to have human oversight of data and algorithms when Big Data tools are used to make important decisions, such as those impacting health, credit, and employment situations.
- Consider whether fairness and ethical considerations should steer you away from using Big Data in certain circumstances.
- Determine whether you can use Big Data in ways that advance opportunities for previously underrepresented stakeholders.
That seems like an excellent starting point for management accountants!
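The first recommendation in particular lends itself to a concrete starting point. Before any model is trained, the historical data itself can be profiled: if one stakeholder group is underrepresented, or its historical outcomes are already skewed, an algorithm trained on that data will likely carry the skew forward. Here is a minimal sketch, assuming a hypothetical historical lending data set.

```python
# Sketch of a pre-modeling data review, using an invented historical loan data set.
# Each record: (demographic group, historical outcome: 1 = approved, 0 = denied).
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0),
]

groups = sorted({g for g, _ in history})
total = len(history)

print(f"{'Group':<6}{'Share of records':>18}{'Approval rate':>16}")
for group in groups:
    outcomes = [o for g, o in history if g == group]
    share = len(outcomes) / total
    rate = sum(outcomes) / len(outcomes)
    print(f"{group:<6}{share:>18.0%}{rate:>16.0%}")

# Group B is both underrepresented (33% of records) and approved far less
# often historically (33% vs. 83%). That is a signal to investigate before
# training any model on this data, not proof of discrimination by itself.
```

Any model fitted to a history like this would treat the historical skew as ground truth unless it is examined and addressed first.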
July 2019