While automated decision making, with its algorithms often described as AI, serves as a catalyst for efficiency, it can also have harmful consequences. Typically viewed as innocuous and fair, these decision-making algorithms can introduce and even amplify biases, creating or perpetuating structural inequalities in society.

The phenomenon of digital discrimination due to AI bias has become so prevalent that a study published by the U.S. Department of Commerce found that people of color are more likely than whites to be misidentified by AI-based facial recognition technology. In fact, the list of cases involving AI bias has continued to grow over the last several years, extending to increasingly complex processes, with grave consequences.

For example, with regard to police work, there have been numerous wrongful arrests resulting from false hits by facial recognition software. In healthcare, an AI algorithm that was supposed to identify high-risk, chronically ill patients and assign them additional primary care visits wound up favoring white patients over Black patients. Because the algorithm used healthcare spending as a proxy for medical need, it introduced a racial bias: Black patients tended to be sicker than white patients at comparable levels of spending, so their needs were systematically underestimated.

These anecdotes illustrate the unintended biases that can result from automated decision-making systems commonly used in such varied fields as marketing, insurance, healthcare, law, and finance. With a multidisciplinary approach and a coordinated effort, however, such biases can be mitigated, and management accountants and other financial professionals can contribute to that effort.

ONE COMPANY’S CAUTIONARY TALE

In 2016, Microsoft developed an AI application that featured a chatbot named Tay that could handle written conversations in real time. Tay was developed following the earlier success of Xiaoice, which had been produced in China in 2014 and was hailed as an “emotional” computing device for its ability to write poems, compose songs, and even become a virtual chat partner, garnering millions of followers on Weibo and WeChat.

Initially, Tay received an overwhelmingly positive response from Twitter users, interacting with them more than 100,000 times within a 24-hour period. Yet Tay quickly went from friendly and tolerant to racist and sexist. A day later, following the public outcry over Tay’s behavior, Microsoft removed it from Twitter.

This came as a surprise to many experts, as Tay and Xiaoice were often regarded as twins. Both were from Microsoft and used the same technology. So how could they behave so differently?

To examine this discrepancy, it’s necessary to understand the crucial role that data plays within the sphere of AI. Operating from the platforms of Weibo and WeChat, Xiaoice was exposed predominantly to users within one country. In contrast, when Tay initially launched on Twitter, it interacted with a widely diverse group of users from around the globe, with the ensuing interactions often degenerating into uncivil conversations. As an unintended consequence, Tay learned from the data that it received in such an environment and proceeded to mimic and make similarly extreme statements.

BIAS CREEP IN AI PLATFORMS

In a way, AI developers are like parents who need to provide a suitable environment for their children to thrive. While children often have biases passed down from their parents, they can also learn from a wide range of societal biases prevalent in their environments. AI isn’t immune from bias either. The data used by AI developers can similarly contain a myriad of implicit (also known as “algorithmic”) biases, either because existing societal biases influence algorithmic development or because biases are embedded within the training data.

Amazon’s automated tool for recruiting new hires, for example, has been shown to exhibit both algorithmic and data biases. The automated screening system was taught to recognize word patterns rather than relevant skills in job applications. Moreover, because the system was trained primarily on data from a largely male applicant pool accumulated over the preceding 10 years, the screening device inadvertently favored résumés from men over those from women.
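To make this mechanism concrete, here is a minimal, hypothetical sketch in Python with scikit-learn (the résumés, labels, and term list below are invented and are not Amazon’s actual system or data). It shows how a text classifier trained on past, skewed hiring decisions can attach negative weight to gender-correlated words rather than to job-relevant skills:

```python
# Illustrative sketch only: a classifier trained on historical (biased) hiring
# decisions learns weights for gender-correlated tokens, not just skills.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of men's chess club, software engineering intern",
    "software engineering intern, hackathon winner",
    "women's coding society lead, software engineering intern",
    "women's robotics team, data analysis project",
]
hired = [1, 1, 0, 0]  # labels reflect past hiring outcomes, not true ability

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: the gender-correlated token "women" picks up negative
# weight even though it says nothing about job-relevant skills.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("women", "men", "software", "engineering"):
    print(term, round(weights.get(term, 0.0), 3))
```

Reviewing learned weights or feature importances in this way is one simple check developers and auditors can run before a screening tool is deployed.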

Another example of an apparently biased automated decision-making system is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a tool that many courts in the U.S. use to assess the likelihood of a defendant becoming a recidivist. This application collects information related to defendants’ prior offenses, education, and employment history. Although the application doesn’t collect data on race, the investigative journalism organization ProPublica found that Blacks “are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” while whites are more likely to be labeled a lower risk to re-offend but then go on to commit other crimes. (See Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica.)
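The disparity ProPublica described is, in effect, a gap in false positive rates across groups. The following sketch shows the kind of check involved, using invented records in Python with pandas (COMPAS scores and outcomes are proprietary, so this is illustrative only):

```python
# Hypothetical sketch: compare false positive rates (labeled high risk but did
# not re-offend) across groups. The records below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":        ["A", "A", "A", "A", "B", "B", "B", "B"],
    "labeled_high": [1,   1,   0,   1,   1,   0,   0,   0],
    "reoffended":   [0,   1,   0,   0,   1,   0,   0,   1],
})

# False positive rate per group: share of non-reoffenders labeled high risk.
non_reoffenders = df[df["reoffended"] == 0]
fpr = non_reoffenders.groupby("group")["labeled_high"].mean()
print(fpr)  # a large gap between groups signals the kind of bias ProPublica reported
```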

ETHICAL CHALLENGES

A difficult conundrum exists concerning biases in automated decision making. In reality, biases resulting in discrimination can be both helpful and dangerous. For example, when AI is used as intended in accounting and auditing, it can flex its discriminatory abilities to help detect or predict potentially fraudulent activities. Financial institutions can also use automated decision-making techniques to better manage their risks when deciding whether to extend credit to potential and existing customers. In effect, by using AI in credit scoring, financial institutions can determine an individual’s potential risk of default. Similarly, in the financial technology space, crowdlending companies assign scores to potential projects, thereby allowing lenders to select projects that fit their desired risk profiles and appetites.

While these applications have clear advantages, a major problem with using automated decision-making tools is that the associated vetting techniques are rife with biases. A study from the University of California, Berkeley found that both online and in-person lenders charge African American and Latino borrowers much higher interest rates, earning 11% to 17% higher profits on their loans, respectively, when compared to those of other borrowers. Another study from Stanford University suggests that the observed differences in mortgage loan approval rates are primarily due to minorities and low-income groups having less readily available information on their credit histories.

In certain situations, biases can further pose an ethical challenge, sometimes with dire consequences, when AI discriminates based on data that’s related to ethnicity, race, religion, and/or gender due to high correlations among the data. Nevertheless, simply removing these variables may not resolve the observed biases, because other seemingly unrelated variables may indirectly serve as proxies. For example, even though COMPAS didn’t collect data related to race, the defendants’ educational backgrounds and addresses may introduce biases when aggregated and used as training data.

Similar cases of bias have involved Apple Card and a host of insurance companies, including Allstate, GEICO, and Liberty Mutual. Lawsuits were filed against these companies for alleged discrimination in their automated decision-making processes, whereby their algorithms systematically discriminated against certain gender and minority groups, even though these companies comply with the prevailing regulations and hence don’t collect data related to gender, marital status, and race. In Oregon, for example, women were overpaying for basic auto insurance by an average of $100 (or roughly 11.4% more) compared to men. It turns out that even though no gender or race-related data was directly collected, other indirect demographic proxies, such as ZIP codes, can effectively stand in for neighborhoods typically inhabited by Blacks or Latinos.
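To illustrate how a proxy can reintroduce bias, here is a minimal, hypothetical Python sketch (the data, feature names, and model are invented for illustration and are not drawn from any of the companies above). Even with the protected attribute excluded from the features, a model trained on historical decisions can reproduce a group-level gap through a correlated proxy such as ZIP code:

```python
# Hypothetical sketch: dropping race/gender from the features does not remove
# bias when a correlated proxy (here, ZIP code) remains in the data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "10001", "20002", "20002", "20002"],
    "income":   [52, 48, 50, 51, 49, 50],         # in $ thousands; similar across groups
    "group":    ["A", "A", "A", "B", "B", "B"],   # protected attribute, NOT used as a feature
    "approved": [1, 1, 1, 0, 1, 0],               # historical (biased) decisions
})

# One-hot encode ZIP code; the protected attribute is deliberately excluded.
features = pd.get_dummies(df[["zip_code", "income"]], columns=["zip_code"])
model = LogisticRegression(max_iter=1000).fit(features, df["approved"])

df["predicted"] = model.predict(features)
print(df.groupby("group")["predicted"].mean())    # approval-rate gap persists via ZIP code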

PROMOTING RESPONSIBLE AI PRACTICES

For individuals, businesses, and communities, automated AI decision making, if not managed properly, could ultimately lead to restriction of certain liberties, missed opportunities, and economic losses. In addition, as companies increasingly use AI for fraud analyses, insurance claims, and payment eligibility, potential bias-related pitfalls may arise.

Overall, organizations should remain vigilant about implementing automated decision making responsibly. This includes instituting preventive actions aimed at minimizing the potential risks of bias as well as corrective and compensatory measures when things go awry. In this regard, organizations would do well to consider adopting risk-mitigation strategies that lead to responsible AI practices (see Table 1). These strategies fall under the categories of data governance and curation, system development life-cycle review, policies and regulations, and human-algorithmic interactions. Let’s take a closer look at each one.

Table 1

DATA GOVERNANCE AND CURATION

Data governance is critical for organizations because it helps them solve the “first mile” and the “last mile” of data problems. The “first mile” problem relates to data collection; it’s of utmost importance that the data, besides being sufficient and relevant, is of high quality and complies with regulations governing personal data protections. The “last mile” issue occurs when unexpected problems arise after the automated decision-making tool has processed the data. When that happens, data governance plays a role in determining which relevant data the company should use for automated decision making, who has the authority to access the data, how the data should be shared, and how it should be retained within an organization as well as across organizations.

There should be strict protocols regarding how data governance for automated decision-making tools should proceed and whether sufficient data has been collected in the first place. (For help on ensuring “clean” data, see “Tips for Clean Data.”) For example, American Banker estimates that about 14% and 17% of Hispanic and African American households in the United States, respectively, lack access to affordable banking. This problem is compounded by the fact that many members of these minority households don’t have government-issued photo IDs, which often leads to misinterpreted or missing data. Organizations thus need to be cautious when their algorithms process such erroneous or incomplete data. At a minimum, the results of automated decision making should be evaluated carefully in order to screen out any cases of possible bias.
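As a simple illustration of this kind of “first mile” check, the sketch below (in Python with pandas, using an invented applicant table and hypothetical field names) measures how often a key field such as a government-issued photo ID is missing for each group, since systematically missing data is a common source of downstream bias:

```python
# Hypothetical data-governance check: compare missing-data rates by group before
# the data feeds an automated decision tool. Table and fields are invented.
import pandas as pd
import numpy as np

applicants = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B"],
    "photo_id": ["X123", None, "Y456", None, None],
    "income":   [54, 61, 47, np.nan, 52],
})

missing_rates = applicants.drop(columns=["group"]).isna().groupby(applicants["group"]).mean()
print(missing_rates)  # flag groups whose records are disproportionately incomplete
```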

 

Sidebar 1

Even though both the underlying models and data interact to produce results, AI developers should first look at the data rather than attempting to tweak the code when the models behave in unexpected ways. AI developers must carefully consider whether the use of proxies or other related variables may lead to biased results. From an ethical standpoint, biases become problematic and unfair when decisions discriminate on the basis of ethnicity, race, religion, and/or gender.

Alternatively, AI developers may consider using behavioral proxies. For example, in China, companies like Lenddo and Yongqianbao have started using a smartphone’s battery life, among various proxies, to determine customers’ credit scores, with the thinking being that individuals who regularly charge their smartphone may engage in other positive behaviors. Other variables, such as browsing history, time spent on social media, and the number of days taken to pay bills, could similarly be used as behavioral proxies.

Naturally, using customer data for predictive analytics can have negative consequences in the absence of good data governance. An effective data governance policy can help to protect against financial and reputational risks caused by data breaches, with management accountants playing a key role in helping companies better collect, store, and manage their data. Incorporating data ethics into a company’s corporate culture can help maintain data sensitivity and transparency, thereby leading to improved regulatory compliance and risk mitigation in relation to data use.
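For readers who want to see what a behavioral-proxy model might look like in code, here is a brief, hypothetical sketch. The feature names mirror the examples above (charging habits, bill-payment delay, social media use), but the data and model are invented for illustration and do not represent any lender’s actual scoring method:

```python
# Hypothetical sketch of behavioral proxies as credit-scoring features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.DataFrame({
    "avg_battery_pct_at_charge": [35, 60, 20, 55, 40, 15],
    "days_to_pay_bill":          [2, 1, 12, 3, 4, 20],
    "daily_social_media_hours":  [1.0, 0.5, 4.0, 1.5, 2.0, 5.0],
    "repaid_loan":               [1, 1, 0, 1, 1, 0],
})

X = customers.drop(columns=["repaid_loan"])
model = LogisticRegression(max_iter=1000).fit(X, customers["repaid_loan"])
print(dict(zip(X.columns, model.coef_[0].round(3))))  # which behaviors the model leans on
```

Even with behavioral proxies, the governance questions above still apply: the organization must be able to explain why each variable is collected, how long it is retained, and whether it indirectly encodes a protected attribute.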

SYSTEM DEVELOPMENT LIFE-CYCLE REVIEW

Problems can also arise across the system development life cycle of automated decision making, in any of the successive phases of concept, design, development, testing, operation, and decommissioning. To resolve these problems, several tools are available, including AI Fairness 360, Local Interpretable Model-Agnostic Explanations (LIME), and the SHapley Additive exPlanations (SHAP) tool kit. LIME and SHAP can help organizations better understand how automated decision-making systems generate their results (see “LIME and SHAP” for more). Recent research has also demonstrated how these tools can help auditors improve the transparency, interpretability, and accountability of AI use in audit tasks.
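As an illustration of how such a tool might be applied, the sketch below uses the open-source shap Python package with a synthetic, credit-style dataset (the feature names are invented stand-ins). It computes per-feature contributions for a handful of cases so a reviewer can see which inputs drive each decision:

```python
# Sketch of model explanation with SHAP on a synthetic dataset.
# Requires the shap package (pip install shap); feature names are illustrative.
import shap
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["income", "debt_ratio", "tenure", "age"])
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # per-feature contributions for 5 cases
print(shap_values)  # reviewers can check whether sensitive proxies dominate the decisions
```

Reviewing these contributions at each phase of the life cycle, from testing through operation, gives developers and auditors a concrete record of how the system reached its results.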

 

Sidebar 2

In addition to designing algorithms properly in order to mitigate bias, AI developers should also explain the impact of any resulting biases. For example, if automated decision making is tasked with scoring a diverse population pool, it can sometimes magnify existing biases due to a lack of training data from the minority pool (in the U.S., for example, this would comprise mainly Hispanics and African Americans). In such instances, providing appropriate explanations will help improve the accountability, transparency, and interpretability of the automated decision-making systems that are developed. These explanations may include the rationale for creating the automated decision-making tool, whether the developers have considered their own unconscious biases, and how stakeholders could be better involved in the algorithmic design and development processes.

Even though transparency and accountability are key defenses when automation-related decisions are challenged in court, many companies developing automated decision-making tools have resisted calls for greater public transparency, as doing so may inadvertently allow competitors to steal their technology. As the next recommendation suggests, however, a concerted effort to enact uniform regulations toward public transparency would create a more level playing field in the long run.
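A minimal sketch of the kind of documentation this implies is shown below (in Python with pandas and scikit-learn, using invented groups and predictions): report how well each group is represented in the data and how the model performs for each group, so the impact of underrepresentation is stated explicitly rather than discovered later:

```python
# Hypothetical sketch: document group representation and per-group performance.
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "group":     ["A"] * 8 + ["B"] * 2,   # group B is underrepresented
    "actual":    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})

print(results["group"].value_counts(normalize=True))   # representation in the data
print(results.groupby("group")[["actual", "predicted"]].apply(
    lambda g: accuracy_score(g["actual"], g["predicted"])))  # accuracy gap by group
```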

POLICIES AND REGULATIONS

It turns out that the use of automated decision making in setting auto insurance premiums can sometimes result in lower-risk individuals paying higher premiums. Due to numerous cases of gender discrimination related to AI-based premium setting in the past, U.S. states such as Washington and Oregon have banned the use of credit scores in setting insurance premiums.

This type of dilemma highlights the need for governments to play a critical role in regulating AI-based automated decision making. Data confidentiality and privacy regulations are key elements toward setting the ethical guidelines for AI. For example, in Singapore, the Model Artificial Intelligence Governance Framework provides guidance in the areas of internal governance structures and measures, level of human involvement in AI-based decisions, and operational management, as well as addresses issues concerning stakeholder interactions and communications. Similarly, the European Union (EU) has released several privacy directives, like the General Data Protection Regulation (GDPR) and Ethics Guidelines for Trustworthy AI. According to the EU guidelines, AI should comply with applicable regulations and laws, be ethically and morally sound, and be technically robust. In addition, automated decision-making tools should be audited regularly to ensure that any biases caused by either the algorithms or the data can be mitigated.

When it comes to data protection, an accountant may act as either a controller or a processor. A controller determines the “purposes and means” of processing personal data. When processing data, accountants can help organizations decide why specific data must or must not be processed, what data types should be included in the analysis, and the duration of data processing and retention. Thus, accountants can assist in ensuring that data is compliant, that data collection and processing are ethical, and that automated decision-making systems ultimately produce more transparent results.

For example, accountants play an essential role in developing information systems such as enterprise resource planning systems in order to help ensure that they fit into their company’s business processes and generate relevant outputs for decision making. With the advent of AI and data science, this role should be expanded to include algorithmic evaluations. Accountants can also help explain the results of LIME and SHAP more clearly to enable a better understanding of the inner workings of AI tools and to document them for internal and external stakeholders.

HUMAN-ALGORITHMIC INTERACTIONS

AI can’t and shouldn’t unilaterally make decisions for humans. With no will and no intention, AI is merely a tool to help generate information. Humans, on the other hand, make decisions and are responsible for the consequences. With justification, human decisions can be questioned and rebuked in the name of respecting the rule of law.

Even though automated decision making can operate without human intervention, human judgment remains critical. Automated decision-making tools, for example, can correlate a person’s gender with risk. Such a correlation shouldn’t be generalized and thus needs to be justified on a case-by-case basis. This is consistent with the AI ethical guidelines in Singapore and the EU. (For more on these guidelines, see “AI Governance around the World.”) Both have emphasized the need for humans to supervise and control AI-based decision making, using a combination of human-in-the-loop and human-in-command approaches.
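One simple way to implement a human-in-the-loop pattern is to let the model decide automatically only when it is confident and route every other case to a human reviewer. The sketch below illustrates the idea in Python; the model, data, and confidence threshold are assumptions chosen for illustration, not a prescribed standard:

```python
# Hypothetical human-in-the-loop routing: auto-decide only above a confidence
# threshold set by policy; everything else goes to a human reviewer.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; set by governance policy, not the algorithm

for i, probs in enumerate(model.predict_proba(X[:10])):
    confidence = probs.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        decision = f"auto-decision: class {probs.argmax()}"
    else:
        decision = "routed to human reviewer"  # the human-in-the-loop step
    print(f"case {i}: confidence={confidence:.2f} -> {decision}")
```

In a human-in-command arrangement, the same idea extends further: people retain the authority to override any automated outcome, not just the low-confidence ones.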

 

Sidebar 3

Simply put, with proper checks and balances, the system put in place as a result of implementing human-algorithmic and data interactions will help mitigate biased and potentially harmful outcomes that can erroneously occur with either full automation or human-only decision making.

Overall, mitigating the potential risks posed by automated decision making remains an ongoing challenge. Accountants can meet that challenge head on by leveraging their stewardship and value-creation expertise. With their skills in data management, accountants can extend their contribution to minimize risks when using automated decision-making systems to transform data into information, thereby helping to create a more responsible AI environment within their organizations. And, finally, one of the best things an ethical, diligent management accountant can do is raise questions with AI developers regarding sources of potential bias and how to prevent them from ever becoming a problem.

About the Authors