Since its public launch in November 2022, ChatGPT has captured the world’s imagination with its capabilities. ChatGPT can respond to complex questions and produce comprehensive, essay-length answers on virtually any topic (see “What Is ChatGPT?”). As such, it has the potential to revolutionize the way students learn. ChatGPT and other chatbots, such as Google’s Bard, can accept text input to help write code, generate stories, look up information, and more. However, there are ethical risks—including compromised data privacy, biased inputs, inaccuracies/misleading results, unreliability, plagiarism, and a lack of transparency and accountability—that need to be addressed. This article evaluates these risks from the perspectives of both students and educators. It also addresses ways to mitigate risk, such as implementing appropriate security measures to counteract possible security and privacy risks.


What Is ChatGPT?

ChatGPT—a chatbot developed by the research and deployment company OpenAI—is trained to follow an instruction in a prompt and provide a detailed response. Unlike a Google search, which retrieves stored results from an index, ChatGPT uses deep learning to generate a response from one or more sentences that provide background information or pose a question. The model is also refined over time through additional training, including feedback from human reviewers.
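The instruction-following interaction described above can be made concrete with the chat-style message format that OpenAI’s API exposes. The sketch below only assembles a request payload—no API call is made—and the model name is illustrative:

```python
# Sketch of the chat-style prompt structure used by instruction-following
# models such as ChatGPT. Each message carries a role and content: the
# "system" message sets behavior, and the "user" message carries the prompt.

def build_chat_request(background: str, question: str, model: str = "gpt-4"):
    """Assemble a request payload combining background context and a question."""
    return {
        "model": model,  # illustrative model name
        "messages": [
            {"role": "system", "content": "You are a helpful tutor."},
            {"role": "user", "content": f"{background}\n\nQuestion: {question}"},
        ],
    }

request = build_chat_request(
    background="A company depreciates equipment straight-line over 5 years.",
    question="What is the annual depreciation on a $10,000 machine?",
)
print(request["messages"][1]["content"])
```

In practice, this payload would be sent to a chat API, and the model’s reply would come back as an “assistant” message in the same format.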



An AI language model, ChatGPT depends on the data it’s fed to make inferences and return accurate information. Using a wide range of internet data, ChatGPT can help users answer questions, write articles, program code, and conduct in-depth conversations on a substantial range of topics. Some users even ask ChatGPT to take on more complex forms of these tasks, including crafting job descriptions, writing company mission statements, and even drafting termination agreements by using its knowledge base, input answers, and key references.


In accounting and finance, ChatGPT can be used to enhance audit practice and can be a valuable tool in financial management. The auditing profession uses innovative technologies and digital data to assist in developing audit programs. AI and ChatGPT can save time and resources by relieving auditors from performing repetitive, mundane tasks. As a result, auditors can devote more time and attention to creative work and critical thinking. ChatGPT can help finance professionals, who are entrusted with managing the financial responsibilities of an organization, by automating routine tasks, analyzing large amounts of data, providing guidance in decision making, and creating financial reports.


Whether in accounting and finance or elsewhere, generative AI systems like ChatGPT can give inaccurate or misleading results, whether because prompts are too vague or because the underlying data sources are poor. The more detailed and specific the prompt, the more reliable the response. The technology’s limitations mean it can stumble even on relatively simple queries. Accountability can be elusive because of the way data inputs are determined. The issue is one of trust: Why should we rely on ChatGPT’s responses when there is no straightforward way to verify their accuracy?
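The point about prompt specificity can be illustrated with a small sketch. The helper below is hypothetical, showing how a bare question becomes a more reliable prompt once context, scope, and an explicit output format are spelled out:

```python
# Hypothetical helper contrasting a vague prompt with a specific one.
# Adding context, scope, and a desired output format tends to yield more
# reliable, verifiable responses from a generative model.

def make_specific(question: str, context: str, output_format: str) -> str:
    """Wrap a bare question with context and an explicit answer format."""
    return (
        f"Context: {context}\n"
        f"Task: {question}\n"
        f"Answer format: {output_format}\n"
        "If any required information is missing, say so instead of guessing."
    )

vague = "Explain depreciation."
specific = make_specific(
    question=("Explain straight-line depreciation for a $10,000 asset "
              "with a 5-year life and no salvage value."),
    context="An introductory financial accounting course.",
    output_format="Three sentences, then the annual expense as a number.",
)
print(specific)
```

The final instruction—asking the model to admit missing information rather than guess—is one simple hedge against fabricated answers.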


Student and Educator Views


Given the advancements in providing responses to queries, educators need to be cognizant of the uses of ChatGPT as well as how to handle ethical risks and increase reliability. Educators should undergo training and familiarize themselves with the tool’s features, capabilities, and limitations. They also should emphasize responsible usage, ensuring that ChatGPT is used to enhance learning and not replace it.


Students are the major stakeholders in using ChatGPT in a university setting. They can use it as a tool to complete written assignments, write research papers, and prepare for examinations. It can be used as a resource, much like a search engine. The difference is that a search engine, such as Google, searches the internet to provide relevant results, while ChatGPT uses deep learning to generate humanlike responses to user prompts.


A survey of college students conducted by BestColleges showed mixed results on how students view the ethical risks of using ChatGPT in the classroom and how their instructors view it from a learning perspective (see “Student Views”).


Student Views

The following are results from a survey of college students conducted March 6-13, 2023, by BestColleges, an online resource for college students.

  • 43% of college students have used ChatGPT or a similar AI application.
  • 50% of those who have used ChatGPT have used it to complete assignments or exams (22% of all college students in the survey).
  • 57% don't intend to use it or continue using it to complete their schoolwork.
  • 31% say their instructors, course materials, or school honor codes have explicitly prohibited AI tools.
  • 54% say their instructors haven't openly discussed the use of AI tools like ChatGPT.
  • 60% report that their instructors or schools haven’t specified how to use AI tools ethically or responsibly.
  • 61% think AI tools like ChatGPT will become the new normal.
  • 51% agree that using AI tools to complete assignments and exams counts as cheating and plagiarism, while 20% disagree. The rest are neutral.
  • 40% say that using AI defeats the purpose of education.

Source: Lyss Welding, Half of College Students Say Using AI on Schoolwork Is Cheating or Plagiarism, BestColleges, March 17, 2023.



The results seem contradictory. While 43% of students have used ChatGPT or a similar application, a majority (57%) said they don’t intend to use it going forward, even though 61% believe it will become the new normal. This hesitancy may be due to a lack of clarity from professors regarding their expectations for students’ use of the technology.


Other researchers have reported stronger feelings among respondents about the ethical risks of using ChatGPT. One survey, for example, showed that students, by a 2-to-1 ratio compared to educators, believe ChatGPT should be banned. However, while 72% of professors who are aware of ChatGPT are concerned about possible cheating, fewer than half of educators believe it should be banned (see “Educator Views”). Because 66% of educators support students having access to the technology, one possible explanation may be that professors believe ChatGPT has learning value but feel guidelines are needed to deal with its ethical risks, including plagiarism. Laurie Burney, Kimberly Church, Mfon Akpan, and Scott Dell support this theory by suggesting that the use of ChatGPT and other AI in education can meet with resistance because its use walks “a fine line between questionable integrity and employing a valuable educational tool.”


Educator Views

The following are results of research conducted by an online resource for students and educators.

  • 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper.
  • 72% of college students believe that ChatGPT should be banned from their college’s network.
  • 82% of college professors are aware of ChatGPT.
  • 72% of college professors who are aware of ChatGPT are concerned about its impact on cheating.
  • 34% of all educators believe that ChatGPT should be banned in schools and universities, while 66% support students having access to it.
  • 5% of educators say that they have used ChatGPT to teach a class, and 7% have used the platform to create writing prompts.

Source: Chris Westfall, Educators Battle Plagiarism As 89% Of Students Admit To Using OpenAI’s ChatGPT For Homework, Forbes, January 28, 2023.



Not all educators believe that using ChatGPT is a bad thing. John Villasenor tells students in his class at the UCLA School of Law that they can use ChatGPT in their writing assignments. He suggests that rather than banning students from using labor-saving and time-saving AI writing tools, educators should teach students to use them ethically and productively. He emphasizes the practical reasons for using ChatGPT, including learning “how to engage productively with AI systems, using them to both complement and enhance human creativity with the extraordinary power promised by mid-21st-century AI.”


It’s likely that student views will change over time, with increased applications in education, and as educators define the limits of ChatGPT usage. We can expect more students to turn to ChatGPT to enhance classroom performance, especially if their classmates do. Ultimately, all students may come to rely on it to remain competitive; otherwise, some students may have an unfair advantage when educators grade their work. Fairness is an ethical value that ensures students are given the same opportunities to perform at the highest level.


Ethical Issues


The use of ChatGPT creates ethical issues that need to be addressed by students and educators. The overriding issue is one of trustworthiness. Beyond concerns about the reliability of responses provided by ChatGPT, educators should ensure that the workings of the system are known by students and standards are set for its use in coursework. Table 1 provides a summary of the ethical issues and the implications for education.




Data privacy. Given ChatGPT’s access to vast amounts of data, there’s a risk that this data could be compromised, either through hacking or by other means. Proper security measures should be in place to protect the data from unauthorized access. Students shouldn’t place private information into the chat box.


Fairness. The ethical value of fairness is part of the foundation for grading students. Some students with greater access to technology may have an unfair advantage when using ChatGPT over those students who don’t.


Biased inputs. Educators should be aware of the potential for bias that may occur if information is skewed to support one point of view over another. ChatGPT may also show a preference for certain topics it considers more important than others. The presence of bias can have serious implications for the reliability of the outcomes generated. Educators can analyze ChatGPT-generated answers/essays that favor a particular viewpoint, which can help to recognize bias in the system.


Inaccuracies/misleading results. ChatGPT may come up with incorrect, inaccurate, or incomplete information depending on the questions asked. One reason is that it doesn’t generate answers by looking up information in a database, as a Google search would, but rather draws on patterns it learned in its training. One way to deal with such issues is to give ChatGPT feedback on its responses and see whether it then produces more reliable information.
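Submitting feedback amounts to continuing the conversation. The sketch below (illustrative payload only, no API call) shows how a follow-up correction is appended to the same message history so the model’s next response can improve on its first:

```python
# Sketch of a feedback loop: the user's correction is appended to the
# running message history, so the model's next answer can build on the
# correction. The content here is illustrative; no API call is made.

history = [
    {"role": "user", "content": "Summarize the key ratios in a liquidity analysis."},
    {"role": "assistant", "content": "The current ratio."},  # too thin an answer
]

def add_feedback(history: list, feedback: str) -> list:
    """Append user feedback so the next request carries the correction."""
    return history + [{"role": "user", "content": feedback}]

history = add_feedback(
    history,
    "That's incomplete. Also cover the quick ratio and the cash ratio, "
    "and define each one in a single sentence.",
)
print(len(history))  # all three turns travel with the next request
```

Because the full history is resent with each request, the model can revise its earlier answer in light of the feedback rather than starting fresh.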


David Wood headed up a group of hundreds of accounting educators using data from 14 countries and 186 institutions to compare ChatGPT and student performance for 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provided correct answers for 56.5% of questions and partially correct answers for an additional 9.4% of questions. Wood and his coauthors found that “Students generally outperform ChatGPT, but the bot can approximate average human performance in some topic areas and for certain question types” (“The ChatGPT Artificial Intelligence Chatbot: How Well Does It Answer Accounting Assessments Questions?” Issues in Accounting Education, November 2023, pp. 1-28). It’s likely that ChatGPT’s performance will improve over time and with advances in machine learning.


Reliability. ChatGPT is generally reliable for straightforward factual questions but can yield less helpful, more generalized responses on complex topics. When users ask clear and specific questions, ChatGPT can produce more reliable results.


According to Dave Epstein, professor of practice at Boston University’s Questrom School of Business, technology can go wrong: picking up unreliable or even wrong material and producing it as fact. He cautions that the output from ChatGPT should be evaluated and a judgment made whether it’s true and useful for the question asked. This is important to ensure the reliability of data produced by the system. One way to do so is to cross-check the responses with other sources to alleviate uncertainty.


Plagiarism. ChatGPT can facilitate using advanced teaching methodologies, promoting interactive learning, and developing students’ critical thinking skills. The obvious issue for educators is that ChatGPT has the potential to facilitate cheating by students without being detected, most likely through plagiarism. This has implications for academic integrity and could undermine the fundamental values of higher education.


Wood and his coauthors noted that students are short-circuiting the learning process by using ChatGPT to cheat. There are ways to deal with student cheating: conducting oral examinations, administering exams in settings where technology can’t be accessed, shifting from traditional exams to more presentation-style assignments, or pretesting exams with ChatGPT to assess whether the questions can be answered correctly.
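Pretesting an exam against a chatbot can be scripted. In the sketch below, `ask_model` is a stand-in for a real chat-API call, and the loop simply flags the questions the bot answers verbatim—candidates for rewriting:

```python
# Sketch of pretesting exam questions against a chatbot. ask_model is a
# stand-in; in practice it would call a chat API such as OpenAI's.

def ask_model(question: str) -> str:
    """Stand-in for a chat-API call; returns canned answers for the demo."""
    canned = {
        "What is 2% of $5,000?": "$100",
        "Define 'accrual accounting' in one sentence.": "Revenue is recorded when earned.",
    }
    return canned.get(question, "I'm not sure.")

def pretest(exam: dict) -> list:
    """Return the questions the model answers correctly (flag for rewriting)."""
    return [q for q, key in exam.items() if ask_model(q).strip() == key]

exam = {
    "What is 2% of $5,000?": "$100",
    "Define 'accrual accounting' in one sentence.": "Revenue is recorded when earned.",
    "Apply these facts to the client scenario discussed in class.": "(open-ended)",
}
flagged = pretest(exam)
print(flagged)
```

Questions tied to in-class discussion or open-ended judgment tend to survive this screen; rote definitions and simple calculations tend not to.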


Some professors are phasing out take-home, open-book assignments—which became a dominant method of assessment during the COVID-19 pandemic but now seem vulnerable to chatbots. Instead, they’re opting for in-class assignments, handwritten papers, group work, and oral exams. These are useful methodologies to counteract the motivation to cheat using AI technologies and ChatGPT.


Ethical risks can also be mitigated by relying on OpenAI and other companies, such as Turnitin, that have developed tools to detect text generated by language models such as ChatGPT. Turnitin’s AI writing detection technology is designed to distinguish between AI- and human-written text and is specialized for student writing. However, educators should use caution: AI technology is moving so fast that any tool is likely already out of date. These detectors may not be accurate because some are being introduced before they have been widely vetted, and they may not reliably distinguish between AI-generated and human-generated content. In one instance, a high school senior wrote a paper on socialism and Turnitin flagged it. It was discovered that the student hadn’t copied the paper from information provided by ChatGPT; the flag was a false positive. It may be a good idea to use a second detector before concluding that cheating has occurred.


As the previously mentioned surveys show, there’s a divide between students and educators regarding whether ChatGPT should be banned. Given that both groups recognize the possibility of cheating and plagiarism, it’s up to educators to set standards and communicate them to students.


Transparency. Transparency in AI is about clearly communicating the workings of AI systems to users. Students should understand why they have been given a particular response when ChatGPT responds to a query. The goal is to ensure that AI doesn’t become a “black box”—a machine that takes in data and produces answers without understanding how it arrived at those conclusions. Transparency enhances trustworthiness and builds confidence in using ChatGPT.


Accountability. Increased accountability and transparency in ChatGPT help to ensure that users are receiving the most relevant and useful prompts. Appropriate measures should be taken to reduce the potential for bias in the system.   


Implications for Accounting and Finance Education


Educators should be knowledgeable not only about the factors raised in Table 1 and the previous sections but also about the uses of ChatGPT in business settings. It’s essential to teach students about the value that this technology brings to the real world. ChatGPT can be used to solve complex accounting and finance problems, generate summaries and reports, make recommendations, and conduct data analysis, such as analyzing financial reports. However, the bot may not collect information from reliable sources, and the information may be outdated, incorrect, or biased.


A key advantage is that ChatGPT in education can adapt to each student’s unique learning style and pace. It can provide resources to write an essay on a particular topic, help the student to write the essay, and then the student can query ChatGPT with follow-up questions. It can also facilitate collaborative learning by facilitating group discussions to produce ideas or solve problems. Plus, ChatGPT can provide students with immediate feedback on their work. It can answer questions, provide explanations, and even suggest resources to use to better understand the issues related to the student's prompt.


Students also should realize that ChatGPT can get things wrong, so they should look for confirmation from other resources. They should fact-check responses, which can engage critical thinking skills. Students graduating from accounting and finance programs in the future will be expected to work with AI-generated text and need to learn how to engage productively with AI systems such as ChatGPT.


Educators should realize that they have reached a crossroads in educating students. ChatGPT will be as transformational as Google was in 1998. Educators need to learn how to engage with it in a meaningful way. ChatGPT is here to stay. Rather than banning it, educators should find ways to incorporate it into the curricula in a meaningful way so it can be a valuable educational tool.


For more on this subject, check out the Count Me In podcast and webinar.




About the Authors