Generative A.I. and risks to CPA firms
It may be tempting to explore generative A.I.’s capabilities and investigate how it may simplify your life, but there are more immediate practical concerns and considerations for CPAs.
By Sarah Beckett Ference, CPA
Note: This article appeared in the October 2023 issue of the Journal of Accountancy.
“Write a 1,200-word article regarding how CPA firms can manage risk related to the use of ChatGPT and other forms of generative A.I.”
This easy ChatGPT request would have produced this month's column far more expeditiously than writing one from scratch, but would the result have been as informative or accurate?
ChatGPT is an example of generative artificial intelligence, and its popularity has soared since its introduction to the public in November 2022. Given ChatGPT’s versatility, speed, and ability to engage in human-like conversations, it’s easy to see why.
Generative A.I. models learn from input data and can then generate new data based on what they have learned. While it may be tempting to explore ChatGPT’s capabilities and investigate how generative A.I. may make your life easier, there are more immediate practical concerns and considerations for CPA firms before using these tools.
Professional liability risks
Confidentiality: CPA firms handle a vast amount of financial and personal information related to the firm’s clients, owners, and employees, making data privacy and security a top priority. Myriad data protection laws and regulations — too many to name — require those who hold confidential personal data to protect it.
When data is entered into a generative A.I. tool, you are sharing that data with the A.I. tool’s owners and, thus, entrusting them to protect this data. Have you read the tool’s terms and conditions and privacy policies to understand how data is protected? What happens if the A.I. system experiences a data security incident and unauthorized individuals access your firm’s or your client’s sensitive data? A data breach can have significant financial and reputational consequences for a CPA firm, and a generative A.I. tool’s owner may attempt to disclaim liability for a data security incident.
Reliability: A.I. models are not infallible. In fact, ChatGPT’s terms of service at the time of this writing remind users that A.I. and machine learning are evolving fields of study and acknowledge that use of ChatGPT may “result in incorrect output that does not accurately reflect real people, places or facts.” Responses from generative A.I. are based on the patterns and data on which it has been trained. If the data used to train the A.I. model is out of date, inaccurate, or incomplete, then the output may also be inaccurate or incomplete.
For example, if a CPA were to ask ChatGPT about the merits of a certain tax return position, ChatGPT may pull from various online sources to provide a response without being capable of differentiating between the sources that may provide reliable guidance and those that may not. In other words, garbage in, garbage out.
Further, unlike humans, generative A.I. currently lacks the ability to understand context and nuance, which may be critical to arriving at a proper result. Today’s generative A.I. tools are machine-learning systems that do not have the same level of understanding, analysis, and judgment as a human being. Generative A.I. may not understand the complexities of professional standards, tax laws, or financial reporting frameworks.
While generative A.I. is likely able to reliably answer a straightforward question — such as, “What is the financial reporting standard governing lease accounting under U.S. generally accepted accounting principles?” — its answer to a question regarding the application of the same standard to a specific transaction may not be accurate. In addition, generative A.I. may not understand the question’s intent if the question is not phrased properly.
This potential for misinterpretation or misapplication of standards, laws, and regulations can have disastrous consequences if human judgment and professional skepticism are removed. Indiscriminately relying on an A.I.-derived response when delivering services, without critically analyzing whether that result is correct, may lead to errors or omissions and potential professional liability claims.
Risk management recommendations
Chances are, personnel at your firm already use ChatGPT to some extent. It’s still relatively novel, and its capabilities are fun to explore. Before jumping into the deep end of the generative A.I. pool, CPA firms should consider the following:
Understand the limitations of any specific generative A.I. tool used by the firm: Remember that ChatGPT and other forms of generative A.I. are, fundamentally, tools — albeit highly sophisticated ones. And just like any other tool, before using it, one must first understand the tool’s purpose, limitations, and instructions for use. If a firm uses generative A.I., perform due diligence first. What datasets were used to train the tool? How current are its inputs? Are there limitations of which the firm should be aware?
Develop a policy for appropriate use: Draft a firmwide policy for how generative A.I. may be used. A clear policy shared and reinforced with all personnel will help promote consistency of use throughout the firm.
- Scope: What are the specific purposes or tasks for which employees should be permitted to use generative A.I.? Permitted uses might include creating a first draft of emails or reports or conducting initial research. Specifically identifying permissible uses helps avoid ambiguity regarding when A.I. should and should not be used. It may also be useful to create separate subpolicies that apply to specific roles or groups at the firm. For example, HR personnel may face more restrictions on the use of generative A.I. in recruitment and talent development because generative A.I. may introduce unintended bias into these processes, which may violate employment laws and regulations.
- Guidelines related to data inputs: Prohibit the sharing of confidential and proprietary client and firm information with generative A.I. tools. Advise all firm personnel to take the same level of care with information shared with generative A.I. as they would if they were posting on a public site, such as social media.
- Responsibility for review of outputs: Unlike you, ChatGPT and other generative A.I. tools have not been formally educated and trained in the practice of public accountancy and are not licensed CPAs. It is important to supervise and review the output from generative A.I. just as you would the work of any other engagement team member. Indeed, ChatGPT’s terms of service at the time of this writing remind users to “evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.” Consider requiring firm personnel to inform their supervisor when work has been created or developed using generative A.I. Doing so will help ensure that the output is properly reviewed.
Training, monitoring, and oversight: As with any new policy, train firm personnel on the firm’s new generative A.I. policy and regularly monitor usage of generative A.I. tools to ensure adherence to the policy.
Consult with counsel: Consult with the firm’s counsel to help understand the terms of service and privacy policies of any generative A.I. model being used, the potential data security issues posed by how the firm intends to use generative A.I., and whether usage requires any formal client communication or consent.
Stay abreast of changes in the generative A.I. landscape: Unlike the development of new professional or financial reporting standards or the enactment of new tax legislation, the evolution of generative A.I. technology and its capabilities will likely continue at a rapid pace, requiring CPA firms to be flexible and responsive. Monitor usage, trends, and developments in generative A.I. and be prepared to adjust the firm’s policy and approach.