For the past few years, there has been an ongoing debate about whether artificial intelligence (AI) will present risks or opportunities for the internal audit profession. I have taken the position that it will offer both.
If we engage primarily in hindsight and focus our energies on tactical audit engagements, such as assurance on internal controls over financial reporting, we run a significant risk of being disintermediated by AI. However, if we turn our attention to strategic/business risks and other engagements that require professional (human) judgment, we have the opportunity to leverage AI as an enabler rather than fear it.
I also have argued that internal auditors will play an important role in assurance on the effectiveness of AI governance and compliance. Recent developments confirm that AI compliance risks are emerging faster than many of us expected.
Earlier this month, the EU Parliament adopted a draft negotiating mandate on the Artificial Intelligence Act, billed as the “first law on AI by a major regulator anywhere.” According to a dedicated website on the new legislation:
“The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring…, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
The website goes on to offer a number of concerns about the new legislation:
“There are several loopholes and exceptions in the proposed law. These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life. Currently, for example, facial recognition by the police is banned unless the images are captured with a delay or the technology is being used to find missing children.
In addition, the law is inflexible. If in two years’ time a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as ‘high-risk.’”
Those concerns notwithstanding, the pending legislation signals a heightened regulatory environment around AI that is approaching at warp speed. As one of the legislation’s authors observed:
“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.”
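For readers who think in terms of audit inventories, the three risk categories quoted earlier lend themselves to a simple tagging exercise. Below is a minimal, purely illustrative Python sketch, assuming a hypothetical `AISystem` record and `risk_tier` helper; the use-case labels are illustrative placeholders, not the Act’s legal definitions.

```python
# Purely illustrative sketch: one way an audit team might tag an inventory
# of AI systems against the Act's three risk tiers quoted earlier. The tier
# labels and use-case strings are illustrative assumptions, not the Act's
# legal definitions; real tier assignment is a legal judgment.
from dataclasses import dataclass

UNACCEPTABLE_USES = {"government social scoring"}  # banned outright
HIGH_RISK_USES = {"cv screening"}                  # specific legal requirements

@dataclass
class AISystem:
    name: str
    use_case: str

def risk_tier(system: AISystem) -> str:
    """Assign a system to one of the three tiers described in the quote."""
    if system.use_case in UNACCEPTABLE_USES:
        return "unacceptable (banned)"
    if system.use_case in HIGH_RISK_USES:
        return "high-risk (subject to specific legal requirements)"
    return "minimal (largely unregulated)"

# Example from the quoted text: a CV-scanning tool that ranks job applicants.
print(risk_tier(AISystem(name="ResumeRanker", use_case="cv screening")))
```

A lookup like this does not make the legal call; at best it helps an audit function track, for each system in its inventory, the tier that counsel has concluded applies.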
While Europe is moving ahead with AI legislation, the debate is just heating up in the US. A recent Wall Street Journal article reported that “Rising concern in Congress over the risks posed by powerful artificial-intelligence tools in the hands of consumers is giving momentum to a long-simmering idea: Creating a federal agency to regulate technology platforms including AI systems.”
The Wall Street Journal notes that creation of an AI regulatory agency is “one of many ideas being kicked around in Washington as lawmakers contend with a new technology with humanlike abilities to complete an array of tasks.” One thing is certain: We haven’t heard the last of AI regulation, and if we are to help our organizations navigate the risks that lie ahead, we had better tune in to what legislators and regulators are saying and doing sooner rather than later.
Legislative and regulatory momentum on artificial intelligence is not limited to Europe and the US. A recent update from the World Economic Forum on global AI trends noted that since 2016, 123 AI-related bills have been passed around the world, 37 of them in 2022 alone. Other examples include a bill in the Philippines addressing education reforms to meet challenges posed by new technologies, including AI, and a bill in Spain focused on nondiscrimination and accountability in AI algorithms.
Drawing on lessons from other regulatory tsunamis of the 21st century, I encourage a three-phase approach for internal auditors navigating looming AI regulation:
Phase 1: Awareness. This is where we should already be. We should be actively monitoring legislative and regulatory progress, encouraging our organizations to weigh in on proposed initiatives, where possible, and identifying emerging risks.
Phase 2: Readiness assurance. Once legislation is enacted, the natural course is development of corresponding regulations. As these regulations are approved, the countdown to implementation commences. This period presents internal audit with the opportunity to assess and provide assurance to management and the board on the organization’s compliance readiness. As we saw during the run-up to Sarbanes-Oxley implementation in the US and GDPR in Europe, this is a crucial period in which internal audit can add significant value by warning of looming compliance risks.
Phase 3: AI compliance assurance. Once regulatory implementation arrives, internal auditors must shift into compliance assurance mode. Noncompliance with new legislation and regulations can create legal, financial and reputational risks for our organizations. It is imperative that we are vigilant in assessing compliance and providing assurance to management and the board.
We are in the early stages of AI legislation and regulation, and the risks and opportunities surrounding this incredible technology continue to evolve. But it’s not too soon to step up and serve as a beacon for our organizations on the myriad compliance issues that may lie ahead.
I hope you find these suggestions helpful as you begin contemplating AI compliance risks. Please share your comments via LinkedIn or Twitter (@rfchambers), or drop me a note at blogs@richardchambers.com.