
Over the past few years, I’ve written and spoken extensively about the transformative impact of artificial intelligence (AI) on internal audit. From analytics and automation to risk sensing and assurance, AI has undoubtedly been a game changer. But as with any powerful tool, AI’s potential cuts both ways. In 2025, we continue to witness a disturbing evolution—AI is no longer just enabling internal auditors to uncover fraud. It is increasingly being weaponized to commit fraud.
For internal auditors, the stakes couldn’t be higher. AI-enabled fraud schemes are sophisticated, fast-moving, and often invisible to traditional controls. If we’re not evolving alongside the threat, our organizations face increasing exposure to these emerging risks.
Here are five key trends in AI-driven fraud that internal auditors must stay ahead of in 2025:
1. Deepfake-Fueled Social Engineering
In the early days of fraud, impersonation schemes involved phony emails or spoofed phone calls. As I wrote last year, many fraud schemes today involve hyper-realistic deepfake videos and synthetic voice clones of senior executives. Fraudsters are leveraging generative AI to mimic a CEO’s voice or simulate a live video call to authorize illicit transactions, redirect wire transfers, or extract sensitive information.
One of the most widely publicized cases involved a finance employee wiring millions to a Hong Kong account after “speaking” with what appeared to be the CFO on a video call—only to learn later that the CFO was in a different country entirely. These schemes bypass traditional red flags because the AI is convincing enough to override suspicion.
Internal Audit’s Response: We must work closely with cybersecurity, HR, and finance teams to assess controls over communication verification, especially in high-risk areas like treasury and procurement. Multi-channel verification protocols (e.g., voice plus SMS plus in-person confirmation) should be tested regularly. Internal auditors should also evaluate whether their organizations have trained employees on identifying deepfake fraud attempts.
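To make the idea concrete, here is a minimal sketch of what a multi-channel release check might look like in code. The channel names, dollar threshold, and function names are illustrative assumptions, not a prescribed control design.

```python
# Hypothetical sketch of a multi-channel verification gate for high-value
# payments. Channels, threshold, and names are assumptions for illustration.
from dataclasses import dataclass

REQUIRED_CHANNELS = {"callback_to_known_number", "sms_code", "in_person_or_video_id"}
HIGH_VALUE_THRESHOLD = 50_000  # example threshold in payment currency units

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    confirmations: frozenset  # channels that independently confirmed the request

def release_allowed(req: PaymentRequest) -> bool:
    """Allow release only when every required channel has independently confirmed."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return len(req.confirmations) >= 1
    return REQUIRED_CHANNELS.issubset(req.confirmations)

# A convincing deepfake video call alone should never be sufficient:
req = PaymentRequest(2_000_000, "new Hong Kong beneficiary", frozenset({"video_call"}))
print(release_allowed(req))  # False -- independent confirmations are missing
```

The audit question is whether an equivalent rule actually exists in the payment workflow and whether it can be overridden by a single convincing request.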
2. AI-Generated Synthetic Identities
AI tools are now being used to create synthetic identities that are indistinguishable from real ones. These digital personas often pass background checks, generate employment histories, and even receive fraudulent benefits or loans. Banks, insurance companies, and government programs are particularly vulnerable to these identity fabrications.
What makes synthetic identity fraud especially dangerous is its latency—it may take years for these “phantom customers” to default or trigger an investigation. By then, the damage is already done.
Internal Audit’s Response: Review identity verification processes, especially in customer onboarding, HR hiring, and vendor due diligence. Are there analytics in place to detect anomalies across seemingly separate but subtly linked identities? Has the organization implemented biometric or behavioral authentication? Internal audit can add significant value by recommending advanced verification mechanisms that go beyond static data.
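For illustration only, the sketch below shows one simple form such analytics could take: surfacing customer records that look distinct but quietly share an attribute such as a phone number or device fingerprint. The field names and data are assumptions.

```python
# Illustrative only: flag "separate" customer records that share a phone
# number or device fingerprint, a common tell of synthetic-identity rings.
import pandas as pd

customers = pd.DataFrame([
    {"customer_id": "C1", "name": "A. Rivera", "phone": "555-0101", "device_id": "D9"},
    {"customer_id": "C2", "name": "B. Chen",   "phone": "555-0101", "device_id": "D4"},
    {"customer_id": "C3", "name": "C. Osei",   "phone": "555-0199", "device_id": "D9"},
])

def shared_attribute_flags(df: pd.DataFrame, attrs=("phone", "device_id")) -> pd.DataFrame:
    """Return records whose value for any listed attribute is shared by multiple customers."""
    hits = []
    for attr in attrs:
        distinct_owners = df.groupby(attr)["customer_id"].transform("nunique")
        hits.append(df[distinct_owners > 1].assign(shared_on=attr))
    return pd.concat(hits, ignore_index=True)

print(shared_attribute_flags(customers))  # C1/C2 share a phone; C1/C3 share a device
```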
3. AI-Powered Insider Threats
Insider fraud is not new, but AI is making it more potent. Employees or contractors can now use AI to scrape data, manipulate financial systems, or exploit vulnerabilities at scale. Imagine a malicious actor using a generative AI model to write tailored phishing campaigns against coworkers—or an insider using machine learning to identify and exploit gaps in compliance patterns.
Moreover, generative AI makes it easier to erase footprints. Unlike traditional fraud, which often leaves behind emails, logs, or handwritten records, AI-assisted schemes can erase their own traces, making forensic reviews more challenging.
Internal Audit’s Response: Internal auditors must broaden their views of insider risk. Look beyond traditional behavioral monitoring and consider the AI tools employees have access to—both formally and informally. Collaborate with IT and HR to monitor privileged access, usage of generative AI platforms, and any deviation from expected digital behaviors.
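One simple building block for that monitoring is a baseline-deviation check on access logs. The sketch below assumes per-user daily counts of records accessed are available; the data shape and threshold are illustrative, not a recommended standard.

```python
# Minimal sketch: flag a user's daily activity when it deviates sharply from
# that user's own historical baseline. Threshold and data are assumptions.
import statistics

def unusual_activity(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it sits far above the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (today - mean) / stdev > z_threshold

daily_records_accessed = [40, 35, 52, 47, 44, 38, 41]  # hypothetical baseline
print(unusual_activity(daily_records_accessed, today=600))  # True -- worth a closer look
```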
4. AI-Driven Financial Manipulation
From manipulating market sentiment with fake news to auto-generating fictitious invoices, fraudsters are now using AI to commit fraud at scale. Shell companies can be created and populated with AI-generated documentation. Chatbots can convincingly “communicate” as fake vendors. And perhaps most concerning, AI tools can fabricate audit reports, contracts, and bank statements convincing enough to pass cursory reviews undetected.
In one case, a mid-sized company unknowingly paid nearly $500,000 to an AI-generated shell entity that had successfully mimicked the look and tone of a legitimate supplier, including cloned email threads and authentic-looking vendor documents.
Internal Audit’s Response: Auditors should evaluate how the company verifies supplier authenticity and invoice legitimacy. Incorporate AI-based tools into audit procedures to perform anomaly detection in payment patterns, vendor creation processes, and invoice formatting. Use your own AI tools to fight AI-driven fraud—this is one battleground where technological parity matters.
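As a concrete, if simplified, illustration of that parity, the sketch below runs a generic unsupervised model over a few invoice features to surface an outlier. The features, data, and thresholds are assumptions; any real audit analytic would be tuned to the organization’s own payment population.

```python
# Sketch of unsupervised anomaly detection over invoice features using a
# generic model (IsolationForest). Data and features are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount, days since vendor was created, count of prior paid invoices
invoices = np.array([
    [  1_200, 800, 34],
    [    950, 640, 21],
    [  1_100, 720, 28],
    [487_000,   6,  0],   # large payment to a brand-new vendor with no history
])

model = IsolationForest(contamination=0.25, random_state=0).fit(invoices)
flags = model.predict(invoices)   # -1 marks likely outliers
print(invoices[flags == -1])      # surfaces the suspicious invoice for follow-up
```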
5. AI-Augmented Money Laundering and Cryptocurrency Fraud
Cryptocurrency fraud is nothing new, but as Lucinity recently observed, AI is giving money launderers powerful new tools to evade detection. Sophisticated AI algorithms are now being used to layer transactions, obscure crypto flows, and mimic legitimate behavior. These tools can automatically adjust laundering strategies based on real-time feedback, making traditional compliance rulesets increasingly obsolete.
Some fraudsters are even using AI to scan jurisdictions around the world for regulatory gaps and weak enforcement environments to exploit.
Internal Audit’s Response: If your company deals in digital assets or has exposure to cryptocurrency, now is the time to upskill. Ensure your audit team understands blockchain tracing tools, smart contract risks, and the evolving landscape of AI-driven crypto fraud. Evaluate the effectiveness of anti-money laundering (AML) controls and recommend enhancements where traditional models fall short.
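To show the kind of pattern those AML enhancements target, here is a deliberately simple layering heuristic: wallets that forward nearly everything they receive within minutes, hop after hop. Real tracing relies on specialized blockchain analytics platforms; the thresholds and data below are assumptions for illustration.

```python
# Hedged illustration of a pass-through (layering) heuristic. Real-world
# tracing uses specialized tools; thresholds and data here are assumptions.
from collections import defaultdict

# (minutes, from_address, to_address, amount) -- hypothetical transfer chain
transfers = [
    (0, "exchA", "w1", 10.0),
    (3, "w1",    "w2",  9.9),
    (5, "w2",    "w3",  9.8),
    (9, "w3",    "exchB", 9.7),
]

def passthrough_wallets(txs, max_gap_minutes=10, min_forward_ratio=0.95):
    """Return wallets that quickly forward nearly all of what they receive."""
    inflow, outflow = defaultdict(list), defaultdict(list)
    for t, src, dst, amt in txs:
        outflow[src].append((t, amt))
        inflow[dst].append((t, amt))
    flagged = set()
    for wallet in set(inflow) & set(outflow):
        for t_in, amt_in in inflow[wallet]:
            for t_out, amt_out in outflow[wallet]:
                if 0 <= t_out - t_in <= max_gap_minutes and amt_out >= min_forward_ratio * amt_in:
                    flagged.add(wallet)
    return flagged

print(passthrough_wallets(transfers))  # flags w1, w2, and w3 as pass-through hops
```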
Final Thoughts
The rise of AI-enabled fraud isn’t just a cybersecurity issue. It’s a governance and assurance issue. And it’s one internal audit cannot afford to ignore.
We must become more technologically literate, more agile, and more connected to emerging risks than ever before. The fraud landscape of 2025 is dynamic, deceptive, and digitally sophisticated. But internal auditors—armed with curiosity, skepticism, courage, and the right tools—can still stay one step ahead.
We’ve done it before, and we can do it again. But only if we commit to evolving as fast as the risks we’re entrusted to assess.
Stay vigilant. Stay curious. Stay ahead.
I welcome your comments via LinkedIn or Twitter (@rfchambers).