Oracle is bringing the power of artificial intelligence (AI) to the fight against financial crime, addressing the persistent issues of false positives and high investigation costs that plague financial institutions.
In an interview with Computer Weekly on the sidelines of Oracle CloudWorld Tour Singapore, Sonny Singh, executive vice-president and general manager of Oracle Financial Services, said traditional rule-based approaches to financial crime detection often generate a deluge of alerts, with “90% of the alerts generated typically false positives”.
“Imagine you were in a business where 90% of the work you were doing was going to result in a waste of time,” Singh said, underscoring the urgency of the problem. “Larger banks could be spending anywhere from $600m to $1bn on the resource costs of these investigations.”
Oracle is tackling the problem through AI in two ways: prioritising alerts for financial crime investigators, and automating investigative processes, such as gathering and enriching data from different sources.
“AI can help you optimise the alerts that you should truly be pursuing,” Singh said, adding that by leveraging AI models and generative AI capabilities, alerts with higher probabilities of indicating criminal activity can be prioritised, allowing investigators to focus their efforts effectively.
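The triage approach Singh describes can be illustrated with a minimal sketch. The `Alert` structure, the score field and the threshold value here are all hypothetical, standing in for whatever risk probability a bank's detection model emits; the point is simply that alerts are filtered and ranked so investigators see the highest-risk cases first.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    score: float  # hypothetical model-estimated probability of genuine financial crime

def prioritise(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Discard alerts below a risk threshold and return the rest, highest-scoring first."""
    return sorted(
        (a for a in alerts if a.score >= threshold),
        key=lambda a: a.score,
        reverse=True,
    )

alerts = [Alert("A-1", 0.12), Alert("A-2", 0.91), Alert("A-3", 0.67)]
queue = prioritise(alerts)
print([a.alert_id for a in queue])  # → ['A-2', 'A-3']
```

In a real deployment the scores would come from trained models rather than constants, but the work-saving logic is the same: the 90% of alerts that are likely false positives never reach an investigator's queue.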
On automating investigative processes, Oracle recently introduced agentic AI capabilities in its Investigation Hub service designed to surface key insights, collect evidence and generate alert narratives, providing a view of the entities and transactions under investigation.
To achieve this, Oracle uses graph technology to track layered transactions, a common tactic used by money launderers to obscure the origin and movement of funds. By visualising the flow of funds, investigators can more easily identify suspicious patterns and networks.
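A simple way to see what graph-based tracing buys investigators is a breadth-first walk over a transaction graph. The account names and adjacency structure below are invented for illustration, not drawn from any Oracle product; the sketch shows how following transfer edges outward from a source account exposes a layering chain that would be invisible in a flat list of transactions.

```python
from collections import deque

# Hypothetical transaction graph: payer account -> list of payee accounts.
transfers = {
    "acct_A": ["shell_1", "shell_2"],
    "shell_1": ["shell_3"],
    "shell_2": ["shell_3"],
    "shell_3": ["acct_Z"],
}

def trace_flows(graph: dict[str, list[str]], source: str) -> list[tuple[str, str]]:
    """Breadth-first walk returning every transfer edge reachable from source,
    exposing layered chains used to obscure the origin of funds."""
    seen, edges, queue = {source}, [], deque([source])
    while queue:
        payer = queue.popleft()
        for payee in graph.get(payer, []):
            edges.append((payer, payee))
            if payee not in seen:
                seen.add(payee)
                queue.append(payee)
    return edges

# Funds fan out from acct_A through shell accounts and converge on acct_Z.
print(trace_flows(transfers, "acct_A"))
```

The converging paths through `shell_3` are exactly the kind of pattern that becomes obvious once the flow of funds is visualised as a network rather than as individual transfers.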
Singh said this allows investigators to analyse “higher quality alerts that have a higher propensity to reflect financial crime” while equipped with “much better tools in order to investigate and identify if this is something that needs to be reported to law enforcement”.
“And when you report to law enforcement, you must fill out reports, and generative AI can help you look at all the data, identify elements that should go into that report, and automatically generate those reports — with human oversight as a final check,” he added.
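The report-drafting step Singh describes can be sketched in miniature. In practice a large language model would generate the narrative and an analyst would review it before filing; the case fields and wording below are purely hypothetical, showing only how structured evidence from an investigation feeds a draft narrative.

```python
# Hypothetical structured evidence gathered during an investigation.
case = {
    "subject": "acct_A",
    "total_amount": 250_000,
    "currency": "USD",
    "pattern": "layering through shell accounts",
    "counterparties": ["shell_1", "shell_2", "shell_3"],
}

def draft_narrative(case: dict) -> str:
    """Assemble a draft report narrative from structured case data.
    A generative model would produce richer prose; a human analyst
    reviews the draft before it is submitted to law enforcement."""
    return (
        f"Account {case['subject']} moved {case['total_amount']:,} "
        f"{case['currency']} via {case['pattern']}, involving "
        f"{len(case['counterparties'])} intermediary accounts: "
        + ", ".join(case["counterparties"]) + "."
    )

print(draft_narrative(case))
```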
Addressing concerns about governance and the potential risks of AI agents, Singh said Oracle provides model management capabilities that allow financial institutions to demonstrate transparency and fulfil their regulatory responsibilities by showing how their models are explainable and governed.
Oracle also employs “compliance agents” – AI-powered bad actors – to test the thresholds of a bank’s financial controls and identify vulnerabilities. “They are almost like ethical hackers of your financial controls,” Singh said.
While acknowledging that AI agent technology is still in its early stages, Singh said Oracle is building observability and monitoring capabilities into its AI offerings, ensuring responsible and effective deployment.
“That type of instrumentation has to be built in, and we are incorporating that where applicable,” Singh said. “We are also learning the process of how this is done. Oracle provides the technological tooling to be able to create observability inside of these applications and we are constantly evolving that capability.”
Singh noted that larger financial institution customers with very stringent data privacy and sovereignty requirements tend to deploy Oracle’s AI capabilities in a private cloud.
“This type of data has reputational elements, and they are very nervous about information about their banks being used as conduits for money laundering, so there’s a lot of focus on keeping this environment very controlled. Many of them will automatically default to doing this inside of their datacentres.
“Not all workloads are moving to the cloud, even now,” Singh added, noting that over time, however, large financial institutions will likely progress towards public cloud deployments for their AI workloads.
Besides Oracle, SymphonyAI, a software company that builds AI capabilities for specific industries and use cases, has also been active in the fight against financial crime, having built a technology platform used by major financial institutions to investigate and manage money laundering cases.
Mike Foster, the former CEO of SymphonyAI’s anti-money laundering technology specialist Sensa-NetReveal, told Computer Weekly that the company’s advantage lies not only in its platform, which has a heritage of over 20 years, but also in its industry expertise and the predictive AI models developed within the SymphonyAI business.