The EU AI Act and telecoms fraud

The European Union’s AI Act, which came into force on 1 August 2024 across the bloc’s 27 member states, establishes a comprehensive regulatory framework for AI applications in the EU. The Act classifies AI systems into categories based on their potential to cause harm: systems that pose unacceptable risks are banned, while high-risk systems must comply with stringent requirements on transparency, safety and accountability.

We recently spoke with Gavin Stewart, Vice President for Sales at telecoms software company Oculeus, about the impact of the Act on communications service providers in the European Union and how it influences their strategies to prevent telecom fraud.

What service providers and network operators need to know

According to Stewart, the EU’s AI Act impacts all telecoms service providers in the region. While the Act mandates a 24-month compliance timeline, some requirements will take effect as early as February 2025, leaving operators with limited time to implement the necessary changes. “Operators have only a few months to interpret the regulation, reduce it to actionable tasks and deliver the changes to ensure they are ready,” he said.

“The EU’s AI Act is basically about empowering citizens with additional consumer rights that directly impact what enterprise organisations can and cannot do with AI, and how they do it. Complying with the Act is likely to involve significant changes to organisational process and technology,” Stewart explained.

In Stewart’s view, much as the GDPR did for data protection, the AI Act is likely to set a global standard for AI regulation, prompting telecoms operators worldwide to adopt similar frameworks in line with the new Act.

In view of these developments, addressing the Act’s complex regulatory requirements becomes a priority for telecoms organisations. One of its earliest deadlines fell in November 2024, by which member states had to identify the national authorities responsible for protecting fundamental rights in each sector. Within telecoms organisations, AI governance functions are still evolving and often lack a designated point of focus. Stewart expects AI governance in telecoms to develop as a collaborative effort across IT, technology, policy management and Corporate Social Responsibility (CSR) teams, because it extends beyond regulatory compliance to encompass emerging best practices in ethics and risk management. “The governance requirement itself is in a state of evolution, as the rapid development of AI technologies poses new challenges that existing IT governance professionals must identify and account for,” he said.

Implications for preventing telecoms fraud

As in many other industries, AI is a double-edged sword in telecoms: fraudsters are increasingly using AI to carry out fraud, while operators are using AI to detect and prevent it. Sophisticated AI tools have made it easier for fraudsters to evade detection, and traditional anti-fraud tools, which rely on detecting known patterns, struggle to keep up with these advanced tactics. Operators that fall victim to such schemes can face severe penalties and other legal consequences.

For example, a recent incident in the US involved a deepfake robocall mimicking President Joe Biden, which resulted in a $1 million fine for the telecoms provider that unknowingly carried the fraudulent calls. The case highlights the risks of AI-driven telecoms scams and the need for state-of-the-art strategies to identify and block such activity on operators’ networks.

Against this backdrop, the new EU AI Act gives citizens additional protections. In the event of a suspicious occurrence, citizens can file complaints about an AI system and seek explanations of the decisions it has made. In both scenarios, organisations will need to provide clear audit trails to prove compliance, as blaming AI for errors will not be an acceptable defence. In practice, this affects any software or system that employs AI or other automated decision-making techniques. The appropriate use of automated decision-making is already covered by the GDPR, indicating that significant compliance complexity lies ahead for telecoms organisations.

Oculeus’ AI-powered telecom fraud protection strategy

“For an anti-fraud technology provider like Oculeus, AI really excels at recognising very subtle patterns in very large data sets. These are either beyond the abilities of humans to spot or would require excessive time spent on analysis,” Stewart said. Because telecoms fraud evolves rapidly, AI is essential to “see the unseen,” he added.

“By using AI on the metadata of telecoms calls, Oculeus is able to identify emerging patterns of new, fraudulent behaviour that may involve different parties, like subscribers, B2B customers or business partners, including a community of interconnect and wholesale providers that handle call traffic across several networks,” he said.
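
Stewart did not go into implementation detail, but the general idea of mining call metadata for emerging patterns can be illustrated with a minimal sketch in Python. The example is purely hypothetical: the CallRecord fields, the z-score test and the threshold are illustrative assumptions, not a description of Oculeus’ actual system.

    # Hypothetical sketch: flag destinations whose hourly call volume deviates
    # sharply from their own recent baseline, using call metadata only.
    from collections import defaultdict
    from dataclasses import dataclass
    from statistics import mean, pstdev

    @dataclass
    class CallRecord:            # illustrative metadata fields, no call content
        caller: str
        destination: str
        duration_sec: int
        hour_bucket: int         # e.g. hours since epoch

    def flag_anomalies(records, z_threshold=3.0):
        """Return destinations whose latest hourly volume is a statistical outlier."""
        per_dest = defaultdict(lambda: defaultdict(int))
        for r in records:
            per_dest[r.destination][r.hour_bucket] += 1

        flagged = []
        for dest, per_hour in per_dest.items():
            hours = sorted(per_hour)
            if len(hours) < 4:                     # not enough history for a baseline
                continue
            baseline = [per_hour[h] for h in hours[:-1]]
            latest = per_hour[hours[-1]]
            mu, sigma = mean(baseline), pstdev(baseline)
            if sigma > 0 and (latest - mu) / sigma > z_threshold:
                flagged.append((dest, latest, round(mu, 1)))
        return flagged

A production system would draw on many more signals (routes, number reputation, interconnect partners, cost) and use trained models rather than a single statistical test, but the underlying principle of learning a baseline from metadata and surfacing deviations is the same.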

Stewart described Oculeus’ multi-pronged AI approach, which evaluates traffic at three levels: monitoring general traffic behaviour to identify changes, assessing individual calls for fraud risk based on known patterns, and reviewing incidents that have not reached the fraud threshold but exhibit suspicious elements, flagging them for further investigation. As a result, Stewart said, Oculeus customers can block more fraud in less time.
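
A rough way to picture that three-level flow is the sketch below. It is only an illustration of the idea described above; the scores, thresholds and rule names are invented for the example and do not reflect Oculeus’ actual parameters.

    # Hypothetical three-level evaluation: (1) traffic-level monitoring,
    # (2) per-call risk scoring, (3) a review queue for suspicious calls
    # that fall below the blocking threshold.
    BLOCK_THRESHOLD = 0.9        # illustrative values only
    REVIEW_THRESHOLD = 0.6

    def traffic_shift_detected(current_calls_per_min, baseline_calls_per_min):
        """Level 1: has overall traffic behaviour changed markedly?"""
        return current_calls_per_min > 2 * baseline_calls_per_min

    def call_risk_score(call):
        """Level 2: score an individual call against known fraud patterns."""
        score = 0.0
        if call.get("premium_rate_destination"):
            score += 0.5
        if call.get("duration_sec", 0) < 5:        # e.g. wangiri-style short calls
            score += 0.3
        if call.get("unusual_route"):
            score += 0.2
        return min(score, 1.0)

    def evaluate(call, current_cpm, baseline_cpm, review_queue):
        """Decide whether to block, flag for human review, or allow a call."""
        score = call_risk_score(call)
        if traffic_shift_detected(current_cpm, baseline_cpm):
            score = min(score + 0.1, 1.0)          # raise sensitivity under anomalous load
        if score >= BLOCK_THRESHOLD:
            return "block"
        if score >= REVIEW_THRESHOLD:              # Level 3: suspicious but not conclusive
            review_queue.append(call)
            return "flag_for_review"
        return "allow"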

Oculeus has incorporated AI technology into its anti-fraud products to support evaluation and case-validation processes in accordance with the EU AI Act, and AI governance principles are embedded in the company’s technologies, Stewart said. He emphasised the system’s ability to generate transparent audit trails of activities and decisions and to ensure human oversight in the decision-making process. “This way, we already equip our customers with the necessary tools to meet the compliance requirements of the AI Act.”
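
The audit-trail and human-oversight requirements can be pictured as recording, for every automated decision, what was decided, why, and whether a person reviewed it. The record fields in the sketch below are illustrative assumptions rather than the actual schema of any Oculeus product.

    # Hypothetical audit-trail entry for one automated fraud decision.
    import json
    from datetime import datetime, timezone

    def audit_record(call_id, decision, risk_score, rules_triggered, reviewed_by=None):
        """Build a transparent, explainable record of a single automated decision."""
        return {
            "call_id": call_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,                # e.g. "block" or "flag_for_review"
            "risk_score": risk_score,
            "rules_triggered": rules_triggered,  # the explanation a complainant could request
            "human_reviewer": reviewed_by,       # populated once a person confirms the decision
        }

    print(json.dumps(
        audit_record("c-123", "block", 0.93,
                     ["premium_rate_destination", "short_duration"],
                     reviewed_by="analyst_42"),
        indent=2))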
