The world is witnessing a surge in AI-based cyber attacks as cybercriminals increasingly develop sophisticated AI-driven hacking tools to exploit the growing use of artificial intelligence in enterprises, according to a recent report by Trend Micro.

The study reveals that cybercriminals of all skill levels are now more equipped than ever to launch large-scale attacks aimed at extortion, identity theft, fraud, and misinformation. These developments highlight the urgent need for organisations to bolster their defences against data theft and financial loss.

"Generative AI is revolutionising innovation in the Sultanate of Oman and across the region by automating tasks, enhancing customer experiences, and aiding in critical decision-making processes. However, this same technology has also given rise to new Deepfake tools that make it alarmingly easy for cybercriminals to execute damaging scams, social engineering attacks, and security breaches," said Dr Moataz bin Ali, Regional Vice President and Managing Director, MMEA, Trend Micro.

Globally, Deepfakes pose a significant threat to both enterprises and individuals.

A survey by PwC found that over 50 per cent of regional respondents believe this technology could lead to cyberattacks within the next year. The report warns that undetected Deepfakes can result in severe consequences, including financial losses, job terminations, legal issues, reputational damage, identity theft, and potential harm to mental or physical well-being.

Recognising the severity of these threats, Omani authorities and cybersecurity experts in the region are intensifying efforts to raise awareness and educate the public about the risks associated with Deepfakes and AI-based cyber attacks.

"This technology is being exploited to bypass not only human verification but also biometric security measures like facial recognition. Beyond traditional methods like image noise analysis and colour detection, we focus on analysing user behavioural patterns to provide a more robust approach to detecting and stopping Deepfakes. Once detected, our system immediately alerts enterprise security teams, enabling them to take proactive measures against future attacks," added Dr Moataz.

He further emphasised that tools for protecting against data theft and financial loss are now more affordable and accessible than ever. Tools such as 'Deepfake Inspector' can verify whether a participant in a live video conversation is using Deepfake technology, alerting users to potential impostors.

As AI continues to shape the future, it is crucial for organisations and individuals alike to stay vigilant and adopt advanced security measures to protect themselves from the evolving landscape of cyber threats.

2022 © All rights reserved for Oman Establishment for Press, Publication and Advertising (OEPPA). Provided by SyndiGate Media Inc. (Syndigate.info).