A present danger: deepfakes in financial services and how to respond
Three years ago, if you had asked any security professional or KYC specialist in the financial services industry about the top three security threats, it’s unlikely that deepfakes would have featured at all.
Today, however, deepfakes are wreaking havoc across industries, and within traditional and social media ecosystems, altering our relationship with reality and delivering frequent and often highly successful fraud campaigns. Financial institutions must act fast, as deepfake technology and the tools and methods behind digital deception are evolving faster than ever.
**The threat landscape as it stands**
According to a recent report by Deloitte, AI-driven fraud attacks are set to cause the financial services industry losses of $40 billion by 2027 in the US alone. Gartner also reports that 62% of organisations experienced a generative AI attack over a 12-month period.
AI-powered attack vectors are increasing in sophistication, with high-profile examples of fraud being reported regularly. Voice cloning technology has enabled AI-powered bots to conduct entire conversations that sound exactly like the cloned target, matching diction and tone with impressive fidelity. All fraudsters need is 20 to 30 seconds of recorded speech to feed into a generative AI model to clone a voice. When we consider that many C-level executives and senior spokespeople regularly speak at industry events, and on panels and podcasts, it's no surprise that these incidents are becoming more common.
Deepfake technology can now even fool biometric facial ID verification tools, marking a watershed moment for adversaries. Biometrics, and particularly facial recognition, were once thought to be the most secure means of verification, but fraudsters are now able to bypass in-app onboarding checks through ‘video injection attacks’ and sophisticated 3D masking techniques.
A combination of these techniques can also deliver highly convincing deepfake spoofs via video calls, in which victims are fooled by an entirely AI-generated senior executive or family member asking them to provide access to a system or to transfer money to a new account. When we consider that a finance worker in Hong Kong was fooled into transferring $25 million following such an attack in 2024, and that the technology has improved significantly in the last year alone, it's clear that financial institutions must implement stronger security and detection practices and solutions.
The wider implications of deepfakes
While deepfakes are driving significant success for fraudsters in terms of immediate paydays, there are many other ways cybercriminals and state-sponsored actors are using the technology to infiltrate organisations and even attempt to destabilise markets.
Last year, it was revealed that North Korean hackers were using AI deepfakes and stolen identities to fake job interviews for software engineering positions at cryptocurrency, Web3 and fintech companies. The intention was to conduct espionage, but hackers from the country have also managed to steal more than $2 billion of crypto to date globally, revealing the increasingly complex and interconnected cybercrime landscape.
Deepfakes across social media platforms are also posing a risk to financial markets. Early this year, Sundararaman Ramamurthy, CEO of the Bombay Stock Exchange, became the victim of a deepfake attack when a video of him apparently sharing trading advice spread across social media platforms. While the impact of the deepfake was difficult to determine, the example reveals the danger digital deception poses to countries and financial institutions.
How the financial services sector can respond
The challenge for financial services organisations is a familiar one: legacy verification tools and processes were not designed to detect AI-generated content. With attacks now combining modalities, for example pairing voice cloning with video deepfakes, many organisations will understandably be playing catch-up when it comes to delivering a defensive posture that is fit for purpose.
The most immediate strategy for defending against deepfake attacks is to address behaviour through education. Security frameworks and policies must be updated, with employees educated about the new threats that deepfake and AI-generated attacks pose. It is also essential to establish strict escalation processes and reinforce the usual cyber sense checks around phishing techniques, such as treating with suspicion any urgent request from a senior executive to do something out of the ordinary.
When it comes to technological controls, there are a number of priority investments that should be made to directly address the new vulnerabilities exposed by deepfakes. For KYC and identity verification processes, advanced solutions that are able to elevate liveness detection thresholds should be prioritised. As multi-modal attacks advance, multi-modal biometric verification systems that can correlate audio, spatial and temporal signals during identity proofing processes will be invaluable.
Real-time anomaly detection solutions can also be embedded into unified communications platforms, helping firms avoid the unfortunate fate of the aforementioned Hong Kong finance worker.
With daily stories about how AI is helping to defraud, discredit and destabilise businesses, politicians, governments, and vulnerable people online, it's crucial that organisations act fast to assess their exposure to the current threat landscape. The old cybersecurity adage holds true: an organisation is only as secure as its weakest link. People make mistakes, and it takes only one click on a malicious link or one opened file for adversaries to gain access to systems. However, as cybercriminals increase and scale their use of AI-powered attacks, advanced solutions that detect and flag anomalies in real time will be the essential partner to more security-conscious employees.