Deepfake incidents at fintechs rose 700% in 2023. Bad actors are using AI models to generate fake photos and audio, creating serious risks.
With deepfakes growing ever more sophisticated, the potential for fraud and deception grows as well. Companies and individuals need to be alert to the risks of manipulated video and imagery, which can be used maliciously to spread false information and damage the reputations of people and organizations.
Deepfakes and forged videos pose a challenge for modern society, which needs to develop effective mechanisms to combat this form of digital manipulation. The spread of deepfakes can have serious and irreparable consequences, demanding greater care and attention from all internet users. Awareness of the dangers of these practices is essential to safeguard the security and privacy of everyone involved.
Deepfakes and the Business World
‘But the ability of these AI models to now mimic the actual voice patterns of an individual giving instructions to someone over the phone to do something – these types of risks are completely new,’ said Bill Cassidy, Chief Information Officer at New York Life. Banks and financial service providers are among the first companies to be targeted.
The Challenges of Deepfakes for Businesses
‘This space is moving very fast,’ said Kyle Kappel, cyber leader at KPMG in the US. The speed was demonstrated earlier this month, when OpenAI introduced a technology that can recreate a human voice from a 15-second clip.
How Are Companies Preparing for Forged Videos?
OpenAI said it wouldn’t release the technology publicly until more was known about the potential risks of misuse. Among the concerns is the possibility of bad actors using AI-generated audio to manipulate voice authentication software used by financial services companies to verify clients and grant them access to their accounts.
The Importance of Defending Against Deepfakes in the Financial Sector
Chase Bank was recently deceived by an AI-generated voice during an experiment. Deepfake incidents in the fintech sector increased by 700% in 2023 compared to the previous year, according to a recent report from the identity verification platform Sumsub. Companies say they are working to implement more protection barriers to prepare for a wave of generative AI-powered attacks.
Companies Join Forces Against Deepfake Technologies
For example, Cassidy said they are working with New York Life’s venture capital group to identify startups and emerging technologies designed to combat deepfakes. ‘In many cases, the best defense against this generative AI threat is some form of generative AI on the other side,’ he said.
Identity Verification Challenges in the Deepfake Era
Criminals can also use AI to generate fake driver’s license photos to set up online accounts, so Alex Carriles, digital director at Simmons Bank, said they are changing some identity verification protocols. Previously, a step to open an online account at the bank involved customers uploading driver’s license photos.
New Strategies for Online Security
Now that driver’s license images can be easily generated with AI, the bank is working with the security provider IDScan.net to improve the process. Instead of sending a pre-existing photo, Carriles said customers must now take photos of their driver’s licenses through the bank’s app and then take selfies.
The Importance of Protecting Against Deepfake Risks
To prevent fraudsters from holding a camera up to a screen displaying an AI-generated face of someone else, the app instructs users to look left, right, up, or down, since a generic AI deepfake may not be prepared to follow those prompts in real time.
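The head-movement check described above amounts to a randomized challenge-response test. The sketch below is a minimal illustration of that idea, not IDScan.net's or Simmons Bank's actual implementation; in a real system each observed pose would come from a face-tracking model analyzing live camera frames, which the `observed_poses` list stands in for here.

```python
import random

# The set of prompts the app can issue, per the article.
CHALLENGES = ["left", "right", "up", "down"]

def build_challenge(n=3, rng=random):
    # Randomizing the sequence for every session means a pre-rendered
    # deepfake video cannot simply replay a known set of movements.
    return [rng.choice(CHALLENGES) for _ in range(n)]

def verify_liveness(challenge, observed_poses):
    # Pass only if the observed head poses match the prompted sequence
    # in order; a face played back from a screen is unlikely to follow
    # fresh prompts as they appear.
    if len(observed_poses) != len(challenge):
        return False
    return all(seen == asked for seen, asked in zip(observed_poses, challenge))
```

A production version would also enforce a short time limit per prompt, so that even an attacker driving a deepfake interactively cannot keep up.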
Source: InfoMoney