The industrial revolution saw humanity create machines that could replicate and enhance human effort. Machines replaced muscles and the world’s productive capabilities expanded exponentially. Artificial intelligence (AI) holds the same potential. Cognitive ability can be enhanced and innovation prioritized over administration, as AI removes the need for repetitive work. We have yet to scratch the surface of what this nascent technology will enable us to do. It may help us create new cures and mitigate or solve our greatest challenges.
But what about when that technology falls into the wrong hands? Or rather, is seized – and developed – by individuals who use it to defraud businesses of their hard-earned revenue? Increasingly sophisticated technology is evolving the way criminals carry out social engineering attacks and, with the help of AI, has given rise to a new wave of cyber security risk: the deepfake scam.
Social engineering scams are fraudulent schemes in which a criminal poses as a figure of authority within a company and convinces an employee to transfer funds to a false account. These crimes are not a new phenomenon. Nor are phishing attacks, which trick employees into sharing information or clicking on seemingly innocent links purporting to come from a trusted source, enabling a cyber-criminal to access an IT network and steal valuable data and/or money.
Historically, criminals’ methods were limited to emails or text messages mimicking the vocabulary and style of the executive or firm being impersonated. Today, the sophistication of AI means criminals can replicate voices on audio calls and faces in images and video. Known as a ‘deepfake’, this deviously sophisticated technology makes it trickier than ever to distinguish between reality and AI manipulation.