"Another AI Pitfall: Digital Mirroring Opens New Cyberattack Vector"
"Digital twins" or Artificial Intelligence (AI) assistants trained to serve needs by learning about and, in some ways imitating users, can be turned against people in various ways. According to Ben Sawyer, a professor at the University of Central Florida, and Matthew Canham, the CEO of Beyond Layer Seven, despite the uproar over how Large Language Models (LLMs) will allow hackers to create increasingly sophisticated phishing emails, vishing calls, and bots, this type of activity is nothing new. There is already much discussion regarding the insecurity of LLMs, as both researchers and attackers experiment with breaking and manipulating them. Today's social engineering attacks rely on an attacker's ability to closely imitate familiar entities such as coworkers or brands. Sawyer and Canham believe that the future of social engineering will be defined by AI's ability to imitate people and manipulate subconscious preferences. This article continues to discuss how LLMs can be hacked as well as the use of AI to build digital personas to make it easier for malicious actors to create more convincing attacks.
Dark Reading reports "Another AI Pitfall: Digital Mirroring Opens New Cyberattack Vector."