Hackers Can Turn Bing's AI Chatbot Into a Convincing Scammer, Researchers Say
https://www.vice.com/en_us/article/7kxzzz/hackers-bing-ai-scammer
Hackers can make Bing’s AI chatbot ask for personal information from a user interacting with it, turning it into a convincing scammer without the user's knowledge, researchers say.
In a new study, researchers determined that AI chatbots are currently easily influenced by text prompts embedded in web pages. A hacker can thus plant a prompt on a web page in 0-point font, and when someone is asking the chatbot a question that causes it to ingest that page, it will unknowingly activate that prompt. The researchers call this attack "indirect prompt injection," and give the example of compromising the Wikipedia page for Albert Einstein. When a user asks the chatbot about Albert Einstein, it could ingest that page and then fall prey to the hackers' prompt, bending it to their whims—for example, to convince the user to hand over personal information.