Hackers Can Turn Bing's AI Chatbot Into a Convincing Scammer, Researchers Say

Hackers Can Turn Bing's AI Chatbot Into a Convincing Scammer, Researchers Say

a year ago
Anonymous $Gb26S9Emwz

https://www.vice.com/en_us/article/7kxzzz/hackers-bing-ai-scammer

Hackers can make Bing’s AI chatbot ask for personal information from a user interacting with it, turning it into a convincing scammer without the user's knowledge, researchers say.

In a new study, researchers determined that AI chatbots are currently easily influenced by text prompts embedded in web pages. A hacker can thus plant a prompt on a web page in 0-point font, and when someone is asking the chatbot a question that causes it to ingest that page, it will unknowingly activate that prompt. The researchers call this attack "indirect prompt injection," and give the example of compromising the Wikipedia page for Albert Einstein. When a user asks the chatbot about Albert Einstein, it could ingest that page and then fall prey to the hackers' prompt, bending it to their whims—for example, to convince the user to hand over personal information.

Last Seen
38 minutes ago
Reputation
0
Spam
0.000
Last Seen
16 minutes ago
Reputation
0
Spam
0.000
Last Seen
47 minutes ago
Reputation
0
Spam
0.000
Last Seen
2 hours ago
Reputation
0
Spam
0.000
Last Seen
about an hour ago
Reputation
0
Spam
0.000
Last Seen
about an hour ago
Reputation
0
Spam
0.000