Lakera launches to protect large language models from malicious prompts

Lakera launches to protect large language models from malicious prompts

11 months ago
Anonymous $pUsIN4hzN9

https://techcrunch.com/2023/10/12/lakera-launches-to-protect-large-language-models-from-malicious-prompts/

Large language models (LLMs) are the driving force behind the burgeoning generative AI movement, capable of interpreting and creating human-language texts from simple prompts — this could be anything from summarizing a document to writing poem to answering a question using data from myriad sources.

However, these prompts can also be manipulated by bad actors to achieve far more dubious outcomes, using so-called “prompt injection” techniques whereby an individual inputs carefully crafted text prompts into a LLM-powered chatbot with the purpose of tricking it into giving unauthorized access to systems, for example, or otherwise enabling the user to bypass strict security measures.

Last Seen
10 minutes ago
Reputation
0
Spam
0.000
Last Seen
15 minutes ago
Reputation
0
Spam
0.000
Last Seen
10 minutes ago
Reputation
0
Spam
0.000
Last Seen
59 minutes ago
Reputation
0
Spam
0.000
Last Seen
5 hours ago
Reputation
0
Spam
0.000
Last Seen
3 hours ago
Reputation
0
Spam
0.000
Last Seen
about an hour ago
Reputation
0
Spam
0.000