OWASP Top 10 Vulnerabilities in Large Language Models (LLMs)
We will present the most critical security risks associated with the use of LLMs in companies, describing each vulnerability and illustrating it with examples.
These include model behavior manipulation, data poisoning, prompt injection, security breaches, and hallucinations. A dedicated chapter will also focus on AI Agents and the risks they involve.
Our aim is to clarify the AI security risks companies face and to outline some mitigation approaches.
Speakers
Organizers
