LLM Adversarial Attacks - Prompt Injection
Автор: Fahd Mirza
Загружено: 2023-08-01
Просмотров: 342
Описание:
Prompt hacking and prompt injections are on the rise. Large language models (LLMs) like ChatGPT, Bard, or Claude undergo extensive fine-tuning to not produce harmful content in their responses to user questions.
#aisecurity #llmsecurity #llmattacks #llamattack
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ YouTube: / @fahdmirza
▶ Blog: https://www.fahdmirza.com
RELATED VIDEOS:
▶ Prompt Engineering 101 for Beginners • Prompt Engineering 101 for Beginners
▶ Introduction to AWS Bedrock • Amazon Bedrock Introduction
▶ LLM Attacks https://github.com/llm-attacks/llm-at...
▶ LLM Attack Example https://llm-attacks.org
All rights reserved © 2021 Fahd Mirza
Повторяем попытку...
Доступные форматы для скачивания:
Скачать видео
-
Информация по загрузке: