
Awesome_GPT_Super_Prompting

ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, prompt security, AI prompt engineering, and adversarial machine learning.

Maintained by CyberAlbSecOP

Project Information

GitHub Stars: 2,193
Language: Unknown
Last Updated: April 19, 2025 at 12:46 AM

Topics

adversarial-machine-learning
chatgpt
gpt
gpt-3
gpt-4
hacking
jailbreak
leaks
llm
prompt-engineering
prompt-injection
prompts
agent
ai
assistant
prompt-security
system-prompt
