Topic: "prompt-injection-llm-security"
Repello-AI/whistleblower
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
Language: Python - Size: 48.8 KB - Last synced at: 10 days ago - Pushed at: 10 months ago - Stars: 119 - Forks: 10

microsoft/llmail-inject-challenge-analysis
Data Analysis of the results of llmail-inject challenge
Language: Jupyter Notebook - Size: 18 MB - Last synced at: 1 day ago - Pushed at: 21 days ago - Stars: 1 - Forks: 0

0x6f677548/copilot-instructions-unicode-injection
Proof of Concept (PoC) demonstrating prompt injection vulnerability in AI code assistants (like Copilot) using hidden Unicode characters within instruction files (copilot-instructions.md). Highlights risks of using untrusted instruction templates. For educational/research purposes only.
Size: 1.48 MB - Last synced at: 5 days ago - Pushed at: 27 days ago - Stars: 1 - Forks: 0

AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection
Lakera Gandalf AI challenge's step by step walkthrough, showcasing real-world prompt injection techniques and LLM security insights.
Size: 4.88 KB - Last synced at: about 2 months ago - Pushed at: about 2 months ago - Stars: 1 - Forks: 0

AmanPriyanshu/FRACTURED-SORRY-Bench-Automated-Multishot-Jailbreaking
FRACTURED-SORRY-Bench: This repository contains the code and data for the creating an Automated Multi-shot Jailbreak framework, as described in our paper.
Language: Python - Size: 2.38 MB - Last synced at: about 7 hours ago - Pushed at: 7 months ago - Stars: 1 - Forks: 0

Prediction-by-Invention/promptbouncer
A first line of defense against prompt-based attacks with real-time threat assessment.
Language: Python - Size: 1.21 MB - Last synced at: 10 months ago - Pushed at: 10 months ago - Stars: 0 - Forks: 0
