GitHub topics: ai-red-teaming
Repello-AI/whistleblower
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
Language: Python - Size: 48.8 KB - Last synced at: 26 minutes ago - Pushed at: 9 months ago - Stars: 119 - Forks: 10

splx-ai/agentic-radar
A security scanner for your LLM agentic workflows
Language: Python - Size: 15.8 MB - Last synced at: 2 days ago - Pushed at: 2 days ago - Stars: 472 - Forks: 50

kereva-dev/kereva-scanner
Code scanner to check for issues in prompts and LLM calls
Language: Python - Size: 7.12 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 29 - Forks: 2

user1342/Folly
Open-source LLM Prompt-Injection and Jailbreaking Playground
Language: Python - Size: 3.96 MB - Last synced at: 4 days ago - Pushed at: about 1 month ago - Stars: 12 - Forks: 1
