An open API service providing repository metadata for many open source software ecosystems.

GitHub / SamH135 / LLM-Assessment-Framework

A modular and extendable framework built to for the purpose of testing trustworthiness in AI language models. The framework is currently under development to add more OWASP based risk evaluators to determine different types of vulnerabilities within AI systems

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SamH135%2FLLM-Assessment-Framework
PURL: pkg:github/SamH135/LLM-Assessment-Framework

Stars: 0
Forks: 0
Open issues: 0

License: None
Language: Python
Size: 128 KB
Dependencies parsed at: Pending

Created at: 8 months ago
Updated at: 4 months ago
Pushed at: 4 months ago
Last synced at: 4 months ago

Topics: ai, owasp-top-10, python, redteam-tool, testing-tools, vulnerability-assessment

    Loading...