Topic: "llm-evaluation-framework"
promptfoo/promptfoo
Test your prompts, agents, and RAG pipelines. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command-line and CI/CD integration.
Language: TypeScript - Size: 361 MB - Last synced at: 3 days ago - Pushed at: 3 days ago - Stars: 6,506 - Forks: 527
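Promptfoo's declarative workflow centers on a promptfooconfig.yaml that lists prompts, providers, and test assertions. A minimal sketch of that shape follows; the provider IDs, variables, and assertion values are illustrative, not taken from the repository.

```yaml
# promptfooconfig.yaml -- minimal illustrative config (values are placeholders)
prompts:
  - "Summarize the following text in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini          # provider IDs shown here are examples
  - anthropic:messages:claude-3-5-sonnet-20241022

tests:
  - vars:
      text: "Promptfoo runs the same prompt against several providers."
    assert:
      - type: icontains
        value: "promptfoo"
```

Running `promptfoo eval` against a config like this compares outputs across providers; the same file can be checked in and executed from CI.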

confident-ai/deepeval
The LLM Evaluation Framework
Language: Python - Size: 82.9 MB - Last synced at: 2 days ago - Pushed at: 3 days ago - Stars: 6,282 - Forks: 549
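DeepEval exposes a pytest-style workflow: wrap a model output in a test case and assert on one or more metrics. A minimal sketch along the lines of its documented usage (the metric choice and threshold here are illustrative) looks like this:

```python
# test_answer_quality.py -- minimal DeepEval-style test (metric and threshold are illustrative)
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What is the capital of France?",
        actual_output="Paris is the capital of France.",
    )
    # Fails the test if the answer-relevancy score falls below the threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```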

msoedov/agentic_security
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
Language: Python - Size: 21.2 MB - Last synced at: 5 days ago - Pushed at: 14 days ago - Stars: 1,350 - Forks: 211

JinjieNi/MixEval
The official evaluation suite and dynamic data release for MixEval.
Language: Python - Size: 9.37 MB - Last synced at: 6 days ago - Pushed at: 6 months ago - Stars: 239 - Forks: 40

cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments
Language: Python - Size: 30.4 MB - Last synced at: 2 days ago - Pushed at: 20 days ago - Stars: 207 - Forks: 31

parea-ai/parea-sdk-py
Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)
Language: Python - Size: 5.48 MB - Last synced at: 27 days ago - Pushed at: 3 months ago - Stars: 76 - Forks: 6

Addepto/contextcheck
MIT-licensed framework for testing LLMs, RAG pipelines, and chatbots. Configurable via YAML and integrable into CI pipelines for automated testing.
Language: Python - Size: 464 KB - Last synced at: 6 days ago - Pushed at: 5 months ago - Stars: 67 - Forks: 9

multinear/multinear
Develop reliable AI apps
Language: Svelte - Size: 1.12 MB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 36 - Forks: 1

zhuohaoyu/KIEval
[ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
Language: Python - Size: 10.6 MB - Last synced at: about 1 month ago - Pushed at: 10 months ago - Stars: 36 - Forks: 2

flexpa/llm-fhir-eval
Benchmarking Large Language Models for FHIR
Size: 15.6 KB - Last synced at: about 1 month ago - Pushed at: 6 months ago - Stars: 29 - Forks: 3

honeyhiveai/realign
Realign is a testing and simulation framework for AI applications.
Language: Python - Size: 27.3 MB - Last synced at: 20 days ago - Pushed at: 5 months ago - Stars: 16 - Forks: 1

aws-samples/fm-leaderboarder
FM-Leaderboard-er lets you create a leaderboard to find the best LLM/prompt for your own business use case, based on your data, tasks, and prompts
Language: Python - Size: 511 KB - Last synced at: 10 months ago - Pushed at: 10 months ago - Stars: 14 - Forks: 4

Networks-Learning/prediction-powered-ranking
Code for "Prediction-Powered Ranking of Large Language Models", NeurIPS 2024.
Language: Jupyter Notebook - Size: 4.74 MB - Last synced at: 22 days ago - Pushed at: 7 months ago - Stars: 9 - Forks: 1

pyladiesams/eval-llm-based-apps-jan2025
Create an evaluation framework for your LLM-based app. Incorporate it into your test suite. Lay the monitoring foundation.
Language: Jupyter Notebook - Size: 11.6 MB - Last synced at: 3 days ago - Pushed at: 10 days ago - Stars: 7 - Forks: 5

parea-ai/parea-sdk-ts
TypeScript SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)
Language: TypeScript - Size: 2.94 MB - Last synced at: 21 days ago - Pushed at: 4 months ago - Stars: 4 - Forks: 1

yukinagae/genkitx-promptfoo
Community Plugin for Genkit to use Promptfoo
Language: TypeScript - Size: 553 KB - Last synced at: 27 days ago - Pushed at: 4 months ago - Stars: 3 - Forks: 0

stair-lab/melt
Multilingual Evaluation Toolkits
Language: Python - Size: 204 MB - Last synced at: 6 months ago - Pushed at: 6 months ago - Stars: 3 - Forks: 3

yukinagae/promptfoo-sample
Sample project demonstrating how to use Promptfoo, a test framework for evaluating the output of generative AI models
Size: 334 KB - Last synced at: 27 days ago - Pushed at: 8 months ago - Stars: 1 - Forks: 0

nhsengland/evalsense
Tools for systematic large language model evaluations
Language: Python - Size: 992 KB - Last synced at: 6 days ago - Pushed at: 6 days ago - Stars: 0 - Forks: 0

Fbxfax/llm-confidence-scorer
A set of auxiliary systems designed to provide a measure of estimated confidence for the outputs generated by Large Language Models.
Language: Python - Size: 96.7 KB - Last synced at: 15 days ago - Pushed at: 15 days ago - Stars: 0 - Forks: 0
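Neither confidence-scorer repository's API is documented in this listing, but the general idea of a token-level confidence proxy can be sketched: average the log-probabilities of the generated tokens and map them to a 0-1 score. This is a hypothetical illustration, not the implementation of either repository.

```python
# Generic confidence proxy: mean token log-probability mapped to (0, 1).
# Hypothetical illustration only; not the API of the repositories listed above.
import math
from typing import Sequence

def confidence_from_logprobs(token_logprobs: Sequence[float]) -> float:
    """Return a rough confidence score from per-token log-probabilities."""
    if not token_logprobs:
        return 0.0
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    # exp(mean log-prob) is the geometric-mean token probability.
    return math.exp(mean_logprob)

# Example: per-token log-probs as returned by an LLM API.
print(confidence_from_logprobs([-0.05, -0.20, -0.10]))  # ~0.89
```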

ronniross/llm-confidence-scorer
A set of auxiliary systems designed to provide a measure of estimated confidence for the outputs generated by Large Language Models.
Language: Python - Size: 0 Bytes - Last synced at: 18 days ago - Pushed at: 18 days ago - Stars: 0 - Forks: 0

petmal/MindTrial
MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks with optional file/image attachments. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek), custom tasks in YAML, and HTML/CSV reports.
Language: Go - Size: 143 KB - Last synced at: 19 days ago - Pushed at: 19 days ago - Stars: 0 - Forks: 0

yukinagae/genkit-promptfoo-sample
Sample implementation demonstrating how to use Firebase Genkit with Promptfoo
Language: TypeScript - Size: 2.3 MB - Last synced at: 27 days ago - Pushed at: 8 months ago - Stars: 0 - Forks: 0

jaaack-wang/multi-problem-eval-llm
Evaluating LLMs with Multiple Problems at once: A New Paradigm for Probing LLM Capabilities
Language: Jupyter Notebook - Size: 23.1 MB - Last synced at: 10 months ago - Pushed at: 10 months ago - Stars: 0 - Forks: 0

nagababumo/Building-and-Evaluating-Advanced-RAG
Language: Jupyter Notebook - Size: 51.8 KB - Last synced at: 2 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 1
