GitHub topics: ai-benchmark
petmal/MindTrial
MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks with optional file/image attachments. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek), custom tasks in YAML, and HTML/CSV reports.
Language: Go - Size: 143 KB - Last synced at: 1 day ago - Pushed at: 1 day ago - Stars: 0 - Forks: 0

playsaurus-inc/play-bench
PlayBench is a platform that evaluates AI models by having them compete in various games and creative tasks. Unlike traditional benchmarks that focus on text generation quality or factual knowledge, PlayBench tests models on skills like strategic thinking, pattern recognition, and creative problem-solving.
Language: Blade - Size: 644 KB - Last synced at: 6 days ago - Pushed at: 6 days ago - Stars: 0 - Forks: 0

microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking multi-modal AI agents.
Language: Python - Size: 191 MB - Last synced at: 6 days ago - Pushed at: about 2 months ago - Stars: 666 - Forks: 66

TheAgentCompany/TheAgentCompany
An agent benchmark with tasks in a simulated software company.
Language: Python - Size: 6.58 MB - Last synced at: 21 days ago - Pushed at: 21 days ago - Stars: 279 - Forks: 34

kaykycampos/gta-benchmark
GTA (Guess The Algorithm) Benchmark - A tool for testing AI reasoning capabilities.
Size: 1.95 KB - Last synced at: about 1 month ago - Pushed at: about 1 month ago - Stars: 8 - Forks: 0
