GitHub / llm-lab-org / MENA-Values-Benchmark-Evaluating-Cultural-Alignment-and-Multilingual-Bias-in-Large-Language-Models
This repository contains the dataset and code used in our paper, “MENA Values Benchmark: Evaluating Cultural Alignment and Multilingual Bias in Large Language Models.” It provides tools to evaluate how large language models represent Middle Eastern and North African cultural values across 16 countries, in multiple languages, and from multiple perspectives.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/llm-lab-org%2FMENA-Values-Benchmark-Evaluating-Cultural-Alignment-and-Multilingual-Bias-in-Large-Language-Models (a fetch sketch follows the metadata list below)
PURL: pkg:github/llm-lab-org/MENA-Values-Benchmark-Evaluating-Cultural-Alignment-and-Multilingual-Bias-in-Large-Language-Models
Stars: 2
Forks: 0
Open issues: 0
License: MIT
Language: HTML
Size: 88.4 MB
Dependencies parsed at: Pending
Created at: 6 months ago
Updated at: 17 days ago
Pushed at: 17 days ago
Last synced at: 16 days ago
Topics: ai-alignment, ai-fairness, cultural-ai, mena-region, multilingual-nlp, nlp-evaluation, token-level-analysis
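A minimal sketch of querying the JSON API endpoint listed above to retrieve this repository's metadata record. It assumes the `requests` package is available; the exact fields in the response are whatever the ecosyste.ms API returns and are not documented on this page, so the example simply prints the top-level keys and values.

```python
# Sketch: fetch this repository's metadata from the ecosyste.ms JSON API.
# Assumes `requests` is installed; the response schema is determined by
# ecosyste.ms and may change, so no specific field names are relied upon.
import requests

API_URL = (
    "http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/"
    "llm-lab-org%2FMENA-Values-Benchmark-Evaluating-Cultural-Alignment-"
    "and-Multilingual-Bias-in-Large-Language-Models"
)


def fetch_repo_metadata(url: str = API_URL) -> dict:
    """Return the repository metadata record as a dict."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    metadata = fetch_repo_metadata()
    # Print whatever top-level fields the API exposes (names may vary).
    for key, value in sorted(metadata.items()):
        print(f"{key}: {value}")
```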