GitHub / hate-alert / HateXplain
Can we use explanations to improve hate speech models? Our paper, accepted at AAAI 2021, explores that question.
JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hate-alert%2FHateXplain
PURL: pkg:github/hate-alert/HateXplain
Stars: 173
Forks: 62
Open issues: 11
License: MIT
Language: Python
Size: 6.57 MB
Dependencies parsed at: Pending
Created at: about 5 years ago
Updated at: over 1 year ago
Pushed at: about 2 years ago
Last synced at: over 1 year ago
Topics: attention-lstm, bert-fine-tuning, bert-model, bias, detection, explainability, hate-speech, hatespeech, interpretable-deep-learning, lstm, offensive
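The JSON API endpoint listed above can be built programmatically; the owner/name pair must be percent-encoded so the `/` becomes `%2F`. A minimal stdlib-only Python sketch, assuming the ecosyste.ms path scheme shown in the URL above (the helper name `repo_metadata_url` is illustrative, not part of any library):

```python
from urllib.parse import quote

# Base path for the ecosyste.ms repositories API (copied from the URL above).
API_ROOT = "http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/"

def repo_metadata_url(owner: str, name: str) -> str:
    """Build the metadata URL; safe="" forces '/' to be encoded as %2F."""
    return API_ROOT + quote(f"{owner}/{name}", safe="")

url = repo_metadata_url("hate-alert", "HateXplain")
print(url)
# http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hate-alert%2FHateXplain
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns the repository's metadata as JSON.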