GitHub / DolbyUUU / offensive-language-and-toxic-content-detection-with-visualization
Detection of offensive language and toxic content in tweets using fastText word embeddings, the OLID and SOLID datasets, and LIME visualizations for model interpretability.
Stars: 11
Forks: 0
Open issues: 0
License: None
Language: Python
Size: 3.16 MB
Created: 5 months ago
Updated: 4 months ago
Pushed: 4 months ago
Topics: fasttext, fasttext-embeddings, hate-speech-detection, nlp, offensive-language-detection, social-media-analytics, text-classification, toxic-content-moderation, tweet-analysis
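
The workflow the description and topics outline (train a tweet classifier on text features, then explain individual predictions with LIME) can be sketched in a few lines of Python. The snippet below is a minimal illustration, not code from this repository: the toy tweets, the "not offensive"/"offensive" labels, and the TF-IDF + logistic regression model are assumptions standing in for the project's fastText embeddings and the OLID/SOLID data; only the general LIME usage pattern (a text explainer wrapping a predict_proba-style function) reflects the technique named above.

# Minimal sketch (assumptions: toy data, TF-IDF + logistic regression
# stand-in for the repository's fastText-embedding classifier).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set standing in for OLID/SOLID tweets.
tweets = [
    "have a great day everyone",
    "thanks for the kind words",
    "you are a complete idiot",
    "shut up nobody asked you",
]
labels = [0, 0, 1, 1]  # 0 = not offensive, 1 = offensive

# Fit a simple text-classification pipeline on the toy data.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(tweets, labels)

# LIME perturbs the input tweet and fits a local surrogate model to show
# which words pushed the prediction toward "offensive" or "not offensive".
explainer = LimeTextExplainer(class_names=["not offensive", "offensive"])
explanation = explainer.explain_instance(
    "shut up you idiot",
    pipeline.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # per-word contribution weights

LimeTextExplainer only requires a function that maps a list of strings to class probabilities, so the stand-in pipeline above could be swapped for a fastText-embedding classifier without changing the explanation step.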