GitHub / harshjuly12 / Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models
This project applies SHAP to bidirectional LSTM (BiLSTM) fake news classifiers, improving the transparency and interpretability of their predictions by showing which input features drive the model's decision-making process.
Stars: 12
Forks: 3
Open issues: 0
License: other
Language: Jupyter Notebook
Size: 199 KB
Created at: 10 months ago
Updated at: about 2 months ago
Pushed at: 7 months ago
Topics: bidirectional-lstm, lstm-neural-networks, shap, xai
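The repository's core idea is attributing a classifier's prediction to its input features via SHAP (Shapley additive explanations). As a minimal illustration of the underlying principle, the sketch below computes exact Shapley values for a toy linear model in pure Python; the repo itself uses the `shap` library against a trained BiLSTM, so the function names and toy model here are illustrative assumptions, not the repo's code.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) relative to f(baseline).

    For each feature i, average the marginal contribution
    f(S ∪ {i}) - f(S) over all coalitions S of the other features,
    weighted by |S|! (n - |S| - 1)! / n!  (the classic Shapley weight).
    Exponential in the number of features, so only viable for toy inputs;
    the shap library approximates this for real models such as a BiLSTM.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Features in S (and i) take their real values; the rest
                # are masked to the baseline, mimicking feature "absence".
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Hypothetical toy "model": a linear score over three features.
model = lambda v: 2 * v[0] + 3 * v[1] - v[2]
phis = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
# For a linear model each attribution is coef * (x_i - baseline_i),
# and the attributions sum to f(x) - f(baseline) (the efficiency property).
```

For the repo's actual setting, the same idea is delegated to `shap` explainers over the trained BiLSTM, with per-token attributions visualized to show which words pushed a headline toward the "fake" or "real" class.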