An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: ai-fairness

mahmoodlab/CPATH_demographics

Demographic bias in misdiagnosis by computational pathology models - Nature Medicine

Language: Python - Size: 1.04 MB - Last synced at: 1 day ago - Pushed at: about 1 year ago - Stars: 13 - Forks: 2

jihan-lee01/ml-fairness-mortgage-lending

Fairness Analysis in US Mortgage Lending with Machine Learning Algorithms

Language: Jupyter Notebook - Size: 40.3 MB - Last synced at: 5 months ago - Pushed at: 5 months ago - Stars: 1 - Forks: 0

RexYuan/Shu

AI fairness checker

Language: Python - Size: 6.13 MB - Last synced at: 25 days ago - Pushed at: 5 months ago - Stars: 0 - Forks: 0

micheledusi/SupervisedBiasDetection

A project on bias detection in transformer-based LLMs, with a weakly supervised approach.

Language: Python - Size: 644 KB - Last synced at: 10 months ago - Pushed at: 10 months ago - Stars: 0 - Forks: 0

HandcartCactus/The-Modeler-Manifesto-Model-Card

A model card inspired by Derman & Wilmott's "Modelers' Hippocratic Oath", adapted for responsible and nuanced ML.

Size: 39.1 KB - Last synced at: about 1 year ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0

jolares/ai-ethics-fairness-and-bias

Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.

Size: 5.86 KB - Last synced at: about 2 months ago - Pushed at: over 3 years ago - Stars: 1 - Forks: 0
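
For orientation, here is a minimal, hypothetical sketch of measuring group fairness with the AI Fairness 360 toolkit that this project uses; the toy DataFrame and the "sex"/"income" column names are illustrative assumptions, not taken from the repository.

```python
# Minimal aif360 sketch: build a BinaryLabelDataset and compute two
# common group-fairness metrics. Toy data and column names are assumed.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: one binary protected attribute ("sex") and one binary label.
df = pd.DataFrame({
    "sex":    [0, 0, 0, 1, 1, 1],
    "income": [1, 0, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Statistical parity difference: P(Y=1 | unprivileged) - P(Y=1 | privileged).
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```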

zhihengli-UR/discover_unknown_biases

Official code of "Discover the Unknown Biased Attribute of an Image Classifier" (ICCV 2021)

Language: Python - Size: 11.7 MB - Last synced at: over 1 year ago - Pushed at: over 3 years ago - Stars: 19 - Forks: 1

FairWell-dev/FairWell

FairWell is a Responsible AI tool developed using Streamlit

Language: Jupyter Notebook - Size: 14.2 MB - Last synced at: over 1 year ago - Pushed at: over 3 years ago - Stars: 3 - Forks: 0

heyaudace/communities_and_crime

A deep-learning approach to generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.

Language: Jupyter Notebook - Size: 16.7 MB - Last synced at: over 1 year ago - Pushed at: over 5 years ago - Stars: 0 - Forks: 2

mirianfsilva/ai-fairness

Notes, references, and materials on AI fairness that I found useful and that helped me in my academic research.

Size: 53.7 KB - Last synced at: about 1 year ago - Pushed at: over 3 years ago - Stars: 7 - Forks: 2

IBMDeveloperUK/Trusted-AI-Workshops

Introduction to trusted AI. Learn to use fairness algorithms to reduce and mitigate bias in data and models with aif360, and to explain models with aix360.

Language: Jupyter Notebook - Size: 23.4 MB - Last synced at: about 2 years ago - Pushed at: over 4 years ago - Stars: 10 - Forks: 5
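
As a rough illustration of what the aif360 side of such a workshop involves, the sketch below applies the Reweighing pre-processing algorithm to a toy dataset; the data, column names, and group definitions are assumptions for the example only.

```python
# Hedged sketch of pre-processing bias mitigation with aif360's Reweighing.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data with a visible gap in favorable outcomes between the two groups.
df = pd.DataFrame({
    "sex":      [0, 0, 0, 0, 1, 1, 1, 1],
    "approved": [1, 0, 0, 0, 1, 1, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
)

unprivileged, privileged = [{"sex": 0}], [{"sex": 1}]

# Reweighing assigns instance weights that balance (group, label) combinations.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
after = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print("Statistical parity difference before:", before.statistical_parity_difference())
print("Statistical parity difference after: ", after.statistical_parity_difference())
```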

RishiDarkDevil/Regularization-Based-Fair-Classifier

Here we deal with the issue of fairness in machine learning classification algorithms and exploit regularization techniques to attain it.

Language: Jupyter Notebook - Size: 1.1 MB - Last synced at: almost 2 years ago - Pushed at: over 2 years ago - Stars: 1 - Forks: 2
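
As a generic sketch of the regularization idea described above (not the repository's actual method), the example below trains a logistic-regression classifier by gradient descent while penalizing the squared covariance between the protected attribute and the decision scores; the synthetic data and the penalty weight are assumptions.

```python
# Fairness-regularized logistic regression on synthetic data: the extra term
# penalizes covariance between the protected attribute and the model's scores.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = (rng.random(n) < 0.5).astype(float)                    # protected attribute
y = (X[:, 0] + 0.5 * a + rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
lam = 1.0        # strength of the fairness penalty (hyperparameter, assumed)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    scores = X @ w
    p = sigmoid(scores)
    # Gradient of the average logistic loss.
    grad_loss = X.T @ (p - y) / n
    # Covariance between protected attribute and scores, and gradient of cov^2.
    cov = np.mean((a - a.mean()) * scores)
    grad_fair = 2 * cov * (X.T @ (a - a.mean())) / n
    w -= lr * (grad_loss + lam * grad_fair)

# With lam > 0, the learned scores correlate less with the protected attribute.
print("final covariance:", np.mean((a - a.mean()) * (X @ w)))
```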

IBMDeveloperMEA/AI-Ethics Fork of asnajaved/Identify-and-remove-bias-from-AI-models-using-Watson-Studio

Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up, by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating the performance of a machine learning model, fairness gives us a way to understand the practical implications of deploying the model in a real-world situation.

Size: 907 KB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 3 - Forks: 1
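
To make that distinction concrete, here is a small, self-contained illustration (toy arrays, not data from the linked Watson Studio project): two models with identical accuracy can distribute their positive predictions across groups very differently.

```python
# Toy comparison of accuracy vs. a simple group-fairness measure
# (demographic parity difference). All values are made up for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute

# Model A only errs on group 1 (never selects it); model B spreads its errors.
y_pred_a = np.array([1, 0, 1, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 0, 1, 1, 1, 0, 1, 1])

for name, y_pred in [("model A", y_pred_a), ("model B", y_pred_b)]:
    acc = (y_pred == y_true).mean()
    sel_0 = y_pred[group == 0].mean()          # selection rate, group 0
    sel_1 = y_pred[group == 1].mean()          # selection rate, group 1
    print(f"{name}: accuracy={acc:.2f}, "
          f"demographic parity difference={sel_0 - sel_1:+.2f}")

# Both models score 0.75 accuracy, but model A gives all its positive
# predictions to group 0 (difference +0.50) while model B is balanced (0.00).
```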

IBMDeveloperMEA/AI-Integrity-Improving-AI-models-with-Cortex-Certifai

Explainability of AI models is a difficult task, which Cortex Certifai makes simpler. It evaluates AI models for robustness, fairness, and explainability, and allows users to compare different models or model versions on these qualities. Certifai can be applied to any black-box model, including machine learning and predictive models, and works with a variety of input datasets.

Language: Jupyter Notebook - Size: 7.88 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 0 - Forks: 0

ankushjain2001/Fairness-Evaluation-Of-Word-Embeddings

A benchmark of different word-embedding techniques for fairness and bias in AI models.

Language: Jupyter Notebook - Size: 849 KB - Last synced at: about 2 years ago - Pushed at: about 4 years ago - Stars: 0 - Forks: 0
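
For a sense of what such a benchmark measures, here is a simplified, WEAT-style association score on randomly generated toy vectors; the word sets and embeddings are placeholders, and the repository's actual evaluation may differ.

```python
# Simplified WEAT-style association: how much more strongly a set of target
# words associates with attribute set A than with attribute set B, measured
# by mean cosine similarity. Vectors here are random placeholders.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean cosine similarity of word vector w to set A minus set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

rng = np.random.default_rng(0)
dim = 50
# Hypothetical embedding lookups; in practice these would come from a trained
# embedding model (word2vec, GloVe, fastText, ...).
target_words = [rng.normal(size=dim) for _ in range(4)]   # e.g., career terms
A_attributes = [rng.normal(size=dim) for _ in range(4)]   # e.g., male terms
B_attributes = [rng.normal(size=dim) for _ in range(4)]   # e.g., female terms

scores = [association(w, A_attributes, B_attributes) for w in target_words]
print("mean association (positive = closer to A):", float(np.mean(scores)))
```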