An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: ai-explainability

arthur-ai/arthur-sandbox 📦

Example projects for the Arthur Model Monitoring Platform

Language: Jupyter Notebook - Size: 50.1 MB - Last synced at: 6 days ago - Pushed at: about 2 years ago - Stars: 4 - Forks: 3

ranfysvalle02/crewai-flask-autoresearch

An in-depth exploration of large language models (LLMs), their potential biases, limitations, and the challenges of controlling their outputs. It also includes a Flask application that uses an LLM to research a company and generate a report on its potential for partnership opportunities.

Language: Python - Size: 759 KB - Last synced at: 14 days ago - Pushed at: 8 months ago - Stars: 1 - Forks: 2

despinakz/xai-tools

University of Piraeus - Thesis Project

Language: Jupyter Notebook - Size: 122 MB - Last synced at: 11 months ago - Pushed at: about 2 years ago - Stars: 0 - Forks: 0

EnriqManComp/Mango-Leaf-Disease-Classification

This project aims to differentiate among the various diseases present in mango leaves (multiclass prediction). Several machine learning techniques were employed to achieve optimal performance in a model capable of predicting multiple classes.

Language: Jupyter Notebook - Size: 2.47 MB - Last synced at: about 1 year ago - Pushed at: about 1 year ago - Stars: 0 - Forks: 0

IBMDeveloperUK/Trusted-AI-Workshops

An introduction to trusted AI. Learn to use fairness algorithms to mitigate bias in data and models with aif360, and to explain models with aix360; a minimal aif360 sketch follows this entry.

Language: Jupyter Notebook - Size: 23.4 MB - Last synced at: about 2 years ago - Pushed at: over 4 years ago - Stars: 10 - Forks: 5
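The bias-mitigation workflow the workshop covers can be illustrated briefly. The sketch below is a minimal, hypothetical example: the tiny DataFrame, the "sex" protected attribute, and the privileged/unprivileged groups are stand-ins rather than the workshop's own data. It measures disparate impact on a binary-label dataset and applies aif360's Reweighing pre-processing mitigation.

```python
# Minimal aif360 sketch, assuming a small synthetic DataFrame with a binary
# label and a binary protected attribute "sex" (illustrative data only).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "feature": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "sex":     [0,   1,   0,   1,   0,   1],   # protected attribute
    "label":   [0,   1,   0,   1,   1,   1],   # favorable outcome = 1
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Measure bias before mitigation (disparate impact close to 1.0 is fairer).
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("Disparate impact before:", metric.disparate_impact())

# Mitigate by reweighing instances, then re-check the same metric.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_rw = rw.fit_transform(dataset)
metric_rw = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unpriv,
                                     privileged_groups=priv)
print("Disparate impact after:", metric_rw.disparate_impact())
```

Reweighing assigns per-instance weights that push the favorable-outcome rates of the two groups toward parity, so the disparate impact reported after mitigation moves toward 1.0.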

divanoLetto/Explicability-of-decisions-and-uncertainty-in-Deep-Learning

Implementations of global methods (which explain the behavior of a model as a whole) and local methods (which explain a specific decision) for understanding why an AI model makes the decisions it does; see the sketch after this entry.

Language: Python - Size: 19.8 MB - Last synced at: 5 months ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0
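As a rough illustration of the global-versus-local distinction (not the repository's own methods), the sketch below uses scikit-learn: permutation importance as a global explanation, and a simple occlusion-style perturbation of one instance as a local explanation. The dataset and model choice are arbitrary placeholders.

```python
# Illustrative global vs. local explanation sketch (assumed toy setup).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global method: permutation importance summarizes model behavior as a whole.
global_imp = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
print("Top global feature indices:",
      np.argsort(global_imp.importances_mean)[::-1][:5])

# Local method: explain one specific decision by occluding each feature
# (replacing it with its training mean) and measuring the probability change.
x = X_test[0:1]
base_prob = model.predict_proba(x)[0, 1]
local_effect = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_occluded = x.copy()
    x_occluded[0, j] = X_train[:, j].mean()
    local_effect[j] = base_prob - model.predict_proba(x_occluded)[0, 1]
print("Top local feature indices:",
      np.argsort(np.abs(local_effect))[::-1][:5])
```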

basiralab/basic-ML-DL-concepts

An introduction to basic machine learning, deep learning, and geometric deep learning concepts and methods.

Language: Jupyter Notebook - Size: 11.3 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 2 - Forks: 0

IBMDeveloperMEA/AI-Ethics Fork of asnajaved/Identify-and-remove-bias-from-AI-models-using-Watson-Studio

Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems by design, from the ground up. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating the performance of a machine learning model, fairness gives us a way to understand the practical implications of deploying the model in a real-world situation.

Size: 907 KB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 3 - Forks: 1

IBMDeveloperMEA/AI-Integrity-Improving-AI-models-with-Cortex-Certifai

Explaining AI models is a difficult task that Cortex Certifai makes simpler. It evaluates AI models for robustness, fairness, and explainability, and lets users compare different models or model versions on these qualities. Certifai can be applied to any black-box model, including machine learning and predictive models, and works with a variety of input datasets.

Language: Jupyter Notebook - Size: 7.88 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 0 - Forks: 0