GitHub topics: data-poisoning
RiccardoBiosas/awesome-MLSecOps
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
Size: 85.9 KB - Last synced at: 4 days ago - Pushed at: 4 months ago - Stars: 313 - Forks: 47

NullTrace-Security/Exploiting-AI
A broad overview of exploiting AI: the classes of attacks that exist and best-practice defensive strategies.
Language: Python - Size: 19.6 MB - Last synced at: 4 days ago - Pushed at: 5 days ago - Stars: 45 - Forks: 12

penghui-yang/awesome-data-poisoning-and-backdoor-attacks 📦
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
Size: 52.7 KB - Last synced at: 9 days ago - Pushed at: 4 months ago - Stars: 247 - Forks: 24
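Many of the attacks catalogued in lists like the one above start from the simplest baseline, label flipping: corrupt a fraction of training labels so the learned decision boundary degrades. A minimal sketch of that baseline (all names and parameters are illustrative, not taken from any listed repo):

```python
import numpy as np

def flip_labels(y, rate, num_classes, rng=None):
    """Return a copy of y with a fraction `rate` of labels flipped
    to a different, randomly chosen class (a classic poisoning baseline)."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y).copy()
    n_poison = int(rate * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # shift each chosen label by a random non-zero offset mod num_classes,
    # guaranteeing the new label differs from the original
    offsets = rng.integers(1, num_classes, size=n_poison)
    y[idx] = (y[idx] + offsets) % num_classes
    return y, idx

y_clean = np.zeros(100, dtype=int)          # toy dataset: all class 0
y_pois, idx = flip_labels(y_clean, rate=0.1, num_classes=10, rng=0)
```

Defenses in the curated papers are often evaluated against exactly this kind of random flipping before moving to optimized (gradient-based) poisons.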

lafeat/apbench
APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024)
Language: Python - Size: 67.5 MB - Last synced at: 12 days ago - Pushed at: 12 days ago - Stars: 30 - Forks: 2

ZhengyuZhao/AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
Size: 118 KB - Last synced at: about 1 month ago - Pushed at: 8 months ago - Stars: 147 - Forks: 16

bliutech/SeBRUS
MIT IEEE URTC 2023. GSET 2023. Repository for "SeBRUS: Mitigating Data Poisoning in Crowdsourced Datasets with Blockchain". Using Ethereum smart contracts to stop AI security attacks on crowdsourced datasets.
Language: JavaScript - Size: 11.2 MB - Last synced at: 15 days ago - Pushed at: over 1 year ago - Stars: 10 - Forks: 0

ch-shin/awesome-data-poisoning
Size: 34.2 KB - Last synced at: 9 days ago - Pushed at: over 2 years ago - Stars: 20 - Forks: 0

Slaymish/malware-classifier-backdoors Fork of elastic/ember
Implementation of backdoor attacks and defenses in malware classification using machine learning models.
Language: Python - Size: 14.8 MB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 1 - Forks: 0
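Backdoor attacks of the kind this repo studies typically follow the BadNets recipe: stamp a small trigger pattern onto a fraction of training inputs and relabel them to an attacker-chosen class, so the trained model misclassifies any input carrying the trigger. A hedged sketch of the data-side step (illustrative only, not the repo's actual code):

```python
import numpy as np

def add_trigger(images, labels, target_class, poison_rate, rng=None):
    """Stamp a 3x3 white square in the bottom-right corner of a random
    fraction of images and relabel them to `target_class`
    (a BadNets-style backdoor injection)."""
    rng = np.random.default_rng(rng)
    images = np.asarray(images, dtype=np.float32).copy()
    labels = np.asarray(labels).copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    images[idx, -3:, -3:] = 1.0   # the trigger pattern
    labels[idx] = target_class    # the attacker's chosen label
    return images, labels, idx

imgs = np.zeros((50, 8, 8))       # toy grayscale images
labs = np.arange(50) % 10
p_imgs, p_labs, idx = add_trigger(imgs, labs, target_class=0,
                                  poison_rate=0.2, rng=1)
```

Training on `(p_imgs, p_labs)` with any classifier yields a model that behaves normally on clean inputs but predicts the target class whenever the corner square is present.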

shaialon/ai-security-demos
🤯 AI security exposed: live demos showing hidden risks of 🤖 agentic AI flows, including 💉 prompt injection and ☣️ data poisoning, with a recorded session.
Language: JavaScript - Size: 363 KB - Last synced at: 4 months ago - Pushed at: 10 months ago - Stars: 16 - Forks: 3

privacytrustlab/adversarial_bias
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
Language: Python - Size: 6.61 MB - Last synced at: 4 months ago - Pushed at: almost 4 years ago - Stars: 7 - Forks: 1

hammlab/LimitsOfUDA
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning. (NeurIPS 2021)
Language: Python - Size: 129 KB - Last synced at: 9 months ago - Pushed at: over 3 years ago - Stars: 7 - Forks: 1

hammlab/PoisoningCertifiedDefenses
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
Language: Python - Size: 128 KB - Last synced at: 9 months ago - Pushed at: almost 4 years ago - Stars: 12 - Forks: 1

ebagdasa/mithridates
Measure and Boost Backdoor Robustness
Language: Jupyter Notebook - Size: 1.14 MB - Last synced at: 3 months ago - Pushed at: 9 months ago - Stars: 8 - Forks: 3

reds-lab/Meta-Sift
The official implementation of the USENIX Security '23 paper "Meta-Sift": finding a clean subset of 1,000 or more samples in a poisoned dataset in ten minutes or less.
Language: Python - Size: 3.62 MB - Last synced at: 10 months ago - Pushed at: almost 2 years ago - Stars: 15 - Forks: 4

ZaydH/target_identification
CCS'22 Paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
Language: Python - Size: 73.2 KB - Last synced at: 11 months ago - Pushed at: over 1 year ago - Stars: 7 - Forks: 0

kaiwenzha/contrastive-poisoning
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Language: Python - Size: 13.5 MB - Last synced at: about 1 year ago - Pushed at: over 1 year ago - Stars: 24 - Forks: 1

liuzrcc/ImageShortcutSqueezing
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Language: Python - Size: 117 MB - Last synced at: almost 2 years ago - Pushed at: almost 2 years ago - Stars: 5 - Forks: 2

ebegoli/StreamToxWatch
An experimental framework for monitoring in-stream data poisoning.
Language: Python - Size: 26.4 KB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 0 - Forks: 0

TLMichael/Hypocritical-Perturbation
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
Language: Python - Size: 556 KB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 1 - Forks: 0

Fraunhofer-AISEC/regression-data-poisoning 📦
Experiments on Data Poisoning Regression Learning
Language: Python - Size: 19.5 KB - Last synced at: about 2 years ago - Pushed at: over 4 years ago - Stars: 10 - Forks: 5
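Poisoning a regression learner, as in experiments like these, can be as simple as injecting a few high-leverage points that drag the least-squares fit toward an attacker-chosen target. A toy sketch of that injection (illustrative assumptions throughout, not Fraunhofer AISEC's method):

```python
import numpy as np

def poison_regression(X, y, n_poison, x_val, y_val):
    """Append n_poison identical high-leverage points (x_val, y_val)
    to a 1-D regression dataset -- a crude injection attack that
    pulls the least-squares fit toward the attacker's target."""
    Xp = np.concatenate([X, np.full(n_poison, x_val)])
    yp = np.concatenate([y, np.full(n_poison, y_val)])
    return Xp, yp

def fit_slope(X, y):
    # ordinary least squares with an intercept term
    A = np.stack([X, np.ones_like(X)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)
y = 2.0 * X + rng.normal(0, 0.05, 200)        # true slope = 2
Xp, yp = poison_regression(X, y, n_poison=20, x_val=5.0, y_val=-10.0)
```

Because the injected points sit far outside the clean data's x-range, even a small number of them dominate the fit; robust-regression defenses target exactly this leverage effect.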

rssalessio/data-poisoning-linear-systems
Code for the paper Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems.
Language: Python - Size: 299 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 3 - Forks: 0

andrea-gasparini/backdoor-federated-learning
A backdoor attack in a federated learning setting using the FATE framework
Language: Python - Size: 1.14 MB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 4 - Forks: 0

TLMichael/Delusive-Adversary
[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Language: Python - Size: 142 KB - Last synced at: about 2 years ago - Pushed at: over 3 years ago - Stars: 29 - Forks: 2
