GitHub / Trusted-AI / adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: 5,234
Forks: 1,210
Open issues: 34
License: MIT
Language: Python
Size: 610 MB
Created at: about 7 years ago
Updated at: 4 days ago
Pushed at: 6 days ago
Last synced at: 4 days ago
Commit Stats
Commits: 9828
Authors: 131
Mean commits per author: 75.02
Development Distribution Score: 0.693
More commit stats: https://commits.ecosyste.ms/hosts/GitHub/repositories/Trusted-AI/adversarial-robustness-toolbox
Topics: adversarial-attacks, adversarial-examples, adversarial-machine-learning, ai, artificial-intelligence, attack, blue-team, evasion, extraction, inference, machine-learning, poisoning, privacy, python, red-team, trusted-ai, trustworthy-ai
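The topics above map onto ART's attack and defence modules. As a quick orientation, here is a minimal sketch of a red-team evasion attack (the Fast Gradient Method) against a scikit-learn model, assuming `pip install adversarial-robustness-toolbox scikit-learn`; module paths reflect ART's current layout and may differ across versions.

```python
# Minimal sketch: FGM evasion attack on a scikit-learn classifier with ART.
# Assumes: adversarial-robustness-toolbox and scikit-learn are installed.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train a plain scikit-learn model.
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Wrap the model in an ART estimator so attacks can query predictions
# and gradients through a uniform interface.
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Craft adversarial examples with the Fast Gradient Method (evasion).
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_test_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Blue-team counterparts live under `art.defences` (preprocessors, trainers, detectors), and the poisoning, extraction, and inference attack families named in the topics follow a similar wrap-then-attack pattern.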