Topic: "pgd-adversarial-attacks"
henry8527/GCE
[ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy
Language: Python - Size: 36.1 KB - Last synced at: over 1 year ago - Pushed at: almost 6 years ago - Stars: 39 - Forks: 2

hammaad2002/ASRAdversarialAttacks
An ASR (Automatic Speech Recognition) adversarial attack repository.
Language: Jupyter Notebook - Size: 10 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 6 - Forks: 1

ahmedgh970/adversarial-training
Adversarial Training of Autoencoders for Unsupervised Anomaly Segmentation
Language: Python - Size: 65.4 KB - Last synced at: 8 months ago - Pushed at: 8 months ago - Stars: 4 - Forks: 0

deepmancer/adversarial-attacks-robustness
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
Language: Jupyter Notebook - Size: 393 KB - Last synced at: 7 months ago - Pushed at: 9 months ago - Stars: 4 - Forks: 0
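The repository above evaluates CNNs against FGSM and PGD. For reference, here is a minimal sketch of an L-infinity PGD attack in PyTorch; the epsilon, step size, and step count are illustrative assumptions, not values taken from the repository.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeatedly step along the sign of the loss gradient,
    projecting back into the eps-ball around the clean input after each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep valid pixel range
    return x_adv.detach()
```

Evaluating robustness then amounts to measuring accuracy on `pgd_attack(model, x, y)` instead of on the clean batch `x`.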

jaiprakash1824/VLM_Adv_Attack
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted attacks such as PGD.
Language: Jupyter Notebook - Size: 23.5 MB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 4 - Forks: 0

aminul-huq/WideResNet_MNIST_Adversarial_Training
WideResNet implementation on the MNIST dataset. FGSM and PGD adversarial attacks on standard training, plus PGD adversarial training and Feature Scattering adversarial training.
Language: Python - Size: 19.5 KB - Last synced at: about 2 years ago - Pushed at: almost 5 years ago - Stars: 3 - Forks: 1
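PGD adversarial training, as used in the repository above, generates adversarial examples on the fly and trains on them (the inner-maximization / outer-minimization scheme of Madry et al.). A minimal sketch of one training epoch follows, reusing the `pgd_attack` sketch shown earlier in this list; the loader, optimizer, and MNIST-style hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, loader, optimizer, device,
                            eps=0.3, alpha=0.01, steps=40):
    """One epoch of PGD adversarial training: craft adversarial examples
    for each batch, then take a gradient step on the loss they induce."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # inner maximization: build PGD adversarial examples for this batch
        x_adv = pgd_attack(model, x, y, eps=eps, alpha=alpha, steps=steps)
        # outer minimization: standard update on the adversarial batch
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```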

LukasDCode/Adversarial_OOD
Language: Jupyter Notebook - Size: 27.5 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 2 - Forks: 0

rojinakashefi/Adversarial-Robustness
Learning adversarial robustness in machine learning, in both theory and practice.
Language: Jupyter Notebook - Size: 10.3 MB - Last synced at: over 1 year ago - Pushed at: over 1 year ago - Stars: 1 - Forks: 0

gautamHCSCV/Image-Anonymization-using-Adversarial-Attacks
The Fast Gradient Sign Method (FGSM) is a white-box attack with a misclassification goal: it perturbs inputs so that a neural network makes wrong predictions. We use this technique to anonymize images.
Language: Jupyter Notebook - Size: 20.4 MB - Last synced at: 12 months ago - Pushed at: 12 months ago - Stars: 0 - Forks: 0
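As described above, FGSM is a single-step white-box attack: each pixel is shifted by epsilon in the direction of the sign of the loss gradient so the model misclassifies. A minimal sketch, with an illustrative epsilon; using it for anonymization, as this repository does, means releasing the perturbed images in place of the originals.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: a single signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + eps * grad.sign()   # move each pixel to increase the loss
    return x_adv.clamp(0, 1).detach()
```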

abhinav-bohra/Adversarial-Machine-Learning
Adversarial Sample Generation
Language: Jupyter Notebook - Size: 142 MB - Last synced at: about 2 years ago - Pushed at: about 2 years ago - Stars: 0 - Forks: 0

kyungphilDev/Robust-Deep-RL_Soft-Actor-Critic-Approach
Personal research project, spring semester 2022.
Language: Python - Size: 719 MB - Last synced at: about 2 years ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0

pepealessio/Adversarial-Face-Identification
A university project for the AI4Cybersecurity class.
Language: Jupyter Notebook - Size: 213 MB - Last synced at: about 2 years ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0

mavewrik/adversarial-robust-image-classifier
Adversarially-robust Image Classifier
Language: Jupyter Notebook - Size: 129 MB - Last synced at: 11 months ago - Pushed at: almost 3 years ago - Stars: 0 - Forks: 0
