Recent research shows that adding a relatively small perturbation to the input vector of a deep neural network (DNN) can easily alter its output. In this project we perform such an attack by modifying only one pixel of the input. To do so, we propose a novel method that generates one-pixel adversarial perturbations using differential evolution (DE). This is a black-box attack, requiring less information about the target, and the properties of DE allow it to fool a wider range of neural networks. Our results show that some of the original images in the CIFAR-10 test set, and some of the ImageNet test images, can be perturbed to at least one target class by changing only a single pixel; the same vulnerability is also present in the original CIFAR-10 dataset. This attack thus offers a different perspective on adversarial machine learning, showing that current deep neural networks are susceptible even to such low-dimensional attacks.
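To make the idea concrete, here is a minimal sketch of a DE-driven one-pixel search. The classifier (`predict_proba`), the image size, and all DE hyperparameters are hypothetical stand-ins, not the paper's CNNs or settings; the point is only to show the black-box loop: each candidate encodes a pixel position and value, fitness is the model's confidence in the true class, and standard DE/rand/1/bin evolves candidates using only the model's output probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(img):
    """Toy stand-in for a black-box classifier: the probability of
    class 1 rises with mean brightness (a hypothetical model)."""
    p1 = 1.0 / (1.0 + np.exp(-20.0 * (img.mean() - 0.5)))
    return np.array([1.0 - p1, p1])

def apply_pixel(img, cand):
    """Perturb one pixel: candidate = (row, col, new value)."""
    out = img.copy()
    r = int(cand[0]) % img.shape[0]
    c = int(cand[1]) % img.shape[1]
    out[r, c] = np.clip(cand[2], 0.0, 1.0)
    return out

def one_pixel_attack(img, true_class, pop_size=30, iters=40, F=0.5, CR=0.9):
    """DE/rand/1/bin search for a one-pixel change that minimizes
    the classifier's confidence in the true class (untargeted)."""
    h, w = img.shape
    # Initial population: random (row, col, value) triples.
    pop = np.column_stack([
        rng.uniform(0, h, pop_size),
        rng.uniform(0, w, pop_size),
        rng.uniform(0, 1, pop_size),
    ])
    fitness = np.array(
        [predict_proba(apply_pixel(img, c))[true_class] for c in pop]
    )
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct random candidates.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            # Binomial crossover between parent and mutant.
            trial = np.where(rng.random(3) < CR, mutant, pop[i])
            f = predict_proba(apply_pixel(img, trial))[true_class]
            if f < fitness[i]:  # greedy selection keeps the better one
                pop[i], fitness[i] = trial, f
    best = pop[fitness.argmin()]
    return apply_pixel(img, best), fitness.min()

# An image whose mean sits just under the toy decision boundary,
# so a single bright pixel can tip the prediction.
img = np.full((8, 8), 0.495)
assert predict_proba(img).argmax() == 0
adv, conf = one_pixel_attack(img, true_class=0)
print(predict_proba(adv).argmax())
```

Only model outputs are queried, never gradients, which is what makes the attack black-box; swapping in a real CNN would only require replacing `predict_proba` and widening the candidate encoding to RGB (five values per pixel).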