Implement a simple white box attack in Keras against a simple neural network. Our goal is to generate adversarial examples that deceive a single-hidden-layer neural network with 20 hidden nodes into misclassifying data from a test set that we provide. The test set consists of examples from classes 0 and 1 of CIFAR10. The target model is pretrained on CIFAR10 class 0 vs. class 1 and achieves 89% test accuracy.

The CIFAR10 data is available in numpy form from https://web.njit.edu/~usman/courses/cs677_summer20/CIFAR10/. You can easily extract classes 0 and 1 with:

    testData = np.load(testFile)
    testLabels = np.load(testLabelsFile)
    testData = testData[np.logical_or(testLabels == 0, testLabels == 1)]
    testLabels = testLabels[np.logical_or(testLabels == 0, testLabels == 1)]
    testLabels = keras.utils.to_categorical(testLabels, 2)

We normalize each image by subtracting the mean:

    testDataMean = np.mean(testData, axis=0)
    testData = testData - testDataMean

A successful attack should reduce classification accuracy to at most 10% on the test set.

Submit your assignment as a single file wbattack.py. Make wbattack.py take three inputs: the test data, the test labels, and the target model to attack (in our case this is the network with 20 hidden nodes):

    python wbattack.py <testData> <testLabels> <model>

Your wbattack.py program should create an adversary x' for every image in the test set using the formula

    x' = x + epsilon * sign(grad_x(f(x, y)))

where epsilon = 0.0625, grad_x(f(x, y)) is the gradient of the model f(x, y) with respect to the input data x, and x and y are the test data and test labels respectively. We can obtain the gradient of the model w.r.t. the input data in Keras as follows:

    from keras import backend as K
    gradients = K.gradients(model.output, model.input)[0]
    calculategrads = K.function([model.input], [gradients])
    grad = calculategrads([testData])
    evaluated_gradient = grad[0]

Instead of the last two lines above you may also write:

    evaluated_gradient = calculategrads([testData])[0]

After creating the adversaries, evaluate their output from the target model f(x, y). A successful white box attack should have adversarial accuracy (the accuracy of the model on adversarial examples) below 10%.

Copy your program and model file to your AFS course folder /afs/cad/courses/ccs/S20/cs/677/850/. The assignment is due 11:30am on July 25th, 2020.
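
For reference, here is a minimal sketch of what wbattack.py might look like. It is only a sketch under stated assumptions, not the required solution: the helper name fgsm is hypothetical, the command-line files are whatever you pass in, and f(x, y) is read here as the model's categorical cross-entropy loss (the standard fast gradient sign method); you may instead differentiate model.output directly as in the snippet above. It also assumes a TF1-style Keras backend where K.gradients and K.function are available, matching the snippet above.

    # wbattack.py -- a minimal sketch, not the required solution.
    # Assumes a TF1-style Keras backend (K.gradients / K.function available).
    import sys
    import numpy as np
    import keras
    from keras import backend as K
    from keras.models import load_model

    def fgsm(model, x, y, epsilon=0.0625):
        # Hypothetical helper: x' = x + epsilon * sign(grad_x f(x, y)),
        # with f(x, y) taken here to be the categorical cross-entropy loss.
        y_ph = K.placeholder(shape=(None, y.shape[1]))        # placeholder for labels
        loss = K.categorical_crossentropy(y_ph, model.output) # per-example loss
        grads = K.gradients(loss, model.input)[0]             # d loss / d x
        calculategrads = K.function([model.input, y_ph], [grads])
        evaluated_gradient = calculategrads([x, y])[0]
        return x + epsilon * np.sign(evaluated_gradient)

    if __name__ == "__main__":
        testFile, testLabelsFile, modelFile = sys.argv[1], sys.argv[2], sys.argv[3]
        testData = np.load(testFile)
        testLabels = np.load(testLabelsFile)
        # keep only classes 0 and 1, then one-hot encode
        keep = np.logical_or(testLabels == 0, testLabels == 1)
        testData, testLabels = testData[keep], testLabels[keep]
        testLabels = keras.utils.to_categorical(testLabels, 2)
        # normalize by subtracting the mean image
        testData = testData - np.mean(testData, axis=0)

        model = load_model(modelFile)
        adv = fgsm(model, testData, testLabels, epsilon=0.0625)
        preds = model.predict(adv)
        acc = np.mean(np.argmax(preds, axis=1) == np.argmax(testLabels, axis=1))
        print("adversarial accuracy: %.4f" % acc)  # goal: below 0.10

Differentiating the loss rather than model.output makes the perturbation move each image in the direction that increases the loss for its true label y, which is what drives the misclassification.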