In this project you will use transfer learning to train a model for image recognition on the chest X-ray dataset located in /home/u/usman/cs_677_datasets/chest_xray. You may use the Keras data loader to read the train and test images, as shown at https://keras.io/api/preprocessing/image/. Use the project template files given on the course website as a starting point.

Your goal is to achieve above 90% accuracy on the test dataset. You may train your own model from scratch, but it may take a long time to train and reach above 90% accuracy. You may also fine-tune the weights of a pre-trained model.

Submit your project as two files, train.py and test.py.

Your train.py takes two inputs: the input training directory and a model file name to save the model to:

python train.py traindir newmodel

Your test.py takes two inputs: the test directory and a model file name to load the model from:

python test.py testdir newmodel

The output of test.py is the test error on the data, which is the number of misclassifications divided by the size of the test set.

Submit your assignment by copying it into the directory /afs/cad/courses/ccs/s21/cs/677/002/. For example, if your UCID is abc12, copy your solution into /afs/cad/courses/ccs/s21/cs/677/002/abc12. Your project is due April 12th, 2021. Due date extended to midnight, April 18th, 2021.

-----------------------------------------------------------------

To use Keras, first set up your miniconda TensorFlow environment by following the steps here: https://wiki.hpc.arcs.njit.edu/index.php/MinicondaUserMaintainedEnvs.

1. Log in directly to a datasci node with "srun -p datasci --gres=gpu:1 --mem=32GB --pty bash".
2. After logging into a node, activate your tensorflow-gpu miniconda environment and run your Python program within the environment.
3. The manual login gives you more control over development. In case your model needs to run for longer, you can submit your command via a script. Use the script template below if needed.
#!/bin/bash
#SBATCH --job-name=cnn_job
# %j in the output file name expands to the slurm JobID
#SBATCH --output=cnn_job.%j.out
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --partition=datasci
#SBATCH --gres=gpu:1
#SBATCH --mem=32GB

# purge and load the correct modules
module purge > /dev/null 2>&1

# if your tensorflow miniconda environment is called tf then
# activate it as shown below
conda activate tf

# now run your python program
srun python train.py traindir newmodel
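If the template above is saved as, say, cnn_job.sh (the file name is an assumption), the job can be submitted and monitored with the standard Slurm commands:

```shell
sbatch cnn_job.sh        # submit; Slurm prints the assigned JobID
squeue -u $USER          # check whether the job is pending or running
less cnn_job.12345.out   # read the output file afterwards (12345 = your JobID)
```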