First, I really appreciate your work; it has been very helpful for understanding the essence of adversarial examples.
I ran into a problem while trying to reproduce the standard test accuracy of a model trained on the Non-Robust CIFAR-10 dataset. I trained on the Non-Robust CIFAR-10 dataset as described in Appendix C.2, but I only reached about 64% test accuracy rather than the reported 88%.
Could you please share your training code? Thanks in advance, and I can't help saying that this paper is extremely useful!