Abstract: Deep Neural Networks (DNNs) have recently made significant strides in various fields; however, they are susceptible to adversarial examples: crafted inputs with imperceptible perturbations ...
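To make the phrase "crafted inputs with imperceptible perturbations" concrete, below is a minimal sketch using the fast gradient sign method (FGSM), one standard way such examples are generated. The model, inputs, and epsilon bound are illustrative assumptions, not details taken from the abstract above.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015): perturb an input so the
# model misclassifies it, while keeping the change bounded by epsilon in
# the L-infinity norm. Model and data here are placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Return an adversarial version of x within an epsilon-ball of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp assumes image-like inputs scaled to [0, 1].
    return x_adv.clamp(0, 1).detach()

# Usage (illustrative): x_adv = fgsm_example(model, image_batch, labels)
```

With a small epsilon (e.g. 0.03 for images in [0, 1]), the perturbation is typically invisible to a human observer yet enough to flip the model's prediction.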
Abstract: Deep code models are vulnerable to adversarial attacks, making it possible for semantically identical inputs to trigger different responses. Current black-box attack methods typically ...
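As a hedged illustration of the semantics-preserving edits such black-box attacks on code models typically rely on, the sketch below renames an identifier: the program's behavior is unchanged, but a vulnerable model may change its prediction. The regex-based rewrite and the names are assumptions for illustration; practical attacks usually rename via a parsed AST to avoid touching strings or comments.

```python
# A semantics-preserving transformation of the kind black-box attacks on
# deep code models commonly search over: variable renaming. The two
# programs below are semantically identical inputs.
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Rename a variable in source code (word-boundary matches only)."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

snippet = "def add(a, b):\n    total = a + b\n    return total"
adversarial = rename_identifier(snippet, "total", "tmp_0")
# `snippet` and `adversarial` compute the same function, yet a brittle
# code model may assign them different outputs.
```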