Neural networks can be tricked into misclassifying images with imperceptible changes.
Deep neural networks, such as those used for speech and image recognition, learn internal representations that are hard to interpret. Researchers found that the meaningful information in these networks is carried by the activation space as a whole rather than by individual units. They also discovered that tiny, carefully chosen perturbations to an input image, often imperceptible to a human, can make the network classify it completely differently. These so-called adversarial examples show that the network's decisions can be extremely sensitive to small input details, leading to confident misclassifications.
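The idea behind such perturbations can be sketched on a toy model. The snippet below trains a plain logistic-regression classifier on synthetic 2-D data, then nudges a correctly classified point in the direction that increases the loss (a gradient-sign perturbation). This is an illustrative stand-in, not the paper's actual method (which searched for minimal perturbations with an optimizer on deep image networks), and the perturbation here is large only because the toy problem is 2-dimensional; all data and parameter values are invented for the example.

```python
import numpy as np

# Toy gradient-sign adversarial perturbation on logistic regression.
# Everything here (data, epsilon, learning rate) is illustrative.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two Gaussian blobs as a binary classification task.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))  # class 0
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))  # class 1
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Train with plain batch gradient descent on the cross-entropy loss.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Take a correctly classified class-1 point and perturb it in the
# direction that increases the loss: x_adv = x + eps * sign(dL/dx).
x = X1[0]
p = sigmoid(x @ w + b)       # confident class-1 prediction
grad_x = (p - 1.0) * w       # gradient of cross-entropy w.r.t. the input
eps = 3.0                    # large because this toy space is only 2-D
x_adv = x + eps * np.sign(grad_x)

print("clean prediction is class 1:", sigmoid(x @ w + b) > 0.5)
print("perturbed prediction is class 1:", sigmoid(x_adv @ w + b) > 0.5)
```

The same mechanism scales up: in high-dimensional image space, many tiny coordinate-wise changes can add up to a large shift in the network's output while remaining visually imperceptible.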