New method slashes queries needed to trick text classification models by 3-30 times!
Researchers have devised a more query-efficient way to attack text classification models: their method cuts the number of model queries needed to craft a successful adversarial example by a factor of 3-30 while maintaining comparable attack success. In practice, an attacker who can only probe a deep NLP model through its predictions can now manipulate it with far less effort.
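To make the query-budget idea concrete, here is a minimal, hypothetical sketch of the *general* style of attack being made cheaper: a greedy black-box word-substitution attack that counts every call to the model as one query. The toy classifier, the `SYNONYMS` table, and the `greedy_attack` function are all illustrative assumptions, not the paper's actual method.

```python
def make_classifier():
    """Toy sentiment classifier: counts positive vs. negative keywords."""
    pos = {"great", "good", "excellent", "superb"}
    neg = {"bad", "awful", "terrible", "poor"}

    def classify(text):
        words = text.lower().split()
        score = sum(w in pos for w in words) - sum(w in neg for w in words)
        return "positive" if score > 0 else "negative"

    return classify


# Candidate substitutions the attacker is allowed to try (illustrative only).
SYNONYMS = {
    "great": ["fine", "decent"],
    "good": ["fine", "okay"],
}


def greedy_attack(text, classify):
    """Swap one word at a time, querying the model after each swap;
    stop as soon as the predicted label flips.
    Returns (adversarial_text, total_queries_used)."""
    original = classify(text)  # one query to get the starting label
    queries = 1
    words = text.split()
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w.lower(), []):
            trial = words[:]
            trial[i] = cand
            queries += 1  # every model call costs one query
            if classify(" ".join(trial)) != original:
                return " ".join(trial), queries
    return text, queries  # attack failed within this substitution set
```

A naive attacker might exhaust all substitutions at every position; query-efficient methods like the one described aim to reach the label flip with far fewer of these model calls.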