Honeypots thwart adversarial AI attacks, safeguarding online machine learning services from malicious manipulation.
The article discusses using honeypots to protect online machine-learning services from adversarial attacks. The researchers aim to deceive attackers and make it difficult for them to manipulate the learning model. They achieve this by feeding the attacker false information, steering them toward a decoy model, and imposing extra work that wastes the attacker's effort.
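The defense described above can be sketched as a simple request router: suspicious query patterns are diverted to a decoy model that returns misleading answers, so the attacker's probing budget is spent on the honeypot rather than the real model. This is a minimal illustration, not the authors' implementation; the `real_model` and `decoy_model` callables and the near-duplicate-query heuristic are assumptions introduced for the example.

```python
class HoneypotRouter:
    """Route suspected adversarial queries to a decoy model.

    `real_model` and `decoy_model` are hypothetical callables standing in
    for deployed classifiers; the repeated-query heuristic below is a
    placeholder for a real attack detector.
    """

    def __init__(self, real_model, decoy_model, threshold=3):
        self.real_model = real_model
        self.decoy_model = decoy_model
        self.threshold = threshold
        self.seen = {}  # (client, query fingerprint) -> repeat count

    def predict(self, client_id, query):
        # Streams of near-identical queries are a common sign of
        # query-based (black-box) adversarial probing.
        key = (client_id, round(sum(query), 1))
        self.seen[key] = self.seen.get(key, 0) + 1
        if self.seen[key] > self.threshold:
            # Serve plausible but misleading answers from the decoy,
            # creating extra work for the attacker.
            return self.decoy_model(query)
        return self.real_model(query)


# Toy usage: the decoy deliberately flips the real model's label.
real = lambda q: 1 if sum(q) > 0 else 0
decoy = lambda q: 1 - (1 if sum(q) > 0 else 0)

router = HoneypotRouter(real, decoy, threshold=3)
results = [router.predict("client-a", [0.5, 0.5]) for _ in range(5)]
print(results)  # first 3 answers are genuine, the rest come from the decoy
```

In a production system the fingerprint would be a robust hash or embedding distance rather than a rounded sum, and the decoy would mimic the real model's output distribution closely enough that the attacker cannot easily tell it has been diverted.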