Overconfident AI Models Vulnerable to Label Noise, Threatening Real-World Deployments
Benign overfitting, where a deep learning model fits noisy training labels yet still generalizes well, can break down under label noise in real-world classification tasks. A ResNet, for example, overfits benignly on CIFAR-10 but not on ImageNet, a clear difference in behavior between the two benchmarks. By analyzing a setting in which the number of parameters is not much larger than the number of data points, researchers found that benign overfitting can fail in the presence of label noise: the model interpolates the corrupted labels at the cost of test accuracy. The finding underscores the importance of understanding implicit bias in underfitting regimes for future research.
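As a rough illustration of the kind of experiment behind this finding, the sketch below injects symmetric label noise into CIFAR-10 and trains a ResNet while tracking both noisy training accuracy and clean test accuracy. The noise rate, epoch count, and optimizer settings are illustrative assumptions, not the researchers' actual protocol.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Assumed settings for illustration: 20% symmetric label noise, few epochs.
NOISE_RATE = 0.2
EPOCHS = 5
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10("./data", train=False, download=True, transform=transform)

# Inject symmetric label noise: replace a random subset of training labels
# with uniformly random (possibly incorrect) classes.
g = torch.Generator().manual_seed(0)
targets = torch.tensor(train_set.targets)
noisy = torch.rand(len(targets), generator=g) < NOISE_RATE
targets[noisy] = torch.randint(0, 10, (int(noisy.sum()),), generator=g)
train_set.targets = targets.tolist()

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

# A stock ResNet-18 with a 10-class head; the paper's exact architecture
# and capacity regime may differ.
model = torchvision.models.resnet18(num_classes=10).to(DEVICE)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def accuracy(loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(DEVICE)).argmax(1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

for epoch in range(EPOCHS):
    model.train()
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x.to(DEVICE)), y.to(DEVICE)).backward()
        opt.step()
    # Benign overfitting shows up as near-perfect accuracy on the noisy
    # training set alongside good clean test accuracy; its failure shows
    # up as test accuracy degrading roughly with the noise rate.
    print(f"epoch {epoch}: train acc {accuracy(train_loader):.3f}, "
          f"test acc {accuracy(test_loader):.3f}")
```

Sweeping NOISE_RATE (say 0.0 to 0.4) and comparing the train/test accuracy gap is one way to probe where fitting the noise stops being benign, in the spirit of the result summarized above.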