Prestopping keeps deep learning models from memorizing noisy labels!
Noisy labels in training data cause deep neural networks to generalize poorly. A new method called Prestopping avoids overfitting to noisy labels by stopping training early, before the network starts memorizing the noise, and then resuming training on a set of likely true-labeled samples. Under real-world noise, this approach outperforms other techniques, reducing test error by 0.4–8.2 percentage points.
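The actual Prestopping algorithm chooses the stopping point and the safe sample set with its own criteria; the sketch below is only a minimal illustration of the two-phase idea in PyTorch, under simplifying assumptions. The function names (`prestop_and_resume`, `select_likely_clean`), the fixed `stop_epoch`, and the rule "keep samples the early-stopped model already classifies correctly" are hypothetical stand-ins, not the paper's exact procedure.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def train_epochs(model, loader, optimizer, criterion, num_epochs, device="cpu"):
    """Standard mini-batch training for a fixed number of epochs."""
    model.train()
    for _ in range(num_epochs):
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

def select_likely_clean(model, dataset, batch_size=256, device="cpu"):
    """Keep indices whose labels the early-stopped model already predicts
    correctly; these are treated as likely true-labeled samples (a simplifying
    assumption standing in for the paper's selection criterion)."""
    model.eval()
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    clean_indices, offset = [], 0
    with torch.no_grad():
        for inputs, targets in loader:
            preds = model(inputs.to(device)).argmax(dim=1).cpu()
            agree = (preds == targets).nonzero(as_tuple=True)[0]
            clean_indices.extend((agree + offset).tolist())
            offset += targets.size(0)
    return clean_indices

def prestop_and_resume(model, noisy_dataset, stop_epoch=20, resume_epochs=80,
                       lr=0.1, batch_size=128, device="cpu"):
    """Two-phase training: (1) stop early, before the network memorizes noisy
    labels; (2) resume training only on the likely true-labeled subset."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    full_loader = DataLoader(noisy_dataset, batch_size=batch_size, shuffle=True)
    train_epochs(model, full_loader, optimizer, criterion, stop_epoch, device)   # phase 1
    clean_idx = select_likely_clean(model, noisy_dataset, device=device)
    clean_loader = DataLoader(Subset(noisy_dataset, clean_idx),
                              batch_size=batch_size, shuffle=True)
    train_epochs(model, clean_loader, optimizer, criterion, resume_epochs, device)  # phase 2
    return model
```

The key design choice this sketch captures is that the noisy dataset is never relabeled or discarded up front: the network trained only briefly is used as its own filter, and the second phase sees just the samples whose given labels it already agrees with.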