Mixed sample data augmentation harms model interpretability in deep neural networks.
The data augmentation methods used to train deep neural networks can affect how easily the resulting models can be understood and interpreted. A study found that models trained with mixed sample data augmentation, such as CutMix and SaliencyMix, score lower on interpretability measures. These techniques combine pixels and labels from two training images into a single example, and models trained on them are harder to explain, which matters in high-stakes applications where model decisions need to be understood.
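For context, the sketch below illustrates the kind of mixing these methods perform, using the CutMix recipe (paste a random patch from one image into another and mix the labels in proportion to the patch area). It is a minimal illustration in plain numpy; the function name and signature are illustrative, not taken from the study.

```python
import numpy as np

def cutmix(img_a, label_a, img_b, label_b, alpha=1.0, rng=None):
    """CutMix-style mixing: paste a random patch of img_b into img_a
    and mix the one-hot labels in proportion to the patch area.
    Images are (H, W, C) numpy arrays; labels are one-hot vectors.
    """
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)  # mixing ratio drawn from Beta(alpha, alpha)

    # Patch side lengths chosen so the patch covers roughly (1 - lam) of the image.
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)  # random patch center
    top, bottom = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    left, right = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)

    # Overwrite the patch region of img_a with the same region of img_b.
    mixed = img_a.copy()
    mixed[top:bottom, left:right] = img_b[top:bottom, left:right]

    # Recompute lam from the clipped patch so the label matches the pixels used.
    lam = 1.0 - (bottom - top) * (right - left) / (h * w)
    return mixed, lam * label_a + (1 - lam) * label_b
```

The resulting training example no longer depicts a single object, which is one intuition for why attribution-based explanations of models trained this way can become harder to read.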