Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks

Author(s)

Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao & Zheng Ma
Abstract

We study the training process of Deep Neural Networks (DNNs) from the perspective of Fourier analysis. On high-dimensional benchmark datasets such as MNIST and CIFAR10, and on deep networks such as VGG16, we demonstrate a universal Frequency Principle (F-Principle): DNNs often fit target functions from low to high frequencies. This behavior is the opposite of that of the Jacobi method, a conventional iterative numerical scheme, which converges faster on higher frequencies across a range of scientific computing problems. Through theory developed in an idealized setting, we show that the F-Principle arises from the smoothness/regularity of commonly used activation functions. The F-Principle implies an implicit bias: DNNs tend to fit training data with a low-frequency function. This understanding explains the good generalization of DNNs on most real datasets and their poor generalization on the parity function or randomized datasets.
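For readers who want to see the F-Principle numerically, the following is a minimal sketch, not the authors' code: it assumes PyTorch is available, and the network size, learning rate, and target frequencies (k=1 and k=10) are illustrative choices. It trains a small tanh network on a two-mode 1D target and prints the relative error of each Fourier mode of the network output during training. In runs of this kind, the error of the low-frequency mode typically decays well before that of the high-frequency one, which is the qualitative content of the F-Principle.

    # Minimal illustrative sketch of the F-Principle (not the authors' code).
    # Assumes PyTorch; all hyperparameters below are illustrative.
    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Uniform grid on [0, 1) so sin(2*pi*k*x) lands exactly in DFT bin k.
    n = 256
    x = (torch.arange(n, dtype=torch.float32) / n).unsqueeze(1)
    # Target with one low-frequency (k=1) and one high-frequency (k=10) mode.
    y = torch.sin(2 * math.pi * 1 * x) + torch.sin(2 * math.pi * 10 * x)

    net = nn.Sequential(
        nn.Linear(1, 200), nn.Tanh(),
        nn.Linear(200, 200), nn.Tanh(),
        nn.Linear(200, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    y_hat = torch.fft.rfft(y.squeeze())  # Fourier modes of the target
    for step in range(10001):
        opt.zero_grad()
        pred = net(x)
        loss = loss_fn(pred, y)
        loss.backward()
        opt.step()
        if step % 2000 == 0:
            # Relative error of each Fourier mode of the network output.
            p_hat = torch.fft.rfft(pred.detach().squeeze())
            rel = (p_hat - y_hat).abs() / (y_hat.abs() + 1e-8)
            print(f"step {step:5d}  loss {loss.item():.2e}  "
                  f"rel err k=1: {rel[1].item():.2f}  k=10: {rel[10].item():.2f}")

Tracking per-mode relative error, rather than only the total loss, is what makes the frequency ordering visible: the scalar loss conflates all modes, while the DFT of the prediction separates them.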

About this article


DOI

10.4208/cicp.OA-2020-0085

How to Cite

Xu, Z.-Q. J., Zhang, Y., Luo, T., Xiao, Y., & Ma, Z. (2020). Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks. Communications in Computational Physics, 28(5), 1746-1767. https://doi.org/10.4208/cicp.OA-2020-0085