Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks

Authors

  • Zhi-Qin John Xu
  • Yaoyu Zhang
  • Tao Luo
  • Yanyang Xiao
  • Zheng Ma

DOI:

https://doi.org/10.4208/cicp.OA-2020-0085

Keywords:

Deep learning, training behavior, generalization, Jacobi iteration, Fourier analysis.

Abstract

We study the training process of Deep Neural Networks (DNNs) from a Fourier analysis perspective. We demonstrate a widely applicable Frequency Principle (F-Principle) — DNNs often fit target functions from low to high frequencies — on high-dimensional benchmark datasets such as MNIST/CIFAR10 and on deep networks such as VGG16. This F-Principle of DNNs is opposite to the behavior of the Jacobi method, a conventional iterative numerical scheme, which converges faster for higher frequencies in various scientific computing problems. With theory developed under an idealized setting, we show that the F-Principle results from the smoothness/regularity of commonly used activation functions. The F-Principle implies an implicit bias: DNNs tend to fit training data with a low-frequency function. This understanding explains the good generalization of DNNs on most real datasets and their poor generalization on the parity function or on randomized datasets.
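The following is a minimal numerical sketch of the F-Principle on a 1D toy problem, not taken from the paper: the network size, target function, optimizer, and all hyperparameters are illustrative assumptions. A small fully connected network is trained on a target that mixes one low-frequency and one high-frequency sinusoid, and the relative error of the corresponding Fourier modes of the network output is tracked during training; under the F-Principle, the low-frequency mode should converge first.

```python
# Illustrative sketch of the F-Principle (assumed setup, not the paper's experiments).
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1D samples and target: a low-frequency mode (k=1) plus a high-frequency mode (k=10).
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(np.pi * x) + 0.5 * torch.sin(10 * np.pi * x)

# Small tanh MLP; architecture chosen arbitrarily for illustration.
net = nn.Sequential(
    nn.Linear(1, 200), nn.Tanh(),
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def mode_error(pred, target, k):
    """Relative error of the k-th discrete Fourier mode of the prediction."""
    P = np.fft.rfft(pred.detach().numpy().ravel())
    T = np.fft.rfft(target.numpy().ravel())
    return abs(P[k] - T[k]) / (abs(T[k]) + 1e-12)

for step in range(5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        pred = net(x)
        # Expected behavior under the F-Principle: the k=1 error shrinks well before the k=10 error.
        print(f"step {step:5d}  loss {loss.item():.4f}  "
              f"low-freq err {mode_error(pred, y, 1):.3f}  "
              f"high-freq err {mode_error(pred, y, 10):.3f}")
```

In a typical run of this sketch, the low-frequency error drops within the first few hundred steps while the high-frequency error remains near one much longer, which is the qualitative behavior the F-Principle describes; a Jacobi iteration applied to a comparable 1D problem would show the opposite ordering.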

Published

2020-11-18

Section

Articles

How to Cite

Xu, Z.-Q. J., Zhang, Y., Luo, T., Xiao, Y., & Ma, Z. (2020). Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks. Communications in Computational Physics, 28(5), 1746-1767. https://doi.org/10.4208/cicp.OA-2020-0085