
How Can Deep Neural Networks Fail Even with Global Optima?

Year:    2024

Author:    Qingguang Guan

International Journal of Numerical Analysis and Modeling, Vol. 21 (2024), Iss. 5 : pp. 674–696

Abstract

Fully connected deep neural networks are successfully applied to classification and function approximation problems. By minimizing the cost function, i.e., finding the proper weights and biases, models can be built for accurate prediction. An ideal optimization process attains a global optimum. But do global optima always perform well? If not, how badly can they fail? In this work, we aim to: 1) extend the expressive power of shallow neural networks to networks of any depth using a simple trick; 2) construct extremely overfitting deep neural networks that, despite attaining global optima, still fail on classification and function approximation problems. Different types of activation functions are considered, including the ReLU, Parametric ReLU, and Sigmoid functions. Extensive theoretical analysis is conducted, from one-dimensional models to models of arbitrary dimension. Numerical results illustrate the theoretical findings.

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/ijnam2024-1027

Published online:    2024-01

Copyright:    © Global Science Press

Pages:    23

Keywords:    Deep neural network, global optima, binary classification, function approximation, overfitting.

Author Details

Qingguang Guan