Variational Formulations of ODE-Net as a Mean-Field Optimal Control Problem and Existence Results

Author(s)

Abstract

This paper presents a mathematical analysis of ODE-Net, a continuum model of deep neural networks (DNNs). In recent years, machine learning researchers have proposed replacing the deep layered structure of DNNs with an ODE, regarded as a continuum limit. These studies formulate the “learning” of ODE-Net as the minimization of a “loss” functional constrained by a parametric ODE. Although such formulations implicitly assume that a minimizer exists, few studies have investigated this existence analytically in detail. In the present paper, the existence of a minimizer is discussed based on a formulation of ODE-Net as a measure-theoretic mean-field optimal control problem. First, existence is proved when the neural network describing the vector field of ODE-Net is linear with respect to the learnable parameters; the proof combines the measure-theoretic formulation with the direct method of the calculus of variations. Second, an idealized minimization problem is proposed to remove this linearity assumption, inspired by the kinetic regularization associated with the Benamou–Brenier formula and by universal approximation theorems for neural networks.
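To make the continuum-limit idea concrete, the following is a minimal numerical sketch (not taken from the paper; all names and parameter choices here are illustrative). A residual layer x_{k+1} = x_k + h·f(x_k, θ_k) is the explicit Euler discretization of the ODE dx/dt = f(x(t), θ(t)); the vector field below is linear in the learnable parameter θ, matching the setting under which the paper proves existence of a minimizer.

```python
import numpy as np

def vector_field(x, theta):
    # f(x, theta) = theta @ tanh(x): linear in the parameter theta,
    # nonlinear in the state x (a common "linear-in-parameters" form).
    return theta @ np.tanh(x)

def ode_net_forward(x0, thetas, T=1.0):
    # Explicit Euler integration of dx/dt = f(x, theta(t)) over [0, T],
    # with one parameter per layer/step: the "deep network" viewed as a
    # discretized ODE flow.
    n_steps = len(thetas)
    h = T / n_steps
    x = np.array(x0, dtype=float)
    for theta in thetas:
        x = x + h * vector_field(x, theta)
    return x

# Doubling the depth (halving the step size h) refines the same
# continuous-time flow, which is the sense of the continuum limit.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
theta = 0.1 * rng.standard_normal((4, 4))
coarse = ode_net_forward(x0, [theta] * 8)   # depth 8
fine = ode_net_forward(x0, [theta] * 16)    # depth 16
```

As the depth grows, the two outputs converge to the time-1 map of the underlying ODE; the "learning" problem the paper studies is to minimize a loss over the (measure-valued) parameters of this flow.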

About this article


DOI: 10.4208/jml.231210