Volume 20, Issue 2
[Journal Information] Journal of Machine Learning, Volume 1, Number 4, 2022

xuzhiqin@sjtu.edu.cn


Journal of Machine Learning (JML, jml.pub) is a new journal published by Global Science Press and sponsored by the Center for Machine Learning Research, Peking University and the AI for Science Institute, Beijing. Professor Weinan E serves as Editor-in-Chief, together with managing editors Jiequn Han, Arnulf Jentzen, Qianxiao Li, Lei Wang, Zhi-Qin John Xu, and Linfeng Zhang. JML publishes high-quality research papers in all areas of machine learning (ML), including innovative algorithms, theory, and applications. The journal emphasizes balanced coverage of both theory and application.

An introduction to the fourth issue of Journal of Machine Learning.

Title: A Mathematical Framework for Learning Probability Distributions

Author: Hongkang Yang

DOI: 10.4208/jml.221202, J. Mach. Learn., 1 (2022), pp. 373-431.

The modeling of probability distributions is an important branch of machine learning. It has become popular in recent years thanks to the success of deep generative models on difficult tasks such as image synthesis and text conversation. Nevertheless, we still lack a theoretical understanding of why distribution learning models perform so well. One mystery is the following paradox: it is in general inevitable that the model suffers from memorization (it converges to the empirical distribution of the training samples) and thus becomes useless, and yet in practice the trained model can generate new samples and estimate the probability density at unseen points. Meanwhile, existing models are so diverse that it has become overwhelming for practitioners and researchers to form a clear picture of this fast-growing subject. This paper provides a mathematical framework that unifies the well-known models, so that they can be systematically derived from simple principles. The framework enables an analysis of the theoretical mysteries of distribution learning, in particular the paradox between memorization and generalization. It is established that during training the model enjoys implicit regularization: it approximates the hidden target distribution before eventually turning towards the empirical distribution. With early stopping, the generalization error escapes the curse of dimensionality, and the model therefore generalizes well.
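To make the memorization-versus-generalization picture concrete, here is a minimal numerical sketch (our illustration, not the paper's construction): a Gaussian kernel density estimator whose bandwidth shrinks over a proxy "training time" first approaches the hidden target density and then collapses onto the empirical distribution, so an intermediate stopping point minimizes the error to the target.

```python
# Minimal sketch: shrinking KDE bandwidth as a proxy for training time.
import numpy as np

rng = np.random.default_rng(0)
n = 200
samples = rng.standard_normal(n)          # training data from the hidden target N(0, 1)

grid = np.linspace(-4.0, 4.0, 401)
true_density = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

def kde(x, data, h):
    """Gaussian kernel density estimate with bandwidth h."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-z**2 / 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Large h plays the role of early training, tiny h of late training
# (the estimate degenerates towards spikes at the training points).
for h in [1.0, 0.5, 0.2, 0.05, 0.01]:
    mse = np.mean((kde(grid, samples, h) - true_density) ** 2)
    print(f"bandwidth {h:5.2f}   mean squared error to hidden target {mse:.5f}")
# The error is typically U-shaped in h: it first decreases and then grows again,
# which is the memorization effect that early stopping avoids.
```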

Title: Approximation of Functionals by Neural Network Without Curse of Dimensionality

Authors: Yahong Yang & Yang Xiang

DOI: 10.4208/jml.221018, J. Mach. Learn., 1 (2022), pp. 342-372.

Learning functionals or operators with neural networks is nowadays widely used in computational and applied mathematics. Compared with learning functions, an essential difference is that the input spaces of functionals or operators are infinite dimensional. Some recent works learn functionals or operators by reducing the input space to a finite dimensional one. However, the curse of dimensionality always arises in this type of method: to maintain the accuracy of the approximation, the number of sample points must grow exponentially with the dimension.
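As a back-of-the-envelope illustration of this exponential growth (our example, not the paper's): resolving each coordinate of a d-dimensional input with only 10 grid points already requires 10^d samples.

```python
# Grid-based discretization: 10 points per coordinate means 10**d points in total.
for d in [1, 2, 5, 10, 20]:
    print(f"dimension d = {d:2d}:  10**d = {10**d:.2e} sample points")
```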

In this paper, we establish a new method for the approximation of functionals by neural networks without the curse of dimensionality. Functionals, such as linear functionals and energy functionals, have a wide range of important applications in science and engineering. We define the Fourier series of functionals and the associated Barron spectral space of functionals, on which our new neural network approximation method is based. The parameters and the network structure in our method depend only on the functional. The approximation error of the neural network is $O(1/\sqrt{m})$, where $m$ is the size of the network; this rate does not depend on the dimensionality.
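The dimension-independent $O(1/\sqrt{m})$ rate is of Monte Carlo type. The toy sketch below (our own random-feature illustration, not the paper's Fourier-series construction for functionals) averages $m$ random cosine features to approximate a target defined as such an expectation in $d = 20$ dimensions; the observed error decays like $1/\sqrt{m}$ regardless of $d$.

```python
# Hedged toy illustration of the dimension-free O(1/sqrt(m)) rate via random features.
import numpy as np

rng = np.random.default_rng(1)
d = 20
x_test = rng.uniform(-1.0, 1.0, size=(100, d))    # points at which we compare

def feature_average(x, m, seed):
    """Average of m random cosine features, i.e. a width-m two-layer network."""
    r = np.random.default_rng(seed)
    w = r.standard_normal((m, x.shape[1]))
    b = r.uniform(0.0, 2 * np.pi, m)
    return np.cos(x @ w.T + b).mean(axis=1)

# A very wide network serves as a stand-in for the exact expectation (the target).
target = feature_average(x_test, 100_000, seed=123)

for m in [10, 100, 1_000, 10_000]:
    approx = feature_average(x_test, m, seed=m)
    rmse = np.sqrt(np.mean((approx - target) ** 2))
    print(f"m = {m:6d}   rmse = {rmse:.4f}   sqrt(m) * rmse = {np.sqrt(m) * rmse:.3f}")
# sqrt(m) * rmse stays roughly constant, consistent with the O(1/sqrt(m)) error
# discussed above, even though the input dimension is d = 20.
```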