Structured First-Layer Initialization Pre-Training Techniques to Accelerate Training Process Based on $\varepsilon$-Rank

Author(s)

Tao Tang, Jiang Yang, Yuxiang Zhao & Quanhui Zhu
Abstract

Training deep neural networks for scientific computing remains computationally expensive due to the slow formation of diverse feature representations in early training stages. Recent studies [37] identify a staircase phenomenon in training dynamics, where loss decreases are closely correlated with increases in $\varepsilon$-rank, reflecting the effective number of linearly independent neuron functions. Motivated by this observation, this work proposes a structured first-layer initialization (SFLI) pre-training technique to enhance the diversity of neural features at initialization by constructing $\varepsilon$-linearly independent neurons in the input layer. We present systematic initialization schemes compatible with various activation functions and integrate the strategy into multiple neural architectures, including modified multi-layer perceptrons and physics-informed residual adaptive networks. The strategy requires adding only one line of code to conventional stochastic gradient descent pipelines. Extensive numerical experiments on function approximation and PDE benchmarks demonstrate that SFLI significantly improves the initial $\varepsilon$-rank, accelerates convergence, mitigates spectral bias, and enhances prediction accuracy.
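To make the idea concrete, the snippet below is a minimal sketch of what a structured first-layer initialization could look like in PyTorch. The function name structured_first_layer_init, the domain argument, and the specific choice of spreading tanh activation centers on a uniform grid are illustrative assumptions for this sketch, not the construction proposed in the paper; they only demonstrate the general principle of making first-layer neuron functions closer to $\varepsilon$-linearly independent before standard training begins.

import torch
import torch.nn as nn

def structured_first_layer_init(first_layer: nn.Linear, domain=(-1.0, 1.0)):
    """Illustrative SFLI-style initializer (a sketch, not the paper's scheme):
    spread the first-layer neuron functions sigma(w_i . x + b_i) over the input
    domain so they start out closer to epsilon-linearly independent."""
    n_neurons, in_dim = first_layer.weight.shape
    with torch.no_grad():
        # Unit-norm weight directions: every neuron is a non-degenerate ridge function.
        w = torch.randn(n_neurons, in_dim)
        w /= w.norm(dim=1, keepdim=True)
        first_layer.weight.copy_(w)
        # Choose biases so the activation "centers" (where w_i . x + b_i = 0)
        # tile the domain uniformly instead of clustering near the origin.
        centers = torch.linspace(domain[0], domain[1], n_neurons)
        first_layer.bias.copy_(-centers)

# Usage: one extra call on top of an otherwise unchanged training pipeline.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
structured_first_layer_init(net[0])  # the single added line before SGD/Adam training

In this reading, the "one line of code" claimed in the abstract corresponds to the single initializer call inserted before the usual optimization loop; everything else in the training procedure is left untouched.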

Author Biographies

  • Tao Tang

    School of Mathematics and Statistics, Guangzhou Nanfang College, Guangzhou, China

    Zhuhai SimArk Technology Co., LTD, Zhuhai, Guangdong, China

  • Jiang Yang

    Department of Mathematics, Southern University of Science and Technology, Shenzhen, China

    SUSTech International Center for Mathematics, Shenzhen, China

    Guangdong Provincial Key Laboratory of Computational Science and Material Design, Southern University of Science and Technology, Shenzhen, China

    National Center for Applied Mathematics Shenzhen (NCAMS), Shenzhen, 518055, P.R. China

  • Yuxiang Zhao

    Department of Mathematics, Southern University of Science and Technology, Shenzhen, China

  • Quanhui Zhu

    Department of Mathematics, Southern University of Science and Technology, Shenzhen, China

    Department of Mathematics, National University of Singapore, Singapore

About this article

DOI

10.4208/cicp.OA-2025-0185

How to Cite

Tang, T., Yang, J., Zhao, Y., & Zhu, Q. (2026). Structured First-Layer Initialization Pre-Training Techniques to Accelerate Training Process Based on $\varepsilon$-Rank. Communications in Computational Physics. https://doi.org/10.4208/cicp.OA-2025-0185