Citation: OUYANG Ye, JIANG Wei, WU Yi, FENG Qiang, ZHENG Hong. NEURAL NETWORK METHOD FOR CONSTRUCTIVE VARIATIONAL PROBLEMS BY GENERALIZED MULTIPLIER METHOD[J]. Engineering Mechanics, 2023, 40(11): 11-20. DOI: 10.6052/j.issn.1000-4750.2022.05.0488

NEURAL NETWORK METHOD FOR CONSTRUCTIVE VARIATIONAL PROBLEMS BY GENERALIZED MULTIPLIER METHOD


    Abstract: The imposition of boundary conditions is an essential step in solving boundary value problems of partial differential equations. When such a problem is solved by a neural network, the original problem is transformed into its corresponding constructive variational problem, and the loss function is a functional consisting of the governing equations and the boundary conditions. If the boundary conditions are imposed by the classical penalty function method or its improved variants, the value of the penalty factor directly affects the solution accuracy and the computational efficiency; if they are imposed directly by the Lagrange multiplier method, the computed results may deviate from the optimal solution of the original problem. To overcome these limitations, the generalized multiplier method is employed to impose the boundary conditions. The predicted solution of the original problem is obtained from the neural network, the generalized multiplier method is used to construct the loss function of the network, and the loss is evaluated. The network parameters are then optimized by gradient descent and the loss is checked against the prescribed tolerance; if the tolerance is not met, the penalty factor and the multipliers are updated and the minimization is repeated until the loss is acceptable. The results of numerical examples show that, compared with neural networks in which the boundary conditions are imposed by the classical penalty function method, the L1 exact penalty function method, or the Lagrange multiplier method, the proposed method achieves better numerical accuracy, higher computational efficiency, and a more stable solution process.

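The workflow summarized in the abstract (predict with a network, assemble a loss from the PDE residual and the boundary residual, minimize by gradient descent, then update the multipliers and the penalty factor and repeat) is the standard generalized multiplier, i.e. augmented Lagrangian, scheme. Below is a minimal sketch of that loop for a one-dimensional Poisson problem, assuming a PyTorch implementation; the network size, the test problem, and values such as `sigma` and `beta` are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): boundary conditions imposed by the
# generalized multiplier (augmented Lagrangian) method in a PINN-style network.
# Illustrative problem: -u''(x) = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))

x_in = torch.linspace(0.0, 1.0, 101)[1:-1].reshape(-1, 1).requires_grad_(True)  # interior points
x_bc = torch.tensor([[0.0], [1.0]])                                             # boundary points
u_bc = torch.zeros_like(x_bc)                                                   # prescribed values

lam = torch.zeros_like(x_bc)   # one Lagrange multiplier per boundary point
sigma, beta = 10.0, 2.0        # penalty factor and its growth rate (assumed values)

def pde_residual(x):
    # residual of -u'' - f with f = pi^2 sin(pi x)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return -d2u - torch.pi ** 2 * torch.sin(torch.pi * x)

for outer in range(10):                       # outer loop: update multipliers / penalty
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):                     # inner loop: minimize the augmented loss
        opt.zero_grad()
        g = net(x_bc) - u_bc                  # boundary-condition residual
        loss = (pde_residual(x_in) ** 2).mean() \
             + (lam * g).sum() + 0.5 * sigma * (g ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        g = net(x_bc) - u_bc
        lam += sigma * g                      # generalized multiplier update
        if g.abs().max() > 1e-4:              # enlarge penalty only if BCs not yet met
            sigma *= beta
```

The point of the multiplier update `lam += sigma * g` is that the boundary conditions can be satisfied without driving the penalty factor to extreme values, which is what distinguishes this scheme from a pure penalty formulation.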
     
