TY - JOUR
T1 - Convergence of Stochastic Gradient Descent under a Local Łojasiewicz Condition for Deep Neural Networks
AU - An, Jing
AU - Lu, Jianfeng
JO - Journal of Machine Learning
VL - 2
SP - 89
EP - 107
PY - 2025
DA - 2025/06
SN - 4
DO - http://doi.org/10.4208/jml.240724
UR - https://global-sci.org/intro/article_detail/jml/24143.html
KW - Non-convex optimization
KW - Stochastic gradient descent
KW - Convergence analysis
AB - We study the convergence of stochastic gradient descent (SGD) for non-convex objective functions. We establish local convergence with positive probability under the local Łojasiewicz condition introduced by Chatterjee [arXiv:2203.16462, 2022], together with an additional local structural assumption on the loss landscape. A key component of our proof is to show that the entire SGD trajectory stays inside the local region with positive probability. We also provide examples of finite-width neural networks for which our assumptions hold.
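As context for the abstract's key assumption, the local Łojasiewicz condition of Chatterjee is, in spirit, a Polyak-Łojasiewicz-type lower bound on the gradient that is required to hold only near the initialization. The sketch below uses illustrative notation (L, \theta_0, r, \alpha are not taken from the record), and the precise quantitative form used in the paper may differ.

% Sketch of a local Łojasiewicz-type condition (illustrative, not the paper's exact statement):
% the squared gradient norm of the loss L dominates the loss itself on a ball around the initialization.
\[
  \exists\, r > 0,\ \alpha > 0:\qquad
  \|\nabla L(\theta)\|^{2} \;\ge\; \alpha\, L(\theta)
  \quad \text{for all } \theta \in B(\theta_{0}, r),
\]
where $\theta_{0}$ denotes the initialization and $B(\theta_{0}, r)$ the Euclidean ball of radius $r$ around it.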