High-order neural networks have strong nonlinear mapping ability, but their structure is relatively complex, which restricts network efficiency, and the relevant theoretical analysis remains incomplete. To address these problems, an online gradient learning algorithm for the Pi-Sigma neural network with a smoothed group lasso regularization term is proposed. Since the original lasso regularization term contains absolute values and is not differentiable at the origin, it causes oscillations in numerical experiments and poses a great challenge to the convergence analysis of the algorithm. We use a smoothing technique to overcome this deficiency. The main contribution of this paper lies in the adoption of an online learning algorithm, which effectively improves the efficiency of the algorithm. At the same time, rigorous theoretical proofs are presented, including strong
convergence and weak convergence results. Finally, the effectiveness of the algorithm and the
correctness of the theoretical results are verified by numerical experiments.
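The smoothing idea described above can be sketched in code. The sketch below is illustrative only: the specific smoothing function (here the common choice of replacing |t| with sqrt(t^2 + mu^2)), the network sizes, the sigmoid output unit, and all hyperparameters are assumptions for demonstration, not the authors' exact formulation.

```python
import numpy as np

MU = 1e-3  # smoothing parameter (assumed value)

def smooth_abs(t, mu=MU):
    # Smooth surrogate for |t|: differentiable everywhere, including the origin.
    return np.sqrt(t * t + mu * mu)

def smooth_abs_grad(t, mu=MU):
    # Gradient of the smooth surrogate; well-defined at t = 0 (equals 0 there).
    return t / np.sqrt(t * t + mu * mu)

class PiSigma:
    """Minimal Pi-Sigma network: linear sigma units feeding one product (pi) unit."""

    def __init__(self, n_in, n_sigma, lam=1e-4, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_sigma, n_in + 1))  # +1 for bias
        self.lam, self.lr = lam, lr

    def forward(self, x):
        x1 = np.append(x, 1.0)          # append bias input
        h = self.W @ x1                 # sigma units: linear sums
        p = np.prod(h)                  # pi unit: product of all sums
        y = 1.0 / (1.0 + np.exp(-p))    # sigmoid output
        return x1, h, p, y

    def step(self, x, target):
        # One online gradient update on a single sample, with the
        # smoothed lasso penalty added to the weight gradient.
        x1, h, p, y = self.forward(x)
        err = y - target
        dp = err * y * (1.0 - y)        # dE/dp through the sigmoid
        for j in range(self.W.shape[0]):
            prod_others = np.prod(np.delete(h, j))  # d p / d h_j
            grad = dp * prod_others * x1 + self.lam * smooth_abs_grad(self.W[j])
            self.W[j] -= self.lr * grad
        return 0.5 * err**2 + self.lam * smooth_abs(self.W).sum()
```

Because `smooth_abs_grad` is continuous at the origin, the per-sample update above avoids the sign discontinuity of the plain lasso subgradient, which is the source of the oscillations the paper refers to; as `MU` shrinks, the surrogate approaches |t| pointwise.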