Application of the dynamic regressor extension and mixing approach in machine learning on the example of perceptron
https://doi.org/10.17586/2226-1494-2025-25-1-169-173
Abstract
This paper explores the application of the Dynamic Regressor Extension and Mixing method to improve the learning speed in machine learning tasks. The proposed approach is demonstrated using a perceptron applied to regression and binary classification problems. The method transforms a multi-parameter optimization problem into a set of independent scalar regressions, significantly accelerating the convergence of the algorithm and reducing computational costs. Results from computer simulations, including comparisons with stochastic gradient descent and Adam methods, confirm the advantages of the proposed approach in terms of convergence speed and computational efficiency.
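The full derivation is in the paper itself; as a rough illustration of the idea summarized in the abstract, the sketch below applies DREM to a linear-in-parameters model (a single-layer perceptron with identity activation, matching the regression setting). The concrete choices here — the delay-based regressor extension, the gain gamma, and the normalized scalar update — are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression: y(t) = phi(t)^T theta, i.e. a perceptron
# with identity activation whose weights theta are to be recovered.
n = 3                                  # number of unknown weights
theta = np.array([1.5, -2.0, 0.7])     # "true" weights (for the demo)
T = 400                                # number of samples
Phi = rng.normal(size=(T, n))          # regressor sequence phi(t)
y = Phi @ theta                        # measured outputs

gamma = 0.5                            # scalar adaptation gain (tuning choice)
theta_hat = np.zeros(n)                # weight estimates

for t in range(n - 1, T):
    # --- DREM step 1: regressor extension -----------------------------
    # Stack n delayed copies of the scalar regression into an n x n
    # matrix Phi_e(t) and an n-vector Y_e(t), so that Y_e = Phi_e @ theta.
    # (Delays are one common extension; stable LTI filters also work.)
    Phi_e = Phi[t - n + 1 : t + 1]     # rows are phi(t-n+1), ..., phi(t)
    Y_e = y[t - n + 1 : t + 1]

    # --- DREM step 2: mixing with the adjugate matrix ------------------
    # Since adj(Phi_e) @ Phi_e = det(Phi_e) * I, left-multiplying by the
    # adjugate decouples the problem into n independent scalar regressions
    #   Y_i(t) = Delta(t) * theta_i,   Delta(t) = det(Phi_e(t)).
    Delta = np.linalg.det(Phi_e)
    if abs(Delta) < 1e-8:              # skip ill-conditioned samples
        continue
    adj = Delta * np.linalg.inv(Phi_e) # adj(A) = det(A) * inv(A)
    Y = adj @ Y_e                      # Y = Delta * theta, elementwise

    # --- Independent scalar gradient updates ----------------------------
    # Normalizing by (1 + Delta^2) keeps each scalar update stable for
    # any magnitude of Delta (an illustrative choice of step size).
    theta_hat += gamma * Delta * (Y - Delta * theta_hat) / (1 + Delta**2)

print("estimate:", np.round(theta_hat, 3))  # converges to [ 1.5 -2.  0.7]
```

Because the mixing step reduces the n-dimensional problem to n independent scalar regressions Y_i = Delta * theta_i, each weight converges at its own rate with a single scalar gain, which is the source of the speed-up over joint gradient methods that the abstract reports.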
About the Authors
A. A. Margun
Russian Federation
Alexey A. Margun — PhD, Associate Professor; Scientific Researcher
Saint Petersburg, 197101
Saint Petersburg, 199178
K. A. Zimenko
Russian Federation
Konstantin A. Zimenko — PhD, Associate Professor
Saint Petersburg, 197101
A. A. Bobtsov
Russian Federation
Alexey A. Bobtsov — D.Sc., Full Professor
Saint Petersburg, 197101
References
1. Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 1958, vol. 65, no. 6, pp. 386–408. https://doi.org/10.1037/h0042519
2. Haykin S.S. Neural Networks: A Comprehensive Foundation. Macmillan, 1994, 696 p.
3. Karthick K. Comprehensive overview of optimization techniques in machine learning training. Control Systems and Optimization Letters, 2024, vol. 2, no. 1, pp. 23–27. https://doi.org/10.59247/csol.v2i1.69
4. Reyad M., Sarhan A.M., Arafa M. A modified Adam algorithm for deep neural network optimization. Neural Computing and Applications, 2023, vol. 35, no. 23, pp. 17095–17112. https://doi.org/10.1007/s00521-023-08568-z
5. Wang Y., Xiao Z., Cao G. A convolutional neural network method based on Adam optimizer with power-exponential learning rate for bearing fault diagnosis. Journal of Vibroengineering, 2022, vol. 24, no. 4, pp. 666–678. https://doi.org/10.21595/jve.2022.22271
6. Liu M., Yao D., Liu Z., Guo J., Chen J. An improved Adam optimization algorithm combining adaptive coefficients and composite gradients based on randomized block coordinate descent. Computational Intelligence and Neuroscience, 2023, vol. 10, no. 1, art. 4765891. https://doi.org/10.1155/2023/4765891
7. Demidenko E.Z. Linear and Nonlinear Regression. Moscow, Finstat Publ., 1981, 302 p. (in Russian)
8. Aranovskiy S., Bobtsov A., Ortega R., Pyrkin A. Performance enhancement of parameter estimators via dynamic regressor extension and mixing. IEEE Transactions on Automatic Control, 2017, vol. 62, no. 7, pp. 3546–3550. https://doi.org/10.1109/tac.2016.2614889
9. Ljung L. System Identification: Theory for the User. Prentice-Hall, 1987, 519 p.
For citations:
Margun A.A., Zimenko K.A., Bobtsov A.A. Application of the dynamic regressor extension and mixing approach in machine learning on the example of perceptron. Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2025;25(1):169-173. (In Russ.) https://doi.org/10.17586/2226-1494-2025-25-1-169-173