Scientific and Technical Journal of Information Technologies, Mechanics and Optics

Image structural analysis and adaptive resonance in artificial neural networks (Review paper)

https://doi.org/10.17586/2226-1494-2025-25-2-273-285

Abstract

The observable world is hierarchically structured. The technique of interaction between counter-propagating information flows in hierarchically organized systems was named "adaptive resonance"; it was first successfully modeled in an artificial neural network analyzing olfactory stimuli and later applied to image recognition. The benefits of hierarchical structural analysis and adaptive resonance were then neglected for a long time. Recently, these principles were revived in capsule neural networks, which outperformed the best contemporary models of other neural network types. This motivates a systematic survey of the ways these principles have been implemented in practice. The experience of applying structural analysis and adaptive resonance to image recognition tasks in artificial neural networks was examined across the scientific and technical literature of the last half-century. The comparative analysis confirmed the efficiency of these principles in automatic image processing and identified the most effective ways to realize structural analysis and adaptive resonance in artificial neural networks. Although convolutional neural networks achieved successful image recognition results, their developers set aside the principles of structural analysis and adaptive resonance that follow from the hierarchical organization of the observable environment. A return to these principles promises further progress in image processing tasks, so continued investigation of artificial neural networks in this area is worthwhile.
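To make the "adaptive resonance" idea concrete, the following is a minimal, hypothetical sketch in the spirit of the ART-1 scheme of Grossberg and Carpenter (refs. 1–3): a bottom-up category choice is checked top-down against a vigilance criterion, and learning occurs only when the chosen template matches the input closely enough (resonance); otherwise the category is reset and the search continues. All names and the toy matching rule here are illustrative, not the exact algorithm from the cited papers.

```python
def art1_cluster(patterns, vigilance=0.7):
    """Cluster binary tuples with an ART-1-style vigilance test.

    Returns the learned category prototypes and a label per pattern.
    """
    prototypes = []  # learned top-down templates (lists of 0/1)
    labels = []
    for p in patterns:
        chosen = None
        # Bottom-up choice: rank categories by |p AND w| / (0.5 + |w|)
        order = sorted(
            range(len(prototypes)),
            key=lambda j: -sum(a & b for a, b in zip(p, prototypes[j]))
                          / (0.5 + sum(prototypes[j])),
        )
        for j in order:
            overlap = sum(a & b for a, b in zip(p, prototypes[j]))
            # Top-down vigilance test: does the template explain
            # enough of the input to declare resonance?
            if overlap / max(1, sum(p)) >= vigilance:
                # Resonance: fast learning, template := input AND template
                prototypes[j] = [a & b for a, b in zip(p, prototypes[j])]
                chosen = j
                break
            # Otherwise: mismatch reset, try the next-best category
        if chosen is None:  # no category resonates: create a new one
            prototypes.append(list(p))
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return prototypes, labels


protos, labels = art1_cluster(
    [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1)], vigilance=0.6
)
# The first two patterns resonate with one category; the third
# fails the vigilance test everywhere and founds a new category.
```

The vigilance parameter directly controls category granularity: raising it forces templates to explain more of each input, so more (finer) categories are created.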

About the Authors

V. R. Lutsiv
Saint Petersburg State University of Aerospace Instrumentation
Russian Federation

Vadim R. Lutsiv — D.Sc., Professor.

Saint Petersburg, 190000; Scopus ID: 6602625465



M. A. Mikhalkova
Pavlov Institute of Physiology of the Russian Academy of Sciences
Russian Federation

Maria A. Mikhalkova — Junior Researcher.

Saint Petersburg, 199034; Scopus ID: 57218284288



V. O. Yachnaya
Saint Petersburg State University of Aerospace Instrumentation; Pavlov Institute of Physiology of the Russian Academy of Sciences
Russian Federation

Valeria O. Yachnaya — Junior Researcher, Pavlov Institute of Physiology of the Russian Academy of Sciences; PhD Student, Saint Petersburg State University of Aerospace Instrumentation.

Saint Petersburg, 199034 and 190000; Scopus ID: 57209076316



References

1. Grossberg S. Adaptive pattern classification and universal recoding: II. Feedback, expectation, olfaction, illusions. Biological Cybernetics, 1976, vol. 23, no. 4, pp. 187–202. https://doi.org/10.1007/BF00340335

2. Carpenter G.A., Grossberg S. Category learning and adaptive pattern recognition: a neural network model. Proc. of the 3rd Army Conference on Applied Mathematics and Computing, 1986, pp. 37–56.

3. Carpenter G.A., Grossberg S. ART 2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics, 1987, vol. 26, no. 12, pp. 4919–4930. https://doi.org/10.1364/AO.26.004919

4. Rumelhart D.E. Learning Internal Representations by Error Propagation. Institute for Cognitive Science, 1985, 34 p.

5. Yamada K., Kami H., Tsukumo J., Temma T. Handwritten numeral recognition by multilayered neural network with improved learning algorithm. Proc. of the International 1989 Joint Conference on Neural Networks, 1989, pp. 259–266. https://doi.org/10.1109/IJCNN.1989.118708

6. Fukushima K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 1980, vol. 36, no. 4, pp. 193–202. https://doi.org/10.1007/bf00344251

7. Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Proc. of the 26th International Conference on Neural Information Processing Systems (NIPS'12), 2012, vol. 1, pp. 1097–1105.

8. Fukushima K. A neural network model for selective attention in visual pattern recognition. Biological Cybernetics, 1986, vol. 55, no. 1, pp. 5–15. https://doi.org/10.1007/BF00363973

9. Hopfield J.J., Tank D.W. Neural computation of decisions in optimization problems. Biological Cybernetics, 1985, vol. 52, no. 3, pp. 141–152. https://doi.org/10.1007/bf00339943

10. Lutsiv V.R., Novikova T.A. On the use of neurocomputer for stereoimage processing. Pattern Recognition and Image Analysis. Advances in Mathematical Theory and Applications, 1992, vol. 2, no. 4, pp. 441–444.

11. Danilov E.P., Lutciv V.R., Novikova T.A., Malyshev I.A. Development of methods and algorithms for the implementation of intelligent computing devices using neural networks for the purposes of image analysis and comparison problems. Statement of work no. 22201-007-94, Code “Neural network”. St. Petersburg, Open Joint Stock Company «S. I. Vavilov State Optical Institute», 1994, pp. 39–50. (in Russian)

12. Dolinov D.S., Zherebko A.K., Lutsiv V.R., Novikova T.A. Using artificial neural networks in image processing problems. Journal of Optical Technology, 1997, vol. 64, no. 2, pp. 112–118.

13. Lutsiv V.R., Malyshev I.A., Pepelka V.A. Automatic fusion of multiple-sensor and multiple-season images. Proceedings of SPIE, 2001, vol. 4380, pp. 174–183. https://doi.org/10.1117/12.436990

14. Lutsiv V.R., Malyshev I.A., Pepelka V.A., Potapov A.S. The Target independent algorithms for description and structural matching of aerospace photographs. Proceedings of SPIE, 2002, vol. 4741, pp. 351–362. https://doi.org/10.1117/12.478732

15. Lutsiv V. Automatic image analysis: Object-independent structured approach. Lambert Academic Publishing, 2011, 308 p. (in Russian)

16. Slagle J.R. Artificial Intelligence: The Heuristic Programming Approach. McGraw Hill, 1973, 196 p.

17. Lutsiv V.R., Malyshev I.A., Potapov A.S. Hierarchical structural matching algorithms for registration of aerospace images. Proceedings of SPIE, 2003, vol. 5238, pp. 164–175. https://doi.org/10.1117/12.511770

18. Ponomarev S., Lutsiv V., Malyshev I. Automatic structural matching of 3D image data. Proceedings of SPIE, 2015, vol. 9649, pp. 96490M. https://doi.org/10.1117/12.2194312

19. Girshick R., Iandola F., Darrell T., Malik J. Deformable part models are convolutional neural networks. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 437–446. https://doi.org/10.1109/cvpr.2015.7298641

20. Felzenszwalb P.F., Girshick R.B., McAllester D., Ramanan D. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, vol. 32, no. 9, pp. 1627–1645. https://doi.org/10.1109/tpami.2009.167

21. Yang Y., Ramanan D. Articulated human detection with flexible mixtures of parts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, vol. 35, no. 12, pp. 2878–2890. https://doi.org/10.1109/tpami.2012.261

22. Wang Y., Tran D., Liao Z. Learning hierarchical poselets for human parsing. CVPR 2011, pp. 1705–1712. https://doi.org/10.1109/cvpr.2011.5995519

23. Wan L., Eigen D., Fergus R. End-to-end integration of a convolutional network, deformable parts model and non-maximum suppression. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 851–859. https://doi.org/10.1109/cvpr.2015.7298686

24. Girshick R., Donahue J., Darrell T., Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587. https://doi.org/10.1109/CVPR.2014.81

25. Hinton G.E., Krizhevsky A., Wang S.D. Transforming auto-encoders. Lecture Notes in Computer Science, 2011, vol. 6791, pp. 44–51. https://doi.org/10.1007/978-3-642-21735-7_6

26. Sabour S., Frosst N., Hinton G.E. Dynamic routing between capsules. arXiv, 2017, arXiv:1710.09829v2. https://doi.org/10.48550/arXiv.1710.09829

27. Hinton G., Sabour S., Frosst N. Matrix capsules with EM routing. Proc. of the 6th International Conference on Learning Representations (ICLR 2018), 2018, pp. 1–15.

28. LeCun Y., Huang F.J., Bottou L. Learning methods for generic object recognition with invariance to pose and lighting. Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004, pp. 97–104. https://doi.org/10.1109/cvpr.2004.1315150

29. Kosiorek A.R., Sabour S., Teh Y.W., Hinton G.E. Stacked capsule autoencoders. arXiv, 2019, arXiv:1906.06818v2. https://doi.org/10.48550/arXiv.1906.06818

30. Lee J., Lee Y., Kim J., Kosiorek A.R., Choi S., Teh Y.W. Set Transformer: a framework for attention-based permutation-invariant neural networks. arXiv, 2019, arXiv:1810.00825v3. https://doi.org/10.48550/arXiv.1810.00825

31. Haeusser P., Plapp J., Golkov V., Aljalbout E., Cremers D. Associative deep clustering: training a classification network with no labels. Lecture Notes in Computer Science, 2019, vol. 11269, pp. 18–32. https://doi.org/10.1007/978-3-030-12939-2_2

32. Sun W., Tagliasacchi A., Deng B., Sabour S., Yazdani S., Hinton G., Yi K.M. Canonical capsules: self-supervised capsules in canonical pose. arXiv, 2021, arXiv:2012.04718v2. https://doi.org/10.48550/arXiv.2012.04718

33. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser L., Polosukhin I. Attention is all you need. arXiv, 2017, arXiv:1706.03762v5. https://doi.org/10.48550/arXiv.1706.03762

34. Parmar N., Vaswani A., Uszkoreit J., Kaiser L., Shazeer N., Ku A., Tran D. Image Transformer. arXiv, 2018, arXiv:1802.05751v3. https://doi.org/10.48550/arXiv.1802.05751

35. Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S., Uszkoreit J., Houlsby N. An image is worth 16x16 words: transformers for image recognition at scale. arXiv, 2021, arXiv:2010.11929. https://doi.org/10.48550/arXiv.2010.11929

36. Hinton G. How to represent part-whole hierarchies in a neural network. Neural Computation, 2023, vol. 35, no. 3, pp. 413–452. https://doi.org/10.1162/neco_a_01557

37. Kohonen T. Self-Organization and Associative Memories. Springer, 1984, 255 p.

38. Lutciv V.R., Mikhalkova M.A., Iachnaia V.O. Computer Vision. A Tutorial in 3 parts. Part 3: Modern Modifications of Neural Network Architectures and Methods of Image Enhancement. St. Petersburg, GUAP, 2024, 196 p. (in Russian)



For citations:


Lutsiv V.R., Mikhalkova M.A., Yachnaya V.O. Image structural analysis and adaptive resonance in artificial neural networks (Review paper). Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2025;25(2):273-285. (In Russ.) https://doi.org/10.17586/2226-1494-2025-25-2-273-285


This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2226-1494 (Print)
ISSN 2500-0373 (Online)