
Scientific and Technical Journal of Information Technologies, Mechanics and Optics


Analysis of the vulnerability of YOLO neural network models to the Fast Gradient Sign Method attack

https://doi.org/10.17586/2226-1494-2024-24-6-1066-1070

Abstract

The paper presents an analysis of formalized conditions for creating universal images, known as adversarial examples, that are falsely classified by computer vision algorithms, applied to YOLO neural network models. A pattern of successful creation of a universal destructive image with the Fast Gradient Sign Method (FGSM) attack, depending on the generated dataset on which the neural networks were trained, is identified and studied. This pattern is demonstrated for the YOLOv8, YOLOv9, YOLOv10, and YOLO11 classifier models trained on the standard COCO dataset.
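
For context, the FGSM attack referenced in the abstract perturbs an input image in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x L(f(x), y)). The sketch below is illustrative only, assuming a differentiable PyTorch image classifier; the function fgsm_perturb, the epsilon value, and all other names are hypothetical and do not reproduce the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    # Fast Gradient Sign Method: x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()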

About the Authors

N. V. Teterev
Saint Petersburg Electrotechnical University “LETI”; Research & Engineering Center JSC “R&EC ETU”
Russian Federation

Nikolai V. Teterev - Junior Researcher, Saint Petersburg, 197022; Engineer, Saint Petersburg, 194021



V. E. Trifonov
Saint Petersburg Electrotechnical University “LETI”
Russian Federation

Vladislav E. Trifonov - Junior Researcher, Saint Petersburg, 197022



A. B. Levina
Saint Petersburg Electrotechnical University “LETI”
Russian Federation

Alla B. Levina - PhD (Physics & Mathematics), Associate Professor, Saint Petersburg, 197022






For citations:


Teterev N.V., Trifonov V.E., Levina A.B. Analysis of the vulnerability of YOLO neural network models to the Fast Gradient Sign Method attack. Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2024;24(6):1066-1070. (In Russ.) https://doi.org/10.17586/2226-1494-2024-24-6-1066-1070



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2226-1494 (Print)
ISSN 2500-0373 (Online)