Detection of L0-optimized attacks via anomaly scores distribution analysis
https://doi.org/10.17586/2226-1494-2025-25-1-128-139
Abstract
The spread of artificial intelligence and machine learning is accompanied by a growing number of vulnerabilities and threats in systems implementing these technologies. Attacks based on malicious perturbations pose a significant threat to such systems. Various defenses have been developed against them, including an approach to detecting L0-optimized attacks on image processing neural networks using statistical analysis methods and an algorithm for detecting such attacks by threshold clipping. The disadvantage of the threshold clipping algorithm is the need to select the parameter value (the cutoff threshold) for each attack and to account for the specifics of the datasets, which complicates its practical application. This article describes a method for detecting L0-optimized attacks on image processing neural networks through statistical analysis of the distribution of anomaly scores. To identify the distortion inherent in L0-optimized attacks, deviations from the nearest neighbors and Mahalanobis distances are computed, and a matrix of pixel anomaly scores is calculated from their values. It is assumed that the statistical distribution of pixel anomaly scores differs between attacked and non-attacked images, as well as between the perturbations embedded by different attacks. Attacks can therefore be detected by analyzing the statistical characteristics of the anomaly score distribution. These characteristics are used as predictors for training anomaly detection and image classification models. The method was tested on the CIFAR-10, MNIST, and ImageNet datasets and demonstrated high attack detection and classification quality. On the CIFAR-10 dataset, the accuracy of detecting attacks (anomalies) was 98.43 %, while binary and multiclass classification reached 99.51 % and 99.07 %, respectively.
Although the accuracy of anomaly detection is lower than that of multiclass classification, anomaly detection can be used to identify attacks that are similar in principle to known ones but are not contained in the training sample. Only the input data are used to detect and classify attacks, so the proposed method can potentially be applied regardless of the model architecture or the availability of the target neural network. The method can also be applied to detect images distorted by L0-optimized attacks within a training sample.
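The pipeline described in the abstract (per-pixel anomaly scores derived from nearest-neighbor deviations and Mahalanobis distances, whose distribution statistics then serve as predictors) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the 4-connected neighborhood, the multiplicative combination of the two cues, and the particular summary statistics are assumptions made for the sketch.

```python
import numpy as np

def neighbor_deviation(img):
    # img: (H, W, C) float array. Deviation of each pixel from the mean of
    # its 4-connected neighbors (edge pixels reuse their border values).
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.linalg.norm(img - neigh, axis=-1)            # (H, W)

def mahalanobis_map(img, eps=1e-6):
    # Mahalanobis distance of each pixel's color vector from the
    # image-wide color distribution (eps regularizes the covariance).
    H, W, C = img.shape
    x = img.reshape(-1, C)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + eps * np.eye(C)
    inv = np.linalg.inv(cov)
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d)).reshape(H, W)

def anomaly_features(img):
    # Combine both cues into a pixel anomaly score matrix, then summarize
    # its distribution; the summaries act as predictors for a downstream
    # anomaly detector or classifier.
    score = neighbor_deviation(img) * mahalanobis_map(img)
    s = score.ravel()
    mean, std = s.mean(), s.std()
    z = (s - mean) / (std + 1e-12)
    return {"mean": mean, "std": std,
            "skew": (z ** 3).mean(), "kurtosis": (z ** 4).mean(),
            "max": s.max()}
```

An L0-optimized perturbation changes few pixels but changes them strongly, so it shows up as a heavy right tail in the score distribution, e.g. a sharply larger maximum and kurtosis than for a clean image.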
About the Authors
D. A. Esipov
Russian Federation
Dmitry A. Esipov — Assistant
Saint Petersburg, 197101
M. I. Basov
Russian Federation
Mark I. Basov — Student
Saint Petersburg, 197101
A. D. Kletenkova
Russian Federation
Alyona D. Kletenkova — Student
Saint Petersburg, 197101
For citations:
Esipov D.A., Basov M.I., Kletenkova A.D. Detection of L0-optimized attacks via anomaly scores distribution analysis. Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2025;25(1):128-139. https://doi.org/10.17586/2226-1494-2025-25-1-128-139