OPTICAL ENGINEERING
The article addresses the problem of controlling the growth of thallium halide monocrystals by the Bridgman-Stockbarger technique. The importance of maintaining a stable temperature gradient in the crystallization zone, which directly affects the quality of the final monocrystal, is established. The use of machine vision techniques to determine the position of the melt-crystal interface and subsequently to control the temperature regime automatically is proposed and scientifically justified. To control the temperature gradient automatically, it is suggested to utilize an algorithm that relies on visual tracking of the crystallization front. This front is identified using machine vision techniques, which allow calculating the corrective action on the upper heating zone of the apparatus. A brief overview of the main steps of the algorithm is provided, and a flowchart illustrating the process is included. Using the example of one iteration of the production cycle, the dynamics over time of the height of the melt-crystal interface and the temperature of the upper furnace are analyzed. The compliance of the product obtained at the pilot apparatus with the accepted technical conditions confirms the effectiveness of the proposed approach in stabilizing the temperature profile. The developed algorithm eliminates manual parameter control at each apparatus, providing opportunities for horizontal scaling of production. It demonstrates advantages over traditional control methods, increasing the repeatability and quality of grown monocrystals, and can be used in the design and modernization of Bridgman-Stockbarger apparatuses. The main limitation of the proposed approach is that it can only be applied to growth processes of monocrystals with specific coloration.
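A minimal sketch of the kind of vision-driven correction loop described above; the function names, gain, and setpoints are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of a vision-based temperature correction loop for a
# Bridgman-Stockbarger furnace. All function names, gains and setpoints are
# hypothetical; the paper's actual algorithm and hardware interface differ.

TARGET_HEIGHT_MM = 42.0      # desired melt-crystal interface position (assumed)
KP = 0.8                     # proportional gain, deg C per mm of deviation (assumed)
T_UPPER_MIN, T_UPPER_MAX = 480.0, 560.0   # safety limits for the upper zone (assumed)

def control_step(detect_interface_height, read_upper_temp, set_upper_temp):
    """One iteration: measure the interface position, correct the upper-zone setpoint."""
    height_mm = detect_interface_height()          # machine-vision estimate of the front
    error_mm = TARGET_HEIGHT_MM - height_mm
    correction = KP * error_mm                     # proportional corrective action
    new_setpoint = read_upper_temp() + correction
    new_setpoint = min(max(new_setpoint, T_UPPER_MIN), T_UPPER_MAX)
    set_upper_temp(new_setpoint)
    return height_mm, new_setpoint
```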
This paper is devoted to the study of optical chiral cylindrical waveguides from the point of view of their application in optical spintronics. In the paper, it is proposed to use a chiral optical cylindrical waveguide as an optical spin diode. The mode structure of the waveguide under consideration is calculated and the dispersion equation for fundamental modes of the waveguide with an azimuthal number m = ±1 is numerically solved for various values of the chirality parameter of the waveguide material. Expressions for the energy flux and the optical spin current inside the waveguide are derived. It is shown that in the single-mode regime, the direction of the optical spin currents in the waveguide is determined exclusively by the sign of the chirality parameter of the waveguide material, regardless of the azimuthal number and the direction of mode propagation. Due to this, the superposition of m = 1 and m = –1 modes propagating in opposite directions will have a zero energy flux, but a nonzero optical spin current. Our results expand the element base of optical spintronics and open up new ways for creating energy-efficient optical computing systems.
The production of optical components with a large radius of spherical surfaces requires exceptionally high surface profile accuracy. Minor deviations in the positioning of the cutting tool caused by factors such as mechanical backlash, thermal deformation, and incorrect tool positioning can result in dimensional errors of the machined surface, particularly in the form of protrusions that indicate processing defects. Despite a wide range of studies focused on tool wear and general machining errors, insufficient attention has been given to the geometric modeling and correction of defects caused by tool positioning errors. This study presents a comprehensive approach to geometrically modeling the impact of cutting tool positioning errors on the machined surface profile. A mathematical model has been developed to describe the interaction between the tool and the spherical surface, enabling precise estimation of the radial machining error. Based on these data, a new error compensation method is proposed, allowing errors to be corrected by modifying the tool movement trajectory. The proposed model accurately predicts the formation and characteristics of protrusions resulting from tool displacement during the machining of spherical surfaces with a large radius. Implementation of the compensation method significantly reduces the defect rate, improves geometric accuracy, and decreases the need for additional processing. Addressing defects caused by positioning errors enables the proposal of a new method that has not previously been considered in precision machining research. The proposed model and tool positioning error compensation method offer an effective and practical solution for improving the surface profile accuracy of optical components, thereby enhancing the precision and efficiency of manufacturing processes. The proposed method contributes to the advancement of high-precision optical component manufacturing with minimal post-processing costs, providing a novel approach in the fields of instrument engineering and precision mechanical engineering.
COMPUTER SCIENCE
The possibility of increasing the accuracy and efficiency of using infrared spectra of thermodynamic inhibitors to control their composition and to calculate the dosage required for preventing hydrate formation in the oil and gas industry has been studied. The proposed method consists of determining the amount of inhibitor for the studied “gas-water” system and the magnitude of the decrease in the temperature of the onset of hydrate formation. The relevance of the work, and its novelty in comparison with the traditional experimental approach, lies in the possibility of qualitative and quantitative identification of up to nine components in the composition of the thermodynamic inhibitor while reducing the time required for calculations. To solve the problem of determining the concentration of substances, Fourier-transform infrared spectrometry is used. The infrared spectra of the solutions were measured in attenuated total internal reflection mode. To improve the accuracy of measuring the concentration of substances from the infrared spectrum when many components with similar chemical structure are present, the use of a regression neural network is proposed. The training sample included infrared spectra of pure substances, two-component and three-component mixed aqueous solutions (water + alcohol + glycol), as well as a number of four-component solutions (glycols + water). The obtained data on the composition of the inhibitor were then used to calculate its dosage to prevent hydrate formation under specified conditions. The capability of the trained neural network to determine the concentrations of up to nine substances with similar properties in the composition of thermodynamic hydrate formation inhibitors has been demonstrated: methanol, ethanol, propanol, monoethylene glycol, diethylene glycol, triethylene glycol, propylene glycol, glycerol. It has been shown that the use of the neural network ensures the accuracy of concentration determination up to 2 % vol. Testing of the proposed method for processing the results of composition control and determining the dosage of the thermodynamic inhibitor for suppressing the hydrate formation process has shown good agreement with the results of the traditionally used method. The proposed approach allows increasing the efficiency of inhibitor dosage selection. The results of the work can be used in oilfield chemistry for incoming control and for forecasting the efficiency of using thermodynamic type hydrate inhibitors during the extraction, preparation, or transportation of hydrocarbon raw materials.
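A minimal sketch of the regression step, assuming the spectra are available as absorbance vectors; the network architecture, the data shapes, and the random stand-in data are illustrative only, not the trained model of the paper.

```python
# Sketch: multi-output regression from ATR-FTIR spectra to component
# concentrations. The architecture and data here are placeholders; the
# paper's network and training set are not reproduced.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers, n_components = 300, 1800, 9   # assumed dimensions
X = rng.random((n_samples, n_wavenumbers))              # absorbance spectra (stand-in)
y = rng.random((n_samples, n_components))               # component concentrations (stand-in)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("mean absolute concentration error:",
      np.abs(model.predict(X_test) - y_test).mean())
```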
An analysis of multisensor data obtained from an electromyograph, inertial measurement devices, a computer-vision system, and virtual-reality trackers was performed in order to solve the problem of classifying human motor activity. The relevance of this problem is determined by the necessity of analyzing and recognizing human motor activity when using various hardware and software complexes, for example, rehabilitation and training systems. To recognize the type of hand movements with the highest accuracy, the contribution of each signal source is evaluated, and a comparison of various machine-learning models is performed. The approach to processing multisensor data includes: synchronized acquisition of streams from different sources; labeling of the initial data; signal filtering; dual alignment of time series by frequency and duration with approximation to a common constant; formation of a common dataset; training and selection of a machine-learning model for recognizing motor activity of the hands. Nine machine-learning models are considered: logistic regression, k-nearest neighbors, naïve Bayes classifier, decision tree, and ensembles based on them (Random Forest, AdaBoost, Extreme Gradient Boosting, Voting, and Stacking Classifier). The developed approach to synchronization, filtering, and dual alignment of data streams makes it possible to form a unified dataset of multisensor data for model training. An experiment was carried out on the classification of nine categories of hand movements based on the analysis of multisensor data (629 recordings collected from 15 participants). Training was performed on 80 % of the collected data with five-fold cross-validation. The AdaBoost ensemble provides a classification accuracy of 98.8 % on the dataset composed of the combined information from four different sources. In the course of ablation analysis comparing the data sources, the greatest influence on the final classification accuracy is exerted by information from virtual-reality trackers (up to 98.73 % ± 1.78 % accuracy with the AdaBoost model), while data on muscle activity from the electromyograph turned out to be the least informative. It was determined that high classification accuracy of motor activity can be obtained using inertial measurement devices. The study formalizes a reproducible approach to processing multisensor data and makes it possible to objectively compare the contribution of different sources of information and machine-learning models in solving the problem of classifying the motor activity of the user’s hands within rehabilitation and virtual training systems. It is shown that, under resource limitations, some of the data sources can be discarded without significant loss of classification accuracy, simplifying the hardware configuration of tracking systems and making it possible to move from closed commercial systems (virtual-reality trackers) to more accessible and compact inertial measurement devices.
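A simplified sketch of the alignment-and-classification pipeline under stated assumptions: streams of unequal length are interpolated to a common length, concatenated, and passed to AdaBoost; the shapes, channel count, and labels are random stand-ins, not the collected dataset.

```python
# Sketch: align streams of different rate/length to a common fixed length and
# feed the concatenated features to an AdaBoost classifier. Illustrative only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

COMMON_LEN = 128  # common constant length after dual alignment (assumed)

def align(stream, common_len=COMMON_LEN):
    """Resample a 1-D signal to a fixed number of samples by linear interpolation."""
    t_old = np.linspace(0.0, 1.0, len(stream))
    t_new = np.linspace(0.0, 1.0, common_len)
    return np.interp(t_new, t_old, stream)

rng = np.random.default_rng(0)
n_recordings = 629
# stand-ins for EMG, IMU, vision and VR-tracker channels of unequal length
recordings = [[rng.random(rng.integers(200, 900)) for _ in range(4)]
              for _ in range(n_recordings)]
labels = rng.integers(0, 9, size=n_recordings)          # nine movement classes

X = np.array([np.concatenate([align(ch) for ch in rec]) for rec in recordings])
clf = AdaBoostClassifier(random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```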
Machine Learning (ML) and Artificial Intelligence (AI) methods are used to process and intelligently analyze medical data. The application of ML/AI methods requires specialized sets of labeled medical data of large dimensions. Organizing a high-quality medical data labeling process requires the involvement of a large number of assessors and specialists in a particular field of medicine as well as the availability of specialized tools for optimizing the labeling process, considering the specifics of medical data processing. In this paper, a universal architectural model of a crowdsourcing system specifically designed for medical data labeling is proposed. The model supports processing of diverse medical data formats, incorporates data anonymization mechanisms and multi-level quality control, while enabling a distributed annotation process with expert community involvement. As a result, the current problems of medical data collection and labeling were classified, and quality and safety criteria for the comparative analysis of medical data labeling systems were formulated. A generalized scenario of the interaction of user groups with a crowdsourcing system in the context of solving AI problems in the field of medicine is proposed. A universal model of such a system architecture was designed, and a specialized crowdsourcing system for medical data labeling based on the Computer Vision Annotation Tool was implemented on its basis. Testing and approbation of the implemented system were carried out at the Pirogov Clinic of High Medical Technologies. The proposed universal model of crowdsourcing system architecture can be used to improve the efficiency and safety of organizing the process of labeling patients’ medical data in the context of solving various applied ML/AI tasks, such as semantic segmentation of internal organs and their pathologies, and detection and classification of diseases based on medical images (e.g., computed tomography scans). The developed solution can be used by doctors of various specializations, researchers, and developers aimed at the development and creation of AI methods and technologies in the field of medicine.
This paper presents the development and evaluation of a method for solving the Multiple Traveling Salesman Problem (mTSP), with the objective of minimizing the maximum route length (“minimax” optimization). The study addresses the combinatorial route-space arising from distributing cities among multiple agents, requiring balanced workload distribution to avoid overloading individual routes. The novelty of the proposed approach lies in creating a discrete analogue of the classical Particle Swarm Optimization (PSO) algorithm adapted specifically for permutation-based route representations, and integrating it with local heuristic procedures and the Ant Colony Optimization (ACO) algorithm. The proposed method transforms the original mTSP into a classical single-agent Traveling Salesman Problem (TSP) by introducing artificial (dummy) depots, thus allowing an unambiguous separation of the overall route into individual segments for each agent. A key element of the solution involves adapting the PSO algorithm through novel discrete operations, such as computing the minimal sequence of exchanges (transpositions) between permutations, scaling velocity, and applying this velocity to routes. This approach ensures efficient exploration of the combinatorial solution space and prevents premature convergence of the algorithm. The experimental study was conducted on benchmark instances from the TSPLIB library (eil51.tsp, berlin52.tsp, eil76.tsp, rat99.tsp) for the TSP, comparing two scenarios: a classical PSO with random initialization and a hybrid PSO_ACO method where the ACO algorithm is used to generate the initial population. The results demonstrate a significant improvement in the minimax criterion compared to CPLEX, LKH3, OR-Tools as well as state-of-the-art approaches DAN, NCE, and EA, confirming the effectiveness of the proposed solution. The practical importance of this research lies in potential applications of the developed algorithm in logistics, transport planning, network traffic management, and other domains where optimal resource allocation is crucial. The proposed method is valuable for specialists in optimization, algorithmic modeling, and practitioners developing planning and management systems.
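As an illustration of the discrete PSO operations described above, a minimal sketch in which a "velocity" is the minimal sequence of transpositions turning one permutation into another and can be truncated (scaled) before being applied to a route; the names and the scaling rule are illustrative, not the authors' exact implementation.

```python
# Sketch of the discrete PSO ingredients: minimal swap sequence between
# permutations (velocity), velocity scaling, and application to a route.
import random

def swap_sequence(perm, target):
    """Minimal list of index swaps transforming perm into target."""
    p = list(perm)
    pos = {city: i for i, city in enumerate(p)}
    swaps = []
    for i, city in enumerate(target):
        if p[i] != city:
            j = pos[city]
            swaps.append((i, j))
            pos[p[i]], pos[p[j]] = j, i
            p[i], p[j] = p[j], p[i]
    return swaps

def apply_velocity(perm, swaps, scale=1.0):
    """Apply a fraction of the swap sequence to a permutation."""
    p = list(perm)
    for i, j in swaps[: int(round(scale * len(swaps)))]:
        p[i], p[j] = p[j], p[i]
    return p

route = list(range(8))
best = random.Random(0).sample(route, len(route))
v = swap_sequence(route, best)
print(apply_velocity(route, v, scale=0.5))   # moves the route halfway toward best
```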
The computational demands of shortest path algorithms on large-scale graphs with millions of vertices and edges pose significant challenges for serial implementations, often requiring hours of execution time even on powerful CPUs. This paper evaluates Graphics Processing Unit (GPU) implementations of three fundamental shortest path algorithms (Bellman-Ford, Dijkstra, and Floyd-Warshall) using the NVIDIA CUDA platform. We implemented and compared multiple variants of each algorithm, starting with basic parallel approaches and applying various optimization techniques, including grid-stride loops, shared memory utilization, memory coalescing, and algorithm-specific enhancements such as flag-based early termination for Bellman-Ford and tiled computation for Floyd-Warshall. Our study provides a performance analysis comparing different optimization strategies and their effectiveness across various graph datasets.
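Since the CUDA kernels themselves are not reproduced here, the following serial Python sketch only illustrates the flag-based early-termination idea for Bellman-Ford; in the GPU version the inner edge loop is distributed across threads and the flag lives in device memory.

```python
# Serial sketch of flag-based early termination for Bellman-Ford: each pass
# relaxes every edge (the work a GPU kernel would distribute across threads)
# and sets a flag when any distance changes; the algorithm stops as soon as a
# full pass makes no update. Illustrative, not the CUDA code.
INF = float("inf")

def bellman_ford(n_vertices, edges, source):
    """edges: list of (u, v, w) triples; returns distances or None on a negative cycle."""
    dist = [INF] * n_vertices
    dist[source] = 0.0
    for _ in range(n_vertices - 1):
        changed = False                      # the device-side flag in the GPU version
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                      # early termination: fixed point reached
            return dist
    # one extra pass to detect negative cycles
    if any(dist[u] + w < dist[v] for u, v, w in edges):
        return None
    return dist

print(bellman_ford(4, [(0, 1, 2.0), (1, 2, -1.0), (0, 2, 4.0), (2, 3, 1.0)], 0))
```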
In the context of the Industrial Internet of Things (IIoT), cybersecurity refers to protecting interconnected devices, networks, and data from unauthorized access, attacks, and vulnerabilities. Given the inherent interconnectedness of IIoT devices, ensuring security is of paramount importance to mitigate potential disruptions, data breaches, and malicious activities. As IIoT systems continue to proliferate, the significance of robust security measures, effective intrusion detection, and intelligent detection techniques escalates to safeguard critical infrastructure and sensitive data from cyber threats. This work aims to contribute towards establishing a secure and resilient industrial environment through the utilization of a hybrid model, a Convolutional Neural Network combined with a Deep Neural Network, accommodating distinct class distributions. The recent “Edge IIoTset” dataset is harnessed to enhance the model efficacy. Throughout the evaluation process, diverse metrics are employed, encompassing Accuracy, Precision, Recall, and the F1-score. By applying thorough preprocessing and using various class distribution scenarios (2, 6, 9, 10, and 15 classes), the model achieved excellent classification results. Notably, the 9-class configuration reached an Accuracy of 99.13 %, while the 6-class and 10-class setups also delivered strong performance at 97.13 % and 96.11 %, respectively. Our architecture effectively combines feature extraction and deep classification layers, resulting in a robust solution adaptable to complex IIoT traffic.
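A minimal sketch of a CNN + DNN hybrid of the kind described, using the Keras API; the layer sizes, input width (N_FEATURES), and class count are assumptions rather than the architecture reported in the paper.

```python
# Sketch of a CNN + DNN hybrid over tabular traffic features: 1-D convolutions
# extract local feature patterns, dense layers perform the classification.
# Layer sizes, class count and input width are assumptions.
from tensorflow.keras import layers, models

N_FEATURES = 61      # number of preprocessed traffic features (assumed)
N_CLASSES = 15       # one of the studied class configurations

model = models.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train[..., None], y_train, validation_split=0.1, epochs=20)
```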
Hidden Markov Models (HMMs) trained to identify binding regions in peptide sequences have demonstrated the ability to uncover shared amino acid patterns in peptides bound to major histocompatibility complex molecules. In this work, we present an enhanced approach for predicting peptide binding using an ensemble of HMMs. Building on a previously proposed method, we extend it to a classification setting by incorporating both binding (positive) and non-binding (negative) peptide sequences. Our strategy involves training two sets of models on these distinct datasets and selecting ensemble members based on conditional probability estimates. The method was evaluated across six alleles of the major histocompatibility complex using two model architectures: a simplified architecture with 9 states representing the peptide binding core region and two cycle-states for the amino acids outside this region, and an extended architecture in which each cycle-state was replaced by 9 additional states. The models were evaluated in comparison with the state-of-the-art MixMHC2pred predictor. The results show a statistically significant improvement in prediction accuracy. Notably, incorporating non-binding peptides during training improved performance in several cases, highlighting the importance of background sequence information in distinguishing binding-specific patterns.
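A schematic of the classification step, under the assumption that each trained HMM exposes a log-likelihood score(sequence) method (as, for example, hmmlearn models do); the actual training and ensemble-selection procedure of the paper is more involved.

```python
# Generic sketch of classifying a peptide with two HMM ensembles: one trained
# on binders, one on non-binders. The score(sequence) interface is an assumed
# placeholder for a pre-trained model's log-likelihood.
import numpy as np

def ensemble_loglik(models, sequence):
    """Average log-likelihood of a sequence under an ensemble of HMMs."""
    return float(np.mean([m.score(sequence) for m in models]))

def predict_binding(pos_models, neg_models, sequence, threshold=0.0):
    """Return True if the binder ensemble explains the sequence better."""
    llr = ensemble_loglik(pos_models, sequence) - ensemble_loglik(neg_models, sequence)
    return llr > threshold, llr
```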
The modern approach to searching textual and multimodal data in large collections involves the transformation of the documents into vector embeddings. To store these embeddings efficiently, different approaches can be used, such as quantization, which results in loss of precision and reduction of search accuracy. Previously, a method was proposed that reduces the loss of precision during quantization. In that method, clustering of the embeddings with the k-Means algorithm is performed, then a bias, or delta, being the difference between the cluster centroid and the vector embedding, is computed, and then only this delta is quantized. In this article, a modification of that method is proposed, with a different clustering algorithm, an ensemble of Oblivious Decision Trees. The essence of the method lies in training an ensemble of binary Oblivious Decision Trees. This ensemble is used to compute a hash for each of the original vectors, and the vectors with the same hash are considered as belonging to the same cluster. If the resulting cluster count is too large or too small for the dataset, a reclustering process is also performed. Each cluster is then stored using two different files: the first file contains the per-vector biases, or deltas, and the second file contains identifiers and the positions of the data in the first file. The data in the first file is quantized and then compressed with a general-purpose compression algorithm. The usage of Oblivious Decision Trees allows us to reduce the size of the storage compared to the same storage organization with k-Means clustering. The proposed clustering method was tested on the Fashion-MNIST-784-Euclidean and NYT-256-angular datasets against k-Means clustering. The proposed method demonstrates a better compression quality compared to clustering via k-Means, with up to 4.7 % less storage size for NF4 quantization with the Brotli compression algorithm. For other compression algorithms, the storage size reduction is less noticeable. However, the proposed clustering algorithm yields a larger error value compared to k-Means, up to 16 % in the worst-case scenario. Compared to Parquet, the proposed clustering method demonstrates a smaller error value for the Fashion-MNIST-784-Euclidean dataset when using the FP8 and NF4 quantizations. For the NYT-256-angular dataset, compared to Parquet, the proposed method allows better compression for all tested quantization types. These results suggest that the proposed clustering method can be utilized not only for nearest neighbor search applications, but also for compression tasks when the increase in the quantization error can be ignored.
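A simplified sketch of the hashing-plus-delta idea: an "oblivious" split set assigns every vector a hash, and only the quantized residual from the group centroid is stored. The depth, split choice, and int8 quantizer are illustrative stand-ins for the pipeline described above.

```python
# Sketch: hash embeddings with an "oblivious" split set (every level applies
# the same feature/threshold test to all vectors), group vectors by hash, and
# store only the quantized delta from the group centroid.
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 64)).astype(np.float32)

# an oblivious "tree" of depth d is just d (feature, threshold) pairs
depth = 6
features = rng.integers(0, vectors.shape[1], size=depth)
thresholds = np.median(vectors[:, features], axis=0)

bits = (vectors[:, features] > thresholds).astype(np.uint8)      # (n, depth)
hashes = bits @ (1 << np.arange(depth))                          # cluster id per vector

def quantize_int8(x, scale):
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

storage = {}
for h in np.unique(hashes):
    group = vectors[hashes == h]
    centroid = group.mean(axis=0)
    deltas = group - centroid                                    # small residuals
    scale = max(np.abs(deltas).max() / 127.0, 1e-8)
    storage[int(h)] = (centroid, scale, quantize_int8(deltas, scale))
```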
The widespread adoption of Kubernetes as a platform for orchestrating containerized applications has heightened the need for effective security mechanisms, particularly to counter Denial-of-Service (DoS) attacks. This article proposes an approach to DoS attack detection based on two key components: the use of comprehensive metrics and the application of ensemble Machine Learning models. The approach involves the collection and analysis of comprehensive metrics from node-level (CPU, memory) and application-level (network activity, file descriptors) data from containers running on various frameworks (Flask, Django, FastAPI, Node.js, Golang). To implement this approach, a dataset containing 49,990 instances of network activity, characterized by 28 features (comprehensive metrics), was created. Statistical analysis (Student’s t-test, Pearson correlation) identified the metrics most relevant for attack detection, including total CPU time (cpu_sec_total) and resident memory usage (resident_memory_total). A comparison of nine Machine Learning models for attack detection was conducted, including ensemble methods (Random Forest, XGBoost, LightGBM) which demonstrated the highest effectiveness, achieving 100 % accuracy (F1-score equals 1.0) and perfect class separation (AUC equals 1.0). The XGBoost model also eliminated false positives (precision equals 1.0). Feature importance analysis revealed the most significant metrics for classification: CPU usage (cpu_sec_total, cpu_sec_idle), network packet transmission (transmit_packets), system load average, and memory usage (virtual_memory_total, resident_memory_total). The work emphasizes the importance of integrating multi-level metrics for building resilient anomaly detection systems. The proposed approach is scalable and independent of specific frameworks, making it applicable for protecting containerized environments. The research results serve as a foundation for developing proactive Kubernetes security systems capable of countering sophisticated attack vectors.
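A minimal sketch of the classification stage, assuming the metrics have already been exported to a table with the column names quoted above; the data frame here is a random stand-in, and the Random Forest stands in for the compared ensemble models.

```python
# Sketch: train an ensemble classifier on node- and application-level metrics
# and inspect feature importances. The metric names follow the abstract; the
# data is a random stand-in, not the 49,990-instance dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
cols = ["cpu_sec_total", "cpu_sec_idle", "resident_memory_total",
        "virtual_memory_total", "transmit_packets", "load_average"]
X = pd.DataFrame(rng.random((2000, len(cols))), columns=cols)
y = rng.integers(0, 2, size=len(X))                  # 0 = benign, 1 = DoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
print(pd.Series(clf.feature_importances_, index=cols).sort_values(ascending=False))
```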
In the Internet of Things (IoT), Low Power Wide Area Network (LPWAN) technologies have been attracting considerable attention. The Long Range Wide Area Network (LoRaWAN) standard was created by the LoRa Alliance as an open standard operating over the unlicensed band. Its advantages include a large coverage area, low power consumption, and inexpensive transceiver chips. The LoRaWAN standard encrypts traffic with a 128-bit symmetric algorithm, the Advanced Encryption Standard (AES). This secures communication and authenticates entities, which is beneficial for resource-constrained IoT devices requiring efficient communication and security. The security problems with LoRa networks and devices remain an important challenge considering the technology's large-scale deployment for numerous applications. Even though the LoRaWAN network architecture and security have been enhanced by the LoRa Alliance, the most recent version still has some weaknesses, such as its susceptibility to attacks. Many studies and researchers have indicated that LoRaWAN versions 1.0 and 1.1 have security risks and vulnerabilities. This research proposes a method to construct and integrate cryptographic algorithms (AES-128) within the widely utilized network simulator NS-3. This module aims to increase the security of data in LoRa networks by protecting critical information from unauthorized access. Consequently, implementing the AES-128 encryption algorithm within the NS-3 simulator will benefit the scientific community greatly. It will enable an examination of the impact of various security measures on network performance metrics, including latency, overhead, energy consumption, throughput, and packet size.
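For orientation, a conceptual Python sketch (not the proposed NS-3 C++ module) of LoRaWAN-style payload encryption: AES-128 in ECB mode encrypts counter blocks to form a keystream that is XORed with the payload. The A-block layout below is simplified relative to the specification, and the key, device address, and payload are arbitrary example values.

```python
# Conceptual illustration only: LoRaWAN-style payload encryption builds a
# keystream by AES-128-encrypting counter blocks and XORing it with the
# payload. Requires the pycryptodome package.
from Crypto.Cipher import AES

def encrypt_payload(app_s_key: bytes, dev_addr: int, fcnt: int,
                    payload: bytes, uplink: bool = True) -> bytes:
    aes = AES.new(app_s_key, AES.MODE_ECB)
    out = bytearray()
    for i in range(0, len(payload), 16):
        block_index = i // 16 + 1
        # simplified A_i block: constant, direction, DevAddr, frame counter, index
        a_block = (b"\x01" + b"\x00" * 4 + bytes([0 if uplink else 1]) +
                   dev_addr.to_bytes(4, "little") + fcnt.to_bytes(4, "little") +
                   b"\x00" + bytes([block_index]))
        keystream = aes.encrypt(a_block)
        chunk = payload[i:i + 16]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

ciphertext = encrypt_payload(bytes(16), 0x26011BDA, 1, b"temperature=21.5")
print(ciphertext.hex())
```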
MODELING AND SIMULATION
When building drone navigation systems, the main requirements for them are autonomy, accuracy, and miniaturization. Drone navigation autonomy can be achieved using a strapdown inertial navigation system (SINS), but its disadvantage is that the accuracy of solving the navigation problem deteriorates over time. To correct SINS errors, it is integrated with various non-inertial navigation systems, among which one of the most promising in terms of meeting the above requirements is the optical flow measurement navigation system. However, in its traditional use, only the components of the linear and angular velocities of the unmanned aerial vehicle are determined. Such a determination of velocities is only part of the overall navigation task and does not allow it to be solved as a whole. In this regard, the article considers an approach that combines the capabilities of a strapdown inertial navigation system, which provides a solution to the navigation problem as a whole, and an optical flow navigation system, which allows autonomous monitoring of linear and angular motion parameters with minimal hardware costs. The proposed solution to the drone autonomous navigation problem is based on the tightly coupled integration of the SINS and an optical flow navigation system using stochastic nonlinear filtering methods. The synthesis of the navigation algorithm is based on forming the equations of the estimated vector of navigation parameters from inertial measurements and the equations of its observer from optical flow measurements, followed by the implementation of a single navigation filter based on them, taking into account the discrete nature of the measurements used. To estimate the full vector of motion parameters of the drone from the measurements of the integrated inertial-optical navigation system, a modified extended discrete Kalman filter for correlated object and observer noise was used. The proposed approach was tested in a numerical experiment during which the spatial and angular motion of a medium-speed drone was modeled with the simultaneous formation of noisy measurements of its motion parameters. The measurement interference level was selected according to the interference level of mid-range inertial and optical sensors. The algorithm for estimating the vector of navigation parameters of the drone is implemented on the basis of the proposed modified extended discrete Kalman filter. The obtained error values for estimating all drone motion parameters have shown that it is possible to meet the accuracy requirements of not only modern but also promising autonomous navigation systems. The tightly coupled integration of inertial and optical navigation systems, in terms of computational costs and accuracy of estimating motion parameters, turns out to be more effective than the traditional method of determining only the components of the linear and angular velocities of an object from the parameters of the optical flow. The main advantages of the proposed inertial-optical navigation system are autonomy and the ability to monitor all motion parameters of an unmanned aerial vehicle. The stability and accuracy of the estimation and the simplicity of the technical implementation make it possible to use the proposed solution for autonomous noise-resistant navigation of drones for various purposes.
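For reference, a generic discrete extended Kalman filter step in numpy; the paper's modification for correlated object and observer noise is not reproduced here, and f, h, F, H are placeholders for the inertial prediction and optical-flow observation models.

```python
# Generic discrete extended Kalman filter step (numpy). The modified filter of
# the paper additionally accounts for correlated process and measurement
# noise; that correction is not reproduced here.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle: returns the corrected state and covariance."""
    # predict with the inertial (strapdown) model
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # update with the optical-flow observation
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```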
Due to the increasing requirements for the performance characteristics of gyroscopes with electrostatic non-contact rotor suspension, there is a need to improve the technology for manufacturing parts and assembling the devices. The most important component of the sensitive element of an electrostatic gyroscope is a spherical beryllium rotor. The disturbing moments from the suspension forces are proportional to the voltage supplied to the electrodes and to the deviation of the rotor surface from the spherical shape. For this reason, the technology of finishing the rotor surface must ensure that the high requirements for the sphericity of the rotor are met. In the manufacture of rotors of all known types of electrostatic gyroscopes, the technology of centerless finishing with cup laps and free abrasive is used. One of the key factors influencing the resulting sphericity is the parameters of the rotor motion in the finishing machine. The article presents a mathematical model that allows one to determine the parameters of the rotor motion in the finishing machine under the action of friction forces from the rotation of the cup laps. The method of mathematical modeling was used in the work. The process of centerless finishing with cup laps is considered as a type of friction drive. The rotor motion is considered as the motion of an absolutely rigid body. To determine the motion parameters, the Euler differential equations for rotational motion are used, the solution of which is carried out numerically using the MATLAB software package. The pressure distribution in the lap-rotor pairs is treated by analogy with that in a ball joint. The result of the work is a mathematical model of the rotor motion during finishing with cup laps, which made it possible to identify the main patterns of rotor motion during centerless finishing. The model revealed that the difference in the moments of inertia of the rotor can have a significant effect on the rotor motion during processing, in particular, during polishing. The boundary conditions were determined under which the rotor motion can be permissibly considered as the motion of a ball with equal moments of inertia. The proposed model of rotor motion can be used in designing algorithms and control systems for machines for centerless finishing of spheres with free abrasive, as well as a component of mathematical and physical models describing the processing of the rotor surface by finishing with cup laps.
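For reference, the rigid-body Euler equations in the principal body axes that such a model integrates numerically, where $J_i$ are the principal moments of inertia, $\omega_i$ the angular velocity components, and $M_i$ the moments produced by the lap friction forces (notation is ours):

$$
\begin{aligned}
J_1\dot{\omega}_1 + (J_3 - J_2)\,\omega_2\,\omega_3 &= M_1,\\
J_2\dot{\omega}_2 + (J_1 - J_3)\,\omega_3\,\omega_1 &= M_2,\\
J_3\dot{\omega}_3 + (J_2 - J_1)\,\omega_1\,\omega_2 &= M_3.
\end{aligned}
$$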
The article presents the results of an experimental study of the flow structure and temperature field in a plume formed above a low-power burner flame. The pulsation and spectral characteristics of the flow at key sampling points were analyzed, which allowed us to draw a conclusion about the nature of the flow at the main points of the jet. It is proposed to use time series of changes in the point displacement field to analyze the spectral characteristics of the flow. In this work, the Background Oriented Schlieren (BOS) method was used to visualize the flow and determine temperatures, followed by post-processing in the program developed during the study. The advantage of this approach compared to the traditional optical Schlieren method is that there is no need for parabolic mirrors, as well as the ability to obtain results in digital form convenient for further processing. During the experiment, a special background with randomly located bright dots was placed behind the object of study, which was filmed by a video camera. Fluctuations in the medium density caused changes in the refractive index of the medium, as a result of which the points of the background shifted in the video frames, and the displacements of the points were proportional to the change in the refractive index, which in turn is proportional to the density gradient and, accordingly, to the temperature gradient of the medium. The displacement of the points was determined by cross-correlation analysis of each frame in comparison with frames recorded in the absence of disturbances. Then the displacement field was filtered by a median filter in order to minimize noise and statistical outliers. The filtered displacement field was used to calculate the temperature field by solving the Cauchy problem for temperature with a known derivative at a point and specified boundary conditions. A set of instantaneous point displacement fields and instantaneous and time-averaged temperature fields was obtained, which allowed us to draw conclusions about the flow structure. At characteristic points of the jet, oscillograms of the displacement value were obtained as well as pulsation spectra with an inertial interval corresponding to the “–5/3” law. The approach proposed in the work allows, in addition to the contactless study of the temperature field, the study of turbulent flow pulsations in the case of flows close to two-dimensional or axisymmetric.
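For reference, the standard chain of relations behind BOS temperature reconstruction, where the proportionality factor depends on the optical geometry, $K$ is the Gladstone-Dale constant, and $R_{\mathrm{sp}}$ the specific gas constant:

$$
\Delta y \;\propto\; \int \frac{\partial n}{\partial y}\,dz,
\qquad n - 1 = K\rho,
\qquad \rho = \frac{p}{R_{\mathrm{sp}}\,T},
$$

so the measured point displacements determine the density (and hence temperature) gradient field, from which the temperature is recovered by integration with the stated boundary conditions.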
The actual problem of acoustic diagnostics of autonomously operating industrial equipment is investigated. An overview of existing approaches to acoustic diagnostics, including methods based on convolutional neural networks and supervised learning algorithms, is provided. Their limitations have been identified, such as the need for large amounts of labeled data for training, poor adaptation to changing conditions, and the lack of a real-time decision-making mechanism. A new approach to acoustic diagnostics based on reinforcement learning methods is proposed, characterized by adaptability, high resistance to noise, and the possibility of continuous learning in a dynamic environment. The proposed method for determining the state of equipment operability is based on the study of acoustic signals emitted by operating equipment. The method includes building a neural network, selecting audio recordings from open audio file libraries, and training the network using a reinforcement learning algorithm. The process of acoustic diagnostics of the serviceability/malfunction state of industrial equipment involves four stages: real-time recording of acoustic data of working equipment, extraction of equipment condition features, reinforcement learning of a neural network, and making a decision on the serviceability or malfunction of the equipment. Based on tagged WAV audio files from open databases, an experiment was conducted to identify various states of the equipment: normal condition, initial stage of a defect, and critical malfunction. The results showed classification accuracy from 89.7 % to 98.5 % and an average response time from 0.5 to 0.7 seconds with low computing load (on average 36.5 % CPU and 509 MB RAM). Unlike the well-known acoustic diagnostic systems based on supervised learning of neural and convolutional neural networks on pre-labeled datasets containing acoustic signals emitted by running equipment, the proposed approach decomposes the initial acoustic signals into spectral components. Each of these components is analyzed and assigned features reflecting the state of serviceability or malfunction of the equipment. This approach makes it possible to: use reinforcement learning algorithms for strategic decision-making; reduce model training time by pre-selecting significant features; improve diagnostic accuracy; and reduce computational load and hardware resource requirements. The developed algorithm can be used for continuous monitoring of equipment condition and predictive maintenance in autonomously functioning industrial systems. Its use will allow reliable and timely detection and classification of industrial equipment malfunctions. The algorithm can be further refined to meet requirements for integration with IoT infrastructure, to increase resistance to external noise, and to implement more advanced reinforcement learning (RL) algorithms such as PPO.
The problem of developing a forecasting methodology for special regimes of dynamic processes is considered, using as an example the nonlinear effect occurring in the marine environment known as “rogue waves”. Rogue waves are waves that occur in the ocean, as a rule, suddenly, exist for a short period of time, and have a huge destructive potential. There are many directions in the study of this phenomenon based on the application of computer modeling and numerical methods. At the same time, there is a tendency to search for rogue waves not only in hydrodynamics but also in other subject areas in which, when constructing models of the phenomena and processes under study, the apparatus for solving the corresponding initial boundary value problems for systems of differential equations is used. As a rule, authors try to find solutions to differential equations on the basis of which it is possible to demonstrate the occurrence of abnormally high waves. It should be noted that the search for analytical solutions for some differential equations is an extremely difficult or even unsolvable task. An alternative approach is proposed that makes it possible to prove the existence of the possibility of an anomaly without the need to solve the corresponding system of differential equations. A model of a dynamic system is constructed, similar to the formalism of Koopman theory, which takes into account the asymptotic growth rate of the image of a dynamic operator in the energy space and on the basis of which an ordered hierarchy of classes of dynamic operators arises. A definition of an anomaly in the formalism of the mathematical apparatus under consideration is proposed, while the phenomenon of a rogue wave is interpreted as a special case of the occurrence of an anomalous phenomenon in a hydrodynamic system with a sufficiently high average value of the wave background. Within the framework of the proposed approach, it is possible to formulate necessary conditions for the occurrence of an anomalous phenomenon and sufficient conditions for the absence of anomalies. A time series processing method is proposed that considers the hypothesis of the frequency of occurrence of anomalous phenomena. The existence of anomalies in magnetohydrodynamic processes is demonstrated, which is proved by constructing a model of magnetic field inversion; the solution of the corresponding dispersion equation is carried out using a modification of the numerical Ivanisov-Polishchuk method consisting in combining the Ivanisov-Polishchuk algorithm and the Adam optimization method. The results obtained may be in demand for further development of the study of the structure of dynamic systems and for identifying more interdisciplinary connections that allow constructive transfer of some results from one subject area to another.
The quality of regression is determined by the choice of an approximation function that more or less accurately reflects the process which generated the data. An important class of such processes is cognitive processes of largely wave nature. Here, the corresponding wave-like calculus is used in a new method of behavioral regression. We generalize classical linear regression from real weights to complex-valued amplitudes, the moduli and phases of which encode the amplification and delay of cognitive waves. The target feature then emerges as the squared modulus of the total amplitude influence of all basis features. The obtained regression models are tested on the data of academic performance of a study group in comparison with linear regressions of the same number of parameters. When using all basis features, the accuracy of wave regression is close to the accuracy of linear models. With fewer basis features, the quality of linear regression degrades, while the performance of wave regression improves. The largest difference is observed in the triadic regime when the target feature is produced by two basis features. In this case, the error of three-parameter wave regression is 2.5 % lower than that of full linear regression with 21 parameters. This dramatic improvement is due to a special nonlinearity of wave regression, typical of pragmatic heuristics of natural thinking. This nonlinearity takes advantage of semantic correlations of features missed by classical regressions. The wave-like reduction of computational complexity opens up ways for developing more efficient and nature-like algorithms of data analysis and artificial intelligence.
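Schematically, in our notation (not necessarily the authors'), the wave regression model can be written as

$$
\hat{y} \;=\; \Bigl|\sum_{k} c_k\,x_k\Bigr|^{2},
\qquad c_k = r_k\,e^{i\varphi_k},
$$

where the modulus $r_k$ plays the role of an amplification factor and the phase $\varphi_k$ encodes the delay of the $k$-th cognitive wave; a triadic model with two basis features then has three effective parameters, the two moduli and the relative phase.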
Container virtualization technology is increasingly being used in the development of fault-tolerant clusters with high availability and low request processing latency. In designing highly reliable clusters, a key task is structural-parametric model-oriented synthesis, which takes into account the impact of the number of deployed containers on performance, request processing latency, and system reliability. Justifying the choice of solutions to ensure high cluster reliability currently requires the development of reliability models for recoverable container virtualization clusters during reconfiguration, considering the migration of virtual containers. The basis for decisions to ensure high cluster availability is the development of models for a recoverable cluster during reconfiguration, taking into account the migration of virtual containers. The novelty of the proposed Markov model of a cluster lies in considering a two-stage recovery of its operability and determining the impact of the number of containers to be migrated during reconfiguration (both before and after the physical recovery of failed servers) on cluster reliability. Two options for container migration during cluster recovery are considered. In the first scenario, during the physical recovery phase of a failed server, container migration to a functional server does not occur, while in the second scenario it does. In the second stage of reconfiguration, following the physical recovery of a failed server, container migration takes place, allowing for either an increase or decrease in the number of containers deployed on the servers. Based on the proposed Markov models of cluster reliability with container virtualization, an evaluation of the cluster availability coefficient is provided, and the influence of the number of containers loaded during migration at the two reconfiguration stages on system reliability is determined. The proposed Markov models of cluster reliability with container virtualization are aimed at justifying design decisions for organizing and restoring cluster operability after server failures, considering the impact of container migration implementation options on system availability. Future research will analyze the impact of container migration options on both cluster availability and request processing latency at the two considered reconfiguration stages.
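As a minimal numerical illustration of how availability is obtained from such models, the sketch below solves the steady state of a toy three-state continuous-time Markov chain (operational, reconfiguring, under repair); the chain structure and its rates do not reproduce the paper's two-stage model.

```python
# Minimal illustration: steady-state availability of a toy continuous-time
# Markov chain by solving pi @ Q = 0 with sum(pi) = 1. States: 0 = all servers
# up, 1 = reconfiguration (container migration) in progress, 2 = failed server
# under physical repair. Rates are toy values.
import numpy as np

lam, mu_migr, mu_rep = 0.002, 2.0, 0.05      # failure, migration, repair rates (1/h)
Q = np.array([
    [-lam,     lam,      0.0    ],
    [ 0.0,    -mu_migr,  mu_migr],
    [ mu_rep,  0.0,     -mu_rep ],
])

# append the normalization condition and solve the resulting linear system
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
# toy assumption: requests are served in states 0 and 2 (cluster reconfigured)
availability = pi[0] + pi[2]
print("steady-state availability:", availability)
```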
BRIEF PAPERS
The paper describes three ways of calculating the k-dimensional volume of a k-dimensional simplex in n-dimensional Euclidean space (n ≥ k) in the canonical barycentric coordinate system. The first method calculates the volume of the n-dimensional simplex using the determinant of the barycentric matrix, the columns of which are the barycentric coordinates of the simplex vertices. The second method calculates the volume of a k-dimensional simplex using the Cayley–Menger determinant through the lengths of the simplex edges, which can be found from the barycentric coordinates of the vertices. The third method computes the volume using the Gram determinant for a system of vectors constructed from the vertices of the given simplex in (n + 1)-dimensional Euclidean space.
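For reference, the Cayley–Menger determinant used in the second method expresses the squared k-dimensional volume through the squared edge lengths $d_{ij}^2$ of the simplex with vertices $v_0, \dots, v_k$:

$$
V_k^{2} \;=\; \frac{(-1)^{k+1}}{2^{k}\,(k!)^{2}}\,
\det\begin{pmatrix}
0 & 1 & 1 & \cdots & 1\\
1 & 0 & d_{01}^{2} & \cdots & d_{0k}^{2}\\
1 & d_{10}^{2} & 0 & \cdots & d_{1k}^{2}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & d_{k0}^{2} & d_{k1}^{2} & \cdots & 0
\end{pmatrix}.
$$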
We propose a probabilistic matrix-clustering method that leverages a prior distribution of features and dimensionality reduction (Singular Value Decomposition, SVD). The approach identifies, within a large control pool, a cluster statistically comparable to the test cohort, thereby reducing systematic bias in downstream comparative analyses. We show that the method correctly selects control groups in scenarios where standard nearest-neighbor matching produces false positives. The method has been used to construct control groups in studies based on the Russian Biobank at the Almazov National Medical Research Centre (Ministry of Health of the Russian Federation).
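A simplified sketch of the selection step under stated assumptions: truncated SVD for dimensionality reduction, KMeans as a stand-in for the probabilistic clustering, and centroid distance as a stand-in for the statistical comparability criterion; the data is random and illustrative.

```python
# Simplified sketch: reduce the feature matrix with truncated SVD, cluster the
# control pool, and select the cluster closest to the test cohort. KMeans and
# Euclidean centroid distance are stand-ins for the probabilistic criterion.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
controls = rng.normal(size=(5000, 40))            # control pool features (stand-in)
test_cohort = rng.normal(loc=0.2, size=(200, 40)) # test cohort features (stand-in)

svd = TruncatedSVD(n_components=10, random_state=0).fit(np.vstack([controls, test_cohort]))
Zc, Zt = svd.transform(controls), svd.transform(test_cohort)

km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Zc)
dists = np.linalg.norm(km.cluster_centers_ - Zt.mean(axis=0), axis=1)
best = int(np.argmin(dists))
matched_controls = controls[km.labels_ == best]   # candidate control group
print(matched_controls.shape)
```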
ISSN 2500-0373 (Online)