
Scientific and Technical Journal of Information Technologies, Mechanics and Optics

Vol 25, No 6 (2025)

OPTICAL ENGINEERING

1003-1013
Abstract

   Femtosecond laser processing of chalcogenide glasses is a promising method for high-precision modification of their structure and properties for the development of optical elements for infrared photonics. One of the key challenges is to increase the processing speed while maintaining high spatial accuracy and minimal thermal damage. At high pulse repetition rates, which increase processing throughput, the mechanisms of phase and chemical transformations change and the contribution of accumulative heating grows. However, the dynamics of these processes in bulk material remains insufficiently studied. This paper studies the mechanism of phase and chemical composition transformation of As2Se3 bulk chalcogenide glass under the action of femtosecond laser pulses in intense ablation modes.

   The objects of study are plates of chalcogenide glass As2Se3 irradiated by femtosecond laser pulses with a wavelength of 515 nm at repetition rates up to 1 MHz.

   The irradiated samples are analyzed using digital optical microscopy and Raman spectroscopy. Theoretical analysis includes both calculations of photoexcitation and heating of the semiconductor by a single laser pulse and calculations of accumulative heating of the sample surface, taking into account three-dimensional heat removal. The single-pulse laser ablation threshold was established at a laser pulse repetition rate of 1 kHz, and the parameters of the power-law dependence of the ablation threshold on the number of pulses (incubation effect) were determined. A detailed analysis of the morphology of the irradiated samples and the chemical composition of the laser ablation products was carried out, revealing the formation of amorphous selenium (a-Se) and arsenolite crystals (As2O3). Theoretical analysis allowed us to estimate the degree of heating and photoexcitation of As2Se3 chalcogenide glass by a single laser pulse and revealed a significant contribution of the heat accumulation effect to the surface temperature rise at a pulse repetition rate of 1 MHz. Based on the combined experimental and theoretical results, a vapor-phase mechanism of phase and chemical transformation in bulk As2Se3 glass has been established during femtosecond laser ablation at a pulse repetition rate of 1 MHz. These findings open up prospects for the development of high-performance technologies for femtosecond laser microstructuring of chalcogenide materials in photonics and sensing applications.
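
   The power-law incubation dependence mentioned above is commonly written in the textbook form shown below (a hedged sketch in my own notation; the abstract does not give the authors' exact normalization or exponent symbol):

% Multi-pulse incubation model (assumed standard form):
% F_th(N) - ablation threshold fluence after N pulses,
% F_th(1) - single-pulse threshold, S - incubation coefficient (S = 1 means no incubation).
\[
  F_{\mathrm{th}}(N) = F_{\mathrm{th}}(1)\, N^{\,S-1}
\]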

1014-1023
Abstract

   Fiber Bragg Grating (FBG) interrogators contain a movable scattering element that tracks the FBG central wavelength. The movable element of the interrogator limits the interrogation speed. This paper proposes an interrogation method that does not use movable elements. This is achieved by using an Arrayed Waveguide Grating (AWG) to split the FBG reflected spectrum and a Convolutional Neural Network (CNN) trained to determine the central wavelength. Most of the known studies consider the AWG output as a one-dimensional data array for training the neural network. However, CNNs work best with two-dimensional images. This paper proposes to transform the AWG output using a two-dimensional image sensor with a circular configuration. This allows for higher accuracy and improved resolution in predicting the central wavelength. The AWG signal is projected onto a two-dimensional image sensor which has either a grid or a circular configuration. The number of AWG channels used is 32, which corresponds to a channel wavelength spacing of 0.0625 nm. The circular configuration enables more accurate feature extraction using the CNN. A 32-beam passive waveguide array in a circular configuration is used for FBG interrogation. It projects the FBG output signals onto the image sensor, enabling high-resolution Bragg wavelength prediction. Computer simulation of the proposed interrogation device demonstrated a predicted resolution of ±1 pm with 98 % accuracy. It should be noted that the presented values are estimates and are subject to refinement using a hardware prototype. Such devices are relatively easy to manufacture and are readily available to consumers.
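
   As an illustration of the circular mapping idea, the sketch below arranges per-channel AWG intensities as spots on a circle in a synthetic two-dimensional image that a CNN could consume. It is not the authors' implementation: the image size, spot shape, and the example spectrum are assumptions; only the channel count (32) is taken from the abstract.

import numpy as np

def awg_channels_to_circular_image(channels, size=64, radius=24, spot_sigma=1.5):
    """Place N AWG channel intensities as Gaussian spots evenly spaced on a circle.

    channels   : 1D array of per-channel optical powers (N = 32, as in the abstract)
    size       : side of the square image in pixels (assumed value)
    radius     : circle radius in pixels (assumed value)
    spot_sigma : Gaussian spot width in pixels (assumed value)
    """
    n = len(channels)
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2.0
    image = np.zeros((size, size))
    for i, power in enumerate(channels):
        angle = 2 * np.pi * i / n
        sx = cx + radius * np.cos(angle)   # spot centre for channel i
        sy = cy + radius * np.sin(angle)
        image += power * np.exp(-((x - sx) ** 2 + (y - sy) ** 2) / (2 * spot_sigma ** 2))
    return image

# Example: a Bragg peak centred between channels 15 and 16 spreads power over neighbours.
channels = np.exp(-0.5 * ((np.arange(32) - 15.5) / 1.2) ** 2)
img = awg_channels_to_circular_image(channels)
print(img.shape, img.max())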

1024-1032
Abstract

   The paper deals with the issues of energy calibration and precise measurement of the spectral characteristics of optoelectronic system elements designed for the analysis of radiation from remote objects. Modern calibration methods require the consideration of environmental variables, particularly in the infrared spectral range, which leads to significant measurement errors and creates difficulties during ground-based tests. The difficulty of compensating for atmospheric effects, together with the low accuracy and labor intensity of accounting for them with traditional techniques, results in significant errors and reduces the quality of the obtained data. The proposed approach is based on the use of narrow-spectrum emitters located directly in front of the instrument input port, eliminating the need to account for atmospheric spectral transmission. Instead of traditional methods based on standard sources, such as “black bodies” or photodetectors, which require accounting for the transmission of the air gap, it is proposed to use a series of narrow-spectrum (spectral-zone) radiation fluxes as a calibration emitter acting on the tested optical-electronic system directly in the plane of its input window. This allows direct measurement of the spectral characteristics of the elements under study, bypassing the step of determining the transmission of the air gap. This approach reduces measurement uncertainty and allows calibration to be carried out without the need for complex compensating measurements. The conducted experiments confirmed that the proposed method reduces the measurement uncertainty by at least two orders of magnitude compared to traditional approaches. The effectiveness of the method is demonstrated through specific examples that show its advantages in the study of light sources and in measurements of the spectral sensitivity of instruments. The novelty of the proposed approach is the elimination of the main source of uncertainty (accounting for the spectral transmission of the atmosphere), which significantly improves the metrological performance of calibration. The proposed method is effective in almost all application situations and provides a significant increase in measurement accuracy. Compared to classical solutions, this method is easier to implement technically and provides significantly better results in the fields of remote optical reconnaissance, medicine, agriculture, and ecology.
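
   The gain can be summarized with the following hedged relations (notation of my own choosing, not taken from the paper): with a remote reference source the instrument response is entangled with the air-gap transmission, whereas an emitter placed in the input-window plane removes that factor.

% S(lambda_i) - measured signal in spectral zone i, R(lambda_i) - spectral response under study,
% E(lambda_i) - known emitter radiance, tau_air(lambda_i) - air-gap (atmospheric) transmission.
\[
  \text{remote source: } S(\lambda_i) = R(\lambda_i)\,\tau_{\mathrm{air}}(\lambda_i)\,E(\lambda_i),
  \qquad
  \text{emitter at the input window: } S(\lambda_i) \approx R(\lambda_i)\,E(\lambda_i),
\]
\[
  \Rightarrow\; R(\lambda_i) \approx \frac{S(\lambda_i)}{E(\lambda_i)} \quad \text{without estimating } \tau_{\mathrm{air}}.
\]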

PHOTONICS AND OPTOINFORMATICS

1033-1046
Abstract

   In recent years, laser-structured titanium dioxide (TiO2) surfaces have attracted considerable attention due to their combination of high specific surface area, biocompatibility, and unique optical properties, offering promising opportunities for photonics, sensing, and energy applications. Of particular interest is the study of the optical manifestations of porous Ti/TiO2 films fabricated via laser structuring, with potential evidence of plasmonic resonances and anomalous dispersion. The samples were prepared from titanium foil subjected to anodization in potassium hydroxide solution, followed by nanosecond laser structuring at the wavelength of 1064 nm and an energy density of (3.2 ± 0.2)·10³ J/cm². Surface morphology was analyzed using scanning electron microscopy and optical profilometry, while optical characteristics were investigated by spectrophotometry and ellipsometry. To interpret the spectral data, a modified Adachi-Forouhi model within the dipole approximation was applied, enabling quantitative description of the contributions of interband transitions and plasmonic modes. The surfaces produced by laser structuring exhibited pronounced porosity (pore sizes of 300–1100 nm, depth ~200 nm), submicron cracks, and nanoparticles of the laser-structured material. Reflection spectra revealed minima corresponding to the excitation of surface plasmons and interference modes. Dielectric permittivity spectra displayed a region of anomalous dispersion and field localization at a wavelength of 625 nm. Calculated parameters included the skin layer thickness, Purcell factor for a nanopore, damping length of plasmon oscillations on the surface, propagation length of surface plasmons, and the critical value of polarizability enhancement in the plasmon resonance localization region. Modeling indicated a narrowing of the bandgap to 1.016 eV. Contributions to the dielectric permittivity of the semiconductor component from interband absorption saturation, changes in band structure, and free carriers were determined. While the bandgap narrowing played a decisive role, the dominant contribution to the experimentally observed dielectric behavior arose from the generation of resonant plasmonic modes. It was established that the key mechanism of the optical response is the resonant localization of the electromagnetic field within the nanopores, confirming the manifestation of hyperbolic metamaterial behavior. The material obtained exhibited significant bandgap narrowing due to the nanosecond laser treatment. The results highlight the potential of porous laser-structured anodized titanium surfaces for photonic and sensing devices, as well as for use in waveguiding structures.
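
   For reference, the Purcell factor mentioned among the calculated parameters is usually estimated with the textbook cavity expression below (a generic form; the authors' exact formulation for a nanopore is not given in the abstract):

% Q - quality factor of the resonant mode, V - mode volume,
% lambda - vacuum wavelength, n - refractive index of the surrounding medium.
\[
  F_{P} = \frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}
\]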

MATERIAL SCIENCE AND NANOTECHNOLOGIES

1047-1057
Abstract

   The development of organic electronics stimulates the search for new materials. The priority task is to find compounds with high luminescence brightness, efficiency, and stability. Coumarin derivatives are considered promising candidates for solving this problem. This paper presents the results of a study of organic light-emitting diodes whose emission layer employs a number of coumarin dyes with pronounced donor-acceptor properties.

   The aim of the study was to identify the influence of the structure of the synthesized molecules on their photophysical characteristics as well as on the emission efficiency of LEDs based on them.

   A series of organic compounds of the coumarin series has been synthesized: (E)-3-(3-(anthracene-9-yl)acryloyl)coumarin (compound 1), 4-hydroxy-3-(5-(4-methoxyphenyl)-1-(p-tolyl)-4,5-dihydro-1H-pyrazol-3-yl)coumarin (compound 2), 3-(1-acetyl-5-(4-methoxyphenyl)-4,5-dihydro-1H-pyrazol-3-yl)-4-hydroxycoumarin (compound 3), and ethyl 7-(diethylamino)coumarin-3-carboxylate (compound 4), as well as the well-known laser dye Coumarin 6 (3-(benzo[d]thiazole-2-yl)-7-(diethylamino)coumarin) used as a reference compound. The LEDs were produced by vacuum thermal deposition and spin coating. The fluorescence and electroluminescence spectra were studied using an Ocean Optics Maya 200 PRO spectrometer, and a PicoQuant PMA-C 192-N-M photomultiplier was used to record the luminescence decay curves. Spectral data (absorption, photoluminescence) as well as time-resolved measurements (fluorescence decay time) indicate the key role of donor-acceptor interactions as well as spatial effects in the formation of electronic transitions. The current-voltage characteristics confirmed the presence of space-charge-limited conduction and conduction limited by carrier trapping. The study of the voltage-brightness characteristics showed that compound 2 demonstrates brightness comparable to the reference compound Coumarin 6, which makes it the most promising for further optimization of organic light-emitting diodes. In addition, it is shown that compound 4 in the device provides white emission with chromaticity coordinates close to daylight, which makes it potentially suitable for practical use in lighting systems. The data obtained confirm the influence of donor-acceptor interactions on the properties of coumarins. The degree of conjugation of the donor and acceptor fragments is directly reflected in the shifts of the absorption and fluorescence spectra. The high brightness of compound 2-based diodes, comparable to the Coumarin 6 reference, is due to its efficient donor-acceptor system which optimizes intramolecular charge transfer and increases the likelihood of radiative transitions (a relatively long excited-state lifetime of 3.5 ns). On the contrary, the acetyl group in compound 3 disrupts conjugation, leading to low brightness and a short fluorescence lifetime (1.7 ns) due to nonradiative relaxation. The ability of compound 4 to provide white emission in diodes (correlated color temperature of 6410 K, close to daylight) is related to the contribution of the electron transport layer to the emission spectrum.

1058-1066
Abstract

   Epitaxy of highly strained InGaAs quantum wells with a mole fraction of indium exceeding 35 % is a technologically challenging task. The structural quality of these elastically strained epitaxial layers greatly affects the photoluminescence efficiency of quantum wells. Therefore, in order to achieve high structural quality, optimization of the epitaxial growth parameters is required, one of which is the growth rate of the epitaxial layer. Heterostructures containing the InxGa1–xAs (0.37 ≤ x < 0.41) quantum well were produced on GaAs substrates by molecular beam epitaxy with different InGaAs growth rates ranging from 0.24 to 3.3 Å/s. The actual thickness and composition of the quantum well were determined by X-ray diffractometry, and the structural quality of the heterostructures was investigated. The photoluminescence spectra of the manufactured heterostructures were measured at temperatures of 20 K and 300 K at different optical pumping powers. Based on the dependence of photoluminescence intensity on pumping power, recombination currents were calculated and the non-radiative recombination time in the studied structures was estimated. The InAs content in the quantum wells of the manufactured heterostructures ranged from 37.0 % to 40.6 %. Based on analysis of X-ray rocking curves, deterioration of structural quality at the low deposition rate of 0.24 Å/s was observed. The photoluminescence spectroscopy measurements showed a significantly higher photoluminescence intensity of quantum wells at moderate InGaAs growth rates (0.9–2.5 Å/s) compared to the other samples. The calculated values of the non-radiative recombination lifetime for quantum wells produced at these moderate growth rates were on the order of 10⁻⁶ s at 20 K and 10⁻⁹ s at 300 K. At higher or lower growth rates, the non-radiative recombination lifetime decreased. The results obtained demonstrate the achievement of the best structural quality for highly strained InGaAs layers produced at 0.9–2.5 Å/s growth rates. These results can be used to optimize the parameters of epitaxial growth processes for highly strained InGaAs-based quantum wells for fabricating monolithic GaAs-based vertical-cavity surface-emitting lasers operating in the 1200–1300 nm spectral range.
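
   A common way to connect photoluminescence intensity with pump power for such lifetime estimates is the steady-state carrier balance below (my notation; the abstract does not spell out the authors' exact model):

% G - generation rate (proportional to pump power), n - carrier density,
% A = 1/tau_nr - non-radiative (Shockley-Read-Hall) rate, B - radiative coefficient.
\[
  G = A\,n + B\,n^{2}, \qquad I_{\mathrm{PL}} \propto B\,n^{2},
\]
% so the shape of I_PL versus G (quadratic at low pumping, linear at high pumping)
% constrains A, and hence the non-radiative lifetime tau_nr = 1/A.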

1067-1079
Abstract

   Silicon microelectromechanical pressure sensors of the resonant-frequency type are distinguished by high linearity and stability of their output characteristics, making them particularly promising for precision measurements. This paper presents a study of the influence of membrane geometry and stress-strain state on the sensitivity of resonant-frequency pressure sensors. Recommendations for optimal resonator placement and the selection of a process route for membrane formation are also developed. Using three-dimensional models of membranes of various geometric shapes, numerical simulation of their stress-strain state under static pressure was performed using the finite element method. The simulation made it possible to identify the zones of localized deformation most suitable for resonator placement. Wet etching with preliminary wafer thinning and subsequent finishing machining was used to fabricate test samples of silicon membranes. It is shown that maximum sensitivity is achieved by positioning the resonator in zones of peak tensile and compressive stresses. An analysis of the relationship between membrane shape, stress distribution, and resonator response was conducted, enabling the identification of optimal resonator locations in terms of manufacturing tolerances and sensitivity. Membrane preparation methods were compared: chemical and mechanical thinning followed by polishing. Based on roughness measurements for membranes manufactured using different methods, the optimal preparation technology was described. The obtained results enable optimization of the geometry and manufacturing process of the resonant-frequency pressure sensor, which contributes to increased sensitivity, wider manufacturing tolerances, reduced production costs, and improved reliability in industrial operation.

AUTOMATIC CONTROL AND ROBOTICS

1080-1088
Abstract

   The paper considers the problem of compensation for unknown external disturbances for a class of linear stationary multidimensional systems with distinct input delays. It is assumed that external disturbances are harmonic signals with unknown frequencies, phases, amplitudes, and biases that simultaneously affect both the input and output of the system. To solve the problem, the direct disturbance compensation method based on the internal model principle is used in combination with the classical Falb-Wolovich linear state feedback decoupling method, which allows increasing the convergence rate of output signals with a small adaptation parameter. In order to eliminate cross-interactions between control loops, the channel decoupling method based on the Falb-Wolovich linear state feedback decoupling approach is applied to the system. Then, an observer is constructed to estimate the state vector of the external disturbance model and, based on these estimates, an adaptive control law with memory regressor extension is designed to compensate for external disturbances based on the internal model principle. The system is stabilized simultaneously with the decoupling of control channels, which allows one to proceed to the problem of compensating for unknown external disturbances, bypassing the design phase of the stabilizing component of the control signal. There are no restrictions on the observability and stability of the control plant. An adaptive algorithm with memory regressor extension combined with the Falb-Wolovich linear state feedback decoupling method is proposed to compensate for unknown external disturbances for a class of linear stationary multidimensional systems with distinct control delays. The efficiency of the proposed approach is illustrated by an example of numerical simulation in the MATLAB/Simulink environment. The resulting transient response plots demonstrate that the proposed algorithm ensures the boundedness of all closed-loop signals and the asymptotic stability of the output variables in the presence of distinct input delays under external harmonic disturbances. The proposed approach provides an improved rate of convergence of processes and can be applied in engineering systems and complexes whose mathematical description takes the form of linear multidimensional systems with distinct input delays.
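
   The disturbance class described above is typically modeled as a biased multi-harmonic signal generated by a linear exosystem, of which an internal-model controller embeds a copy (a hedged sketch in my notation; the paper's exact construction with memory regressor extension is not reproduced here):

% d_i(t) - i-th disturbance component with unknown amplitude A_i, frequency omega_i,
% phase phi_i and bias sigma_i; (Gamma, H) - generating exosystem.
\[
  d_i(t) = \sigma_i + A_i \sin(\omega_i t + \varphi_i), \qquad
  \dot{w} = \Gamma w, \quad d = H w,
\]
% the unknown parameters of Gamma are estimated adaptively, and the controller
% embeds this model to cancel d at the plant input and output.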

1089-1097
Abstract

   Omnidirectional mobile platforms, known for their exceptional maneuverability in confined spaces, often encounter not only energy efficiency challenges due to the design of roller-bearing wheels but also operational limitations in real-world environments such as height differences and uneven terrain. To overcome these limitations, it is necessary to enable switching between omnidirectional and conventional driving modes through adaptive motion mode switching. This approach combines the maneuverability required for navigation in tight spaces with improved off-road capability and energy efficiency on uneven surfaces and slopes. This study proposes an algorithm for adaptive motion mode switching, providing transitions from an omnidirectional to a classical kinematic scheme and back via a specially developed compact switching mechanism. To achieve this, enhanced kinematic, dynamic, and energy models were utilized in combination with laboratory experiments conducted on a reconfigurable platform. The proposed improvements make it possible to perform a simple and rapid transition between kinematic configurations using the compact switching mechanism. Experimental studies were carried out under laboratory conditions on a flat concrete surface where the robot followed a closed trajectory. During the experiments, energy consumption and trajectory-tracking errors were recorded for holonomic, nonholonomic, and reconfigurable motion modes. Comparative analysis demonstrated that the proposed switching algorithm reduces energy consumption by an average of 8 % while maintaining maneuverability. For larger robots, whose total mass significantly exceeds that of the reconfiguration mechanism, energy savings in real-world scenarios can be even greater due to the system's ability to optimize energy usage and select the most efficient configuration for different trajectory segments. The system retains high maneuverability and ensures efficient navigation in complex environments. The presented algorithm enables the platform to achieve a crucial balance between mobility, efficiency, and control accuracy. This opens the possibility for the practical implementation of reconfigurable robots in real-world service applications. The obtained results have practical significance for the design of adaptive mechanical and control systems that enhance the operational flexibility of mobile platforms under resource-constrained conditions.

1098-1106
Abstract

   The classical output control problem for a linear system with an input delay and constant known parameters is considered. The plant may be unstable, making most of the known methods ineffective or unconstructive. A new control algorithm based on the Luenberger observer and the Smith predictor is proposed, incorporating correction terms defined by simple expressions that eliminate the need for complex calculations. The resulting regulator has a linear structure; however, the correction term provides for a periodic reset of the corresponding regulator variable. It is analytically proven that the closed-loop system consisting of a plant with an input delay and the modified Smith predictor is globally exponentially stable. The resulting method for controlling systems with an input delay surpasses all analogues known to the authors in terms of simplicity of implementation and effective performance for unstable systems. In future works, this approach will be extended to nonlinear and parametrically uncertain systems with an input delay.
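
   For reference, the classical prediction that Smith-predictor-type schemes rely on for an input-delay plant is shown below (a generic textbook form; the paper's correction terms and periodic reset are not reproduced here):

% Plant: dx/dt = A x(t) + B u(t - h), delay h > 0, state estimate from a Luenberger observer.
\[
  \hat{x}(t+h) = e^{Ah}\hat{x}(t) + \int_{t-h}^{t} e^{A(t-s)} B\,u(s)\,ds,
  \qquad u(t) = K\,\hat{x}(t+h),
\]
% which removes the delay from the nominal closed loop when the estimate matches the true state.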

COMPUTER SCIENCE

1107-1116
Abstract

   The trending task of automatic detection of the human psycho-emotional state is studied in this work. Scientific interest in research on automatic multimodal depression detection stems from the wide prevalence of anxiety-depressive disorders and the difficulty of detecting them in primary health care. The specificity of the task is caused by its complexity, the lack of data, class imbalance, and annotation inaccuracies. Comparative studies show that classification results on semi-automatically annotated data are higher than those on automatically annotated data. The proposed approach for depression detection combines semi-automatic data annotation and deterministic machine learning methods with the utilization of several feature sets. To build our models, we utilized the multimodal Extended Distress Analysis Interview Corpus (E-DAIC) which consists of audio recordings, texts automatically extracted from these audio recordings, and feature sets extracted from video recordings, as well as annotation including the Patient Health Questionnaire (PHQ-8) score for each recording. Semi-automatic annotation makes it possible to obtain exact time stamps and speech transcripts, reducing the noisiness of the training data. In the proposed approach, we use several feature sets extracted from each modality (the expert acoustic feature set eGeMAPS, the neural acoustic feature set DenseNet, the expert visual feature set OpenFace, and the text feature set Word2Vec). Combined processing of these features minimizes the effect of class imbalance in the data on classification results. Experiments using a subset of these feature sets (DenseNet, OpenFace, Word2Vec) and deterministic machine learning classifiers (CatBoost), which offer interpretable classification results, yielded results on the E-DAIC corpus comparable with the existing ones in the field (68.0 % Weighted F1-measure (WF1) and 64.3 % Unweighted Average Recall (UAR), respectively). The use of semi-automatic annotation and modality fusion improved both annotation quality and depression detection compared to unimodal approaches, and more balanced classification results were achieved. The use of decision-tree-based deterministic classification methods will allow an interpretability analysis of the classification results in the future. Other interpretation methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can also be used for this purpose.
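
   A minimal sketch of the feature-level fusion plus gradient-boosted-trees classification described above is given below. The feature arrays are random stand-ins for the eGeMAPS/DenseNet, OpenFace, and Word2Vec sets named in the abstract, and the CatBoost hyperparameters are assumptions, not the authors' settings.

import numpy as np
from catboost import CatBoostClassifier          # deterministic gradient-boosted trees
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for per-modality feature sets; real features would be
# extracted from the E-DAIC recordings.
rng = np.random.default_rng(0)
n = 200
audio = rng.normal(size=(n, 88))    # e.g. an eGeMAPS-sized vector
video = rng.normal(size=(n, 49))    # e.g. an OpenFace AU/landmark summary
text = rng.normal(size=(n, 300))    # e.g. a Word2Vec document embedding
y = rng.integers(0, 2, size=n)      # depressed / non-depressed label from a PHQ-8 threshold

X = np.hstack([audio, video, text])                 # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.05, verbose=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print("WF1:", f1_score(y_te, pred, average="weighted"))
print("UAR:", recall_score(y_te, pred, average="macro"))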

1117-1124
Abstract

   The study examines approaches to quantifying various effects, such as Position Bias, Popularity Bias, and others, in recommender systems. A new quality model for recommendation algorithms is proposed which reduces the selected metrics to a single unit of measurement and, for each effect, determines its impact on the system. The obtained scores allow for a deeper comparative analysis of various algorithms as well as investigation of algorithm behavior in different user segments. For each metric, two conditional marginal distribution densities are built within the framework of the model: one based on relevant and one based on irrelevant recommendations. Based on the comparison of these densities, the set of possible metric values is divided into normal and critical regions. The model evaluates the impact of each effect on the system based on how frequently the values of the corresponding metric fall into its critical region. To demonstrate how the model works, four recommendation algorithms were analyzed on the MovieLens-100K academic dataset. During the testing, Popularity Bias, the lack of novelty in recommendations, and the tendency of algorithms to recommend objects solely based on user demographic data were evaluated. For each effect, an assessment of its impact on the system is constructed, and an example is given of predicting an upper bound on system quality if the corresponding effect is eliminated. The study demonstrated that metrics of effects such as Popularity or Position Bias can change the distribution of absolute values depending on the system. The proposed quality model is one way to compare different recommendation algorithms more reliably. The model is suitable for evaluating personal recommendations, regardless of the scope of application and the algorithm that was used to build them.
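
   The sketch below illustrates the density-comparison idea on synthetic data: two conditional densities of a metric are estimated (over relevant and irrelevant recommendations), the critical region is taken where the "irrelevant" density dominates, and the effect score is the share of observed metric values falling into that region. All numbers and distributions are hypothetical, not drawn from the paper.

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical metric values (e.g. normalized item popularity of a recommendation),
# split by whether the recommendation turned out relevant or irrelevant.
rng = np.random.default_rng(1)
metric_relevant = rng.normal(loc=0.4, scale=0.15, size=500)
metric_irrelevant = rng.normal(loc=0.7, scale=0.15, size=500)

# Conditional densities estimated separately, as described in the abstract.
p_rel = gaussian_kde(metric_relevant)
p_irr = gaussian_kde(metric_irrelevant)

# Critical region: metric values where the "irrelevant" density dominates
# (treated as a single interval for this toy example).
grid = np.linspace(0.0, 1.0, 501)
critical = grid[p_irr(grid) > p_rel(grid)]
lo, hi = critical.min(), critical.max()

# Effect score: how often the system's recommendations land in the critical region.
observed = rng.normal(loc=0.55, scale=0.2, size=1000)   # metric values of served items
score = np.mean((observed >= lo) & (observed <= hi))
print(f"critical region ~ [{lo:.2f}, {hi:.2f}], effect score = {score:.2f}")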

1125-1133
Abstract

   The advancement of decentralized control technologies for swarms of Unmanned Aerial Vehicles requires new methods for ensuring their resilience to internal threats. The emergence of an intruder agent within a swarm creates threats of energy or information attacks. The situation is especially critical when the intruder agent is located at the center of the swarm, where its influence on its neighbors is greatest. Existing research has focused primarily on detecting intruder agents, while countermeasures, particularly spatial exclusion of the intruder agent from the group, remain poorly understood. This study develops and analyzes a method for spatial countermeasures against the intruder agent that does not require its explicit detection or information exchange between agents. The proposed method is based on the original idea of analogizing the control of a swarm of unmanned aerial vehicles (agents) with the processes occurring in a semiconductor crystal. Countermeasures against the intruder agent are achieved through the temporary modification of certain swarm interaction parameters by the agents. As a result, the spatial structure of the swarm changes, and the intruder, which does not change its interaction parameters, begins to move relative to the other agents, ending up at the edge of the group. Three implementation options for the proposed countermeasure method, based on compression, expansion, and sequential restructuring of the swarm structure, are investigated. Simulation modeling of the behavior of a swarm of unmanned aerial vehicles in the presence of an intruder was performed. The success of the method was measured by the probability of the intruder agent having fewer than five neighboring unmanned aerial vehicles, i.e., being at the edge of the group. The best performance (probability close to 1.0) was demonstrated by the swarm compression option. The swarm expansion option showed lower performance (a non-guaranteed option). The sequential restructuring option proved ineffective. It is shown that the degree of change in the distance between agents makes the main contribution to the effectiveness of the method. The proposed method implemented in the swarm compression mode demonstrated effectiveness for swarm sizes ranging from 19 to 91 agents. The proposed method reduces the likelihood of destructive intruder influence in a swarm of unmanned aerial vehicles, relying solely on the agents' local navigation data, without resorting to intruder detection. This makes the method applicable to systems with limited communication capabilities. Some increase in energy consumption can be mitigated by optimizing the intrusion duration and maintaining the swarm structure.

1134-1141
Abstract

   Autoprompting is the process of automatically selecting optimized prompts for language models, which has been gaining popularity with the rapid advancement of prompt engineering driven by extensive research in the field of Large Language Models. This paper presents ReflectivePrompt, a novel autoprompting method based on evolutionary algorithms that employs a reflective evolution approach for a more precise and comprehensive search for optimal prompts. ReflectivePrompt utilizes short-term and long-term reflection operations before crossover and elitist mutation to enhance the quality of the modifications they introduce. This method allows for the accumulation of knowledge obtained throughout the evolution process and updates it at each epoch based on the current population. ReflectivePrompt was tested on 33 datasets for classification and text generation tasks using open-access large language models: T-lite-instruct-0.1 and Gemma3-27b-it. The method demonstrates, on average, a significant improvement in metrics (e.g., 28 % on BBH compared to EvoPrompt) relative to current state-of-the-art approaches, thereby establishing itself as one of the most effective solutions in evolutionary algorithm-based autoprompting.
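
   The skeleton below sketches the general shape of such a reflective evolutionary loop. Every component is a placeholder: score_prompt, reflect, crossover, and mutate stand in for calls to an actual LLM and task evaluation, and none of the names or operators are taken from the paper.

import random

SEED_PROMPTS = [
    "Answer the question step by step.",
    "Think carefully and give a concise answer.",
    "Explain your reasoning, then answer.",
]

def score_prompt(prompt: str) -> float:
    # Placeholder fitness: a real implementation would evaluate the prompt
    # on a validation split of the target task with the LLM.
    return len(set(prompt.lower().split())) / (len(prompt.split()) + 1)

def reflect(population: list[str]) -> str:
    # Placeholder "short-term reflection": summarize what the best prompt does well.
    best = max(population, key=score_prompt)
    return f"keep traits of: {best}"

def crossover(a: str, b: str) -> str:
    wa, wb = a.split(), b.split()
    return " ".join(wa[: len(wa) // 2] + wb[len(wb) // 2 :])

def mutate(prompt: str, hint: str) -> str:
    # A real system would ask the LLM to edit `prompt` guided by `hint`.
    return prompt + " Be precise."

population = SEED_PROMPTS[:]
for epoch in range(5):
    hint = reflect(population)                       # reflection before variation
    parents = random.sample(population, 2)
    child = mutate(crossover(*parents), hint)
    population.append(child)
    population = sorted(population, key=score_prompt, reverse=True)[:3]  # elitism

print("best prompt:", population[0])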

1142-1149
Abstract

   Automated segmentation of coronary arteries in coronary computed tomography angiography plays an important role in the diagnosis and treatment of coronary artery disease. Manual segmentation of coronary arteries requires significant labor and is prone to subjective errors, which necessitates the development of accurate and reliable automated methods for coronary artery segmentation. The paper presents an approach based on a deep neural network with the Swin-UNETR architecture which combines the advantages of visual transformers and the U-Net structure. To improve the accuracy, a domain-specific transfer learning strategy was used: the model was pre-trained on the ImageCAS dataset and then fine-tuned on a specialized dataset created for the Automated Segmentation of Coronary Arteries (ASOCA) Challenge with expert labeling of coronary arteries. The accuracy of the model was assessed on 10 test Computed Tomography Coronary Angiography cases from the ASOCA dataset. The average Dice coefficient was 0.8778, and the average 95th percentile Hausdorff distance (HD95) was 11.66 mm. The obtained results demonstrate that the accuracy of the proposed method is at the level of the leading models presented in the official ASOCA Challenge rating and exceeds the average agreement between expert annotators. The proposed method provides high accuracy of coronary artery segmentation. In the future, the introduction of post-processing methods, such as connected-component filtering or vessel tracking, as well as spatial attention mechanisms, can improve the accuracy of arterial contour localization and the adaptability of the model to various types of computed tomography data.
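
   For reference, the primary metric quoted above (the Dice coefficient) can be computed for binary masks as follows; the volumes in the example are toy arrays, not ASOCA data.

import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks: 2|P∩G| / (|P|+|G|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 3D volumes standing in for a predicted and a reference coronary-artery mask.
pred = np.zeros((32, 32, 32), dtype=bool)
gt = np.zeros_like(pred)
pred[10:20, 10:20, 10:20] = True
gt[12:22, 10:20, 10:20] = True
print(f"Dice = {dice_coefficient(pred, gt):.4f}")   # 0.8 for this overlap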

1150-1159
Abstract

   Time series forecasting has been used in research and applications in a number of domains such as environmental forecasting, healthcare, finance, supply chain management, and energy consumption. Accurate prediction of future values is necessary for strategic planning, operational efficiency, and well-informed decision-making regarding time-dependent variables. A hybrid time series forecasting architecture is proposed that combines the strengths of machine learning and statistical models, in particular Gradient Boosting Machines (GBM), Auto-Regressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM) networks. While LSTM networks and GBM are able to capture complex dependencies and nonlinear patterns, the ARIMA model captures the linear components within the time series. The hybrid model exploits ARIMA interpretability, LSTM temporal memory, and GBM ensemble learning efficiency by integrating these three models. Comprehensive experiments conducted on benchmark datasets have shown that the proposed hybrid significantly exceeds both the individual models and traditional baselines in prediction accuracy and reliability. The results show that for a variety of real-world applications, hybrid architectures can deliver reliable and accurate time series predictions.
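
   A reduced sketch of the hybrid idea is shown below: an ARIMA model captures the linear component and a gradient-boosting model corrects its residuals (the LSTM branch of the paper's architecture is omitted here). The series, model orders, and lag depth are illustrative assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.tsa.arima.model import ARIMA

# Toy series: trend + seasonality + noise (a stand-in for a benchmark dataset).
rng = np.random.default_rng(0)
t = np.arange(300)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=0.5, size=t.size)
train, test = y[:276], y[276:]

# Stage 1: ARIMA captures the linear/autoregressive component.
arima = ARIMA(train, order=(2, 1, 2)).fit()
arima_fc = arima.forecast(steps=test.size)

# Stage 2: a gradient-boosting model learns nonlinear structure in the residuals
# from lagged residual values.
resid = train - arima.fittedvalues
lags = 24
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
gbm = GradientBoostingRegressor(random_state=0).fit(X, target)

# Recursive residual forecast over the test horizon.
window = list(resid[-lags:])
resid_fc = []
for _ in range(test.size):
    nxt = gbm.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    resid_fc.append(nxt)
    window.append(nxt)

hybrid_fc = arima_fc + np.array(resid_fc)
print("RMSE ARIMA :", np.sqrt(np.mean((arima_fc - test) ** 2)))
print("RMSE hybrid:", np.sqrt(np.mean((hybrid_fc - test) ** 2)))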

1160-1167
Abstract

   The increasing complexity of autonomous agent systems and the constantly changing environment require the development of decision-making algorithms that operate under conditions of incomplete data to achieve group goals. A multi-agent approach is used to describe a group of autonomous agents, treating the system as a set of interacting intelligent agents. A model of the behavior of longhorn beetles is used to develop a method for collective analysis of the external environment by agents. A method based on continuous exchange of information between agents and aimed at minimizing resource costs when collecting information about the external environment is presented. An empirical study of the developed method showed an increase in the information received by the group and a decrease in expended resources in comparison with the Model Predictive Control and Cooperative Decision-Making for Mixed Traffic algorithms. The proposed method allows reducing the resource costs of the agent group and increasing system performance when achieving group goals under conditions of incomplete data.

MODELING AND SIMULATION

1168-1176
Abstract

   This study addresses the problem of finding an optimal temperature profile for a complex physico-chemical process. The greatest difficulties arise in the study and optimization of multicomponent systems, which determines both scientific and practical interest in developing the most effective tools for identifying optimal production regimes. One of the key aspects is the consideration of dynamic constraints that affect the rate of change of control parameters and ensure the construction of physically feasible trajectories of temperature variation. To solve this problem, a modified genetic algorithm is proposed, allowing for the incorporation of predefined constraints. An optimization problem is formulated for a complex physico-chemical process, aiming to determine the optimal temperature profile that maximizes (or minimizes) a given target parameter while satisfying constraints on the rate of temperature change. The method is based on discretizing the total process duration and representing the temperature profile as a piecewise linear function, with segment values determined using a genetic optimization algorithm. The main stages of the genetic algorithm have been modified and presented as an adaptive evolutionary search scheme that accounts for permissible control parameter variations. These modifications enhance the algorithm's robustness against local extrema and ensure more precise adherence to predefined constraints. The efficiency of the algorithm, the functionality of the software module, and the interaction mechanism were tested through a computational experiment investigating the kinetics of the reaction of dimethyl carbonate with alcohols in the presence of dicobalt octacarbonyl. Numerical simulations demonstrated that the temperature regime significantly influences reaction kinetics, and computational trials enabled the unambiguous identification of the optimal temperature profile under constraints on temperature increase and an additional requirement for the linear variation of the target product concentration. The proposed modification of the genetic algorithm significantly improved its robustness against local extrema and ensured stricter compliance with technological constraints. In particular, an analysis of the obtained profiles showed that the proposed method allows for solutions that ensure a more uniform distribution of the target product concentration, which is especially important in the design of reaction systems highly sensitive to parameter variations. This optimization approach can be useful for the design and scaling of chemical-technological processes, and the conducted study confirms the effectiveness of numerical methods and evolutionary algorithms for optimizing chemical reaction conditions.
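
   A minimal sketch of a genetic search over a piecewise-linear temperature profile with a ramp-rate constraint is given below. The objective function, temperature bounds, ramp limit, and population settings are all placeholders; a real run would evaluate candidates with the kinetic model of the process rather than the toy objective used here.

import numpy as np

N_SEGMENTS = 12               # discretization of the total process duration
T_MIN, T_MAX = 340.0, 420.0   # admissible temperatures, K (assumed values)
MAX_RAMP = 5.0                # max allowed change between segments, K (assumed value)
rng = np.random.default_rng(0)

def repair(profile):
    """Enforce the ramp-rate and range constraints on a candidate profile."""
    fixed = profile.copy()
    for i in range(1, len(fixed)):
        step = np.clip(fixed[i] - fixed[i - 1], -MAX_RAMP, MAX_RAMP)
        fixed[i] = np.clip(fixed[i - 1] + step, T_MIN, T_MAX)
    return fixed

def objective(profile):
    # Placeholder "yield": rewards smooth profiles near an (assumed) optimum of 390 K.
    return -np.mean((profile - 390.0) ** 2) - 0.1 * np.sum(np.diff(profile) ** 2)

pop = [repair(rng.uniform(T_MIN, T_MAX, N_SEGMENTS)) for _ in range(40)]
for generation in range(200):
    pop.sort(key=objective, reverse=True)
    elite = pop[:10]                                  # elitist selection
    children = []
    for _ in range(30):
        a, b = rng.choice(10, size=2, replace=False)
        cut = rng.integers(1, N_SEGMENTS)             # one-point crossover
        child = np.concatenate([elite[a][:cut], elite[b][cut:]])
        child += rng.normal(scale=1.0, size=N_SEGMENTS) * (rng.random(N_SEGMENTS) < 0.2)
        children.append(repair(child))                # keep offspring feasible
    pop = elite + children

best = max(pop, key=objective)
print(np.round(best, 1))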

1177-1184
Abstract

   In measurements of the magnetic azimuth of the borehole axis, calculations are based on the superposition of the Earth’s magnetic field and parasitic fields from the remanent magnetization of the geophysical tool assembly and the drill string. At high latitudes the horizontal component of the geomagnetic field is very small. As a result, even weak parasitic fields (on the order of 1 % of the geomagnetic field) can cause azimuth errors of 4° or more. Many methods to mitigate this effect have been reported in the literature. However, almost all of them require either additional equipment and preliminary measurements, or knowledge of the exact values of the magnitude and inclination of the geomagnetic field at the survey location. This motivates the creation of a compensation method that does not require preliminary measurements of the parasitic-field parameters or of the magnitude and inclination of the geomagnetic field. This paper proposes using an additional magnetometer in the inclinometer to measure the gradient of the superpositional magnetic field. From simulation, and using the measured gradient, an equivalent magnetic source in the form of a circular current loop is determined. The calculated field of this loop is then subtracted from the reference magnetometer readings. In the laboratory experiments, ring neodymium magnets (three variants with different magnetic flux densities) placed on the inclinometer axis were used as parasitic-field sources. A magnetic gradiometer was formed by two magnetometer sensors spaced 0.307 m apart. In the experiments, the developed algorithm identified parameters of current loops equivalent to the sources in terms of magnetic effect. This enabled compensation of the reference magnetometer readings and reduced the azimuth error from −1°15′36″ (source 1), −3°9′36″ (source 2), and +12°30′36″ (source 3) to within ±0°39′ for all sources. In the experiment the field magnitudes at the reference magnetometer location were 0.42 %, 1.59 %, and 5.60 % of the geomagnetic field, respectively. The proposed method increases azimuth measurement accuracy without requiring measurements of parasitic or geomagnetic field parameters. In addition, the use of the method allows reducing the length of nonmagnetic collars on both sides of the inclinometer during drilling. Thus, the method can be implemented in a sensor that computes and compensates for parasitic fields in real time during logging or drilling.
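
   For reference, the textbook on-axis field of a circular current loop, which is the kind of equivalent-source model described above, is (my notation; the general off-axis expression used in practice is more involved):

% I - loop current, R - loop radius, z - distance along the loop axis.
\[
  B_z(z) = \frac{\mu_0 I R^{2}}{2\left(R^{2} + z^{2}\right)^{3/2}}
\]
% Measuring the field at two points separated by a known base (the gradiometer)
% constrains the loop parameters, after which the calculated loop field is subtracted
% from the reference magnetometer readings.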

1185-1196
Abstract

   DNA microarray technology produces high-dimensional gene expression data, where many genes are irrelevant to disease. Effective feature selection is thus essential to mitigate the curse of dimensionality and enhance classification performance. This study introduces a multi-objective feature selection approach employing a Clustering-Based Binary Differential Evolution (CBDE) mutation to identify a compact set of disease-relevant genes. The proposed DeFs-CBDE algorithm was assessed on four gene expression datasets, i.e., brain, breast, lung, and central nervous system (CNS) cancer, by selecting informative feature subsets and evaluating them using five state-of-the-art classifiers, i.e., Support Vector Machine (SVM), Naive Bayes, K-Nearest Neighbors, Decision Tree (DT), and Random Forest. The DeFs-CBDE method achieved 100 % accuracy on the brain dataset with three classifiers. On the lung dataset, DeFs-CBDE reached 97.56 % accuracy with SVM and DT. For the breast dataset, DeFs-CBDE attained 93.33 % accuracy, very close to the highest score of 93.81 %. The CNS dataset proved the most challenging, where the method achieved 91.67 % accuracy with SVM. Across all datasets, DeFs-CBDE consistently achieved high classification performance.
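
   The sketch below shows a generic binary differential evolution loop for gene/feature selection with a wrapper SVM fitness. It is a hedged illustration: the clustering-based mutation that defines CBDE is replaced here by the standard DE/rand/1 mutation mapped to bits through a sigmoid, and the data are synthetic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=200, n_informative=12, random_state=0)

POP, GEN, F, CR = 20, 30, 0.8, 0.3
n_feat = X.shape[1]

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()          # prefer compact gene subsets

population = (rng.random((POP, n_feat)) < 0.1).astype(float)
scores = np.array([fitness(ind) for ind in population])

for _ in range(GEN):
    for i in range(POP):
        r1, r2, r3 = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
        mutant = population[r1] + F * (population[r2] - population[r3])
        prob = 1.0 / (1.0 + np.exp(-mutant))                 # map to [0, 1]
        cross = rng.random(n_feat) < CR
        trial = population[i].copy()
        trial[cross] = (rng.random(cross.sum()) < prob[cross]).astype(float)
        s = fitness(trial)
        if s > scores[i]:                                    # greedy selection
            population[i], scores[i] = trial, s

best = population[scores.argmax()].astype(bool)
print("selected features:", best.sum(), "| best penalized CV accuracy:", round(scores.max(), 3))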

1197-1207
Abstract

   The durability and wear resistance of ceramic implants used under high operating loads largely depend on their mechanical characteristics. Ceramic composite based on hydroxyapatite is considered a promising biomaterial for the reconstruction of damaged bone tissue and the replacement of bone defects due to its biocompatibility and ability to provide osseointegration with bone tissue. In this work, to increase mechanical strength, the hydroxyapatite ceramic was reinforced with multi-walled carbon nanotube additives, which have high physical and mechanical properties. The potential of such research lies in the use of these composites in implantation areas that experience significant mechanical loads. The effectiveness of nanotube reinforcement depends on the ceramic composition, synthesis technology, and testing conditions, resulting in high variability in the final characteristics. At the same time, direct experimental study of the properties of each sample requires significant time. The use of mathematical models based on machine learning methods to streamline the analysis of the mechanical characteristics of composite materials is therefore relevant; such models make it possible to predict Vickers microhardness as a function of indentation load. In this study, experimental microhardness tests were carried out using the Vickers method for six sets of ceramic materials exposed to indentation loads ranging from 0.98 N to 9.8 N. Three machine learning methods were used to model the obtained data: a neural network, random forest, and gradient boosting. After averaging the values over all loads, it was determined that increasing the concentration of multi-walled carbon nanotubes to 0.5 wt.% raises the Vickers microhardness of the composite from 3.83 ± 0.39 GPa to 4.71 ± 0.40 GPa compared to hydroxyapatite without additives, and the reinforcement efficiency reaches 19 %. Thus, the greatest contribution to the increase in the microhardness of the composite was made by the addition of multi-walled carbon nanotubes at a concentration of 0.5 wt.%, while the addition of 1 and 2 wt.% led to a significant decrease in microhardness, which is associated with the agglomeration of multi-walled carbon nanotubes in the ceramic hydroxyapatite matrix. The simulation results based on experimental data allowed us to determine the optimal machine learning method for constructing a predictive model of the microhardness of the hydroxyapatite/multi-walled carbon nanotube ceramic composite over a wide range of loads. In addition, it was possible to establish the relationship between the composition of the composite and its mechanical characteristics, which opens up new possibilities for designing strong and durable ceramic implants.
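
   A hedged sketch of the predictive-model idea follows: regress Vickers microhardness on MWCNT concentration and indentation load with gradient boosting. The data below are synthetic stand-ins shaped after the trend reported in the abstract (a peak near 0.5 wt.%), not the authors' measurements, and the concentration grid is an assumption.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
conc = np.repeat([0.0, 0.1, 0.3, 0.5, 1.0, 2.0], 30)        # wt.% MWCNT (assumed compositions)
load = np.tile(np.linspace(0.98, 9.8, 30), 6)                # indentation load, N
hardness = (3.8 + 1.8 * conc * np.exp(-2.2 * conc)           # rises to ~0.5 wt.%, then drops
            - 0.03 * load                                     # mild indentation-size effect
            + rng.normal(scale=0.15, size=conc.size))         # measurement scatter, GPa

X = np.column_stack([conc, load])
model = GradientBoostingRegressor(random_state=0)
print("CV R^2:", cross_val_score(model, X, hardness, cv=5).mean().round(3))

model.fit(X, hardness)
print("predicted microhardness at 0.5 wt.%, 4.9 N:", model.predict([[0.5, 4.9]])[0].round(2), "GPa")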

1208-1219
Abstract

   This work concentrates on the effect of a flex-skin trailing edge flap on the aerodynamic characteristics of the SD7037 airfoil at low Reynolds numbers, in the range of 2·10⁵ to 5·10⁵, using computational methods. The study used a range of angles of attack (AoA) associated with the take-off phase and different flap angles. The numerical model was set up in the Siemens STAR-CCM+ package using the k-ω shear stress transport turbulence model and the (γ-Reθ) transition model, which provided an approximate solution of the Navier-Stokes equations. The computational solution was verified by comparison with available experimental data for the plain flap, and the results agreed well at lower AoAs. Results indicated that certain combinations of AoA and flap angle can notably raise the lift-to-drag ratio above the baseline conditions, thus improving performance, especially during the take-off stage. Some combinations were found to be inefficient and were recommended to be discarded. Additionally, the results showed that the flex-skin flap generated a higher lift coefficient but also a higher drag coefficient over the same range of AoAs compared to the plain flap.

1220-1228
Abstract

   This study addresses three-dimensional modeling of the thermal interaction between the core melt and the melt localization device (trap) during a severe accident at a nuclear power plant. An optimized configuration for filling the localization device with sacrificial material is proposed. The calculations incorporate the Reynolds-averaged Navier–Stokes equations, numerical solutions of the heat conduction equation, and a two-fluid interface dynamics model, enabling simultaneous consideration of turbulent flow within the liquid phases, the moving boundary of the melting sacrificial material (Stefan problem), and stratification with inversion. The analysis proceeds in three consecutive stages. The first stage models the melting of the sacrificial material; the second simulates the stratification of layers; the third evaluates heat transfer after stratification. Based on the results, an optimal filling configuration for the trap is developed. The study presents detailed volumetric temperature distributions throughout all three stages, the heat flux distribution on the trap walls, and the maximum thickness of the melted shell caused by intense thermal interaction. Comparison between three-dimensional simulations and similar two-dimensional studies demonstrates that 3D modeling more accurately captures the characteristic timing of solidification and subsequent melting processes. The advantages of the proposed approach over existing methods are highlighted. Its applicability for designing and optimizing melt localization devices is shown, and prospects for future development are discussed, including incorporating chemical reactions and adapting the model to other reactor types. The data convincingly suggest that the adopted configuration has significant potential to extend the period during which effective mitigation of severe accident consequences at nuclear power plants can be maintained.
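
   For reference, the moving-boundary (Stefan) condition at the melt front of the sacrificial material is usually written as follows (a textbook form in my notation; the paper couples it with RANS flow and interface dynamics):

% rho - density, L - latent heat of fusion, s(t) - front position,
% k_s, k_l - thermal conductivities of the solid and liquid phases, n - front normal.
\[
  \rho L \frac{ds}{dt} = k_{s}\left.\frac{\partial T_{s}}{\partial n}\right|_{\text{front}}
                        - k_{l}\left.\frac{\partial T_{l}}{\partial n}\right|_{\text{front}}
\]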

BRIEF PAPERS

1229-1233
Abstract

   The rapidly growing demand for the design of specialized computing systems, primarily embedded systems and systems-on-chip, is constrained by the limited capacity of developers due to the high complexity of the task. A Design Mechanism is proposed as an abstraction (a conceptual construct) that explicitly separates architectural logic from the implementation level according to a comprehensive criterion. Its use is most effective in the design of subsystems characterized by high internal variability and numerous technological alternatives. Such an abstraction, applied at all stages of the design process, enables more efficient formation and analysis of the design solution space, radically reducing conceptual errors and expanding the number of design outcomes available for reuse.



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2226-1494 (Print)
ISSN 2500-0373 (Online)