Metadata of articles for the last 2 years
The scientific journal Modeling, Optimization and Information Technology (Моделирование, оптимизация и информационные технологии)
Online media
ISSN 2310-6018


Structural and mechatronic aspects of the development of a robotic device for assessing geometric defects of pipelines

2025. Vol. 13. No. 3. id 1934
Khasanov I.I. 

DOI: 10.26102/2310-6018/2025.50.3.008

Reliable operation of small-diameter pipeline systems is an important task in ensuring the process safety of industrial facilities operating under high temperatures and pressures. One of the key factors influencing the occurrence of emergency situations is the thinning of pipe walls caused by erosion, corrosion, and stress corrosion cracking. In conditions of limited space and the impossibility of using standard non-destructive testing tools, there is an increasing need to develop compact automated solutions for internal diagnostics of pipeline geometric parameters. This paper presents the development and experimental study of a robotic diagnostic device designed for internal scanning of pipes with a minimum cross-sectional diameter of 130 mm. The device is a mechatronic system with eight drive wheels driven by gear motors and controlled by a Raspberry Pi 3 microcomputer. The body is manufactured using additive technologies and includes measurement and power modules located in separate sections. Laboratory tests have confirmed the operability of the device and its control algorithms. The developed software package ensures autonomous movement of the device as well as the collection and recording of diagnostic data. The results obtained make it possible to reconstruct the detailed geometry of the pipeline and to identify areas with an increased level of ovality and deformation, which is important for assessing the residual service life. The developed solution can be used both in research tasks and as part of industrial non-destructive testing systems for small-diameter pipes.

Keywords: modeling, diagnostic device, mechatronics, control, defects, robotic system, small diameter pipelines

Investigation of the relationship between the wetting angle and the surface microprofile parameters

2025. Vol. 13. No. 2. id 1933
Anisimov A.D.  Masterenko D.A. 

DOI: 10.26102/2310-6018/2025.49.2.039

A brief overview of new approaches to characterizing the quality of surfaces with hydrophobic properties is given. These approaches are based on computation-intensive mathematical procedures, including fractal methods. The relationship between the wetting angle of a hydrophobic surface and surface parameters such as roughness and the fractal dimension of the profile has been studied. A model of a superhydrophobic surface is developed, and its parameters are described, such as the effective hydrophobic wetting angle, the fraction of the solid phase of the surface in contact with the liquid, and the parameters of the hierarchical structure. It has been established that the use of nanostructured columns in the formation of a superhydrophobic surface, taking into account the hierarchical structure, makes it possible to significantly increase the wetting angle. The dependence of the wetting angle on the fraction of liquid-solid contact at the interface is determined, which is explained by the complication of the surface structure, and the relationship between the fraction of the solid phase and the fractal dimension is established. It was found that, in estimating the wetting angle, the correlation with the fractal dimension is significantly stronger than with the roughness parameters Ra and Rz. The correlation coefficients between the wetting angle and other parameters of the hydrophobic surface were determined using regression analysis. The results obtained can be used in the processing of measurement information in accordance with modern standards in the field of geometric characteristics of surfaces, including in the development of software for measuring parameters of hydrophobic surfaces.

Keywords: hydrophobicity, roughness, geometric characteristics of the surface, fractal dimension, surface microprofile, scale
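The fractal dimension of a profile mentioned in the abstract above is commonly estimated by box counting. The sketch below illustrates that standard technique under stated assumptions (2-D point samples, dyadic box sizes); it is not the authors' procedure:

```python
import math

def box_count_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D point set (e.g. a sampled
    surface profile) by box counting: the slope of log N(s) versus log s,
    where N(s) is the number of occupied boxes of side 1/s."""
    logs = []
    for s in scales:
        # Assign each point to a grid cell of side 1/s and count distinct cells.
        boxes = {(math.floor(x * s), math.floor(y * s)) for x, y in points}
        logs.append((math.log(s), math.log(len(boxes))))
    # Least-squares slope of log N(s) against log s.
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den
```

A densely sampled straight line yields a dimension near 1, and a filled square near 2, which is a quick sanity check before applying the estimator to measured microprofiles.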

Development of an improved differential activation module using Grad-CAM++ and semantic segmentation for facial attribute editing

2025. Vol. 13. No. 2. id 1932
Gu Chongyu  Gromov M.L. 

DOI: 10.26102/2310-6018/2025.49.2.046

Modern methods of facial attribute editing suffer from two systemic issues: unintended modification of secondary features and loss of contextual details (accessories, background, hair textures, etc.), which lead to artifacts and restrict their application in scenarios requiring photographic accuracy. To address these problems, we propose an improved differential activation module designed for precise editing while preserving contextual information. In contrast to the existing solution (EOGI), the proposed solution includes: the use of second- and third-order gradient information for precise localization of editable areas; the application of test-time augmentation (TTA) and principal component analysis (PCA) to center the class activation map (CAM) around objects and suppress noise; and the integration of semantic segmentation data to enhance spatial accuracy. The evaluation on the first 1,000 images of the CelebA-HQ dataset (resolution 1024×1024) demonstrates significant superiority over the current method EOGI: a 13.84 % reduction in the average FID (from 27.68 to 23.85), a 7.03 % reduction in the average LPIPS (from 0.327 to 0.304), and a 10.57 % reduction in the average MAE (from 0.0511 to 0.0457). The proposed method outperforms existing approaches in both quantitative and qualitative analyses. The results demonstrate improved detail preservation (e.g., earrings and backgrounds), which makes the method applicable in tasks demanding high photographic realism.

Keywords: deep learning, facial attribute editing, differential activation, class activation maps (CAM), semantic segmentation, generative adversarial network (GAN)

User authentication based on analysis of the length and timing parameters of handwritten signature

2025. Vol. 13. No. 2. id 1929
Dzyamko-Gamulets R.N.  Ievlev O.P. 

DOI: 10.26102/2310-6018/2025.49.2.040

This article presents methods for user authentication based on handwritten signature features characterizing its length both as a scalar value and as a function of the curve length of the signature segment over time. The main emphasis is on methods for extracting static and dynamic features from a handwritten signature; these features are unique to each person and can be used to decide whether a presented signature is genuine or forged. During the analysis, data on time characteristics are collected, including the time spent writing each symbol and the pauses between individual signature elements. The relevance of this study is due to the need to improve the security of user authentication in various systems where a handwritten signature serves as an important authentication element. The results of the study can be useful for creating more reliable authentication systems in areas such as banking, legal procedures, and other fields where a high degree of confidence in the authenticity of documents is required. The presented approaches not only contribute to increasing the level of authorization security, but also open avenues for further research in the field of biometric authentication. This, in turn, may lead to wider adoption of these technologies in practical applications in both online and offline systems.

Keywords: mathematical expectation, variance, function, handwritten signature, authentication, measure, metric, derivative, machine learning
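The length-versus-time feature described in the abstract above can be computed directly from sampled pen coordinates. A minimal sketch follows; the (x, y, t) sample format is an assumption for illustration, not taken from the paper:

```python
import math

def cumulative_length(samples):
    """Cumulative curve length L(t) of a sampled signature trajectory.
    `samples` are (x, y, t) points captured by the input device.
    The final value is the scalar total length; the whole sequence is the
    length-versus-time function used as a dynamic feature."""
    lengths = [0.0]
    for (x0, y0, _), (x1, y1, _) in zip(samples, samples[1:]):
        # Add the Euclidean distance of each pen-stroke segment.
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    return lengths
```

Pauses between signature elements show up as consecutive samples with zero added length, so the same array also exposes the timing features mentioned in the abstract.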

Method of training image classifiers using additional labels

2025. Vol. 13. No. 2. id 1928
Petrova I.S. 

DOI: 10.26102/2310-6018/2025.49.2.041

This paper is devoted to the development of a method for training classifiers that takes into account relationships between classes, represented as additional labels. The loss functions used in classification and the approaches to incorporating additional labels into them were analyzed. Based on this analysis, we propose as the foundation of our method a triplet loss with a flexible margin, designed on the basis of the original triplet loss. The flexible margin allows adjusting the distances between the embeddings of images depending on the degree of difference between their corresponding classes. This makes it possible to model different levels of similarity between classes: category, group, and subgroup levels. In addition, we develop a triplet mining strategy that prevents the model's weights from collapsing to zero and getting stuck in a trivial solution. The method is validated on tasks of product classification and gastrointestinal disease classification. As a result of applying the method, classification accuracy increased by 9 % in the disease recognition task and by 6 % in the product recognition task. The number of severe classification errors was reduced. The image embedding space formed by the triplet loss allows clustering and recognition of new classes without additional model training.

Keywords: loss function, classification, computer vision, triplets, labels, vector space
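A minimal sketch of a flexible-margin triplet loss of the kind described in the abstract above. The linear margin scaling and the class-difference levels are assumptions for illustration, not the authors' exact formulation:

```python
import math

def flexible_margin_triplet_loss(anchor, positive, negative,
                                 class_difference, base_margin=0.2):
    """Triplet loss with a flexible margin: the required gap between the
    anchor-positive and anchor-negative distances grows with the semantic
    difference between classes (e.g. 1 = same group, 2 = same category,
    3 = different categories). Margin values here are illustrative."""
    def dist(a, b):
        # Euclidean distance between two embedding vectors.
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    margin = base_margin * class_difference
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

A harder negative from a more distant class must therefore be pushed further away before the loss reaches zero, which is what lets the embedding space encode category, group, and subgroup similarity levels.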

Efficient collaborative edge computing for a transport network using a clustering service

2025. Vol. 13. No. 3. id 1925
Komarenko Y.A.  Krepyshev D.A. 

DOI: 10.26102/2310-6018/2025.50.3.005

Modern Internet of Vehicles (IoV) applications place high demands on reliability and minimal response time in dynamic traffic conditions. However, high-speed vehicles and complex infrastructure such as intersections can lead to loss of communication and increased delays in data transmission and processing. The paper proposes an innovative framework for cluster interaction of vehicles based on edge computing (CCVEC), implemented on the OpenStack platform. The development is focused on ensuring stable communication and rational allocation of computing resources in intelligent transport systems. The conducted testing covered various traffic scenarios, including high-density traffic areas. The results showed that the proposed solution supports stable communication between onboard sensors and cloud services. Under optimal conditions, the average latency was about 390 ms, and the throughput reached 30 kB/s. The platform demonstrated high performance and efficient memory usage when allocating resources. Thus, the CCVEC framework is able to reduce delays, increase connection reliability, and efficiently use local resources, which makes it promising for implementation in IoV and edge computing systems.

Keywords: internet of vehicles (IoV), edge computing, intelligent transport systems, communication reliability, data transmission latency, cluster interaction, computing resource allocation, OpenStack, cloud services, on-board sensors

Designing a seismological wave monitoring system based on neural networks

2025. Vol. 13. No. 2. id 1922
Vikhtenko E.M.  Lukashevich S.K.  Manzhula I.S. 

DOI: 10.26102/2310-6018/2025.49.2.043

The article is devoted to the design of an automated information system for monitoring seismological activity in the Far Eastern region of Russia. The Far East belongs to earthquake-prone areas, but due to the peculiarities of territorial development, the system for monitoring the seismological situation in the region is not sufficiently developed. Currently, researchers are working on organizing a system for collecting seismological data. The collected information on seismological events in the region provides an opportunity for their further analysis in order to identify previously unknown patterns and develop methods for predicting earthquakes before they affect the region's infrastructure. The study examines the existing methods of measuring and marking seismic waves, as well as the features of the territory, in order to draw up requirements for the system. As a result of the research, logical and physical schemes of the monitoring system are proposed, based on the use of neural networks to track the arrival of P and S waves in near-real-time mode. The system under development includes modules for obtaining and accumulating primary data, as well as a neural network module. The structure of the information system is planned to be as flexible as possible for convenient configuration of the network architecture and its training.

Keywords: monitoring system, seismic waves, earthquakes, STA/LTA, engineering, neural network, big data
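The STA/LTA ratio listed in the keywords above is the classic baseline against which neural pickers of P-wave arrivals are compared. A minimal sketch follows; window lengths and the trigger threshold are illustrative values, not the system's configuration:

```python
def sta_lta(signal, sta_len, lta_len):
    """Classic STA/LTA detector: the ratio of short-term to long-term
    average absolute amplitude. A P-wave arrival appears as a sharp
    jump in the ratio."""
    ratios = []
    for i in range(lta_len, len(signal) + 1):
        sta = sum(abs(x) for x in signal[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in signal[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def first_trigger(ratios, threshold=3.0):
    """Index (relative to the first full LTA window) where the ratio
    first exceeds the threshold, or None if it never does."""
    for i, r in enumerate(ratios):
        if r >= threshold:
            return i
    return None
```

On a quiet trace the ratio stays near 1.0; a sudden amplitude step drives it well above the threshold, which is the behavior a learned picker must improve upon in noisy conditions.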

Human pose estimation from video stream

2025. Vol. 13. No. 2. id 1920
Potenko M.A. 

DOI: 10.26102/2310-6018/2025.49.2.036

The article presents a study of a human body pose estimation system based on the use of two neural networks. The proposed system allows determining the spatial location of 33 key points corresponding to the main joints of the human body (wrists, elbows, shoulders, feet, etc.), as well as constructing a segmentation mask for accurate delineation of human figure boundaries in an image. The first neural network implements object detection functions and is based on the Single Shot Detector (SSD) architecture with the application of Feature Pyramid Network (FPN) principles. This approach ensures the effective combination of features at different levels of abstraction and enables the processing of input images with a resolution of 224×224 for subsequent determination of people's positions in a frame. A distinctive feature of the implementation is the use of information from previous frames, which helps optimize computational resources. The second neural network is designed for key point detection and segmentation mask construction. It is also based on the principles of multi-scale feature analysis using FPN, ensuring high accuracy in localizing key points and object boundaries. The network operates on images with a resolution of 256×256, which allows achieving the necessary precision in determining spatial coordinates. The proposed architecture is characterized by modularity and scalability, enabling the system to be adapted for various tasks requiring different numbers of control points. The research results have broad practical applications in fields such as computer vision, animation, cartoon production, security systems, and other areas related to the analysis and processing of visual information.

Keywords: neural networks, convolutional neural networks, machine learning, computer vision, human pose estimation, keypoints, image segmentation

Optimization of the management of the development of the process of population medical examination based on predictive modeling of the rate of change in morbidity in the organizational system of regional health care

2025. Vol. 13. No. 2. id 1919
Gafanovich E.Y.  Lvovich A.I.  Preobrazhenskiy A.P.  Choporov O.N. 

DOI: 10.26102/2310-6018/2025.49.2.047

The article examines the possibility of increasing the efficiency of managing the development of the process of medical examination of the population of the region based on predictive modeling of the rate of change in morbidity. Predictive estimates serve as a prerequisite for optimizing the decisions of the managing center of the organizational system of regional healthcare on the distribution of resource provision planned within the forecasting horizon. Based on long-term statistical data, the average annual rates of change in morbidity and in the volumes of medical examinations are calculated as estimates of additional resource provision. The initial data make it possible to calculate the rates of change for the group of nosological units as a whole, for each nosological unit, and for each territorial entity. Visual expert modeling is used to estimate the degree of synchronization between changes in morbidity and the allocation of additional volumes of medical examination. The expediency of using the analysis of long-term medical and statistical information and predictive estimates for making management decisions within the organizational system of healthcare in the Voronezh Region is substantiated. Based on the analysis of retrospective data, it is concluded that, without an optimization approach, the rate of change in morbidity is insufficiently synchronized with the allocated resource. In addition, given the large number of territorial entities in the Voronezh Region, it is proposed to first classify them by the rate of change into three groups: low, medium, and high. The main classes of problems of optimizing management decisions on the distribution of additional volumes of medical examinations within the planning horizon of the development of the organizational system are considered. For this purpose, prognostic estimates are transformed into priority coefficients for the use of the planned resource. The distribution of the volumes of medical examinations between nosological units relies on optimization with expert choice. An optimization problem of resource distribution among territorial entities is formulated, and the corresponding management decisions are determined using a multi-alternative optimization algorithm.

Keywords: organizational system, development management, predictive modeling, visual expert modeling, optimization

Development of a lightweight model for automatic classification of structured and unstructured data in streaming sources to optimize optical character recognition

2025. Vol. 13. No. 3. id 1918
Gavrilov V.S.  Korchagin S.A.  Dolgov V.I.  Andriyanov N.A. 

DOI: 10.26102/2310-6018/2025.50.3.006

This article discusses the task of preliminary assessment of incoming electronic document flow based on computer vision technologies. The authors synthesized a dataset of images with structured data based on the invoice form and also collected scans of various documents, ranging from pages of scientific articles and documentation received in the electronic mailbox of a scientific organization to Rosstat reports. Thus, the first part of the dataset consists of structured data with a strict form, and the second part consists of unstructured scans, since information can be presented in different ways on scanned documents (text only, text with images, graphs), as different sources have different requirements and their own standards. The primary analysis of data in streaming sources can be performed using computer vision models. The experiments performed have shown high accuracy of convolutional neural networks. In particular, a neural network with the Xception architecture achieves an accuracy of more than 99 %, an advantage of about 9 % over the simpler MobileNetV2 model. The proposed approach will allow for the primary filtering of documents by department without using large language and character recognition models, which will increase speed and reduce computational costs.

Keywords: intelligent document processing, computer vision, convolutional neural networks, stream data processing, machine learning

Platform for testing radiological artificial intelligence-powered software

2025. Vol. 13. No. 2. id 1917
Kovalchuk A.Y.  Ponomarenko A.P.  Arzamasov K.P. 

DOI: 10.26102/2310-6018/2025.49.2.023

The amount of AI-based software used in radiology has been rapidly increasing in recent years, and the effectiveness of such AI services should be carefully assessed to ensure the quality of the developed algorithms. Manual assessment of such systems is a labor-intensive process. In this regard, an urgent task is to develop a specialized unified platform designed for automated testing of AI algorithms used to analyze medical images. The proposed platform consists of three main modules: a testing module that ensures interaction with the software being tested and collects data processing results; a viewing module that provides tools for visually evaluating the obtained graphic series and structured reports; a metrics calculation module that allows calculating diagnostic characteristics of the effectiveness of artificial intelligence algorithms. During the development, such technologies as Python 3.9, Apache Kafka, PACS and Docker were used. The developed platform has been successfully tested on real data. The obtained results indicate the potential of using the developed platform to improve the quality and reliability of AI services in radiation diagnostics, as well as to facilitate the process of their implementation in clinical practice.

Keywords: platform, diagnostic imaging, testing, medical images, artificial intelligence

Artificial intelligence in the task of generating distractors for test questions

2025. Vol. 13. No. 2. id 1915
Dagaev A. 

DOI: 10.26102/2310-6018/2025.49.2.028

Creating high-quality distractors for test items is a labor-intensive task that plays a crucial role in the accurate assessment of knowledge. Existing approaches often produce implausible alternatives or fail to reflect typical student errors. This paper proposes an AI-based algorithm for distractor generation. It employs a large language model (LLM) to first construct a correct chain of reasoning for a given question and answer, and then introduces typical misconceptions to generate incorrect but plausible answer choices, aiming to capture common student misunderstandings. The algorithm was evaluated on questions from the Russian-language datasets RuOpenBookQA and RuWorldTree. Evaluation was conducted using both automatic metrics and expert assessment. The results show that the proposed algorithm outperforms baseline methods (such as direct prompting and semantic modification), generating distractors with higher levels of plausibility, relevance, diversity, and similarity to human-authored reference distractors. This work contributes to the field of automated assessment material generation, offering a tool that supports the development of more effective evaluation resources for educators, educational platform developers, and researchers in natural language processing.

Keywords: distractor generation, artificial intelligence, large language models, knowledge assessment, test items, automated test generation, NLP

Methodology for creating a dataset for predictive analysis of an industrial robot

2025. Vol. 13. No. 2. id 1912
Kormin T.G.  Tikhonov I.N.  Berestova S.A.  Zyryanov A.V. 

DOI: 10.26102/2310-6018/2025.49.2.034

Industrial robots are one of the ways to increase production volumes. Bundling, milling, welding, laser processing, and 3D printing are among the processes that require maintaining high-precision positioning of industrial robots throughout the entire operation cycle. This article analyzes the use of the Denavit-Hartenberg (DH) method to determine the positioning and orientation errors of an industrial robot. In this study, the DH method is used to create a model of possible errors in industrial robots and to create a database of deviations of the links and the end effector of the robot from a predetermined trajectory. Special attention is paid to the presentation of practical steps to create a synthetic dataset of axis deviations of an industrial robot, starting from the kinematic model of the robot and ending with the preparation of the final data format for subsequent analysis and the construction of a predictive analytics model. The importance of careful data preparation is highlighted by examples from other research in the field of predictive analytics of industrial equipment, demonstrating the economic benefits of timely detection and prevention of possible equipment failures. The developed model is subsequently used to generate a synthetic dataset of axis deviations of an industrial robot. The proposed data collection model and methodology for creating a dataset for predictive analytics are being tested on a six-axis robot designed for this purpose.

Keywords: inverse kinematics problem, predictive analytics, simulation modeling, industrial robot malfunction assessment, Denavit-Hartenberg method, automation, fault diagnosis
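As a rough illustration of the DH-based kinematic modeling described in the abstract above, here is a forward-kinematics sketch using the standard DH convention. The link parameters are made up for the example, not those of the robot in the study; perturbing such parameters is one plausible way a synthetic deviation dataset could be generated:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link:
    Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the per-link transforms; returns the end-effector pose matrix.
    `dh_params` is a list of (theta, d, a, alpha) tuples."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_params:
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T
```

Comparing the pose computed from nominal parameters with the pose from slightly perturbed ones yields exactly the kind of link and end-effector deviation records the abstract describes collecting.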

A conceptual approach to the integration of artificial intelligence into engineering activities

2025. Vol. 13. No. 2. id 1907
Terekhin M.A.  Ivaschenko A.V.  Kulakov G.A. 

DOI: 10.26102/2310-6018/2025.49.2.031

The article addresses the pressing issue of developing a unified information space for the integration of artificial intelligence components in the context of information support for design and technological preparation of production. It considers the challenge of creating a digital engineering assistant whose functions include the analysis of design documentation, processing of two-dimensional and three-dimensional models, and generation of new design and technological solutions. The model of interaction between the digital assistant and the engineer is proposed within the framework of integrating computer-aided design systems, engineering data management systems, and digital content management systems. This integration is based on the concept of "affordance", which is widely used to describe the characteristics of artificial intelligence systems, as well as in perception psychology and design to describe human interaction with technical devices. Using this concept, an information-logical model of an integrated enterprise information environment has been developed: an environment that brings together natural and artificial intelligence for the purpose of facilitating creative engineering activity. The classification of implementation options based on affordances is proposed as a foundation for compiling and annotating training datasets for generative models, as well as a guideline for formulating subsequent prompt queries. The proposed concept has been practically implemented and illustrated through the unification of medical device designs, including rehabilitation products, surgical navigation systems, multisensory simulators, and a modular expert virtual system. The findings presented in the article have practical value for the automation of engineering decision-making support, as well as for higher education in training engineering specialists, including in interdisciplinary fields such as medical engineering.

Keywords: computer-aided design systems, product information support, artificial intelligence, scientific and technical creativity, engineering activities, affordance

Detection of eating disorders in social media texts and network analysis of affected users

2025. Vol. 13. No. 2. id 1906
Solokhov T.D. 

DOI: 10.26102/2310-6018/2025.48.1.033

Eating disorders (EDs) are among the most pressing issues in public health, affecting individuals across various age and social groups. With the rapid growth of digitalization and the widespread use of social media, there emerges a promising opportunity to detect signs of EDs through the analysis of user-generated textual content. This study presents a comprehensive approach that combines natural language processing (NLP) techniques, Word2Vec vectorization, and a neural network architecture for binary text classification. The model aims to identify whether a post is related to disordered eating behavior. Additionally, the study incorporates social network analysis to examine the structure of interactions among users who publish related content. Experimental results demonstrate high precision (0.87), recall (0.84), and overall performance, confirming the model's practical applicability. The network analysis revealed clusters of users with ED-related content, suggesting the presence of a "social contagion" effect, whereby dysfunctional behavioral patterns may spread through online social connections. These findings highlight the potential of NLP and graph-based modeling in the early detection, monitoring, and prevention of eating disorders by leveraging digital traces left in online environments.

Keywords: eating disorders, text analysis, machine learning, neural network models, natural language processing, social graph, network analysis
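The pipeline described in the abstract above (Word2Vec vectorization followed by a classifier) can be caricatured in a few lines. The embedding table, weights, and tokens below are invented purely for illustration; they are not the trained model or vocabulary from the study:

```python
import math

# Toy embedding table standing in for trained Word2Vec vectors
# (hypothetical 2-D values; real models use hundreds of dimensions).
EMBEDDINGS = {
    "diet": [0.9, 0.1], "fasting": [0.8, 0.2], "calories": [0.7, 0.3],
    "movie": [0.1, 0.9], "music": [0.0, 0.8],
}

def post_vector(tokens, dim=2):
    """Average the word vectors of known tokens, a common Word2Vec baseline
    for representing a whole post."""
    known = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not known:
        return [0.0] * dim
    return [sum(v[i] for v in known) / len(known) for i in range(dim)]

def classify(tokens, weights=(3.0, -3.0), bias=0.0, threshold=0.5):
    """Logistic layer over the averaged vector. The weights here are
    illustrative stand-ins, not the paper's trained network."""
    v = post_vector(tokens)
    z = sum(w * x for w, x in zip(weights, v)) + bias
    return 1.0 / (1.0 + math.exp(-z)) >= threshold
```

In the study this logistic layer is replaced by a neural network, and the binary output feeds the social-graph analysis of users who publish flagged content.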

Statistical estimation of the probability of reaching a target price considering volatility and returns across different timeframes

2025. Vol. 13. No. 2. id 1905
Gilmullin T.M.  Gilmullin M.F. 

DOI: 10.26102/2310-6018/2025.49.2.030

The article proposes an original algorithm for statistical estimation of the probability of reaching a target price based on the analysis of returns and volatility using a drifted random walk model and integration of data from different timeframes. The relevance of the study stems from the need to make informed decisions in algorithmic trading under market uncertainty. The key feature of the approach is the aggregation of probabilities computed from different time intervals using Bayesian adjustment and weighted averaging, with weights dynamically determined based on volatility. The use of a universal fuzzy scale for qualitative interpretation of the evaluation results is also proposed. The algorithm includes the calculation of logarithmic returns, trend, and volatility, while stability is improved through data cleaning and anomaly filtering using a modified Hampel method. The article presents a calculation example using real OHLCV data and discusses possible approaches to validating the accuracy of the estimates when historical records of target price attainment are available. The results demonstrate the practical applicability of the proposed method for assessing the feasibility of reaching forecasted targets and for filtering trading signals. The developed algorithm can be used in risk management, trading strategy design, and expert decision support systems in financial markets.

Keywords: statistical probability estimation, target price, return, volatility, random walk with drift, timeframe integration, Bayesian adjustment, fuzzy logic, logarithmic return, financial modeling
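Under the drifted-random-walk assumption named in the abstract above, a single-timeframe hitting probability can be computed with the standard first-passage formula for Brownian motion with drift. The sketch below illustrates only that formula; it omits the article's Bayesian aggregation across timeframes and Hampel-based anomaly filtering:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hit_probability(price, target, mu, sigma, horizon):
    """Probability that a drifted log-price random walk touches `target`
    (above the current price) within `horizon` periods.
    mu, sigma: per-period drift and volatility of log returns."""
    b = math.log(target / price)  # log-distance to the target
    if b <= 0:
        return 1.0                # target at or below current price
    s = sigma * math.sqrt(horizon)
    # First-passage probability for Brownian motion with drift mu:
    # P = Phi((mu*T - b)/s) + exp(2*mu*b/sigma^2) * Phi((-mu*T - b)/s)
    return (norm_cdf((mu * horizon - b) / s)
            + math.exp(2.0 * mu * b / sigma ** 2)
            * norm_cdf((-mu * horizon - b) / s))
```

With zero drift the expression collapses to the reflection-principle result 2·Φ(−b/s), a convenient sanity check before layering timeframe weighting on top.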

Interpretable reinforcement learning to optimize the operational efficiency of enterprises in the context of digital transformation

2025. Vol. 13. No. 3. id 1901
Prokhorova O.K.  Petrova E.S. 

DOI: 10.26102/2310-6018/2025.50.3.001

In the context of the digital transformation of education, MOOC platforms face the need to optimize operational processes while maintaining the quality of education. Traditional approaches to resource management often do not take into account complex temporal patterns of user behavior and individual learning characteristics. This paper proposes an innovative solution based on interpretable reinforcement learning (RL) integrated with the Shapley value method to analyze the contribution of factors. The study demonstrates how data on activity time, user IDs, training goals, and other parameters can be used to train an RL agent capable of optimizing the allocation of platform resources. The developed approach allows: quantifying the contribution of each factor to operational efficiency; identifying hidden temporal patterns of user activity; and personalizing load management during peak periods. The article contains a mathematical justification of the method, practical implementation in MATLAB, as well as the results of testing, which showed a reduction in operating costs while increasing user satisfaction. Special attention is paid to the interpretability of the RL agent's decisions, which is critically important for the educational sphere. The work provides a ready-made methodology for the implementation of intelligent management systems in digital education, combining theoretical developments with practical recommendations for implementation. The results of the study open up new opportunities for improving the effectiveness of MOOC platforms in the face of growing competition in the educational technology market.

Keywords: reinforcement learning, Shapley value, operational efficiency, digital transformation, interpretable AI, business process optimization

Analyzing customer behavior and choosing marketing strategies based on reinforcement learning

2025. T.13. № 2. id 1900
Prokhorova O.K.  Petrova E.S. 

DOI: 10.26102/2310-6018/2025.49.2.035

In today's competitive market, companies face the challenge of choosing optimal marketing strategies that maximize customer engagement, retention, and revenue. Traditional methods such as rule-based approaches or A/B testing are often not flexible enough to adapt to dynamic customer behavior and long-term trends. Reinforcement learning (RL) offers a promising solution, enabling adaptive decision-making through continuous interaction with the environment. This article explores the use of RL in marketing, demonstrating how customer data – such as purchase history, campaign interactions, demographic characteristics, and loyalty metrics – can be used to train an RL agent. The agent learns to choose personalized marketing actions, such as sending discounts or customized offers, in order to maximize metrics such as revenue growth or reduced customer churn. The article provides a step-by-step guide to implementing an RL-based marketing strategy using MATLAB: the creation of a custom environment, the design of an RL agent, and the training process are considered, along with practical recommendations for interpreting the agent's decisions. By simulating customer interactions and evaluating agent performance, we demonstrate the potential of RL to transform marketing strategies. The aim of the work is to bridge the gap between advanced machine learning methods and their practical application in marketing by offering a roadmap for companies seeking to use the capabilities of RL for decision making.
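The agent-environment loop described above can be sketched with tabular learning on a toy problem. This is not the article's MATLAB implementation: the two customer states, two marketing actions, and the reward table are hypothetical, and a one-step (bandit-style) update stands in for the full RL formulation.

```python
import random

random.seed(0)
states = ["loyal", "at_risk"]
actions = ["discount", "no_action"]
# Hypothetical mean rewards (revenue uplift) for each state-action pair
mean_reward = {("loyal", "no_action"): 1.0, ("loyal", "discount"): 0.5,
               ("at_risk", "discount"): 2.0, ("at_risk", "no_action"): -1.0}

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, eps = 0.1, 0.2
for _ in range(5000):
    s = random.choice(states)                       # a customer arrives in some state
    if random.random() < eps:                       # epsilon-greedy exploration
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: Q[(s, act)])
    r = mean_reward[(s, a)] + random.gauss(0, 0.1)  # noisy observed reward
    Q[(s, a)] += alpha * (r - Q[(s, a)])            # incremental value update

policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in states}
print(policy)
```

The learned policy sends discounts to at-risk customers and leaves loyal customers alone, and the Q-table itself is directly inspectable, which is the kind of interpretability the article emphasizes.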

Keywords: reinforcement learning, customer behavior, marketing strategies, state of the environment, agent actions, agent reward

Construction of regression models with switching nonlinear transformations for the assigned explanatory variable

2025. T.13. № 2. id 1893
Bazilevskiy M.P. 

DOI: 10.26102/2310-6018/2025.49.2.024

Often, when constructing regression models, it is necessary to resort to nonlinear transformations of explanatory variables. Both elementary and non-elementary functions can be used for this. This is done because many patterns in nature are complex and poorly described by linear dependencies. Usually, the transformations of explanatory variables in a regression model are constant for all observations of the sample. This work is devoted to constructing nonlinear regressions with switching transformations of the selected explanatory variable. In this case, the least absolute deviations method is used to estimate the unknown regression parameters. To form the rule for switching transformations, an integer function "floor" is used. A mixed 0–1 integer linear programming problem is formulated. The solution of this problem leads to both the identification of optimal estimates for nonlinear regression and the identification of a rule for switching transformations based on the values of explanatory variables. A problem of modeling the weight of aircraft fuselages is solved using this method. The nonlinear regression constructed with the proposed method using switching transformations turned out to be more accurate than the model using constant transformations over the entire sample. An advantage of the mechanism developed for constructing regression models is that thanks to the knowledge of the rules for switching transformations, the resulting regression can be used for forecasting.
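The switching mechanism can be sketched in a few lines. The transformation pair (ln and square root), the toy data, and the brute-force grid search below are all illustrative assumptions: the paper estimates the parameters and the switching rule jointly by solving a mixed 0-1 integer linear program, which this sketch does not reproduce.

```python
import math

def g(x, c):
    """Switching transformation: the integer 'floor' function routes each
    observation to ln(x) or sqrt(x) depending on the threshold c."""
    return math.log(x) if math.floor(x / c) == 0 else math.sqrt(x)

def lad_loss(a, b, c, data):
    """Least absolute deviations objective for y ~ a + b * g(x, c)."""
    return sum(abs(y - (a + b * g(x, c))) for x, y in data)

# Toy data generated with a = 1, b = 2 and a switch threshold of c = 5
data = [(x, 1 + 2 * (math.log(x) if x < 5 else math.sqrt(x)))
        for x in (1, 2, 3, 4, 5, 6, 7, 9)]

# Brute-force search over a small parameter grid recovers the rule
best = min(((lad_loss(a / 2, b / 2, c, data), a / 2, b / 2, c)
            for a in range(0, 5) for b in range(2, 7) for c in (3, 4, 5, 6)),
           key=lambda t: t[0])
print(best)  # zero loss at a=1.0, b=2.0, c=5
```

Because the recovered switching rule is explicit, the fitted model can be applied to new observations, which is exactly the forecasting advantage the abstract notes.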

Keywords: regression analysis, nonlinear regression, least absolute deviations method, mixed 0–1 integer linear programming problem, integer function «floor», weight model of aircraft fuselage

Review and analysis of optical sensor-based technical vision systems technologies for autonomous navigation on dirt roads

2025. T.13. № 2. id 1892
Bychkov A.  Bulanov A. 

DOI: 10.26102/2310-6018/2025.49.2.045

This review is devoted to computer vision technologies for the autonomous navigation of a mobile robot on dirt roads, and to analyzing their degree of technological readiness. The selection of studies was conducted according to the PRISMA methodology using the academic article aggregator Google Scholar. Based on the analysis of the works from the selected sample, key technologies were identified, including datasets, terrain mapping techniques, and methods for road and obstacle detection. These were further divided into sub-technologies, each of which was evaluated for its level of technological readiness using the scale presented in the study – a newly proposed interpretation of the TRL scale – taking into account the particular challenges of working on dirt roads and in environments that are generally difficult to replicate under laboratory conditions. As a result of the study, statistics were compiled that highlight the most significant works in the field of autonomous navigation on dirt roads. It was also concluded that the main trend in navigation development involves the acquisition and processing of comprehensive data, that traversability analysis focuses on the extraction and processing of geometric features, and that there is an urgent need for high-quality datasets.

Keywords: off-road, dirt roads, obstacle detection, depth sensors, off-road classification, datasets

Practical implementation of software automation of the BP curve and its components for the analysis of the balance of payments of Russia

2025. T.13. № 3. id 1891
Shchegolev A.V. 

DOI: 10.26102/2310-6018/2025.50.3.019

This article presents a practical software implementation of the balance of payments (BP) curve using the Python programming language. The article is aimed at modeling the relationship between the interest rate, the exchange rate, and the state of external economic equilibrium within the modified IS-LM-BP framework. The use of numerical methods and machine learning algorithms makes it possible to analyze the dynamics of macroeconomic indicators and assess the impact of external economic factors on the country's balance of payments. The study uses real statistical data, which ensures the practical applicability of the results obtained. The leading approach of the research is the development of software code for the numerical solution of a system of equations, the calibration of the model on empirical data, and the construction of forecasts over various time horizons. The materials of the article demonstrate the value of modern computational economics tools for analyzing and modeling macroeconomic equilibrium, as well as their potential in developing economic policy measures. The model is useful for strategic analysis, as it makes it possible to assess the impact of changes in interest rates and the exchange rate on macroeconomic equilibrium. The developed methodology allows not only building a BP curve from real data, but also using it to predict future economic conditions, which makes the approach useful for macroeconomic analysis and strategic planning.
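The numerical core of such a model is the solution of a small system of equilibrium equations. The sketch below solves a linearized three-equation IS-LM-BP system for output Y, the interest rate r, and the exchange rate e; all coefficients are hypothetical round numbers, not the article's calibrated values.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

# Linearized IS-LM-BP system in (Y, r, e), hypothetical coefficients:
#   IS:  0.30*Y + 50*r - 20*e = 1000   (goods-market equilibrium)
#   LM:  0.25*Y - 60*r        =  500   (money-market equilibrium)
#   BP: -0.10*Y + 40*r + 20*e =  -50   (balance-of-payments equilibrium)
A = [[0.30, 50, -20], [0.25, -60, 0], [-0.10, 40, 20]]
b = [1000, 500, -50]
Y, r, e = solve3(A, b)
print(f"Y = {Y:.1f}, r = {r:.3f}, e = {e:.3f}")
```

Re-solving the system for a grid of interest rates traces out the BP curve; in practice the coefficients would be calibrated on the statistical data the article describes.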

Keywords: balance of payments, BP curve, IS-LM-BP model, numerical modeling, macroeconomic equilibrium, correlation-regression analysis, python

Using the ternary balanced number system to improve the accuracy of calculations

2025. T.13. № 2. id 1889
Blinova D.V.  Giniyatullin V.M.  Kupbaev T. 

DOI: 10.26102/2310-6018/2025.49.2.026

The paper describes the use of the ternary balanced number system for calculating the elements of inverse matrices for ill-conditioned matrices. The condition number of a matrix characterizes how strongly the solution of a system of linear equations can change in response to small perturbations in the data: the higher the condition number, the more sensitive the matrix is to small changes in the data. As an example of an ill-conditioned matrix, this paper considers the three-by-three Hilbert matrix. Based on a known closed-form expression, the true values of the elements of the inverse Hilbert matrix are calculated. The paper assesses the errors in computing the elements of the inverse Hilbert matrix obtained with varying degrees of calculation accuracy in the binary number system (on a computer, with a software implementation in C) and in the ternary balanced number system (calculations performed manually). The results are compared in the decimal number system. It is shown that the ternary balanced number system reduces the error in computing the elements of inverse ill-conditioned matrices several-fold (by a factor of 3 or more for low-precision data and by a factor of 1.5 or more for higher-precision data).
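For readers unfamiliar with the number system itself, balanced ternary represents integers with the digits -1, 0, 1. A minimal conversion sketch (the paper's trit-level arithmetic and matrix computations are not reproduced here; "T" denoting -1 is a common notational convention):

```python
def to_balanced_ternary(n):
    """Integer -> balanced-ternary string over the digits T (-1), 0, 1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:           # rewrite digit 2 as 3 - 1: carry 1, emit -1
            n += 1
            digits.append("T")
        else:
            digits.append(str(r))
    return "".join(reversed(digits))

def from_balanced_ternary(s):
    value = 0
    for ch in s:
        value = 3 * value + {"T": -1, "0": 0, "1": 1}[ch]
    return value

print(to_balanced_ternary(5))   # "1TT", i.e. 9 - 3 - 1
```

Because every digit set is symmetric around zero, truncating a balanced-ternary expansion rounds to nearest automatically, which is one source of the error reduction the paper reports.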

Keywords: inverse matrix, the Hilbert matrix, ternary balanced number system, ill-conditioned matrix, calculation errors

Informative features for electromagnetic detection and recognition of biological objects

2025. T.13. № 2. id 1888
Aleshkov A.A.  Tsvetkov G.A.  Kokovin A.N. 

DOI: 10.26102/2310-6018/2025.49.2.020

The relevance of the study stems from the need to improve the reliability and efficiency of physical protection systems at protected facilities in the face of growing security threats, which calls for more sensitive and selective methods of intruder identification; the developed method – electromagnetic detection and recognition of biological objects (BO) – is one such method. The purpose of the work is to study the bifurcation process of interaction between an external electromagnetic field in the radio-wave range and the electromagnetic shell of a living organism, in order to substantiate, evaluate, and calculate informative features for the electromagnetic detection and recognition of BO, with the subsequent formation of a dictionary of typical features. The study is based on a previously developed mathematical model of a BO, refined and supplemented through an analysis of the scientific literature on bioradioinformative technology and bioelectromagnetism. In the course of the work, the conditions and operating modes of a biological medium generating electromagnetic radiation are determined and described, depending on how the energy and frequency parameters of the external field combine with the characteristics of this medium. The nomenclature of the most informative features for electromagnetic recognition – bifurcation parameters characterizing the mass, dimensions, and electrodynamic properties of a biological object – is proposed and substantiated. Analytical expressions for calculating the classification features of BO are derived and confirmed by the results of a computational experiment. A dictionary of intruder attributes is developed, enabling informed decision-making about the presence of an object in the controlled space, its membership in a certain class, and its motion parameters. The presented results can be used in the development of intruder identification means for security and territory monitoring systems.

Keywords: informative features, information interaction, biological object, electromagnetic fields, strength, bioelectromagnetism, intruder identification, bifurcation parameters, feature dictionary

A novel hybrid anomaly detection model using federated graph neural networks and ensemble machine learning for network security

2025. T.13. № 2. id 1887
Arm A.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2025.49.2.044

Traditional network intrusion detection systems face increasingly complex challenges as the sophistication and frequency of cyber-attacks grow. This research proposes a federated ensemble graph-based network (FEGB-Net) as a novel hybrid approach to anomaly detection that increases detection performance while minimizing false positives. The framework combines federated graph neural networks with an ensemble of three well-established machine learning techniques – Random Forest, XGBoost, and LightGBM – to accurately characterize expected traffic patterns and discern anomalies. Moreover, the framework uses federated learning to ensure privacy-compliant decentralized training across multiple clients learning the same model concurrently without exposing raw data. FEGB-Net is evaluated on the CICIDS2017 dataset, achieving 97.1% accuracy, a 96.2% F1-score, and a 0.98 AUC-ROC, surpassing results from both traditional machine learning and deep learning approaches. By relying on novel graph signal processing approaches to shape the relational learning and on ensemble-based voting to categorize results, FEGB-Net can become a practical and effective framework for real-world use due to its transparent interpretability, relative ease of use, and scalability. Key contributions include a privacy-preserving federated GNN and ensemble framework, a novel meta-fusion algorithm, a reproducible Python implementation, and a large-scale evaluation on CICIDS2017. Future work includes real-time experiments and subsequent research considering new attack vectors.
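The ensemble-voting stage can be illustrated with a weighted soft vote. This is a simplified stand-in, not the paper's meta-fusion algorithm: the per-model probabilities below are invented, and the graph-neural-network features are omitted entirely.

```python
def soft_vote(prob_lists, weights):
    """Weighted soft vote: combine per-model anomaly probabilities into
    a single fused score for each sample."""
    total = sum(weights)
    return [sum(w * p for w, p in zip(weights, sample)) / total
            for sample in zip(*prob_lists)]

# Hypothetical anomaly probabilities from the three base models
# for two traffic samples (sample 0 anomalous, sample 1 benign)
rf, xgb, lgbm = [0.9, 0.2], [0.8, 0.3], [0.7, 0.1]
fused = soft_vote([rf, xgb, lgbm], weights=[1.0, 1.0, 1.0])
labels = [int(p >= 0.5) for p in fused]   # 1 = flagged as anomaly
print(fused, labels)
```

In a federated deployment, each client would compute such scores locally and only model updates, never raw traffic, would leave the client, which is the privacy property the framework targets.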

Keywords: network security, anomaly detection, federated learning, graph neural networks, ensemble learning, FEGB-Net, metrics for evaluating the effectiveness of models (AUC-ROC)

Assessment of the reliability and effectiveness of artificial intelligence systems in radiation diagnostics at the operational stage

2025. T.13. № 2. id 1886
Zinchenko V.V.  Vladzimirskyy A.V.  Arzamasov K.M. 

DOI: 10.26102/2310-6018/2025.49.2.016

In the context of the active implementation of artificial intelligence (AI) technologies in healthcare, ensuring stable, controlled and high-quality operation of such systems at the operational stage is of particular relevance. Monitoring of AI systems is enshrined in law: within three years after the implementation of medical devices, including AI systems, it is necessary to provide regular reports to regulatory authorities. The aim of the study is to develop methods for assessing the reliability and effectiveness of medical artificial intelligence for radiation diagnostics. The proposed methods were tested on the data of the Moscow Experiment on the use of innovative technologies in the field of computer vision in the direction of chest radiography, collected in 2023. The developed methods take into account a set of parameters: emerging technological defects, research processing time, the degree of agreement of doctors with the analysis results and other indicators. The proposed approach can be adapted for various types of medical research and become the basis for a comprehensive assessment of AI systems as part of the monitoring of medical devices with artificial intelligence. The implementation of these methods can increase the level of trust of the medical community not only in specific AI-based solutions, but also in intelligent technologies in healthcare in general.

Keywords: artificial intelligence, reliability, efficiency, artificial intelligence system, radiology, radiation diagnostics, monitoring

Optimization of the nomenclature-volume balance of suppliers and consumers in the management of the organizational system of drug supply

2025. T.13. № 2. id 1885
Shvedov N.N.  Lvovich Y.E. 

DOI: 10.26102/2310-6018/2025.49.1.018

The article discusses approaches and tools aimed at improving the intelligent management of the nomenclature component in the drug supply system using optimization problems – that is, the relationship between the list of drugs and their quantitative distribution, taking into account the degree of balance between supply and demand. The problem lies in insufficient coordination of drug flows, imbalances in stocks, and inefficient distribution of resources. All these factors lead to increased costs and reduced availability of vital drugs for end consumers. Effective management of the nomenclature-volume balance makes it possible to avoid shortages and excess stocks and increases the sustainability of the drug supply system, ensuring optimal stocks and availability of drugs. The main attention is paid to the use of optimization problems, with expert assessment of their parameters, in managing the digital interaction of suppliers and consumers, which increases the accuracy of controlling the product range and demand. Here, control means minimizing shortages or excess stocks and guaranteeing the availability of the necessary drugs for the end consumer. The results of the study were used to develop an intelligent subsystem for supporting management decisions, promoting balanced resource management and increasing the availability of drugs.

Keywords: organizational system, drug provision, management, optimization, expert assessment

Method of numerical calculation of the security level of information infrastructure components

2025. T.13. № 2. id 1884
Belikov Y.V. 

DOI: 10.26102/2310-6018/2025.49.2.025

One of the key issues in organizing information security is assessing compliance with infrastructure protection requirements and the response to current threats and risks; this assessment is provided by conducting an appropriate audit. Domestic and international standards specify various methods for conducting an information security audit and provide conceptual models for structuring the assessment process. However, the disadvantages of these standards include the impossibility of deeply adapting them to individual information systems, as well as the partial or complete absence of a numerical assessment of security parameters, which can undermine the objectivity of the assessment and fail to reflect real threats. In turn, adapting numerical methods to the analysis of the maturity level of information security processes makes it possible to solve a number of important problems: automating the assessment process, providing a more accurate indicator for identifying vulnerable components of the information infrastructure, and integrating the obtained values with other processes aimed at neutralizing current security threats from intruders. The purpose of this work is to analyze the possibility of using a numerical assessment of the information security maturity level, as well as the use of fuzzy sets in the audit.
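The fuzzy-set idea can be sketched with membership functions over a numeric security score. The three maturity levels, their trapezoidal breakpoints, and the 0-100 scale below are illustrative assumptions, not the grading scheme of any particular standard.

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], is 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical maturity levels over a 0-100 aggregated security score
LEVELS = {"initial":   (-1, 0, 20, 45),
          "managed":   (20, 45, 60, 85),
          "optimized": (60, 85, 100, 101)}

def fuzzy_maturity(score):
    """Degree of membership of a numeric score in each maturity level."""
    return {name: trap(score, *p) for name, p in LEVELS.items()}

m = fuzzy_maturity(70)
print(m, max(m, key=m.get))  # partly "managed", partly "optimized"
```

A score of 70 belongs to two levels at once (0.6 "managed", 0.4 "optimized"), which is precisely the graded judgment that a crisp pass/fail audit checklist cannot express.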

Keywords: information security, audit, maturity level assessment, information security tools, numerical assessment, fuzzy sets, fuzzy logic, security criteria, risks

Application of the task of finding the minimum vertex coverage in a graph to improve the robustness of digital identity system

2025. T.13. № 2. id 1883
Akutin A.S.  Pechenkin V.V. 

DOI: 10.26102/2310-6018/2025.49.2.015

This paper examines the features of building digital identity systems for managing information technology processes in an enterprise whose architecture depends on decentralized data registers (blockchains). The paper considers blockchains as weighted graphs and formulates a number of theses about the specifics of how such distributed networks function in real information technology enterprises. The features of various network topologies are considered, along with possible architectural vulnerabilities and flaws that can affect the operation of the entire network: centralization of mining, centralization of staking, and various attacks on a functioning network (topological attacks and the 51% attack). Blockchains using various consensus algorithms are considered, taking their features into account. The paper formulates the problem of finding a minimum vertex cover in a graph and emphasizes the importance of applying this problem to the described digital identity system in order to increase the reliability of the blockchain network by analyzing its topology. Various methods of finding a minimum cover in a graph are considered: exact and heuristic algorithms. The paper analyzes an application that implements an ant colony algorithm to solve the problem, and provides numerical characteristics of the algorithm and its formal description.
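The vertex-cover problem itself is easy to state in code. The sketch below uses the classical greedy 2-approximation as a simple baseline (the paper's exact search and ant-colony heuristic are not reproduced), on a made-up hub-and-spoke topology.

```python
def greedy_vertex_cover(edges):
    """2-approximation for minimum vertex cover: repeatedly add both
    endpoints of some uncovered edge until every edge is covered."""
    cover = set()
    uncovered = [tuple(e) for e in edges]
    while uncovered:
        u, v = uncovered[0]
        cover |= {u, v}
        uncovered = [e for e in uncovered if u not in e and v not in e]
    return cover

# Hypothetical peer-to-peer topology: node 0 is a hub, plus one side link
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5)]
cover = greedy_vertex_cover(edges)
print(cover)  # every edge has an endpoint in the cover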

Keywords: digital identity system, blockchain, distributed systems, graphs, minimum coverage search

Modeling and approximation of scattering characteristics of elementary reflectors

2025. T.13. № 3. id 1882
Preobrazhensky A.P.  Avetisyan T.V.  Preobrazhensky Y.P. 

DOI: 10.26102/2310-6018/2025.50.3.015

Tasks related to the modeling of various electrodynamic objects are encountered in radar, design of electrodynamic devices, measures to reduce radar visibility, development of antennas and diffraction structures. In general, on the basis of the decomposition method, electrodynamic objects can be represented as a set of various elementary components. The scattering properties of the entire object are determined by the scattering properties of each of the components. To determine such characteristics, it is necessary, in general, to rely on the appropriate numerical methods. For a fairly limited number of diffraction structures, various analytical expressions are given in the literature. In some cases, they are quite bulky and require some experience from researchers in the course of use. The paper proposes to approximate the characteristics of elementary reflectors based on the method of least squares and Lagrange polynomials. On the basis of the study, the values of the powers of approximating polynomials were determined, which give an error that does not exceed the specified value. The results of the work can be used in the design of diffraction structures. Based on the results obtained, the time for calculating scattering characteristics will be reduced.
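The Lagrange-polynomial part of the approach can be sketched directly. The cos-squared "scattering pattern" below is an invented stand-in for a reflector's computed characteristic; the paper's least-squares fits and the actual electrodynamic data are not reproduced.

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Hypothetical samples of a reflector's pattern over theta in [0, pi/2]
nodes = [i * math.pi / 8 for i in range(5)]
samples = [math.cos(t) ** 2 for t in nodes]     # stand-in for a computed pattern
theta = math.pi / 5
err = abs(lagrange_eval(nodes, samples, theta) - math.cos(theta) ** 2)
print(f"interpolation error at theta = pi/5: {err:.2e}")
```

Increasing the number of nodes (the polynomial degree) until the maximum error drops below a specified tolerance mirrors the degree-selection procedure the abstract describes, and evaluating the fitted polynomial is far cheaper than re-running a numerical electrodynamic solver.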

Keywords: modeling, approximation, edge wave method, modal method, diffraction structure

Algorithms and programs for calculating nonparametric criteria for statistical hypothesis testing based on permutations with repetitions

2025. T.13. № 2. id 1880
Agamirov L.V.  Agamirov V.L.  Toutova N.V.  Andreev I.A.  Ziganshin D. 

DOI: 10.26102/2310-6018/2025.49.2.022

One of the important tasks of statistical analysis is testing statistical hypotheses, and within this group the most promising subgroup is that of nonparametric rank criteria, which are very stable when working with small samples, when it is not possible to reliably justify a hypothetical distribution law. In turn, this fact makes it necessary to abandon asymptotic approximations and to have exact critical values of the criteria (the so-called p-values in modern literature). At present, analytical solutions are available only for a very limited class of criteria (the sign test, Wilcoxon, series, Ansari-Bradley). For all others, an exact solution requires computerized enumeration of a huge number of possible permutations of ranks. The focus of the present work is the creation of a universal algorithm for obtaining exact distributions of the ranks of nonparametric criteria quickly. The algorithm, implemented in the open-source programming languages C++, JavaScript, and Python, is based on a well-known combinatorics problem – permutations with repetitions – adapted to the task of hypothesis testing with rank criteria. The following criteria are considered: Kruskal-Wallis, Mood, Lehmann-Rosenblatt, as well as a group of normal-scores criteria: Fisher-Yates, Capon, Klotz, and Van der Waerden. The algorithm is also adapted for other possible ranking problems of nonparametric statistics.
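The enumeration idea can be illustrated on the two-sample rank-sum statistic (a case where an analytical solution also exists, which makes it convenient for checking). This minimal sketch is not the authors' universal algorithm; it only shows how enumerating the distinct arrangements of group labels – the permutations with repetitions – yields an exact lower-tail p-value.

```python
from itertools import combinations
from math import comb

def exact_rank_sum_pvalue(n, m, w_obs):
    """Exact lower-tail p-value of the rank-sum statistic W for sample
    sizes n and m: enumerate all C(n+m, n) placements of the first
    sample's ranks among 1..n+m (equivalently, the distinct permutations
    with repetitions of the two group labels) and count W <= w_obs."""
    ranks = range(1, n + m + 1)
    hits = sum(1 for c in combinations(ranks, n) if sum(c) <= w_obs)
    return hits / comb(n + m, n)

print(exact_rank_sum_pvalue(2, 2, 3))  # 1/6: only the rank set {1, 2}
```

Replacing the statistic `sum(c)` with the Kruskal-Wallis, Mood, or normal-scores statistic, and the two labels with k group labels, generalizes the same enumeration, which is the adaptation the article develops.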

Keywords: statistical hypothesis testing, nonparametric criteria, rank criteria, exact distributions of rank criteria, permutations with repetitions, permutation algorithms, c++ programs for permutations