metadata of articles for the last 2 years
The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018


Signal-based feature extraction in motor evoked potentials: TKEO onset detection and Hilbert envelope analysis

2026. T.14. № 3. id 2294
Demigha Y.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2026.54.3.021

The reliable and objective determination of motor evoked potential (MEP) characteristics – onset latency, peak-to-peak amplitude, duration, and waveform morphology – is fundamental to clinical neurophysiology, yet in current practice it depends largely on operator judgment. Mathematical signal-processing algorithms offer a transparent, deterministic, and reproducible alternative. We present, characterize, and systematically evaluate a complete mathematical pipeline for MEP feature identification, consisting of three stages: onset detection based on the TKEO, the Teager–Kaiser energy operator, applied to a preprocessed signal with an adaptive threshold k∙σ_baseline; offset estimation via the Hilbert transform, tracking the amplitude envelope with a return-to-baseline criterion; and morphological classification by counting significant zero crossings to assign monophasic, biphasic, or polyphasic labels. At the marker verification stage, trials in which the detected features do not exceed the minimum noise level are rejected. Performance degrades below an SNR of 3.0: the latency MAE increases from 1.4 ms (SNR ≥ 5) to 9.7 ms (SNR < 3). Morphological classification accuracy is 94% for high-SNR recordings and falls to 61% for very-low-SNR recordings. The mathematical pipeline provides clinically acceptable accuracy for MEPs with high and medium SNR and serves as an interpretable reference standard with zero training cost. Its failure modes are well characterized, SNR-dependent, and predictable – properties that make it a sound baseline comparator for evaluating more advanced automated analysis methods.
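The TKEO onset-detection stage described above can be sketched as follows (a minimal illustration: the threshold multiplier k, the synthetic burst, and the noise model are assumptions for demonstration, not the authors' settings):

```python
import numpy as np

def tkeo(x):
    """Teager–Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_onset(signal, baseline_len, k=12.0):
    """First index where TKEO energy exceeds the adaptive threshold
    k * sigma of the TKEO energy over the pre-stimulus baseline."""
    psi = tkeo(signal)
    threshold = k * psi[:baseline_len].std()
    above = np.flatnonzero(psi > threshold)
    return int(above[0]) if above.size else None

# Synthetic trial: low-amplitude baseline noise, then an MEP-like burst at n = 200.
rng = np.random.default_rng(42)
sig = 0.005 * rng.standard_normal(400)
n = np.arange(150)
sig[200:350] += np.sin(0.3 * n) * np.exp(-n / 60.0)  # damped oscillation
onset = detect_onset(sig, baseline_len=180)
```

Because the TKEO responds to both amplitude and instantaneous frequency, the burst's energy rises sharply at its first oscillation, so the detected onset lands within a sample or two of the true start.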

Keywords: motor evoked potentials, transcranial magnetic stimulation, TKEO, Teager–Kaiser energy operator, Hilbert transform, amplitude envelope, electromyography, signal processing

Detecting motor evoked potentials using convolutional neural networks: overcoming the limitations of manual analysis

2026. T.14. № 3. id 2292
Demigha Y.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2026.54.3.019

Motor evoked potentials (MEPs) are electrophysiological signals of crucial diagnostic and monitoring importance in neurology, neurosurgery, and rehabilitation medicine. Traditionally, feature extraction from MEP data has relied on manual inspection and measurements performed by trained clinicians according to established rules, a process that is inherently subjective, time-consuming, and prone to significant inter-observer differences. This article provides a comprehensive rationale for using convolutional neural network (CNN)-based approaches to extract MEP features. Compared to traditional manual methods, CNNs provide superior performance on key parameters, including accuracy, reproducibility, processing speed, and the ability to detect hidden morphological patterns that may escape human visual perception. In addition, automated CNN-based analysis eliminates inter-observer variability and enables real-time intraoperative monitoring. Performance estimates based on computer modeling and a structured comparative analysis of the two methods strongly support this conclusion. The introduction of CNNs represents a revolutionary step towards objective, scalable, and clinically reliable analysis that can standardize the interpretation of MEPs in a variety of clinical settings and potentially improve patient outcomes through more consistent neurological assessment.

Keywords: motor evoked potentials, convolutional neural networks, feature extraction, transcranial magnetic stimulation, intraoperative neurophysiology, deep learning, electrophysiology, automated analysis, inter-rater reliability, signal processing

Graph neural networks for predicting network characteristics in New IP and ManyNets architectures

2026. T.14. № 4. id 2276
Povarov M.K.  Gavrilov K.V.  Korchagin P.A.  Pishchulin P.A.  Malakhov S.V. 

DOI: 10.26102/2310-6018/2026.55.4.009

In New IP and ManyNets architectures (ITU-T Network 2030), the need to predict network characteristics, including path delay, without heavy simulation is growing, yet it remains unclear when graph neural networks outperform simple computational methods and how such models generalize to different graph sizes. This article assesses the applicability of a graph neural network to the path-delay task on synthetic graphs, with a target formula accounting for link load, and evaluates generalization to larger graphs. A comparative study on Erdős–Rényi graphs was conducted: a graph convolution-based model was compared with a baseline method in two experiments, a load-aware target-latency experiment and a generalization test on graphs with 15 and 20 nodes after training on 15-node graphs. Results (single run): in the first experiment the baseline gave MAE 1.85 and MAPE 7.89 %, the graph model 9.91 and 59.20 %; in the second, when moving from 15- to 20-node test graphs, the graph model's MAE decreased by about 7 % while the baseline's increased by about 8 %. The approach is concluded to be applicable on synthetic data as a first step toward models for predicting network characteristics in New IP and ManyNets architectures. The materials are of practical value for specialists choosing and validating delay prediction methods and planning experiments on synthetic topologies.
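The kind of simple computational baseline such a graph model is compared against can be illustrated by a load-aware shortest-path delay on a synthetic Erdős–Rényi graph (a sketch: the link-delay form base/(1 − load) and all parameter ranges are assumptions, not the formula used in the paper):

```python
import heapq
import random

def erdos_renyi_delays(n, p, rng):
    """Undirected Erdos-Renyi graph; each link gets a load-aware delay."""
    adj = {u: {} for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                base = rng.uniform(1.0, 5.0)   # propagation component
                load = rng.uniform(0.1, 0.8)   # link utilization
                w = base / (1.0 - load)        # assumed load-aware link delay
                adj[u][v] = adj[v][u] = w
    return adj

def path_delay(adj, src, dst):
    """Dijkstra baseline: minimal cumulative link delay between src and dst."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

rng = random.Random(0)
g = erdos_renyi_delays(15, 0.4, rng)
d = path_delay(g, 0, 14)
```

A learned model is then judged by how closely its predicted delay tracks this computed target across topologies of different sizes.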

Keywords: graph neural networks, network characteristics prediction, New IP, ManyNets, delay prediction, synthetic network topologies, Erdős–Rényi graphs, quality of service, network topology, graph convolution

Mathematical model and software for project team formation based on intra-collective relationships

2026. T.14. № 4. id 2264
Kuminov P.A.  Zakharova A.A. 

DOI: 10.26102/2310-6018/2026.55.4.004

In modern conditions, the success of project activities is determined not only by the professional competencies of participants but also by their socio-psychological compatibility. Existing mathematical models of team formation, based on the classical assignment problem, are focused exclusively on the resource-based approach and do not take into account interpersonal relationships, which also affect the efficiency of joint activities. The aim of the work is to develop a mathematical model and software for forming project teams that combines the professional competencies of candidates and the sociometric characteristics of their relationships to achieve a synergistic effect. A model is proposed that extends the generalized assignment problem by incorporating sociometric indices of cohesion and conflict, and also excludes teams with mutual antipathies. To solve the NP-hard optimization problem, a genetic algorithm implemented in Python using the DEAP framework was applied. An individual is represented by a fixed-length chromosome, where the position corresponds to the role and the value to the candidate's index. The operation of the algorithm is demonstrated on a test example. The model and algorithm can be used by project managers, HR specialists, and educators for the informed formation of student and professional teams with a favorable socio-psychological climate.
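The chromosome encoding and fitness logic described above can be sketched in pure Python (the paper uses the DEAP framework; this self-contained stand-in uses a hypothetical tiny instance, with competence scores, cohesion bonuses, and antipathy pairs invented for illustration):

```python
import random

# Hypothetical instance: competence[c][r] scores, pairwise cohesion bonuses,
# and mutual-antipathy pairs that make a team inadmissible.
COMPETENCE = [[5, 1], [1, 5], [4, 4], [2, 2]]   # 4 candidates x 2 roles
COHESION = {frozenset({0, 2}): 2}               # sociometric bonus
ANTIPATHY = {frozenset({0, 1})}                 # mutual antipathy -> excluded
N_ROLES, N_CAND = 2, 4

def fitness(team):
    """Competence sum + cohesion; invalid teams (repeats/antipathies) score -inf."""
    if len(set(team)) != len(team):
        return float("-inf")
    pairs = [frozenset({a, b}) for i, a in enumerate(team) for b in team[i + 1:]]
    if any(p in ANTIPATHY for p in pairs):
        return float("-inf")
    comp = sum(COMPETENCE[c][r] for r, c in enumerate(team))
    return comp + sum(COHESION.get(p, 0) for p in pairs)

def evolve(pop_size=20, gens=40, pm=0.3, rng=random.Random(1)):
    # Chromosome: fixed length, position = role, value = candidate index.
    pop = [[rng.randrange(N_CAND) for _ in range(N_ROLES)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                       # elitism
        while len(nxt) < pop_size:
            a, b = (max(rng.sample(pop, 2), key=fitness) for _ in range(2))
            cut = rng.randrange(1, N_ROLES + 1)               # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pm:                             # mutation: reassign a role
                child[rng.randrange(N_ROLES)] = rng.randrange(N_CAND)
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best, fitness(best)

team, score = evolve()
```

On this instance the GA avoids the high-competence but conflicting pair {0, 1} and instead pairs candidate 0 with candidate 2, whose cohesion bonus yields the best admissible fitness.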

Keywords: project team formation, assignment problem, mathematical model, sociometry, genetic algorithm

Research into neural networks as a method for image compression and archiving

2026. T.14. № 3. id 2247
Podberezkin A.A.  Loskutov Y.D.  Gretskii D.A.  Pronin C.B.  Ostroukh A.V. 

DOI: 10.26102/2310-6018/2026.54.3.020

This article explores a method for storing images by training a neural network on a single image and storing its weights as a compact representation. This approach significantly reduces the amount of data stored while maintaining acceptable visual quality. Model parameters and training settings are analyzed to optimize reconstruction quality. The basic idea is that the trained model's weights act as a compact representation of the original image; when reconstruction is required, the weights are reloaded into the network to restore the visual content. Experimental results show that optimizing the network architecture and color space (YCbCr) enables high compression ratios – up to 29.4 – while maintaining visual quality close to the original (MSE ≈ 10⁻⁵). However, the authors note a significant drawback of the method: long training times and high computational costs make it less effective than traditional compression algorithms for practical real-time applications. Nevertheless, the approach demonstrates potential for tasks where preserving fine image details is critical, such as data archiving or video stream compression.
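The weights-as-storage idea can be illustrated without a full training loop by fitting a linear readout over sinusoidal positional encodings of pixel coordinates by least squares (a minimal stand-in for overfitting a coordinate MLP to one image; the image, feature count, and encoding are illustrative assumptions):

```python
import numpy as np

def positional_features(h, w, n_freq=8):
    """Sinusoidal encodings of normalized (y, x) pixel coordinates."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys / h, xs / w], axis=-1).reshape(-1, 2)
    cols = [np.ones((h * w, 1))]
    for m in range(1, n_freq + 1):
        for d in (0, 1):
            cols.append(np.sin(m * np.pi * coords[:, d:d + 1]))
            cols.append(np.cos(m * np.pi * coords[:, d:d + 1]))
    return np.hstack(cols)

h = w = 16
img = np.fromfunction(lambda y, x: np.sin(y / 3.0) + np.cos(x / 5.0), (h, w))

F = positional_features(h, w)
weights, *_ = np.linalg.lstsq(F, img.reshape(-1), rcond=None)  # the "stored" data
recon = (F @ weights).reshape(h, w)                            # decoding step
mse = float(np.mean((recon - img) ** 2))
ratio = img.size / weights.size                                # 256 / 33, about 7.8x
```

Storing 33 coefficients instead of 256 pixel values gives roughly a 7.8× reduction here; a trained MLP plays the same role in the article's method, at the cost of the long fitting time noted above.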

Keywords: image compression, neural network, image archiving, single-image training, image restoration, multilayer perceptron, machine learning, positional coding, coordinate coding, artificial intelligence

Hybrid adaptive optimal control with MPSO-based parameter tuning for a three-link robotic manipulator

2026. T.14. № 4. id 2243
La M.  Lwan M. 

DOI: 10.26102/2310-6018/2026.55.4.007

This paper addresses the problem of high-precision trajectory tracking for a nonlinear three-link robotic manipulator operating under parametric uncertainties and external disturbances. Conventional PID and classical adaptive control methods often demonstrate limited robustness and suboptimal energy efficiency when applied to dynamically coupled multi-link systems. To overcome these limitations, a Hybrid Adaptive-Optimization Control Framework is proposed. The approach integrates Adaptive Computed Torque Control with a Modified Particle Swarm Optimization algorithm for systematic controller gain tuning. The manipulator dynamics are derived using the Euler–Lagrange formulation and implemented in MATLAB through numerical time-domain integration. Controller parameters are optimized offline using a multi-objective cost function that incorporates trajectory tracking error, control effort, and energy consumption. The optimized gains are then applied within an online adaptive compensation structure to enhance robustness against modeling uncertainties. The simulation results show that the proposed approach reduces the mean square tracking error by approximately 26 % compared to standard adaptive control, and also reduces settling time, normalized energy consumption, and torque ripple, confirming the improved accuracy, robustness, and energy efficiency of the system.
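Offline gain tuning by swarm search can be sketched as follows (a sketch using standard PSO, not the authors' modified variant, on a double-integrator stand-in for the manipulator; the cost weights and bounds are assumptions):

```python
import numpy as np

def cost(gains, dt=0.01, steps=500, target=1.0, lam=1e-4):
    """Simulate a double-integrator plant x'' = u under PD control and return
    integrated squared tracking error plus weighted control effort."""
    kp, kd = gains
    x, v, J = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = target - x
        u = kp * e - kd * v
        J += (e * e + lam * u * u) * dt
        v += u * dt
        x += v * dt
    return J

def pso(obj, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Standard global-best PSO over box-bounded parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, (n, len(bounds)))
    V = np.zeros_like(X)
    P, Pc = X.copy(), np.array([obj(x) for x in X])   # personal bests
    g = P[Pc.argmin()].copy()                         # global best
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        C = np.array([obj(x) for x in X])
        better = C < Pc
        P[better], Pc[better] = X[better], C[better]
        g = P[Pc.argmin()].copy()
    return g, Pc.min()

best, J_best = pso(cost, bounds=[(0.0, 100.0), (0.0, 50.0)])
```

The multi-objective cost here mirrors the structure in the abstract (tracking error plus control effort); the tuned gains are then fixed and used online, as in the proposed framework.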

Keywords: robotic manipulator, adaptive control, hybrid optimal control, particle swarm optimization, trajectory tracking

Algorithmization of managing the distribution of limited financial resources in the regional social fund based on ART-MAP neural networks

2026. T.14. № 3. id 2242
Burkovsky V.L.  Obukhova A.E. 

DOI: 10.26102/2310-6018/2026.54.3.015

In a context of persistently limited budgetary resources, exacerbated by the growing social burden on regional budgets, the problem of finding effective mechanisms for distributing state social funds is of paramount importance. The social well-being of millions of citizens and the stability of social relations directly depend on how rationally and fairly resources are distributed. A key element in building such an effective system is a clear, scientifically sound, and, crucially, prioritized classification of recipient groups of social assistance. This classification allows for a shift from an egalitarian support approach to a targeted approach, focusing efforts and resources on the most vulnerable groups. This article proposes an innovative approach to algorithmizing this complex process. The proposed method is based on integrating the developed hierarchical classification of recipients with modern neural network technologies, specifically the ART-MAP family of architectures. The use of these neural networks allows for the creation of a flexible, adaptive system capable of learning in real time, taking into account the dynamics of changes in the social environment, and ensuring not only accurate but also completely transparent, understandable, and justified distribution (redistribution) of financial flows, which is critical for upholding the principles of social justice.

Keywords: financial resource allocation, regional social fund, neural networks, algorithmization, management

Applying machine learning and feature analysis to predict demand in the Russian pharmaceutical market

2026. T.14. № 3. id 2241
Lomakin A.S.  Oganesian A.A.  Zubkov A.V. 

DOI: 10.26102/2310-6018/2026.54.3.017

This article explores the use of computer-based methods for analyzing tabular data to forecast consumption in the Russian pharmaceutical market. It describes the key stage of developing an information system designed to forecast drug procurement and support management decision-making in the pharmaceutical supply chain. It examines the specifics of medical organizations' procurement activities and the key risks associated with planning drug demand and pricing. It details the modern methods used in the study, including machine learning models and feature significance analysis using SHAP. It describes the data preparation and preprocessing process, including collecting, cleaning, transforming, and encoding features, as well as generating training and test samples for building regression models. Particular attention is paid to identifying factors influencing drug pricing and improving forecasting accuracy through the use of specialized models for specific drug groups. The economic impact of implementing the developed tool is assessed. It enables medical organizations to more effectively manage procurement, optimize budgets, and reduce financial risks. Specific attention is given to forecasting drug prices and automating the planning and procurement process as part of the sustainable and rational development of the Russian pharmaceutical market.
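The feature-significance step can be illustrated with permutation importance on synthetic tabular data (a library-light stand-in: the paper uses SHAP, which requires the shap package; the feature names and the price formula here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
# Hypothetical tabular features: dosage, pack size, and an irrelevant noise column.
dosage = rng.uniform(10, 500, n)
pack = rng.integers(1, 60, n).astype(float)
noise = rng.standard_normal(n)
price = 0.05 * dosage + 1.2 * pack + rng.normal(0, 2.0, n)  # synthetic target

X = np.column_stack([dosage, pack, noise])
Xb = np.column_stack([X, np.ones(n)])              # add intercept column
beta, *_ = np.linalg.lstsq(Xb, price, rcond=None)  # fitted regression model

def mse(Xm):
    return float(np.mean((np.column_stack([Xm, np.ones(n)]) @ beta - price) ** 2))

base = mse(X)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature-target link
    importance.append(mse(Xp) - base)      # error increase = feature importance
```

As with SHAP, the output ranks features by their contribution to the prediction; the irrelevant column's importance stays near zero while the true pricing drivers dominate.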

Keywords: machine learning, artificial intelligence, SHAP analysis, information systems, demand forecasting, pharmaceutical market

Modification of the information process model for remote monitoring of object condition based on heterogeneous data sources

2026. T.14. № 4. id 2237
Gilka V.V.  Kuznetsova A.S.  Kachanov Y.A.  Morozov D.A.  Lomakin A.S. 

DOI: 10.26102/2310-6018/2026.55.4.006

The paper proposes a modification of the information process model for remote monitoring of object condition aimed at improving the correctness of result interpretation under conditions of heterogeneous data sources, different measurement frequencies, data transmission delays, and incomplete observations. The objective of the study is to extend the original model by incorporating additional stages and mechanisms that ensure data quality control, temporal alignment of data streams, robustness of notifications, and reproducibility of the obtained assessments. The research methods include structural and functional decomposition of the information process and formalization of data processing principles at each added stage. The proposed modification introduces: an object profile serving as a context for parameter interpretation and a mechanism for unambiguous assignment of measurements to a specific object; temporal synchronization of data streams based on window processing; a data quality control loop with validity labeling and anomaly detection; a confidence indicator for state assessment considering the completeness and quality of observations; event-based interpretation of results; robust notification mechanisms based on an extended threshold model with hysteresis and message rate limiting; explainable inference tools identifying the parameters that influenced the assigned status; and traceability of results through logging of input data, interpretation rules, and output assessments. As a result, a refined structure of the information process has been developed, enabling state assessment that accounts for the quality and consistency of input data and ensuring stable delivery of results to the monitoring subject.
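The robust notification mechanism described above, an extended threshold model with hysteresis and message rate limiting, can be sketched as follows (a minimal illustration; the class name, thresholds, and the choice not to notify on clearing are assumptions, not the paper's specification):

```python
class HysteresisNotifier:
    """Raise an alert when the value crosses above on_thresh; the alert state
    clears only after the value falls below off_thresh (hysteresis), and
    repeat alerts are suppressed within min_interval seconds (rate limiting)."""

    def __init__(self, on_thresh, off_thresh, min_interval):
        assert off_thresh < on_thresh
        self.on, self.off = on_thresh, off_thresh
        self.min_interval = min_interval
        self.active = False
        self.last_sent = float("-inf")

    def update(self, t, value):
        if not self.active and value >= self.on:
            self.active = True
            if t - self.last_sent >= self.min_interval:
                self.last_sent = t
                return "alert"                    # notification delivered
        elif self.active and value <= self.off:
            self.active = False                   # silent clear in this sketch
        return None

notifier = HysteresisNotifier(on_thresh=80.0, off_thresh=70.0, min_interval=60.0)
events = [notifier.update(t, v) for t, v in
          [(0, 75), (10, 85), (20, 78), (40, 65), (100, 90), (110, 65), (115, 90)]]
```

The dip to 78 at t = 20 does not clear the alert (it is above the lower threshold), and the re-crossing at t = 115 is suppressed by the rate limiter, which is exactly the flapping behavior the extended threshold model is meant to tame.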

Keywords: remote monitoring, object condition, heterogeneous data source, information process, structural-functional model, data quality control, temporal synchronization, window processing, robust notifications, traceability of results

A systematic approach to research parameters of gas turbine engine supports: a multiphysical model of a damping support

2026. T.14. № 4. id 2236
Zubkov N.V. 

DOI: 10.26102/2310-6018/2026.55.4.010

The relevance of the study is due to the need to increase the efficiency of analyzing the dynamic characteristics of damping bearings of gas turbine engines, since existing finite element models are computationally complex and are not applicable for operational analysis, and simplified analytical models are focused on a generalized assessment of characteristics and have limited capabilities in the study of nonlinear contact and hydrodynamic effects. In this regard, this article is aimed at developing a multiphysical simulation model of a damping support of a gas turbine engine, providing a reliable study of its dynamic and damping characteristics as part of a virtual test complex. The leading research method is a systematic approach based on the integration of the Simscape libraries and the MATLAB Simulink Multibody environment, which allows for consistent modeling of mechanical, contact, and hydrodynamic processes in the bearing assembly and damping package, as well as parametric analysis of the effect of design characteristics on the dynamic response of the system. The article develops a multiphysical model of a damping support that implements the interaction of rolling elements, elastic-dissipative elements and a hydrodynamic medium, and studies the effect of the number of bands and corrugations of the damping package on the power and frequency characteristics of the support. The simulation results obtained on the basis of the developed model make it possible to quantify the effect of the design parameters of the damping bearings on the vibration stability of the rotor and can be used in the design, optimization and virtual prototyping of the support units of gas turbine engines.

Keywords: virtual test facility, gas turbine engine, damping supports, multiphysical model, hydrodynamic model

Algorithm for automatic identification of emergency service vehicles

2026. T.14. № 4. id 2234
Shulga T.E.  Liberman A.I.  Fadeeva A.A.  Kostyukevich T.A. 

DOI: 10.26102/2310-6018/2026.55.4.011

The relevance of this research is determined by the need to ensure rapid access for emergency service vehicles to the territory of secured facilities, access to which in the modern urban environment is often restricted by automatically controlled barriers and other physical obstacles. This issue can be addressed by implementing intelligent identification systems for emergency service vehicles. Consequently, this paper aims to develop an algorithm for the automatic identification of emergency service vehicles based on images. The core idea of the proposed algorithm relies on the combined use of an artificial neural network and an ontological knowledge model of emergency service vehicles. The ontology was developed using the Protégé editor and the OWL language, based on an analysis of open data concerning the classification and equipment of emergency services. The YOLOv8 architecture, trained on an extended Roboflow dataset, was chosen as the foundation of the artificial neural network. The results of the experimental study confirmed the high efficiency of the proposed model, achieving an accuracy of 89 %, which indicates its practical applicability for solving the target task. The developed algorithm can be integrated into intelligent access control systems for residential complexes and commercial facilities, thereby contributing to an increased level of safety and optimized service delivery.

Keywords: OWL ontology, semantic model, artificial neural network, image recognition algorithm, emergency service vehicles

Automated decision support system for predicting online shopping behavior of e-commerce users

2026. T.14. № 3. id 2230
Svyatov R.S. 

DOI: 10.26102/2310-6018/2026.54.3.010

The relevance of this study is caused by the rapid development of electronic commerce and the growing need for effective tools to predict user behavior in online retail environments. The main problem lies in the fact that existing solutions in this domain are often limited to specific datasets, lack sufficient scalability, and rarely support real-time automation of the forecasting process. The purpose of this study is to develop a decision support system that enables the estimation of the probability of future purchase completion based on the analysis of user behavioral data and provides decision-makers with actionable recommendations for subsequent marketing activities. The methodological framework of the study is based on the use of a web analytics system as a source of information on user activities, data preprocessing and structuring procedures, and the application of gradient boosting as a machine learning algorithm for predicting the probability of purchase. To identify internal and external factors that could have a positive or negative impact on achieving the goal, a SWOT analysis was conducted. Experimental validation of the system was conducted using data from four online stores representing different business domains. The results demonstrate that the overall F-score exceeds 80 % across all experiments. The materials presented in this article have practical relevance for e-commerce professionals, data analysts, and marketing specialists, as well as for decision-makers, since the proposed system enables automated prediction of purchasing behavior, the formation of interpretable user segments, and the application of the obtained results to marketing personalization and optimization of managerial decision-making.

Keywords: machine learning, decision support system, user behavior analysis, e-commerce, consumer behavior prediction, online stores

Hybrid semantic reduction of texts in library information systems

2026. T.14. № 3. id 2220
Rzyankin I.S.  Noskov M.V. 

DOI: 10.26102/2310-6018/2026.54.3.013

The relevance of the study is determined by the continuous growth of textual information in library information systems and the need to ensure fast and meaningful navigation across electronic collections under constrained computational resources. Existing automatic summarization solutions are primarily oriented toward large-scale language models, which limits their practical deployment within local library infrastructures. In this context, the paper aims to develop a resource-efficient method of semantic text reduction that balances the quality of semantic representation with computational feasibility. The proposed approach is based on a hybrid architecture that sequentially combines lexical reduction using word clouds with neural summarization performed by compact models. In addition, a context-oriented evaluation metric is introduced to assess relevance with regard to semantic coherence, structural characteristics, and domain-specific terms significant for the library environment. An experimental study conducted on a corpus of 1178 documents demonstrates that the hybrid approach improves relevance indicators while simultaneously reducing inference time compared to direct neural summarization of the full text. The obtained results confirm the practical applicability of the proposed method for library information systems operating under limited computational infrastructure and its usefulness for navigation and cataloging tasks.
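The first, lexical stage of such a hybrid pipeline can be sketched with word-cloud-style frequency weights used to keep only the most salient sentences before a compact neural summarizer runs (a minimal illustration; the stop-word list, scoring, and toy document are assumptions, not the paper's method):

```python
import re
from collections import Counter

STOP = {"the", "a", "of", "and", "in", "to", "is", "for", "on", "with"}

def lexical_reduce(text, keep=2):
    """Score sentences by summed word-cloud weights (normalized frequencies of
    non-stop-words) and keep the top `keep` sentences in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    freq = Counter(words)
    top = max(freq.values())
    weight = {w: c / top for w, c in freq.items()}   # word-cloud weights

    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(weight.get(t, 0.0) for t in toks)

    ranked = sorted(sentences, key=score, reverse=True)[:keep]
    return [s for s in sentences if s in ranked]     # restore document order

doc = ("Library catalogs grow quickly. Semantic reduction keeps catalog "
       "navigation fast. Unrelated trivia sentence here. Reduction of catalog "
       "records helps library users.")
summary = lexical_reduce(doc, keep=2)
```

Only the reduced text is then passed to the neural model, which is what cuts inference time relative to summarizing the full document.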

Keywords: semantic text reduction, automatic summarization, word cloud, library information systems, hybrid text processing methods, neural models, relevance evaluation, library Relevance Score

Development of an adaptive resource management system for containerized CAD systems based on reinforcement learning

2026. T.14. № 3. id 2216
Chudinova A.A. 

DOI: 10.26102/2310-6018/2026.54.3.016

This project is dedicated to the development of an adaptive resource management system for containerized computer-aided design (CAD) applications using reinforcement learning. Modern CAD workloads are characterized by highly variable computing requirements, which makes traditional threshold-based auto-scaling mechanisms insufficient for maintaining performance and reliability in dynamic conditions. To address this issue, the proposed system compares classic threshold-based Kubernetes pod scaling (HPA) with a Q-learning-based auto-scaling strategy applied to container clusters. The experimental setup is implemented as a simulation of a distributed containerized cluster and includes customizable workload models representing light, medium, heavy, and peak request patterns. System performance is evaluated using metrics such as response time, throughput, availability, cost-effectiveness, mean time to recovery, and false positive scaling events. A reinforcement learning agent monitors tracked system metrics and learns scaling policies that optimize long-term performance and stability through repeated interactions with the environment. The application interface allows users to control simulation parameters, including the number of policy runs, the number of episodes per run, and the number of steps per episode, as well as cluster configuration parameters such as the number of nodes and cores per node. The workload intensity can be adjusted to analyse system behaviour in different operating scenarios. This configuration allows for systematic evaluation of adaptive auto-scaling strategies and their impact on resource efficiency and fault tolerance in containerized CAD systems. The study represents a methodological innovation thanks to its interactive, experiment-based evaluation interface, which combines modelling and orchestration logic.
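A Q-learning scaling policy of this kind can be sketched on a toy cluster model (a sketch: the state space, reward shaping, and load model are simplified assumptions, not the study's simulator):

```python
import random

REPLICAS = range(1, 6)      # 1..5 pods
LOADS = range(3)            # low / medium / high demand levels
NEEDED = [1, 3, 5]          # replicas required to serve each load level
ACTIONS = [-1, 0, 1]        # scale down / hold / scale up

def reward(r, load):
    # Penalize under/over-provisioning plus a small running cost per replica.
    return -abs(r - NEEDED[load]) - 0.1 * r

def train(episodes=400, steps=20, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(r, l): [0.0, 0.0, 0.0] for r in REPLICAS for l in LOADS}
    for _ in range(episodes):
        r = rng.choice(list(REPLICAS))
        for _ in range(steps):
            load = rng.randrange(3)
            s = (r, load)
            if rng.random() < eps:                       # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=Q[s].__getitem__)
            r2 = min(max(r + ACTIONS[a], 1), 5)          # apply scaling action
            next_load = rng.randrange(3)
            target = reward(r2, load) + gamma * max(Q[(r2, next_load)])
            Q[s][a] += alpha * (target - Q[s][a])        # Q-learning update
            r = r2
    return Q

Q = train()
best_action = ACTIONS[max(range(3), key=Q[(1, 2)].__getitem__)]  # 1 pod, high load
```

Unlike a fixed HPA threshold, the learned policy accounts for both the SLA penalty and the per-replica cost, so it scales up under high load and drains excess replicas when load drops.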

Keywords: adaptive resource management, experimental setup, containerized cluster, workloads, kubernetes, classic threshold-based pod autoscaling (HPA), autoscaling strategy, q-learning

Algorithm for constructing fully interpretable segmented linear regressions

2026. T.14. № 3. id 2212
Bazilevskiy M.P. 

DOI: 10.26102/2310-6018/2026.54.3.018

This article is devoted to a relevant scientific field – interpretable machine learning. Previously, the author introduced the concept of «fully interpretable linear regression», which is constructed using ordinary least squares on the entire set of statistical data. In this article, this concept is generalized to segmented linear regression, in which the data is first divided into segments and then a separate linear regression is constructed on each of them. An algorithm for constructing fully interpretable segmented linear regressions has been developed. Its peculiarity is that, firstly, the division of the predictor space into segments is carried out using logical activation functions for the arguments of binary min operations. Secondly, a paired regression is constructed in each segment, which completely eliminates the problem of multicollinearity. Using the developed algorithm, a segmented linear regression of concrete compressive strength was constructed based on a sample of 1030 observations. In all eight of its segments, the coefficients of determination do not exceed 0.8, which indicates the presence of unaccounted-for factors, so the constructed model cannot be strictly attributed to fully interpretable ones. However, all other interpretability conditions are met. In addition, the segmented model turned out to be much better in terms of approximation quality than simple linear regression.
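The segment-then-fit idea can be sketched as follows (a minimal illustration on synthetic data; a simple threshold split stands in for the paper's min-based logical activation functions, and the piecewise ground truth is invented):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 300)
# Piecewise-linear ground truth with a break at x = 5, plus noise.
y = np.where(x < 5, 2.0 * x + 1.0, -1.5 * x + 18.5) + rng.normal(0, 0.3, 300)

def paired_ols(x, y):
    """Paired (single-predictor) regression y = a + b*x by least squares;
    with one predictor per segment, multicollinearity cannot arise."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def sse(x, y, coef):
    return float(np.sum((y - (coef[0] + coef[1] * x)) ** 2))

# Global simple regression vs. two segment-wise paired regressions.
global_coef = paired_ols(x, y)
global_sse = sse(x, y, global_coef)

mask = x < 5   # segment membership (stand-in for a logical activation)
seg_sse = sum(sse(x[m], y[m], paired_ols(x[m], y[m])) for m in (mask, ~mask))
```

Each segment's model keeps the interpretability of a paired regression (one sign-checkable slope per segment) while the segmented fit captures the structure a single global line cannot.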

Keywords: regression analysis, interpretability, segmented linear regression, ordinary least squares, multicollinearity, significance of estimates

Deep learning architectures for multiphase CT image segmentation

2026. T.14. № 3. id 2211
Samsonenko S.V.  Kashirina I.L. 

DOI: 10.26102/2310-6018/2026.54.3.012

The article provides a comprehensive systematic analysis of modern deep learning architectures for automatic segmentation of multiphase CT images. The specific features of multiphase data are considered in detail, the main of which are spatial mismatches (offsets) between phases caused by patient movements and the different nature of the accumulation of contrast agent in pathological tissues at different phases. These features make direct adaptation of classical segmentation methods ineffective and require the development of specialized architectures. The article traces the evolution of approaches: from basic convolutional networks (U-Net, 3D U-Net, nnU-Net) and hybrid models (TransUNet, UNETR) combining convolutions and transformers to specialized solutions. Special attention is paid to models with mechanisms of cross-attention between phases, such as PA-ResSeg, M3Net and MULLET, which allow for implicit alignment of features and adaptive merging of information from different phases without explicit registration (alignment) of images. The paper also analyzes the comparative advantages of various data fusion strategies from different phases (early, late, cross-interaction), discusses issues of computational efficiency and availability of open datasets. Key trends and promising areas of development of the field have been identified, including the use of fundamental models (MedSAM, VoxTell) and modal-agnostic learning. It is concluded that further progress in the field of multiphase segmentation of CT images is associated with the creation of computationally efficient architectures capable of integration into the real clinical process to support diagnostic solutions.

Keywords: hybrid architectures, image segmentation, attention mechanisms, multiphase CT, feature fusion, medical imaging, deep learning, computer vision, PA-ResSeg, M3Net

Research on uncertainty in multi-agent road surface monitoring

2026. T.14. № 2. id 2210
Podberezkin A.A. 

DOI: 10.26102/2310-6018/2026.53.2.009

The relevance of this study is determined by the fact that, in road-infrastructure monitoring platforms, errors at the stage of detection and interpretation of object conditions can propagate into normative and managerial decision errors, especially under real-world acquisition conditions (shadows, glare, wet/snow-covered pavement, contamination, and ambiguous defect boundaries), where the risk of misclassification and inaccurate localization increases. This is critical for threshold-based normative assessment, since even small inaccuracies may change the condition category and, consequently, lead either to unjustified maintenance assignments or to missing hazardous defects. Therefore, this paper investigates the use of detection uncertainty for road-surface defect monitoring within a multi-agent pipeline, where observation results are transferred between components together with the processing context via the Model Context Protocol as a unified mechanism for exchanging events, metadata, and interpretation parameters. The main approach is to build a computational pipeline that includes video-data preprocessing, defect detection, computation of the uncertainty indicator H(p) from the class-probability distribution, assignment of the status "automatic/validation/refinement", subsequent normative interpretation, and aggregation over road-network segments. To ensure reproducibility, each run is recorded as a unified "experiment context" (scene/frame identifier, model version, threshold parameters, decision status), enabling comparable mode-to-mode evaluation and auditing of discrepancy causes. Verification is based on comparing normative decisions with expert assessment and analyzing how the share of erroneous normative decisions depends on the automatic-decision threshold for H(p), while the risk-oriented logic routes high-uncertainty detections to validation and reduces the probability of errors in borderline cases.
The results show that context logging via Model Context Protocol and accounting for H(p) improve experimental reproducibility and the soundness of normative interpretation, decreasing the risk of incorrect maintenance prioritization by separating ambiguous observations and preserving the decision rationale.
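The uncertainty-based routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: H(p) is taken as the Shannon entropy of the detector's class-probability vector, and the two thresholds separating "automatic", "validation", and "refinement" are hypothetical values (the paper tunes the automatic-decision threshold empirically).

```python
import math

def entropy(probs):
    """Shannon entropy H(p) of a detector's class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route(probs, t_auto=0.3, t_valid=0.9):
    """Assign a decision status from H(p); thresholds are illustrative."""
    h = entropy(probs)
    if h <= t_auto:
        return "automatic"       # confident detection, interpret normatively
    if h <= t_valid:
        return "validation"      # borderline, send to an expert
    return "refinement"          # ambiguous, re-acquire or re-process
```

A confident prediction such as (0.98, 0.01, 0.01) has low entropy and passes through automatically, while a near-uniform distribution is routed away from the automatic path.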

Keywords: multi-agent system, road surface monitoring, road surface defects, computer vision, detection uncertainty, normative interpretation, context logging

A method for information extraction based on extractive question-answering models and strategies for evaluating and aggregating relevant text fragments

2026. T.14. № 3. id 2207
Martynyuk P.A. 

DOI: 10.26102/2310-6018/2026.54.3.008

In the context of accelerated growth of heterogeneous textual data volumes, universal approaches to information extraction that are independent of the specific structure and domain of source texts have become particularly important. Despite the widespread adoption of large generative language models, the problem of accurate and resource-efficient information extraction from textual data remains relevant. While possessing broad capabilities, generative models are often excessive for specialized information retrieval tasks and may demonstrate low interpretability of results. This study is part of research work aimed at developing an alternative method for information extraction from unstructured texts to form a structural model of a text document. The proposed approach focuses on identifying semantically rich text fragments through relevance analysis relative to given thematic aspects of the text. This research presents an information extraction method using an extractive question answering model, based on multi-level answer aggregation combining strategies for assessing text fragment relevance, semantic clustering, and final answer selection for a given question. The proposed approach enables identification of words in the text that are most relevant to the target thematic aspects, which can subsequently be used to extract reliable information from the document. The article presents experimental results confirming the effectiveness of the proposed method in identifying semantically relevant elements of a text document. The obtained results have practical value for developing automated systems of text semantic structure construction and can be applied in document analysis, information retrieval, and intelligent text processing tasks.
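The final-selection step of such multi-level aggregation can be illustrated with a deliberately simplified sketch: candidate answers from different text fragments are grouped by normalized surface form and the group with the highest total relevance score wins. This stands in for, but does not reproduce, the paper's combination of relevance assessment, semantic clustering, and answer selection.

```python
def aggregate_answers(candidates):
    """candidates: (answer_text, relevance_score) pairs produced by an
    extractive QA model over different text fragments. Groups answers by
    normalized surface form and returns the highest-scoring group --
    a minimal stand-in for multi-level answer aggregation."""
    groups = {}
    for text, score in candidates:
        key = " ".join(text.lower().split())       # normalize case/whitespace
        total, first = groups.get(key, (0.0, text))
        groups[key] = (total + score, first)       # keep first surface form
    return max(groups.values())[1]
```

Two weak mentions of the same answer can thus outvote one stronger mention of a competing answer, which is the intended effect of aggregating over fragments.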

Keywords: natural language processing, information extraction, unstructured text, question-answering model, self-attention mechanism

A computational method for image segmentation based on a Dirichlet field and an analysis of the asymptotic accuracy of spatial regularizer discretization

2026. T.14. № 3. id 2204
Shchetinin E.Y.  Andreychuk A.A. 

DOI: 10.26102/2310-6018/2026.54.3.009

A computational method for semantic image segmentation with distributional uncertainty estimation is proposed based on representing the prediction as a Dirichlet distribution field. Unlike approaches that require multiple stochastic inference runs (MC dropout) or averaging over an ensemble of independent models, the method computes uncertainty maps in closed form based on the Dirichlet field parameters predicted in a single forward pass of the neural network. The method is formulated as the minimization of a composite functional including the expected logarithmic loss function (expected log-loss), KL regularization for controlling the distribution concentration, and spatial smoothing that takes into account local image intensity variations (edge-aware). For fixed smooth fields, the asymptotic discretization accuracy of the spatial regularizers used is established: the discrete Dirichlet energy approximates the corresponding continuous integral with a first-order error over the grid step. Additionally, a formal decomposition of the overall uncertainty into epistemic and data-supported components was introduced, which can be used in further analysis of the method's behavior and the development of extensions. Computational experiments were performed on three medical image datasets (ACDC, Synapse, CHAOS) with 10 independent initializations. In the main comparison with the baseline model trained using cross-entropy, the differences are statistically significant across initializations on all datasets; for ACDC, significance at the patient level was further confirmed. The method improves segmentation quality and improves the calibration of probability estimates with an overhead of approximately 17 %. In the task of detecting pixel-level segmentation errors, the uncertainty map achieves an AUROC of 0.891.
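The closed-form uncertainty computation from Dirichlet parameters can be sketched for a single pixel. This is an illustration of the general idea (predictive entropy split into an epistemic part and an expected-entropy part via the digamma identity), and the paper's exact decomposition into epistemic and data-supported components may differ in detail; the digamma implementation is a standard recurrence-plus-asymptotic-series approximation.

```python
import math

def digamma(x):
    """Digamma via recurrence + asymptotic series (accurate for x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def dirichlet_uncertainty(alpha):
    """Closed-form uncertainty for one pixel's Dirichlet parameters.

    total     = entropy of the expected class distribution,
    data      = E[H(p)] under Dirichlet(alpha), using the identity
                E[H(p)] = -sum_k (a_k/S) * (digamma(a_k+1) - digamma(S+1)),
    epistemic = total - data (mutual information).
    """
    S = sum(alpha)
    p = [a / S for a in alpha]
    total = -sum(pk * math.log(pk) for pk in p if pk > 0)
    data = -sum((a / S) * (digamma(a + 1.0) - digamma(S + 1.0)) for a in alpha)
    return total, total - data, data
```

No sampling or ensembling is needed: one set of predicted parameters yields all three maps, which is the efficiency argument made in the abstract.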

Keywords: image segmentation, neural network methods, Dirichlet distribution, uncertainty estimation, calibration, Dirichlet energy, edge-aware regularization, asymptotic sampling accuracy

Simulation model for managing laboratory staff workload during a pandemic using the AnyLogic platform

2026. T.14. № 4. id 2203
Donsckaia A.R.  Lomakin A.S.  Zubkov A.V.  Orlov D.V.  Nazarov N.O.  Kovaleva E.S. 

DOI: 10.26102/2310-6018/2026.55.4.003

The sharp increase in the burden on healthcare systems during the COVID-19 pandemic has shown the inefficiency of traditional methods of calculating labor productivity based on mathematical formulas. Such formulas do not take into account the dynamics of work processes or the planning of labor resources, equipment, and floor space. This leads to inefficient load distribution, as became evident when clinical laboratories had to process thousands of samples for PCR testing every day. The aim of the research is to develop and analyze a method for workload planning using simulation modeling in AnyLogic, which allows visualizing and optimizing laboratory processes. The tasks include an analysis of existing approaches, a description of the methodology, application using the example of a PCR laboratory, and an assessment of the benefits in a pandemic. The proposed approach includes timekeeping of technological processes, data collection in tabular form, and creation of a digital laboratory model to identify bottlenecks and equipment and personnel downtime. Using the example of a PCR laboratory, the possibility of optimizing resources, calculating maximum productivity, and justifying purchases is demonstrated. The method makes it possible to increase the efficiency of laboratory production in situations of unpredictable demand, minimizing the risks of disruptions and financial losses.

Keywords: simulation modeling, AnyLogic, workload planning, laboratory production, COVID-19 pandemic

Optimal control of finite increments of model factors based on sensitivity analysis

2026. T.14. № 2. id 2202
Sysoev A.S. 

DOI: 10.26102/2310-6018/2026.53.2.010

The article addresses the topical inverse problem of target-oriented control: determining the necessary finite changes to the system's input factors to achieve a desired target state, as opposed to the classical direct problem of forecasting. To solve it, a new methodological approach is proposed. This approach is based on sensitivity analysis utilizing the Lagrange mean value theorem. This framework allows for moving beyond local linearization to precisely account for nonlinear effects and factor interactions under substantial, practically observed changes. The key scientific result is the development of a universal iterative algorithm, which, for a given mathematical model, determines the vector of finite changes for the controllable factors that ensures the required increment in the output indicator with minimal total cost of the introduced changes and within given constraints. At each iteration step, the model's gradient (sensitivity estimate) is computed at an intermediate point, whose position is sequentially refined, and an auxiliary constrained optimization problem is solved. The practical efficiency and operability of the proposed method are verified using a numerical example with the nonlinear Ishigami model. The algorithm successfully found the optimal control action, ensuring high accuracy in achieving the target.
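The iterative scheme can be sketched numerically on the Ishigami function mentioned in the abstract. This sketch solves the simplest variant of the problem (a minimal-norm increment with no cost weights or box constraints, unlike the paper's constrained formulation): at every iteration the gradient is evaluated at an intermediate point refined toward the midpoint of the current increment, per the mean-value identity dy = grad f(xi) . dx.

```python
import math

A, B = 7.0, 0.1  # standard Ishigami constants

def ishigami(x):
    """Ishigami test function, a common nonlinear benchmark."""
    return math.sin(x[0]) + A * math.sin(x[1]) ** 2 + B * x[2] ** 4 * math.sin(x[0])

def grad(f, x, h=1e-6):
    """Central-difference gradient estimate (sensitivity vector)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def finite_change(f, x0, target_dy, iters=60):
    """Find a small-norm factor increment dx with f(x0+dx) - f(x0) = target_dy."""
    n = len(x0)
    dx = [0.0] * n
    for _ in range(iters):
        xi = [x0[i] + 0.5 * dx[i] for i in range(n)]      # intermediate point
        g = grad(f, xi)
        gg = sum(v * v for v in g)
        # remaining gap to the target output increment, measured exactly
        r = target_dy - (f([x0[i] + dx[i] for i in range(n)]) - f(x0))
        # minimal-norm correction along the mean-value gradient
        dx = [dx[i] + g[i] * r / gg for i in range(n)]
    return dx
```

Because the residual is re-measured with the true nonlinear model at every step, the scheme accounts for curvature that a single local linearization would miss.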

Keywords: inverse control problem, sensitivity analysis, finite change analysis, Lagrange mean value theorem, constrained optimization

Application of artificial intelligence methods to analyze human behavioral biometrics in ensuring the security of complex information systems

2026. T.14. № 2. id 2201
Shelestova O.V.  Kochkarov A.A. 

DOI: 10.26102/2310-6018/2026.53.2.015

This article examines the application of artificial intelligence methods and technologies to analyzing human behavioral biometrics in the security of complex information systems. The relevance of the study stems from the limitations of traditional authentication mechanisms, which focus primarily on the initial stage of a user session and are ineffective in detecting user impersonation during interaction with the system. An alternative approach is proposed, using user behavioral characteristics to continuously assess trust in the current session. The paper analyzes anonymized text input data on a mobile device, reflecting the temporal and structural features of user interaction with the interface. It is shown that the combination of such characteristics allows for the identification of stable behavioral patterns suitable for user profiling. Using dimensionality reduction and cluster analysis methods, typical behavioral profiles are identified, differing in input style and rhythm, as well as the nature of corrections. Cluster membership is established to be maintained across multiple sessions with acceptable variability in individual characteristics. A risk-based approach to assessing behavioral deviations is proposed, based on comparing current behavioral indicators with a typical cluster profile. The study's results confirm the feasibility of using cluster behavioral profiles in risk-based access control systems and can be used in the design and development of continuous authentication mechanisms in complex information systems.
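The risk-based comparison of a session against its typical cluster profile can be sketched as follows. The feature set here is hypothetical (mean inter-key interval in ms, its standard deviation, and correction rate), as are the thresholds; the sketch only illustrates the idea of scoring deviation from a cluster centroid and routing access decisions by risk.

```python
def zscore_risk(session, centroid, spread):
    """Mean absolute z-score of current-session features against the
    user's typical cluster profile (centroid, per-feature spread)."""
    return sum(abs((s - c) / d)
               for s, c, d in zip(session, centroid, spread)) / len(session)

def access_decision(risk, soft=1.5, hard=3.0):
    """Route the session by behavioral risk; thresholds are illustrative."""
    if risk < soft:
        return "trusted"
    if risk < hard:
        return "step-up auth"    # e.g. ask for a second factor
    return "block"
```

A session that drifts moderately from the profile triggers step-up authentication rather than an immediate block, which matches the continuous-trust framing of the abstract.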

Keywords: behavioral biometrics, information security, artificial intelligence, machine learning, cluster analysis, continuous authentication, user behavior analysis

Agent-based approach to intelligent search in library systems

2026. T.14. № 2. id 2199
Rzyankin I.S.  Baryshev R.A.  Guchko A.A. 

DOI: 10.26102/2310-6018/2026.53.2.008

The article explores the application of an agent-based Retrieval-Augmented Generation (Agentic RAG) approach to intelligent search tasks in library collections. The object of the study is the Agentic RAG architecture, which integrates information retrieval mechanisms with agent-based planning and self-evaluation of intermediate results. The addressed problem concerns the limitations of classical Retrieval-Augmented Generation in handling complex thematic and contextual queries within semantically rich library data environments. Unlike traditional RAG pipelines, the agent-based architecture enables iterative refinement of search strategies, adaptive decision-making, and reassessment of intermediate outcomes. The research methodology is based on the development of a software prototype implementing Agentic RAG and its experimental comparison with a classical RAG baseline using a real university library corpus comprising bibliographic metadata, annotations, and full-text fragments. The evaluation framework includes standard information retrieval metrics (Precision@k, Recall@k, MRR, nDCG) as well as expert-based assessment of answer relevance. The results demonstrate a consistent superiority of Agentic RAG in terms of retrieval accuracy, recall, and ranking quality, particularly for complex queries. However, the interpretation of findings is constrained by the selected evaluation metrics and the characteristics of the experimental corpus. The practical significance lies in the potential integration of agent-based architectures into library information systems without requiring substantial infrastructural changes.
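Two of the evaluation metrics named above are simple enough to state exactly; the following sketch implements Precision@k and the per-query reciprocal rank (MRR is its mean over queries). The document/relevance representation is an assumption for illustration.

```python
def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

def reciprocal_rank(relevant, ranked):
    """1/rank of the first relevant document (0.0 if none retrieved)."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0
```

Averaging `reciprocal_rank` over a query set gives MRR, the metric used in the comparison between Agentic RAG and the classical RAG baseline.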

Keywords: agent-based search, Retrieval-Augmented Generation, library information systems, intelligent search, semantic search, neural network technologies, agent architectures

Ontology-based approach to predicting consumer purchasing behavior in e-commerce

2026. T.14. № 2. id 2196
Svyatov R.S. 

DOI: 10.26102/2310-6018/2026.53.2.018

The relevance of this study is determined by the need to improve the accuracy and interpretability of models for predicting consumer purchasing behavior in online stores. Existing machine learning methods demonstrate high performance; however, their effectiveness largely depends on the composition and structure of the feature space, which is typically formed empirically and does not reflect the causal relationships between user actions. This study aims to develop a purchasing behavior prediction method based on an ontological analysis of the e-commerce domain. A formalized approach is proposed for describing entities and their interrelations, providing a systematic construction of the feature space and enabling its scalability across various online stores. The gradient boosting algorithm CatBoost was employed as the machine learning tool, trained on data obtained from the Yandex.Metrica web analytics system. The proposed method was tested on five online stores with different thematic focuses. Experimental results demonstrated stable quality metrics, with F-scores ranging from 65 % to 83 %, confirming the applicability and reproducibility of the developed approach. The findings have practical significance for the development of intelligent decision support systems in e-commerce and can be utilized in designing scalable analytical platforms for predicting user activity and purchase conversion.

Keywords: machine learning, ontology analysis, user behavior analysis, e-commerce, consumer behavior prediction, online stores

A comparative study of deep learning architectures for interpretable diagnosis of retinal diseases

2026. T.14. № 2. id 2195
Miroshnichenko V.V.  Kashirina I.L. 

DOI: 10.26102/2310-6018/2026.53.2.016

Interpretability of deep learning decisions remains a critical requirement for their application in medical diagnostics. This study presents a comparative analysis of three modern neural network architectures, Vision Transformer (ViT), Swin Transformer, and ConvNeXt, for multiclass classification of retinal diseases using optical coherence tomography (OCT) images. The research was conducted on the open OCTDL dataset containing 2,064 images across seven diagnostic categories with pronounced class imbalance. To compensate for this imbalance, a loss function weighting strategy was employed. All three models achieved validation accuracy exceeding 0.91, with ConvNeXt demonstrating the best performance (0.945) and an optimal balance of sensitivity and specificity, particularly for rare pathologies. Model interpretability was evaluated using Grad-CAM, attention weight visualization, and the model-agnostic LIME method. The analysis revealed that ConvNeXt combined with Grad-CAM provides the most reliable localization of clinically significant features, whereas ViT attention maps and Swin Transformer activation maps often appeared blurred or focused on non-informative regions. The results confirm the advantage of ConvNeXt as the most promising architecture for clinical deployment in ophthalmological diagnostics, owing to its combination of high accuracy, interpretability, and moderate computational requirements.

Keywords: deep learning, Vision Transformer, Swin Transformer, ConvNeXt, retinal diseases, Grad-CAM

Application of an actuator model to improve the performance of an unmanned aerial vehicle lateral g-load control system

2026. T.14. № 3. id 2194
Smirnov V.A.  Orlov V.P. 

DOI: 10.26102/2310-6018/2026.54.3.004

This article examines the problem of improving the performance and accuracy of g-load control loops for highly maneuverable unmanned aerial vehicles. It is noted that traditional approaches based on a full range of physical sensors and linearized models lead to design complexity and are insufficient to compensate for significant aerodynamic nonlinearities and parameter spreads. The proposed solution is a transition to model-based control, replacing the steering actuator position sensor signal with the output signal of its virtual mathematical model. The study aims to develop the structure of an astatic loop implementing this approach. A three-loop system with an integral angular velocity stabilizer and compensation for the nonlinear torque characteristic is presented, ensuring astatic control without additional integrating links. To implement the approach in practice, the introduction of correcting devices that take the total phase delays in the high-frequency loop into account is proposed. The effectiveness of the solution is demonstrated using statistical modeling with random variations in the system parameters. It is shown that replacing a real actuator signal with its model does not lead to a statistically significant deterioration in the quality of transient processes, which confirms the possibility of increasing the speed and reliability of the system while simultaneously simplifying its hardware implementation.
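The core idea of substituting a sensor signal with a virtual model can be illustrated with the simplest possible actuator model. This is a deliberately reduced sketch, a first-order lag discretized with explicit Euler; the paper's model additionally compensates the nonlinear torque characteristic, which is omitted here.

```python
def virtual_actuator(commands, tau, dt):
    """First-order lag model standing in for the actuator position sensor:
    pos' = (cmd - pos) / tau, integrated with explicit Euler."""
    pos, trace = 0.0, []
    for cmd in commands:
        pos += (cmd - pos) * dt / tau   # one Euler step of the lag dynamics
        trace.append(pos)
    return trace
```

Fed the same command stream as the real actuator, the model's output can replace the position feedback signal in the control loop, removing one physical sensor from the hardware.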

Keywords: control system, lateral g-load, astatic loop, stabilization actuator, model-based control

Improving traffic quality of service in hybrid networks with cloud and fog layers

2026. T.14. № 3. id 2193
Glushak E.V.  Mikhailova P.D. 

DOI: 10.26102/2310-6018/2026.54.3.001

Improving the quality of service (QoS) in hybrid networks with cloud and fog layers is an urgent task in the modern development of telecommunication systems. As the volume of data transferred increases, traditional resource management methods become insufficiently effective. Hybrid networks combining cloud and fog computing can significantly improve performance and reduce latency. An urgent task is to ensure a balance between high throughput, minimal delays and low packet loss. Efficient resource allocation helps to reduce energy consumption and operating costs. The article is devoted to optimizing the quality of traffic service in hybrid networks combining cloud and fog computing. A mathematical model based on a system of differential equations is presented that describes the dynamics of load, queues, resource allocation, delays, and packet losses. The model formalizes the task of optimal resource management in order to minimize delays and losses under limited capacity. Numerical integration methods are used for the solution. The developed algorithm makes it possible to effectively balance the load between the cloud and fog layers. The proposed approach proves its effectiveness for optimizing modern telecommunication systems, especially for applications with critical response time requirements.
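The numerical-integration idea can be sketched with a toy one-equation version of such a model: queue length evolves as arrivals minus the combined service capacity of the two layers, integrated with explicit Euler. The load split, rates, and single-queue reduction are illustrative assumptions, not the paper's full system of differential equations.

```python
def simulate_queue(lam, mu_fog, mu_cloud, r_fog, t_end=10.0, dt=0.01):
    """Explicit-Euler integration of a toy queue ODE for a two-layer
    network: dq/dt = lam(t) - capacity, where a share r_fog of traffic
    is handled at the fog layer and the rest at the cloud layer."""
    q, t = 0.0, 0.0
    capacity = r_fog * mu_fog + (1.0 - r_fog) * mu_cloud
    while t < t_end:
        q = max(0.0, q + (lam(t) - capacity) * dt)  # queue cannot go negative
        t += dt
    return q
```

With a balanced split the combined capacity exceeds the arrival rate and the queue stays empty; pushing all traffic to the slower layer makes the backlog grow linearly, which is the trade-off the optimization targets.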

Keywords: hybrid networks, cloud computing, fog computing, quality of service (QoS), traffic optimization, load balancing, data latency, layered architecture, resource allocation, routing

Optimization-simulation modeling for resource allocation management in geographically distributed organizational systems with variable workloads

2026. T.14. № 4. id 2191
Boklashov I.I.  Ivanov D.V.  Lvovich Y.E. 

DOI: 10.26102/2310-6018/2026.55.4.008

This paper addresses the integration of optimization approaches and simulation modeling to manage resource allocation within an organizational system characterized by a geographically distributed operational environment and variable activity volumes. The research methodology employs a systems approach, utilizing structural modeling to represent the organization's functioning and management. By structuring the interaction between the control center and operational units, the study establishes quantitative connection characteristics, which are recorded via the system's digital monitoring. The core component of this optimization-simulation model involves the multi-alternative selection of priority units for integrated resource allocation, subject to balance constraints and a stochastic flow of requests defining work requirements. Variable activity volumes are accounted for through a multi-period distribution of integrated resources. Consequently, the set of candidate units for the subsequent period includes those excluded from the optimized subset in the previous step, alongside a random component determined by the simulation results. The study demonstrates that single-period optimization utilizes real-time data to identify priority units for resource allocation. Furthermore, the multi-period optimization-simulation process generates sufficient synthetic data on resource demand; when combined with retrospective monitoring data, this forms a representative training dataset for machine learning predictive models. Finally, the paper defines management decisions supported by these predictive models for both the operational and developmental stages of the organizational system.

Keywords: organizational system, management, optimization, simulation modeling, machine learning, forecasting

A method for implementing pseudo-realistic movement of non-player characters in open virtual worlds

2026. T.14. № 3. id 2190
Shutov K.I.  Lobanov A.A. 

DOI: 10.26102/2310-6018/2026.54.3.006

The open-world game market increasingly demands NPC (non-player character) behaviour that feels believable yet remains designer-controllable under tight computational budgets. Common solutions tend to be extreme: either they attempt full simulation and overload the system, or they rely on predictable scripted patterns. This paper proposes a pseudo-realistic NPC movement method that bridges these extremes. The core idea is to verify spawn reachability using a matrix of shortest-path distances between world areas. When the player enters an area, the algorithm selects only those NPCs that could have physically reached it given elapsed time, movement speed and available routes, making an encounter consistent with hidden travel rather than instantaneous spawning. Encounter frequency is controlled via a priority scheme, allowing designers to tune event density and the rarity of specific characters without maintaining a detailed simulation. Candidate selection is further accelerated by reordering an almost-sorted list, reducing the cost of repeated queries under similar conditions. Experiments on synthetic graphs show that the core client-side runtime stays within milliseconds for up to 1000 NPCs. The method delivers believability and control at low computational cost and can be integrated into existing engines to adjust difficulty and balance.
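The reachability check at the heart of the method can be sketched directly: precompute all-pairs shortest distances between world areas (Floyd-Warshall here, as one standard choice), then, when the player enters an area, keep only NPCs whose travel budget covers the distance. The NPC record layout is an assumption for illustration.

```python
INF = float("inf")

def shortest_path_matrix(n, edges):
    """Floyd-Warshall all-pairs distances for an undirected area graph;
    edges is a list of (u, v, weight) tuples over areas 0..n-1."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def reachable_npcs(npcs, dist, target_area, elapsed):
    """Keep NPCs that could have walked to target_area in the elapsed
    time, making an encounter consistent with hidden off-screen travel."""
    return [npc for npc in npcs
            if dist[npc["area"]][target_area] <= npc["speed"] * elapsed]
```

The matrix is computed once per world layout, so the per-encounter cost is a single comparison per candidate NPC, consistent with the millisecond-scale runtimes reported for up to 1000 NPCs.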

Keywords: game design, game development, video games, pathfinding algorithm, sorting algorithm, NPC, non-player character

Privacy-preserving threat intelligence sharing across government agencies using FEGB-Net

2026. T.14. № 3. id 2189
Arm A.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2026.54.3.011

Government networks are increasingly targeted by coordinated cyberattacks that exploit similarities in infrastructure and operational practices across agencies. Although early detection at one organization could provide valuable warnings to others, effective threat intelligence sharing is often constrained by data sovereignty and privacy regulations. This paper presents an extension of the federated ensemble graph-based network (FEGB-Net) framework that enables privacy-preserving threat intelligence sharing across government agencies. The proposed approach extracts compact behavioral threat signatures from locally trained federated graph neural network models, protects these signatures using differential privacy, and supports real-time cross-agency threat matching. Experimental evaluation using the CICIDS2017 dataset demonstrates that detection accuracy remains comparable to isolated operation, while coordinated attack detection time is reduced by up to 88.5 %. Privacy analysis confirms that ε-differential privacy with ε = 2.0 limits membership inference attacks to near-random success. The results show that collaborative defense can be achieved without compromising data privacy or sovereignty.
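The privacy-protection step can be illustrated with the standard Laplace mechanism: each component of a shared signature gets Laplace(sensitivity/epsilon) noise, satisfying epsilon-differential privacy for the stated sensitivity. This is a generic sketch; FEGB-Net's actual signature format and noise calibration are not reproduced here, and the sensitivity value is an assumption.

```python
import math
import random

def privatize_signature(signature, epsilon, sensitivity=1.0, rng=random):
    """Laplace mechanism: add Laplace(sensitivity/epsilon) noise to each
    component of a behavioral threat signature before sharing it."""
    scale = sensitivity / epsilon
    noisy = []
    for v in signature:
        u = rng.random() - 0.5                       # uniform on [-0.5, 0.5)
        # inverse-CDF Laplace sample; max() guards against log(0)
        noise = -scale * math.copysign(1.0, u) * math.log(max(1e-12, 1.0 - 2.0 * abs(u)))
        noisy.append(v + noise)
    return noisy
```

With the epsilon = 2.0 reported in the abstract, the noise scale is sensitivity/2, small enough to preserve signature matching while bounding what a membership-inference adversary can learn.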

Keywords: federated learning, threat intelligence sharing, graph neural networks, differential privacy, government cybersecurity