DOI: 10.26102/2310-6018/2025.51.4.012
This article presents an innovative mathematical model for generating 12-lead electrocardiograms (ECG), based on a fundamentally novel approach to accounting for spatial dependencies between leads. The primary scientific contribution of this research lies in the development of a method utilizing linear transformation of a set of physiologically grounded basis signals representing projections of the heart's electric field, supplemented with correlated noise that accurately simulates real clinical interference. Unlike traditional generative models (VAE, GAN, DiffECG), which operate as "black boxes", the proposed model enables explicit control over the morphology of key waveforms (P, QRS, T) and strict adherence to physiological constraints, including Kirchhoff's laws for limb leads. This ensures anatomical consistency of signals across all 12 leads, an achievement not previously attained in similar studies. The model demonstrated high performance on the PhysioNet PTB-XL dataset: MSE = 0.015, cosine similarity = 0.94, F1-score = 0.88 for normal rhythms and 0.82 for arrhythmias. A significant advantage of the model is its computational efficiency (generation time 50 ms) and relatively low memory requirements (2.5 GB). Comparative analysis with contemporary generative models (VAE, GAN, CardioDiff) revealed the superiority of the proposed approach in interpretability, parameter control, and physiological authenticity of synthesized signals. The developed model opens new possibilities for creating high-quality synthetic ECG data essential for training AI-based medical diagnostic systems, as well as for applications in telemedicine and medical education. The integration of physical modeling with machine learning presents particular value for researchers and clinicians requiring interpretable and clinically reliable ECG generation tools.
Keywords: electrocardiogram, spatial dependencies, generative models, interpretability, physiological modeling, synthetic ECG data, machine learning in cardiology
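The lead-dependency idea described above can be illustrated with a minimal numpy sketch. The basis signals, mixing weights, and noise levels below are purely illustrative assumptions, not the paper's fitted parameters; only the Einthoven and Goldberger identities for the limb leads are standard physiology:

```python
import numpy as np

def gaussian_wave(t, center, width, amp):
    """One ECG waveform component (P, QRS or T) as a Gaussian bump."""
    return amp * np.exp(-((t - center) ** 2) / (2 * width ** 2))

def synth_12_lead(n=500, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n)                      # one cardiac cycle
    # Basis signals: hypothetical projections of the cardiac electric field.
    basis = np.stack([
        gaussian_wave(t, 0.2, 0.025, 0.15)            # P wave
        + gaussian_wave(t, 0.45, 0.01, 1.0)           # QRS complex
        + gaussian_wave(t, 0.7, 0.05, 0.3),           # T wave
        gaussian_wave(t, 0.45, 0.012, 0.8),
        gaussian_wave(t, 0.7, 0.06, 0.25),
    ])
    # Independent leads I and II as linear mixes of the basis
    # (weights are illustrative, not the paper's fitted values).
    lead_I = np.array([0.8, 0.1, 0.1]) @ basis
    lead_II = np.array([0.6, 0.3, 0.1]) @ basis
    leads = {"I": lead_I, "II": lead_II,
             "III": lead_II - lead_I,                 # Einthoven: III = II - I
             "aVR": -(lead_I + lead_II) / 2,          # Goldberger identities
             "aVL": lead_I - lead_II / 2,
             "aVF": lead_II - lead_I / 2}
    # Precordial leads V1..V6 from an illustrative mixing matrix.
    W = rng.uniform(-0.5, 1.0, size=(6, 3))
    for i, v in enumerate(W @ basis):
        leads[f"V{i + 1}"] = v
    # Correlated noise: one shared baseline source plus per-lead noise.
    shared = 0.01 * rng.standard_normal(n)
    for k in leads:
        leads[k] = leads[k] + shared + 0.005 * rng.standard_normal(n)
    return leads

ecg = synth_12_lead()
```

Because the limb leads are derived rather than sampled independently, the Einthoven relation holds across the generated signals up to the injected noise.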
DOI: 10.26102/2310-6018/2025.51.4.010
The article presents a study and comparative analysis of modern access control models used in telecommunication systems. Three main models are considered: role-based access control (RBAC), attribute-based access control (ABAC), and privilege-based access control (PBAC). The bank's telecommunications infrastructure, including 800 workstations, 200 servers, 800 employees in the office area, and a data center with 50 servers processing critical applications, is used as an example. The bandwidth between the offices and the data center is 10 Gbit/s, and in the public area it is 1 Gbit/s. Active Directory with Kerberos support and a SIEM monitoring system are used to ensure security. The study assessed performance metrics such as response time, throughput, and resilience to peak loads. A security experiment was conducted that tested attack resilience, response flexibility, and protection levels under various system operating scenarios: under daily loads reflecting typical employee work; under peak loads occurring during periods of high resource usage (e.g., at the end of a reporting period); and under emergency loads associated with security incidents or equipment failures. This approach allowed us to identify differences in the effectiveness of access models in real operational situations.
Keywords: access control models, telecommunication systems, role-based access control model, attribute-based access control model, authority-based access control model
DOI: 10.26102/2310-6018/2025.51.4.008
The paper considers standards and approaches to ensuring the security of critical information infrastructure objects as applied to banking organizations. The aspects studied include organizational structure and management, which affect the level of security through the degree of personnel training, the distribution of roles and powers, and the organization's readiness to recover from security incidents. Based on the internal audit methodology used by banking organizations to keep the security of information infrastructure objects at a sufficient level, a model is proposed that takes into account expert assessments of organizational-structure and management indicators. Directions for improving the method are shown: it is proposed to take into account the hierarchy of security requirements and to use logical rules in expert assessment, on the basis of which an improved model is built. As a result, a hierarchy of partial indicators is constructed from their verbal formulations, data are modeled, and the level of information security is assessed using the proposed approaches. The practical value of the work lies in the possibility of improving the internal audit activities of banking entities so as to ensure a sufficient level of security of critical information infrastructure objects.
Keywords: ensuring information security, security requirements indicators, objects protection level, banking system organization, conformity assessment methodology, critical information infrastructure
DOI: 10.26102/2310-6018/2025.51.4.011
The article presents a system for estimating the duration of the software development life cycle based on artificial intelligence technologies. An analysis of existing approaches to estimating labor costs and development time is presented, on the basis of which neural network technologies are substantiated as the most promising direction for solving forecasting problems under uncertainty. The main groups of factors influencing the duration of the development process are identified and classified: technical, organizational, team, historical, resource, and external. Based on these factor classes, the distributions of the input parameters used for training the neural networks, as well as their hyperparameters, are determined. The architectural characteristics of the neural networks studied in the experiments are given: the number of layers, the types of activation functions, the optimization methods, and the control parameters. An algorithm for duration estimation has been developed and implemented as a software system that provides operational forecasting of project development duration based on the analysis of historical data and current project analytics. An example of estimating development time with the developed system is given, and the results are compared with an expert assessment. The proposed system reduces the time needed to analyze project duration and increases the accuracy of the estimate in comparison with traditional methods.
Keywords: neural network, software development life cycle, time estimating, software system, software engineering
DOI: 10.26102/2310-6018/2025.50.3.048
In many applied fields, the challenge of making optimal decisions is frequently transformed into discrete optimization problems. A common approach to solving such problems involves the use of evolutionary algorithms. While these methods have proven to be effective, they demand careful adjustment of parameters for each particular task and are usually examined separately, without exploring possibilities for their cooperative use or dynamic interchange. Moreover, existing studies have been limited to relatively low-dimensional problems, which has hindered the evaluation of algorithm scalability in real-world large-scale tasks (involving up to thousands of variables). This article aims to refine the set of effective configurations for evolutionary algorithms to optimize the performance of a developed intelligent algorithm-switching system. A comparative analysis of configurations for four classes of evolutionary algorithms – genetic, ant colony, bee colony, and simulated annealing – was conducted. Experiments were performed on high-dimensional test problems (up to 20000 points). The primary research methods included comparison and grouping of results, as well as analysis of computational experiment series to assess algorithm scalability and robustness against the "curse of dimensionality". In prior experiments with low-dimensional problems, differences in algorithm configurations were barely noticeable, whereas significant performance disparities emerged in high-dimensional tasks. As a result, optimal configurations for each algorithm class were identified. The findings hold practical value for developing automated decision-support systems in logistics, manufacturing, and other engineering applications requiring reliable and scalable optimization tools.
Keywords: discrete optimization, evolutionary algorithms, supply chain modeling, production scheduling, ant colony algorithm, genetic algorithm
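One of the four algorithm classes compared above, simulated annealing, can be sketched generically; the cooling rate and step budget play the role of the configuration parameters whose tuning the study investigates. The toy objective and all parameter values below are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing for discrete minimisation.

    cost: objective to minimise; neighbor: random local move; x0: start
    point. t0, cooling and steps are the kind of configuration parameters
    compared across algorithm classes in the study.
    """
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Accept improvements always; worse moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy high-dimensional binary problem: minimise the number of zero bits.
n = 200
cost = lambda bits: bits.count(0)

def flip_one(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

start = [0] * n
best, fbest = simulated_annealing(cost, flip_one, start)
```

On high-dimensional instances like this, the choice of cooling schedule dominates solution quality, which matches the article's observation that configuration differences only become visible at scale.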
DOI: 10.26102/2310-6018/2025.50.3.046
The paper presents a study on forecasting customer satisfaction in an insurance company based on machine learning methods. The relevance of the topic is due to the high competition in the insurance market and the need to retain customers by increasing their satisfaction with the service. The purpose of the study is to evaluate the accuracy and performance of models that can predict the level of customer satisfaction with an insurance service based on data on the customer's interaction with the company. Classification algorithms were used as methods. The accuracy and performance of the models were assessed using real data from surveys of insurance company customers. The best results were achieved by ensemble methods, random forest and gradient boosting, which demonstrated satisfaction forecasting accuracy of up to 85%, significantly outperforming simpler models. It is shown that gradient boosting takes into account nonlinear dependencies among factors, such as the presence of an escalation of the appeal, and thereby identifies "dissatisfied" customers more accurately. Currently, such forecasting in insurance companies is either not carried out or relies significantly on chance. This leads either to excessively frequent complaints or to low customer satisfaction and subsequent churn. The materials of the article are of practical value for insurance organizations: the implementation of the developed models will make it possible to promptly identify customers at risk of dissatisfaction and to apply well-grounded preventive measures, for example, additional service measures or compensation, to increase their satisfaction.
Keywords: customer satisfaction, insurance company, machine learning, prediction, gradient boosting, model accuracy
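A hedged sketch of the kind of gradient-boosting classifier the study applies, trained here on synthetic data with a deliberately nonlinear escalation effect. The feature names, coefficients, and sample sizes are invented for illustration; the actual study uses real customer survey data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
wait_days = rng.exponential(5.0, n)          # claim handling time
escalated = rng.integers(0, 2, n)            # appeal was escalated
payout_ratio = rng.uniform(0.0, 1.0, n)      # paid / claimed

# Nonlinear ground truth: escalation hurts mostly when handling is also slow
# (an interaction effect that linear models miss but boosting captures).
p_dissatisfied = 1 / (1 + np.exp(-(0.3 * wait_days * escalated
                                   - 3.0 * payout_ratio + 0.5)))
y = (rng.uniform(size=n) < p_dissatisfied).astype(int)
X = np.column_stack([wait_days, escalated, payout_ratio])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   learning_rate=0.05, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

The depth-3 trees let the model learn the wait-time-by-escalation interaction directly, which is the mechanism the abstract credits for more accurate identification of dissatisfied customers.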
DOI: 10.26102/2310-6018/2025.50.3.045
The article is devoted to the development of a resource-oriented technology for organizing an information process of computational resource distribution under conditions of integrating the concepts of the Internet of Things (IoT) and edge computing. During the research, an analysis of existing models and methods was conducted and their shortcomings were identified, namely: the lack of consideration of the resource cost of data transit for computing nodes involved in data transmission and the computing process and the lack of consideration of the resource costs required for the operation of distributing computing resources. Given the limited resources of devices at the network edge, these drawbacks are particularly relevant. The goal of this study is to minimize resource consumption during resource distribution and solving computational tasks within systems constrained by device limitations. The foundation of the proposed technology includes: a general mathematical model of the resource allocation process, formulated as an optimization problem; proposed methods for solving said problem based on heuristic rules and meta-heuristics; algorithms for calculating the resource cost of data transit and migration of computational tasks, which serve auxiliary purposes within the developed methods; a repository of meta-heuristic algorithms used to select the optimal method for solving the resource distribution problem. This technology implements the distribution of computational resources while minimizing resource expenses associated with data transit, taking into account both the computational task itself and decision-making regarding resource allocation. It considers the resource constraints of devices and dynamic changes in load and network topology. Experimental modeling confirmed the effectiveness of applying the proposed technology.
Significant reductions in resource expenditure for computational resource distribution have been demonstrated, leading to improved results in terms of distributed computing efficiency metrics. The results of the study demonstrate the potential of the proposed technology for organizing distributed computing in systems with limited resources, such as IoT systems and edge computing.
Keywords: computing resource allocation, distributed computing, technology, resource costs optimization, distributed computing modelling
DOI: 10.26102/2310-6018/2025.50.3.044
In the context of the digitalization of education, the development of adaptive feedback mechanisms under multithreading conditions, which ensure the personalization of the interaction of participants in the educational process, is becoming a factor in increasing its effectiveness. The analysis of existing approaches and tools for personalizing learning routes under multithreading conditions, using university disciplines as an example, allowed us to formulate the research problem: insufficient automation of the educational process under multithreading conditions. The purpose of the article is to describe the development of a method for intelligent analysis of information with semantic text processing in the implementation of adaptive feedback between participants in a digital educational environment. The scientific novelty of the study consists in the development of an approach to the intelligent processing of free-form answers, which increases the efficiency of the educational process in a digital educational environment. The implementation of the stages of the intelligent information processing method in feedback with multi-format digital assessment is considered. The main stages of the method include: data preparation, linguistic preprocessing, semantic comparison, model training, feedback generation, and analysis of the results of interaction between participants in the educational process. In conclusion, an analysis of the results of applying the method in the educational process is given, using streamed university disciplines as an example.
Keywords: digital educational environment, adaptive feedback, natural language processing, distance learning system, tokenization, assessment metrics
DOI: 10.26102/2310-6018/2025.51.4.009
The rapid development of automation tools for programming is a key factor in the digital transformation of society. The purpose of this work is a comprehensive analysis of the evolution of automation tools, including high-level programming languages, structured and object-oriented programming, integrated development environments, low-code/no-code platforms and large language models. The study examines the principles of operation of generative artificial intelligence, its capabilities and limitations, as well as the specifics of Russian solutions in this area. Particular attention is paid to the challenges associated with the widespread introduction of automation: problems of intellectual property, security of generated code, transformation of the programmer's role and adaptation of educational programs. A conclusion is made about the formation of a new paradigm of joint work of humans and artificial intelligence in software development. The practical significance of the work is to provide developers and managers with structured information for making decisions on the implementation of automation tools, the choice of technologies and the assessment of associated risks.
Keywords: programming automation, generative artificial intelligence, large language models, history of programming, integrated development environments, low-code/no-code, DevOps, machine learning
DOI: 10.26102/2310-6018/2025.50.3.034
The article discusses a method for detecting DDoS attacks in digital ecosystems using tensor analysis and entropy metrics. Network traffic is formalized as a 4D tensor with the following dimensions: IP addresses, timestamps, request types, and countries of origin. The CP decomposition with rank 3 is used to analyze the data, which allows revealing hidden patterns in traffic. An algorithm for calculating the anomaly score (AS) is developed, which takes into account the factor loadings of the tensor decomposition and the entropy of time distributions. Experiments on real data have shown that the proposed method provides 92 % attack detection accuracy with a false positive rate of 1.2 %. Compared to traditional signature-based methods, the accuracy increased by 35 %, and the number of false positives decreased by 86 %. The method has proven effective in detecting complex low-rate attacks that are difficult to detect by standard methods. The results of the study can be useful for protecting various digital ecosystems, including financial services, telecommunication networks, and government platforms. The proposed approach expands the capabilities of network traffic analysis and can be integrated into modern cybersecurity systems. Further research could be aimed at optimizing the computational complexity of the algorithm and adapting the method to different types of network infrastructures.
Keywords: tensor analysis, DDoS attacks, cybersecurity, digital ecosystems, CP decomposition, entropy analysis, anomaly detection
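The full method above requires a rank-3 CP decomposition of the traffic tensor; the entropy component alone can be sketched with numpy. A flooding source concentrates its requests in a few time bins, so the Shannon entropy of its time distribution drops and the score below rises. The bin counts are illustrative, and this is only one ingredient of the article's anomaly score:

```python
import numpy as np

def time_entropy(counts):
    """Shannon entropy (bits) of a request-count histogram over time bins."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def anomaly_score(counts, n_bins=None):
    """Entropy-based score component: 0 for perfectly uniform activity
    over the window, 1 for maximally bursty (single-bin) activity."""
    n_bins = n_bins or len(counts)
    return 1.0 - time_entropy(counts) / np.log2(n_bins)

normal = np.full(24, 40.0)             # steady traffic, one day in hour bins
burst = np.zeros(24)
burst[3] = 960.0                       # same total volume fired in one hour
```

In the article this entropy term is weighted together with the factor loadings produced by the CP decomposition; low-rate attacks show up because their time distributions deviate from the learned factors even when raw volume looks normal.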
DOI: 10.26102/2310-6018/2025.51.4.007
Digitalization of education necessitates a formalized representation and systematic organization of the information flows that ensure effective interaction of participants in the educational process in the digital educational environment (DEE). The aim of the study is to model information flows based on an ontological representation of the interaction of a decision maker (DM) and feedback. An ontological model has been developed that reflects key classes and instances, identifies the relationships between them, and captures the semantics of the information flows circulating between DEE components. The article presents a decomposition of an instance of the "adaptive feedback algorithm" class of the ontological model of information flows. Digital tools operate in a single circuit of the educational environment, implementing a continuous cycle of assessment, analysis, feedback, and correction. An instance of the "unified test question bank" class of the ontological model, which incorporates artificial intelligence technologies for the automated checking of free-form answers under streamed learning conditions, enables variable, level-based assessment. Feedback implementation tools include an LMS, social networks, and a virtual information and communication assistant. The relationships among the tools added to the DEE are shown in the ontological model in the description of the information flows of the "DM – feedback" link. Applying the model considered in the article will make it possible to structure and unify the description of educational processes while automating digital footprint analysis. The conclusion presents the findings, including a decomposition of the ontological model using the example of the knowledge assessment process under digitalization and multithreading, with relations identified in the form of prerequisites between instances of the ontological model's classes.
Keywords: ontology, digital educational environment, distance learning system, information flows, educational technologies, class instances
DOI: 10.26102/2310-6018/2025.51.4.002
Unmanned trains are a key component of the next level of railway automation. Launching locomotives in unmanned mode requires the development of reliable computer vision systems using artificial intelligence technologies. The paper presents a method for improving the quality of learning convolutional neural networks for detecting railway infrastructure objects. The reliability of visual object detection by a computer vision system can be achieved through algorithmic expansion of training datasets. The proposed method takes into account the variability of weather conditions in which identical objects must be detected, and it allows generating image modifications with added effects of rain, snow or fog. The original dataset included 21700 annotated images and contained 7 classes of objects. Based on them, an extended set of 65100 images was formed using the developed method. To evaluate the effectiveness of the proposed approach, comparative learning of the advanced YOLOv11 model was carried out on the original and extended datasets. The F1-measure and mean average precision (mAP) metrics were used to compare the learning results. The results of the computational experiments confirm that using the extended dataset improves the quality of learning. In particular, the F1-measure for the YOLO model trained on the original dataset was 0.72, while on the extended dataset this parameter reached an increased value of 0.90. The value of the second used metric mAP (50–95) increased from 0.67 on the original dataset to 0.83 on the extended dataset. Comparative values of the metrics were obtained at the same confidence threshold of 0.8. The developed method has been implemented in a hardware and software system, which is ready for testing as part of an integrated control and safety system for freight trains.
Keywords: machine vision, machine learning, convolutional neural networks, YOLOv11, rail transport automation, unmanned transport
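Minimal numpy sketches of two of the weather effects used for dataset expansion. Production augmentation pipelines use more realistic photometric models, so the constants and the crude streak geometry below are placeholders, not the article's method:

```python
import numpy as np

def add_fog(img, density=0.5):
    """Blend the image toward a light gray haze; density in [0, 1]."""
    fog = np.full_like(img, 220.0)
    return (1 - density) * img + density * fog

def add_rain(img, n_streaks=200, length=9, brightness=40.0, seed=0):
    """Draw short bright vertical streaks as a crude rain overlay."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_streaks):
        x = rng.integers(0, w)
        y = rng.integers(0, max(h - length, 1))
        out[y:y + length, x] = np.minimum(out[y:y + length, x] + brightness,
                                          255.0)
    return out

base = np.full((120, 160), 100.0)      # stand-in for a grayscale frame
foggy = add_fog(base, density=0.4)
rainy = add_rain(base)
```

Applying two such modifications per source image turns the 21,700 original images into roughly the 65,100-image extended set the article describes, with object annotations reused unchanged since the geometry is preserved.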
DOI: 10.26102/2310-6018/2025.50.3.031
In the conditions of high competition for large modern companies producing mass products or providing mass services, it is typical to increase advertising costs, which does not always bring the expected effect. There is a growing need for tools for precise audience segmentation, which can increase the effectiveness of marketing communications. Traditional response prediction models do not allow us to determine whether the client's behavior has changed under the influence of marketing impact, which reduces the possibilities of constructive analysis of marketing campaigns. This article is aimed at studying uplift modeling as a tool for assessing the effect of increasing positive responses from communication and targeting optimization. The results of the study demonstrate significant advantages of the uplift modeling approach for identifying client segments with maximum sensitivity to impact. The comparative analysis of various approaches to building uplift models (such as SoloModel, TwoModel, Class Transformation, Class Transformation with Regression), based on the use of specialized uplift metrics (uplift@k, Qini AUC, Uplift AUC, weighted average uplift, Average Squared Deviation), conducted within the article, demonstrates the strengths and weaknesses of each of the modeling approaches. The study is based on open data X5 RetailHero Uplift Modeling Dataset, provided by X5 Retail Group for the study of uplift modeling methods in the context of retail.
Keywords: uplift modeling, machine learning, marketing communications, targeting, response evaluation, uplift model quality metrics
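The Class Transformation approach compared above rests on the identity uplift(x) = 2·P(Z = 1 | x) − 1, where Z = W·Y + (1 − W)(1 − Y) and the treatment flag W is assigned 50/50 at random. A synthetic two-segment check of that identity (the response rates are invented for illustration, not taken from the X5 dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
segment = rng.integers(0, 2, n)          # 0: insensitive, 1: persuadable
w = rng.integers(0, 2, n)                # treatment flag, 50/50 split

# Ground-truth response: segment 1 gains +0.30 from the communication,
# segment 0 gains nothing (illustrative numbers).
base = np.where(segment == 1, 0.20, 0.50)
lift = np.where(segment == 1, 0.30, 0.00)
y = (rng.uniform(size=n) < base + lift * w).astype(int)

# Class transformation: Z = 1 on treated responders and control
# non-responders.
z = w * y + (1 - w) * (1 - y)

def uplift_estimate(mask):
    """2*P(Z=1 | group) - 1 recovers the treatment effect in that group."""
    return 2 * z[mask].mean() - 1

u_persuadable = uplift_estimate(segment == 1)
u_insensitive = uplift_estimate(segment == 0)
```

In practice a classifier is trained to predict Z from features, turning any standard model into an uplift model; metrics such as uplift@k and Qini AUC then rank how well it isolates the persuadable segment.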
DOI: 10.26102/2310-6018/2025.50.3.042
This paper analyzes the features of the Modbus protocol, with an emphasis on its vulnerability in the context of security and protection of transmitted information. The main risks associated with the use of Modbus in automation and process control systems (APCS) are considered, including the lack of encryption and authentication mechanisms, which makes it vulnerable to various types of attacks, such as data interception or unauthorized access, along with options for solving the problem of node verification. The Modbus protocol is one of the most common and popular industrial protocols, actively used in automation systems and control of various technological processes. The protocol is easy to implement and widely supported, which makes it attractive for adoption in various industries. However, the RTU mode of the Modbus protocol has disadvantages, such as vulnerability to man-in-the-middle and substitution attacks, which carries potential risks for industrial enterprises using this protocol in production. The vulnerability is due to the lack of built-in authentication and verification mechanisms for nodes involved in data transmission. This creates risks associated with the possibility of unauthorized access and substitution of information during the exchange process. The article proposes a method for increasing confidentiality during interaction between nodes by implementing cryptographic operations that allow for verification of the authenticity of the source of transmitted data by implementing a lightweight cryptographic algorithm based on the XOR operation with a 16-bit secret. The advantage of the proposed method is its compatibility with the existing implementation of the Modbus protocol, minimal impact on system performance, and no need for deep modification of the architecture. It is also worth noting a slight increase in data transmission latency (less than 2 %) and processor time consumption.
Keywords: modbus RTU, man-in-the-middle, frame, cryptographic protection, industrial protocol
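The article does not publish its exact scheme, so the following is only a plausible sketch of a 16-bit XOR-folded verification tag appended to a Modbus RTU frame. As the comment notes, a static XOR mask is linear and offers no serious cryptographic strength; the sketch merely illustrates the low-overhead tagging idea:

```python
def xor16_tag(frame: bytes, secret: int) -> int:
    """Fold the frame into a 16-bit word and mask it with a 16-bit secret.

    Illustrative only: a static XOR mask is linear and gives no real
    cryptographic protection; this sketches, not reproduces, the article's
    lightweight scheme.
    """
    acc = 0
    for i, b in enumerate(frame):
        # Alternate bytes go into the high and low halves of the word.
        acc ^= b << (8 * (i % 2))
    return (acc ^ secret) & 0xFFFF

def append_tag(frame: bytes, secret: int) -> bytes:
    return frame + xor16_tag(frame, secret).to_bytes(2, "big")

def verify(tagged: bytes, secret: int) -> bool:
    frame, tag = tagged[:-2], int.from_bytes(tagged[-2:], "big")
    return xor16_tag(frame, secret) == tag

SECRET = 0xB3C1                     # shared 16-bit secret (assumed setup)
frame = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])   # Modbus read request
tagged = append_tag(frame, SECRET)
```

The two extra bytes per frame and the per-byte XOR loop are consistent with the article's claim of sub-2 % latency overhead and no change to the existing Modbus frame handling.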
DOI: 10.26102/2310-6018/2025.50.3.036
Recognition of license plates (LP) is one of the key tasks for intelligent transport systems. In practice, such factors as blur, noise, adverse weather conditions or shooting from a long distance lead to obtaining low-resolution (LR) images, which significantly reduces the reliability of recognition. A promising solution to this problem is the use of super-resolution (SR) methods capable of restoring high-resolution (HR) images from the corresponding LR versions. This paper is devoted to the research and development of a software package using neural network super-resolution models to improve the quality and accuracy of LP recognition. The software package implements the YOLO (You Only Look Once) neural network architectures for object detection, the SORT (Simple Online and Realtime Tracking) object tracking algorithm and super-resolution models to improve LP images. This approach ensures high accuracy of LP recognition even when working with images obtained in difficult shooting conditions characterized by low quality or resolution. The experimental results demonstrate that the proposed approach can improve the accuracy of LP recognition in low-resolution images. The image restoration quality was assessed using the PSNR and SSIM metrics, which confirmed the improvement of the visual characteristics of LP for the most effective models. The developed software package has a wide potential for practical application and can be integrated into various systems, for example, for access control to protected areas, traffic monitoring and analysis, automation of parking complexes, as well as part of solutions for ensuring public safety. The flexibility of the implemented architecture makes it possible to adapt the system to specific requirements with modifications, which emphasizes its versatility and practical significance.
Keywords: license plates recognition, computer vision, deep neural networks, superresolution, objects detection, objects tracking
DOI: 10.26102/2310-6018/2025.50.3.047
This article proposes an intelligent mivar decision-making system (MDMS) designed for the optimized distribution and transportation of cargo by groups of warehouse robots. This mivar decision-making system integrates three groups of different warehouse robots: the loader robot (RP), the transporter robot (RT), and the unloader robot (RR). The selection and determination of the state of each robot (loader robot, transporter robot, and unloader robot) are based on corresponding calculations performed using specially developed algorithms. These algorithms are based on a series of key equation systems, such as the transporter robot equation system, the loader robot equation system, the unloader robot equation system, and the command variable system. The equation systems take into account the robot's state, operational capability, ability to complete cargo transportation, compatibility for cargo transportation, etc. Additionally, the Manhattan distance is considered, which helps determine the robot's ability to complete its task. The article provides a detailed description of the equation systems and calculation algorithms, as well as a formalized description of the domain in which the mivar logical artificial intelligence system operates. The logical schematic of the MDMS system and decision-making rules are also outlined, which aid in robot selection, making the system more efficient. Experimental results show that this system can function normally according to pre-established logic and objectives. It accurately completed all distribution tasks, demonstrating good stability and reliability.
Keywords: mivar, mivar decision-making systems, logical AI, distribution system, group of warehouse robots, robot-loader, robot-transporter, robot-unloader
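The role of the Manhattan distance in robot selection can be sketched as follows. The feasibility rule and the robot attributes are simplified stand-ins for the article's equation systems (robot state, operational capability, ability to complete transportation), not the MDMS rules themselves:

```python
def manhattan(a, b):
    """Grid distance used to judge whether a robot can reach a task cell."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def pick_robot(robots, task, battery_per_cell=1):
    """Choose the nearest operational robot with enough charge for the trip.

    robots: list of dicts with 'id', 'pos', 'battery', 'operational'
    (a simplified stand-in for the article's command variable system).
    """
    feasible = [r for r in robots
                if r["operational"]
                and r["battery"] >= battery_per_cell * manhattan(r["pos"],
                                                                 task)]
    return min(feasible, key=lambda r: manhattan(r["pos"], task),
               default=None)

robots = [
    {"id": "RP-1", "pos": (0, 0), "battery": 5,  "operational": True},
    {"id": "RP-2", "pos": (2, 2), "battery": 50, "operational": True},
    {"id": "RP-3", "pos": (1, 1), "battery": 50, "operational": False},
]
task = (4, 4)
chosen = pick_robot(robots, task)
```

Here RP-1 is nearest by raw position but lacks the charge to cover the 8-cell Manhattan path, and RP-3 is out of service, so the selection falls to RP-2, mirroring how the MDMS combines distance with state checks.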
DOI: 10.26102/2310-6018/2025.51.4.001
This paper examines the availability of satellite communications in the Arctic zone of the Russian Federation. It provides information on existing satellite communications systems, the number of which is currently limited due to sanctions and the geographic features of the region. After analyzing the actually available satellite communications systems, it is noted that satellite communications systems using the geostationary orbit (GEO) are currently the only option for providing data transmission services. An analysis of the problems typical of using the geostationary orbit in high-latitude conditions is given; an overview of Russian geostationary satellites and the conditions of their use in the Arctic is made, taking into account the coverage areas of the beams and frequency ranges. The result of calculating the geometric relationships when organizing communications between a satellite in GEO and earth stations in the Arctic region is given. For further study of the quality of communication in the northernmost parts of the region, the range of slant range and elevation angle values typical for the waters of the Northern Sea Route is calculated. The results of calculations of the required distance of the earth station from ground objects are presented, allowing for rational placement of the earth station both from the point of view of ensuring direct visibility of the satellite and the required elevation angle, and for reducing the noise temperature of the receiver.
Keywords: satellite communication, geostationary orbit, arctic region, elevation angle, slant range
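The geometric relationships mentioned above follow from standard spherical-Earth formulas for a geostationary satellite; a sketch using textbook constants (the article's own calculations may include refraction and ellipsoid corrections not modeled here):

```python
import math

R_E = 6378.137          # Earth equatorial radius, km
R_GEO = 42164.0         # geocentric radius of the geostationary orbit, km

def geo_geometry(lat_deg, dlon_deg):
    """Elevation angle (deg) and slant range (km) to a GEO satellite.

    lat_deg: station latitude; dlon_deg: longitude offset from the
    sub-satellite point. Spherical-Earth textbook formulas.
    """
    lat, dlon = map(math.radians, (lat_deg, dlon_deg))
    cos_g = math.cos(lat) * math.cos(dlon)       # central angle gamma
    sin_g = math.sqrt(1.0 - cos_g ** 2)
    slant = math.sqrt(R_E ** 2 + R_GEO ** 2 - 2 * R_E * R_GEO * cos_g)
    elev = math.degrees(math.atan2(cos_g - R_E / R_GEO, sin_g))
    return elev, slant

# A station near the Northern Sea Route, satellite on the same meridian.
elev_70, slant_70 = geo_geometry(70.0, 0.0)
# Reference case: station at the sub-satellite point.
elev_0, slant_0 = geo_geometry(0.0, 0.0)
```

At 70° N the elevation is only about 11–12° even with the satellite on the station's meridian, which is why the article examines the required clearance from ground obstacles and the receiver noise temperature at such low angles.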
DOI: 10.26102/2310-6018/2025.50.3.038
The article presents the results of a study aimed at expanding the theoretical basis in the field of real-time computing. The issues considered include: defining indicators of computational complexity in real time, a methodology for their quantitative assessment, identifying ways to achieve the computability of algorithms in real time, and formalizing approaches to the optimal technical implementation of real-time computing systems. The research is based on existing concepts in algorithm theory and computation theory, including real-time computation. Significant new scientific results of the research include: the introduction, along with the known indicators of temporal and spatial computational complexity, of an additional indicator of configuration computational complexity, necessary for assessing computational complexity in real time; the confirmation of the possibility of controlling temporal, spatial, and configuration complexity within the framework of a given algorithm functional solely by changing the number of computation execution threads; theoretical justification of the possibility of reducing the execution time of the configuration algorithm from exponential to polynomial or even linear by condensing the initial graph of the algorithm with the formation of strongly connected components of a set of actor functions and obtaining as a result an acyclic directed graph, whose topological sorting can be performed in linear time; determination of approaches to the optimal technical implementation of the algorithm with a given configuration, including in the form of an integrated circuit with wiring optimized based on the solution of Steiner's rectangular problem.
Keywords: computational complexity, real time, computability, configuration, search algorithm, actor functions, portability
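The linear-time claim above rests on two classical results: strongly connected components can be found in O(V + E) (e.g. Tarjan's algorithm), and the resulting DAG can be topologically sorted in O(V + E) (Kahn's algorithm). A self-contained sketch of the condensation pipeline, not the paper's implementation:

```python
from collections import defaultdict, deque

def condense_and_sort(edges, nodes):
    """Condense a directed graph into its strongly connected components
    (Tarjan's algorithm), then topologically sort the condensation DAG
    (Kahn's algorithm). Both steps run in O(V + E), i.e. linear time."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    index, low, on_stack, stack = {}, {}, set(), []
    comp_of, comps, counter = {}, [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                comp_of[w] = len(comps); comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in nodes:
        if v not in index:
            strongconnect(v)

    # Build the condensation DAG and sort it with Kahn's algorithm.
    dag, indeg = defaultdict(set), defaultdict(int)
    for u, v in edges:
        cu, cv = comp_of[u], comp_of[v]
        if cu != cv and cv not in dag[cu]:
            dag[cu].add(cv); indeg[cv] += 1
    queue = deque(c for c in range(len(comps)) if indeg[c] == 0)
    order = []
    while queue:
        c = queue.popleft(); order.append(comps[c])
        for d in dag[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return order
```

In the paper's setting, each SCC would group mutually dependent actor functions; the returned order is a valid execution schedule for the condensed algorithm graph.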
DOI: 10.26102/2310-6018/2025.50.3.037
This paper addresses the problem of improving the accuracy of determining the spectral characteristics of voice signals in audio recordings. To solve this problem, a modification of the classical Hamming window function is proposed by introducing an optimizable parameter. The study's relevance stems from the need to improve the reliability of voice recognition and identification systems, especially in the context of biometric applications and authentication tasks. The main objective is the development of an algorithm for calculating the optimal value of this parameter, maximizing the quality of spectral analysis for specific voice frequency ranges. To achieve this objective, the gradient descent method was used to optimize the parameter of the modified function. Quality assessment was performed based on a weighted sum of spectral characteristics (peak factor, spectral line width, signal-to-noise ratio). Experiments were conducted on test signals simulating male (200–400 Hz) and female (220–880 Hz) voices. The results showed that the proposed approach improves the accuracy of determining spectral components, especially in the male baritone range (up to 5.42 % improvement), by achieving clearer identification of fundamental frequencies and reducing side-lobe levels compared to the classical Hamming window. The study's conclusions indicate the potential of adapting window functions to specific frequency ranges of voice signals. The proposed algorithm can be used to improve the performance of biometric identification systems and other applications requiring accurate spectral analysis of voice.
Keywords: window function, hamming window, spectral analysis, voice signal processing, parameter optimization, gradient descent, biometric identification, spectrum estimation accuracy, STFT
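The modified window and the gradient-based parameter search can be sketched as follows. The quality criterion below (main-lobe level over the strongest nearby side lobe, in dB, via a direct DFT at fractional bins) is a simplified stand-in for the paper's weighted sum of spectral characteristics:

```python
import math

def param_hamming(n_samples, alpha):
    """Generalized Hamming window w[n] = alpha - (1 - alpha)*cos(2*pi*n/(N-1));
    alpha = 0.54 recovers the classical Hamming window."""
    return [alpha - (1 - alpha) * math.cos(2 * math.pi * n / (n_samples - 1))
            for n in range(n_samples)]

def dft_mag(x, k):
    """Magnitude of the DFT of x at (possibly fractional) bin k."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = -sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return math.hypot(re, im)

def quality(alpha):
    """Toy criterion: main lobe over strongest nearby side lobe, in dB."""
    w = param_hamming(64, alpha)
    main = dft_mag(w, 0.0)
    side = max(dft_mag(w, k) for k in (2.5, 3.5, 4.5, 5.5))
    return 20 * math.log10(main / side)

def optimize_alpha(quality_fn, alpha0=0.54, lr=1e-3, steps=300, eps=1e-4):
    """Maximize quality_fn(alpha) by gradient ascent with a numerical
    central-difference gradient; alpha is clamped to a sensible range."""
    alpha = alpha0
    for _ in range(steps):
        grad = (quality_fn(alpha + eps) - quality_fn(alpha - eps)) / (2 * eps)
        alpha = min(0.95, max(0.05, alpha + lr * grad))
    return alpha
```

Under this toy criterion the optimum stays near the classical 0.54; in the paper the criterion is instead weighted toward a target voice frequency band, which shifts the optimal parameter.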
DOI: 10.26102/2310-6018/2025.50.3.025
The study presents an integrated algorithm for evaluating and optimizing systems with heterogeneous data, taking into account managerial and organizational performance indicators. The proposed algorithm combines data envelopment analysis (DEA), fuzzy DEA, and a set of statistical methods for assessing the plausibility of the obtained results. The integrated algorithm identifies the most effective heterogeneous performance indicators; it differs in its method of selecting reliable indicators and supports the formulation of strategies for improving organizational systems. A set of 12 criteria indicating the applicability of the integrated method was selected for verification. The results showed that the crisp DEA results have a lower mean absolute percentage error (MAPE) than the fuzzy DEA results. The indicators were also analyzed and weighted, showing that "investments in research and development relative to production costs" and "investments in education and retraining per employee" are the most effective. The algorithm is unique in accounting for heterogeneous managerial and organizational factors; it can handle data uncertainty owing to its built-in fuzzy inference mechanisms, and the indicator weights are determined using a set of reliable statistical procedures.
Keywords: integrated algorithm, heterogeneous data, data envelopment analysis, fuzzy logic, verification, statistical criterion, data mining, indicator weight
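The comparison between the crisp and fuzzy results uses the standard mean absolute percentage error; for reference, the formula as a one-function sketch:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent:
    MAPE = (100/n) * sum(|(a_i - p_i) / a_i|)."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))
```

A lower MAPE against reference values is what the study uses to prefer the crisp variant over the fuzzy one.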
DOI: 10.26102/2310-6018/2025.50.3.041
The work addresses topical issues in the synthesis of human-machine interaction tools, within which a model for interfacing graphical user interface (GUI) components based on algebraic logic methods is considered. GUI components are treated as components of open information systems with standardized interfaces that determine their spatial compatibility. To formalize GUI components, semantic networks are proposed, while component compatibility is determined by logical inference rules expressed as Horn clauses. The integrated visual component "Named input field" is described as a semantic network capturing the spatial compatibility of its constituent indivisible components. An extension of the OpenAPI specification has been developed to unify and standardize the description of GUI components and to ensure interoperability of tools for synthesizing screen forms and supporting UX testing. The article presents the results of synthesizing chains of geometric shapes that mimic GUI components; these chains can also be represented declaratively as semantic networks and, consequently, in RDF format. In addition to the components themselves, the semantic networks include descriptions of filters that control the choice of ways to spatially interface GUI components.
Keywords: human-machine interaction, graphical user interface, specification, component, Horn clause
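Compatibility checking by inference over Horn clauses can be illustrated with a small forward-chaining sketch; the component names below are invented for illustration and are not taken from the paper's ontology:

```python
def forward_chain(facts, rules):
    """Forward chaining over propositional Horn clauses: each rule is a
    pair (body, head); the head is derived once every literal in the body
    holds. Here a derived fact would mean, e.g., that two GUI components
    are spatially compatible."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts
```

In the paper's setting the facts and rules would live in the semantic network (RDF), with filters pruning which interfacing rules are allowed to fire.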
DOI: 10.26102/2310-6018/2025.50.3.033
One of the significant areas of investment in civil aviation is air transportation subsidies. The article considers optimizing management decisions on distributing investment among airlines participating in the subsidized route selection program so as to improve efficiency indicators under a limited investment resource. To formulate the optimization problem, continuous variables determining investment volumes and alternative (discrete) variables corresponding to the choice of a specific route are introduced. Initial data provided by the airlines are used to assess the fulfillment of extremal and boundary requirements on the subsidizing process. Each indicator on which these requirements are based is calculated from the parameters recorded in the initial data as a function of the variables. In this case the condition of a limited integrated resource must be split into two particular boundary conditions. The result is a multi-criteria constrained optimization problem defined on sets of continuous and alternative variables. To solve it, a combination of an adaptive directed randomized search algorithm and a particle swarm algorithm is proposed. A computational experiment compares the optimization approach with actual data on air transport subsidies; the optimized variant of investment distribution and route selection achieves better values of the performance indicators than those actually attained.
Keywords: investment management, centralized control, air transport subsidies, airlines, optimization
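The particle swarm half of the proposed combination can be sketched as a generic continuous-variable PSO; the paper's adaptive randomized search, alternative variables, and subsidy constraints are omitted, and all hyperparameters below are illustrative:

```python
import random

def pso_maximize(objective, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: particles track personal and
    global bests; velocities blend inertia with cognitive and social pulls."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the article's formulation the objective would aggregate the airlines' efficiency indicators, with the discrete route choices handled by the companion randomized search.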
DOI: 10.26102/2310-6018/2025.50.3.026
Networks are widely used to represent interactions between individual elements of complex big-data systems such as the cloud-based Internet. Assignable causes in these systems can significantly increase or decrease the frequency of interaction within the corresponding network, so such causes can be identified by monitoring the level of interaction in the network. One detection method first builds a network graph by drawing an edge between each pair of nodes that have interacted within a specified time interval. The topological characteristics of the graph, such as degree, closeness, and betweenness centrality, can then be treated as univariate or multivariate data for online monitoring. However, existing statistical process control (SPC) methods for unweighted networks rarely account for either network sparsity or the direction of interaction between two nodes, that is, pairwise interaction. By excluding inactive pairwise interactions, the proposed parameter estimation procedure achieves higher consistency at lower computational cost than the alternative approach when networks are large-scale and sparse. The matrices developed from a matrix probabilistic model describing directed pairwise interactions in time-independent, unweighted big-data networks with cloud processing significantly simplify parameter estimation, whose effectiveness is increased by automatically eliminating pairwise interactions that do not actually occur. The proposed model is then integrated into a multivariate distribution function for online monitoring of the level of communication in the network.
Keywords: cloud computing, big data, network status changes, real-time monitoring, unweighted networks, pair interaction, matrix probability model
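Monitoring the level of interaction via edge counts can be illustrated with the simplest SPC baseline, a Shewhart c-chart under a Poisson assumption; this is an illustration of the monitoring idea only, not the paper's matrix probabilistic model:

```python
import math

def poisson_control_limits(baseline_counts):
    """Shewhart c-chart limits for per-interval interaction counts,
    assuming approximately Poisson-distributed counts in control."""
    lam = sum(baseline_counts) / len(baseline_counts)
    ucl = lam + 3 * math.sqrt(lam)           # upper control limit
    lcl = max(0.0, lam - 3 * math.sqrt(lam))  # lower control limit
    return lcl, ucl

def out_of_control(count, limits):
    """Signal when the observed count leaves the control band."""
    lcl, ucl = limits
    return count < lcl or count > ucl
```

A signal in either direction corresponds to the abstract's "significant increase or decrease" in interaction frequency attributable to an assignable cause.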
DOI: 10.26102/2310-6018/2025.50.3.043
It is known that the use of Non-Orthogonal Multiple Access (NOMA) methods can improve the spectral efficiency and capacity of communication networks. However, in the presence of nonlinear distortions or synchronization issues, the orthogonality of user signals within a CDMA group is disrupted, leading to inter-channel interference and a reduction in interference immunity as the number of users increases. This must be taken into account when analyzing the interference immunity in broadband radio communication networks. The paper presents simulation results demonstrating the possibility of using orthogonal synchronous code division multiple access in combination with non-orthogonal multiple access, where the system's interference immunity is determined solely by the characteristics of NOMA. The influence of power distribution among users on the network's interference immunity, depending on their distance, is shown. For the analysis, mathematical models and MATLAB implementations were used, enabling the study of key system parameters, including bit error rate (BER), capacity, and power allocation strategies. The results demonstrate that the proposed approach allows for effective analysis and optimization of NOMA systems, taking into account the impact of nonlinear distortions and power distribution. Examples of calculations are provided, confirming the feasibility of using NOMA in broadband radio communication networks.
Keywords: non-orthogonal multiple access (NOMA), spectral efficiency, interference immunity, nonlinear distortions, power allocation, radio networks
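The power-domain superposition and successive interference cancellation (SIC) underlying NOMA can be sketched in a few lines for a two-user BPSK downlink over AWGN; the paper's MATLAB models of nonlinear distortion and the CDMA layer are not reproduced:

```python
import math
import random

def noma_ber(p_far=0.8, snr_db=12.0, n_bits=20000, seed=1):
    """Two-user downlink power-domain NOMA, BPSK over AWGN. The far user
    gets power share p_far; the near user first decodes the far user's
    symbol and subtracts it (SIC) before detecting its own bit.
    Returns the (far, near) bit error rates seen at the near receiver."""
    rng = random.Random(seed)
    noise_sigma = math.sqrt(10 ** (-snr_db / 10) / 2)  # unit signal power
    a_far, a_near = math.sqrt(p_far), math.sqrt(1 - p_far)
    err_far = err_near = 0
    for _ in range(n_bits):
        b_far, b_near = rng.choice((-1, 1)), rng.choice((-1, 1))
        x = a_far * b_far + a_near * b_near     # superposed symbol
        y = x + rng.gauss(0.0, noise_sigma)     # received signal
        b_far_hat = 1 if y >= 0 else -1         # SIC step 1: decode far user
        y_clean = y - a_far * b_far_hat         # SIC step 2: cancel it
        b_near_hat = 1 if y_clean >= 0 else -1  # detect own bit
        err_far += (b_far_hat != b_far)
        err_near += (b_near_hat != b_near)
    return err_far / n_bits, err_near / n_bits
```

Varying `p_far` reproduces the qualitative effect discussed in the article: shifting power toward the far (weaker-channel) user trades its error rate against the near user's.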
DOI: 10.26102/2310-6018/2025.50.3.027
In the context of the increasing complexity of managing national projects aimed at achieving the National Development Goals of the Russian Federation, automating the analysis of the relationships between planned project activities and the indicators reflecting the degree of goal achievement is an urgent task. Traditional manual document processing is labor-intensive, subjective, and time-consuming, which necessitates the development of intelligent decision support systems. This article presents an approach to automating the analysis of links and indicators of national projects that automatically detects and verifies semantic "event-indicator" links in national project documents, significantly increasing the efficiency of analytical work. The approach is based on a Retrieval-Augmented Generation (RAG) system combining a locally adapted language model with vector search technologies. The work demonstrates that integrating the RAG approach with vector search, while taking the project ontology into account, achieves the required accuracy and relevance of analysis. The system is valuable not only for generating interpretable justifications of the identified links but also for identifying key events that affect the achievement of indicators across several national projects at once, including events whose impact on these indicators is not obvious. The proposed solution opens new opportunities for the digitalization of public administration and can be adapted to other tasks, such as identifying risks in the implementation of events and generating new events.
Keywords: RAG systems, large language models, national projects, semantic search, automation, national goals, artificial intelligence in public administration
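The retrieval step of a RAG pipeline can be illustrated with a toy vector search; bag-of-words vectors stand in here for the real sentence-embedding model, and the example documents are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a production RAG system would call a
    sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most similar to the query; in RAG these
    passages are then passed to the language model as grounding context."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]
```

In the article's system, the retrieved passages (filtered through the project ontology) ground the language model's judgment on whether an event-indicator link exists.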
DOI: 10.26102/2310-6018/2025.50.3.040
This paper presents a procedure for dynamically modifying the binary encoding scheme in a genetic algorithm (GA), enabling adaptive adjustment of the search space during the algorithm’s execution. In the proposed approach, the discretization step for each coordinate is updated from generation to generation based on the current boundaries of regions containing high-quality solutions and the density of individuals within them. For each such region, the number of bits in the binary string representing solutions is determined according to the number of encoded points, after which the discretization step is recalculated. The encoding scheme is restructured in a way that ensures the correctness of genetic operators in the presence of discontinuities in the search space, preserves the fixed cardinality of the solution set at each generation, and increases the precision of the solutions due to the dynamic adjustment of the discretization step. Experimental results on multimodal test functions such as Rastrigin and Styblinski–Tang demonstrate that the proposed GA modification progressively refines the search area during evolution, concentrating solutions around the global extrema. For the Rastrigin function, initially fragmented regions gradually focus around the global maximum. In the Styblinski–Tang case, the algorithm shifts the search from an intentionally incorrect initial area toward one of the global optima.
Keywords: adaptive encoding, genetic algorithm, discretization, multimodal optimization, search space
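The core encoding arithmetic, bits per region derived from the discretization step and decoding of a binary genotype, can be sketched as follows; the paper's density-based region update is simplified here to a fixed shrink factor:

```python
import math

def bits_for(region_lo, region_hi, step):
    """Number of bits needed to index every grid point of a region
    discretized with the given step."""
    n_points = int((region_hi - region_lo) / step) + 1
    return max(1, math.ceil(math.log2(n_points)))

def decode(bits, region_lo, region_hi, step):
    """Map a binary string to a coordinate in the region; codes beyond the
    last grid point are clamped to the upper bound so every genotype
    decodes to a feasible point."""
    idx = int(bits, 2)
    return min(region_lo + idx * step, region_hi)

def refine(region_lo, region_hi, step, shrink=0.5):
    """One re-encoding round: tighten the discretization step and return
    the new step together with the new genotype length (a simplified
    stand-in for the density-driven update in the paper)."""
    new_step = step * shrink
    return new_step, bits_for(region_lo, region_hi, new_step)
```

Each refinement round shortens the step and lengthens the genotype, which is how the scheme raises solution precision while keeping the population's solution-set cardinality fixed.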
DOI: 10.26102/2310-6018/2025.50.3.024
The growing volume of processed data and the widespread adoption of cloud technologies have made efficient task distribution in high-load computing systems a critical challenge in modern computer science. However, existing solutions often fail to account for resource heterogeneity, dynamic workload variations, and multi-objective optimization, leaving gaps in achieving optimal resource utilization. This study aims to address these limitations by proposing a hybrid load-balancing algorithm that combines the strengths of Artificial Bee Colony (ABC) and Max-Min scheduling strategies. The research employs simulation in the CloudSim environment to evaluate the algorithm’s performance under varying workload conditions (100 to 5000 tasks). Tasks are classified into "light" and "heavy" based on their MIPS requirements, with ABC handling lightweight tasks for rapid distribution and Max-Min managing resource-intensive tasks to minimize makespan. Comparative analysis against baseline algorithms (FCFS, SJF, Min-Min, Max-Min, PSO, and ABC) demonstrates the hybrid approach’s superior efficiency, particularly in large-scale and heterogeneous environments. Results show a 15–30% reduction in average task completion time at high loads (5000 tasks), confirming its adaptability and scalability. The study concludes that hybrid algorithms, integrating heuristic and metaheuristic techniques, offer a robust solution for dynamic cloud environments. The proposed method bridges the gap between responsiveness and strategic resource allocation, making it viable for real-world deployment in data centers and distributed systems. The practical significance of the work lies in increasing energy efficiency, reducing costs and ensuring quality of service (QoS) in cloud computing.
Keywords: cloud computing, scheduling, task allocation, virtual machines, hybrid algorithm, load balancing, optimization, CloudSim
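The Max-Min strategy applied to the "heavy" tasks can be sketched as follows (task lengths in millions of instructions and VM speeds in MIPS, as in CloudSim; the ABC half of the hybrid and the light/heavy classification are omitted):

```python
def max_min_schedule(task_lengths, vm_mips):
    """Max-Min heuristic: at each step pick the unscheduled task whose best
    (earliest) achievable completion time is largest, and assign it to the
    VM that achieves it. Placing the longest jobs first tends to shorten
    the overall makespan for resource-intensive workloads."""
    ready = [0.0] * len(vm_mips)               # time at which each VM frees up
    assignment = {}
    remaining = dict(enumerate(task_lengths))  # task id -> length in MI
    while remaining:
        best = None                            # (completion_time, task, vm)
        for tid, length in remaining.items():
            ct, vm = min((ready[v] + length / vm_mips[v], v)
                         for v in range(len(vm_mips)))
            if best is None or ct > best[0]:
                best = (ct, tid, vm)
        ct, tid, vm = best
        ready[vm] = ct
        assignment[tid] = vm
        del remaining[tid]
    return assignment, max(ready)              # task->VM map and makespan
```

For example, tasks of 100-400 MI on VMs of 10 and 20 MIPS get the 400 MI task placed first on the fast VM, yielding a makespan of 35 time units.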
DOI: 10.26102/2310-6018/2025.51.4.005
The acetylene hydrogenation process is an important step in the production of ethylene and other valuable chemical products, but its effectiveness largely depends on the accuracy of control of process parameters such as temperature, pressure, and reagent flow rates. Most research on acetylene hydrogenation nevertheless focuses on improving the technological aspects of the process, while the development of modern information, measurement, and control systems remains understudied. This study proposes an information-measurement and control system aimed at increasing the efficiency of the acetylene hydrogenation process. The system is based on a virtual analyzer that computes the degree of conversion in real time from instrumentation data. The virtual analyzer model was optimized using a genetic algorithm, which ensured high accuracy of the calculations. Based on the virtual analyzer's output, a control algorithm was developed that corrects process parameters to maintain optimal reaction conditions. The control system was implemented in the Centum VP environment, allowing it to be integrated into the existing automation infrastructure.
Keywords: ethylene production, acetylene hydrogenation, petrochemistry, control system, process automation
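The virtual-analyzer-plus-correction loop can be caricatured in a few lines. The regression form, coefficient values, inputs, and gain below are all invented for illustration; in the paper the model coefficients are fitted by a genetic algorithm and the loop runs inside Centum VP:

```python
def conversion_soft_sensor(temp_c, pressure_bar, h2_to_c2h2, coeffs):
    """Virtual analyzer: a linear regression estimate of acetylene
    conversion from instrument readings (hypothetical model form)."""
    b0, b1, b2, b3 = coeffs
    return b0 + b1 * temp_c + b2 * pressure_bar + b3 * h2_to_c2h2

def correct_setpoint(current_setpoint, estimated_conv, target_conv, gain=0.5):
    """Proportional correction: nudge a process setpoint toward the
    conversion target using the soft-sensor estimate as feedback."""
    return current_setpoint + gain * (target_conv - estimated_conv)
```

The point of the sketch is the data flow: instrument readings feed the soft sensor, whose estimate closes the loop on the setpoint without waiting for a laboratory analysis.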
DOI: 10.26102/2310-6018/2025.50.3.035
The article explores modern methods for automatic detection of atypical (anomalous) musical events within a musical sequence, such as unexpected harmonic shifts, uncharacteristic intervals, rhythmic disruptions, or deviations from musical style, aimed at automating this process and optimizing specialists' working time. The task of anomaly detection is highly relevant in music analytics, digital restoration, generative music, and adaptive recommendation systems. The study employs both traditional features (Chroma Features, MFCC, Tempogram, RMS-energy, Spectral Contrast) and advanced sequence analysis techniques (self-similarity matrices, latent space embeddings). The source data consisted of diverse MIDI corpora and audio recordings from various genres, normalized to a unified frequency and temporal scale. Both supervised and unsupervised learning methods were tested, including clustering, autoencoders, neural network classifiers, and anomaly isolation algorithms (isolation forests). The results demonstrate that the most effective approach is a hybrid one that combines structural musical features with deep learning methods. The novelty of this research lies in a comprehensive comparison of traditional and neural network approaches for different types of anomalies on a unified dataset. Practical testing has shown the proposed method's potential for automatic music content monitoring systems and for improving the quality of music recommendations. Future work is planned to expand the research to multimodal musical data and real-time processing.
Keywords: musical sequence, anomaly, tempogram, musical style, MFCC, chroma, autoencoder, music anomaly detection
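A minimal baseline for flagging anomalous frames in a feature curve (e.g. RMS energy or frame-to-frame chroma distance) is a z-score rule; the hybrid methods compared in the article are far richer, but this shows the shape of the task:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return indices whose value deviates from the sequence mean by more
    than `threshold` standard deviations; a baseline detector for a
    one-dimensional musical feature curve."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]
```

The neural approaches in the study (autoencoders, isolation forests) generalize this idea to multivariate features and learned notions of "typical".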
DOI: 10.26102/2310-6018/2025.50.3.029
The relevance of the study is due to the need to increase the efficiency of agent training under conditions of partial observability and limited interaction, which are typical for many real-world tasks in multiagent systems. In this regard, the present article is aimed at the development and analysis of a hybrid approach to agent training that combines the advantages of gradient-based and evolutionary methods. The main method of the study is a modified Advantage Actor-Critic (A2C) algorithm, supplemented with elements of evolutionary learning — crossover and mutation of neural network parameters. This approach allows for a comprehensive consideration of the problem of agent adaptation in conditions of limited observation and cooperative interaction. The article presents the results of experiments in an environment with two cooperative agents tasked with extracting and delivering resources. It is shown that the hybrid training method provides a significant increase in the effectiveness of agent behavior compared to purely gradient-based approaches. The dynamics of the average reward confirm the stability of the method and its potential for more complex multiagent interaction scenarios. The materials of the article have practical value for specialists in the fields of reinforcement learning, multi-agent system development, and the design of adaptive cooperative strategies under limited information.
Keywords: reinforcement learning, evolutionary algorithms, multiagent system, A2C, LSTM, cooperative learning
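The evolutionary layer over A2C, crossover and mutation of network parameters, can be sketched on flat weight vectors; population size, rates, and elitism below are illustrative, and in the full method each vector would be a policy network also updated by gradient steps:

```python
import random

def crossover(parent_a, parent_b, rng):
    """Uniform crossover: each weight is inherited from either parent."""
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def mutate(params, rng, rate=0.1, sigma=0.02):
    """Gaussian mutation applied to a random fraction of the weights."""
    return [p + rng.gauss(0.0, sigma) if rng.random() < rate else p
            for p in params]

def evolve_population(population, fitness, rng, elite=1):
    """One evolutionary step layered on top of gradient training: keep the
    best parameter vectors and refill the population with mutated
    crossovers of the survivors."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:max(elite, 2)]
    children = [mutate(crossover(rng.choice(survivors), rng.choice(survivors), rng), rng)
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```

In the experiments described above, `fitness` would be the average episode reward collected by each agent's network between evolutionary steps.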