DOI: 10.26102/2310-6018/2025.50.3.031
Large modern companies that produce mass-market goods or provide mass services operate under intense competition and typically increase their advertising spending, which does not always bring the expected effect. This creates a growing need for tools for precise audience segmentation that can increase the effectiveness of marketing communications. Traditional response-prediction models cannot determine whether a client's behavior changed under the influence of a marketing action, which limits constructive analysis of marketing campaigns. This article studies uplift modeling as a tool for estimating the incremental positive response produced by a communication and for optimizing targeting. The results demonstrate significant advantages of uplift modeling for identifying client segments with maximum sensitivity to the marketing action. A comparative analysis of several approaches to building uplift models (SoloModel, TwoModel, Class Transformation, and Class Transformation with Regression), based on specialized uplift metrics (uplift@k, Qini AUC, Uplift AUC, weighted average uplift, Average Squared Deviation), reveals the strengths and weaknesses of each approach. The study uses the open X5 RetailHero Uplift Modeling Dataset, provided by X5 Retail Group for research on uplift modeling methods in retail.
Keywords: uplift modeling, machine learning, marketing communications, targeting, response evaluation, uplift model quality metrics
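A minimal sketch of how such a comparison can be run with the open-source scikit-uplift (sklift) package, which implements the listed approaches and metrics; the synthetic data, the base classifier, and the evaluation on the training sample are illustrative simplifications rather than the article's experimental setup.

```python
# Hedged sketch: comparing the uplift approaches and metrics named in the abstract
# with scikit-uplift (sklift). The data is synthetic; the X5 RetailHero dataset
# would supply real features X, responses y and treatment flags.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklift.models import SoloModel, TwoModels, ClassTransformation
from sklift.metrics import uplift_at_k, qini_auc_score, uplift_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
treatment = rng.integers(0, 2, size=2000)                  # 1 = client received the communication
y = (rng.random(2000) < 0.2 + 0.1 * treatment * (X[:, 0] > 0)).astype(int)

models = {
    "SoloModel": SoloModel(GradientBoostingClassifier()),
    "TwoModel": TwoModels(estimator_trmnt=GradientBoostingClassifier(),
                          estimator_ctrl=GradientBoostingClassifier()),
    "ClassTransformation": ClassTransformation(GradientBoostingClassifier()),
}
for name, model in models.items():
    model.fit(X, y, treatment)
    uplift = model.predict(X)
    print(name,
          "uplift@30%:", round(uplift_at_k(y, uplift, treatment, strategy="overall", k=0.3), 4),
          "Qini AUC:", round(qini_auc_score(y, uplift, treatment), 4),
          "Uplift AUC:", round(uplift_auc_score(y, uplift, treatment), 4))
```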
DOI: 10.26102/2310-6018/2025.50.3.025
The study presents an integrated algorithm for evaluating and optimizing systems with heterogeneous data, taking into account managerial and organizational performance indicators. The proposed algorithm combines data envelopment analysis (DEA), fuzzy data envelopment analysis, and a set of statistical methods for assessing the reliability of the obtained results. The integrated algorithm identifies the most effective heterogeneous performance indicators; it differs in its method of selecting reliable indicators and allows strategies for improving organizational systems to be formulated. A set of 12 criteria was selected to verify the application of the integrated method. The results showed that the DEA results have a lower mean absolute percentage error (MAPE) than the fuzzy DEA results. The study also analyzes and weighs the indicators; the indicators "investments in research and development relative to production costs" and "investments in education and retraining per employee" proved to be the most effective. The study presents a unique algorithm for taking heterogeneous managerial and organizational factors into account. It can handle data uncertainty owing to the fuzzy inference mechanisms built into the algorithm. The weights of the indicators are determined using a set of reliable statistical algorithms.
Keywords: integrated algorithm, heterogeneous data, data envelopment analysis, fuzzy logic, verification, statistical criterion, data mining, indicator weight
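For reference, a minimal sketch of the DEA building block referred to above (the input-oriented CCR envelopment model) solved as a linear program with SciPy; the two-input, one-output data are illustrative and are not the article's indicator set.

```python
# Hedged sketch of input-oriented CCR data envelopment analysis via linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0]]).T   # inputs, shape (n_inputs, n_units)
Y = np.array([[60.0], [70.0], [75.0]]).T                # outputs, shape (n_outputs, n_units)
n_units = X.shape[1]

def ccr_efficiency(o):
    """Efficiency of unit o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    c = np.r_[1.0, np.zeros(n_units)]                   # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, o], X]                           # X@lam - theta*x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[0]), -Y]             # -Y@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                  bounds=[(0, None)] * (1 + n_units))
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(n_units)])   # 1.0 marks an efficient unit
```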
DOI: 10.26102/2310-6018/2025.50.3.026
Networks are widely used to represent interactions between individual elements in complex big data systems such as the cloud-based Internet. Assignable causes in these systems can lead to a significant increase or decrease in the frequency of interaction within the corresponding network, making it possible to identify such causes by monitoring the level of interaction in the network. One detection approach is to first build a network graph by drawing an edge between each pair of nodes that have interacted within a specified time interval. Topological characteristics of the graph, such as degree, closeness, and betweenness, can then be treated as univariate or multivariate data for online monitoring. However, existing statistical process control (SPC) methods for unweighted networks rarely account for either the sparsity of the network or the direction of interaction between two nodes, that is, the pairwise interaction. By excluding inactive pairwise interactions, the proposed parameter estimation procedure achieves higher consistency at lower computational cost than the alternative approach when the networks are large-scale and sparse. The matrices derived from a matrix probabilistic model describing directed pairwise interactions in time-independent, unweighted big data networks with cloud processing substantially simplify parameter estimation, whose efficiency is further increased by automatically eliminating pairwise interactions that do not actually occur. The proposed model is then integrated into a multivariate distribution function for online monitoring of the level of communication in the network.
Keywords: cloud computing, big data, network status changes, real-time monitoring, unweighted networks, pair interaction, matrix probability model
DOI: 10.26102/2310-6018/2025.50.3.027
In the context of the increasing complexity of managing national projects aimed at achieving the National Development Goals of the Russian Federation, an urgent task is to automate the analysis of the relationships between the activities planned within these projects and the indicators that reflect the degree of achievement of the projects' objectives. Traditional methods of manual document processing involve high labor intensity, subjectivity and significant time costs, which necessitates the development of intelligent decision support systems. This article presents an approach to automating the analysis of links between activities and indicators of national projects, which allows automatic detection and verification of "activity-indicator" semantic links in national project documents and significantly increases the efficiency of analytical work. The approach is based on a Retrieval-Augmented Generation (RAG) system that combines a locally adapted language model with vector search technologies. The work demonstrates that integrating the RAG approach with vector search and the project ontology achieves the required accuracy and relevance of the analysis. The system is valuable not only for its ability to generate interpretable justifications for the identified links, but also for its ability to identify key activities that affect the achievement of indicators across several national projects at once, including activities whose impact on these indicators is not obvious. The proposed solution opens up new opportunities for the digitalization of public administration and can be adapted to other tasks, such as identifying risks in the implementation of activities and generating new activities.
Keywords: RAG systems, large language models, national projects, semantic search, automation, national goals, artificial intelligence in public administration
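A minimal sketch of the retrieval step in such a pipeline: candidate indicator texts are ranked by vector similarity to an activity and then packed into a prompt for the language model. TF-IDF vectors stand in here for the neural embedder, and the example texts, top-k selection, and prompt wording are illustrative assumptions, not the article's implementation.

```python
# Hedged sketch of RAG-style retrieval for "activity-indicator" link candidates.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

indicators = [
    "Share of households with broadband internet access",
    "Life expectancy at birth",
    "Volume of housing commissioned per year",
]
activity = "Construction of fiber-optic lines providing broadband internet access in rural settlements"

vec = TfidfVectorizer().fit(indicators + [activity])     # stand-in for a neural embedder
P = vec.transform(indicators).toarray()
q = vec.transform([activity]).toarray()[0]
scores = P @ q / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-12)

top = np.argsort(scores)[::-1][:2]                       # retrieved context for the LLM
prompt = ("Activity: " + activity + "\nCandidate indicators:\n"
          + "\n".join(f"- {indicators[i]}" for i in top)
          + "\nWhich of these indicators does the activity affect, and why?")
print(prompt)   # the prompt would then go to the locally adapted language model for verification
```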
DOI: 10.26102/2310-6018/2025.50.3.024
The growing volume of processed data and the widespread adoption of cloud technologies have made efficient task distribution in high-load computing systems a critical challenge in modern computer science. However, existing solutions often fail to account for resource heterogeneity, dynamic workload variations, and multi-objective optimization, leaving gaps in achieving optimal resource utilization. This study aims to address these limitations by proposing a hybrid load-balancing algorithm that combines the strengths of Artificial Bee Colony (ABC) and Max-Min scheduling strategies. The research employs simulation in the CloudSim environment to evaluate the algorithm’s performance under varying workload conditions (100 to 5000 tasks). Tasks are classified into "light" and "heavy" based on their MIPS requirements, with ABC handling lightweight tasks for rapid distribution and Max-Min managing resource-intensive tasks to minimize makespan. Comparative analysis against baseline algorithms (FCFS, SJF, Min-Min, Max-Min, PSO, and ABC) demonstrates the hybrid approach’s superior efficiency, particularly in large-scale and heterogeneous environments. Results show a 15–30% reduction in average task completion time at high loads (5000 tasks), confirming its adaptability and scalability. The study concludes that hybrid algorithms, integrating heuristic and metaheuristic techniques, offer a robust solution for dynamic cloud environments. The proposed method bridges the gap between responsiveness and strategic resource allocation, making it viable for real-world deployment in data centers and distributed systems. The practical significance of the work lies in increasing energy efficiency, reducing costs and ensuring quality of service (QoS) in cloud computing.
Keywords: cloud computing, scheduling, task allocation, virtual machines, hybrid algorithm, load balancing, optimization, CloudSim
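A minimal sketch of the task-splitting idea described above: tasks are divided into light and heavy by length, heavy tasks are placed with the Max-Min rule, and a simple least-loaded heuristic stands in for the Artificial Bee Colony phase. VM speeds, task lengths, and the threshold are illustrative assumptions; the article's evaluation runs in CloudSim.

```python
# Hedged sketch of hybrid scheduling: Max-Min for heavy tasks, a light heuristic
# standing in for the ABC phase for the remaining tasks.
def max_min(tasks, vm_speed, ready):
    """Max-Min rule: repeatedly schedule the task whose best completion time is largest."""
    plan, remaining = {}, dict(tasks)
    while remaining:
        best = {t: min((ready[v] + length / vm_speed[v], v) for v in vm_speed)
                for t, length in remaining.items()}
        t = max(best, key=lambda t: best[t][0])
        finish, v = best[t]
        plan[t], ready[v] = v, finish
        del remaining[t]
    return plan

vm_speed = {"vm0": 1000, "vm1": 2500}                         # MIPS
tasks = {"t1": 4000, "t2": 120000, "t3": 900, "t4": 80000}    # task lengths, million instructions
threshold = 10000
heavy = {t: l for t, l in tasks.items() if l >= threshold}
light = {t: l for t, l in tasks.items() if l < threshold}

ready = {v: 0.0 for v in vm_speed}
plan = max_min(heavy, vm_speed, ready)                        # strategic placement first
for t, l in sorted(light.items(), key=lambda kv: kv[1]):      # ABC placeholder: least-loaded VM
    v = min(ready, key=ready.get)
    plan[t], ready[v] = v, ready[v] + l / vm_speed[v]
print(plan, "makespan:", round(max(ready.values()), 2))
```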
DOI: 10.26102/2310-6018/2025.50.3.029
The relevance of the study is due to the need to increase the efficiency of agent training under conditions of partial observability and limited interaction, which are typical for many real-world tasks in multiagent systems. In this regard, the present article is aimed at the development and analysis of a hybrid approach to agent training that combines the advantages of gradient-based and evolutionary methods. The main method of the study is a modified Advantage Actor-Critic (A2C) algorithm, supplemented with elements of evolutionary learning — crossover and mutation of neural network parameters. This approach allows for a comprehensive consideration of the problem of agent adaptation in conditions of limited observation and cooperative interaction. The article presents the results of experiments in an environment with two cooperative agents tasked with extracting and delivering resources. It is shown that the hybrid training method provides a significant increase in the effectiveness of agent behavior compared to purely gradient-based approaches. The dynamics of the average reward confirm the stability of the method and its potential for more complex multiagent interaction scenarios. The materials of the article have practical value for specialists in the fields of reinforcement learning, multi-agent system development, and the design of adaptive cooperative strategies under limited information.
Keywords: reinforcement learning, evolutionary algorithms, multiagent system, A2C, LSTM, cooperative learning
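A minimal sketch of the evolutionary component described above, applied to flattened policy parameters: uniform crossover between two agents followed by Gaussian mutation. The parameter count, crossover mask, and mutation rates are illustrative assumptions; in the article these operators supplement the A2C gradient updates.

```python
# Hedged sketch of crossover and mutation over neural network parameter vectors.
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Uniform crossover over flattened parameter vectors of two policies."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(params, rng, rate=0.05, sigma=0.02):
    """Perturb a random subset of parameters with Gaussian noise."""
    mask = rng.random(params.shape) < rate
    return params + mask * rng.normal(0.0, sigma, size=params.shape)

rng = np.random.default_rng(0)
theta_a = rng.normal(size=10_000)      # flattened actor-critic weights of agent A
theta_b = rng.normal(size=10_000)      # flattened actor-critic weights of agent B
child = mutate(crossover(theta_a, theta_b, rng), rng)
# The child parameters would be loaded back into the network and refined by further
# A2C gradient updates before the next evolutionary step.
print(child.shape, float(np.mean(child != theta_a)))
```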
DOI: 10.26102/2310-6018/2025.50.3.028
Modern computer graphics offers many visual effects for processing three-dimensional scenes during rendering. The burden of computing these effects falls on the user's hardware, which forces a compromise between performance and image quality. In this regard, developing systems capable of automatically assessing the quality of three-dimensional rendering, and of images in general, becomes relevant. The relevance of this topic is twofold. First, the ability to predict user reactions will allow graphic applications to be customized more accurately. Second, understanding preferences can help optimize 3D scenes by identifying visual effects that can be disabled. More broadly, this poses the challenge of optimally managing the rendering process so that the available hardware capabilities can be used to the fullest. An important task is therefore to model the 3D rendering process in a form that makes its optimization as simple as possible. The purpose of this study is to create such a model, which makes it possible to carry out an expert evaluation stage, automatically determine the quality of three-dimensional rendering, and use the result for optimal control of the rendering pipeline. A number of important issues that require special attention in the research are also discussed. The range of applications of the developed system includes various spheres of human activity involving three-dimensional modeling. Such a system can become a useful tool for both developers and users, which is especially important in education, video game development, virtual reality technologies, and other areas where realistic objects must be modeled or complex processes visualized.
Keywords: quadratic knapsack problem, multidimensional knapsack problem, artificial neural networks, three-dimensional rendering, user preference analysis, visual quality assessment, future technologies
DOI: 10.26102/2310-6018/2025.50.3.018
Based on systems engineering principles, the technological aspects of designing a prototype electric vehicle with a combined control system are considered; the system allows simple and safe switching from manual mode to remote control (via a radio channel) or to programmatic control. The design and physical implementation of the vehicle rest on prototyping, machining, and programming technologies that are interrelated throughout the entire project. The project is implemented on the basis of the Bigo.Land kit (in its mechanical and mechatronic parts) and on ArduPilot/Pixhawk (in its software and hardware parts). The basic Bigo.Land kit is complemented by a two-way overrunning clutch, which, together with the software, allows the pilot to take part in the control process when necessary. The result of the work is a fully functional prototype of an electric vehicle with a sensing system and functions of unmanned control and autonomous behavior, as well as its virtual (CAD/CAE) model and software in the form of ArduPilot/Pixhawk flight controller firmware that extends and complements the standard functionality of the base ArduPilot software. The project and the results obtained can be useful to specialists developing and operating unmanned mobile vehicles, as well as to educational institutions implementing pedagogical technologies based on project-based learning.
Keywords: unmanned electric vehicle, technological process aspects of design, combined control, two-way overrunning clutch, prototyping, system engineering, project-based learning
DOI: 10.26102/2310-6018/2025.50.3.023
The issue of wireless information transmission via radio communication is considered. The key parameter of radio channel quality is the signal-to-noise ratio at the input of the receiving device, and the importance of ensuring a high signal-to-noise ratio in radio transmitting and receiving devices and systems is emphasized. An analytical review and comparative analysis of common methods for determining the signal-to-noise ratio at the receiver input is carried out. Theoretical and practical methods are considered, in particular the complex envelope method, the spectral analysis method, and the free-space loss calculation method; their advantages and disadvantages are revealed. The mathematical and methodological apparatus of the considered methods is described, and the algorithms for measuring the signal-to-noise ratio in these methods are briefly outlined. Information about the conducted experimental studies of the methods is provided, and the initial data and the results of the experiment are described. The results of a comparative analysis of the theoretical and practical methods are presented according to the criterion of accuracy in estimating the signal-to-noise ratio at the receiver input. The main reasons and factors that reduce the accuracy of the theoretical signal-to-noise ratio estimate compared with practical measurement are analyzed. Possible ways to increase the signal-to-noise ratio value in the theoretical methods are proposed.
Keywords: wireless communication, radio signal, signal-to-noise ratio, complex envelope method, spectral analysis method, loss calculation method
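A minimal sketch of the loss-calculation (link-budget) approach mentioned above: a theoretical SNR estimate from free-space path loss and a thermal-noise floor. All link parameters in the example are illustrative assumptions, not values from the article's experiment.

```python
# Hedged sketch of a theoretical SNR estimate via free-space path loss.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def snr_db(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz, bandwidth_hz, noise_figure_db):
    p_rx = p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db(distance_m, freq_hz)    # received power, dBm
    noise_floor = -174.0 + 10 * math.log10(bandwidth_hz) + noise_figure_db  # thermal noise, dBm
    return p_rx - noise_floor

# Example: 2.4 GHz link over 1 km, 20 dBm transmitter, 3 dBi antennas, 20 MHz bandwidth.
print(round(snr_db(20, 3, 3, 1_000, 2.4e9, 20e6, 6), 1), "dB")
```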
DOI: 10.26102/2310-6018/2025.50.3.030
In this study, a new mechanism for generating training data for a neural network is proposed for the task of image-based code generation. For a system to perform the task assigned to it, it must be trained. The initial dataset provided with the pix2code system allows the system to be trained, but it relies on the data contained in the domain-specific dictionary. Expanding or changing the words in the dictionary does not affect the dataset in any way, which limits the flexibility of the system because rules specific to an enterprise cannot be taken into account. Some studies claim to have created their own datasets, but the lack of public access makes it difficult to assess the complexity of the images they contain. To solve this problem, a submodule was developed within this study that, based on a modified dictionary of the domain-specific language, creates a custom training dataset consisting of pairs of an image and the source code corresponding to that image. To test the created dataset, the modified pix2code system was trained on it and was then able to predict code for test examples.
Keywords: code generation, image, machine learning, dataset, source code
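A minimal sketch of the generation idea: layouts are sampled from a modifiable DSL dictionary and each DSL program is saved next to the image that a renderer would produce for it. The dictionary entries, grammar, and file naming are illustrative assumptions, and the rendering step is only indicated by a placeholder; pix2code's actual DSL and renderer are richer.

```python
# Hedged sketch of sampling (DSL program, image) training pairs from a dictionary.
import random

DSL_DICTIONARY = {
    "elements": ["btn-green", "btn-orange", "text", "quadruple"],
}

def sample_program(rng, n_rows=2, max_elems=3):
    lines = ["header { btn-green }"]
    for _ in range(n_rows):
        elems = rng.sample(DSL_DICTIONARY["elements"], k=rng.randint(1, max_elems))
        lines.append("row { " + " , ".join(elems) + " }")
    return "\n".join(lines)

rng = random.Random(0)
for i in range(3):
    program = sample_program(rng)
    with open(f"sample_{i}.gui", "w") as f:
        f.write(program)
    # render_to_png(program, f"sample_{i}.png")  # placeholder: a GUI renderer produces the paired image
print(open("sample_0.gui").read())
```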
DOI: 10.26102/2310-6018/2025.50.3.014
This paper considers a method for increasing the search speed in hash tables with links if the problem assumes that the performance is limited by the throughput of one of the interfaces between the storage levels (caches L1, L2, L3, memory). To reduce the impact of this limitation, an algorithm for optimal use of the cache line size, the minimum portion of information transferred between the storage levels, is proposed. The paper shows that there is an optimal size of information about a key in a hash table (key representation) for a specific problem and architecture; equations are given for its numerical and approximate analytical calculation for the cases of a key present and absent in the table. A separate case of using a part of a key as its representation in the table is considered. An algorithm for working with inconvenient key representation sizes that are not a power of two is proposed. The presented calculation results confirm the increase in search performance when using a calculated key representation size compared to other options. The presented experimental result confirms the assumption that the associated complication of the code has virtually no effect on performance due to partial processor idleness. The work assumes collision resolution via chains, but similar calculations should be applicable to other methods given their specific features.
Keywords: hash, hash-table, open addressing, chain, collision, memory level parallelism, cache, cache-line, cache miss
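A minimal sketch of the key-representation idea in a chained hash table: each chain entry stores a short fingerprint of the key inline, so most probes are resolved without touching the full key (and thus, in a native implementation, without an extra cache-line fetch). The 16-bit fingerprint width is an illustrative choice, not the optimum computed in the paper.

```python
# Hedged sketch: short key representations (fingerprints) stored inline in chains.
class ChainedTable:
    def __init__(self, n_buckets=1024):
        self.buckets = [[] for _ in range(n_buckets)]

    def insert(self, key, value):
        h = hash(key)
        fp = h & 0xFFFF                                  # 16-bit key representation
        self.buckets[h % len(self.buckets)].append((fp, key, value))

    def get(self, key):
        h = hash(key)
        fp = h & 0xFFFF
        for entry_fp, entry_key, value in self.buckets[h % len(self.buckets)]:
            if entry_fp == fp and entry_key == key:      # full comparison only on fingerprint match
                return value
        return None

table = ChainedTable()
table.insert("alpha", 1)
print(table.get("alpha"), table.get("beta"))
```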
DOI: 10.26102/2310-6018/2025.50.3.013
The paper proposes a new method for suppressing artifacts generated during image blending. The method is based on differential activation. The task of image blending arises in many applications; however, this work specifically addresses it from the perspective of face attribute editing. Existing artifact suppression approaches have significant limitations: they employ differential activation to localize editing regions followed by feature merging, which leads to loss of distinctive details (e.g., accessories, hairstyles) and degradation of background integrity. The state-of-the-art artifact suppression method utilizes an encoder-decoder architecture with hierarchical aggregation of StyleGAN2 generator feature maps and a decoder, resulting in texture distortion, excessive sharpening, and aliasing effects. We propose a method that combines traditional image processing algorithms with deep learning techniques. It integrates Poisson blending and the MAResU-Net neural network. Poisson blending is employed to create artifact-free fused images, while the MAResU-Net network learns to map artifact-contaminated images to clean versions. This forms a processing pipeline that converts images with blending artifacts into clean artifact-free outputs. On the first 1000 images of the CelebA-HQ database, the proposed method demonstrates superiority over the existing approach across five metrics: PSNR: +17.11 % (from 22.24 to 26.06), SSIM: +40.74 % (from 0.618 to 0.870), MAE: −34.09 % (from 0.0511 to 0.0338), LPIPS: −67.16 % (from 0.3268 to 0.1078), and FID: −48.14 % (from 27.53 to 14.69). The method achieves these results with 26.3 million parameters (6.6× fewer than the 174.2 million parameters of the comparable method) and 22 % faster processing speed. Crucially, it preserves accessory details, background elements, and skin textures that are typically lost in existing methods, confirming its practical value for real-world facial editing applications.
Keywords: deep learning, facial attribute editing, blending artifact suppression network, image-to-image translation, differential activation, MAResU-Net, generative adversarial network (GAN)
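A minimal sketch of the Poisson blending step that produces the clean training targets, using OpenCV's seamlessClone as a stand-in for the article's implementation; the synthetic images and mask are placeholders for a generator output, the original photo, and the edited-region mask.

```python
# Hedged sketch of gradient-domain (Poisson) blending of an edited region.
import cv2
import numpy as np

original = np.full((256, 256, 3), 128, np.uint8)          # stand-in for the source photo
edited = original.copy()
cv2.circle(edited, (128, 128), 60, (90, 120, 200), -1)    # stand-in for the edited attribute region
mask = np.zeros((256, 256), np.uint8)
cv2.circle(mask, (128, 128), 70, 255, -1)                 # white = region taken from the edited image

blended = cv2.seamlessClone(edited, original, mask, (128, 128), cv2.NORMAL_CLONE)
# Pairs of (artifact-contaminated blend, clean Poisson blend) would then train the
# MAResU-Net to map contaminated images to clean versions.
print(blended.shape)
```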
DOI: 10.26102/2310-6018/2025.50.3.010
The relevance of this study is driven by the rapid growth of unstructured textual data in the digital environment and the pressing need for its systematic analysis. The lack of universal and easily reproducible methods for grouping textual information complicates interpretation and limits practical application across various domains, including healthcare, education, marketing, and the corporate sector. In response to this challenge, the present article aims to identify key algorithmic approaches to clustering unstructured texts and to analyze software systems implementing these methods. The primary research strategy is based on a comparative and analytical approach that enables the generalization and classification of contemporary machine learning algorithms applied to text data processing. The study reviews both traditional clustering techniques and advanced architectures incorporating unsupervised learning, numerical vector representations, and neural network models. Software tools are examined with a focus on their levels of accuracy, interpretability, and adaptability. As a result, the study systematizes criteria for selecting methods according to specific tasks, highlights limitations of existing approaches, and outlines promising directions for further development. The findings are intended to support professionals engaged in designing and deploying software solutions for the automatic processing and analysis of textual information.
Keywords: text clustering, unstructured data, topic modeling, machine learning, vector representations, unsupervised algorithms, software frameworks, text mining
DOI: 10.26102/2310-6018/2025.50.3.012
In recent years, the development of virtual reality (VR) technologies has been largely associated with the introduction of machine learning (ML) methods. The use of ML methods is aimed at increasing the level of comfort, efficiency and effectiveness of VR. ML algorithms can analyze interaction data, recognize patterns and adapt interaction scenarios based on the user's behavior and emotional state. The article analyzes the key modern areas of joint use of VR and ML, which have already been tested in practice and have shown fairly high efficiency. One of these areas is improving interaction in VR, including improving the quality of VR systems, more realistic graphics, adapting content to the user and accurate tracking of movements. The article considers the problems of using ML in VR technologies in the field of education, psychotherapy, rehabilitation, medicine, traffic management, in technologies for the creation, transmission, distribution, storage and use of electricity and other areas. A brief analysis of ML tools used in VR is also provided, among which generative neural networks can be distinguished that can create dynamic virtual environments. The study shows that the combination of VR and ML opens up new possibilities for creating intelligent and interactive systems and can lead to significant breakthroughs not only in VR but also in related technology areas.
Keywords: virtual reality technologies, machine learning, machine learning efficiency, adaptive algorithms, education, medicine, rehabilitation
DOI: 10.26102/2310-6018/2025.49.2.049
This article presents a procedure for optimizing a project represented as a network graph. The idea of the optimization is to make all paths from the initial event to the final one critical by transferring resources from non-critical activities with non-zero free float to activities on a critical path. Assuming that the dependence of an activity's duration on the resources allocated to it is linear, formulas for the new activity durations and the new critical time are obtained. The reallocation of resources shortens some activities but makes the project more strained. To evaluate the project with the new activity durations, a tension coefficient is introduced for each activity as the intensity of use of the generalized project resource per unit of time. During optimization these characteristics behave differently, so a generalized characteristic of project tension is introduced by aggregating the individual activity characteristics using the "fuzzy majority" principle. Well-known weighted averages can be used to aggregate the partial estimates, while the weights can be determined, for example, by the method of paired comparisons. The article provides an illustrative example demonstrating the proposed approach.
Keywords: network graph, critical path, resource, optimization, tension coefficient, aggregation
DOI: 10.26102/2310-6018/2025.50.3.009
This study assesses the quality of Russian-language annotations generated by a multi-agent system for time series analysis. The system includes four specialized agents: a dashboard analyst, a time series analyst, a domain-specific agent, and an agent for user interaction. Annotations are generated by analyzing dashboard and time series data using the GPT-4o-mini model and a task graph implemented with LangGraph. The quality of the annotations was assessed using the metrics of clarity, readability, contextual relevance, and literacy, as well as a Flesch readability index formula adapted for the Russian language. Testing was conducted with the participation of 21 users on 10 dashboards, giving 210 ratings on a ten-point scale for each metric. The results showed the effectiveness of the annotations: clarity 8.486, readability 8.705, contextual relevance 8.890, literacy 8.724. The readability index was 33.6, which corresponds to medium text complexity. This indicator reflects the specifics of the subject area: it accounts only for static length characteristics, not for word order or context. Adult non-specialist readers are nevertheless able to understand the complex terms in the annotations, as the other ratings confirm. All comments left by users will be taken into account to improve the format and interactivity of the system in further research.
Keywords: time series, annotation generation, LLM, multi-agent system, dashboards
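A minimal sketch of an adapted Flesch reading-ease computation for Russian text; the coefficients 1.3 and 60.1 follow a commonly cited Russian adaptation of the formula and may differ from the exact variant used in the article, and syllables are approximated by vowel counts.

```python
# Hedged sketch of a Russian-adapted Flesch reading-ease score.
import re

VOWELS = set("аеёиоуыэюяАЕЁИОУЫЭЮЯ")

def flesch_ru(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[А-Яа-яЁё]+", text)
    syllables = sum(sum(ch in VOWELS for ch in w) for w in words)
    asl = len(words) / sentences                    # average sentence length in words
    asw = syllables / max(1, len(words))            # average word length in syllables
    return 206.835 - 1.3 * asl - 60.1 * asw

print(round(flesch_ru("Выручка выросла на десять процентов за квартал."), 1))
```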
DOI: 10.26102/2310-6018/2025.50.3.022
The relevance of this study stems from the rapid rise of inflation, fueled by a significant increase in wages in some sectors of the economy, and from inflationary expectations, which make life considerably more difficult for society as a whole. The goal is to determine the level of GDP that will ensure long-term stability in the country's economy and in the lives of its citizens. The article studies a macroeconomic model of the Goodwin business cycle that includes a small parameter, with the aim of predicting the dynamics of vital economic indicators. The model is analyzed with a method from the theory of dynamical systems, Poincaré's method of normal forms. It is shown that the model can have a stable cycle in the vicinity of the state of economic equilibrium. Asymptotic formulas for calculating the periodic solutions are obtained. The size of the limit cycle, which reflects the periodic processes occurring in the Goodwin economic system, is determined quantitatively from the input parameters, and the stability of these processes is proven. The results clearly illustrate that the desired sustainable cyclical pattern of economic development, which allows the state to develop effectively, does not occur in all cases. In addition, it is quite difficult to draw practical conclusions about the scope of this cycle. But when it does occur, long-term forecasts can be made regarding development and the level of the main economic indicators that this development will ensure.
Keywords: dynamic systems, Goodwin economic system, small parameter method, limit cycle, stability
DOI: 10.26102/2310-6018/2025.49.2.038
The article considers the feasibility of currency integration in the BRICS format, as well as the optimality of BRICS as a currency zone. In the course of the study, calculations were made using the optimality formula for a currency zone. This model allows one to analyze the ratios of macroeconomic indicators for pairs of countries and to find the average optimality coefficient of the entire association for currency integration. In addition, the research introduces additional economic and geopolitical criteria, which are used to check the relevance of the primary calculations based on the optimal currency zone model. Correlation of labor markets, the ratio of investment attractiveness levels, correlation of business and financial cycles, inflationary convergence, and geopolitical risks all have a direct or indirect impact on the success of integration. The data obtained after calculation and verification with the additional criteria reflect the real degree of readiness of BRICS to create a single currency, as well as the predisposition of individual countries to economic integration. The purpose of the article is not to discredit the BRICS programs, but to provide a scientific approach to the analysis of one of the initiatives repeatedly promoted at BRICS summits. Currency integration in the BRICS format is a complex, multifaceted process that requires enormous time and resource expenditures from all member states of the association. This state of affairs runs counter to individual calls and statements made by politicians of the BRICS states, which may somewhat distort the public's idea of the subject of the study, currency integration in the BRICS format.
Keywords: currency zone, currency integration, optimality, BRICS, criterion, economy, single currency, potential
DOI: 10.26102/2310-6018/2025.50.3.016
The article examines a conceptual approach to creating and utilizing a digital twin of stage space, which enables the implementation of higher-level control methods through synchronization with the physical space, employing automation of stage processes and their intelligent analysis. A model of stage space is proposed, encompassing static stage objects, dynamic actors, and controllable equipment, as well as intermediate software and hardware interaction systems. Based on this model, a method for constructing a digital twin is introduced, relying on bidirectional real-time synchronization between the model and the automation object. Potential applications of the resulting hardware-software system are discussed, focusing on the development of new methods for managing stage equipment and integrating immersive technologies into the stage environment. The architecture and process of developing a digital twin and a control system based on it are described. New control methods based on intelligent data analysis are proposed, including automated targeting of lighting fixtures, scene switching via triggers, and the integration of augmented reality technologies. These methods significantly streamline control processes and enhance the immersiveness of events.
Keywords: digital twin, simulation, control systems, lighting equipment, stage, theater lighting, augmented reality, cyber-physical system, intelligent control, digital transformation
DOI: 10.26102/2310-6018/2025.50.3.021
Keywords: stratified model, production management, multi-level evaluation of results, optimal resource allocation, optimal control
DOI: 10.26102/2310-6018/2025.50.3.007
With the increasing number of incidents involving the unauthorized use of unmanned aerial vehicles (UAVs), the development of effective methods for their automatic detection has become increasingly relevant. This article provides a concise overview of current approaches to UAV detection, with particular emphasis on acoustic monitoring methods, which offer several advantages over radio-frequency and visual systems. The main acoustic features used for recognizing drone sound signals are examined, along with techniques for extracting these features using open-source libraries such as Librosa and Essentia. To evaluate the effectiveness of various features, a balanced dataset was compiled and utilized, containing audio recordings of drones and background noise. A multi-stage feature selection methodology was tested using the Feature-engine library, including the removal of constant and duplicate features, correlation analysis, and feature importance assessment. As a result, a subset of 53 acoustic features was obtained, providing a balance between UAV detection accuracy and computational cost. The mathematical foundations of spectral feature extraction are described, including different types of spectrograms (mel-, bark-, and gammatone-spectrograms), as well as vector and scalar acoustic features. The results presented can be used to develop automatic UAV acoustic detection systems based on machine learning methods.
Keywords: unmanned aerial vehicle, acoustic signals, acoustic features, spectral analysis, machine learning
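A minimal sketch of extracting a few of the acoustic features discussed above with librosa; the synthetic tone stands in for a recorded UAV sample, and the feature subset and parameters are illustrative rather than the selected 53-feature set.

```python
# Hedged sketch of spectral and scalar acoustic feature extraction with librosa.
import numpy as np
import librosa

sr = 16_000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 180 * t) + 0.05 * np.random.randn(sr)   # rotor-like hum plus noise

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)                 # mel-spectrogram on a dB scale
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)            # frame-wise (vector) features
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
zcr = librosa.feature.zero_crossing_rate(y)

# Scalar summaries (means over frames) are what a classical classifier would consume.
features = np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1), zcr.mean(axis=1)])
print(mel_db.shape, features.shape)
```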
DOI: 10.26102/2310-6018/2025.49.2.048
Oil spills pose a serious threat to marine ecosystems, causing long-lasting environmental and economic consequences. To minimize damage, it is critically important to effectively limit the spread of pollution. One of the most common means of fighting oil spills is booms: floating barriers that localize the spill area and increase the efficiency of subsequent cleanup. However, the effectiveness of such barriers depends not only on the materials used but also on their geometric configuration. In this regard, minimizing the length of boom needed to enclose a given spill area becomes an urgent task. In this paper, the problem is formulated as an isoperimetric optimization problem in the class of polygons. The problem of maximizing the area bounded by a polygon with a fixed perimeter and a fixed segment (for example, a section of shore) is investigated, provided that the boundary is a broken line rather than a smooth curve. It is proved that the optimal shape is achieved when the polygon is regular, that is, when its sides and angles are equal. The results obtained can be used in the design of more efficient boom placement systems, contributing to lower material costs and improved environmental safety.
Keywords: isoperimetric problem, shape optimization, booms, oil spill, mathematical modeling, geometric optimization
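A small numeric illustration of the stated result (ignoring the shore segment and without any claim to replace the proof): among star-shaped octagons rescaled to the same perimeter, none of the randomly generated irregular ones encloses as much area as the regular octagon. The boom length and vertex count are illustrative.

```python
# Hedged numeric illustration: a regular polygon maximizes area for a fixed perimeter.
import numpy as np

def polygon_area(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def perimeter(pts):
    return np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum()

n, boom_length = 8, 100.0                                   # 8 boom sections, 100 m of boom
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
regular = np.c_[np.cos(angles), np.sin(angles)]
regular *= boom_length / perimeter(regular)

rng = np.random.default_rng(1)
best_irregular = 0.0
for _ in range(200):                                        # random star-shaped octagons
    r = rng.uniform(0.3, 1.0, n)
    pts = np.c_[r * np.cos(angles), r * np.sin(angles)]
    pts *= boom_length / perimeter(pts)
    best_irregular = max(best_irregular, polygon_area(pts))

print(round(polygon_area(regular), 1), ">", round(best_irregular, 1))
```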
DOI: 10.26102/2310-6018/2025.50.3.011
This paper addresses the problem of optimizing a quantum key distribution (QKD) network by grouping an initial set of end nodes into small access networks with a star topology using clustering algorithms. The study presents a modified version of the k-medoids algorithm that takes into account the constraint on the maximum quantum link length between a pair of nodes. A new non-Euclidean metric for assessing link quality is also presented, based on the quantum channel capacity computed from the physical properties and the length of the optical fiber link. The performance of the presented algorithm was then compared using two metrics: the Euclidean norm and the proposed metric. A series of experiments was conducted to solve the clustering problem for multiple sets of nodes randomly distributed on the plane. It was found that the proposed non-Euclidean metric reduces the number of clusters by 11.7% compared to the Euclidean norm, and that using multiple attempts at each iteration can improve the result by more than 20%. The clustering method and the new metric presented in this paper reduce the number of subnets and thus the cost of organizing central nodes, and they also make it possible to subsequently solve the simplified problem of building a backbone network that combines the obtained subnets into a single QKD network.
Keywords: quantum key distribution, mathematical modeling, clustering, k-medoids algorithm, software package
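A minimal sketch of the kind of link-quality metric described above: an exponential fibre-loss model stands in for the capacity-based metric, and links longer than the maximum quantum-channel length are excluded by assigning them an infinite distance. The attenuation coefficient, length limit, and node layout are illustrative assumptions.

```python
# Hedged sketch of a capped, capacity-style distance matrix for QKD clustering.
import numpy as np

ALPHA_DB_PER_KM = 0.2        # typical attenuation of standard single-mode fibre
MAX_LINK_KM = 60.0           # assumed maximum quantum link length

def link_cost(p, q):
    d = float(np.linalg.norm(p - q))                   # fibre length approximated by distance
    if d > MAX_LINK_KM:
        return np.inf                                  # link is not allowed at all
    transmittance = 10 ** (-ALPHA_DB_PER_KM * d / 10)  # proxy for relative key rate
    return 1.0 / transmittance                         # lower capacity -> larger "distance"

nodes = np.random.default_rng(2).uniform(0, 100, size=(30, 2))   # node coordinates, km
D = np.array([[link_cost(p, q) for q in nodes] for p in nodes])
# D replaces the Euclidean norm inside the modified k-medoids; infinite entries prevent
# a node from being assigned to an unreachable medoid.
print("share of feasible links:", round(float(np.isfinite(D).mean()), 2))
```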
DOI: 10.26102/2310-6018/2025.49.2.032
The article considers the problem of designing a system for operational short-term forecasting of wind speed at a specific point on the coast. An automated approach to designing hybrid machine learning models is proposed that combines an ensemble of multilayer neural networks with an interpretable system based on fuzzy logic. The method rests on the automated formation of the neural network ensemble and the fuzzy logic system using self-configuring evolutionary algorithms, which allows adaptation to the features of the input data without manual tuning. After the neural network ensemble is constructed, a separate fuzzy logic system is formed and trained on the ensemble's inputs and outputs. This approach reproduces the behavior of the neural network model in an interpretable form. Experimental testing on a meteorological dataset demonstrates the effectiveness of the method, which ensures a balance between forecast quality and model interpretability. It is shown that the constructed interpretable system reproduces the key patterns of the neural network ensemble while remaining compact and understandable for analysis. The constructed model can be used for decision-making in port services and in the organization of coastal events, providing quick and easy forecasting. The proposed approach as a whole makes it possible to obtain analogous models in other situations similar to the one considered.
Keywords: operational forecasting of wind characteristics, ensembles of neural networks, fuzzy logic systems, decision trees, self-configuring evolutionary algorithms
DOI: 10.26102/2310-6018/2025.50.3.002
Modern digital radio communication systems impose stringent requirements on energy and spectral efficiency under the influence of various types of interference, particularly in challenging radio wave propagation conditions. Consequently, the investigation of existing methods for operating in radio channels with fading, as well as the development of new approaches to address this challenge, remains highly relevant. The primary objective of this study is to investigate diversity reception techniques aimed at enhancing signal robustness against fading. The study examines approaches to combining known diversity methods and proposes a new modified spatial reception method. The methodology employed includes a comparative analysis of various combinations of spatial diversity reception techniques within an adaptive feedback system, based on simulations conducted in the MATLAB environment to evaluate the impact of different fading types on data transmission in a channel with feedback. The novelty of this work lies in the proposed diversity method, which involves signal combining through optimal summation in diversity reception, performed only on a selected subset of receiving antennas. This subset is determined based on channel state estimation results, as summing signals from all receiving antennas is deemed unnecessary and significantly increases complexity when the received signal quality is already high. The results demonstrate that the proposed solution offers advantages over the conventional optimal summation method by reducing computational complexity, as signal summation is limited to a portion of the receiving antennas rather than all of them. The proposed solution is particularly suitable for applications requiring simultaneous optimization of both energy efficiency and spectral efficiency in digital radio systems. Its relevance becomes especially pronounced under degraded reception conditions caused by environmental factors inducing severe fading effects.
Keywords: diversity reception, selection combining, equal gain combining, maximal ratio combining, adaptive system with feedback, error-control coding, fading channel
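A minimal sketch of the proposed idea in simulation form: maximal-ratio combining is applied only to the K receiving antennas with the strongest estimated channel gains instead of all antennas. The Rayleigh channel, BPSK symbols, noise level, and antenna counts are illustrative assumptions.

```python
# Hedged sketch of maximal-ratio combining restricted to a selected antenna subset.
import numpy as np

rng = np.random.default_rng(3)
n_rx, K, n_sym = 8, 3, 1000
h = (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx)) / np.sqrt(2)    # Rayleigh channel gains
s = rng.choice([-1.0, 1.0], size=n_sym)                                  # BPSK symbols
noise = 0.3 * (rng.normal(size=(n_rx, n_sym)) + 1j * rng.normal(size=(n_rx, n_sym)))
r = h[:, None] * s[None, :] + noise                                      # per-antenna received signals

best = np.argsort(np.abs(h))[::-1][:K]     # subset chosen from channel state estimates
w = np.conj(h[best])                       # MRC weights for the selected antennas only
combined = (w[:, None] * r[best]).sum(axis=0)
decisions = np.sign(combined.real)
print("BER over the selected subset:", float(np.mean(decisions != s)))
```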
DOI: 10.26102/2310-6018/2025.50.3.004
Clinical gait analysis is a key tool for diagnosis and rehabilitation planning in patients with motor disorders; however, accurate and automatic detection of gait events remains a challenging task in resource-limited settings. Force plates are considered the gold standard for automatic gait event detection, but their application is limited in cases of pathological gait patterns and when patients use assistive rehabilitation devices. This paper presents an approach to automatic detection of gait events in children with pathological gait using recurrent neural networks. The presented methodology effectively identifies key gait events (heel strike and toe off). The study used kinematic data from patients with gait disorders, collected using an optical motion capture system under various conditions: barefoot walking, in orthopedic footwear, with orthoses, and other technical rehabilitation aids. Four models were trained to detect gait events (one for each leg and event type). The models demonstrated high sensitivity with small time delays between predicted and actual events. The proposed method can be used in clinical practice to automate data annotation and reduce processing time for gait analysis results.
Keywords: gait events, neural networks, recurrent neural networks, motion capture, biomechanics, cerebral palsy, foot kinematics, machine learning
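A minimal sketch of one of the four event detectors as a small recurrent network in PyTorch: a window of marker kinematics is mapped to a per-frame probability of a single gait event (for example, right heel strike). The input size, depth, and window length are illustrative assumptions.

```python
# Hedged sketch of a per-frame gait-event detector built on an LSTM.
import torch
import torch.nn as nn

class GaitEventNet(nn.Module):
    def __init__(self, n_features=18, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, frames, features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)   # event probability per frame

model = GaitEventNet()
window = torch.randn(4, 120, 18)      # 4 trials, 120 frames, 18 kinematic channels
probs = model(window)
# Events are read off as above-threshold local maxima of probs; training would use a
# frame-wise binary loss against annotated heel-strike / toe-off labels.
print(probs.shape)
```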
DOI: 10.26102/2310-6018/2025.49.2.037
This article proposes an algorithm for evaluating project resource allocation that takes into account various fuzzy expert recommendations regarding the start times of tasks within float constraints, aiming to select the optimal set of expert suggestions. To determine the float constraints for task start and finish times, the classical critical path method is used. Expert recommendations on task start times are modeled as fuzzy trapezoidal or triangular numbers defined along the time axis. Based on the fuzzy start and finish times of project tasks, a fuzzy representation of the probability that a task will be performed at a specific moment is constructed. Building alpha-cuts for this fuzzy probability representation allows the identification of intervals, within float constraints, during which a task is likely to be performed at a certain level of fuzzy probability, thus enabling resource planning for those periods. The obtained results allow for: evaluating the expert recommendations that are optimal in terms of resource distribution; minimizing subcontracting needs for task execution; and calculating the associated subcontracting costs. The proposed algorithmic and software solution can serve as an effective decision support tool in the implementation of multi-component projects.
Keywords: network graph of the project, critical path, fuzzy expert recommendations, work completion dates on project, project resource optimization
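A minimal sketch of the alpha-cut machinery behind the algorithm: an expert's recommended start time is modeled as a trapezoidal fuzzy number, and its alpha-cut gives the crisp interval in which the task is expected to start at a given confidence level. The numbers are illustrative.

```python
# Hedged sketch of alpha-cuts of a trapezoidal fuzzy start time.
def alpha_cut(a, b, c, d, alpha):
    """Alpha-cut [left, right] of a trapezoidal fuzzy number with support [a, d] and
    core [b, c]; a triangular number is the special case b == c."""
    return a + alpha * (b - a), d - alpha * (d - c)

# An expert suggests starting around days 6..8, certainly between days 4 and 11.
for alpha in (0.2, 0.5, 0.8):
    lo, hi = alpha_cut(4, 6, 8, 11, alpha)
    print(f"alpha={alpha}: start within [{lo:.1f}, {hi:.1f}] days")
# Intersecting these intervals with the float window from the critical path method gives
# the periods for which resources should actually be reserved.
```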
DOI: 10.26102/2310-6018/2025.50.3.017
The article explores the task of automatically determining the semantic similarity of texts, aimed at identifying original sources and instances of borrowing in news materials. A two-phase algorithm is presented: the first stage employs cosine similarity for preliminary text filtering, while the second stage calculates an asymmetric weighted measure of semantic similarity using RuBERT models. The algorithm conducts a comprehensive analysis of texts, taking into account their morphological, syntactic, and semantic features, and demonstrates robustness against typical errors found in news materials. The developed algorithm includes stages of linguistic text processing, inverted index construction, and similarity calculation using various linguistic features. Special attention is given to sentence processing: TF-IDF weighting, duplicate removal, and intersection analysis. To assess the semantic similarity of sentences, a weighted scoring system is applied, incorporating lexical, morphological, syntactic, and semantic characteristics. The experimental part of the study focuses on determining the algorithm's optimal parameters, such as threshold values and weight coefficients for different linguistic features. The results demonstrate that the proposed algorithm effectively detects borrowings, including cases of substantial text modifications, achieving high recall at the filtering stage and improved precision after semantic analysis. The algorithm is particularly useful for automated news digest generation and monitoring text reuse in regional media.
Keywords: semantic similarity, text processing, neural networks, RuBERT, morphological analysis, syntactic analysis, semantic analysis, borrowings, original source
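A minimal sketch of the first, filtering phase: pairs of news texts are kept as borrowing candidates only if their cosine similarity exceeds a threshold, and only those pairs would proceed to the heavier RuBERT-based asymmetric scoring. TF-IDF vectors and the 0.35 threshold are illustrative stand-ins rather than the article's tuned parameters.

```python
# Hedged sketch of cosine-similarity pre-filtering of candidate text pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The regional administration opened a new bridge across the river.",
    "A new bridge across the river was opened by the regional administration on Friday.",
    "The city hosted an international chess tournament this weekend.",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

THRESHOLD = 0.35
candidates = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
              if sim[i, j] >= THRESHOLD]
print(candidates)   # only these pairs are passed on to the semantic (RuBERT) stage
```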
DOI: 10.26102/2310-6018/2025.49.2.042
This article presents a concept for analyzing tea raw materials using the YOLO family of models, as well as comparative analysis of two versions of YOLOv8: Nano and Small. The study highlights metrics used to compare these models' performance. An experimental comparison was conducted on real examples of tea raw material images. For this purpose, a training dataset was collected containing images of tea samples classified by fermentation type: green tea, red tea, white tea, yellow tea, oolong, shou puerh, and sheng puerh. To increase the number of training samples, augmentation methods were applied such as image rotation, sharpening, perspective distortion, and blurring. Based on the experiment results, it is concluded that choosing between the two presented models depends on the task at hand and available computational resources. YOLOv8s (Small) outperforms YOLOv8n (Nano) in terms of accuracy but consumes more time to provide results. On the other hand, YOLOv8n processes data faster and can be effectively utilized under limited computing power conditions, making it particularly suitable for handling large volumes of data.
Keywords: image analysis, machine learning, computer vision, tea raw material, convolutional neural networks
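A minimal sketch of how the two model sizes can be trained and compared with the Ultralytics API; the dataset YAML, epoch count, and image size are illustrative assumptions rather than the article's training configuration.

```python
# Hedged sketch: training and validating YOLOv8 Nano and Small on a tea-leaf dataset.
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolov8s.pt"):                   # Nano vs. Small
    model = YOLO(weights)
    model.train(data="tea_leaves.yaml", epochs=50, imgsz=640)  # classes = fermentation types
    metrics = model.val()
    print(weights, metrics.box.map50)                          # mAP@0.5 for the comparison
```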
DOI: 10.26102/2310-6018/2025.50.3.003
The paper is devoted to the pressing issue of automating the logistics processes of emergency medical services (EMS). The present macro-level management structure of EMS logistics is examined, and its deficiencies and current problems are highlighted. It is considered advisable to begin by automating the central EMS warehouses of a region. During the analysis, quantitative parameters and warehouse functionalities were identified. An analysis of current solutions revealed that off-the-shelf developments cannot be used effectively, so it is proposed to implement an original development, and the tasks for initiating work on it have been set. In solving these tasks, an improved EMS logistics management structure for the region has been proposed, including an automated specialized warehouse. Its architecture is presented as a hardware-software solution with business processes and functions distributed across levels. A storage organization methodology is proposed, enabling the implementation of a warehouse with the specified parameters. Algorithms for executing key processes, such as automatic loading and unloading, are provided. To maximize warehouse utilization, models are presented for determining the dimensional parameters and capacity of the racking group, as well as models for determining and minimizing the execution time of basic automatic warehouse procedures. This mathematical apparatus is to be used in designing and automating warehouses built according to the proposed methodology. Its application demonstrated that even with a non-optimized motion scheme for the actuating mechanisms (which is not recommended for an operational solution), the technical requirements for the drive units of the robotic system are easily achievable at minimal cost. Based on the results of the work performed, it was decided to proceed to the next stage: creating a prototype.
Keywords: logistics, management structure, automation, methodology for organizing an automated warehouse, software and hardware package, robotic solution, medicine warehouse, emergency medical care