<?xml version="1.0" encoding="UTF-8"?>
<article article-type="research-article" dtd-version="1.3" xml:lang="ru" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="https://metafora.rcsi.science/xsd_files/journal3.xsd">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">moitvivt</journal-id>
      <journal-title-group>
        <journal-title xml:lang="ru">Моделирование, оптимизация и информационные технологии</journal-title>
        <trans-title-group xml:lang="en">
          <trans-title>Modeling, Optimization and Information Technology</trans-title>
        </trans-title-group>
      </journal-title-group>
      <issn pub-type="epub">2310-6018</issn>
      <publisher>
        <publisher-name>Издательство</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.26102/2310-6018/2021.35.4.035</article-id>
      <article-id pub-id-type="custom" custom-type="elpub">1115</article-id>
      <title-group>
        <article-title xml:lang="ru">Детектирование дефектов неисправных элементов линий электропередач при помощи нейронных сетей семейства YOLO</article-title>
        <trans-title-group xml:lang="en">
          <trans-title>Detection of defects in faulty elements of power lines using neural networks of the YOLO family</trans-title>
        </trans-title-group>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <contrib-id contrib-id-type="orcid">0000-0002-9121-894X</contrib-id>
          <name-alternatives>
            <name name-style="eastern" xml:lang="ru">
              <surname>Астапова</surname>
              <given-names>Марина Алексеевна</given-names>
            </name>
            <name name-style="western" xml:lang="en">
              <surname>Astapova</surname>
              <given-names>Marina Alekseevna</given-names>
            </name>
          </name-alternatives>
          <email>marinaastapova55@gmail.com</email>
          <xref ref-type="aff">aff-1</xref>
        </contrib>
        <contrib contrib-type="author" corresp="yes">
          <contrib-id contrib-id-type="orcid">0000-0002-7032-0291</contrib-id>
          <name-alternatives>
            <name name-style="eastern" xml:lang="ru">
              <surname>Уздяев</surname>
              <given-names>Михаил Юрьевич</given-names>
            </name>
            <name name-style="western" xml:lang="en">
              <surname>Uzdiaev</surname>
              <given-names>Mikhail Yurievich</given-names>
            </name>
          </name-alternatives>
          <email>m.y.uzdiaev@gmail.com</email>
          <xref ref-type="aff">aff-2</xref>
        </contrib>
      </contrib-group>
      <aff-alternatives id="aff-1">
        <aff xml:lang="ru">Санкт-Петербургский Федеральный исследовательский центр Российской академии наук</aff>
        <aff xml:lang="en">St. Petersburg Federal Research Center of the Russian Academy of Sciences</aff>
      </aff-alternatives>
      <aff-alternatives id="aff-2">
        <aff xml:lang="ru">Санкт-Петербургский Федеральный исследовательский центр Российской академии наук</aff>
        <aff xml:lang="en">St. Petersburg Federal Research Center of the Russian Academy of Sciences</aff>
      </aff-alternatives>
      <pub-date pub-type="epub">
        <day>01</day>
        <month>01</month>
        <year>2026</year>
      </pub-date>
      <volume>1</volume>
      <issue>1</issue>
      <elocation-id>10.26102/2310-6018/2021.35.4.035</elocation-id>
      <permissions>
        <copyright-statement>Copyright © Авторы, 2026</copyright-statement>
        <copyright-year>2026</copyright-year>
        <license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/">
          <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License</license-p>
        </license>
      </permissions>
      <self-uri xlink:href="https://moitvivt.ru/ru/journal/article?id=1115"/>
      <abstract xml:lang="ru">
        <p>В настоящее время визуальная диагностика состояния элементов линий электропередач (ЛЭП) является сложной и трудоемкой процедурой. В целях повышения эффективности и снижения трудозатрат этой процедуры наиболее перспективным является применение беспилотных летательных аппаратов, оборудованных системами компьютерного зрения, выполняющих автоматическую детекцию поврежденных элементов ЛЭП. Для повышения качества детекции поврежденных участков ЛЭП системами компьютерного зрения наиболее перспективно применение современных глубоких нейросетевых архитектур. Однако вопрос применения таких архитектур в обозначенной задаче недостаточно освещен в современных исследованиях. Особо остро стоит вопрос сравнения различных нейронных сетей и выявления значимых различий в их результатах. Данная статья посвящена сравнительному анализу современных нейросетевых детекторов YOLOv3 и YOLOv4, а также их сокращенных версий (YOLOv3-tiny и YOLOv4-tiny) в задаче детекции дефектов ЛЭП. Приводятся результаты обучения этих детекторов на наборе данных CPLID, а также статистическое сравнение результатов YOLOv3 и YOLOv4 посредством процедуры кросс-валидации. Детекторами были показаны высокие результаты точности детекции (mAP@0,50=0,97±0,03; mAP@0,75=0,78±0,04), а также статистически значимые различия в этих результатах. Сравнительный анализ результатов показал, что применение более простой нейронной сети YOLOv3 является более перспективным в задаче детекции дефектов ЛЭП.</p>
      </abstract>
      <trans-abstract xml:lang="en">
        <p>Currently, visual diagnostics of the state of power transmission line (PTL) elements is a complex and time-consuming procedure. To increase its efficiency and reduce labor costs, the most promising approach is the use of unmanned aerial vehicles equipped with computer vision systems that automatically detect damaged PTL elements. To improve the quality with which computer vision systems detect damaged PTL areas, modern deep neural network architectures are the most promising option. However, the application of such architectures to this task is insufficiently covered in current research. The problem of comparing different neural networks and identifying significant differences in their results is especially acute. This article presents a comparative analysis of the modern neural network detectors YOLOv3 and YOLOv4, as well as their reduced versions (YOLOv3-tiny and YOLOv4-tiny), in the task of detecting power transmission line defects. The results of training these detectors on the CPLID dataset are presented, along with a statistical comparison of the YOLOv3 and YOLOv4 results by means of a cross-validation procedure. The detectors achieved high detection accuracy (mAP@0.50 = 0.97±0.03; mAP@0.75 = 0.78±0.04) and showed statistically significant differences in these results. The comparative analysis revealed that the simpler neural network, YOLOv3, is the more promising option for detecting power transmission line defects.</p>
      </trans-abstract>
      <kwd-group xml:lang="ru">
        <kwd>беспилотный летательный аппарат</kwd>
        <kwd>обследование высоковольтных линий электропередач</kwd>
        <kwd>обнаружение неисправностей</kwd>
        <kwd>определение дефектов</kwd>
        <kwd>нейронные сети</kwd>
        <kwd>YOLOv3</kwd>
        <kwd>YOLOv4</kwd>
      </kwd-group>
      <kwd-group xml:lang="en">
        <kwd>unmanned aerial vehicle</kwd>
        <kwd>inspection of high-voltage power lines</kwd>
        <kwd>fault detection</kwd>
        <kwd>defect detection</kwd>
        <kwd>neural networks</kwd>
        <kwd>YOLOv3</kwd>
        <kwd>YOLOv4</kwd>
      </kwd-group>
      <funding-group>
        <funding-statement xml:lang="ru">Работа выполнена при финансовой поддержке РФФИ в рамках научного проекта №20-08-01056 А</funding-statement>
        <funding-statement xml:lang="en">This work was carried out with the financial support of the Russian Foundation for Basic Research within the framework of scientific project No. 20-08-01056 A.</funding-statement>
      </funding-group>
    </article-meta>
  </front>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="cit1">
        <label>1</label>
        <mixed-citation xml:lang="ru">Savvaris A., Xie Y., Malandrakis K., Lopez M., Tsourdos A. Development of a fuel cell hybrid-powered unmanned aerial vehicle. 2016 24th Mediterranean Conference on Control and Automation (MED). 2016:1242–1247.</mixed-citation>
      </ref>
      <ref id="cit2">
        <label>2</label>
        <mixed-citation xml:lang="ru">Zhang T., et al. Current trends in the development of intelligent unmanned autonomous systems. Frontiers of Information Technology &amp; Electronic Engineering. 2017;18(1):68–85.</mixed-citation>
      </ref>
      <ref id="cit3">
        <label>3</label>
        <mixed-citation xml:lang="ru">Sadykova D., Pernebayeva D., Bagheri M., James A. IN-YOLO: Real-time detection of outdoor high voltage insulators using UAV imaging. IEEE Transactions on Power Delivery. 2019;35(3):1599–1601.</mixed-citation>
      </ref>
      <ref id="cit4">
        <label>4</label>
        <mixed-citation xml:lang="ru">Bian J., Hui X., Zhao X., Tan M. A monocular vision-based perception approach for unmanned aerial vehicle close proximity transmission tower inspection. International Journal of Advanced Robotic Systems. 2019;16(1):1729881418820227.</mixed-citation>
      </ref>
      <ref id="cit5">
        <label>5</label>
        <mixed-citation xml:lang="ru">He K., Gkioxari G., Dollar P., Girshick R. Mask R-CNN. Proceedings of the IEEE international conference on computer vision. 2017:2961–2969.</mixed-citation>
      </ref>
      <ref id="cit6">
        <label>6</label>
        <mixed-citation xml:lang="ru">Girshick R. Fast R-CNN. Proceedings of the IEEE international conference on computer vision. 2015:1440–1448.</mixed-citation>
      </ref>
      <ref id="cit7">
        <label>7</label>
        <mixed-citation xml:lang="ru">Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2016;39(6):1137–1149.</mixed-citation>
      </ref>
      <ref id="cit8">
        <label>8</label>
        <mixed-citation xml:lang="ru">Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:779–788.</mixed-citation>
      </ref>
      <ref id="cit9">
        <label>9</label>
        <mixed-citation xml:lang="ru">Pál D., Póczos B., Szepesvári C. Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs. Advances in neural information processing systems. 2010:1849–1857.</mixed-citation>
      </ref>
      <ref id="cit10">
        <label>10</label>
        <mixed-citation xml:lang="ru">Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C.-Y., Berg A.C. SSD: Single shot multibox detector. European conference on computer vision. Springer, Cham. 2016:21–37.</mixed-citation>
      </ref>
      <ref id="cit11">
        <label>11</label>
        <mixed-citation xml:lang="ru">Astapova M.A., Lebedev I.V. Review of approaches to detecting defects of power transmission line elements in images in the infrared, ultraviolet and visible spectra. Modeling, Optimization and Information Technology. 2020;8(4):38–39.</mixed-citation>
      </ref>
      <ref id="cit12">
        <label>12</label>
        <mixed-citation xml:lang="ru">Prates R.M., Cruz R., Marotta A.P., Ramos R.P., Simas Filho E.F., Cardoso J.S. Insulator visual non-conformity detection in overhead power distribution lines using deep learning. Computers &amp; Electrical Engineering. 2019;78:343–355.</mixed-citation>
      </ref>
      <ref id="cit13">
        <label>13</label>
        <mixed-citation xml:lang="ru">Liu C., Wu Y., Liu J., Sun Z. Improved YOLOv3 Network for Insulator Detection in Aerial Images with Diverse Background Interference. Electronics. 2021;10(7):771.</mixed-citation>
      </ref>
      <ref id="cit14">
        <label>14</label>
        <mixed-citation xml:lang="ru">Tao X., Zhang D., Wang Z., Liu X., Zhang H., Xu D. Detection of Power Line Insulator Defects Using Aerial Images Analyzed With Convolutional Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2020;50(4):1486–1498.</mixed-citation>
      </ref>
      <ref id="cit15">
        <label>15</label>
        <mixed-citation xml:lang="ru">Liao G.P., Yang G.J., Tong W.T., Gao W., Lv F.L., Gao D. Study on power line insulator defect detection via improved faster region-based convolutional neural network. 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT). 2019:262–266.</mixed-citation>
      </ref>
      <ref id="cit16">
        <label>16</label>
        <mixed-citation xml:lang="ru">Insulator Data Set – Chinese Power Line Insulator Dataset (CPLID). Available at: https://github.com/InsulatorData/InsulatorDataSet (accessed: 16.12.2021).</mixed-citation>
      </ref>
      <ref id="cit17">
        <label>17</label>
        <mixed-citation xml:lang="ru">Transmission Tower DataSet in VOC data format. Available at: https://drive.google.com/drive/folders/1UyP0fBNUqFeoW5nmPVGzyFG5IQZcqlc5 (accessed: 16.12.2021).</mixed-citation>
      </ref>
      <ref id="cit18">
        <label>18</label>
        <mixed-citation xml:lang="ru">Yetgin Ö.E., Gerek Ö.N. Powerline Image Dataset (Infrared-IR and Visible Light-VL). 2019. Available at: https://data.mendeley.com/datasets/n6wrv4ry6v/8 (accessed: 16.12.2021).</mixed-citation>
      </ref>
      <ref id="cit19">
        <label>19</label>
        <mixed-citation xml:lang="ru">Dataset for insulator fault detection. Available at: https://figshare.com/articles/dataset/66KVimage_zip/14992944 (accessed: 16.12.2021).</mixed-citation>
      </ref>
      <ref id="cit20">
        <label>20</label>
        <mixed-citation xml:lang="ru">STN PLAD: A Dataset for Multi-Size Power Line Assets Detection in High-Resolution UAV Images. Available at: https://github.com/andreluizbvs/PLAD (accessed: 16.12.2021).</mixed-citation>
      </ref>
      <ref id="cit21">
        <label>21</label>
        <mixed-citation xml:lang="ru">Lin T.Y., Dollár P., Girshick R., He K., Hariharan B., Belongie S. Feature pyramid networks for object detection. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017:2117–2125.</mixed-citation>
      </ref>
      <ref id="cit22">
        <label>22</label>
        <mixed-citation xml:lang="ru">Russakovsky O., et al. Imagenet large scale visual recognition challenge. International journal of computer vision. 2015;115(3):211–252.</mixed-citation>
      </ref>
      <ref id="cit23">
        <label>23</label>
        <mixed-citation xml:lang="ru">Redmon J., Farhadi A. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767. 2018.</mixed-citation>
      </ref>
      <ref id="cit24">
        <label>24</label>
        <mixed-citation xml:lang="ru">Redmon J., Farhadi A. YOLO9000: Better, faster, stronger. arXiv preprint. 2017.</mixed-citation>
      </ref>
      <ref id="cit25">
        <label>25</label>
        <mixed-citation xml:lang="ru">Redmon J., et al. You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:779–788.</mixed-citation>
      </ref>
      <ref id="cit26">
        <label>26</label>
        <mixed-citation xml:lang="ru">He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:770–778.</mixed-citation>
      </ref>
      <ref id="cit27">
        <label>27</label>
        <mixed-citation xml:lang="ru">Everingham M., Van Gool L., Williams C.K., Winn J., Zisserman A. The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision. 2010;88(2):303–338.</mixed-citation>
      </ref>
      <ref id="cit28">
        <label>28</label>
        <mixed-citation xml:lang="ru">Lin T.Y., et al. Microsoft COCO: Common objects in context. European conference on computer vision. Springer, Cham. 2014:740–755.</mixed-citation>
      </ref>
      <ref id="cit29">
        <label>29</label>
        <mixed-citation xml:lang="ru">Adelson E.H., Anderson C.H., Bergen J.R., Burt P.J., Ogden J.M. Pyramid methods in image processing. RCA engineer. 1984;29(6):33–41.</mixed-citation>
      </ref>
      <ref id="cit30">
        <label>30</label>
        <mixed-citation xml:lang="ru">Liu S., Qi L., Qin H., Shi J., Jia J. Path aggregation network for instance segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2018:8759–8768.</mixed-citation>
      </ref>
      <ref id="cit31">
        <label>31</label>
        <mixed-citation xml:lang="ru">Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017:4700–4708.</mixed-citation>
      </ref>
      <ref id="cit32">
        <label>32</label>
        <mixed-citation xml:lang="ru">Bochkovskiy A., Wang C.Y., Liao H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. 2020.</mixed-citation>
      </ref>
      <ref id="cit33">
        <label>33</label>
        <mixed-citation xml:lang="ru">Dietterich T.G. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation. 1998;10(7):1895–1923.</mixed-citation>
      </ref>
      <ref id="cit34">
        <label>34</label>
        <mixed-citation xml:lang="ru">Zhang K., Zhang Z., Li Z., Qiao Y. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters. 2016;23(10):1499–1503.</mixed-citation>
      </ref>
      <ref id="cit35">
        <label>35</label>
        <mixed-citation xml:lang="ru">Viola P., Jones M. Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001. 2001;1:I–I.</mixed-citation>
      </ref>
    </ref-list>
    <fn-group>
      <fn fn-type="conflict">
        <p>The authors declare that there are no conflicts of interest present.</p>
      </fn>
    </fn-group>
  </back>
</article>