
Application of YOLO family machine learning models for the task of analyzing tea raw materials by photograph

Popov V.A., Zubkov A.V.

UDC 004.032.26
DOI: 10.26102/2310-6018/2025.49.2.042

Abstract

This article presents a concept for analyzing tea raw materials using the YOLO family of models, together with a comparative analysis of two YOLOv8 versions: Nano and Small. The study describes the metrics used to compare the models' performance. An experimental comparison was conducted on real images of tea raw materials. For this purpose, a training dataset was collected containing images of tea samples classified by fermentation type: green tea, red tea, white tea, yellow tea, oolong, shou puerh, and sheng puerh. To enlarge the training set, augmentation methods such as image rotation, sharpening, perspective distortion, and blurring were applied. Based on the experimental results, it is concluded that the choice between the two models depends on the task at hand and the available computational resources. YOLOv8s (Small) outperforms YOLOv8n (Nano) in accuracy but requires more time to produce results. YOLOv8n, in turn, processes data faster and can be used effectively under limited computing power, making it particularly suitable for handling large volumes of data.
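The following is a minimal sketch of the experimental setup described in the abstract, not the authors' published code. It assumes the Ultralytics YOLOv8 package and the Albumentations library; the dataset path ("tea_dataset"), folder layout, and hyperparameters are illustrative assumptions, and since the abstract does not state whether detection or classification heads were used, the sketch assumes the classification variants (yolov8n-cls / yolov8s-cls).

```python
# Sketch only: reconstructs the pipeline outlined in the abstract under the
# assumptions stated above. Requires: pip install ultralytics albumentations opencv-python
import albumentations as A
import cv2
from ultralytics import YOLO

# Offline augmentation pipeline mirroring the methods named in the abstract:
# rotation, sharpening, perspective distortion, and blurring.
augment = A.Compose([
    A.Rotate(limit=30, p=0.5),
    A.Sharpen(p=0.3),
    A.Perspective(scale=(0.05, 0.1), p=0.3),
    A.Blur(blur_limit=5, p=0.3),
])

def augment_image(path: str):
    """Read one tea-sample photo and return an augmented copy."""
    image = cv2.imread(path)
    return augment(image=image)["image"]

# Compare the two model sizes from the study. Assumed ImageNet-style layout:
# tea_dataset/{train,val}/<class_name>/*.jpg, with the seven fermentation-type
# classes (green, red, white, yellow, oolong, shou puerh, sheng puerh).
for weights in ("yolov8n-cls.pt", "yolov8s-cls.pt"):  # Nano vs. Small
    model = YOLO(weights)
    model.train(data="tea_dataset", epochs=100, imgsz=224)
    metrics = model.val()  # top-1/top-5 accuracy on the validation split
    print(weights, "top-1 accuracy:", metrics.top1)
```

Under these assumptions, the speed side of the trade-off noted in the abstract can be reproduced by timing model.predict on a held-out set of images for each of the two trained models.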

List of references

1. Preedy V.R. Tea in Health and Disease Prevention. Academic Press; 2012. 1612 p.

2. Zhen Yo.-S. Tea: Bioactivity and Therapeutic Potential. London: CRC Press; 2002. 280 p. https://doi.org/10.1201/b12659

3. Ning J., Sun J., Li Sh., Sheng M., Zhang Zh. Classification of Five Chinese Tea Categories with Different Fermentation Degrees Using Visible and Near-Infrared Hyperspectral Imaging. International Journal of Food Properties. 2017;20(2):1515–1522. https://doi.org/10.1080/10942912.2016.1233115

4. Bakhshipour A., Zareiforoush H., Bagheri I. Application of Decision Trees and Fuzzy Inference System for Quality Classification and Modeling of Black and Green Tea Based on Visual Features. Journal of Food Measurement and Characterization. 2020;14(3):1402–1416. https://doi.org/10.1007/s11694-020-00390-8

5. Mukhopadhyay S., Paul M., Pal R., De D. Tea Leaf Disease Detection Using Multi-Objective Image Segmentation. Multimedia Tools and Applications. 2021;80(1):753–771. https://doi.org/10.1007/s11042-020-09567-1

6. Chen Yi. Identification of Tea Leaf Based on Histogram Equalization, Gray-Level Co-Occurrence Matrix and Support Vector Machine Algorithm. In: Multimedia Technology and Enhanced Learning: Second EAI International Conference, ICMTEL 2020: Proceedings: Part I, 10–11 April 2020, Leicester, UK. Cham: Springer; 2020. P. 3–16. https://doi.org/10.1007/978-3-030-51100-5_1

7. Takahashi K., Sugimoto I. Remarks on Tea Leaves Aroma Recognition Using Deep Neural Network. In: Engineering Applications of Neural Networks: 18th International Conference, EANN 2017: Proceedings, 25–27 August 2017, Athens, Greece. Cham: Springer; 2017. P. 160–167. https://doi.org/10.1007/978-3-319-65172-9_14

8. Lakshmanan V., Görner M., Gillard R. Practical Machine Learning for Computer Vision. Sebastopol: O’Reilly Media, Inc.; 2021. 482 p.

9. Burns D.A., Ciurczak E.W. Handbook of Near-Infrared Analysis. Boca Raton: CRC Press; 2007. 834 p. https://doi.org/10.1201/9781420007374

10. Shanmugamani R. Deep Learning for Computer Vision: Expert Techniques to Train Advanced Neural Networks Using TensorFlow and Keras. Packt Publishing Ltd; 2018. 310 p.

11. Tereshchuk M.V., Zubkov A.V., Orlova Yu.A., Molchanov D.R., Litvinenko V.A., Cherkashin D.R. Development of Models for Classifying the Movements of an Anthropomorphic Body From a Video Stream. Herald of Dagestan State Technical University. Technical Sciences. 2024;51(2):154–163. (In Russ.). https://doi.org/10.21822/2073-6185-2024-51-2-154-163

12. Ulyev A.D., Donsckaia A.R., Zubkov A.V. Automated Recognition and Control of Human Interaction by Video Image. Proceedings of the Southwest State University. Series: IT Management, Computer Science, Computer Engineering. Medical Equipment Engineering. 2023;13(2):45–64. (In Russ.). https://doi.org/10.21869/2223-1536-2023-13-2-45-64

13. Nikitin D.V., Taranenko I.S., Kataev A.V. Road Sign Detection Based on the YOLO Neural Network Model. Engineering Journal of Don. 2023;(7). (In Russ.). URL: http://www.ivdon.ru/en/magazine/archive/n7y2023/8531

14. Orlova Yu., Gorobtsov A., Sychev O., Rozaliev V., Zubkov A., Donsckaia A. Method for Determining the Dominant Type of Human Breathing Using Motion Capture and Machine Learning. Algorithms. 2023;16(5). https://doi.org/10.3390/a16050249

15. Majorova E.S., Zaripova R.S. Development of Image Style Transfer Algorithm Using Pre-Trained Neural Network. Engineering Journal of Don. 2024;(2). (In Russ.). URL: http://www.ivdon.ru/en/magazine/archive/n2y2024/8997

About authors

Popov Vladislav Alekseevich

Volgograd State Technical University

Volgograd, the Russian Federation

Zubkov Alexander Vladimirovich
Candidate of Engineering Sciences, Associate Professor

Volgograd State Technical University
Volgograd State Medical University

Volgograd, the Russian Federation

Keywords: image analysis, machine learning, computer vision, tea raw material, convolutional neural networks

For citation: Popov V.A., Zubkov A.V. Application of YOLO family machine learning models for the task of analyzing tea raw materials by photograph. Modeling, Optimization and Information Technology. 2025;13(2). URL: https://moitvivt.ru/ru/journal/pdf?id=1938 DOI: 10.26102/2310-6018/2025.49.2.042 (In Russ.).



Received 30.04.2025

Revised 01.06.2025

Accepted 09.06.2025