<?xml version="1.0" encoding="UTF-8"?>
<article article-type="research-article" dtd-version="1.3" xml:lang="ru" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="https://metafora.rcsi.science/xsd_files/journal3.xsd">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">moitvivt</journal-id>
      <journal-title-group>
        <journal-title xml:lang="ru">Моделирование, оптимизация и информационные технологии</journal-title>
        <trans-title-group xml:lang="en">
          <trans-title>Modeling, Optimization and Information Technology</trans-title>
        </trans-title-group>
      </journal-title-group>
      <issn pub-type="epub">2310-6018</issn>
      <publisher>
        <publisher-name>Издательство</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.26102/2310-6018/2025.49.2.028</article-id>
      <article-id pub-id-type="custom" custom-type="elpub">1915</article-id>
      <title-group>
        <article-title xml:lang="ru">Искусственный интеллект в задаче генерации дистракторов для тестовых заданий</article-title>
        <trans-title-group xml:lang="en">
          <trans-title>Artificial intelligence in the task of generating distractors for test questions</trans-title>
        </trans-title-group>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name-alternatives>
            <name name-style="eastern" xml:lang="ru">
              <surname>Дагаев</surname>
              <given-names>Александр Евгеньевич</given-names>
            </name>
            <name name-style="western" xml:lang="en">
              <surname>Dagaev</surname>
              <given-names>Alexander</given-names>
            </name>
          </name-alternatives>
          <email>a.e.dagaev@mospolytech.ru</email>
          <xref ref-type="aff">aff-1</xref>
        </contrib>
      </contrib-group>
      <aff-alternatives id="aff-1">
        <aff xml:lang="ru">Московский политехнический университет</aff>
        <aff xml:lang="en">Moscow Polytechnic University</aff>
      </aff-alternatives>
      <pub-date pub-type="epub">
        <day>01</day>
        <month>01</month>
        <year>2026</year>
      </pub-date>
      <volume>1</volume>
      <issue>1</issue>
      <elocation-id>10.26102/2310-6018/2025.49.2.028</elocation-id>
      <permissions>
        <copyright-statement>Copyright © Авторы, 2026</copyright-statement>
        <copyright-year>2026</copyright-year>
        <license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/">
          <license-p>This work is licensed under a Creative Commons Attribution 4.0 International License</license-p>
        </license>
      </permissions>
      <self-uri xlink:href="https://moitvivt.ru/ru/journal/article?id=1915"/>
      <abstract xml:lang="ru">
        <p>Создание качественных дистракторов для тестовых заданий представляет собой трудоемкий процесс, имеющий большое значение для точной оценки знаний. Существующие подходы часто генерируют неправдоподобные варианты или варианты, не отражающие типичные ошибки учащихся. В данной статье предлагается алгоритм генерации дистракторов на основе искусственного интеллекта. Сначала алгоритм с помощью LLM строит правильную цепочку рассуждений для заданного вопроса и ответа, а затем внедряет в нее типичные ошибки для получения неверных, но при этом убедительных вариантов ответа, стремясь отразить распространенные заблуждения учащихся. Алгоритм был протестирован на вопросах из русскоязычных наборов данных RuOpenBookQA и RuWorldTree. Оценка проводилась как с использованием автоматических метрик, так и экспертами. Результаты показывают, что алгоритм превосходит базовые методы (прямых запросов и семантических изменений), генерируя дистракторы с более высоким уровнем правдоподобия, релевантности, разнообразия и сходства с эталонными дистракторами, созданными человеком. Данная работа вносит вклад в область автоматизированной генерации контрольно-измерительных материалов, предоставляя преподавателям, разработчикам образовательных платформ и исследователям в сфере обработки естественного языка инструмент, способствующий созданию более эффективных оценочных материалов.</p>
      </abstract>
      <trans-abstract xml:lang="en">
        <p>Creating high-quality distractors for test items is a labor-intensive task that plays a crucial role in the accurate assessment of knowledge. Existing approaches often produce implausible alternatives or fail to reflect typical student errors. This paper proposes an AI-based algorithm for distractor generation. It employs a large language model (LLM) to first construct a correct chain of reasoning for a given question and answer, and then introduces typical misconceptions to generate incorrect but plausible answer choices, aiming to capture common student misunderstandings. The algorithm was evaluated on questions from the Russian-language datasets RuOpenBookQA and RuWorldTree. Evaluation was conducted using both automatic metrics and expert assessment. The results show that the proposed algorithm outperforms baseline methods (such as direct prompting and semantic modification), generating distractors with higher levels of plausibility, relevance, diversity, and similarity to human-authored reference distractors. This work contributes to the field of automated assessment material generation, offering a tool that supports the development of more effective evaluation resources for educators, educational platform developers, and researchers in natural language processing.</p>
      </trans-abstract>
      <kwd-group xml:lang="ru">
        <kwd>генерация дистракторов</kwd>
        <kwd>искусственный интеллект</kwd>
        <kwd>большие языковые модели</kwd>
        <kwd>оценка знаний</kwd>
        <kwd>тестовые задания</kwd>
        <kwd>автоматическая генерация тестов</kwd>
        <kwd>NLP</kwd>
      </kwd-group>
      <kwd-group xml:lang="en">
        <kwd>distractor generation</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>large language models</kwd>
        <kwd>knowledge assessment</kwd>
        <kwd>test items</kwd>
        <kwd>automated test generation</kwd>
        <kwd>NLP</kwd>
      </kwd-group>
      <funding-group>
        <funding-statement xml:lang="ru">Исследование выполнено без спонсорской поддержки.</funding-statement>
        <funding-statement xml:lang="en">The study was performed without external funding.</funding-statement>
      </funding-group>
    </article-meta>
  </front>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="cit1">
        <label>1</label>
        <mixed-citation xml:lang="en">Awalurahman H.W., Budi I. Automatic Distractor Generation in Multiple-Choice Questions: A Systematic Literature Review. PeerJ Computer Science. 2024;10. https://doi.org/10.7717/peerj-cs.2441</mixed-citation>
      </ref>
      <ref id="cit2">
        <label>2</label>
        <mixed-citation xml:lang="en">Kumar A.P., Nayak A., K. M.Sh., Goyal Sh., Chaitanya. A Novel Approach to Generate Distractors for Multiple Choice Questions. Expert Systems with Applications. 2023;225. https://doi.org/10.1016/j.eswa.2023.120022</mixed-citation>
      </ref>
      <ref id="cit3">
        <label>3</label>
        <mixed-citation xml:lang="en">Bitew S.K., Hadifar A., Sterckx L., Deleu J., Develder Ch., Demeester Th. Learning to Reuse Distractors to Support Multiple-Choice Question Generation in Education. IEEE Transactions on Learning Technologies. 2022;17:375–390. https://doi.org/10.1109/tlt.2022.3226523</mixed-citation>
      </ref>
      <ref id="cit4">
        <label>4</label>
        <mixed-citation xml:lang="en">Artsi Ya., Sorin V., Konen E., Glicksberg B.S., Nadkarni G., Klang E. Large Language Models for Generating Medical Examinations: Systematic Review. BMC Medical Education. 2024;24. https://doi.org/10.1186/s12909-024-05239-y</mixed-citation>
      </ref>
      <ref id="cit5">
        <label>5</label>
        <mixed-citation xml:lang="en">Shi F., Chen X., Misra K., et al. Large Language Models Can Be Easily Distracted by Irrelevant Context. In: Proceedings of the 40th International Conference on Machine Learning, ICML 2023: Volume 202, 23–29 July 2023, Honolulu, Hawaii, USA. PMLR; 2023. P. 31210–31227.</mixed-citation>
      </ref>
      <ref id="cit6">
        <label>6</label>
        <mixed-citation xml:lang="en">Lee Yo., Kim S., Jo Yo. Generating Plausible Distractors for Multiple-Choice Questions via Student Choice Prediction. arXiv. URL: https://arxiv.org/abs/2501.13125 [Accessed 12th April 2025].</mixed-citation>
      </ref>
      <ref id="cit7">
        <label>7</label>
        <mixed-citation xml:lang="en">De-Fitero-Dominguez D., Garcia-Lopez E., Garcia-Cabot A., Del-Hoyo-Gabaldon J.-A., Moreno-Cediel A. Distractor Generation Through Text-to-Text Transformer Models. IEEE Access. 2024;12:25580–25589. https://doi.org/10.1109/access.2024.3361673</mixed-citation>
      </ref>
      <ref id="cit8">
        <label>8</label>
        <mixed-citation xml:lang="en">Zhang L., Zou B., Aw A.T. Empowering Tree-Structured Entailment Reasoning: Rhetorical Perception and LLM-driven Interpretability. In: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 20–25 May 2024, Torino, Italy. ELRA and ICCL; 2024. P. 5783–5793.</mixed-citation>
      </ref>
      <ref id="cit9">
        <label>9</label>
        <mixed-citation xml:lang="en">Feng W., Lee J., McNichols H., et al. Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models. In: Findings of the Association for Computational Linguistics: NAACL 2024, 16–21 June 2024, Mexico City, Mexico. Association for Computational Linguistics; 2024. P. 3067–3082. https://doi.org/10.18653/v1/2024.findings-naacl.193</mixed-citation>
      </ref>
      <ref id="cit10">
        <label>10</label>
        <mixed-citation xml:lang="en">Cai X., Wang Ch., Long Q., Zhou Yu., Xiao M. Knowledge Hierarchy Guided Biological-Medical Dataset Distillation for Domain LLM Training. arXiv. URL: https://arxiv.org/abs/2501.15108 [Accessed 12th April 2025].</mixed-citation>
      </ref>
      <ref id="cit11">
        <label>11</label>
        <mixed-citation xml:lang="en">Wang R., Jiang Yu., Tao Yu., Li M., Wang X., Ge Sh. High-Quality Distractors Generation for Human Exam Based on Reinforcement Learning from Preference Feedback. In: Natural Language Processing and Chinese Computing: 13th National CCF Conference, NLPCC 2024: Proceedings: Part IV, 01–03 November 2024, Hangzhou, China. Singapore: Springer; 2024. P. 94–106. https://doi.org/10.1007/978-981-97-9440-9_8</mixed-citation>
      </ref>
      <ref id="cit12">
        <label>12</label>
        <mixed-citation xml:lang="en">Maity S., Deroy A., Sarkar S. A Novel Multi-Stage Prompting Approach for Language Agnostic MCQ Generation Using GPT. In: Advances in Information Retrieval: 46th European Conference on Information Retrieval, ECIR 2024: Proceedings: Part III, 24–28 March 2024, Glasgow, UK. Cham: Springer; 2024. P. 268–277. https://doi.org/10.1007/978-3-031-56063-7_18</mixed-citation>
      </ref>
      <ref id="cit13">
        <label>13</label>
        <mixed-citation xml:lang="en">Shen Ch.-H., Kuo Yi-L., Fan Ya.-Ch. Personalized Cloze Test Generation with Large Language Models: Streamlining MCQ Development and Enhancing Adaptive Learning. In: Proceedings of the 17th International Natural Language Generation Conference, 23–27 September 2024, Tokyo, Japan. Association for Computational Linguistics; 2024. P. 314–319.</mixed-citation>
      </ref>
      <ref id="cit14">
        <label>14</label>
        <mixed-citation xml:lang="en">Wang H.-J., Hsieh K.-Yu, Yu H.-Ch., et al. Distractor Generation Based on Text2Text Language Models with Pseudo Kullback-Leibler Divergence Regulation. In: Findings of the Association for Computational Linguistics: ACL 2023, 09–14 July 2023, Toronto, Canada. Association for Computational Linguistics; 2023. P. 12477–12491. https://doi.org/10.18653/v1/2023.findings-acl.790</mixed-citation>
      </ref>
    </ref-list>
    <fn-group>
      <fn fn-type="conflict">
        <p>The author declares that there is no conflict of interest.</p>
      </fn>
    </fn-group>
  </back>
</article>