
EXPERIMENTAL GENERATION OF EDUCATIONAL TASKS IN NATURAL SCIENCE DISCIPLINES USING ARTIFICIAL INTELLIGENCE

Pedagogical Education. UDC: 37.01:004. DOI: 10.25688/2076-9121.2023.17.4.02

Authors

  • Patarakin, Yevgeny D., Doctor of Education Sciences, Associate Professor
  • Burov, Vasiliy V.
  • Soshnikov, Dmitry V., PhD in Physics and Mathematics

Abstract

This study investigates the suitability of modern generative models for the automatic generation of educational task texts. In the first part of the study, we conducted a bibliometric mapping of the research field related to automatic question generation, using three databases: Lens, Dimensions, and the ACM Digital Library. In the second part, we compared the ability of three generative systems (ChatGPT-3.5, YaGPT, GigaChat) to formulate various types of assignments from a given text fragment: multiple-choice questions, open-ended questions, and essay topics. The source material was a fragment of a fifth-grade biology textbook describing the difference between living and non-living things. The evaluation covered the models’ ability to generate diverse question variants, their ability to record these questions in JSON format for integration into digital platforms, and the correctness of the questions in terms of grammar, relevance, and pedagogical appropriateness.
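As an illustration of the JSON-based exchange the abstract refers to, the following Python sketch shows how one generated multiple-choice item might be parsed and minimally validated before being loaded into a digital platform. The field names (question, options, correct_option) are assumptions for the example, not the schema actually used in the study.

import json

# Hypothetical JSON record for one generated multiple-choice item;
# field names are illustrative, not the study's actual export format.
sample_model_output = """
{
  "question": "Which of the following is a characteristic of living things?",
  "options": ["Growth and development", "Constant shape",
              "Inability to reproduce", "Absence of metabolism"],
  "correct_option": 0
}
"""

def parse_mcq(raw: str) -> dict:
    """Parse a JSON-encoded multiple-choice question and check its basic structure."""
    item = json.loads(raw)
    required = {"question", "options", "correct_option"}
    missing = required - item.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    # The correct answer must point at one of the listed options.
    if not 0 <= item["correct_option"] < len(item["options"]):
        raise ValueError("correct_option index out of range")
    return item

if __name__ == "__main__":
    mcq = parse_mcq(sample_model_output)
    print(mcq["question"])

A check of this kind matters in practice because, as the study notes, the compared systems differ in how reliably they emit well-formed JSON; malformed output would be rejected at the json.loads or validation step rather than reaching the platform.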

How to cite

Patarakin, Y. D., Burov, V. V., & Soshnikov, D. V. (2023). Experimental generation of educational tasks in natural science disciplines using artificial intelligence. Bulletin of the Moscow City Pedagogical University. Series "Pedagogy and Psychology", 17(4), 28. https://doi.org/10.25688/2076-9121.2023.17.4.02