Király, Sándor and Troll, Ede (2025) Algorithmic thinking at risk? Exploring LLM use in computer science education. ANNALES MATHEMATICAE ET INFORMATICAE, 61. pp. 141-155. ISSN 1787-6117
Text: 141_155_király.pdf (Published Version, 1MB)
Abstract
The rapid rise of AI – especially Large Language Models (LLMs) such as GPT-4, Microsoft Copilot, and Google Gemini – has significantly impacted higher education. LLMs support students in problem-solving, writing, and learning complex topics, while educators use them for course planning, lecture content, and assessments. The primary aim of this research was to explore whether university computer science students use LLMs to support their learning, and if so, how and why. The study was conducted among students enrolled in a three-year BSc in Computer Science program at Eszterházy Károly Catholic University and combined questionnaires with semi-structured interviews involving nine students and three instructors. Students reported using AI chatbots for tasks such as code testing, debugging, understanding examples, generating code, designing exercises, and self-assessment. LLM usage increased with subject complexity and varied with programming skill. While students were moderately satisfied with LLMs, instructors voiced concerns that overreliance could undermine algorithmic thinking and coding skills. The findings suggest a need to revise assessment methods and enhance teaching materials to better reflect current educational practices.
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | computer science education, LLMs, AI Chatbots |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Depositing User: | Tibor Gál |
| Date Deposited: | 11 Nov 2025 10:32 |
| Last Modified: | 11 Nov 2025 10:32 |
| URI: | https://real.mtak.hu/id/eprint/228841 |