AUTOMATIC SOURCE CODE GENERATION THROUGH PRE-TRAINED LANGUAGE MODELS
CHATGPT: EVALUATION AND APPLICATION
Keywords:
Code Generation, Transformers, Pretrained Models, ChatGPT
Abstract
This study explores the capabilities of the ChatGPT artificial intelligence language model, built on the GPT-3 architecture developed by OpenAI, in the task of generating JavaScript source code from instructions written in Spanish. Transformer language models, as exponents of deep learning, are effective at learning contextual representations of words and phrases. This allows the model to understand not only individual programming terms but also how they combine into larger structures such as loops and functions. Through a set of programming feature requests drawn from a collection of cases prepared for this work, we examine the model's ability to translate these high-level specifications into executable, functional source code. The model's output was run through a compiler and checked against unit test cases prepared in advance, providing an objective assessment of the functionality of the generated code. The model achieves 100% compilable code and successfully solves 90% of the problems. This work pursues exploration at the intersection of AI and programming, opening the way for effective automation of code development from sentences in the Spanish language. It is expected to contribute to the growing body of literature on code generation and natural language understanding in AI language models.
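For illustration, the following is a minimal sketch of the kind of task and unit-test check the abstract describes; the Spanish instruction, the function name sumaPares, and the test values are hypothetical examples, not taken from the paper's case set.

// Hypothetical Spanish instruction given to the model:
// "Escribir una función que reciba un arreglo de números y devuelva la suma de los pares."
// ("Write a function that takes an array of numbers and returns the sum of the even ones.")

// Example of the kind of JavaScript the model is expected to generate.
function sumaPares(numeros) {
  return numeros
    .filter((n) => n % 2 === 0)
    .reduce((acumulador, n) => acumulador + n, 0);
}

// Pre-prepared unit-test-style check: the generated code counts as a successful
// resolution only if it compiles and returns the expected values for known inputs.
const assert = require("assert");
assert.strictEqual(sumaPares([1, 2, 3, 4]), 6);
assert.strictEqual(sumaPares([]), 0);
console.log("All test cases passed");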
License
Copyright (c) 2024 Adrián Bender, Santiago Nicolet, Pablo Folino, Juan José Lópe
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.