dc.contributor.author: Hort, Max
dc.contributor.author: Grishina, Anastasiia
dc.contributor.author: Moonen, Leon
dc.date.accessioned: 2024-04-18T12:40:13Z
dc.date.available: 2024-04-18T12:40:13Z
dc.date.created: 2024-03-18T15:10:02Z
dc.date.issued: 2023
dc.identifier.isbn: 978-1-6654-5223-6
dc.identifier.uri: https://hdl.handle.net/11250/3127299
dc.description.abstract: Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair. Large amounts of data for training such models benefit the models' performance. However, the size of the data and models results in long training times and high energy consumption. While publishing source code allows for replicability, users need to repeat the expensive training process if models are not shared. Goals: The main goal of the study is to investigate if publications that trained language models for software engineering (SE) tasks share source code and trained artifacts. The second goal is to analyze the transparency on training energy usage. Methods: We perform a snowballing-based literature search to find publications on language models for source code, and analyze their reusability from a sustainability standpoint. Results: From a total of 494 unique publications, we identified 293 relevant publications that use language models to address code-related tasks. Among them, 27% (79 out of 293) make artifacts available for reuse. This can be in the form of tools or IDE plugins designed for specific tasks, or task-agnostic models that can be fine-tuned for a variety of downstream tasks. Moreover, we collect insights on the hardware used for model training, as well as training time, which together determine the energy consumption of the development process. Conclusion: We find that there are deficiencies in the sharing of information and artifacts in current studies on source code models for software engineering tasks, with 40% of the surveyed papers not sharing source code or trained artifacts. We recommend sharing source code as well as trained artifacts to enable sustainable reproducibility. Moreover, comprehensive information on training times and hardware configurations should be shared for transparency on a model's carbon footprint. Index Terms: sustainability, reuse, replication, energy, DL4SE. [en_US]
dc.language.iso: eng [en_US]
dc.relation.ispartof: Proceedings of the 2023 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code [en_US]
dc.title.alternative: An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code [en_US]
dc.type: Chapter [en_US]
dc.description.version: publishedVersion [en_US]
dc.identifier.cristin: 2255482
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1
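
The abstract notes that hardware configuration and training time together determine the energy consumption of model training. As a rough illustration of why the paper recommends reporting both, here is a minimal Python sketch of the standard estimate (energy = device power x time x PUE; footprint = energy x grid carbon intensity); all device counts, power draws, and intensity values below are hypothetical placeholders, not figures from the study.

    # Minimal sketch: estimating training energy and carbon footprint from
    # hardware and training time. All numbers are hypothetical placeholders.

    def training_energy_kwh(num_gpus: int, gpu_power_watts: float,
                            training_hours: float, pue: float = 1.5) -> float:
        """Energy (kWh) = devices x power (kW) x hours, scaled by the
        data center's Power Usage Effectiveness (PUE)."""
        return num_gpus * (gpu_power_watts / 1000.0) * training_hours * pue

    def carbon_kg(energy_kwh: float, kg_co2e_per_kwh: float = 0.4) -> float:
        """Footprint (kg CO2e) = energy (kWh) x grid carbon intensity."""
        return energy_kwh * kg_co2e_per_kwh

    if __name__ == "__main__":
        # Hypothetical run: 8 GPUs drawing 300 W each for 120 hours.
        energy = training_energy_kwh(num_gpus=8, gpu_power_watts=300.0,
                                     training_hours=120.0)
        print(f"estimated energy:    {energy:.0f} kWh")
        print(f"estimated footprint: {carbon_kg(energy):.0f} kg CO2e")

With only these two reported quantities, hardware and training time, a reader can approximate a model's footprint without repeating the expensive training process, which is the transparency the paper calls for.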



Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0).