# Exploring Distributional Shifts in Large Language Models for Code Analysis

Authors: Shushan Arakelyan, Rocktim Das, Yi Mao, Xiang Ren

Abstract:

We systematically study how three large language models with code capabilities - CodeT5, Codex, and ChatGPT - generalize to out-of-domain data. We consider two fundamental applications: code summarization and code generation. We split data into domains along its natural boundaries - by organization, by project, and by module within a software project. We establish that samples from each new domain present all models with a significant distribution-shift challenge. We then study how established methods help models generalize to new domains. Our experiments show that while multitask learning alone is a reasonable baseline, combining it with few-shot finetuning on examples retrieved from the training data achieves very strong performance. Moreover, this combination can outperform direct finetuning in very low-data scenarios. Finally, we consider variations of this approach to create a more broadly applicable method for adapting to multiple domains at once. We find that for code generation, a model adapted to multiple domains simultaneously performs on par with models adapted to a single domain.
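The retrieval step underlying the few-shot finetuning described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the similarity metric (token-overlap Jaccard), the function names, and the example data are all assumptions made for clarity.

```python
# Hypothetical sketch of retrieval-based few-shot adaptation:
# for an out-of-domain query, fetch the k most similar training
# examples (here scored by token-overlap Jaccard similarity),
# which would then serve as the few-shot finetuning set.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two code snippets (assumed metric)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_few_shot(query: str, train_pool: list[str], k: int = 2) -> list[str]:
    """Return the k training snippets most similar to the query."""
    return sorted(train_pool, key=lambda ex: jaccard(query, ex), reverse=True)[:k]

# Toy training pool (illustrative data only)
train_pool = [
    "def add(a, b): return a + b",
    "def sub(a, b): return a - b",
    "print('hello world')",
]
shots = retrieve_few_shot("def mul(a, b): return a * b", train_pool, k=2)
print(shots)
```

In practice a learned embedding model would likely replace the lexical similarity used here, but the control flow - score the pool, take the top-k, finetune on them - stays the same.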

Link: Read Paper

Labels: general coding task, code model, code model training, source code model, empirical study