
ICLR2021

Number of papers: 1

1. GraphCodeBERT: Pre-training Code Representations with Data Flow

  • Authors: Guo, Daya and Ren, Shuo and Lu, Shuai and Feng, Zhangyin and Tang, Duyu and Liu, Shujie and Zhou, Long and Duan, Nan and Svyatkovskiy, Alexey and Fu, Shengyu and others
  • Abstract: Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, code summarization, etc. However, existing pre-trained models regard a code snippet as a sequence of tokens, while ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inh...
  • Link: Read Paper
  • Labels: general coding task, code model, code model training, source code model
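For reference, below is a minimal sketch of encoding a code snippet with the publicly released GraphCodeBERT checkpoint via Hugging Face Transformers. The checkpoint name `microsoft/graphcodebert-base` and the plain token-only usage are assumptions made here for illustration; the data-flow-aware inputs described in the paper require additional preprocessing not shown.

```python
# Sketch: encode a code snippet with GraphCodeBERT and pool a single vector,
# e.g. for code search by cosine similarity. Token-only usage; the paper's
# data-flow inputs need extra preprocessing beyond this example.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
model = AutoModel.from_pretrained("microsoft/graphcodebert-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one snippet-level vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```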