# PL Design for LLMs

- **Prompting Is Programming: A Query Language for Large Language Models** (PLDI 2023)
  - Abstract: Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external to...
  - Labels: PL design for LLMs
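The core idea of this line of work — treating a prompt as a query program that interleaves model calls with control flow and output constraints — can be sketched in plain Python. This is a minimal illustration, not the paper's actual language: the `toy_model` stub and its canned completions are invented stand-ins for a real LLM.

```python
# Sketch of "prompting as programming": a query template whose hole is
# filled by a (stubbed) language model subject to an output constraint.
# toy_model and its completions are invented for illustration.

def toy_model(prompt):
    """Stand-in for an LLM: deterministically 'completes' the prompt."""
    completions = {
        "Q: What is 2+2? A:": "4",
        "Q: Name a primary color. A:": "blue",
    }
    return completions.get(prompt, "unknown")

def query(template, constraint):
    """Fill the template with a model completion, enforcing a constraint."""
    out = toy_model(template)
    if not constraint(out):
        raise ValueError(f"constraint violated: {out!r}")
    return out

answer = query("Q: What is 2+2? A:", constraint=str.isdigit)
```

A real query language additionally pushes such constraints into decoding itself, rather than checking them after generation.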
- **Relational Programming with Foundational Models** (AAAI 2024)
  - Abstract: Foundation models have vast potential to enable diverse AI applications. The powerful yet incomplete nature of these models has spurred a wide range of mechanisms to augment them with capabilities such as in-context learning, information retrieval, and code interpreting. We propose VIEIRA, a declarative framework that unifies these mechanisms in a general solution for programming with foundation models. VIEIRA follows a probabilistic relational paradigm and treats foundation models as stateless ...
  - Labels: PL design for LLMs
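VIEIRA's move of exposing a foundation model as a stateless relation — a set of input/output tuples that can be joined with ordinary data — can be sketched as follows. The `sentiment` stub and its keyword rule are invented placeholders for a real model-backed relation.

```python
# Sketch of relational programming over a foundation model: the model is
# exposed as a stateless relation mapping inputs to output tuples, which
# can then be joined with ordinary database relations. Names are invented.

def sentiment(text):
    """Stub 'foundation model' relation: yields (text, label) tuples."""
    label = "pos" if "good" in text else "neg"
    yield (text, label)

reviews = [(1, "good product"), (2, "bad service")]

# Relational join: pair each review id with its model-derived label.
labeled = [(rid, label)
           for rid, text in reviews
           for _, label in sentiment(text)]
```

Because the relation is stateless, the same declarative query works whether the tuples come from a database or from a model call.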
- **Scallop: A Language for Neurosymbolic Programming** (PLDI 2023)
  - Abstract: We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framewor...
  - Labels: PL design for LLMs
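The Datalog-with-probabilities flavor described in the abstract can be sketched in Python: recursive rules evaluated to a fixpoint over probabilistic facts. This is a toy illustration, not Scallop itself; combining probabilities by product along a path and max across paths is just one simple provenance choice among the several such frameworks support.

```python
# Sketch of a neurosymbolic Datalog idea: 'path' is the recursive
# transitive closure of probabilistic 'edge' facts. Probabilities are
# multiplied along a path and max-ed across alternative paths (one
# simple provenance choice, assumed here for illustration).

edge = {("a", "b"): 0.9, ("b", "c"): 0.8}

def transitive_closure(edge):
    path = dict(edge)  # base rule: path(x, y) :- edge(x, y)
    changed = True
    while changed:  # naive fixpoint iteration, as in Datalog evaluation
        changed = False
        # recursive rule: path(x, z) :- path(x, y), edge(y, z)
        for (x, y), p in list(path.items()):
            for (y2, z), q in edge.items():
                if y == y2 and p * q > path.get((x, z), 0.0):
                    path[(x, z)] = p * q
                    changed = True
    return path

paths = transitive_closure(edge)
```

In a neurosymbolic setting, the edge probabilities would come from a neural network, and a differentiable provenance would let gradients flow back through this reasoning.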