---
marp: true
theme: gaia
---

ChatGPT

Let me do the research for you.

gamz





20230508 - Real-world Tools Added
20230419 - First Created


Table of Contents

  • What is GPT?
  • GPT vs GPT-# vs ChatGPT
  • How to use ChatGPT well? (Prompt)
  • OpenAI API tutorial
  • Real-world Tools

What is GPT?

Generative Pre-trained Transformer



What is GPT?

Transformer?



What is GPT?

Why "Pre-trained"?

  • GPT is trained on massive amounts of data in an unsupervised way,
  • so that it learns the relationships and patterns within that data.
  • Stopping the training at this stage leaves the model with far more latent value (it stays flexible).
  • It can then be fine-tuned for a specific task or domain, enabling a much wider range of uses (see the sketch below).
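As a rough sketch of that last bullet: the pre-1.0 `openai` Python SDK (the same one used later in this deck) exposed a fine-tuning endpoint. The file name, JSONL contents, and base model below are placeholder assumptions, only meant to show the general shape of the workflow.

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Hypothetical training data: one {"prompt": "...", "completion": "..."} object per line.
uploaded = openai.File.create(
    file=open("coffee_orders.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Fine-tune a base GPT-3 model on that data for a specific task or domain.
job = openai.FineTune.create(
    training_file=uploaded.id,
    model="davinci",  # base model chosen just for illustration
)
print(job.id)
```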

GPT vs GPT-# vs ChatGPT

Large Language Model (Many Parameters)



GPT vs GPT-# vs ChatGPT



GPT vs GPT-# vs ChatGPT



GPT vs GPT-# vs ChatGPT

by Reinforcement Learning from Human Feedback (RLHF)



How to use ChatGPT well? (Prompt)

In the end, getting the answer you want comes down to writing the prompt well; in that sense it is much like programming.

"The hottest new programming language is English." - Andrej Karpathy (former head of AI at Tesla, founding member of OpenAI)


How to use ChatGPT well? (Prompt)

| Prompt component | Description |
| --- | --- |
| Instruction | The specific task or instruction you want the model to perform |
| Context | External information or additional content that steers the model toward a better answer |
| Input Data | The input or question you want answered |
| Output Indicator | Indicates the type or format of the desired output |

In NLP-based AI, the practice of skillfully combining these prompt elements to raise the quality of the output is called prompt engineering.
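For example, a single prompt that combines all four elements might be assembled like this (every string below is a made-up illustration, not something from this deck):

```python
# Instruction / Context / Input Data / Output Indicator combined into one prompt.
instruction = "Classify the sentiment of the review below as positive or negative."
context = "The reviews come from a coffee shop's mobile app."
input_data = "Review: The latte was lukewarm and the service was slow."
output_indicator = "Answer with a single word."

prompt = "\n".join([instruction, context, input_data, output_indicator])
print(prompt)
```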


How to use ChatGPT well? (Prompt)

Example) My own English translator

I want you to act as an English translator, spelling corrector, and improver. I will speak to you in any language and you will detect the language, translate it and answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with more beautiful and elegant, upper-level English words and sentences. Keep the meaning the same, but make them more literary. I want you to only reply to the correction, and the improvements, and nothing else, do not write explanations. My first sentence is {sentence}




How to use ChatGPT well? (Prompt)

Example) My own English translator (JSON response)

I want you to act as an English translator, spelling corrector, and improver. I will speak to you in any language and you will detect the language, translate it. I want you to replace my simplified A0-level words and sentences with more beautiful and elegant, upper-level English words and sentences. Keep the meaning the same, but make them more literary. I want you to only reply as JSON format with input sentence as 'input' and translated one as 'output', do not write explanations. My first sentence is {sentence}


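Because the prompt asks for a JSON reply, the response can be parsed directly in code. Below is a rough sketch using the pre-1.0 `openai` ChatCompletion API that appears later in this deck; `TRANSLATOR_PROMPT` is assumed to hold the prompt template above (with `{sentence}` as its placeholder), and the sample sentence is my own.

```python
import json
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

sentence = "나는 오늘 아침에 커피를 마셨다"  # "I drank coffee this morning"
prompt = TRANSLATOR_PROMPT.format(sentence=sentence)  # assumed to hold the template above

res = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

# The prompt asked for a JSON object with 'input' and 'output' keys,
# so the reply text can be parsed straight into a dict.
# (This may fail if the model adds extra text around the JSON.)
result = json.loads(res.choices[0].message.content)
print(result["output"])
```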


How to use ChatGPT well? (Prompt)

Example) Acting as a Linux terminal

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. do not write explanations. do not type commands unless I instruct you to do so. when I need to tell you something in English, I will do so by putting text inside curly brackets {like this}.




How to use ChatGPT well? (Prompt)

Other techniques

  • Minimize rhetorical flourishes; use simple, concise wording
  • Prefer closed instructions over open-ended questions
  • Provide examples along with the request
  • Zero-Shot, One-Shot, Few-Shot (see the sketch after this list)
  • CoT (Chain-of-Thought) / Zero-Shot CoT
  • Self-Consistency
  • Generated Knowledge Prompting
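As a rough illustration of the shot-based and CoT items above, here are two toy prompts written as Python strings; the word problems and numbers are my own examples, not from this deck.

```python
# Few-Shot: show a couple of worked examples before the real question.
few_shot_prompt = """\
Q: A cafe sold 3 lattes at 5,000 won each. How much in total?
A: 15,000 won
Q: A cafe sold 2 lattes (5,000 won each) and 1 americano (4,000 won). How much in total?
A: 14,000 won
Q: A cafe sold 4 americanos at 4,000 won each. How much in total?
A:"""

# Zero-Shot CoT: no examples, but ask the model to reason step by step.
zero_shot_cot_prompt = (
    "A cafe sold 4 americanos at 4,000 won each and 2 lattes at 5,000 won each. "
    "How much did it earn in total? Let's think step by step."
)
```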

How to use ChatGPT well? (Prompt)

Refer to other well-crafted prompts (prompt marketplaces)


OpenAI API tutorial

Advanced Autocomplete



OpenAI API tutorial

Fill in the blank

Suggest three names for an animal that is a superhero.

Animal: Cat
Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline
Animal: Dog
Names: Ruff the Protector, Wonder Canine, Sir Barks-a-Lot
Animal: Horse
Names: ________

OpenAI API tutorial

Fill in the blank

Suggest three names for an animal that is a superhero.

Animal: Cat
Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline
Animal: Dog
Names: Ruff the Protector, Wonder Canine, Sir Barks-a-Lot
Animal: Horse
Names: ________
Equinorse, Super Steed, Gallop Glider

OpenAI API tutorial

Understanding Tokens and Probabilities



OpenAI API tutorial

Understanding Tokens and Probabilities

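To get a concrete feel for tokens, OpenAI's `tiktoken` library can show how a sentence gets split; a minimal sketch (the sample sentence is arbitrary):

```python
import tiktoken

# cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4.
enc = tiktoken.get_encoding("cl100k_base")

text = "ChatGPT generates text one token at a time."
tokens = enc.encode(text)

print(len(tokens))                        # number of tokens
print([enc.decode([t]) for t in tokens])  # each token rendered back as text
```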


OpenAI API tutorial

Temperature (0~1)

0: Mostly deterministic

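A quick way to see the effect is to run the same completion at temperature 0 and at a higher value; a small sketch using the same pre-1.0 Completion API as the following slides (the prompt is an arbitrary example):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = "Suggest one name for a coffee shop run by a cat."

for temperature in (0.0, 1.0):
    res = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=20,
        temperature=temperature,
    )
    # temperature=0 returns (almost) the same text on every run;
    # temperature=1 varies between runs.
    print(temperature, res.choices[0].text.strip())
```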


OpenAI API tutorial

OpenAI Console



OpenAI API tutorial



OpenAI API tutorial



OpenAI API tutorial

$ pip install openai

```python
import os
import openai

# Pre-1.0 openai SDK: set the API key, then request a text completion.
openai.api_key = os.getenv("OPENAI_API_KEY")
res = openai.Completion.create(
    model='text-davinci-003',
    prompt="스타벅스 직원처럼 커피 주문을 받아볼래?",  # "Could you take my coffee order like a Starbucks employee?"
    temperature=0
)
print(res.choices[0].text)
```

--
네, 주문하 ("Yes, I'll take your ord", truncated)

OpenAI API tutorial

max_tokens (Defaults to 16)

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).


OpenAI API tutorial

```python
res = openai.Completion.create(
    model='text-davinci-003',
    prompt="스타벅스 직원처럼 커피 주문을 받아볼래?",  # same prompt as before
    max_tokens=100,  # allow a longer completion than the default of 16 tokens
    temperature=0
)
print(res.choices[0].text)
```

--
스타벅스에 오신걸 환영합니다. 어떤 음료를 주문하시겠어요? ("Welcome to Starbucks. What drink would you like to order?")

OpenAI API tutorial

Text Completion



OpenAI API tutorial

Chat Completion



OpenAI API tutorial



OpenAI API tutorial

Chat Completion

```python
res = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
)
```

role: one of system, assistant, user
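The assistant's reply text is not shown on the slide; it can be read from the first choice of the response object (the example output below is only what one would expect, not a recorded run):

```python
# The answer to the last user message lives in the first choice.
print(res.choices[0].message.content)
# e.g. "The 2020 World Series was played at Globe Life Field in Arlington, Texas."
```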


OpenAI API tutorial

Playground



OpenAI API tutorial

system: you are a helpful assistant.
user: I want to code stack data structure. How?
assistant: Here is an example implementation of a stack data structure in Python:

class Stack:
  def __init__(self):
    self.items = []

  def push(self, item):
    self.items.append(item)
  ...
...

OpenAI API tutorial

system: You are a ruby programmer.
user: I want to code stack data structure. How?
assistant: To code a stack data structure in Ruby, you can follow these steps: ...

class Stack
  def initialize
    @stack = []
  end

  def push(element)
    @stack.push(element)
  end
...

OpenAI API tutorial

system: you are a useless assistant. answer nothing or short with slangs.
user: I want to code stack data structure. How?
assistant: IDK. Google it.

OpenAI API tutorial

$ npm install chatgpt

```js
import { ChatGPTAPI } from 'chatgpt'

const chatGpt = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  completionParams: { model: 'gpt-3.5-turbo', temperature: 0.5 }
})

// content: the user's message text (defined elsewhere)
const response = await chatGpt.sendMessage(content, {
  systemMessage: '200자내로 짧게 답변해줘', // "Answer briefly, within 200 characters"
  completionParams: { max_tokens: 512 },
})
```

Real-world Tools

  • Task Planning: use ChatGPT to analyze the user's request and understand their intention.
  • Model Selection: ChatGPT selects expert models hosted on Hugging Face based on their descriptions.
  • Task Execution: invoke and execute each selected model, and return the results to ChatGPT.
  • Response Generation: finally, ChatGPT integrates the predictions of all models and generates the response.
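A very rough sketch of that four-stage loop (the pattern popularized by HuggingGPT). Everything here is an assumption for illustration: `run_model` stands in for calling a hosted Hugging Face model, and the planning/selection prompts are made up.

```python
import json
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")


def llm(prompt: str) -> str:
    """One ChatGPT call (pre-1.0 openai SDK, as elsewhere in this deck)."""
    res = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return res.choices[0].message.content


def handle_request(user_request: str, model_catalog: dict, run_model) -> str:
    # 1. Task Planning: ask the LLM to break the request into subtasks.
    tasks = json.loads(llm(
        f'Split this request into a JSON list of subtasks: "{user_request}"'))

    results = []
    for task in tasks:
        # 2. Model Selection: pick an expert model by its description.
        model_id = llm(
            f'Pick the best model id for "{task}" from this catalog, '
            f'reply with the id only: {model_catalog}')
        # 3. Task Execution: invoke the selected model and collect its output.
        results.append(run_model(model_id, task))

    # 4. Response Generation: merge all results into one final answer.
    return llm(
        f'Answer "{user_request}" using these intermediate results: {results}')
```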


Real-world Tools



Real-world Tools



Real-world Tools

```python
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.9)

# Chain 1: product -> company name
chain1 = LLMChain(llm=llm, prompt=PromptTemplate(
  input_variables=["product"],
  template="What is a good name for a company that makes {product}?"))

# Chain 2: company name -> catchphrase
chain2 = LLMChain(llm=llm, prompt=PromptTemplate(
  input_variables=["company_name"],
  template="Write a catchphrase for the following company: {company_name}"))

# The output of chain1 is fed in as the input of chain2.
overall_chain = SimpleSequentialChain(chains=[chain1, chain2])
```

Real-world Tools

```python
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
```

---
Rainbow Socks Co.

"Step into Color with Rainbow Socks!"

Real-world Tools

```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Load documents from ./data (here: data/paul_graham_essay.txt)
documents = SimpleDirectoryReader('data').load_data()
# Embed the documents and build an in-memory vector index.
index = GPTVectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```

---
The author wrote short stories and tried to program on an IBM 1401.


