✍️ GPT-4o in under 3 minutes (#20)
- Created new blog post entry "✍️ GPT-4o in under 3 minutes"
- Fixed YouTube video missing base URL
Bullrich authored May 16, 2024
1 parent aa69b33 commit 1695578
Showing 2 changed files with 98 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/_includes/blog.html
@@ -19,8 +19,8 @@ <h3 class="text-4xl lg:text-6xl font-bold blog-title">{{ title }}</h3>

  <div class="blog-content">
  {% if video %}
- <iframe width="560" height="315" src="{{ video }}" title="YouTube video player" frameborder="0"
-   allow="autoplay; clipboard-write; encrypted-media; picture-in-picture"
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/{{ video }}" title="YouTube video player"
+   frameborder="0" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture"
  referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
  {% endif %}

96 changes: 96 additions & 0 deletions src/blog/2024-05-16-a-quick-tutorial-on-gpt-4o.md
@@ -0,0 +1,96 @@
---
title: "GPT-4o in under 3 minutes"
date: 2024-05-16
description: |
Quick introduction into how to use the newly released model GPT-4o.
tags:
- openai
- ai
video: "4GPXwaWyxX8?si=m60rvMjzd7D9Qq5o"
---
Now that [GPT-4o](https://openai.com/index/hello-gpt-4o/) has been released, let’s look at how we can implement our own custom GPT-4o assistant.

To start, go to an empty directory, create an `npm` project, and install the dependencies:
- `npm init -y`
- `npm install openai`

Because I’m running the code directly with Node and using `import` syntax, I’m adding `"type": "module"` to my package.json:
```json
{
"name": "tutorial-gpt-4o",
"type": "module",
"version": "1.0.0",
...
}
```

With the preparation out of the way, it’s time to create our script.

```js
// We import the OpenAI library
import { OpenAI } from "openai";

// We set our key in a variable
// Pssss.. you should use an environment variable
const key = "YOUR-API-KEY";

async function main() {
  // The OpenAI constructor accepts an `apiKey` string parameter
  const openai = new OpenAI({ apiKey: key });

  console.log("Writing message");

  // We need to call a chained function
  const chat = await openai.chat.completions.create({
    // This is where we assign GPT-4o
    model: "gpt-4o",
    messages: [{
      // Every message has a `role` and `content`
      role: "user", content: "What color is the sky?"
    }]
  });

  // Finally we log our response
  console.log("Response:", chat.choices[0].message.content);
}

// Don't forget to call your function!
main();
```

If you run `node index.js`, you’ll have an output similar to:
> Writing message
> Response: Hello! I'm just a computer program, so I don't have feelings, but thanks for asking. How can I assist you today?
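
A quick aside before we dig deeper: as the comment in the script hints, you shouldn’t hardcode your key. Here is a minimal sketch of the safer variant, reading the key from an environment variable (the SDK also falls back to `OPENAI_API_KEY` on its own if you omit `apiKey`):
```js
// Read the key from the environment instead of hardcoding it.
// Run with: OPENAI_API_KEY="your-key" node index.js
import { OpenAI } from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
```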
## Deep diving into the types
Now let’s analyze the code in detail, as some parts may be a little confusing:
```js
messages: [{
role: "user", content: "What color is the sky?"
}]
```

Every `message` object in the array has two values. The first one is `role`, which represents _who_ is sending the message.
We are using `user`, which indicates to the model that it has to reply. We can also use `system`, which carries the core instructions for the model, but the model will still wait for a second message coming from `user`.

A working example would be the following:
```js
messages: [
  { role: "system", content: "You are a meteorologist" },
  { role: "user", content: "Why does it rain?" }
]
```

By setting the `system` instructions, the model will know that, for every query it receives, it needs to remember that it _is a meteorologist_.

The other confusing element could be the value that we receive:
```js
chat.choices[0].message.content
```

Every time we call the completion endpoint, it returns an array of choices. By default, **we will always receive a single choice**, so unless you manually change this (because you want to compare results), you _should always read element 0 of the array_.
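
If you ever do want several answers to compare, the endpoint accepts an `n` parameter. A small sketch (keep in mind every extra choice produces extra output tokens):
```js
const chat = await openai.chat.completions.create({
  model: "gpt-4o",
  // Ask for three alternative completions instead of the default single one
  n: 3,
  messages: [{ role: "user", content: "What color is the sky?" }]
});

// Each choice carries its own message
chat.choices.forEach((choice, index) =>
  console.log(`Choice ${index}:`, choice.message.content)
);
```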

`message` is a wrapper with 3 values: `content`, which is the answer; `role`, which we just saw before and which, in this case, will be `assistant`; and finally `tool_calls`, which we won’t cover here (you can read about it in [the documentation](https://platform.openai.com/docs/guides/function-calling)).
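
To make that concrete, here is a small sketch that pulls those three fields out of the first choice:
```js
const { role, content, tool_calls } = chat.choices[0].message;

console.log(role);       // "assistant"
console.log(content);    // the answer text
console.log(tool_calls); // undefined unless the model decided to call a tool
```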

And that’s it! You can now start experimenting with your model.
