Update docs #1025

Merged
merged 2 commits on Oct 10, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -8,7 +8,7 @@

# BAML

-BAML is a domain-specific-language to write and test LLM functions.
+BAML is a domain-specific language to write and test LLM functions.

An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
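
As a hedged illustration of that definition, a minimal LLM function might look like the sketch below. The syntax follows the commented-out `ExtractResume` example later in this same diff; the `GPT4` client name and the `{#input.*}` template variable are assumptions, not taken verbatim from this PR.

```baml
// Hypothetical sketch: one input variable, one typed (class) output.
class Resume {
  name string
  skills string[]
}

function ExtractResume(resume_text: string) -> Resume {
  client GPT4  // assumed client name, configured elsewhere
  prompt #"
    Parse the following resume and return structured data:
    {#input.resume_text}
  "#
}
```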

8 changes: 4 additions & 4 deletions docs/docs/comparisons/pydantic.mdx
@@ -191,7 +191,7 @@ and sometimes even GPT-4 outputs incorrect stuff like this, even though it's tec
```
(this is an actual result from GPT-4 before some more prompt engineering)

-when all you really want is a prompt that looks like the one below -- with way less tokens (and less likelyhood of confusion). :
+when all you really want is a prompt that looks like the one below -- with way less tokens (and less likelihood of confusion). :
```diff
Parse the following resume and return a structured representation of the data in the schema below.
Resume:
@@ -220,8 +220,8 @@ Ahh, much better. **That's 80% less tokens** with a simpler prompt, for the same
But we digress, let's get back to the point. You can see how this can get out of hand quickly, and how Pydantic wasn't really made with LLMs in mind. We haven't gotten around to adding resilience like **retries, or falling back to a different model in the event of an outage**. There's still a lot of wrapper code to write.

### Pydantic and Enums
-There's other core limitations.
-Say you want to do a classification task using Pydantic. An Enum is a great fit for modeling this.
+There are other core limitations.
+Say you want to do a classification task using Pydantic. An Enum is a great fit for modelling this.
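
For context, a minimal sketch of the Pydantic-and-Enum pattern this passage refers to (the `Category` labels and `Classification` model are hypothetical names for illustration, not from the PR):

```python
from enum import Enum

from pydantic import BaseModel


# Hypothetical label set for a classification task.
class Category(str, Enum):
    REFUND = "refund"
    CANCEL = "cancel"
    OTHER = "other"


# Enum-backed model: Pydantic rejects any label outside the set.
class Classification(BaseModel):
    category: Category


# Validate raw JSON an LLM might return into the typed model;
# an out-of-set label raises a ValidationError instead of passing silently.
result = Classification.model_validate_json('{"category": "refund"}')
print(result.category)
```

This uses Pydantic v2's `model_validate_json`; on v1 the equivalent call is `Classification.parse_raw(...)`.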

Assume this is our prompt:
```text
@@ -375,7 +375,7 @@ function ExtractResume(resume_text: string) -> Resume {
"#
}
``` */}
-The BAML compiler generates a python client that import and call the function:
+The BAML compiler generates a python client that imports and calls the function:
```python
from baml_client import baml as b
