
[RFC] Output data type #589

Merged · 1 commit merged into main on Dec 23, 2023

Conversation

saqadri (Contributor) commented Dec 22, 2023

[RFC] Output data type

Add some structure to the outputs so that the frontend knows how to render them.

Still need to update the Python types and model parsers to return data in this type.

Review threads on schema/aiconfig.schema.json (resolved):
```
| {
    kind: "string" | "file_uri" | "base64";
    value: string;
  };
```
rholinshead (Contributor) commented Dec 22, 2023:

Ah, right, I guess we can rely on the mime_type below to know the represented type

```
- data: JSONValue;
+ data:
+   | JSONValue
```
A contributor commented:

Long-term, do we want to support arbitrary JSONValue here or just let them dump things into metadata so we can always have a kind/value?

saqadri (Contributor, Author) replied:

We can decide that. My 2c: it's still good to support arbitrary JSONValue here, but we can revisit that. It would require a breaking change to the schema too, so we can consider this for the v2 schema.
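Since `data` can be either an arbitrary `JSONValue` or the structured `kind`/`value` object, a renderer needs a narrowing check before deciding how to display it. A minimal sketch, assuming a hypothetical `render_output` helper (not part of this PR):

```
from typing import Any, Optional

def render_output(data: Any, mime_type: Optional[str] = None) -> str:
    # Structured outputs carry an explicit kind/value pair.
    if isinstance(data, dict) and {"kind", "value"} <= data.keys():
        kind, value = data["kind"], data["value"]
        if kind == "string":
            return value
        if kind == "file_uri":
            return f"[linked file: {value}]"
        if kind == "base64":
            # The mime_type (plain text if unspecified, per the schema) tells
            # the frontend how to decode and display the base64 payload.
            return f"[{mime_type or 'text/plain'} payload, {len(value)} base64 chars]"
    # Otherwise fall back to treating data as an arbitrary JSONValue.
    return repr(data)
```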

Comment on lines +14 to +44
```
# Imports added here for context; the quoted diff starts at the class definitions.
from typing import Any, Dict, Literal, Optional, Union

from pydantic import BaseModel


class OutputData(BaseModel):
    """
    OutputData represents the output content in a standard format.
    """

    kind: Literal["string", "file_uri", "base64"]
    value: str


class ExecuteResult(BaseModel):
    """
    ExecuteResult represents the result of executing a prompt.
    """

    # Type of output
    output_type: Literal["execute_result"]
    # nth choice.
    execution_count: Union[int, None] = None
    # The result of executing the prompt.
    data: Union[Any, OutputData]
    # The MIME type of the result. If not specified, the MIME type will be
    # assumed to be plain text.
    mime_type: Optional[str] = None
    # Output metadata
    metadata: Dict[str, Any]


class Error(BaseModel):
    """
    Error represents an error that occurred while executing a prompt.
    """
```
saqadri (Contributor, Author) commented:

These are the only changes -- the rest are autoformatting changes
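
For reference, a minimal usage sketch of the new types; the import path below is an assumption (use wherever these models live in the package):

```
# Import path is assumed for illustration.
from aiconfig.schema import ExecuteResult, OutputData

result = ExecuteResult(
    output_type="execute_result",
    execution_count=0,
    data=OutputData(kind="base64", value="iVBORw0KGgo..."),  # e.g. a PNG payload
    mime_type="image/png",
    metadata={},
)

# Pydantic validates `kind` against the Literal; an unknown kind raises a
# ValidationError. (.json() is Pydantic v1; use .model_dump_json() on v2.)
print(result.json())
```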

saqadri (Contributor, Author) commented Dec 23, 2023

@rossdanlm I'm going to ship this -- can you take up the model parser updates to satisfy these types? You could start with a couple of them (maybe Dalle and GPT) to see what it's like. cc @rholinshead
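As a rough illustration of what those parser updates might involve (the helper below is hypothetical, not from this PR), an image model like Dalle could map its response into the structured type rather than raw text:

```
# Hypothetical parser helper; names and response shape are assumptions.
from aiconfig.schema import ExecuteResult, OutputData  # import path assumed

def build_image_output(image_url: str) -> ExecuteResult:
    return ExecuteResult(
        output_type="execute_result",
        execution_count=0,
        data=OutputData(kind="file_uri", value=image_url),
        mime_type="image/png",
        metadata={},
    )
```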

saqadri merged commit 307f791 into main on Dec 23, 2023
4 checks passed
rossdanlm pushed a commit that referenced this pull request on Dec 24, 2023

This comes after Sarmad's schema updates in #589.

We only needed to update `hf.py` and `openai.py`, because `palm.py` already returns output as a `string | null` type.

Ran the yarn automated tests, but there aren't any specifically for openai. Not sure how to run the `demo.ts` file, which would also be a reasonable test to ensure everything there works.
saqadri added a commit that referenced this pull request on Dec 25, 2023

Add schema explicitly for JSON

Adding the $schema property to the JSON with the schemastore schema. Currently our schema has just one version, but in the future we should respect the schema the config already has, or introduce proper versioning.

Test Plan:
```
from aiconfig import AIConfigRuntime

# Load the aiconfig (without $schema).
config = AIConfigRuntime.load('travel.aiconfig.json')
config.save()

# Ensure $schema is specified and I can get IntelliSense in the json file now.
```
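
A hedged way to verify that behavior programmatically, building on the test plan above (the in-place save and the assertion details are assumptions):

```
import json

from aiconfig import AIConfigRuntime

# Load an aiconfig that lacks $schema, then save it back in place.
config = AIConfigRuntime.load('travel.aiconfig.json')
config.save()

# The saved file should now carry a $schema pointer, enabling IntelliSense.
with open('travel.aiconfig.json') as f:
    saved = json.load(f)
assert "$schema" in saved
```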

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/598).
* __->__ #598
* #589
rossdanlm added a commit that referenced this pull request on Dec 27, 2023

[typescript] Save output.data with text content instead of response data (#603)

This comes after Sarmad's schema updates in #589. To keep diffs small and easier to review, this simply converts from model-specific outputs --> pure text. I have a diff in #610 which converts from pure text --> `OutputData` format.

We only needed to update `hf.py` and `openai.py`, because `palm.py` already returns output as a `string | null` type.

Ran the yarn automated tests, but there aren't any specifically for openai. I also ran the typescript demos to make sure they still work. Run these commands from the `aiconfig` top-level dir:
```
npx ts-node typescript/demo/function-call-stream.ts
npx ts-node typescript/demo/demo.ts
npx ts-node typescript/demo/test-hf.ts
```

For the extensions, we only have typescript for `hf.ts` (trivial: just changed `response` to `response.generated_text`), while `llama.ts` already outputs text, so no changes were needed.

## TODO
I still need to add function call support directly to the `OutputData` format. See
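
For a sense of the shape of that hf change, a minimal sketch; the response type below is a stand-in, and only the `generated_text` field comes from the commit message:

```
from dataclasses import dataclass


@dataclass
class TextGenerationResponse:
    """Stand-in for the HF client's response object (assumed shape)."""
    generated_text: str


def get_output_text(response: TextGenerationResponse) -> str:
    # Before: the raw response object was stored as the output data.
    # After: only the generated text is stored, matching the string output type.
    return response.generated_text


print(get_output_text(TextGenerationResponse(generated_text="hello")))
```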
Victor-Su-Ortiz pushed a commit to Victor-Su-Ortiz/aiconfig that referenced this pull request on Jan 4, 2024