[RFC] Output data type #589
Conversation
```
+  | {
+      kind: "string" | "file_uri" | "base64";
+      value: string;
+    };
```
Ah, right, I guess we can rely on the mime_type below to know the represented type
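To make that concrete, here is a minimal sketch of how a frontend might dispatch on the two fields: `kind` says how the bytes arrive, `mime_type` says what they represent. The helper is hypothetical (it returns HTML strings purely for illustration) and is not part of the aiconfig codebase.
```
type OutputData = { kind: "string" | "file_uri" | "base64"; value: string };

// Hypothetical renderer: `kind` determines transport, `mime_type` the content.
function renderOutput(data: OutputData, mimeType?: string): string {
  const mime = mimeType ?? "text/plain"; // schema default when unspecified
  switch (data.kind) {
    case "string":
      return data.value; // inline text (plain text, markdown, JSON, ...)
    case "file_uri":
      return `<a href="${data.value}">${mime}</a>`; // link out to the stored file
    case "base64":
      // embed inline, e.g. an image returned as base64-encoded bytes
      return `<img src="data:${mime};base64,${data.value}"/>`;
  }
}
```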
```
-data: JSONValue;
+data:
+  | JSONValue
```
Long-term, do we want to support arbitrary JSONValue here or just let them dump things into metadata so we can always have a kind/value?
We can decide that. My 2c is that it's still good to support arbitrary JSONValue here, but we can revisit it. It would also require a breaking change to the schema, so we can consider this for the v2 schema.
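For what the trade-off looks like in practice, here is a sketch of the runtime guard consumers would need with the `JSONValue | OutputData` union (the helper name is ours, not part of the schema). Note that a raw JSONValue object that happens to carry string `kind`/`value` fields is indistinguishable, which is the argument for an always-kind/value shape in a v2 schema.
```
type JSONValue =
  | string | number | boolean | null
  | JSONValue[]
  | { [key: string]: JSONValue };
type OutputData = { kind: "string" | "file_uri" | "base64"; value: string };

// Hypothetical guard: structural check only, so a plain JSONValue object that
// happens to have string `kind`/`value` fields would be misclassified.
function isOutputData(data: JSONValue | OutputData): data is OutputData {
  return (
    typeof data === "object" &&
    data !== null &&
    !Array.isArray(data) &&
    typeof (data as OutputData).kind === "string" &&
    typeof (data as OutputData).value === "string"
  );
}
```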
```
class OutputData(BaseModel):
    """
    OutputData represents the output content in a standard format.
    """

    kind: Literal["string", "file_uri", "base64"]
    value: str


class ExecuteResult(BaseModel):
    """
    ExecuteResult represents the result of executing a prompt.
    """

    # Type of output
    output_type: Literal["execute_result"]
    # nth choice.
    execution_count: Union[int, None] = None
    # The result of executing the prompt.
    data: Union[Any, OutputData]  # was: data: Any
    # The MIME type of the result. If not specified, the MIME type will be assumed to be plain text.
    mime_type: Optional[str] = None
    # Output metadata
    metadata: Dict[str, Any]


class Error(BaseModel):
    """
    Error represents an error that occurred while executing a prompt.
    """
```
These are the only changes -- the rest are autoformatting changes
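For cross-reference, here is a TypeScript mirror of the Pydantic models above. This is a sketch for the discussion; the canonical types live in the typescript package.
```
type JSONValue =
  | string | number | boolean | null
  | JSONValue[]
  | { [key: string]: JSONValue };

type OutputData = {
  kind: "string" | "file_uri" | "base64";
  value: string;
};

type ExecuteResult = {
  output_type: "execute_result";
  execution_count?: number | null; // nth choice
  data: JSONValue | OutputData;
  mime_type?: string; // assumed to be plain text when unspecified
  metadata: { [key: string]: JSONValue };
};
```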
Add some structure to the outputs that allows the frontend to know how to render them. Still need to update the Python types and model parsers to return data in this type.
@rossdanlm I'm going to ship this -- can you take up the model parser updates to satisfy these types? You can start with a couple of them (maybe DALL-E and GPT) to see what it's like. cc @rholinshead
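As a starting point, here is roughly what those two parsers might emit under the new schema. The values and field contents are illustrative, not taken from the actual parser implementations.
```
// Hypothetical GPT-style text output: inline string content.
const gptOutput = {
  output_type: "execute_result",
  execution_count: 0,
  data: { kind: "string", value: "Sure, here is a haiku..." },
  mime_type: "text/plain",
  metadata: {},
};

// Hypothetical DALL-E-style image output: a URI pointing at the rendered image.
const dalleOutput = {
  output_type: "execute_result",
  execution_count: 0,
  data: { kind: "file_uri", value: "https://example.com/generated.png" },
  mime_type: "image/png",
  metadata: {},
};
```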
Add schema explicitly for JSON

Adding the $schema property to JSON with the schemastore schema. Currently our schema has just one version, but in the future we should respect the schema that the config already has or introduce proper versioning.

Test Plan:
```
from aiconfig import AIConfigRuntime

# Load the aiconfig (without $schema).
config = AIConfigRuntime.load('travel.aiconfig.json')
config.save()

# Ensure $schema is specified and I can get IntelliSense in the json file now.
```

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/598).
* __->__ #598
* #589
[typescript] Save output.data with text content instead of response data (#603)

This comes after Sarmad's schema updates in #589. To keep diffs small and easier to review, this simply converts from model-specific outputs --> pure text. I have a diff in #610 which converts from pure text --> `OutputData` format.

We only needed to update `hf.py` and `openai.py`, because `palm.py` already returns output in the form of `string | null` type.

Ran yarn automated tests, but there aren't any specifically for openai. I also ran the typescript demos to make sure that they still work. Run these commands from the `aiconfig` top-level dir:
```
npx ts-node typescript/demo/function-call-stream.ts
npx ts-node typescript/demo/demo.ts
npx ts-node typescript/demo/test-hf.ts
```

For the extensions, we only have typescript for `hf.ts` (trivial: just changed `response` to `response.generated_text`), while `llama.ts` already outputs it in text format, so no changes are needed.

## TODO

I still need to add function call support directly to the `OutputData` format. See
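For context, a sketch of the `hf.ts` change described above. The response shape follows Hugging Face's text-generation output, but the surrounding function is illustrative, not the actual parser code.
```
// The HF text-generation response carries the text under `generated_text`.
type TextGenerationOutput = { generated_text: string };

function toOutputText(response: TextGenerationOutput): string {
  // Before: the whole response object was saved as output.data.
  // After: only the generated text string is saved.
  return response.generated_text;
}
```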