I have an application that takes an image, converts it into base64, and creates an input request for an API call.
The input schema created by my application looks something like this:
{
"instances":
[
{
"base64": "base64 string of image",
"mode_type": "some value"
"metadata": "some metadata like timestamp"
}
]
}
Now I have to use this application to call a TorchServe-hosted model. From going through the TorchServe documentation, I understood that a TorchServe-hosted API would accept input in the structure below:
{
"instances":
[
{
"data": [input_data]
}
]
}
where input_data is the data directly accepted by the model. For the sake of discussion, let's say it is a NumPy array.
Here are my questions:
If I wanted to use my application to call a TorchServe API, how easy or difficult would it be? Take into account that a similar discrepancy exists in the output structure, which might require some pre- or post-processing to convert the base64 payload into the appropriate format.
How can I integrate my application with the TorchServe API seamlessly?
Hi @tarunsk1998
If I understand correctly, you are expecting a JSON structure with an embedded image (encoded as base64) as the input request.
That should be no problem. You'll just need to decode the data into the right format in your handler.
This usually happens in the preprocessing function of the handler.
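To make that concrete, here is a minimal sketch of such a preprocessing function. It assumes the handler receives each request's parsed JSON body under the "data" or "body" key (as TorchServe custom handlers typically do) and that the "base64" field matches your application's schema; the function name and structure are illustrative, not a fixed TorchServe API.

```python
import base64
import json


def preprocess(data):
    """Decode the base64 image from each incoming request.

    `data` is the list of requests TorchServe hands to the handler;
    each row carries the JSON body under "data" or "body".
    Returns one decoded image byte string per request.
    """
    decoded = []
    for row in data:
        body = row.get("data") or row.get("body")
        # The body may arrive as raw bytes; parse it into a dict first.
        if isinstance(body, (bytes, bytearray)):
            body = json.loads(body)
        # "base64" is the field name from your application's input schema.
        decoded.append(base64.b64decode(body["base64"]))
    return decoded
```

From there you would convert the raw bytes into whatever the model expects (e.g. a NumPy array or tensor), typically still inside preprocess.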
Same for the output. You can convert your model outputs to any format (that can be serialized) in the postprocess function.
As an example, here the postprocessing just applies argmax to a tensor and converts it into a list. But you can do any additional conversion or processing, as long as the return value is a list and each element is serializable.
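A sketch of that argmax postprocess step might look like the following; the tensor shape (batch of per-class scores) is an assumption for illustration.

```python
import torch


def postprocess(inference_output):
    """Convert model output into a serializable list.

    `inference_output` is assumed to be a (batch, num_classes) tensor
    of scores; argmax picks the predicted class per request, and
    .tolist() yields plain Python ints, which serialize cleanly.
    """
    classes = torch.argmax(inference_output, dim=1)
    return classes.tolist()
```

Any other serializable structure works the same way: build one entry per request and return them as a list.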