server : update /props with "total_slots" value (#5373)
* include total "num_slots" in default_generation_settings_for_props

* cleanup total_slots return value in /props endpoint

* update /props endpoint docs with total_slots

* remove num_slots from default_generation_settings_for_props

* update /props endpoint section
jparkerweb authored Feb 7, 2024
1 parent f68664a commit f3e2b4f
Showing 2 changed files with 5 additions and 3 deletions.
examples/server/README.md (4 changes: 3 additions & 1 deletion)
````diff
@@ -276,13 +276,15 @@ Notice that each `probs` is an array of length `n_probs`.
 {
   "assistant_name": "",
   "user_name": "",
-  "default_generation_settings": { ... }
+  "default_generation_settings": { ... },
+  "total_slots": 1
 }
 ```
 - `assistant_name` - the required assistant name to generate the prompt in case you have specified a system prompt for all slots.
 - `user_name` - the required anti-prompt to generate the prompt in case you have specified a system prompt for all slots.
 - `default_generation_settings` - the default generation settings for the `/completion` endpoint, which has the same fields as the `generation_settings` response object from the `/completion` endpoint.
+- `total_slots` - the total number of slots for processing requests (defined by the `--parallel` option)
 - **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted json description in `messages`, it returns the predicted completion. Both synchronous and streaming modes are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with the OpenAI API spec are being made, in our experience it suffices to support many apps. Only ChatML-tuned models, such as Dolphin, OpenOrca, OpenHermes, OpenChat-3.5, etc., can be used with this endpoint. Compared to `api_like_OAI.py`, this API implementation does not require a wrapper to be served.
````
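Not part of the commit, but to illustrate the new field: a minimal client sketch that queries `/props` and reads `total_slots`. It assumes cpp-httplib (`httplib.h`) and nlohmann/json (`json.hpp`), the same header-only libraries the server example already uses, and a server running at the assumed default `localhost:8080`.

```cpp
#include <iostream>

#include "httplib.h"   // cpp-httplib (vendored by the server example)
#include "json.hpp"    // nlohmann/json (vendored by the server example)

int main() {
    httplib::Client cli("localhost", 8080);   // assumed default host/port
    auto res = cli.Get("/props");
    if (!res || res->status != 200) {
        std::cerr << "GET /props failed\n";
        return 1;
    }
    const auto props = nlohmann::json::parse(res->body);
    // total_slots mirrors the --parallel value the server was started with
    std::cout << "total_slots: " << props["total_slots"].get<int>() << "\n";
    return 0;
}
```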
examples/server/server.cpp (4 changes: 2 additions & 2 deletions)
```diff
@@ -432,7 +432,6 @@ struct llama_server_context
         }

         default_generation_settings_for_props = get_formated_generation(slots.front());
-        default_generation_settings_for_props["num_slots"] = params.n_parallel;
         default_generation_settings_for_props["seed"] = -1;

         batch = llama_batch_init(n_ctx, 0, params.n_parallel);
@@ -2639,7 +2638,8 @@ int main(int argc, char **argv)
             json data = {
                 { "user_name",                   llama.name_user.c_str() },
                 { "assistant_name",              llama.name_assistant.c_str() },
-                { "default_generation_settings", llama.default_generation_settings_for_props }
+                { "default_generation_settings", llama.default_generation_settings_for_props },
+                { "total_slots",                 llama.params.n_parallel }
             };
             res.set_content(data.dump(), "application/json; charset=utf-8");
         });
```
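The practical reason to expose the count: a client that fans out many `/completion` requests can hold at most `total_slots` of them in flight, so each request lands on a free slot instead of queueing. A hypothetical sketch (C++20; `send_completion` is a stub standing in for an actual HTTP call, and the semaphore's compile-time bound of 256 is an assumed ceiling on the slot count):

```cpp
#include <semaphore>
#include <thread>
#include <vector>

// hypothetical stand-in for POSTing one /completion request
void send_completion(int id) { /* ... HTTP call would go here ... */ }

int main() {
    const int total_slots = 4;   // in practice, read this from GET /props
    std::counting_semaphore<256> free_slots(total_slots);

    std::vector<std::thread> workers;
    for (int i = 0; i < 16; ++i) {
        workers.emplace_back([&free_slots, i] {
            free_slots.acquire();    // block until a server slot should be free
            send_completion(i);      // one in-flight request per acquired slot
            free_slots.release();
        });
    }
    for (auto & t : workers) {
        t.join();
    }
    return 0;
}
```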
