include/llama.h: 8 additions & 0 deletions
@@ -1208,6 +1208,14 @@ extern "C" {
                            llama_token_data_array * candidates,
                                             float   temp);
 
+    /// @details XTC sampler as described in https://github.com/oobabooga/text-generation-webui/pull/6335
+    LLAMA_API void llama_sample_xtc(
+              struct llama_context * ctx,
+            llama_token_data_array * candidates_p,
+                             float   probability,
+                             float   threshold,
+                            size_t   min_keep);
+
     /// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
     /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
     /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
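
For context, a minimal usage sketch (not part of the diff) of how the new entry point might be called, following the pattern of the existing llama_sample_* helpers in this header. The helper name sample_with_xtc, the explicit llama_sample_softmax call before XTC, and the probability/threshold/min_keep values are illustrative assumptions, not library defaults:

```cpp
#include "llama.h"

#include <vector>

// Sketch: sample one token, applying the new XTC sampler before the final pick.
// `idx` is the batch position whose logits we sample from.
static llama_token sample_with_xtc(llama_context * ctx, const llama_model * model, int idx) {
    const float * logits  = llama_get_logits_ith(ctx, idx);
    const int     n_vocab = llama_n_vocab(model);

    // Build the candidate array the same way the other llama_sample_* helpers expect.
    std::vector<llama_token_data> cur;
    cur.reserve(n_vocab);
    for (llama_token id = 0; id < n_vocab; ++id) {
        cur.push_back(llama_token_data{ id, logits[id], 0.0f });
    }
    llama_token_data_array candidates = { cur.data(), cur.size(), /*sorted=*/false };

    // Normalize logits to probabilities so the XTC threshold is meaningful
    // (assumption: whether XTC does this internally is up to the implementation),
    // then let XTC, per the linked PR, probabilistically remove the most
    // probable tokens above `threshold`, keeping at least `min_keep`.
    llama_sample_softmax(ctx, &candidates);
    llama_sample_xtc(ctx, &candidates, /*probability=*/0.5f, /*threshold=*/0.1f, /*min_keep=*/1);

    return llama_sample_token(ctx, &candidates);
}
```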