What's the best way to have a proper log for chained prompts? #6146
-
I'm using kernel.InvokePromptAsync. My logs are verbose; I can see token usage and function returns, but nothing about the prompt text that's actually spending those tokens. My OpenAIPromptExecutionSettings has ToolCallBehavior set to ToolCallBehavior.AutoInvokeKernelFunctions. Are we forced to "walk blind," trying various function descriptions until the model behaves as expected?
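For reference, here is a minimal sketch of the setup described above, assuming the standard Semantic Kernel .NET OpenAI connector (the model id, API key, and prompt text are placeholders):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o", apiKey) // model id and apiKey are placeholders
    .Build();

// Auto-invoke kernel functions when the model requests a tool call.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var result = await kernel.InvokePromptAsync(
    "Summarize today's open issues.",
    new KernelArguments(settings));
```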
-
The prompts are logged at LogLevel.Trace, as they can contain sensitive information. What level are you logging at?
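In other words, you need to lower the minimum log level to Trace on the kernel's logger factory. A minimal sketch, assuming console logging via Microsoft.Extensions.Logging:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Register logging on the kernel's service collection.
// Prompt text is only emitted at Trace, so the default
// minimum level (Information) will hide it.
builder.Services.AddLogging(logging => logging
    .AddConsole()
    .SetMinimumLevel(LogLevel.Trace));

var kernel = builder.Build();
```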