How to get structured output from a query pipeline? #16747
To get structured output from a query pipeline using a Pydantic model, you can integrate the model with the LLM prompt template by using an output parser or a Pydantic program. Here's a conceptual example of how you might set this up:

```python
from llama_index.program.lmformatenforcer import LMFormatEnforcerPydanticProgram
from llama_index.llms.llama_cpp import LlamaCPP

# Initialize the LLM
llm = LlamaCPP()

# Create a program that enforces the structured output format.
# rewrite_template is your template string referencing {chat_history}
# and {query}; CondensedQuestion is your Pydantic output model.
program = LMFormatEnforcerPydanticProgram(
    output_cls=CondensedQuestion,
    prompt_template_str=rewrite_template,
    llm=llm,
    verbose=True,
)

# Execute the program with the necessary inputs
output = program(
    chat_history="Your chat history here",
    query="Your latest query here",
)

# Access the structured output
new_user_question = output.new_user_question
```

This setup will give you the new user question as structured output from the LLM, which you can then pass to the next component of your query pipeline [1][2].
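To run this inside a `QueryPipeline`, one option is to wrap the program in a `FnComponent` so that downstream components receive just the string. Here is a minimal sketch, assuming the `program` above is already defined; `condense_question` and `next_step` are hypothetical names, and `next_step` stands in for whatever component actually follows in your pipeline:

```python
from llama_index.core.query_pipeline import FnComponent, QueryPipeline

# Wrap the program so the pipeline passes a plain string downstream.
def condense_question(chat_history: str, query: str) -> str:
    output = program(chat_history=chat_history, query=query)
    return output.new_user_question

# Stand-in for your real next component (e.g., a retriever or synthesizer).
def next_step(question: str) -> str:
    return f"Answering: {question}"

pipeline = QueryPipeline(
    chain=[FnComponent(fn=condense_question), FnComponent(fn=next_step)],
    verbose=True,
)

result = pipeline.run(
    chat_history="Your chat history here",
    query="Your latest query here",
)
```

Because `condense_question` returns a single string and `next_step` takes a single input, the pipeline can link the two components automatically.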
I have a pydantic class as below:
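(The class definition wasn't captured here; judging from the `new_user_question` field used in the answer above, it is presumably something like:)

```python
from pydantic import BaseModel

class CondensedQuestion(BaseModel):
    # Assumed shape, inferred from the field accessed in the answer above.
    new_user_question: str
```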
How do I add this output model to the LLM prompt template? I want to get the new user question from the LLM as a string and use it in the next component.