Implementing model from scratch #964

Open
CharbelDaher34 opened this issue Feb 21, 2025 · 0 comments

I was trying to use my local model (without using Ollama) with PydanticAI. I saw that I have to implement the `Model` class from scratch; it worked without structured outputs, but when I added structured outputs it stopped working.

This is my code:

```python
from typing import Type, TypeVar

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelResponse, TextPart
from pydantic_ai.models import Model, ModelRequestParameters
from pydantic_ai.settings import ModelSettings
from pydantic_ai.usage import Usage

T = TypeVar("T")


class LocalModel(Model):
    @property
    def model_name(self) -> str:
        """Return the model name."""
        return "local:model-v1"

    @property
    def system(self) -> str | None:
        """Return the model provider/system. For local models, this is None."""
        return None

    async def request(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None,
        model_request_parameters: ModelRequestParameters,
    ) -> tuple[ModelResponse, Usage]:
        """Make a request to the model with validation and error handling."""
        # Validate messages
        if not isinstance(messages, list):
            raise ValueError("Messages must be a list of ModelMessage objects")

        # Extract the system and user prompts with better error handling
        try:
            system_prompt = messages[0].parts[0].content
            user_prompt = messages[0].parts[1].content
        except (IndexError, AttributeError) as e:
            raise ValueError("Messages must contain system and user prompts") from e

        # Check for the provided result_tools
        result_tools = getattr(model_request_parameters, "result_tools", None)
        if result_tools is not None:
            print(f"Using provided result_tools: {result_tools}")

        # Make the API call; send_prompts returns the local model's answer as a string
        response_text = send_prompts(system_prompt, user_prompt)

        # Create and return the response
        return ModelResponse(parts=[TextPart(content=response_text)]), Usage()

    def __str__(self):
        return "local:model-v1"
```

```python
class GeminiStructuredAgent:
    def __init__(self, model_name: str):
        """Initialize the Gemini agent with the specified model name.

        Args:
            model_name: Name of the Gemini model to use
        """
        self.model_name = model_name

    async def analyze_text(self, prompt: str, response_model: Type[T]) -> T:
        """Analyze text using the model and return structured output."""
        system_prompt = """
        You are an intelligent assistant. Analyze the provided text
        and return a structured response following this exact format:

        {
            "title": "string",
            "description": "string",
            "type": "one of: backlog, story, task, bug, other",
            "domain": "one of: backend, frontend, ai, gaming, cybersecurity, other"
        }

        Ensure all fields are provided and match the expected values exactly.
        """
        # ollama_model = OpenAIModel(model_name='llama3.2:1b', base_url='http://localhost:11434/v1')

        model = LocalModel()
        agent = Agent(
            model=model,
            result_type=response_model,
            system_prompt=system_prompt,
        )

        # model_params.response_format = response_model.model_json_schema()
        result = await agent.run(
            prompt,
            model_settings={"temperature": 0.7},
        )
        return result.data  # the validated, structured result
```

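For reference, this is how I call it (the `Ticket` model below is just an example matching the schema in my system prompt):

```python
import asyncio
from pydantic import BaseModel

# Example response model matching the JSON format in the system prompt
class Ticket(BaseModel):
    title: str
    description: str
    type: str
    domain: str

async def main():
    agent = GeminiStructuredAgent(model_name="local:model-v1")
    ticket = await agent.analyze_text("Add OAuth login to the web app", Ticket)
    print(ticket)

asyncio.run(main())
```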

I couldn't find any documentation regarding this topic; any help would be appreciated.
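My current guess, from reading the pydantic_ai source, is that when `result_tools` is provided the model is expected to answer with a `ToolCallPart` for one of those tools rather than a plain `TextPart`, so the agent can validate the arguments against the result schema. Here is a minimal sketch of what I mean; it assumes my `send_prompts` can be coaxed into emitting JSON matching the tool's schema, and that `ToolCallPart` accepts a parsed dict as `args` (the exact signature has changed across releases, so check your installed version):

```python
import json

from pydantic_ai.messages import ModelResponse, TextPart, ToolCallPart
from pydantic_ai.usage import Usage

# Sketch of a request() body that answers via a result tool call.
# Assumption: send_prompts() returns raw JSON text matching the tool's schema.
async def request(self, messages, model_settings, model_request_parameters):
    system_prompt = messages[0].parts[0].content
    user_prompt = messages[0].parts[1].content
    response_text = send_prompts(system_prompt, user_prompt)

    result_tools = model_request_parameters.result_tools
    if result_tools:
        # Hand the JSON back as a call to the first result tool
        # (named 'final_result' by default), so the agent can validate it
        tool = result_tools[0]
        part = ToolCallPart(tool_name=tool.name, args=json.loads(response_text))
        return ModelResponse(parts=[part]), Usage()

    # No result tools: fall back to plain text
    return ModelResponse(parts=[TextPart(content=response_text)]), Usage()
```

Is that the intended approach for structured outputs with a custom model?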
