Closed
Labels: ai, area-integrations (Issues pertaining to Aspire Integrations packages)
Description
Is there an existing issue for this?
- I have searched the existing issues
Describe the bug
Seems like the strongly-typed model catalog does not handle CPU/GPU/NPU model variations when using Foundry Local.
Expected Behavior
No response
Steps To Reproduce
(the following steps need to be run on a machine with a GPU supported by Foundry Local, e.g. an NVIDIA RTX series card)
- Create an Aspire starter app and add a reference to the Aspire.Hosting.Azure.AIFoundry NuGet package in the application host.
- Install Foundry Local
- Download the phi-4-mini model, e.g. using the CLI: foundry model download phi-4-mini
- Add the following to the app host code:

var localFoundry = builder.AddAzureAIFoundry("foundry")
    .RunAsFoundryLocal();
_ = localFoundry.AddDeployment("chat", "phi-4-mini", "1", "Microsoft"); // this works
_ = localFoundry.AddDeployment("chat2", AIFoundryModel.Microsoft.Phi4MiniInstruct); // this does not

- Run the app
Expected
Both "model deployments" should work
Actual
chat2 sub-resource fails to start with the error:
Failed to start Phi-4-mini-instruct. Error: Model 'Phi-4-mini-instruct' was not found in the catalogue
This is because neither the model ID nor the model alias matches what the strongly-typed model catalog contains:
> foundry cache ls
Models cached on device:
   Alias                         Model ID
💾 phi-4-mini                    Phi-4-mini-instruct-cuda-gpu

Exceptions (if any)
No response
.NET Version info
Aspire version 9.5.0-preview.1.25468.4
.NET 10.0.0-rc.1.25451.107
Anything else?
No response
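For illustration, the mismatch above could presumably be bridged by treating execution-provider suffixes on cached model IDs as hardware variants of the catalog ID. The following is a rough sketch only; the suffix list and the `ModelIdMatcher` helper are assumptions for this example, not the actual Aspire or Foundry Local matching logic:

```csharp
using System;

// Sketch: match a strongly-typed catalog id such as "Phi-4-mini-instruct"
// against a cached Foundry Local id such as "Phi-4-mini-instruct-cuda-gpu"
// by stripping a known execution-provider suffix before comparing.
static class ModelIdMatcher
{
    // Hypothetical suffix list; the real set of hardware variants is an assumption here.
    private static readonly string[] ProviderSuffixes =
        { "-cuda-gpu", "-generic-gpu", "-generic-cpu", "-npu" };

    public static bool Matches(string catalogId, string cachedId)
    {
        // Exact (case-insensitive) match first.
        if (string.Equals(catalogId, cachedId, StringComparison.OrdinalIgnoreCase))
            return true;

        // Otherwise, strip a provider suffix from the cached id and retry.
        foreach (var suffix in ProviderSuffixes)
        {
            if (cachedId.EndsWith(suffix, StringComparison.OrdinalIgnoreCase) &&
                string.Equals(catalogId, cachedId[..^suffix.Length],
                              StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }
}
```

With such a rule, `AIFoundryModel.Microsoft.Phi4MiniInstruct` would resolve against the cached `Phi-4-mini-instruct-cuda-gpu` entry instead of failing with "not found in the catalogue".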