
deps(nuget): Bump the microsoft-packages group with 2 updates #53

Merged
Ellerbach merged 2 commits into main from
dependabot/nuget/src/AzureAISearchSimulator.Search/microsoft-packages-6576cab0b0
Feb 25, 2026

Conversation

Contributor

@dependabot dependabot bot commented on behalf of github Feb 23, 2026

Updated Microsoft.ML.OnnxRuntime from 1.22.0 to 1.24.2.

Release notes

Sourced from Microsoft.ML.OnnxRuntime's releases.

1.24.2

This is a patch release for ONNX Runtime 1.24, containing several bug fixes, security improvements, and execution provider updates.

Bug Fixes

  • NuGet: Fixed native library loading issues in the ONNX Runtime NuGet package on Linux and macOS. (#​27266)
  • macOS: Fixed Java support and Jar testing on macOS ARM64. (#​27271)
  • Core: Enable Robust Symlink Support for External Data for Huggingface Hub Cache. (#​27374)
  • Core: Added boundary checks for SparseTensorProtoToDenseTensorProto to improve robustness. (#​27323)
  • Security: Fixed an out-of-bounds read vulnerability in ArrayFeatureExtractor. (#​27275)

Execution Provider Updates

  • MLAS: Fixed flakiness and accuracy issues in Lut GEMM (MatMulNBitsLutGemm). (#​27216)
  • QNN: Enabled 64-bit UDMA mode for HTP target v81 or above. (#​26677)
  • WebGPU:
    • Used LazyRelease for prepack allocator. (#​27077)
    • Fixed ConvTranspose bias validation in both TypeScript and C++ implementations. (#​27213)
  • OpenVINO (OVEP): Patch to reduce resident memory by reusing weight files across shared contexts. (#​27238)
  • DNNL: Fixed DNNL build error by including missing files. (#​27334)

Build and Infrastructure

  • CUDA:
    • Added support for CUDA architecture family codes (suffix 'f') introduced in CUDA 12.9. (#​27278)
    • Fixed build errors and warnings for various CUDA versions (12.8, 13.0, 13.1.1). (#​27276)
    • Applied patches for Abseil CUDA warnings. (#​27096, #​27126)
  • Pipelines:
    • Fixed Python packaging pipeline for Windows ARM64 and release. (#​27339, #​27350, #​27299)
    • Fixed DirectML NuGet pipeline to correctly bundle x64 and ARM64 binaries for release. (#​27349)
    • Updated Microsoft.ML.OnnxRuntime.Foundry package for Windows ARM64 support and NuGet signing. (#​27294)
  • Testing: Updated BaseTester to support plugin EPs with both compiled nodes and registered kernels. (#​27176)
  • Telemetry: Added service name and framework name to telemetry events for better usage understanding on Windows. (#​27252, #​27256)

Full Changelog: v1.24.1...v1.24.2

Contributors

@​tianleiwu, @​hariharans29, @​edgchen1, @​xiaofeihan1, @​adrianlizarraga, @​angelser, @​angelserMS, @​ankitm3k, @​baijumeswani, @​bmehta001, @​ericcraw, @​eserscor, @​fs-eire, @​guschmue, @​mc-nv, @​qjia7, @​qti-monumeen, @​titaiwangms, @​yuslepukhin

1.24.1

📢 Announcements & Breaking Changes

Platform Support Changes

  • Python 3.10 wheels are no longer published — Please upgrade to Python 3.11+
  • Python 3.14 support added
  • Free-threaded Python (PEP 703) — Added support for Python 3.13t and 3.14t in Linux (#​26786)
  • x86_64 binaries for macOS/iOS are no longer provided and minimum macOS is raised to 14.0

API Version


✨ New Features

🤖 Execution Provider (EP) Plugin API

A major infrastructure enhancement enabling plugin-based EPs with dynamic loading:

  • Initial kernel-based EP support (#​26206)
  • Weight pre-packing support for plugin EPs (#​26754)
  • EP Context model support (#​25124)
  • Control flow kernel APIs (#​26927)
  • OrtKernelInfo APIs for kernel-based plugin EPs (#​26803)

🔧 Core APIs

  • OrtApi::CreateEnvWithOptions() and OrtEpApi::GetEnvConfigEntries() (#​26971)
  • EP Device Compatibility APIs (#​26922)
  • External Resource Importer API for D3D12 shared resources (#​26828)
  • Session config access from KernelInfo (#​26589)

📊 Dependencies & Integration


🖥️ Execution Provider Updates

NVIDIA

  • CUDA EP: Flash Attention updates, GQA kernel fusion, BF16 support for MoE/qMoE/MatMulNBits, CUDA 13.0 support
  • TensorRT EP: Upgraded to TensorRT 10.14, automatic plugin loading, NVFP4 custom ops
  • TensorRT RTX EP: RTX runtime caching, CUDA graph support, BFloat16, memory-mapped engines

Qualcomm QNN EP

  • QNN SDK upgraded to 2.42.0 with new ops (RMSNorm, ScatterElements, GatherND, STFT, RandomUniformLike)
  • Gelu pattern fusion, LPBQ quantization support, ARM64 wheel builds, v81 device support

Intel & AMD

  • OpenVINO EP: Upgraded to 2025.4.1
  • VitisAI EP: External EP loader, compiled model compatibility API
    ... (truncated)

1.23.2

1.23.1

What's Changed

  • Fix Attention GQA implementation on CPU (#​25966)
  • Address edge GetMemInfo edge cases (#​26021)
  • Implement new Python APIs (#​25999)
  • MemcpyFromHost and MemcpyToHost support for plugin EPs (#​26088)
  • [TRT RTX EP] Fix bug for generating the correct subgraph in GetCapability (#​26132)
  • add session_id_ to LogEvaluationStart/Stop, LogSessionCreationStart (#​25590)
  • [build] fix WebAssembly build on macOS/arm64 (#​25653)
  • [CPU] MoE Kernel (#​25958)
  • [CPU] Block-wise QMoE kernel for CPU (#​26009)
  • [C#] Implement missing APIs (#​26101)
  • Regenerate test model with ONNX IR < 12 (#​26149)
  • [CPU] Fix compilation errors because of unused variables (#​26147)
  • [EP ABI] Check if nodes specified in GetCapability() have already been assigned (#​26156)
  • [QNN EP] Add dynamic option to set HTP performance mode (#​26135)

Full Changelog: microsoft/onnxruntime@v1.23.0...v1.23.1

1.23.0

Announcements

  • This release introduces Execution Provider (EP) Plugin API, which is a new infrastructure for building plugin-based EPs. (#​24887 , #​25137, #​25124, #​25147, #​25127, #​25159, #​25191, #​2524)

  • This release introduces the ability to dynamically download and install execution providers. This feature is exclusively available in the WinML build and requires Windows 11 version 25H2 or later. To leverage it, C/C++/C# users should use the builds distributed through the Windows App SDK, and Python users should install the onnxruntime-winml package (to be published soon). We encourage users who can upgrade to the latest Windows 11 to use the WinML build to take advantage of this enhancement.

Upcoming Changes

  • The next release will stop providing x86_64 binaries for macOS and iOS operating systems.
  • The next release will increase the minimum supported macOS version from 13.4 to 14.0.
  • The next release will stop providing Python 3.10 wheels.

Execution & Core Optimizations

Shutdown logic on Windows is simplified

On Windows, some global objects are no longer destroyed when ONNX Runtime detects that the process is shutting down (#​24891). This does not leak memory: when a process exits, all of its memory is returned to the operating system. The change reduces the chance of crashes on process exit.

AutoEP/Device Management

ONNX Runtime can now automatically discover computing devices and select the best EPs to download and register. The EP downloading feature currently only works on Windows 11 version 25H2 or later.

Execution Provider (EP) Updates

The ROCm EP was removed from the source tree. Users are recommended to use the MIGraphX or Vitis AI EPs from AMD.
A new EP, NVIDIA TensorRT RTX, was added.

Web

The Emscripten SDK (emsdk) is upgraded from 4.0.4 to 4.0.8.

WebGPU EP

Added WGSL template support.

QNN EP

SDK Update: Added support for QNN SDK 2.37.

KleidiAI

Enhanced performance for SGEMM, IGEMM, and Dynamic Quantized MatMul operations, especially for Conv2D operators on hardware that supports SME2 (Scalable Matrix Extension v2).

Known Problems

  • A KleidiAI-related change in build.py may cause build failures when cross-compiling (#​26175).

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

@​1duo, @​Akupadhye, @​amarin16, @​AndreyOrb, @​ankan-ban, @​ankitm3k, @​anujj, @​aparmp-quic, @​arnej27959, @​bachelor-dou, @​benjamin-hodgson, @​Bonoy0328, @​chenweng-quic, @​chuteng-quic, @​clementperon, @​co63oc, @​daijh, @​damdoo01-arm, @​danyue333, @​fanchenkong1, @​gedoensmax, @​genarks, @​gnedanur, @​Honry, @​huaychou, @​ianfhunter, @​ishwar-raut1, @​jing-bao, @​joeyearsley, @​johnpaultaken, @​jordanozang, @​JulienMaille, @​keshavv27, @​kevinch-nv, @​khoover, @​krahenbuhl, @​kuanyul-quic, @​mauriciocm9, @​mc-nv, @​minfhong-quic, @​mingyueliuh, @​MQ-mengqing, @​NingW101, @​notken12, @​omarhass47, @​peishenyan, @​pkubaj, @​qc-tbhardwa, @​qti-jkilpatrick, @​qti-yuduo, @​quic-ankus, @​quic-ashigarg, @​quic-ashwshan, @​quic-calvnguy, @​quic-hungjuiw, @​quic-tirupath, @​qwu16, @​ranjitshs, @​saurabhkale17, @​schuermans-slx, @​sfatimar, @​stefantalpalaru, @​sunnyshu-intel, @​TedThemistokleous, @​thevishalagarwal, @​toothache, @​umangb-09, @​vatlark, @​VishalX, @​wcy123, @​xhcao, @​xuke537, @​zhaoxul-qti

1.22.2

What's new?

This release adds an optimized CPU/MLAS implementation of DequantizeLinear (8 bit) and introduces the build option client_package_build, which enables default options that are more appropriate for client/on-device workloads (e.g., disable thread spinning by default).
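The DequantizeLinear operator mentioned above follows the ONNX definition y = (x - zero_point) * scale, applied element-wise. A minimal Python sketch of those semantics (illustration only; the optimized MLAS kernel in this release computes the same result with SIMD and multithreading):

```python
# Reference sketch of ONNX DequantizeLinear for 8-bit input:
# y = (x - zero_point) * scale, applied element-wise.
# Illustration of the operator's semantics, not the MLAS implementation.

def dequantize_linear(x, scale, zero_point=0):
    """Dequantize a sequence of int8/uint8 values to floats."""
    return [(v - zero_point) * scale for v in x]

# int8 inputs spanning the full range
values = dequantize_linear([-128, 0, 127], scale=0.05, zero_point=0)
```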

Build System & Packages

  • Add --client_package_build option (#​25351) - @​jywu-msft
  • Remove the python installation steps from win-qnn-arm64-ci-pipeline.yml (#​25552) - @​snnn

CPU EP

  • Add multithreaded/vectorized implementation of DequantizeLinear for int8 and uint8 inputs (SSE2, NEON) (#​24818) - @​adrianlizarraga

QNN EP

  • Add support for the Upsample, Einsum, LSTM, and CumSum operators (#​24265, #​24616, #​24646, #​24820) - @​quic-zhaoxul, @​1duo, @​chenweng-quic, @​Akupadhye
  • Fuse scale into Softmax (#​24809) - @​qti-yuduo
  • Enable DSP queue polling when performance is set to “burst” mode (#​25361) - @​quic-calvnguy
  • Update QNN SDK to version 2.36.1 (#​25388) - @​qti-jkilpatrick
  • Include the license file from the QNN SDK in the Microsoft.ML.OnnxRuntime.QNN NuGet package (#​25158) - @​HectorSVC

1.22.1

What's new?

This release replaces static linking of dxcore.lib with optional runtime loading, lowering the minimum supported version from Windows 10 22H2 (10.0.22621) to 20H1 (10.0.19041). This enables compatibility with Windows Server 2019 (10.0.17763), where dxcore.dll may be absent.

  • change dependency from gitlab eigen to github eigen-mirror #​24884 - @​prathikr
  • Weaken dxcore dependency #​24845 - @​skottmckay
  • [DML] Restore compatibility with Windows Sdk 10.0.17134.0 #​24950 - @​JulienMaille
  • Disable VCPKG's binary cache #​24889 - @​snnn

Commits viewable in compare view.

Updated Microsoft.ML.Tokenizers from 1.0.2 to 2.0.0.

Release notes

Sourced from Microsoft.ML.Tokenizers's releases.

1.7.1

Minor servicing update with dependency updates and a PFI bug fix for finding the correct transformer to use.

1.7.0-rc.1

ML.NET 1.7.0 RC 1

Moving forward, we are going to align more closely with the overall .NET release schedule. As such, this is a smaller release, since we had a larger one just about 3 months ago, but it aligns us with the release of .NET 6.

New Features

ML.NET

  • Switched to getting version from assembly custom attributes (#​4512) Removes the reliance on reading the product version for model.zip/version.txt from FileVersionInfo, using assembly custom attributes instead. This will help in supporting single-file applications. (Thanks @​r0ss88)
  • Can now optionally keep the underlying model when you dispose a prediction engine. (#​5964) A new prediction engine options class has been added that lets you decide whether the underlying model should be disposed of when the prediction engine itself is disposed.
  • Can now set the number of threads that onnx runtime uses (#​5962) This lets you specify the number of parallel threads ONNX runtime will use to execute the graph and run the model. (Thanks @​yaeldekel)
  • The PFI API has been completely reworked and is now much more user friendly (#​5934) You can now get the output from PFI as a dictionary mapping the column name (or the slot name) to its PFI result.

DataFrame

  • Can now merge using multiple columns in a JOIN condition (#​5838) (Thanks @​asmirnov82)

Enhancements

ML.NET

  • Run formatting on all src projects (#​5937) (Thanks @​jwood803)
  • Added BufferedStream for reading from DeflateStream - reduces loading time for .NET core (#​5924) (Thanks @​martintomasek)
  • Update editor config to match Roslyn and format samples (#​5893) (Thanks @​jwood803)
  • Few more minor editor config changes (#​5933)

DataFrame

  • Use Equals and = operator for DataViewType comparison (#​5942) (Thanks @​thoron)

Bug Fixes

  • Initialize _bestMetricValue when using the Loss metric (#​5939) (Thanks @​MiroslavKabat)

Build / Test updates

  • Changed the queues used for building/testing from Ubuntu 16.04 to 18.04 (#​5970)
  • Add in support for building with VS 2022. (#​5956)
  • Codecov yml token was added (#​5950)
  • Move from XliffTasks to Microsoft.DotNet.XliffTasks (#​5887)

Documentation Updates

  • Fixed up Readme, updated the roadmap, and new doc detailing some platform limitations. (#​5892)

Breaking Changes

  • None

1.6.0

ML.NET 1.6.0

New Features

  • Support for Arm/Arm64/Apple Silicon has been added. (#​5789) You can now use most of ML.NET on Arm/Arm64/Apple Silicon devices. Anything without a hard dependency on x86 SIMD instructions or Intel MKL is supported.
  • Support for specifying a temp path ML.NET will use. (#​5782) You can now set the TempFilePath in the MLContext that it will use.
  • Support for specifying the recursion limit to use when loading an ONNX model (#​5840) The recursion limit defaults to 100, but you can now specify the value in case you need to use a larger number. (Thanks @​Crabzmatic)
  • Support for saving Tensorflow models in the SavedModel format added (#​5797) You can now save models that use the Tensorflow SavedModel format instead of just the frozen graph format. (Thanks @​darth-vader-lg)
  • DataFrame Specific enhancements
  • Extended DataFrame GroupBy operation (#​5821) Extend DataFrame GroupBy operation by adding new property Groupings. This property returns collection of IGrouping objects (the same way as LINQ GroupBy operation does) (Thanks @​asmirnov82)

Enhancements

  • Switched from using a fork of SharpZipLib to using the official package (#​5735)
  • Let user specify a temp path location (#​5782)
  • Clean up ONNX temp models by opening with a "Delete on close" flag (#​5782)
  • Ensures the named model is loaded in a PredictionEnginePool before use (#​5833) (Thanks @​feiyun0112)
  • Use indentation for 'if' (#​5825) (Thanks @​feiyun0112)
  • Use Append instead of AppendFormat if we don't need formatting (#​5826) (Thanks @​feiyun0112)
  • Cast by using is operator (#​5829) (Thanks @​feiyun0112)
  • Removed unnecessary return statements (#​5828) (Thanks @​feiyun0112)
  • Removed code that could never be executed (#​5808) (Thanks @​feiyun0112)
  • Remove some empty statements (#​5827) (Thanks @​feiyun0112)
  • Added in short-circuit logic for conditionals (#​5824) (Thanks @​feiyun0112)
  • Update LightGBM to v2.3.1 (#​5851)
  • Raised the default recursion limit for ONNX models from 10 to 100. (#​5796) (Thanks @​darth-vader-lg)
  • Speed up the inference of the Tensorflow saved_models. (#​5848) (Thanks @​darth-vader-lg)
  • Speed-up bitmap operations on images. (#​5857) (Thanks @​darth-vader-lg)
  • Updated to latest version of Intel MKL. (#​5867)
  • AutoML.NET specific enhancements
  • Offer suggestions for possibly mistyped label column names in AutoML (#​5624) (Thanks @​Crabzmatic)
  • DataFrame Specific enhancements
  • Improve csv parsing (#​5711)
  • IDataView to DataFrame (#​5712)
  • Update to the latest Microsoft.DotNet.Interactive (#​5710)
  • Move DataFrame to machinelearning repo (#​5641)
  • Improvements to the sort routine (#​5776)
  • Improvements to the Merge routine (#​5778)
  • Improve DataFrame exception text (#​5819) (Thanks @​asmirnov82)
  • DataFrame csv DateTime enhancements (#​5834)

Bug Fixes

  • Fix erroneous use of TaskContinuationOptions in ThreadUtils.cs (#​5753)
  • Fix a few locations that can try to access a null object (#​5804) (Thanks @​feiyun0112)
  • Use return value of method (#​5818) (Thanks @​feiyun0112)
  • Adding throw to some exceptions that weren't throwing them originally (#​5823) (Thanks @​feiyun0112)
  • Fixed a situation in the CountTargetEncodingTransformer where it never reached the stop condition (#​5822) (Thanks @​feiyun0112)
  • DataFrame Specific bug fixes
  • Fix issue with DataFrame Merge method (#​5768) (Thanks @​asmirnov82)

... (truncated)

1.5.5

New Features

  • New API allowing the confidence parameter to be a double. (#​5623) A new API has been added to accept a double for the confidence level. This helps when you need higher precision than an int allows. (Thank you @​esso23)
  • Support to export ValueMapping estimator to ONNX was added (#​5577)
  • New API to treat TensorFlow output as batched/not-batched (#​5634) A new API has been added so you can specify if the output from TensorFlow is batched or not.

Enhancements

  • Make ColumnInference serializable (#​5611)

Bug Fixes

  • AutoML.NET specific fixes.
    • Fixed an AutoML aggregate timeout exception (#​5631)
    • Offer suggestions for possibly mistyped label column names in AutoML (#​5624) (Thank you @​Crabzmatic)
  • Update some ToString conversions (#​5627) (Thanks @​4201104140)
  • Fixed an issue in SRCnnEntireAnomalyDetector (#​5579)
  • Fixed nuget.config multi-feed issue (#​5614)
  • Remove references to Microsoft.ML.Scoring (#​5602)
  • Fixed Averaged Perceptron default value (#​5586)

Build / Test updates

  • Fixing official build by adding homebrew bug workaround (#​5596)
  • Nuget.config url fix for roslyn compilers (#​5584)
  • Add SymSgdNative reference to AutoML.Tests.csproj (#​5559)

Documentation Updates

  • Updated documentation for the correct version of CUDA for TensorFlow. (#​5635)
  • Updated documentation for an issue with brew and installing libomp. (#​5635)
  • Updated an ONNX url to the correct url. (#​5635)
  • Added a note in the documentation that the PredictionEngine is not thread safe. (#​5583)

Breaking Changes

  • None

1.5.4

New Features

  • New API for exporting models to Onnx. (#​5544). A new API has been added to Onnx converter to specify the output columns you care about. This will export a smaller and more performant model in many cases.

Enhancements

  • Perf improvement for TopK Accuracy and return all topK in Classification Evaluator (#​5395) (Thank you @​jasallen)
  • Update OnnxRuntime to 1.6 (#​5529)
  • Updated tensorflow.net to 0.20.0 (#​5404)
  • Added in DcgTruncationLevel to AutoML api and increased default level to 10 (#​5433)

Bug Fixes

  • AutoML.NET specific fixes.
    • Fixed AutoFitMaxExperimentTimeTest (#​5506)
    • Fixed code generator tests failure (#​5520)
    • Use Timer and ctx.CancelExecution() to fix AutoML max-time experiment bug (#​5445)
    • Handled exception during GetNextPipeline for AutoML (#​5455)
    • Fixed internationalization bug(#​5162) in AutoML parameter sweeping caused by culture dependent float parsing. (#​5163)
    • Fixed MaxModels exit criteria for AutoML unit test (#​5471)
    • Fixed AutoML CrossValSummaryRunner for TopKAccuracyForAllK (#​5548)
  • Fixed bug in Tensorflow Transforer with handling primitive types (#​5547)
  • Fixed MLNet.CLI build error (#​5546)
  • Fixed memory leaks from OnnxTransformer (#​5518)
  • Fixed memory leak in object pool (#​5521)
  • Fixed Onnx Export for ProduceWordBags (#​5435)
  • Upgraded boundary calculation and expected value calculation in SrCnnEntireAnomalyDetector (#​5436)
  • Fixed SR anomaly score calculation at beginning (#​5502)
  • Improved error message in ColumnConcatenatingEstimator (#​5444)
  • Fixed issue 5020, allow ML.NET to load tf model with primitive input and output column (#​5468)
  • Fixed issue 4322, enable lda summary output (#​5260)
  • Fixed perf regression in ShuffleRows (#​5417)
  • Change the _maxCalibrationExamples default on CalibratorUtils (#​5415)

Build / Test updates

  • Migrated to the Arcade build system that is used by multiple dotnet projects. This will give increased build/CI efficiencies going forward. Updated build instructions can be found in the docs/building folder
  • Fixed MacOS builds (#​5467 and #​5457)

Documentation Updates

  • Fixed Spelling on stopwords (#​5524)(Thank you @​LeoGaunt)
  • Changed LoadRawImages Sample (#​5460)

Breaking Changes

  • None

1.5.2

New Features

  • New API and algorithms for time series data. In this release ML.NET introduces new capabilities for working with time series data.
    • Detecting seasonality in time series (#​5231)
    • Removing seasonality from time series prior to anomaly detection (#​5202)
    • Threshold for root cause analysis (#​5218)
    • RCA for anomaly detection can now return multiple dimensions(#​5236)
  • Ranking experiments in AutoML.NET API. ML.NET now adds support for automating ranking experiments. (#​5150, #​5246) Corresponding support will soon be added to Model Builder in Visual Studio.
  • Cross validation support in ranking (#​5263)
  • CountTargetEncodingEstimator. This transforms a categorical column into a set of features that includes the count of each label class, the log-odds for each label class and the back-off indicator (#​4514)
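To make the CountTargetEncodingEstimator bullet concrete: for a given categorical value it emits, per label class, a count feature and a log-odds feature. A hypothetical sketch of that idea (the smoothing prior and exact formula here are assumptions, not ML.NET's implementation):

```python
# Hypothetical sketch of count-target-encoding features for one categorical
# value: per label class, the raw count and a smoothed log-odds.
# Illustrates the idea only; ML.NET's estimator differs in details.
import math

def count_encode(counts, prior=1.0):
    """counts: occurrences of each label class for a given category value."""
    total = sum(counts)
    features = []
    for c in counts:
        # Smoothed log-odds of this class versus the rest.
        log_odds = math.log((c + prior) / (total - c + prior))
        features.append((c, log_odds))
    return features

feats = count_encode([30, 10])  # category seen 30x with class 0, 10x with class 1
```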

Enhancements

  • Onnx Enhancements
    • Support more types for ONNX export of HashEstimator (#​5104)
    • Added ONNX export support for NaiveCalibrator (#​5289)
    • Added ONNX export support for StopWordsRemovingEstimator and CustomStopWordsRemovingEstimator (#​5279)
    • Support onnx export with previous OpSet version (#​5176)
    • Added a sample for Onnx conversion (#​5195)
  • New features in old transformers
    • Robust Scaler now added to the Normalizer catalog (#​5166)
    • ReplaceMissingValues now supports Mode as a replacement method. (#​5205)
    • Added in standard conversions to convert types to string (#​5106)
  • Output topic summary to model file for LDATransformer (#​5260)
  • Use Channel Instead of BufferBlock (#​5123, #​5313). (Thanks @​jwood803)
  • Support specifying command timeout while using the database loader (#​5288)
  • Added cross entropy support to validation training, edited metric reporting (#​5255)
  • Allow TextLoader to load empty float/double fields as NaN instead of 0 (#​5198)
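The last enhancement above changes how empty numeric fields parse. A tiny sketch of that behavior (`parse_float_field` is a hypothetical helper for illustration, not ML.NET's TextLoader code):

```python
# Empty float/double fields become NaN instead of 0.
def parse_float_field(text):
    # An empty (or whitespace-only) field yields NaN; anything else is parsed.
    return float(text) if text.strip() else float("nan")

values = [parse_float_field(f) for f in "1.5,,2.0".split(",")]
```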

Bug Fixes

  • Changed default value of RowGroupColumnName from null to GroupId (#​5290)
  • Updated AveragedPerceptron default iterations from 1 to 10 (#​5258)
  • Properly normalize column names in Utils.GetSampleData() for duplicate cases (#​5280)
  • Add two-variable scenario in Tensor shape inference for TensorflowTransform (#​5257)
  • Fixed score column name and order bugs in CalibratorTransformer (#​5261)
  • Fix for conditional error in root cause analysis additions (#​5269)
  • Ensured Sanitized Column Names are Unique in AutoML CLI (#​5177)
  • Ensure that the graph is set to be the current graph when scoring with multiple models (#​5149)
  • Uniform onnx conversion method when using non-default column names (#​5146)
  • Fixed multiple issues related to splitting data. (#​5227)
  • Changed default NGram length from 1 to 2. (#​5248)
  • Improve exception msg by adding column name (#​5232)
  • Use model schema type instead of class definition schema (#​5228)
  • Use GetRandomFileName when creating random temp folder to avoid conflict (#​5229)
  • Filter anomalies according to boundaries under AnomalyAndMargin mode (#​5212)
  • Improve error message when defining custom type for variables (#​5114)
  • Fixed OnnxTransformer output column mapping. (#​5192)
  • Fixed version format of built packages (#​5197)
  • Improvements to "Invalid TValue" error message (#​5189)
  • Added IDisposable to OnnxTransformer and fixed memory leaks (#​5348)
  • Fixes #​4392. Added AddPredictionEnginePool overload for implementation factory (#​4393)
    ... (truncated)

1.5.0

New Features

  • New anomaly detection algorithm (#​5135). ML.NET has previously supported anomaly detection through DetectAnomalyBySrCnn. This function operates in a streaming manner by computing anomalies around each arriving point and examining a window around it. Now we introduce a new function DetectEntireAnomalyBySrCnn that computes anomalies by considering the entire dataset and also supports the ability to set sensitivity and output margin.
  • Root Cause Detection (#​4925) ML.NET now also supports root cause detection for anomalies detected in time series data.

Enhancements

  • Updates to TextLoader
    • Enable TextLoader to accept new lines in quoted fields (#​5125)
    • Add escapeChar support to TextLoader (#​5147)
    • Add public generic methods to TextLoader catalog that accept Options objects (#​5134)
    • Added decimal marker option in TextLoader (#​5145, #​5154)
  • Onnxruntime updated to v1.3 (#​5104). This brings support for additional data types for the HashingEstimator.
  • Onnx export for OneHotHashEncodingTransformer and HashingTransformer (#​5013, #​5152, #​5138)
  • Support for Categorical features in CalculateFeatureContribution of LightGBM (#​5018)

Bug Fixes

In this release we tracked down bugs that occurred randomly or sporadically and fixed many subtle issues. As a result, we have also re-enabled a lot of tests listed in the Test Updates section below.

  • Fixed race condition for test MulticlassTreeFeaturizedLRTest (#​4950)
  • Fix SsaForecast bug (#​5023)
  • Fixed x86 crash (#​5081)
  • Fixed and added unit tests for EnsureResourceAsync hanging issue (#​4943)
  • Added IDisposable support for several classes (#​4939)
  • Updated libmf and corresponding MatrixFactorizationSimpleTrainAndPredict() baselines per build (#​5121)
  • Fix MatrixFactorization trainer's warning (#​5071)
  • Update CodeGenerator's console project to netcoreapp3.1 (#​5066)
  • Let ImageLoadingTransformer dispose the last image it loads (#​5056)
  • [LightGBM] Fixed bug for empty categorical values (#​5048)
  • Converted potentially large variables to type long (#​5041)
  • Made resource downloading more robust (#​4997)
  • Updated MultiFileSource.Load to fix inconsistent behavior with multiple files (#​5003)
  • Removed WeakReference already cleaned up by GC (#​4995)
  • Fixed Bitmap(file) locking the file. (#​4994)
  • Remove WeakReference list in PredictionEnginePoolPolicy. (#​4992)
  • Added the assembly name of the custom transform to the model file (#​4989)
  • Updated constructor of ImageLoadingTransformer to accept empty imageFolder paths (#​4976)

Onnx bug fixes

  • ColumnSelectingTransformer now infers ONNX shape (#​5079)
  • Fixed KMeans scoring differences between ORT and OnnxRunner (#​4942)
  • CountFeatureSelectingEstimator no selection support (#​5000)
  • Fixes OneHotEncoding Issue (#​4974)
  • Fixes multiclass logistic regression (#​4963)
  • Adding vector tests for KeyToValue and ValueToKey (#​5090)

AutoML fixes

  • Handle NaN optimization metric in AutoML (#​5031)
  • Add projects capability in CodeGenerator (#​5002)
  • Simplify CodeGen - phase 2 (#​4972)
  • Support sweeping multiline option in AutoML (#​5148)

... (truncated)

1.5.0-preview2

New Features (IN-PREVIEW, please provide feedback)

  • TimeSeriesImputer (#​4623) This data transformer can be used to impute missing rows in time series data.
  • LDSVM Trainer (#​4060) The "Local Deep SVM" uses trees as its SVM kernel to create a non-linear binary trainer. A sample can be found here.
  • Onnxruntime updated to v1.2 This also includes support for GPU execution of onnx models
  • Export-to-ONNX for below components:

Bug Fixes

  • Fix issue in WaiterWaiter caused by race condition (#​4829)
  • Onnx Export change to allow for running inference on multiple rows in OnnxRuntime (#​4783)
  • Data splits to default to MLContext seed when not specified (#​4764)
  • Add Seed property to MLContext and use as default for data splits (#​4775)
  • Onnx bug fixes
    • Updating onnxruntime version (#​4882)
    • Calculate ReduceSum row by row in ONNX model from OneVsAllTrainer (#​4904)
    • Several onnx export fixes related to KeyToValue and ValueToKey transformers (#​4900, #​4866, #​4841, #​4889, #​4878, #​4797)
    • Fixes to onnx export for text related transforms (#​4891, #​4813)
    • Fixed bugs in OptionalColumnTransform and ColumnSelecting (#​4887, #​4815)
    • Alternate solution for ColumnConcatenatingTransformer (#​4875)
    • Added slot names support for OnnxTransformer (#​4857)
    • Fixed output schema of OnnxTransformer (#​4849)
    • Changed Binarizer node to be cast to the type of the predicted label … (#​4818)
    • Fix for OneVersusAllTrainer (#​4698)
    • Enable OnnxTransformer to accept KeyDataViewTypes as if they were UInt32 (#​4824)
    • Fix off by 1 error with the cats_int64s attribute for the OneHotEncoder ONNX operator (#​4827)
    • Updated handling of missing values with LightGBM, and added ability to use (0) as missing value (#​4695)
    • Double cast to float for some onnx estimators (#​4745)
    • Fix onnx output name for GcnTransform (#​4786)
  • Added support to run PFI on uncalibrated binary classification models (#​4587)
  • Fix bug in WordBagEstimator when training on empty data (#​4696)
  • Added Cancellation mechanism to Image Classification (through the experimental nuget) (fixes #​4632) (#​4650)
  • Changed F1 score to return 0 instead of NaN when Precision + Recall is 0 (#​4674)
  • TextLoader, BinaryLoader and SvmLightLoader now check the existence of the input file before training (#​4665)
  • ImageLoadingTransformer now checks the existence of input folder before training (#​4691)
  • Use random file name for AutoML experiment folder (#​4657)
  • Using invariance culture when converting to string (#​4635)
  • Fix NullReferenceException when it comes to Recommendation in AutoML and CodeGenerator (#​4774)
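One of the fixes above changes F1 to return 0 rather than NaN when precision + recall is 0; the guarded formula can be sketched as follows (hypothetical helper, for illustration only):

```python
# F1 = 2PR / (P + R), guarded so that P + R == 0 yields 0.0 instead of NaN.
def f1_score(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```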

Enhancements

  • Added in support for System.DateTime type for the DateTimeTransformer (#​4661)
  • Additional changes to ExpressionTransformer (#​4614)
  • Optimize generic MethodInfo for Func (#​4588)
    ... (truncated)

1.5.0-preview

New Features (IN-PREVIEW, please provide feedback)

  • Export-to-ONNX for below components:

    • WordTokenizingTransformer (#​4451)
    • NgramExtractingTransformer (#​4451)
    • OptionalColumnTransform (#​4454)
    • KeyToValueMappingTransformer (#​4455)
    • LbfgsMaximumEntropyMulticlassTrainer (#​4462)
    • LightGbmMulticlassTrainer (#​4462)
    • LightGbmMulticlassTrainer with SoftMax (#​4462)
    • OneVersusAllTrainer (#​4462)
    • SdcaMaximumEntropyMulticlassTrainer (#​4462)
    • SdcaNonCalibratedMulticlassTrainer (#​4462)
    • CopyColumn Transform (#​4486)
    • PriorTrainer (#​4515)
  • DateTime Transformer (#​4521)

  • Loader and Saver for SVMLight file format (#​4190)
    Sample

  • Expression transformer (#​4548)
    The expression transformer takes the expression in the form of text using syntax of a simple expression language, and performs the operation defined in the expression on the input columns in each row of the data. The transformer supports having a vector input column, in which case it applies the expression to each slot of the vector independently. The expression language is extendable to user defined operations.
    Sample
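As a toy illustration of the behavior described above (an expression applied to each row, and independently to each slot of a vector column), here is a sketch that mimics the idea only; it is not ML.NET's expression language:

```python
# Apply an expression per row; vector ("slot") columns are mapped element-wise.
def apply_expression(column, expr):
    out = []
    for value in column:
        if isinstance(value, list):          # vector column: apply per slot
            out.append([expr(v) for v in value])
        else:                                # scalar column
            out.append(expr(value))
    return out

result = apply_expression([2, [3, 4], 5], lambda x: x * x + 1)
```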

Bug Fixes

  • Fix using permutation feature importance with Binary Prediction Transformer and CalibratedModelParametersBase loaded from disk. (#​4306)
  • Fixed model saving and loading of OneVersusAllTrainer to include SoftMax. (#​4472)
  • Ignore hidden columns in AutoML schema checks of validation data. (#​4490)
  • Ensure BufferBlocks are completed and empty in RowShufflingTransformer. (#​4479)
  • Create methods not being called when loading models from disk. (#​4485)
  • Fixes onnx exports for binary classification trainers. (#​4463)
  • Make PredictionEnginePool.GetPredictionEngine thread safe. (#​4570)
  • Memory leak when using FeaturizeText transform. (#​4576)
  • System.ArgumentOutOfRangeException issue in CustomStopWordsRemovingTransformer. (#​4592)
  • Image Classification low accuracy on EuroSAT Dataset. (#​4522)

Stability fixes by Sam Harwell

  • Prevent exceptions from escaping FileSystemWatcher events. (#​4535)
  • Make local functions static where applicable. (#​4530)
  • Disable CS0649 in OnnxConversionTest. (#​4531)
  • Make test methods public. (#​4532)
  • Conditionally compile helper code. (#​4534)
  • Avoid running API Compat for design time builds. (#​4529)
  • Pass by reference when null is not expected. (#​4546)
  • Add Xunit.Combinatorial for test projects. (#​4545)
  • Use Theory to break up tests in OnnxConversionTest. (#​4533)
  • Update code coverage integration. (#​4543)
  • Use std::unique_ptr for objects in LdaEngine. (#​4547)
  • Enable VSTestBlame to show details for crashes. (#​4537)
  • Use std::unique_ptr for samplers_ and likelihood_in_iter_. (#​4551)
  • Add tests for IParameterValue implementations. (#​4549)
  • Convert LdaEngine to a SafeHandle. (#​4538)
    ... (truncated)

1.4.0

New Features

  • General Availability of Image Classification API
    Introduces the Microsoft.ML.Vision package, which enables image classification by leveraging an existing pre-trained deep neural network model. The API trains the last classification layer using TensorFlow via its C# bindings from TensorFlow .NET. This is a high-level API that is simple yet powerful. Key features include:

    • GPU training: Supported on Windows and Linux, more information here.
    • Early stopping: Saves time by stopping training automatically once the model has stabilized.
    • Learning rate scheduler: The learning rate is an integral and potentially difficult part of deep learning. Learning rate schedulers let users start with a high initial learning rate that decays over time: the high initial rate introduces randomness into the system, helping the loss function find the global minimum, while the decayed rate stabilizes the loss over time. We have implemented an exponential-decay learning rate scheduler and a polynomial-decay learning rate scheduler.
    • Pre-trained DNN Architectures: The supported DNN architectures used internally for transfer learning are below:
      • Inception V3.
      • ResNet V2 101.
      • ResNet V2 50.
      • MobileNet V2.

    Example code:

    var pipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(
                    featureColumnName: "Image", labelColumnName: "Label");
    
    ITransformer trainedModel = pipeline.Fit(trainDataView);

    Samples:

    • Defaults
    • Learning rate scheduling
    • Early stopping
    • ResNet V2 101 train-test split
    • End-to-End

  • General Availability of Database Loader
    The database loader loads data from relational databases into an IDataView, enabling model training directly against a database. It supports any relational database provider supported by System.Data in .NET Core or .NET Framework, so you can use any RDBMS such as SQL Server, Azure SQL Database, Oracle, SQLite, PostgreSQL, MySQL, or Progress.

    As when training from files, training from a database also supports data streaming: the whole database doesn't need to fit into memory, because rows are read from the database as needed, so you can handle very large databases (e.g. 50 GB, 100 GB or larger).

    Example code:

    //Lines of code for loading data from a database into an IDataView for a later model training
    //...
    string connectionString = @"Data Source=YOUR_SERVER;Initial Catalog= YOUR_DATABASE;Integrated Security=True";
    
    string commandText = "SELECT * from SentimentDataset";
    
    DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader();
    DbProviderFactory providerFactory = DbProviderFactories.GetFactory("System.Data.SqlClient");
    DatabaseSource dbSource = new DatabaseSource(providerFactory, connectionString, commandText);

... (truncated)
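
    The snippet above is cut off before the data is actually loaded. A hedged sketch of the step that would typically follow (this continuation is an assumption, not the original truncated code):

    ```csharp
    // Assumed continuation: materialize the query as an IDataView.
    // From this point the data streams from the database on demand,
    // so the full table never needs to fit in memory.
    IDataView dataView = loader.Load(dbSource);
    ```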

1.4.0-preview2

New Features

Bug Fixes

  • OnnxSequenceType and ColumnName attributes do not work together (#​4187)
  • Fix memory leak in TensorflowTransformer (#​4223)
  • Enable permutation feature importance to be used with model loaded from disk (#​4262)
  • IsSavedModel returns true when loaded TensorFlow model is a frozen model (#​4262)
  • Exception when using the OnnxSequenceType attribute directly without specifying the sequence type (#​4272, #​4297)

Samples

  • TensorFlow full model retrain sample (#​4127)

Breaking Changes

None.

Obsolete API

Enhancements

  • Improve exception message in LightGBM (#​4214)
  • FeaturizeText should allow only outputColumnName to be defined (#​4211)
  • Fix NgramExtractingTransformer GetSlotNames to not allocate a new delegate on every invoke (#​4247)
  • Resurrect broken code coverage build and re-enable code coverage for pull request (#​4261)
  • NimbusML entrypoint for permutation feature importance (#​4232)
  • Reuse memory when copying outputs from TensorFlow graph (#​4260)
  • DateTime to DateTime standard conversion (#​4273)
  • CodeCov version upgraded to 1.7.2 (#​4291)

CLI and AutoML API

None.

Remarks

... (truncated)

1.4.0-preview

New Features

  • Deep Neural Networks Training (0.16.0-preview) (#​4151)

    Improves the in-preview ImageClassification API further:

    • Increases DNN training speed by ~10x compared to the same API in the 0.15.1 release.
    • Prevents repeated computations by caching featurized image values from intermediate layers to disk, which are then used to train the final fully-connected layer.
    • Reduced and constant memory footprint.
    • Simplifies the API by not requiring the user to pre-process the image.
    • Introduces a callback to provide metrics during training, such as accuracy and cross-entropy.
    • Improved image classification sample.
          public static ImageClassificationEstimator ImageClassification(
              this ModelOperationsCatalog catalog,
              string featuresColumnName,
              string labelColumnName,
              string scoreColumnName = "Score",
              string predictedLabelColumnName = "PredictedLabel",
              Architecture arch = Architecture.InceptionV3,
              int epoch = 100,
              int batchSize = 10,
              float learningRate = 0.01f,
              ImageClassificationMetricsCallback metricsCallback = null,
              int statisticFrequency = 1,
              DnnFramework framework = DnnFramework.Tensorflow,
              string modelSavePath = null,
              string finalModelPrefix = "custom_retrained_model_based_on_",
              IDataView validationSet = null,
              bool testOnTrainSet = true,
              bool reuseTrainSetBottleneckCachedValues = false,
              bool reuseValidationSetBottleneckCachedValues = false,
              string trainSetBottleneckCachedValuesFilePath = "trainSetBottleneckFile.csv",
              string validationSetBottleneckCachedValuesFilePath = "validationSetBottleneckFile.csv"
              )

    Design specification

    Sample
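
    A minimal call of the signature shown above, relying on its defaults for most parameters (a sketch; the column names and hyperparameter values are illustrative, not from the release notes):

    ```csharp
    using Microsoft.ML;

    var mlContext = new MLContext();

    // The extension method hangs off ModelOperationsCatalog, i.e. mlContext.Model.
    // Only the two required column names are supplied; arch, epoch, batchSize,
    // learningRate, etc. fall back to the defaults in the signature above.
    var pipeline = mlContext.Model.ImageClassification(
        featuresColumnName: "Image",
        labelColumnName: "Label");
    ```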

  • Database Loader (0.16.0-preview) (#​4070,#​4091,#​4138)

    Additional DatabaseLoader support:

    • Support DBNull.
    • Add CreateDatabaseLoader<TInput> to map columns from a .NET Type.
    • Read multiple columns into a single vector

    Design specification

... (truncated)
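
The typed CreateDatabaseLoader<TInput> mentioned above can be sketched as follows (the row type, column names, and ordinals are hypothetical; the LoadColumn mapping is an assumption based on how ML.NET maps columns elsewhere):

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical row type; LoadColumn maps result-set ordinals to properties.
public class SentimentRow
{
    [LoadColumn(0)] public bool Label { get; set; }
    [LoadColumn(1)] public string SentimentText { get; set; }
}

// Column mapping is inferred from SentimentRow instead of declared by hand.
var mlContext = new MLContext();
DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<SentimentRow>();
```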

1.3.1

New Features

  • Deep Neural Networks Training (PREVIEW) (#​4057)
    Introduces the in-preview 0.15.1 Microsoft.ML.DNN package, which enables full DNN model retraining and transfer learning in .NET using the C# bindings for TensorFlow provided by TensorFlow .NET. The goal of this package is to enable high-level DNN training and scoring tasks, such as image classification, text classification, and object detection, with simple yet powerful APIs that are framework agnostic, though currently only TensorFlow is used as the backend. The APIs below are in early preview, and we hope to get customer feedback that we can incorporate in the next iteration.

    ![DNN stack](https://github.com/dotnet/machinelearning/blob/master/docs/release-notes/1.3.1/dnn_s...

...

Description has been truncated

Bumps Microsoft.ML.OnnxRuntime from 1.22.0 to 1.24.2
Bumps Microsoft.ML.Tokenizers from 1.0.2 to 2.0.0

---
updated-dependencies:
- dependency-name: Microsoft.ML.OnnxRuntime
  dependency-version: 1.24.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: microsoft-packages
- dependency-name: Microsoft.ML.Tokenizers
  dependency-version: 2.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: microsoft-packages
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added dependencies For dependabot dotnet For .NET changes especially in the dependabot labels Feb 23, 2026
@Ellerbach Ellerbach merged commit 840524d into main Feb 25, 2026
3 checks passed
@Ellerbach Ellerbach deleted the dependabot/nuget/src/AzureAISearchSimulator.Search/microsoft-packages-6576cab0b0 branch February 25, 2026 10:40


Development

Successfully merging this pull request may close these issues.

Cannot use middleware dependencies when using custom ModelLoader on AddPredictionEnginePool
