chore(deps): update dependency xgboost to v2 #514
base: develop
Conversation
⚠ Artifact update problem: Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is. ♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

The artifact failure details are included below. File name: poetry.lock
This PR contains the following updates:
0.90 -> 2.1.2
Release Notes
dmlc/xgboost (xgboost)
v2.1.2: 2.1.2 Patch Release
The 2.1.2 patch release makes the following bug fixes:
- `pip check` does not fail due to a bad platform tag (#10755)
- `poll.h` and `mmap` (#10767)

Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
v2.1.1: 2.1.1 Patch Release
The 2.1.1 patch release makes the following bug fixes:
- Fix `broadcast` in the `scatter` call so that the `predict` function won't hang (#10632) by @trivialfis
- Handle `/sys/fs/cgroup/cpu.max` not being readable by the user (#10623) by @trivialfis

In addition, it contains several enhancements:
- `xgboost-cpu` (#10603) by @hcho3

Full Changelog: dmlc/xgboost@v2.1.0...v2.1.1
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
v2.1.0: Release 2.1.0 stable
2.1.0 (2024 Jun 20)
We are thrilled to announce the XGBoost 2.1 release. This note will start by summarizing some general changes and then highlighting specific package updates. As we are working on a new R interface, this release will not include the R package. We'll update the R package as soon as it's ready. Stay tuned!
Networking Improvements
An important piece of ongoing work for XGBoost, on which we've been collaborating, is supporting resilience for improved scaling and federated learning on various platforms. The existing networking library in XGBoost, adopted from the RABIT project, can no longer meet the feature demand. We've revamped the RABIT module in this release to pave the way for future development. We chose an in-house implementation over an existing library because of the module's active development status, with frequent new feature requests such as loading extra plugins for federated learning. The new implementation features:
Related PRs (#9597, #9576, #9523, #9524, #9593, #9596, #9661, #10319, #10152, #10125, #10332, #10306, #10208, #10203, #10199, #9784, #9777, #9773, #9772, #9759, #9745, #9695, #9738, #9732, #9726, #9688, #9681, #9679, #9659, #9650, #9644, #9649, #9917, #9990, #10313, #10315, #10112, #9531, #10075, #9805, #10198, #10414).
The existing option of using MPI in RABIT is removed in this release. (#9525)

NCCL is now fetched from PyPI.

In the previous version, XGBoost statically linked NCCL, which significantly increased the binary size and led to hitting the PyPI repository limit. With the new release, we have made a significant improvement: XGBoost can now dynamically load NCCL from an external source, reducing the binary size. For the PyPI package, the `nvidia-nccl-cu12` package will be fetched during installation. With more downstream packages reusing NCCL, we expect user environments to become slimmer in the future as well. (#9796, #9804, #10447)

Parts of the Python package now require glibc 2.28+
Starting from 2.1.0, the XGBoost Python package will be distributed in two variants:
- `manylinux_2_28`: for recent Linux distros with glibc 2.28 or newer. This variant comes with all features enabled.
- `manylinux2014`: for old Linux distros with glibc older than 2.28. This variant does not support GPU algorithms or federated learning.

The `pip` package manager will automatically choose the correct variant depending on your system.

Starting from May 31, 2025, we will stop distributing the `manylinux2014` variant and exclusively distribute the `manylinux_2_28` variant. We made this decision so that our CI/CD pipeline won't have to depend on software components that have reached end-of-life (such as CentOS 7). We strongly encourage everyone to migrate to recent Linux distros in order to use future versions of XGBoost.

Note: If you want to use GPU algorithms or federated learning on an older Linux distro, you have two alternatives:
Multi-output
We continue the work on multi-target and vector leaf in this release:
- `XGBoosterTrainOneIter`. This new function supports strided matrices and CUDA inputs. In addition, custom objectives now return the correct shape for prediction. (#9508)
- The `hinge` objective now supports multi-target regression (#9850)

Please note that the feature is still in progress and not suitable for production use.
Federated Learning
Progress has been made on federated learning with improved support for column-split, including the following updates:
Ongoing work for SYCL support.
XGBoost is developing a SYCL plugin for SYCL devices, starting with the `hist` tree method. (#10216, #9800, #10311, #9691, #10269, #10251, #10222, #10174, #10080, #10057, #10011, #10138, #10119, #10045, #9876, #9846, #9682) XGBoost now supports inference on SYCL devices, and work on adding SYCL support for training is ongoing.

Looking ahead, we plan to complete training support in the coming releases and then focus on improving test coverage for SYCL, particularly for Python tests.
Optimizations
Deprecation and breaking changes
Package-specific breaking changes are outlined in respective sections. Here we list general breaking changes in this release:
- Universal binary JSON is now the default format for saving models (#9947, #9958, #9954, #9955). See https://github.com/dmlc/xgboost/issues/7547 for more info.
- `XGBoosterGetModelRaw` is now removed after deprecation in 1.6. (#9617)
- `XGDMatrixSetDenseInfo` and `XGDMatrixSetUIntInfo` are now deprecated. Use the array-interface-based alternatives instead.

Features
This section lists some new features that are general to all language bindings. For package-specific changes, please visit respective sections.
- `deviance`. (#9757)
- `lambdarank_normalization` parameter. (#10094)
- `QuantileDMatrix` on CPU. (#10043)

Bug fixes
on CPU. (#10043)Bug fixes
- `FieldEntry` constructor specialization syntax error (#9980)
- `lambdarank_pair_method`. (#10098)
- Prevent `gblinear` from treating categorical features as numerical. (#9946)

Document
Here is a list of documentation changes not specific to any XGBoost package.
- `base_score`. (#9882)

Python package
Other than the changes in networking, we have some optimizations and document updates in dask:
- Use `from xgboost import dask` instead of `import xgboost.dask` to avoid pulling in unnecessary dependencies for non-dask users. (#9742)

PySpark has several new features along with some small fixes:
- `verbosity=3`. (#10172)

Breaking changes
For the Python package, `eval_metric`, `early_stopping_rounds`, and `callbacks` are now removed from the `fit` method in the sklearn interface. They were deprecated in 1.6. Use the parameters with the same name in the constructors instead. (#9986)

Features
Following is a list of new features in the Python package:
- `cudf.pandas` (#9602), `torch.Tensor` (#9971), and more scipy types (#9881).
- `random_state` (#9743)
- `DMatrix` with `None` input. (#10052)
- `enable_categorical` (#9877, #9884)

JVM package
Here is a list of JVM-specific changes. Like the PySpark package, the JVM package also gains stage-level scheduling.
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
v2.0.3: 2.0.3 Patch Release
The 2.0.3 patch release makes the following bug fixes:
Full Changelog: dmlc/xgboost@v2.0.2...v2.0.3
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
v2.0.2: 2.0.2 Patch Release
The 2.0.2 patch release makes the following bug fixes:
v2.0.1: 2.0.1 Patch Release
This is a patch release for bug fixes.
Bug fixes
In addition, this is the first release where the JVM package is distributed with native support for Apple Silicon.
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
v2.0.0: Release 2.0.0 stable
2.0.0 (2023 Sep 12)
We are excited to announce the release of XGBoost 2.0. This note will begin by covering some overall changes and then highlight specific updates to the package.
Initial work on multi-target trees with vector-leaf outputs
We have been working on vector-leaf tree models for multi-target regression, multi-label classification, and multi-class classification in version 2.0. Previously, XGBoost would build a separate model for each target. However, with this new feature that's still being developed, XGBoost can build one tree for all targets. The feature has multiple benefits and trade-offs compared to the existing approach. It can help prevent overfitting, produce smaller models, and build trees that consider the correlation between targets. In addition, users can combine vector-leaf and scalar-leaf trees during a training session using a callback. Please note that the feature is still a work in progress, and many parts are not yet available. See #9043 for the current status. Related PRs: (#8538, #8697, #8902, #8884, #8895, #8898, #8612, #8652, #8698, #8908, #8928, #8968, #8616, #8922, #8890, #8872, #8889, #9509) Please note that only the `hist` (default) tree method on CPU can be used for building vector-leaf trees at the moment.

New `device` parameter

A new `device` parameter is set to replace the existing `gpu_id`, `gpu_hist`, `gpu_predictor`, `cpu_predictor`, `gpu_coord_descent`, and the PySpark-specific parameter `use_gpu`. Onward, users need only the `device` parameter to select which device to run on, along with the ordinal of the device. For more information, please see our document page (https://xgboost.readthedocs.io/en/stable/parameter.html#general-parameters). For example, with `device="cuda", tree_method="hist"`, XGBoost will run the `hist` tree method on GPU. (#9363, #8528, #8604, #9354, #9274, #9243, #8896, #9129, #9362, #9402, #9385, #9398, #9390, #9386, #9412, #9507, #9536). The old behavior of `gpu_hist` is preserved but deprecated. In addition, the `predictor` parameter is removed.

`hist` is now the default tree method

Starting from 2.0, the `hist` tree method will be the default. In previous versions, XGBoost chooses `approx` or `exact` depending on the input data and training environment. The new default can help XGBoost train models more efficiently and consistently. (#9320, #9353)

GPU-based approx tree method

There's initial support for using the `approx` tree method on GPU. The performance of `approx` is not yet well optimized, but it is feature complete except for the JVM packages. It can be accessed through the parameter combination `device="cuda", tree_method="approx"`. (#9414, #9399, #9478). Please note that the Scala-based Spark interface is not yet supported.

Optimize and bound the size of the histogram on CPU, to control memory footprint
XGBoost has a new parameter, `max_cached_hist_node`, for users to limit the CPU cache size for histograms. It can help prevent XGBoost from caching histograms too aggressively. Without the cache, performance is likely to decrease. However, the size of the cache grows exponentially with the depth of the tree. The limit can be crucial when growing deep trees. In most cases, users need not configure this parameter as it does not affect the model's accuracy. (#9455, #9441, #9440, #9427, #9400).

Along with the cache limit, XGBoost also reduces the memory usage of the `hist` and `approx` tree methods on distributed systems by cutting the size of the cache by half. (#9433)

Improved external memory support

There is some exciting development around external memory support in XGBoost. It's still an experimental feature, but the performance has been significantly improved with the default `hist` tree method. We replaced the old file IO logic with memory map. In addition to performance, we have reduced CPU memory usage and added extensive documentation. Beginning from 2.0.0, we encourage users to try it with the `hist` tree method when the memory saving by `QuantileDMatrix` is not sufficient. (#9361, #9317, #9282, #9315, #8457)

Learning to rank
We created a brand-new implementation for the learning-to-rank task. With the latest version, XGBoost gained a set of new features for the ranking task, including:
- `lambdarank_pair_method` for choosing the pair construction strategy.
- `lambdarank_num_pair_per_sample` for controlling the number of samples for each group.
- The `lambdarank_unbiased` parameter.
- `NDCG` using the `ndcg_exp_gain` parameter.
- `NDCG` is now the default objective function.
- `XGBRanker`.
.For more information, please see the tutorial. Related PRs: (#8771, #8692, #8783, #8789, #8790, #8859, #8887, #8893, #8906, #8931, #9075, #9015, #9381, #9336, #8822, #9222, #8984, #8785, #8786, #8768)
Automatically estimated intercept
In the previous version, `base_score` was a constant that could be set as a training parameter. In the new version, XGBoost can automatically estimate this parameter based on input labels for optimal accuracy. (#8539, #8498, #8272, #8793, #8607)

Quantile regression
The XGBoost algorithm now supports quantile regression, which involves minimizing the quantile loss (also called "pinball loss"). Furthermore, XGBoost allows for training with multiple target quantiles simultaneously with one tree per quantile. (#8775, #8761, #8760, #8758, #8750)
L1 and quantile regression now support learning rate
Both objectives use adaptive trees due to the lack of proper Hessian values. In the new version, XGBoost can scale the leaf value with the learning rate accordingly. (#8866)
Export cut value
Using the Python or the C package, users can export the quantile values (not to be confused with quantile regression) used for the `hist` tree method. (#9356)

Column-based split and federated learning
We made progress on column-based split for federated learning. In 2.0, `approx`, `hist`, and `hist` with vector leaf can all work with column-based data split, along with support for vertical federated learning. Work on GPU support is still ongoing; stay tuned. (#8576, #8468, #8442, #8847, #8811, #8985, #8623, #8568, #8828, #8932, #9081, #9102, #9103, #9124, #9120, #9367, #9370, #9343, #9171, #9346, #9270, #9244, #8494, #8434, #8742, #8804, #8710, #8676, #9020, #9002, #9058, #9037, #9018, #9295, #9006, #9300, #8765, #9365, #9060)

PySpark
After the initial introduction of the PySpark interface, it has gained some new features and optimizations in 2.0.
`use_gpu` is deprecated. The `device` parameter is preferred.

Other General New Features
Here's a list of new features that don't have their own section and yet are general to all language bindings.
Other
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.