@@ -45,10 +45,10 @@
<https://en.wikipedia.org/wiki/Perceptron>`_

`Large Margin Classification Using the Perceptron Algorithm
-<http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8200>`_
+<https://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8200>`_

`Discriminative Training Methods for Hidden Markov Models
-<http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.6725>`_
+<https://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.6725>`_


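A quick aside on the ``loss`` parameter documented just below: it accepts either the string ``'hinge'`` (the default) or a loss object. The sketch is illustrative only; the estimator name and import paths are assumptions taken from the nimbusml docs, since this hunk does not name the class.

```python
# Hedged sketch, not part of this PR: selecting the loss for the
# averaged perceptron. Import paths assumed from the nimbusml docs.
import numpy as np
import pandas as pd
from nimbusml.linear_model import AveragedPerceptronBinaryClassifier
from nimbusml.loss import Hinge

rng = np.random.RandomState(0)
X = pd.DataFrame(rng.rand(100, 2).astype(np.float32), columns=['f0', 'f1'])
y = X['f0'] + X['f1'] > 1.0  # boolean labels for a binary task

# Equivalent: loss='hinge' (string) or loss=Hinge() (object).
ap = AveragedPerceptronBinaryClassifier(loss=Hinge())
ap.fit(X, y)
print(ap.predict(X).head())
```
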
:param loss: The default is :py:class:`'hinge' <nimbusml.loss.Hinge>`. Other
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/DnnFeaturizer.txt
@@ -36,7 +36,7 @@
* ``"Alexnet"``

The default value is ``"Resnet18"``.
-See `Deep Residual Learning for Image Recognition <http://www.cv-
+See `Deep Residual Learning for Image Recognition <https://www.cv-
foundation.org/openaccess/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html>`_
for details about ResNet.

@@ -22,10 +22,10 @@
`Field Aware Factorization Machines
<https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf>`_,
`Field-aware Factorization Machines for CTR Prediction
-<http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf>`_,
+<https://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf>`_,
`Adaptive Subgradient Methods for Online Learning and Stochastic
Optimization
-<http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_
+<https://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_


:param feature: see `Columns </nimbusml/concepts/columns>`_.
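Since this hunk edits what appears to be the factorization-machine docstring, a short usage sketch may help; the import path ``nimbusml.decomposition.FactorizationMachineBinaryClassifier`` is an assumption from the nimbusml docs, not something this diff shows.

```python
# Hedged sketch: field-aware factorization machine on toy CTR-style data.
import numpy as np
import pandas as pd
from nimbusml.decomposition import FactorizationMachineBinaryClassifier

rng = np.random.RandomState(0)
X = pd.DataFrame(rng.rand(200, 2).astype(np.float32), columns=['user', 'ad'])
y = X['user'] * X['ad'] > 0.25  # boolean click / no-click labels

fm = FactorizationMachineBinaryClassifier()
fm.fit(X, y)
print(fm.predict_proba(X)[:5])  # per-class probabilities, first five rows
```
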
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/FastForestBinaryClassifier.txt
@@ -33,10 +33,10 @@
**Reference**

`Wikipedia: Random forest
-<http://en.wikipedia.org/wiki/Random_forest>`_
+<https://en.wikipedia.org/wiki/Random_forest>`_

`Quantile regression forest
-<http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf>`_
+<https://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf>`_

`From Stumps to Trees to Forests
<https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-
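For readers of this random-forest docstring, a minimal usage sketch (synthetic data, default hyperparameters; nothing here comes from the diff itself):

```python
# Hedged sketch of FastForestBinaryClassifier with default settings.
import numpy as np
import pandas as pd
from nimbusml.ensemble import FastForestBinaryClassifier

rng = np.random.RandomState(1)
X = pd.DataFrame(rng.rand(300, 3).astype(np.float32),
                 columns=['f0', 'f1', 'f2'])
y = X['f0'] + X['f1'] > X['f2'] + 0.5  # boolean labels

forest = FastForestBinaryClassifier().fit(X, y)
print(forest.predict(X).head())
```
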
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/FastForestRegressor.txt
@@ -43,10 +43,10 @@
**Reference**

`Wikipedia: Random forest
-<http://en.wikipedia.org/wiki/Random_forest>`_
+<https://en.wikipedia.org/wiki/Random_forest>`_

`Quantile regression forest
-<http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf>`_
+<https://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf>`_

`From Stumps to Trees to Forests
<https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastLinearBinaryClassifier.txt
@@ -58,7 +58,7 @@
content/uploads/2016/06/main-3.pdf>`_

`Stochastic Dual Coordinate Ascent Methods for Regularized Loss
-Minimization <http://www.jmlr.org/papers/volume14/shalev-
+Minimization <https://www.jmlr.org/papers/volume14/shalev-
shwartz13a/shalev-shwartz13a.pdf>`_
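The SDCA paper cited above is the algorithm behind ``FastLinearBinaryClassifier``; a hedged sketch of running it inside a nimbusml ``Pipeline`` (defaults only):

```python
import numpy as np
import pandas as pd
from nimbusml import Pipeline
from nimbusml.linear_model import FastLinearBinaryClassifier

rng = np.random.RandomState(2)
X = pd.DataFrame(rng.rand(200, 2).astype(np.float32), columns=['f0', 'f1'])
y = 2.0 * X['f0'] - X['f1'] > 0.5  # boolean labels

pipe = Pipeline([FastLinearBinaryClassifier()])
pipe.fit(X, y)
print(pipe.predict(X).head())  # PredictedLabel plus score columns
```
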


2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastLinearClassifier.txt
@@ -56,7 +56,7 @@
content/uploads/2016/06/main-3.pdf>`_

`Stochastic Dual Coordinate Ascent Methods for Regularized Loss
-Minimization <http://www.jmlr.org/papers/volume14/shalev-
+Minimization <https://www.jmlr.org/papers/volume14/shalev-
[Author review comment] Broken link
shwartz13a/shalev-shwartz13a.pdf>`_


2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastLinearRegressor.txt
@@ -56,7 +56,7 @@
content/uploads/2016/06/main-3.pdf>`_

`Stochastic Dual Coordinate Ascent Methods for Regularized Loss
-Minimization <http://www.jmlr.org/papers/volume14/shalev-
+Minimization <https://www.jmlr.org/papers/volume14/shalev-
[Contributor review comment] Recommend checking that all the new links work:

Suggested change:
-Minimization <https://www.jmlr.org/papers/volume14/shalev-
+Minimization <http://www.jmlr.org/papers/volume14/shalev-

Original works: http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf
shwartz13a/shalev-shwartz13a.pdf>`_


2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastTreesBinaryClassifier.txt
@@ -57,7 +57,7 @@
<https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_

`Greedy function approximation: A gradient boosting machine.
-<http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_
+<https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_

:param optimizer: Default is ``sgd``.
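A hedged aside on the boosted-trees learner this docstring covers; the ``num_trees``/``num_leaves`` argument names are assumptions from the published nimbusml API, not from this diff:

```python
import numpy as np
import pandas as pd
from nimbusml.ensemble import FastTreesBinaryClassifier

rng = np.random.RandomState(3)
X = pd.DataFrame(rng.rand(400, 3).astype(np.float32),
                 columns=['f0', 'f1', 'f2'])
y = X['f0'] * X['f1'] > 0.4 * X['f2']  # boolean labels

# Hyperparameter names assumed from the nimbusml docs; verify before use.
trees = FastTreesBinaryClassifier(num_trees=50, num_leaves=16)
trees.fit(X, y)
print(trees.predict_proba(X)[:3])
```
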

2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastTreesRegressor.txt
@@ -62,7 +62,7 @@
<https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_

`Greedy function approximation: A gradient boosting machine.
-<http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_
+<https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_

:param optimizer: Default is ``sgd``.

2 changes: 1 addition & 1 deletion src/python/docs/docstrings/FastTreesTweedieRegressor.txt
@@ -14,7 +14,7 @@
<https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_

`Greedy function approximation: A gradient boosting machine.
-<http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_
+<https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451>`_

:param optimizer: Default is ``sgd``.

4 changes: 2 additions & 2 deletions src/python/docs/docstrings/GamBinaryClassifier.txt
@@ -21,7 +21,7 @@
functions learned will step between the discretization boundaries.

This implementation is based on this `paper
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_,
+<https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_,
but diverges from it in several important respects: most
significantly,
in each round of boosting, rather than do one feature at a time, it
@@ -57,7 +57,7 @@
`Generalized additive models
<https://en.wikipedia.org/wiki/Generalized_additive_model>`_,
`Intelligible Models for Classification and Regression
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_
+<https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_


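A hedged sketch of the GAM estimator described above, which boosts one piecewise-constant shape function per feature:

```python
import numpy as np
import pandas as pd
from nimbusml.ensemble import GamBinaryClassifier

rng = np.random.RandomState(4)
X = pd.DataFrame(rng.rand(300, 2).astype(np.float32),
                 columns=['age', 'income'])
# A non-monotone effect of 'age': the kind of shape a GAM can recover.
y = (X['age'] - 0.5) ** 2 + X['income'] > 0.8

gam = GamBinaryClassifier().fit(X, y)
print(gam.predict(X).head())
```
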
:param normalize: Specifies the type of automatic normalization used:
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/GamRegressor.txt
@@ -21,7 +21,7 @@
functions learned will step between the discretization boundaries.

This implementation is based on this `paper
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_,
+<https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_,
but diverges from it in several important respects: most
significantly,
in each round of boosting, rather than do one feature at a time, it
@@ -57,7 +57,7 @@
`Generalized additive models
<https://en.wikipedia.org/wiki/Generalized_additive_model>`_,
`Intelligible Models for Classification and Regression
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_
+<https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.352.7619>`_


:param normalize: Specifies the type of automatic normalization used:
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/LightLda.txt
@@ -10,7 +10,7 @@
topical vectors. LightLDA is an extremely
efficient implementation of LDA developed in MSR-Asia that
incorporates a number of optimization techniques
-`(http://arxiv.org/abs/1412.1576) <http://arxiv.org/abs/1412.1576>`_.
+`(https://arxiv.org/abs/1412.1576) <https://arxiv.org/abs/1412.1576>`_.
With the LDA transform, we can
train a topic model to produce 1 million topics with 1 million
vocabulary on a 1-billion-token document set one
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/LocalDeepSvmBinaryClassifier.txt
@@ -39,14 +39,14 @@
More details about LD-SVM can be found in this paper `Local deep
kernel
learning for efficient non-linear SVM prediction
-<http://research.microsoft.com/en-
+<https://research.microsoft.com/en-
us/um/people/manik/pubs/Jose13.pdf>`_.


**Reference**

`Local deep kernel learning for efficient non-linear SVM prediction
-<http://research.microsoft.com/en-
+<https://research.microsoft.com/en-
us/um/people/manik/pubs/Jose13.pdf>`_
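A hedged usage sketch of the LD-SVM learner from the cited paper (the import path ``nimbusml.svm`` is an assumption; tree depth and other knobs are left at their defaults):

```python
import numpy as np
import pandas as pd
from nimbusml.svm import LocalDeepSvmBinaryClassifier

rng = np.random.RandomState(5)
X = pd.DataFrame(rng.rand(200, 2).astype(np.float32), columns=['f0', 'f1'])
# A circular boundary: a case where local, non-linear kernels pay off.
y = (X['f0'] - 0.5) ** 2 + (X['f1'] - 0.5) ** 2 < 0.1

ldsvm = LocalDeepSvmBinaryClassifier().fit(X, y)
print(ldsvm.predict(X).head())
```
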


@@ -69,14 +69,14 @@

**Reference**

-`Wikipedia: L-BFGS <http://en.wikipedia.org/wiki/L-BFGS>`_
+`Wikipedia: L-BFGS <https://en.wikipedia.org/wiki/L-BFGS>`_

`Wikipedia: Logistic
-regression <http://en.wikipedia.org/wiki/Logistic_regression>`_
+regression <https://en.wikipedia.org/wiki/Logistic_regression>`_

`Scalable
Training of L1-Regularized Log-Linear Models
-<http://research.microsoft.com/apps/pubs/default.aspx?id=78900>`_
+<https://research.microsoft.com/apps/pubs/default.aspx?id=78900>`_

`Test Run - L1
and L2 Regularization for Machine Learning
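A hedged sketch of the L-BFGS logistic regression documented in this hunk; the ``l1_weight``/``l2_weight`` argument names are assumptions from the nimbusml docs:

```python
import numpy as np
import pandas as pd
from nimbusml.linear_model import LogisticRegressionBinaryClassifier

rng = np.random.RandomState(6)
X = pd.DataFrame(rng.rand(250, 2).astype(np.float32), columns=['f0', 'f1'])
y = X['f0'] - X['f1'] > 0.0  # boolean labels

# L1 and L2 penalties as discussed in the references (names assumed).
lr = LogisticRegressionBinaryClassifier(l1_weight=1.0, l2_weight=1.0)
lr.fit(X, y)
print(lr.predict_proba(X)[:3])
```
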
6 changes: 3 additions & 3 deletions src/python/docs/docstrings/LogisticRegressionClassifier.txt
@@ -70,14 +70,14 @@

**Reference**

-`Wikipedia: L-BFGS <http://en.wikipedia.org/wiki/L-BFGS>`_
+`Wikipedia: L-BFGS <https://en.wikipedia.org/wiki/L-BFGS>`_

`Wikipedia: Logistic
-regression <http://en.wikipedia.org/wiki/Logistic_regression>`_
+regression <https://en.wikipedia.org/wiki/Logistic_regression>`_

`Scalable
Training of L1-Regularized Log-Linear Models
-<http://research.microsoft.com/apps/pubs/default.aspx?id=78900>`_
+<https://research.microsoft.com/apps/pubs/default.aspx?id=78900>`_

`Test Run - L1
and L2 Regularization for Machine Learning
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/OneClassSVMAnomalyDetector.txt
@@ -29,10 +29,10 @@
us/library/azure/dn913103.aspx>`_

`Estimating the Support of a High-Dimensional Distribution
-<http://research.microsoft.com/pubs/69731/tr-99-87.pdf>`_
+<https://research.microsoft.com/pubs/69731/tr-99-87.pdf>`_

`New Support Vector Algorithms
-<http://www.stat.purdue.edu/~yuzhu/stat598m3/Papers/NewSVM.pdf>`_
+<https://www.stat.purdue.edu/~yuzhu/stat598m3/Papers/NewSVM.pdf>`_

`LIBSVM: A Library for Support Vector Machines
<https://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>`_
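The workflow these references describe, in a hedged sketch: fit on presumed-normal data only, then score unseen points (the class name ``nimbusml.svm.OneClassSvmAnomalyDetector`` is an assumption from the nimbusml docs):

```python
import numpy as np
import pandas as pd
from nimbusml.svm import OneClassSvmAnomalyDetector

rng = np.random.RandomState(7)
normal = pd.DataFrame(rng.normal(0.0, 0.1, (200, 2)).astype(np.float32),
                      columns=['f0', 'f1'])

ocsvm = OneClassSvmAnomalyDetector().fit(normal)  # unsupervised: no labels
test = pd.DataFrame(np.array([[0.0, 0.0], [3.0, 3.0]], dtype=np.float32),
                    columns=['f0', 'f1'])
print(ocsvm.predict(test))  # the far-away point should score as anomalous
```
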
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/PcaAnomalyDetector.txt
@@ -36,12 +36,12 @@

`Randomized Methods for Computing the Singular Value Decomposition
(SVD) of very large matrices
-<http://web.stanford.edu/group/mmds/slides2010/Martinsson.pdf>`_
+<https://web.stanford.edu/group/mmds/slides2010/Martinsson.pdf>`_
`A randomized algorithm for principal component analysis
<https://arxiv.org/abs/0809.2274>`_,
`Finding Structure with Randomness: Probabilistic Algorithms for
Constructing Approximate Matrix Decompositions
-<http://users.cms.caltech.edu/~jtropp/papers/HMT11-Finding-Structure-
+<https://users.cms.caltech.edu/~jtropp/papers/HMT11-Finding-Structure-
SIREV.pdf>`_
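A hedged sketch of the PCA-based detector these references underpin; the ``rank`` argument name is an assumption from the nimbusml docs:

```python
import numpy as np
import pandas as pd
from nimbusml.decomposition import PcaAnomalyDetector

rng = np.random.RandomState(8)
# Normal points lie near a 1-D subspace of the 3-D feature space.
base = rng.normal(0.0, 1.0, (200, 1))
X = pd.DataFrame(
    (np.hstack([base, 2 * base, -base])
     + rng.normal(0.0, 0.05, (200, 3))).astype(np.float32),
    columns=['f0', 'f1', 'f2'])

pca = PcaAnomalyDetector(rank=1).fit(X)
outlier = pd.DataFrame(np.array([[5.0, -5.0, 5.0]], dtype=np.float32),
                       columns=['f0', 'f1', 'f2'])
print(pca.predict(outlier))  # far from the learned subspace: high score
```
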


8 changes: 4 additions & 4 deletions src/python/docs/docstrings/SgdBinaryClassifier.txt
@@ -13,14 +13,14 @@
associated optimization problem is sparse, then Hogwild SGD achieves
a
nearly optimal rate of convergence. For a detailed reference, please
-refer to `http://arxiv.org/pdf/1106.5730v2.pdf
-<http://arxiv.org/pdf/1106.5730v2.pdf>`_.
+refer to `https://arxiv.org/pdf/1106.5730v2.pdf
+<https://arxiv.org/pdf/1106.5730v2.pdf>`_.


**Reference**

-`http://arxiv.org/pdf/1106.5730v2.pdf
-<http://arxiv.org/pdf/1106.5730v2.pdf>`_
+`https://arxiv.org/pdf/1106.5730v2.pdf
+<https://arxiv.org/pdf/1106.5730v2.pdf>`_


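A hedged sketch of the Hogwild-style SGD learner (arXiv:1106.5730, cited above), defaults only:

```python
import numpy as np
import pandas as pd
from nimbusml.linear_model import SgdBinaryClassifier

rng = np.random.RandomState(9)
X = pd.DataFrame(rng.rand(500, 4).astype(np.float32),
                 columns=['f0', 'f1', 'f2', 'f3'])
y = X.sum(axis=1) > 2.0  # boolean labels

sgd = SgdBinaryClassifier().fit(X, y)
print(sgd.predict(X).head())
```
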
:param normalize: Specifies the type of automatic normalization used:
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/SigmoidKernel.txt
@@ -3,7 +3,7 @@
Apply sigmoid function. tanh(gamma*<x,y>+c).

.. remarks::
-`SigmoidKernel <http://www2.spsc.tugraz.at/www-
+`SigmoidKernel <https://www2.spsc.tugraz.at/www-
archive/AdvancedSignalProcessing/WS05-Mistral/advances.pdf>`_ is a
kernel function
that computes the similarity between two features.
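The kernel above is just ``tanh(gamma * <x, y> + c)``; a small NumPy check makes the definition concrete (the ``gamma`` and ``c`` values here are arbitrary):

```python
import numpy as np

def sigmoid_kernel(x, y, gamma=0.5, c=1.0):
    """Similarity of two feature vectors under the sigmoid kernel."""
    return np.tanh(gamma * np.dot(x, y) + c)

x = np.array([1.0, 2.0, 0.5])
y = np.array([0.5, 1.0, 1.0])
print(sigmoid_kernel(x, y))  # tanh(0.5 * 3.0 + 1.0) = tanh(2.5) ~= 0.987
```
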
4 changes: 2 additions & 2 deletions src/python/docs/docstrings/SsweEmbedding.txt
@@ -7,12 +7,12 @@
versions of `GloVe Models
<https://nlp.stanford.edu/projects/glove/>`_, `FastText
<https://en.wikipedia.org/wiki/FastText>`_, and `Sswe
-<http://anthology.aclweb.org/P/P14/P14-1146.pdf>`_.
+<https://anthology.aclweb.org/P/P14/P14-1146.pdf>`_.

.. remarks::
Sentiment-specific word embedding (SSWE) is a DNN featurizer
developed
-by MSRA (`paper <http://anthology.aclweb.org/P/P14/P14-1146.pdf>`_).
+by MSRA (`paper <https://anthology.aclweb.org/P/P14/P14-1146.pdf>`_).
It
incorporates sentiment information into the neural network to learn
sentiment specific word embedding. It proves to be useful in various
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/SupervisedBinner.txt
@@ -24,7 +24,7 @@
the default is to normalize features before training.

``SupervisedBinner`` implements the `Entropy-Based Discretization
-<http://www.aaai.org/Papers/KDD/1996/KDD96-019.pdf>`_.
+<https://www.aaai.org/Papers/KDD/1996/KDD96-019.pdf>`_.
Partition of the data is performed recursively to select the split
with highest entropy gain with respect to the label.
Therefore, the final binned features will have high correlation with
2 changes: 1 addition & 1 deletion src/python/docs/docstrings/WordEmbedding.txt
@@ -10,7 +10,7 @@
available options are various versions of `GloVe Models
<https://nlp.stanford.edu/projects/glove/>`_, `FastText
<https://en.wikipedia.org/wiki/FastText>`_, and `Sswe
-<http://anthology.aclweb.org/P/P14/P14-1146.pdf>`_.
+<https://anthology.aclweb.org/P/P14/P14-1146.pdf>`_.


:param model_kind: Pre-trained model used to create the vocabulary.
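``WordEmbedding`` consumes tokens rather than raw text, so it is usually chained after ``NGramFeaturizer``. The column wiring below follows the nimbusml examples, but the token-output parameter name has varied across nimbusml versions, so treat the whole sketch as an assumption:

```python
import pandas as pd
from nimbusml import Pipeline
from nimbusml.feature_extraction.text import NGramFeaturizer, WordEmbedding
from nimbusml.feature_extraction.text.extractor import Ngram

data = pd.DataFrame({'text': ['this movie was great',
                              'utterly boring film']})

pipe = Pipeline([
    # output_tokens_column_name exposes the token column WordEmbedding
    # reads; the parameter name is an assumption from recent nimbusml docs.
    NGramFeaturizer(word_feature_extractor=Ngram(),
                    output_tokens_column_name='text_TransformedText',
                    columns={'features': ['text']}),
    # Downloads the pretrained SSWE model on first use.
    WordEmbedding(model_kind='SentimentSpecificWordEmbedding',
                  columns='text_TransformedText'),
])
features = pipe.fit_transform(data)
print(features.shape)
```
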
6 changes: 3 additions & 3 deletions src/python/docs/sphinx/_static/mystyle.css
@@ -6726,8 +6726,8 @@ button.close {
*
*/
/*!
- * Font Awesome 4.2.0 by @davegandy - http://fontawesome.io - @fontawesome
- * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
+ * Font Awesome 4.2.0 by @davegandy - https://fontawesome.io - @fontawesome
+ * License - https://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
*/
/* FONT PATH
* -------------------------- */
@@ -8432,7 +8432,7 @@ label {
padding: 0px;
}
/* Flexible box model classes */
-/* Taken from Alex Russell http://infrequently.org/2009/08/css-3-progress/ */
+/* Taken from Alex Russell https://infrequently.org/2009/08/css-3-progress/ */
/* This file is a compatibility layer. It allows the usage of flexible box
model layouts across multiple browsers, including older browsers. The newest,
universal implementation of the flexible box model is used when available (see
4 changes: 2 additions & 2 deletions src/python/docs/sphinx/ci_script/conf.py
@@ -128,8 +128,8 @@
'relative': True,
'reference_url': {
'nimbusml': None,
-'matplotlib': 'http://matplotlib.org',
-'numpy': 'http://www.numpy.org/',
+'matplotlib': 'https://matplotlib.org',
+'numpy': 'https://www.numpy.org/',
'scipy': 'https://www.scipy.org/'},
}

4 changes: 2 additions & 2 deletions src/python/docs/sphinx/conf.py
@@ -145,8 +145,8 @@ def install_and_import(package):
'relative': True,
'reference_url': {
'nimbusml': None,
-'matplotlib': 'http://matplotlib.org',
-'numpy': 'http://www.numpy.org/',
+'matplotlib': 'https://matplotlib.org',
+'numpy': 'https://www.numpy.org/',
'scipy': 'https://www.scipy.org/'},
}

2 changes: 1 addition & 1 deletion src/python/docs/sphinx/make.bat
@@ -38,7 +38,7 @@ if errorlevel 9009 (
echo.Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
-echo.http://sphinx-doc.org/
+echo.https://sphinx-doc.org/
exit /b 1
)

6 changes: 3 additions & 3 deletions src/python/docs/sphinx/metrics.rst
@@ -39,7 +39,7 @@ This corresponds to evaltype='binary'.

**Negative Recall** - see `Precision and Recall <https://en.wikipedia.org/wiki/Precision_and_recall>`_

-**Log-loss** - see `Log Loss <http://wiki.fast.ai/index.php/Log_Loss>`_
+**Log-loss** - see `Log Loss <https://wiki.fast.ai/index.php/Log_Loss>`_

**Log-loss reduction** - RIG(Y|X) * 100 = (H(Y) - H(Y|X)) / H(Y) * 100. Ranges from [-inf, 100], where
100 is perfect predictions and 0 indicates mean predictions.
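A small worked check of the formula: take H(Y) as the log-loss of always predicting the label prior ("mean predictions") and H(Y|X) as the model's log-loss:

```python
import numpy as np

def log_loss(y, p):
    """Mean negative log-likelihood of binary labels y under probabilities p."""
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 1, 0], dtype=float)
p_model = np.array([0.9, 0.2, 0.8, 0.7, 0.1])  # a model's predictions
p_prior = np.full_like(y, y.mean())            # "mean predictions" baseline

h_y = log_loss(y, p_prior)         # H(Y)
h_y_x = log_loss(y, p_model)       # H(Y|X)
print((h_y - h_y_x) / h_y * 100)   # 100 = perfect, 0 = same as the prior
```
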
@@ -48,7 +48,7 @@ This corresponds to evaltype='binary'.

**F1 Score** - see `Precision and Recall <https://en.wikipedia.org/wiki/Precision_and_recall>`_

-**AUPRC** - see `Area under Precision-Recall Curve <http://pages.cs.wisc.edu/~boyd/aucpr_final.pdf>`_
+**AUPRC** - see `Area under Precision-Recall Curve <https://pages.cs.wisc.edu/~boyd/aucpr_final.pdf>`_

.. note:: Note about ROC

@@ -74,7 +74,7 @@ This corresponds to evaltype='multiclass'.
**Accuracy(macro-avg)** - Every class contributes equally to the accuracy metric: minority classes are given the same weight as the larger classes.

-**Log-loss** - see `Log Loss <http://wiki.fast.ai/index.php/Log_Loss>`_
+**Log-loss** - see `Log Loss <https://wiki.fast.ai/index.php/Log_Loss>`_

**Log-loss reduction** - RIG(Y|X) * 100 = (H(Y) - H(Y|X)) / H(Y) * 100. Ranges from [-inf, 100], where
100 is perfect predictions and 0 indicates mean predictions.
2 changes: 1 addition & 1 deletion src/python/nimbusml.pyproj
@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
-<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
+<Project ToolsVersion="4.0" xmlns="https://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<SchemaVersion>2.0</SchemaVersion>
@@ -44,10 +44,10 @@ class FactorizationMachineBinaryClassifier(
`Field Aware Factorization Machines
<https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf>`_,
`Field-aware Factorization Machines for CTR Prediction
-<http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf>`_,
+<https://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf>`_,
`Adaptive Subgradient Methods for Online Learning and Stochastic
Optimization
-<http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_
+<https://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_


:param feature: see `Columns </nimbusml/concepts/columns>`_.