Merge pull request PaddlePaddle#72 from tink2123/new_changes
 update paddle changes and add api_guides folder
shanyi15 authored Sep 18, 2018
2 parents 83f3c9d + a1f9eaa commit 5f852a2
Showing 2 changed files with 172 additions and 0 deletions.
106 changes: 106 additions & 0 deletions doc/fluid/api_guides/low_level/optimizer/optimizer_all.rst
@@ -0,0 +1,106 @@
.. _api_guide_optimizer:


Optimizer
#########

Training a neural network is ultimately an `optimization problem <https://en.wikipedia.org/wiki/Optimization_problem>`_ .
After `forward computation and backpropagation <https://zh.wikipedia.org/zh-hans/反向传播算法>`_ ,
the :code:`Optimizer` uses the backpropagated gradients to update the parameters of the network.

1. SGD/SGDOptimizer
--------------------

:code:`SGD` is an :code:`Optimizer` subclass that implements `stochastic gradient descent <https://arxiv.org/pdf/1609.04747.pdf>`_ , one method in the broader family of `gradient descent <https://zh.wikipedia.org/zh-hans/梯度下降法>`_ algorithms.
When training on a large number of samples, :code:`SGD` is usually the choice that makes the loss function converge faster.

For the API reference, see api_fluid_optimizer_SGDOptimizer_

.. _api_fluid_optimizer_SGDOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-8-sgdoptimizer
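
A minimal sketch of the usual pattern, assuming the :code:`fluid` Python API of the 0.14/1.x line documented above (the toy network and all hyperparameter values are illustrative only):

.. code-block:: python

    import numpy
    import paddle.fluid as fluid

    # A trivial regression network: one fully-connected layer.
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_pred = fluid.layers.fc(input=x, size=1)
    avg_loss = fluid.layers.mean(
        fluid.layers.square_error_cost(input=y_pred, label=y))

    # minimize() appends the backward pass and the SGD update ops to the program.
    sgd = fluid.optimizer.SGD(learning_rate=0.001)
    sgd.minimize(avg_loss)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())
    loss_value, = exe.run(
        feed={'x': numpy.random.rand(8, 13).astype('float32'),
              'y': numpy.random.rand(8, 1).astype('float32')},
        fetch_list=[avg_loss])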

2. Momentum/MomentumOptimizer
------------------------------

The :code:`Momentum` optimizer adds momentum on top of :code:`SGD`, reducing the noise inherent in stochastic gradient descent.
Setting the :code:`use_nesterov` parameter to False or True selects, respectively, the classic `Momentum (Section 4.1 of the paper)
<https://arxiv.org/pdf/1609.04747.pdf>`_ algorithm or the `Nesterov accelerated gradient (Section 4.2 of the paper)
<https://arxiv.org/pdf/1609.04747.pdf>`_ algorithm.

For the API reference, see api_fluid_optimizer_MomentumOptimizer_

.. _api_fluid_optimizer_MomentumOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-9-momentumoptimizer
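
A constructor sketch under the same assumptions (hyperparameter values are illustrative; :code:`minimize` is then called on a loss variable exactly as in the SGD sketch above):

.. code-block:: python

    import paddle.fluid as fluid

    # use_nesterov=False selects classic momentum (Section 4.1);
    # use_nesterov=True selects Nesterov accelerated gradient (Section 4.2).
    optimizer = fluid.optimizer.Momentum(learning_rate=0.01,
                                         momentum=0.9,
                                         use_nesterov=True)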

3. Adagrad/AdagradOptimizer
---------------------------

The `Adagrad <http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_ optimizer adaptively assigns each parameter its own learning rate, which addresses the problem of sample counts being unevenly distributed across parameters.

For the API reference, see api_fluid_optimizer_AdagradOptimizer_

.. _api_fluid_optimizer_AdagradOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-10-adagradoptimizer
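
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # Each parameter receives its own learning rate, scaled by its
    # accumulated squared gradients; epsilon avoids division by zero.
    optimizer = fluid.optimizer.Adagrad(learning_rate=0.2, epsilon=1.0e-6)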

4. RMSPropOptimizer
--------------------

The `RMSProp optimizer <http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf>`_ is another adaptive learning-rate method.
It mainly addresses the sharp decay of the learning rate in the middle and late stages of training that occurs with Adagrad.

For the API reference, see api_fluid_optimizer_RMSPropOptimizer_

.. _api_fluid_optimizer_RMSPropOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-14-rmspropoptimizer
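
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # rho is the decay of the moving average of squared gradients, which
    # keeps the effective learning rate from collapsing as in Adagrad.
    optimizer = fluid.optimizer.RMSPropOptimizer(learning_rate=0.1,
                                                 rho=0.95,
                                                 epsilon=1.0e-6)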

5. Adam/AdamOptimizer
----------------------

The `Adam <https://arxiv.org/abs/1412.6980>`_ optimizer is an adaptive learning-rate method,
suited to most non-convex optimization problems (see `convex optimization <https://zh.wikipedia.org/zh/凸優化>`_ ), large datasets, and high-dimensional spaces. In practice, :code:`Adam` is the most commonly used optimizer.

For the API reference, see api_fluid_optimizer_AdamOptimizer_

.. _api_fluid_optimizer_AdamOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-11-adamoptimizer
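
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # beta1 and beta2 are the exponential decay rates of the first and
    # second moment estimates; the values below are the paper's defaults.
    optimizer = fluid.optimizer.Adam(learning_rate=0.001,
                                     beta1=0.9,
                                     beta2=0.999)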


6. Adamax/AdamaxOptimizer
--------------------------

`Adamax <https://arxiv.org/abs/1412.6980>`_ is a variant of the :code:`Adam` algorithm that puts a simpler upper bound on the learning rate, making its range easier to control.

For the API reference, see api_fluid_optimizer_AdamaxOptimizer_

.. _api_fluid_optimizer_AdamaxOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-12-adamaxoptimizer
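
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # Same moment hyperparameters as Adam; the second-moment update is
    # based on an infinity-norm bound, per the Adam paper's Section 7.
    optimizer = fluid.optimizer.Adamax(learning_rate=0.002,
                                       beta1=0.9,
                                       beta2=0.999)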


7. DecayedAdagrad/DecayedAdagradOptimizer
------------------------------------------

The `DecayedAdagrad <http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf>`_ optimizer can be seen as the :code:`Adagrad` algorithm with a decay rate added; it resolves the sharp drop in the learning rate that Adagrad suffers in the middle and late stages of training.

For the API reference, see api_fluid_optimizer_DecayedAdagrad_

.. _api_fluid_optimizer_DecayedAdagrad: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-13-decayedadagradoptimizer
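
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # decay discounts the accumulated squared gradients, so the effective
    # learning rate does not collapse the way it can with plain Adagrad.
    optimizer = fluid.optimizer.DecayedAdagrad(learning_rate=0.2,
                                               decay=0.95,
                                               epsilon=1.0e-6)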


8. Ftrl/FtrlOptimizer
----------------------

The `FtrlOptimizer <https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf>`_ combines the high accuracy of the `FOBOS algorithm <https://stanford.edu/~jduchi/projects/DuchiSi09b.pdf>`_ with the sparsity of the `RDA algorithm
<http://www1.se.cuhk.edu.hk/~sqma/SEEM5121_Spring2015/dual-averaging.pdf>`_ , and is currently one of the best-performing `Online Learning <https://en.wikipedia.org/wiki/Online_machine_learning>`_ algorithms.

For the API reference, see api_fluid_optimizer_FtrlOptimizer_

.. _api_fluid_optimizer_FtrlOptimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-15-ftrloptimizer
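
A constructor sketch under the same assumptions (values illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    # l1 and l2 control the regularization that produces sparse solutions;
    # lr_power shapes the per-coordinate learning-rate schedule.
    optimizer = fluid.optimizer.Ftrl(learning_rate=0.1,
                                     l1=0.01,
                                     l2=0.01,
                                     lr_power=-0.5)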

9. ModelAverage
----------------

The :code:`ModelAverage` optimizer accumulates the parameters over a sliding window during training and substitutes their average at prediction time, improving overall prediction accuracy.

For the API reference, see api_fluid_optimizer_ModelAverage_

.. _api_fluid_optimizer_ModelAverage: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-17-modelaverage
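
A usage sketch, assuming the 0.14/1.x :code:`ModelAverage` interface (the window sizes are illustrative; :code:`exe`, :code:`inference_program`, and the feed/fetch lists come from the surrounding training and inference code):

.. code-block:: python

    import paddle.fluid as fluid

    # During training, accumulate parameters over a sliding window.
    model_average = fluid.optimizer.ModelAverage(average_window_rate=0.15,
                                                 min_average_window=10000,
                                                 max_average_window=20000)

    # At prediction time, temporarily swap in the averaged parameters:
    # with model_average.apply(exe):
    #     exe.run(inference_program, feed=feed_dict, fetch_list=fetch_list)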


10. Optimizer
--------------

:code:`Optimizer` is the base class of the optimizers in :code:`Fluid`. It defines the common optimizer interface through which the classic algorithms above are invoked.

For the API reference, see api_fluid_optimizer_

.. _api_fluid_optimizer: http://www.paddlepaddle.org/docs/0.14.0/api/fluid/en/optimizer.html#permalink-18-optimizer
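
A sketch of that common interface (hedged: in the 0.14/1.x API, :code:`minimize` returns the appended optimize ops together with the (parameter, gradient) pairs; the helper function is purely illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    def attach_optimizer(optimizer, avg_loss):
        # The shared base-class entry point: append backward and update ops.
        opt_ops, params_grads = optimizer.minimize(avg_loss)
        return opt_ops, params_grads

    # Any of the subclasses above works through the same interface, e.g.:
    # attach_optimizer(fluid.optimizer.Adam(learning_rate=0.001), avg_loss)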
66 changes: 66 additions & 0 deletions doc/fluid/dev/versioning_en.md
@@ -0,0 +1,66 @@
# Versioning (Work In Progress)


The PaddlePaddle framework follows Semantic Versioning 2.0 (semver).
Each release has a version of the following format: MAJOR.MINOR.PATCH
(e.g. 1.2.0). Some key points:


* A major version change can introduce backward-incompatible changes: code that works with the old version does not necessarily work with the new one. In addition, data generated by the previous major version, such as serialized Program models and checkpointed parameters, might not work in the new version. We will attempt to build tools to help migrate across major releases.

* A minor version change always maintains backward compatibility. It normally contains compatible improvements and bug fixes.

* A patch version change is for bug fixes only.

* Violations of this policy are considered bugs and should be fixed.

### What is Covered

* All public, documented Python APIs, excluding those that live in the contrib namespace.

### What is Not Covered

* If an API's implementation has a bug, we reserve the right to fix the bug and change the behavior.

* The Python APIs in contrib namespace.

* Python functions and classes whose names start with '_'.

* The offline tools.

* The data generated by the framework, such as the serialized Program model file and checkpointed variables, which are subject to the different versioning scheme described below.

* C++ Inference APIs. (To be covered)


## Data


Data refers to the artifacts generated by the framework. Here, we specifically mean the serialized Program model file and the checkpointed variables.



* Backward compatibility: a user generates Data with PaddlePaddle version 1.1 and expects it to be consumed by PaddlePaddle version 1.2.
This can happen when a new online system wants to serve an old model trained previously.



* Forward compatibility: a user generates Data with PaddlePaddle version 1.2 and expects it to be consumed by PaddlePaddle version 1.1.
This can happen when a new, successful research model needs to be served by an old online system that is not frequently upgraded.



### Versioning

Data is assigned an integer version number. The version is increased whenever an incompatible change is introduced.

The PaddlePaddle framework supports an interval of Data versions. Within the same major (semver) version, the PaddlePaddle framework cannot drop support for lower Data versions; hence, a minor version change cannot drop support for any Data version.


For example, suppose PaddlePaddle version 1.1 supports Program versions 3 to 5. Later, the Program version is increased from 5 to 6 because an attribute is added; as a result, PaddlePaddle version 1.1 cannot consume version-6 Programs. PaddlePaddle 1.2 should then support Program versions 3 to 6. PaddlePaddle cannot drop support for Program version 3 until PaddlePaddle version 2.0.
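
The support-window rule can be sketched as follows (a hypothetical illustration, not framework code; the version numbers mirror the example above):

```python
# A framework release supports a closed interval of Program (Data) versions.
# The lower bound may only move at a major release.
SUPPORTED_PROGRAM_VERSIONS = {
    "1.1": range(3, 6),   # Program versions 3..5
    "1.2": range(3, 7),   # Program versions 3..6
    "2.0": range(4, 7),   # a major release may finally drop version 3
}

def can_load(framework_version, program_version):
    return program_version in SUPPORTED_PROGRAM_VERSIONS[framework_version]

assert can_load("1.2", 6) and not can_load("1.1", 6)
```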



### Known Issues

Currently, forward compatibility with new Data versions is best-effort.
