
update #18

Merged
merged 132 commits into from
Jul 5, 2021

Conversation

AnnaTrainingG
Owner

PR types

PR changes

Describe

MingMingShangTian and others added 30 commits June 22, 2021 15:56
* transform complex scale to tensor

* add test_case for complex scalar

* modify import paddle
* using argparse to handle selections

* 2 TODOs

* leave the pipeline config unchanged for now; force the GPU version here

* sorted the all_names

* exec gpu sample codes tests incrementally

* get all apis from the pr.spec file

* condition with WITH_GPU

WITH_GPU == ON

save

* delete the useless codes

* delete the useless codes.

test=document_fix

* echo the diff result

test=document_fix

* dont reuse the variables

* renaming fun to _func did not work; put it into the skiplist

038ffc7
test=document_fix

* skip it in check api approvals

test=document_fix

save

* skip the private _variables

* print signatures wrong. now rename it to _func

test=document_fix
* new api diagonal, test=develop

* add new api diagonal, test=develop

* new api diagonal, test=develop

* add new api paddle.diagonal, test=develop

* use framework::stride replace ComputeDimStride

* replace cudaMalloc/cudaMemcpy with TensorFromVector in cudaKernel and cudaGradKernel

* improve function: when attr(offset) exceeds the range given by attr(axis1)/attr(axis2), set the diagonal dim to 0

* fix RP-Mac-CI bug: replace framework::stride() with ComputeDimStride.
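Both helpers swapped back and forth above compute row-major strides from a dims vector. As a rough illustration of that computation (a pure-Python sketch, not Paddle's actual implementation):

```python
def compute_strides(dims):
    """Row-major (C-order) strides: stride[i] is the number of
    elements to skip when index i increases by one."""
    strides = [1] * len(dims)
    for i in range(len(dims) - 2, -1, -1):
        strides[i] = strides[i + 1] * dims[i + 1]
    return strides

print(compute_strides([2, 3, 4]))  # [12, 4, 1]
```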

* perfect code-block

* perfect code of python API diagonal

* api supports dtype of float16 and bool

* api supports dtype of float16 and bool

* modify unittest code

* modify unittest code

* perfect dtype describe

* perfect code-block
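paddle.diagonal extracts a diagonal selected by an offset and two axes. A minimal pure-Python sketch of the 2-D case on nested lists, including the behavior noted in the commits above where an out-of-range offset yields an empty (dim-0) diagonal; the helper name `diagonal_2d` is illustrative, not the API:

```python
def diagonal_2d(matrix, offset=0):
    """Return the diagonal of a 2-D nested list.
    offset > 0 selects a diagonal above the main one, offset < 0 below;
    an offset outside the matrix yields an empty result."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    result = []
    for i in range(rows):
        j = i + offset
        if 0 <= j < cols:
            result.append(matrix[i][j])
    return result

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(diagonal_2d(m))      # [1, 5, 9]
print(diagonal_2d(m, 1))   # [2, 6]
print(diagonal_2d(m, -1))  # [4, 8]
print(diagonal_2d(m, 5))   # []
```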
* fix bug about deallocating None, test=develop
* elastic unittest

* rename demo
* optimize attr default value, test=develop

* refine, test=develop

* refine, test=develop

* refine, test=develop

* fix bug in AttrReader, test=develop

* fix bug, test=develop

* fix double_grad, test=develop

* refine, test=develop

* refine, test=develop

* fix checker null, test=develop

* for test, test=develop

* refine, test=develop

* refine, test=develop

* refine, test=develop

* refine, test=develop

* refine, test=develop

* refine, test=develop
* base changes for split op

* 90% of split functionality added

* full fp32 functionality

* added bf16 test

* added submemory caching

* added bf test to static mode whitelist

* minor change

* enabled split op for inference

* minor fix

* minor fix
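The split op above divides a tensor into equal sections along an axis. A minimal sketch of the axis-0 case on plain lists (the helper name and signature are illustrative, not the op's API):

```python
def split_axis0(data, num_sections):
    """Split a list into num_sections equal parts along axis 0."""
    n = len(data)
    assert n % num_sections == 0, "axis length must be divisible by num_sections"
    size = n // num_sections
    return [data[i * size:(i + 1) * size] for i in range(num_sections)]

print(split_axis0([1, 2, 3, 4, 5, 6], 3))  # [[1, 2], [3, 4], [5, 6]]
```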
* tmp

* pass con_element_add2_act

* recover unittests CMakeLists

* init pass enhance

* fix the attr according to review

* repair the attr conv2d

* repair axis of elementwise_add

* CI-coverage test=allcase

* repair some attrs

* recover batch_norm_act

* conv_elementwise_add2_act_fuse
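One of the fixes above repairs the axis attribute of elementwise_add, which controls which dimension of x a lower-rank y is broadcast against. A rough sketch of the idea for a 2-D x and 1-D y on plain lists (an assumption-laden illustration, not Paddle's implementation):

```python
def elementwise_add(x, y, axis):
    """Broadcast-add 1-D y onto 2-D x, aligning y with x's dim `axis`:
    axis=0 pairs y with rows, axis=1 pairs y with columns."""
    cols = len(x[0])
    if axis == 0:
        return [[x[i][j] + y[i] for j in range(cols)] for i in range(len(x))]
    return [[x[i][j] + y[j] for j in range(cols)] for i in range(len(x))]

x = [[1, 2], [3, 4]]
print(elementwise_add(x, [10, 20], axis=0))  # [[11, 12], [23, 24]]
print(elementwise_add(x, [10, 20], axis=1))  # [[11, 22], [13, 24]]
```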
* refactor check_pr_approval, allow using github login-id

2. remove the tricks for paddle.fluid.layers.ops.func

* add testcases

* simplify the test data, and added to file diff approvals

* remove an approver

* test_print_signatures runs on a simple pipeline, no paddle installed

* testcases for print_signatures and sampcd. Python 3 only.

* remove unused import directives

* remove unused import directives
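The refactored check_pr_approval matches approvers by GitHub login id. A hedged sketch of the core check; the function name, signature, and parameters here are assumptions for illustration, not the script's actual interface:

```python
def check_pr_approval(approvals, required_logins, min_count=1):
    """approvals: login ids that approved the PR.
    Passes when at least min_count of them are in required_logins."""
    hits = [login for login in approvals if login in required_logins]
    return len(hits) >= min_count

print(check_pr_approval(["alice", "bob"], {"bob", "carol"}))  # True
print(check_pr_approval(["alice"], {"bob", "carol"}))         # False
```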
* add trt LT version helper

* remove deprecated nvinfer1::DimsCHW and replace it to nvinfer1::Dims3

* remove deprecated nvinfer1::DimsNCHW and replace it to nvinfer1::Dims4

* update deserialize engine

* update to createNetworkV2

* update to createNetworkV2

* update buildWithConfig and remove redundant config settings

* replace createNetwork to createNetworkV2

* fix int8

* addMatrixMultiply

* remove unnecessary const cast

* IBuilder->setInt8Calibrator() is deprecated

* auto enable fp16 when using int8

* remove the redundant line
* Modify the search order of dynamic library

* Modify the search order of dynamic library
JZ-LIANG and others added 27 commits July 1, 2021 13:35
* dygraph sharding

* update unittest hybrid_parallel_communicate_group
* fix safe bug of scatter/scatter_nd
* add random and prevent deadlock
* refine the old code

* support moving_average_abs_max and per_channel_abs_max

* Add moving_average_abs_max_scale op

* Convert the test program
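moving_average_abs_max tracks a quantization scale as an exponential moving average of each batch's absolute maximum. A minimal sketch of one update step (the decay value and names are illustrative, not the op's defaults):

```python
def moving_average_abs_max(prev_scale, batch, decay=0.9):
    """One EMA update of the quantization scale:
    scale <- decay * scale + (1 - decay) * max(|x|)."""
    batch_max = max(abs(v) for v in batch)
    return decay * prev_scale + (1 - decay) * batch_max

s = 1.0
for batch in ([0.5, -2.0], [1.0, -0.5]):
    s = moving_average_abs_max(s, batch)
print(round(s, 2))  # 1.09
```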
* delete useless GELU in gelu npu op

* add description

* fix format

* add check_grad in gelu unittest
…paddle.nn.quant (#33871)

* Save all scales to target ops
* Move quant layers to paddle.nn.quant
@AnnaTrainingG AnnaTrainingG merged commit 25ba21c into AnnaTrainingG:develop Jul 5, 2021
AnnaTrainingG pushed a commit that referenced this pull request Dec 6, 2023
Slightly improve installation process
AnnaTrainingG pushed a commit that referenced this pull request Dec 6, 2023