
Backport #17002, #17068 and #17114 to 1.6 branch #17137

Merged · 3 commits · Dec 20, 2019

Commits on Dec 20, 2019

  1. Improve the speed of the pointwise fusion graph pass (apache#17114)

    * Debug the long startup time
    
    * Optimize backward fusion
    
    * Figure out why the fusion pass is called twice
    
    * Cleaning
    
    * Small optimization
    ptrendx committed Dec 20, 2019
    SHA: be9b20c
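    Since this commit targets the cost of the fusion pass itself, a quick way to exercise the pass is to time the first forward call of a hybridized block of pointwise ops, which is when the pass runs. A minimal sketch, assuming a GPU build of MXNet 1.6 where the `MXNET_USE_FUSION` switch controls the pass; the `PointwiseChain` block and the timing are illustrative, not taken from the PR:

    ```python
    import os
    # The fusion pass must be toggled before the first graph is built.
    os.environ["MXNET_USE_FUSION"] = "1"

    import time
    import mxnet as mx
    from mxnet.gluon import HybridBlock

    class PointwiseChain(HybridBlock):
        """A chain of elementwise ops the fusion pass can merge into one kernel."""
        def hybrid_forward(self, F, x):
            for _ in range(50):
                x = F.relu(x * 1.01 + 0.01)
            return x

    net = PointwiseChain()
    net.hybridize()  # the fusion graph pass runs when the graph is built

    x = mx.nd.ones((1024, 1024), ctx=mx.gpu(0))
    start = time.time()
    net(x).wait_to_read()  # first call pays the graph-pass cost apache#17114 reduces
    print("first call, including fusion pass: %.3fs" % (time.time() - start))
    ```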
  2. [BUGFIX] Fix trainer param order (apache#17068)

    * fix trainer param order
    
    * Update trainer.py
    
    * Update trainer.py
    
    * Update trainer.py
    eric-haibin-lin authored and ptrendx committed Dec 20, 2019
    SHA: 67b5cc5
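    The bug class here is order sensitivity: a trainer builds an index for each parameter, and in distributed training every worker must map the same parameter to the same index. A minimal sketch of the deterministic ordering such a fix is after, using a hypothetical `assign_param_indices` helper rather than MXNet's actual trainer.py internals:

    ```python
    def assign_param_indices(param_dict):
        """Map parameter names to indices independently of dict insertion order."""
        # Sorting by name makes the assignment identical on every worker.
        return {name: idx for idx, name in enumerate(sorted(param_dict))}

    # Two workers that created the same parameters in different orders
    worker_a = {"dense0_weight": ..., "dense0_bias": ...}
    worker_b = {"dense0_bias": ..., "dense0_weight": ...}
    assert assign_param_indices(worker_a) == assign_param_indices(worker_b)
    ```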
  3. [reproducibility] multi_sum_sq review, AtomicAdd removal (apache#17002)

    * Update multi_sum_sq to avoid AtomicAdd
    
    * Add specific test for multi_sum_sq
    
    * Add a determinism test and fix lint issues
    
    * Better test for checking that the op is deterministic
    
    * Follow MXNet letter-case conventions
    
    * Reduce dimensions of tensors in the test
    MoisesHer authored and ptrendx committed Dec 20, 2019
    SHA: 7d20ac4
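    The reproducibility angle: atomicAdd accumulates partial sums in whatever order threads happen to commit, so floating-point rounding can differ between runs, while a fixed-order reduction makes the op bitwise repeatable. A minimal sketch of the kind of determinism test the commit describes, assuming the operator is exposed as `mx.nd.multi_sum_sq` with a `num_arrays` parameter; adjust if the binding differs in your build:

    ```python
    import mxnet as mx

    arrays = [mx.nd.random.uniform(shape=(100, 100), ctx=mx.gpu(0))
              for _ in range(4)]

    first = mx.nd.multi_sum_sq(*arrays, num_arrays=len(arrays)).asnumpy()
    for _ in range(10):
        again = mx.nd.multi_sum_sq(*arrays, num_arrays=len(arrays)).asnumpy()
        # Bitwise equality, not allclose: removing AtomicAdd fixes the
        # reduction order, so repeated runs must match exactly.
        assert (first == again).all()
    ```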