This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

subgraph TODO #11896

Open
5 of 9 tasks
zheng-da opened this issue Jul 26, 2018 · 5 comments
Comments

@zheng-da
Contributor

zheng-da commented Jul 26, 2018

Subgraph was proposed as a general mechanism for integrating many different backends.
https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+acceleration+libraries

As the project progresses, more people have joined. To improve collaboration, we will maintain a TODO list here and update it as new tasks arise.

  • merge the default subgraph operator with CachedOp and make CachedOp a normal operator @zheng-da [MXNET-876] make CachedOp a normal operator #11641
  • search for weight arrays in the subgraph, reorder their layout and cache the weight arrays with optimal layout. This task may need to be done for specific backends.
  • enable graph partitioning in CachedOp and bind of the symbol executor.
  • infer shape/dtype/storage info before graph partitioning and pass nnvm::Graph that carries shape/dtype/storage info as an argument when creating a subgraph node.
  • Customize memory planning. This task is now needed for MKLDNN because MKLDNN still uses CachedOp to execute operators inside a subgraph. It may be required by other backends as well.
  • Memory format conversion on subgraph boundary.
  • Support control flow operators. More generally, support graphs with subgraph nodes.
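To make the partitioning task above concrete, here is a minimal sketch of how a backend-driven partitioner might group maximal runs of supported operators into subgraph nodes. All names here (`Node`, `partition`, `SUPPORTED`) are hypothetical illustrations, not the actual MXNet/NNVM subgraph API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    op: str
    inputs: List["Node"] = field(default_factory=list)

# Ops the (hypothetical) backend can execute inside a subgraph.
SUPPORTED = {"conv", "relu", "bn"}

def partition(nodes):
    """Greedily group maximal runs of supported ops (nodes given in
    topological order) into subgraph nodes; unsupported ops stay as-is."""
    result, run = [], []
    for n in nodes:
        if n.op in SUPPORTED:
            run.append(n)
        else:
            if run:
                result.append(Node("subgraph%d" % len(result), "_subgraph", run))
                run = []
            result.append(n)
    if run:
        result.append(Node("subgraph%d" % len(result), "_subgraph", run))
    return result
```

A real implementation would walk a DAG and respect data dependencies rather than a flat topological list, but the replace-run-with-subgraph-node step is the same idea.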

There are two tasks that are MKLDNN-specific.

  • Support the imperative mode: convert the format of outputs back to the default format.
  • Subgraph accuracy issue.
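The boundary-conversion task can be illustrated with a round trip between a default NCHW layout and a channel-blocked layout (in the spirit of MKL-DNN's blocked formats such as nChw8c). This is a hedged NumPy sketch of the data movement only; it is not the MKL-DNN reorder API.

```python
import numpy as np

def to_blocked(x, block=8):
    """NCHW -> blocked layout: split the channel dim into
    (C // block, block) and move the inner block to the last axis."""
    n, c, h, w = x.shape
    assert c % block == 0, "sketch assumes C divisible by the block size"
    return x.reshape(n, c // block, block, h, w).transpose(0, 1, 3, 4, 2)

def to_default(xb):
    """Inverse reorder: blocked layout -> NCHW."""
    n, cb, h, w, block = xb.shape
    return xb.transpose(0, 1, 4, 2, 3).reshape(n, cb * block, h, w)
```

In imperative mode, a conversion like `to_default` would run on every subgraph output before handing the array back to the user, which is exactly the overhead the task wants to manage.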

@reminisce @pengzhao-intel @TaoLv @ZhennanQin @ashokei

@pengzhao-intel
Contributor

Thanks, @zheng-da. The subgraph will be a unified interface between different implementations and will make the whole architecture very clear.

We're very happy to work together to accelerate the progress :) And MKL-DNN would be the first backend with this wonderful bridge 🥇

@mbrookhart

mbrookhart commented Jul 26, 2018

@zheng-da I think all we need for nGraph is:

  • enable graph partitioning in CachedOp and bind of the symbol executor.
  • infer shape/dtype/storage info before graph partitioning and pass nnvm::Graph that carries shape/dtype/storage info as an argument when creating a subgraph node.

We'll review internally and try to assign an engineer to those items.

@zheng-da
Contributor Author

@azai91

@azai91
Contributor

azai91 commented Jul 27, 2018

For task 2, what is the optimal layout for the weights?

@zheng-da
Contributor Author

zheng-da commented Jul 28, 2018

@azai91 the optimal layout is defined by the operator, which is the tricky part of this task: how do we determine the optimal layout without breaking the current API?
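Whatever layout the operator picks, the caching side of task 2 is straightforward: reorder each weight once on first use and reuse the cached copy afterwards. A minimal sketch, with a hypothetical cache and a stand-in OIHW-to-IOHW reorder (the real preferred layout would come from the backend):

```python
import numpy as np

# Cache of reordered weights, keyed by (weight id, target layout).
_weight_cache = {}

def get_reordered_weight(weight_id, weight, layout, reorder_fn):
    """Reorder `weight` with `reorder_fn` the first time a given
    (weight, layout) pair is requested; return the cached copy after."""
    key = (weight_id, layout)
    if key not in _weight_cache:
        _weight_cache[key] = reorder_fn(weight)
    return _weight_cache[key]

def oihw_to_iohw(w):
    """Example reorder: swap output/input channel dims (a stand-in
    for whatever blocked layout the operator actually prefers)."""
    return np.ascontiguousarray(w.transpose(1, 0, 2, 3))
```

The open question in the thread is how `layout` gets chosen per operator without changing the existing operator API, not the caching itself.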


7 participants