This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[RFC] Apache MXNet 2.0 Roadmap #16167

Open
szha opened this issue Sep 13, 2019 · 36 comments
Labels
RFC Post requesting for comments Roadmap

Comments

@szha
Member

szha commented Sep 13, 2019

Overview

Status: https://github.com/apache/incubator-mxnet/projects/18
The status for each project will be updated by the contributor who's driving it. If you have more projects that you intend to drive, please first discuss them here.

The purpose of this RFC is to organize and present the roadmap towards 2.0. As 2.0 will be a major release, changes that would break backward compatibility are permissible.

The proposed changes in this RFC are either collected from past roadmap discussions such as #9686, or are based on various common issues from the past. This RFC organizes these changes into self-contained projects to facilitate a clear definition of each project, and captures the risks and status quo to the best of our knowledge. To help navigate, the projects are further divided into several high-level areas. Some of the listed projects are already in progress and are included to provide a clear overview.

The objectives of Apache MXNet 2.0 include:

  • Improve expressiveness and usability of user-facing API.
  • Improve expressiveness and usability of the technical stack for lower development cost and maintainability.

In terms of frontend, this roadmap focuses mostly on the Python frontend, since MXNet has been taking a Python-first approach. The expectation for other language bindings is that they evolve along with the backend and make use of the improvements. Given that breaking changes can occur, maintainers of the different language bindings are expected to participate in the related interface definition discussions.

1. MXNet NP Module

NumPy has long been established as the standard math library in Python, the most prevalent language in the deep learning community. With this library as the cornerstone, the largest ecosystem and community for scientific computing has formed around it. The popularity of NumPy comes from its flexibility and generality.

In #14253, the MXNet community reached consensus on moving towards a NumPy-compatible programming experience and committed to a major endeavor of providing NumPy-compatible operators.

The primary goal of the projects below is to provide the usability and expressiveness of NumPy in MXNet to facilitate deep learning model development. This not only helps existing deep learning practitioners but also gives people in the existing NumPy community a shortcut for getting started in deep learning. The efforts towards this goal also serve a secondary goal: enabling the existing NumPy ecosystem to utilize GPUs and accelerators to speed up large-scale computation.
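To make the target experience concrete, here is a minimal sketch of NumPy-compatible usage through the `mxnet.np`/`mxnet.npx` namespaces introduced by the work in #14253; the exact calls shown are illustrative.

```python
# Minimal sketch: NumPy-style array manipulation backed by MXNet.
from mxnet import np, npx

npx.set_np()  # activate NumPy-compatible semantics

x = np.arange(12).reshape(3, 4)                 # same call signature as numpy.arange
y = np.ones((3, 4))
z = np.dot(x, y.T) + x.mean(axis=1, keepdims=True)
print(z.shape)                                  # (3, 3)
```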

1.1. NumPy Operator Testing

Scope:

  1. adopt array_function and NumPy's existing tests (a parity-check sketch follows this list)
  2. extend testing to GPU
  3. investigate NumPy testing strategies
  4. decide correctness criteria for acceptance
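A hedged sketch of the kind of parity check item 1 implies: run the same call through NumPy and `mxnet.np` and compare the results. The helper name and the tolerances are illustrative assumptions, not the project's actual acceptance criteria.

```python
import numpy as onp
from mxnet import np, npx

npx.set_np()

def check_against_numpy(name, *args, rtol=1e-5, atol=1e-6):
    """Compare mxnet.np.<name> against numpy.<name> on the same inputs."""
    expected = getattr(onp, name)(*args)
    actual = getattr(np, name)(*[np.array(a) for a in args])
    onp.testing.assert_allclose(actual.asnumpy(), expected, rtol=rtol, atol=atol)

check_against_numpy("add", onp.random.rand(2, 3), onp.random.rand(2, 3))
check_against_numpy("dot", onp.random.rand(3, 4), onp.random.rand(4, 5))
```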

1.2. NumPy Operator performance profiling

Scope:

  1. Automatically profile the performance of NumPy operators

1.3. NumPy operator coverage

Scope:

  1. improve operator coverage until full NumPy coverage is reached, prioritizing operators used in the ecosystem and in deep learning in general

Operator coverage as of 07/03/2019

|    module |     NumPy | deepNumPy |       jax |      cupy |
|-----------|-----------|-----------|-----------|-----------|
|        np |       603 |        89 |       445 |       321 |
|   ndarray |        71 |        32 |        71 |        56 |
|    random |        63 |         5 |        15 |        49 |
|    linalg |        31 |         2 |         8 |        15 |

1.4. NumPy Extension Operator Reorganization and Renaming

Scope:

  1. consistent type usage for index input and return values from sort, topk (Use dtype=int for the indices returned by TopK #11031, [MXNET-507] Set dtype=int32 for ret_indices in ordering ops #11134, topk regression #12197)
  2. array creation operators with flexible dtype definition (dtype=None) ([MXNET-798] Fix the dtype cast from non float32 in Gradient computation #12290)
  3. moving_mean/moving_var in batchnorm
  4. consistent usage of axis vs dim
  5. promote or deprecate contrib operators

1.5. NumPy ndarray type extension

Scope:

  1. bfloat16 support (not in NumPy yet but useful for deep learning) (low priority — Intel)
  2. boolean type support
  3. complex (for FFT)

1.6. NumPy ndarray boolean indexing

Scope:

  1. allow boolean masks in NumPy ndarray indexing by adding the required operator, potentially by extending op.where (see the sketch below)
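A minimal sketch of the target behavior, assuming NumPy-style boolean masks on `mxnet.np` arrays (this in turn depends on the boolean dtype from 1.5); the `np.where` line shows a related computation that is already expressible with existing ops.

```python
from mxnet import np, npx

npx.set_np()

x = np.array([[1., -2., 3.], [-4., 5., -6.]])
mask = x > 0                       # boolean mask
positives = x[mask]                # goal: NumPy-compatible boolean indexing -> [1., 3., 5.]
# related computation with existing ops (keeps the original shape instead of compacting):
clipped = np.where(mask, x, np.zeros_like(x))
```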

1.7. Hybridizable basic (and advanced) indexing

Scope:

  1. Allow operations such as y = x[1:3, 2, ...] to be hybridizable (see the sketch below)

Note: Preliminary work: #15663
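A hedged sketch of what 1.7 aims to enable: NumPy-style basic indexing inside a Gluon block that survives hybridization. The forward() signature shown follows the Gluon 2.0 style discussed in section 4.2 (in 1.x this would be hybrid_forward(self, F, x)); details may differ in the final design.

```python
from mxnet import np, npx
from mxnet.gluon import nn

npx.set_np()

class SliceHead(nn.HybridBlock):
    def forward(self, x):
        # basic indexing that should remain valid after hybridize()
        return x[1:3, 2, ...]

net = SliceHead()
net.hybridize()
out = net(np.ones((4, 5, 6)))   # expected shape: (2, 6)
```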

2. Graph Enhancement and 3rdparty support

The objective of the following projects is to enable easier development of third-party extensions without requiring changes to be checked into the MXNet project. Examples of such extensions include third-party operator libraries and accelerators.

2.1. Graph Partitioning for Dynamic Shape Operators

Scope:

  1. partition inside control flow operators (and all cached ops)
  2. partition on operators with dynamic shapes for partial memory planning and caching.

2.2. Improved Third-party Operator Support

Scope:

  1. allow registering custom operators at runtime by exposing a C API (and frontend API) to register NNVM ops (a frontend-level illustration follows this list)
  2. verify that serialization, deserialization, and graph passes work properly for graphs containing these operators
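For context, MXNet 1.x already supports frontend-level custom operators through `mx.operator`; the project above aims to bring analogous registration down to the C/NNVM level. A minimal 1.x-style sketch of a custom scale-by-2 operator (illustrative only):

```python
import mxnet as mx

@mx.operator.register("scale2")
class Scale2Prop(mx.operator.CustomOpProp):
    def __init__(self):
        super(Scale2Prop, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def infer_shape(self, in_shape):
        # output shape equals input shape; no auxiliary states
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return Scale2()

class Scale2(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0] * 2)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], out_grad[0] * 2)

y = mx.nd.Custom(mx.nd.ones((2, 3)), op_type="scale2")   # y == 2 * ones((2, 3))
```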

2.3. Improved Third-party Backend Support (subgraph property)

Scope:

  1. expose a graph pass for standard graph partitioning with backend-specific criteria as a C API and frontend API

2.4. Large tensor support by default

Scope:

  1. enable default support for tensor with int64 dimension sizes
  2. make sure there’s no significant performance regression in operators

Risks:

  1. performance regression may happen in a subset of operators, which can disproportionately affect certain models.
  2. compatibility and silent behavior change.

Notes: in progress (RFC: https://lists.apache.org/thread.html/df53b8c26e9e0433378dd803baba9fec4dd922728a5ce9135dc164b3@%3Cdev.mxnet.apache.org%3E)

3. API Changes

The objective of the following projects is to address the technical debt accumulated during the development of MXNet 0.x and 1.x with respect to the API definition.

3.1. C-API Clean-up

C-API is the foundational API in MXNet that all language bindings depend on.

Scope:

  1. use packed function for flexibility (and potentially efficiency through avoiding string parsing)
  2. do not expose backend accelerator-specific types such as mkldnn::memory in C-API
  3. do not rely on topological ordering for argument passing (Reliance on topological ordering for graph inputs #15362).
  4. verification of thread-safety and performance for C API

Risks:

  1. backend integration may require refactoring or even redesign
  2. existing use cases such as other frontends may be broken without a substitute
  3. feedback is scattered and we may miss the opportunity to change some APIs in 2.0

3.2. Unify Executor

Scope:

  1. provide a SymbolBlock equivalent in C/C++ and unify the executor implementation for symbol/module with the one for Gluon blocks
  2. migrate other versions of inference API
  3. Support mirror option in the unified executor

3.3. Gradient of Gradient support

Scope:

  1. higher-order gradient support for a subset of operators (a usage sketch follows the risks below)

Risks:

  1. a large number of backward operators could introduce significant technical debt if not properly verified.
  2. ill-informed prioritization may result in usability issues (e.g. a common GAN not being supported)
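A minimal sketch of the gradient-of-gradient use case in scope, using the existing `mx.autograd.grad` with `create_graph=True`; only a subset of operators is expected to support this, and the NDArray-based API shown here may change with the NumPy work.

```python
import mxnet as mx
from mxnet import autograd

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()
with autograd.record():
    y = mx.nd.sin(x)
    # first-order gradient, kept in the graph so it can be differentiated again
    dy_dx = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
dy_dx.backward()
print(x.grad)   # second-order gradient: -sin(x)
```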

3.4. Autograd Extension

Scope:

  1. improve interface to support specifying intermediate output grad nodes
  2. improve interface for better usability. (retain_graph → something not involving graph)
  3. update graph pass for correctness

3.5. NNVM-backend Operator Interface Changes

Scope:

  1. support more than one temporary space
  2. split forward shape/type inference and reverse shape/type inference for better error messaging.
  3. remove deferred initialization (or improve its error/info messages)
  4. accompanying operator implementation changes

Risks:

  1. some changes may make operator implementations less error-prone but also less flexible, and thus require some rework.

4. Gluon 2.0

Since its introduction, the Gluon API has superseded the other model-development APIs such as the symbolic API and the model API. Conceptually, Gluon is the first attempt in the deep learning community to unify the flexibility of imperative programming with the performance benefits of symbolic programming, through trace-based just-in-time compilation.

The objectives of the following projects are:

  • address usability issues that result from the divergence in behavior between NDArray and Symbol.
  • extend the JIT to improve the coverage of hybridization.
  • introduce new functionality to facilitate more areas of research such as Bayesian methods and AutoML.
  • improve the usability and performance of the utilities in Gluon.

4.1. Unifying symbolic and imperative mode for tensor library

Scope:

  1. unify the operator implementation and behaviors of symbolic and imperative execution modes (How to debug hybridize() failures? #10875)
  2. allow naming for ndarray similar to symbol
  3. address the necessary changes in shape/type inference.

4.2. Unifying Block and HybridBlock

Scope:

  1. move hybridization logic to a JIT decorator (see the sketch after this list)
  2. extend parameter management to Block
  3. user-friendly warning for native control flow in JIT code.
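Purely illustrative: one way the unified Block plus JIT decorator could look. The decorator name and semantics are assumptions for discussion, not a committed API; everything else uses existing Gluon calls.

```python
from mxnet import np, npx
from mxnet.gluon import nn

npx.set_np()

class MLP(nn.Block):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(64, activation='relu')
        self.fc2 = nn.Dense(10)

    # @jit   # hypothetical decorator: opt this method into trace-based compilation,
    #        # replacing the separate HybridBlock class and explicit hybridize() call
    def forward(self, x):
        return self.fc2(self.fc1(x))

net = MLP()
net.initialize()
out = net(np.ones((8, 32)))
```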

4.3. Gluon Block Enhancement

Scope:

  1. inspection of graph internals similar to monitor for Module (PR 15839)
  2. support additional types in argument such as dict, kwargs, None
  3. fused parameters and gradients respectively
  4. register custom parameter

4.4. Enable Symbolic Shape (& Dtype) for Array Creation in NNVM-backend

Scope:

  1. allow flexible creation of arrays based on the shapes of other arrays that are only known at runtime
  2. add constant symbol type as the return value of symbol.shape (?)
  3. support constant symbol as operator arguments (?)
  4. constant folding for constant symbols

4.5. Gluon Distributions Module

Scope:

  1. sampling and PDF definitions for distributions (cf. MXFusion: https://github.com/amzn/MXFusion; PDF operators for the random samplers, and also the Dirichlet #14617)
  2. wrap operators into more usable distribution classes (see the sketch after this list)
  3. reproducible global seed
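Purely illustrative: a hypothetical distribution class of the kind item 2 describes, built on existing `mxnet.np` sampling ops. The names (`Normal`, `sample`, `log_prob`) are assumptions for discussion, not an existing MXNet API.

```python
import math
from mxnet import np, npx

npx.set_np()

class Normal:
    """Hypothetical wrapper pairing a sampler with its log-density."""
    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale

    def sample(self, size=None):
        return np.random.normal(self.loc, self.scale, size)

    def log_prob(self, x):
        var = self.scale ** 2
        return (-((x - self.loc) ** 2) / (2 * var)
                - math.log(self.scale) - 0.5 * math.log(2 * math.pi))

dist = Normal(loc=0.0, scale=1.0)
samples = dist.sample(size=(4,))
logp = dist.log_prob(samples)
```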

4.6. Gluon Metrics Module

Scope:

  1. address usability and performance issues in mxnet.metric using hybridizable NumPy ops (see the sketch below)
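A hedged sketch of item 1: computing accuracy directly with `mxnet.np` ops so the metric update can stay on-device and be hybridized. The function shape is illustrative, not a proposed final metric API.

```python
from mxnet import np, npx

npx.set_np()

def accuracy(pred, label):
    """pred: (N, C) scores; label: (N,) integer class ids; returns a scalar ndarray."""
    pred_class = np.argmax(pred, axis=-1).astype('int64')
    correct = (pred_class == label.astype('int64')).astype('float32')
    return correct.mean()

acc = accuracy(np.random.uniform(size=(8, 10)), np.arange(8) % 10)
```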

4.7. Gluon Optimizer Module

Scope:

  1. API changes such as consistent weight decay (Inconsistent weight decay logics in multiple optimizers #9881), and changing the default to not apply wd on bias terms (do not regularize beta and bias #11953) (a workaround sketch follows this list)
  2. hybridizable optimizers
  3. new optimizers (Optimizer wish list #9182)
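Illustrative of the second point in item 1: today the common workaround is to zero `wd_mult` on bias parameters; the proposal is to make "no weight decay on bias" the default. This sketch uses existing Gluon APIs.

```python
from mxnet import gluon
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
# exclude biases from weight decay by zeroing their wd multiplier
net.collect_params('.*bias').setattr('wd_mult', 0.0)
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'wd': 1e-4})
```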

4.8. Gluon Data API Extension and Fixes

Scope:

  1. address diverging interfaces and remove the transform= constructor arg (Transforms are not compatible with DownloadedDatasets #11141) (see the sketch after this list)
  2. reorganize the io/image modules and provide data loaders instead.
  3. lower the dataloader into the backend for efficiency (Low CPU usage of MXNet in subprocesses #13593)
  4. shared memory propagation?
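Illustrative of item 1: instead of a transform= constructor argument on the dataset (the source of #11141), transforms can be applied through the dataset's transform_first method and the result fed to a DataLoader. This sketch uses existing Gluon data APIs; the normalization constants are arbitrary.

```python
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

dataset = gluon.data.vision.MNIST(train=True)
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize(0.13, 0.31)])
# transform only the image (first element of each sample), leaving the label untouched
loader = gluon.data.DataLoader(dataset.transform_first(transform),
                               batch_size=32, shuffle=True, num_workers=2)
for data, label in loader:
    break   # data: (32, 1, 28, 28), label: (32,)
```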

4.9. Gluon Estimator Extension for Experimenting Utilities

Scope:

  1. logging of configuration (DeepNLU), state, and performance for checkpointing for easier resume
  2. pre-defined estimators for common problems

4.10. Gluon Estimator Refactoring for Examples and Tutorials

Scope:

  1. modularize and refactor unstructured scripts and examples into estimator class utilities

4.11. Gluon Distributed Training Usability Enhancement

Scope:

  1. more flexibility for communication with kvstore UDFs
  2. add distribution strategies to estimator
  3. plugin for communication backends (horovod, byteps, parameter server) for data parallel training
  4. data sharding/sampling/streaming enhancement for distributed training

5. Documentation

Documentation is the most important factor for new adoption of a library. The following projects aim to:

  • address the usability and discoverability issues in the current MXNet website
  • improve the quality of documentation to make it correct, clear, and concise.
  • help existing users adopt the changes in MXNet 2.0.

5.1. MXNet 2.0 Migration Guide

Scope:

  1. document high-level mapping from old functionality to new API for data pipeline, modeling, optimization, training loop, metric, inspection and logging, debugging.

Risks:

  1. parallel development of the doc may result in outdated doc.
  2. auto doc verification is needed.

5.2. MXNet 2.0 Developer Guide

Scope:

  1. carefully document the design and contribution guide for features with a low barrier to entry, such as operators, Gluon blocks, docs, optimizers, metrics, examples and tutorials.
  2. clear and up-to-date system design overview.
  3. clear roadmap

5.3. Adopt beta.mxnet.io as official website

Scope:

  1. infrastructure change for new doc build
  2. merge into master with NumPy.mxnet.io
  3. improve load time and browsing experience
  4. CDN in popular regions such as China, with automated validation and testing.

Note: https://github.com/ThomasDelteil/mxnet.io-v2

6. Profiling and Debugging

Profiling and debugging are common steps in the development of deep learning models, and proper tools can significantly improve developers' productivity. The objective of these projects is to provide such tools to make it easier to discover issues in the correctness and performance of models.

6.1. Memory Profiler

Scope:

  1. memory profiler logging support in backend
  2. automatic array naming tool based on scope
  3. tree-map visualization tool for inspecting profiler dump

6.2. Enhanced Debugging Tool

Scope:

  1. Enable user-specified error handling
  2. Improve error message
  3. Stacktrace inspection in debug API
  4. Automatic error reporting tool
  5. Runtime API for turning off asynchronous execution (see the sketch after this list)
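Illustrative of item 5: today synchronous, debug-friendly execution is selected via an environment variable that must be set before MXNet is imported; the item proposes a runtime switch instead. The runtime call shown in the comment is hypothetical.

```python
import os
# current workaround: pick the naive (synchronous) engine before importing mxnet
os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'
import mxnet as mx

# hypothetical future runtime switch (does not exist today):
# mx.config.set_engine('NaiveEngine')
```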

7. Advanced Operators

The objective of these projects is to extend the tensor library and operators for better performance and for advanced use.

7.1. Strided ndarray support

Scope:

  1. support strided array in a subset of operators
  2. support auto-transpose of strided array in graph pass and executor

7.2. Ragged ndarray and operators

Scope:

  1. introduce ragged (variable-length) tensors as a first-class tensor type. Support zero-copy from RaggedNDArray to NDArray when no dimension is ragged.
  2. Load balancing strategy for operators that take RaggedNDArray as input
  3. cover operators for NLP applications (RNN, transformer)

7.3. Improved Sparse Support

Scope:

  1. sparse format and operator support
  2. scipy coverage
  3. operators for graph neural networks (e.g. ops in minigun)

Minimum support:

  • format: csr,
  • zerocopy to DLPack
  • integration with minigun kernels

Next-level support:

  • format: coo and block sparse.

8. Building and Configuration

8.1. CMake improvement and Makefile deprecation

Scope:

  1. reimplement CMakeLists for DMLC dependencies
  2. reimplement CMakeLists for MXNet to support 1) building the best-performing binary on any platform and 2) building a portable binary distribution for pip

8.2. MXNet Configurator

Scope:

  1. drop ad-hoc environment variables and centralize them into a config.
  2. define the functionality that supports runtime switching (candidates: memory pool, engine, worker thread pools) and expose a frontend API
  3. allow saving and loading of the MXNet system config (a hypothetical sketch follows this list)
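Purely hypothetical sketch of what a centralized configurator could look like, replacing environment variables such as MXNET_ENGINE_TYPE or MXNET_GPU_MEM_POOL_TYPE. None of these calls exist today; they only illustrate the kind of API this project would have to define.

```python
import mxnet as mx

# cfg = mx.config.load('~/.mxnet/config.yaml')   # load a saved system config
# cfg.memory_pool = 'Round'                      # runtime-switchable subsystems
# cfg.engine = 'ThreadedEnginePerDevice'
# cfg.num_omp_threads = 4
# mx.config.apply(cfg)
# mx.config.save(cfg, '~/.mxnet/config.yaml')    # persist for reuse
```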

9. Advanced training and deployment

9.1. Automatic Quantization and Quantized Training for NumPy

Scope:

  1. automatic quantization based on heuristics (or learning)
  2. BMXNet

9.2. Mobile and edge-device deployment

Scope:

  1. replace amalgamation with a more user-friendly solution (a TF-Lite equivalent).
  2. tutorial and example
  3. metal support

10. Performance

10.1. MXNet Execution Overhead

Scope:

  1. [Discussion] Overhead in MXNet Execution #14883
@szha szha pinned this issue Sep 13, 2019
@szha szha added the Roadmap label Sep 13, 2019
@pengzhao-intel
Contributor

@szha Really great proposal; we may want to add some items for 2.0 too.
Is there a timeline for 2.0?


@zachgk
Contributor

zachgk commented Sep 16, 2019

Is there a plan to create a branch either for the 1.x version and have master reflect 2.0 or to create a branch for the 2.0 version and keep master on 1.x for now?

@szha
Member Author

szha commented Sep 17, 2019

@pengzhao-intel a tentative target date is by end of Q1 2020.

@zachgk we will create a branch for 2.0. Initially we will keep master to be 1.x and have 2.0 in a new branch. After 1.6 release we will revisit how to make the 2.0 branch the master.

@braindotai

Just a quick cheer for the new MXNet website... it's way more awesome and beautiful than I expected.
Though minor bugs are still there, for example most of the links in the tutorials are broken and not working.
Anyway, great work so far.

@stereomatchingkiss

stereomatchingkiss commented Dec 10, 2019

Any plan to simplify the build of the C and C++ API for MXNet 2.0? It is hard (or very hard) to build a working version of MXNet with the cpp API on different platforms (Windows, Linux, Mac); every new release of MXNet may or may not break something, and we need to spend many hours figuring out how to make it work.

I am happy with the Python API, but not all tasks are suitable for Python. Almost every deep learning tool is based on C and C++, but almost every one of them is difficult to use, or only partially works, from C and C++.

@szha
Member Author

szha commented Dec 10, 2019

@stereomatchingkiss good point. What are you using c/c++ api for?

@stereomatchingkiss

stereomatchingkiss commented Dec 10, 2019

@stereomatchingkiss good point. What are you using c/c++ api for?

  1. Developing standalone apps on desktop and mobile (maybe on other devices like the RPi 4 or Jetson Nano in the future)
  2. Wrappers for other languages (e.g. PHP)
  3. Running inference tasks on AWS Lambda; we do not want to prune the Python libs manually if we could instead build a slim library of mxnet/tensorflow/pytorch.

Maybe you could open a post to ask users what they expect from a C or C++ API. I guess most of them only need the API to perform inference, not training (Python does a great job there); this should help you shrink the size of the libs and make the code less complicated.

@edmBernard

edmBernard commented Dec 11, 2019

@stereomatchingkiss Isn't that a bit what the amalgamation part was for? A simplified inference interface. The last time I used amalgamation (some years ago) it was often broken by updates and not really maintained.

@szha szha added the RFC Post requesting for comments label Dec 15, 2019
@szha
Member Author

szha commented Dec 15, 2019

The status of the MXNet 2.0 project is tracked at: https://github.com/apache/incubator-mxnet/projects/18. The status for each project will be updated by the contributor who's driving it. If you have more projects that you intend to drive, please first discuss them here.

@szha
Member Author

szha commented Dec 15, 2019

Once 1.6 release is complete, we will create a branch for MXNet 1.x for future releases and start using master branch for 2.0 development.

@sxjscience
Member

Should we create a new branch for 2.0? I think we are also planning for 1.7.0 #16864

@leezu
Contributor

leezu commented Dec 27, 2019

In the past we always kept development on the master branch, thus how about branching out 1.7.0 release branch and keeping development on master?

@TaoLv
Member

TaoLv commented Dec 27, 2019

+1 for using master branch for 2.0 development. I think we need 3 branches at least:

  1. master branch: for 2.0 development
  2. v1.x: for 1.x development and maintenance
  3. v1.7.x: for 1.7.x release

@szha
Member Author

szha commented Dec 28, 2019

That's what I had in mind. The v1.7.x branch doesn't have to be created until code freeze for 1.7.0

@TaoLv
Member

TaoLv commented Dec 31, 2019

3.1. C-API Clean-up
C-API is the foundational API in MXNet that all language bindings depend on.

@szha I'm looking at item 3.1.2. Could you please explain the scope of the C-API? Do you mean those APIs that sit in the src/c_api/ folder?

@szha
Member Author

szha commented Dec 31, 2019

@TaoLv one promising direction that the community is converging to is the interface based on packed function (motivation as described by @tqchen in #17097 (comment)). What this means to the project is that the existing c API will be updated to follow the packed function interface.

@apeforest
Contributor

apeforest commented Feb 19, 2020

Is there a plan to remove the cudnn_off argument from neural network operators such as Dropout, Convolution, Pooling, etc.? It creates a few usability issues:
(1) Once a model is exported, users have to change this flag in all layers manually if they want to enable/disable cuDNN.
(2) When cudnn_off is set to true in some layers, the global env variable MXNET_CUDNN_AUTOTUNE_DEFAULT becomes a don't-care. It's very confusing for users to see a message like "Please turn off MXNET_CUDNN_AUTOTUNE_DEFAULT" when it in fact does not do anything.
(3) Why did we expose such an implementation detail to users in the first place? At worst, we should just provide a global variable to turn cuDNN on/off in all layers instead of at the operator level.

@kalcohol

kalcohol commented Feb 25, 2020

Thanks for this awesome work, it has benefited me a great deal.

Here are some (possible) disadvantages, listed below:

  1. it seems that the C and C++ interfaces both work, but a single task cannot be completed using only one of them;
  2. low-bit training or inference is not available via C/C++ (version 1.6.0 fixed fp16 training);
  3. the static linking lib is (very) far from easy to use; a CMake configuration file (like MXNetConfig.cmake, etc.) generated by CMake would be enough for end users to integrate libmxnet.a and the large bunch of other static third-party libs (it's not easy to maintain a gentlemanly demeanor all day when manually linking these, day after day). People can easily hack the loading interface of a dynamic library.
  4. a smaller lib size would be friendlier to edge devices.
  5. more C++ training demos, including how to use kvstore (multiple cards and multiple servers); it's really not easy to understand.

Good day everyone.

@leezu
Contributor

leezu commented Feb 25, 2020

@kalcohol please create a new issue about "static linking lib is (very) far away from easy to use", describing your setup in more detail and potentially suggestions how to improve the user experience.

@kalcohol

@kalcohol please create a new issue about "static linking lib is (very) far away from easy to use", describing your setup in more detail and potentially suggestions how to improve the user experience.

#17692 adds this tiny request.

sxjscience pushed a commit that referenced this issue Feb 29, 2020
* refactor optimizer

* refactor optimizer

* fix svrg test

* fix rmsprop param naming

* fix signum test

* fix pylint and perl test

* fix perl test and signsgd test

* fix

* retrigger ci

* reduce ci overheads
@timespaceuniverse

timespaceuniverse commented Mar 28, 2020

@szha
I checked some docs and projects about distributed training.
'Horovod' is a project from the Uber team, and 'Gloo' is a project from the Facebook team.
The basic idea is to use a trick from the HPC field, which is more efficient than the traditional parameter server:
http://andrew.gibiansky.com/blog/machine-learning/baidu-allreduce/?from=timeline
There is a tool called OpenMPI on which the 'Horovod' project is based, but I found OpenMPI too difficult to configure and use.
I also checked 'Gloo', which seems to use 'redis' to replace 'openmpi'.
I strongly suggest not using Horovod directly, since it is based on OpenMPI, which is too complex and old.

I also found that ByteDance has a good project solving the same problem without using MPI:
https://github.com/bytedance/byteps

Maybe we can better integrate the ByteDance solution into the 2.0 roadmap,
or we could have an MXNet-internal solution similar to ByteDance's.

@eric-haibin-lin
Member

@lilongyue the integration of BytePS into MXNet is in this PR: #17555

@timespaceuniverse

@lilongyue the integration of bytePS to mxnet is in this PR #17555
that's great !

MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this issue Apr 10, 2020
@zheng-da
Contributor

A quick comment: DGL contains all the sampling implementations and no longer relies on the implementation in MXNet. I think we should deprecate the graph sampling implementation in MXNet.

anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this issue May 29, 2020
leezu added a commit that referenced this issue Jul 20, 2020
Replaced by cmake buildsystem as per #16167
@fhieber
Contributor

fhieber commented Jul 22, 2020

@szha is there a recent estimate of the timeline for MXNet 2.0? Would you recommend developing downstream toolkits (e.g. Sockeye) against the master branch now, or rather waiting a little bit longer?
Is there already documentation on how to transition MXNet 1.x projects to 2.x?

@szha
Member Author

szha commented Jul 22, 2020

@fhieber we are planning to release the first public beta sometime in August. At the moment we are finalizing some API changes and also validating them in GluonNLP. We will publish a transition doc as part of the public beta.

@TristonC
Contributor

TristonC commented Aug 7, 2020

@szha We may need to add moving the AMP package from contrib to core. We will file an RFC for this task.

@Neutron3529
Contributor

@szha I found it inconvenient that there is no concat layer for Gluon. Is it possible to add one?

@davisliang

Making MXNET_SAFE_ACCUMULATION=1 default when running on float16 would be very convenient!

@szha
Member Author

szha commented Aug 19, 2020 via email

chinakook pushed a commit to chinakook/mxnet that referenced this issue Nov 23, 2020
@deepakkumar1984
Contributor

deepakkumar1984 commented Apr 12, 2021

I have made some good progress with the C# version of the v2 changes. I have implemented most of the numpy operators in v2 to date and am now updating the Gluon interface to match the latest Python version and use the numpy APIs. Can we include/promote this project on the main website to attract more contributors?

https://github.com/deepakkumar1984/MxNet.Sharp

@szha
Member Author

szha commented Apr 12, 2021

@deepakkumar1984 awesome work, thanks for contributing to the ecosystem! I think we can definitely highlight it in the ecosystem page as a community project. Feel free to send a pull request to add it there. If you are interested, once it gets close to completion, we could also publish a blog to attract more attention.

How do you envision the codebase to be maintained and hosted going forward?

@deepakkumar1984
Contributor

Thanks @szha, I will start working on the PR to highlight it on the ecosystem page. I did start writing some tutorials, e.g. https://mxnet.tech-quantum.com/docs-2/getting-started/create-a-neural-network/, but would prefer in future to maintain these docs similarly to the other bindings, like https://mxnet.apache.org/versions/2.0/api/csharp. MxNet.Sharp is more than just a binding of the APIs: I implemented the Gluon package against version 1.5 itself and am now in the process of upgrading it. gluon.probability will also be implemented after the Gluon interface is complete.

I would be happy if the core MxNet.Sharp project could be merged into the main repo as something like https://github.com/apache/incubator-mxnet/csharp-package.

I have other projects making small steps, like GluonCV, GluonNLP, GluonTS, AutoGluon and scikit-learn (MXNet versions). I can separate them out of my branch and keep them with me for now, and probably start linking them on the ecosystem page in the future as they are completed one by one.

@barry-jin
Contributor

Cpp-package will be added back in #20131. As this language binding will still rely on symbolic programming, some of the module-like APIs removed in #18531 will also be added back. So, we may need to support these module APIs for some language bindings, especially for cpp-package. @szha @leezu
