Conversation

@tjruwase (Contributor)

- Add replacement API with configurable kernel optimizations
- Release context memory as soon as possible
- Add training flag to replacement API
- Add Transformer layer creation API with configurable kernel optimizations
- Add training flag to creation API (see the sketch after this list)
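
A minimal sketch of how the creation API and its training flag might fit together, reusing the config fields from the replacement-API discussion below; DeepSpeedTransformerLayer and the exact argument names are assumptions drawn from this PR description, not a confirmed signature:

import deepspeed

# Sketch only: the layer class name and the training argument are
# assumptions based on the PR description above, not a confirmed API.
config = deepspeed.ops.DeepSpeedTransformerConfig(bsz=32, seq=128, training=True)
layer = deepspeed.ops.DeepSpeedTransformerLayer(config)
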
tjruwase changed the base branch from master to jeffra/inject_v2 on December 14, 2020 at 15:17.
@jeffra (Collaborator) commented Dec 14, 2020

Offline discussion, new API to implement:

config = deepspeed.ops.DeepSpeedTransformerConfig(bsz=32, seq=128, ..., huggingface=True)
model = deepspeed.ops.replace_module(
           orig_module_impl=transformers.modeling_bert.BertLayer,
           model=model,
           config=config)

Short-term TODO (not in this PR): add support for some kind of consistent hashing/versioning of the orig_module_impl, so we know how to translate between modules and can detect when the upstream implementation has changed.
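
A minimal sketch of one way that hashing could work, using only the standard library; module_fingerprint is a hypothetical helper, not part of this PR:

import hashlib
import inspect

def module_fingerprint(cls):
    # Hash the class's source text; any upstream change to the
    # implementation produces a different fingerprint.
    source = inspect.getsource(cls)
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# Usage sketch: compare against a fingerprint recorded when the
# translation logic for this module version was validated.
# assert module_fingerprint(transformers.modeling_bert.BertLayer) == KNOWN_GOOD_HASH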

tjruwase merged commit 1b42798 into jeffra/inject_v2 on Dec 15, 2020.
cmikeh2 added a commit that referenced this pull request on Nov 3, 2023:
Co-authored-by: Jeff Rasley <jerasley@microsoft.com>
Co-authored-by: Ammar Ahmad Awan <ammar.awan@microsoft.com>
Co-authored-by: Connor Holmes <connorholmes@microsoft.com>
Co-authored-by: Masahiro Tanaka <mtanaka@microsoft.com>
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>