Commit 1fa6bc1

Merge branch 'master' into ENG-1404-cli-remove-region-flag-for-the-byoc-cluster-creation
2 parents: bfb78a9 + 863dfb2

File tree: 3 files changed (+8 −5 lines)

.github/workflows/legacy-checkpoints.yml (2 additions, 2 deletions)

@@ -105,7 +105,7 @@ jobs:
           aws s3 cp checkpoints.zip s3://pl-public-data/legacy/ --acl public-read
         if: inputs.push_to_s3
 
-  enable-ckpt-test:
+  adding-ckpt-test:
     runs-on: ubuntu-20.04
     if: inputs.create_pr
     needs: create-legacy-ckpts
@@ -120,7 +120,7 @@ jobs:
       - name: Create Pull Request
         uses: peter-evans/create-pull-request@v4
         with:
-          title: Enable testing with legacy checkpiont created with ${{ needs.create-legacy-ckpts.outputs.pl-version }}
+          title: Adding test for legacy checkpiont created with ${{ needs.create-legacy-ckpts.outputs.pl-version }}
           delete-branch: true
           labels: |
             tests

.github/workflows/release-pypi.yml (4 additions, 1 deletion)

@@ -116,9 +116,12 @@ jobs:
       pkg-pattern: "*"
       pypi-token: ${{ secrets.PYPI_TOKEN_LAI }}
 
-  create-legacy-ckpt:
+  legacy-checkpoints:
     needs: publish-packages
     uses: ./.github/workflows/legacy-checkpoints.yml
     with:
      push_to_s3: true
      create_pr: true
+    secrets:
+      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
+      AWS_SECRET_KEY_ID: ${{ secrets.AWS_SECRET_KEY_ID }}
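
Passing a secrets: block to a uses: job only works if the called reusable workflow declares matching secrets under workflow_call. A minimal sketch of the callee side that this change presumes (the required: true flags and the input declarations here are assumptions for illustration, not shown in this commit):

# Sketch of what legacy-checkpoints.yml would declare to accept these
# secrets; assumed for illustration, not part of this diff.
on:
  workflow_call:
    inputs:
      push_to_s3:
        type: boolean
        default: false
      create_pr:
        type: boolean
        default: false
    secrets:
      AWS_ACCESS_KEY_ID:
        required: true
      AWS_SECRET_KEY_ID:
        required: true

Inside the callee, the values are then available as ${{ secrets.AWS_ACCESS_KEY_ID }} and ${{ secrets.AWS_SECRET_KEY_ID }} as usual.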

docs/source-pytorch/model/train_model_basic.rst (2 additions, 2 deletions)

@@ -116,11 +116,11 @@ Under the hood, the Lightning Trainer runs the following training loop on your behalf:
 
 .. code:: python
 
-    autoencoder = LitAutoEncoder(encoder, decoder)
+    autoencoder = LitAutoEncoder(Encoder(), Decoder())
     optimizer = autoencoder.configure_optimizers()
 
     for batch_idx, batch in enumerate(train_loader):
-        loss = autoencoder(batch, batch_idx)
+        loss = autoencoder.training_step(batch, batch_idx)
 
         loss.backward()
         optimizer.step()
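
The second hunk is more than cosmetic: calling autoencoder(batch, batch_idx) dispatches to the module's forward, while the manual loop needs the loss returned by training_step. A minimal sketch of a LitAutoEncoder consistent with the corrected snippet (the class body is an illustration, not part of this commit):

import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    """Hypothetical minimal module matching the docs snippet; not from this commit."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def training_step(self, batch, batch_idx):
        # The manual loop in the docs calls this directly to get a loss tensor.
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return F.mse_loss(x_hat, x)

    def configure_optimizers(self):
        # The returned optimizer is stepped by hand in the docs' loop.
        return torch.optim.Adam(self.parameters(), lr=1e-3)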
