
Commit 0479629

Update inference examples to version 0.8.1

Parent: a519f65

File tree

4 files changed: +189 −186 lines

README.md (+1 −1)

```diff
@@ -84,7 +84,7 @@ See [Docker image](docs/docker-image.md) for details.
 - [Model construction](docs/model-construction.md)
 - [Pretrained models](docs/pretrained-models.md)
 - [Training examples](docs/training-examples.md)
-- [Inference examples](examples/inference.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.8.0/examples/inference.ipynb)
+- [Inference examples](examples/inference.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.8.1/examples/inference.ipynb)
 - [Building blocks](docs/building-blocks.md)

 ## Articles
```

docs/getting-started.md (+1 −1)

````diff
@@ -152,7 +152,7 @@ torch.save(model.state_dict(), "/path/to/model.pt")
 
 For generating text from a prompt via top-k sampling, `CausalLanguageModel` provides a `generate()` method. The following
 example first loads a trained model from a checkpoint and then generates text from a short sample prompt. An interactive
-demo is also available in the [Colab notebook](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.8.0/examples/inference.ipynb).
+demo is also available in the [Colab notebook](https://colab.research.google.com/github/krasserm/perceiver-io/blob/0.8.1/examples/inference.ipynb).
 
 ```python
 from perceiver.data.text import TextPreprocessor
````
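The diff above mentions that `CausalLanguageModel.generate()` samples text via top-k sampling; the method's actual signature is not shown here, but the sampling technique itself can be sketched in plain Python. The function name `top_k_sample` below is illustrative, not part of the perceiver-io API: given a vector of next-token logits, it keeps only the k highest-scoring candidates, renormalizes them with a softmax, and draws one token id.

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample a token id from the k highest-scoring logits.

    Logits outside the top k are discarded; the survivors are
    renormalized with a softmax before sampling. This is a sketch of
    top-k sampling, not the perceiver-io `generate()` implementation.
    """
    # Keep the k largest logits together with their original indices.
    top = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:k]
    # Numerically stable softmax over the surviving logits.
    m = max(score for _, score in top)
    weights = [math.exp(score - m) for _, score in top]
    # Draw one surviving index, weighted by its softmax probability.
    return rng.choices([i for i, _ in top], weights=weights, k=1)[0]
```

With `k=1` this degenerates to greedy decoding (always the argmax token); larger `k` trades determinism for diversity in the generated text.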
