Improve plotting in notebook gpflux_with_keras_layers #41
Conversation
…lux into vincent/notebooks/hybrid
Looks good! I've noted some minor grammatical things, but otherwise no problems that I can see.
docs/notebooks/deep_cde.ipynb
Outdated
@@ -234,7 +234,8 @@
 "source": [
 "## Deep Gaussian process with latent variables\n",
 "\n",
-"We suggest a Deep Gaussian process with a latent variable in the first layer to improve the error bars on the given dataset. The latent variable allows to model the heteroscedasticity, while an extra layer makes the model more expressive to catch sharp transitions.\n",
+"To tackle the problem we suggest a Deep Gaussian process with a latent variable in the first layer. The latent variable will be able to capture the \n",
+"heteroscedasticity, while the two layered deep GP is able to model the sharp transitions. \n",
Grammatically should be "two-layered" with hyphen.
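For context, the kind of model this notebook text describes (a latent variable layer feeding into two GP layers) might be assembled along the following lines with GPflux. This is only a rough sketch, not the notebook's actual code: the class names (`DirectlyParameterizedNormalDiag`, `LatentVariableLayer`, `GPLayer`, `LikelihoodLayer`, `DeepGP`), the helper functions, and all problem sizes are assumptions about the GPflux API and may differ from what the notebook uses.

```python
import numpy as np
import tensorflow_probability as tfp
import gpflow
import gpflux

# Placeholder problem sizes, not taken from the notebook.
num_data, input_dim, latent_dim, num_inducing = 200, 1, 1, 20
X = np.random.rand(num_data, input_dim)
Y = np.random.rand(num_data, 1)

# Latent variable layer: one latent variable per datapoint, sampled from a
# per-datapoint Gaussian posterior and concatenated to the inputs.
encoder = gpflux.encoders.DirectlyParameterizedNormalDiag(num_data, latent_dim)
prior = tfp.distributions.MultivariateNormalDiag(
    loc=np.zeros(latent_dim), scale_diag=np.ones(latent_dim)
)
lv_layer = gpflux.layers.LatentVariableLayer(prior, encoder)

def make_gp_layer(in_dim: int, out_dim: int) -> gpflux.layers.GPLayer:
    """A sparse variational GP layer with a zero mean function, for simplicity."""
    kernel = gpflux.helpers.construct_basic_kernel(
        gpflow.kernels.SquaredExponential(lengthscales=[1.0] * in_dim),
        output_dim=out_dim,
        share_hyperparams=True,
    )
    inducing = gpflux.helpers.construct_basic_inducing_variables(
        num_inducing, in_dim, output_dim=out_dim, share_variables=True
    )
    return gpflux.layers.GPLayer(
        kernel, inducing, num_data=num_data,
        mean_function=gpflow.mean_functions.Zero(),
    )

# Two GP layers: the first sees the inputs concatenated with the latents.
gp_layer_1 = make_gp_layer(input_dim + latent_dim, out_dim=1)
gp_layer_2 = make_gp_layer(1, out_dim=1)

likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian())
dgp = gpflux.models.DeepGP([lv_layer, gp_layer_1, gp_layer_2], likelihood_layer)
```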
docs/notebooks/deep_cde.ipynb
Outdated
@@ -380,7 +381,7 @@
 "source": [
 "### Fit\n",
 "\n",
-"We can now fit the model. Because of the `DirectlyParameterizedEncoder`, which stores a sorted array of means and std. dev. for each point in the dataset, it is important to set the `batch_size` to the number of datapoints and set `shuffle` to `False`."
+"We can now fit the model. Because of the `DirectlyParameterizedEncoder` it is important to set the batch size to the number of datapoints and turn off shuffle. This is so that we use the associated latent variable for each datapoint. If we would use an Amortized Encoder network this would not be necessary."
Minor but probably "Amortized Encoder" shouldn't have caps.
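To make the batching constraint discussed here concrete, the fit call might look roughly as below, continuing the hypothetical `dgp`, `X`, `Y`, and `num_data` from the sketch further up. The `as_training_model()` call and the `{"inputs": ..., "targets": ...}` input format are assumptions about the GPflux Keras workflow rather than quotes from the notebook.

```python
import tensorflow as tf

# dgp, X, Y and num_data as defined in the earlier sketch.
training_model = dgp.as_training_model()
training_model.compile(tf.keras.optimizers.Adam(learning_rate=0.01))

# With a directly parameterised encoder, the i-th latent variable belongs to
# the i-th datapoint, so every batch must contain the full dataset in its
# original order: batch_size equals the number of datapoints and shuffling
# is turned off. An amortized encoder network would lift this restriction.
history = training_model.fit(
    {"inputs": X, "targets": Y},
    batch_size=num_data,
    shuffle=False,
    epochs=100,
    verbose=0,
)
```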