
Improve plotting in notebook gpflux_with_keras_layers #41

Merged · 15 commits into develop from vincent/notebooks/hybrid · Aug 24, 2021

Conversation

@vdutor (Member) commented on Aug 3, 2021

No description provided.

@sebastianober (Collaborator) left a comment

Looks good! I've flagged some minor grammatical things, but otherwise no problems that I can see.

@@ -234,7 +234,8 @@
 "source": [
 "## Deep Gaussian process with latent variables\n",
 "\n",
-"We suggest a Deep Gaussian process with a latent variable in the first layer to improve the error bars on the given dataset. The latent variable allows to model the heteroscedasticity, while an extra layer makes the model more expressive to catch sharp transitions.\n",
+"To tackle the problem we suggest a Deep Gaussian process with a latent variable in the first layer. The latent variable will be able to capture the \n",
+"heteroscedasticity, while the two layered deep GP is able to model the sharp transitions. \n",
@sebastianober (Collaborator) commented:
Grammatically should be "two-layered" with hyphen.
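For readers following the notebook text, the construction being described (a latent-variable first layer feeding two GP layers) would look roughly like the sketch below. This is a hedged illustration, not the notebook's code: it assumes the standard GPflux classes `gpflux.encoders.DirectlyParameterizedNormalDiag` (the encoder the notebook text refers to as `DirectlyParameterizedEncoder`) and `gpflux.layers.LatentVariableLayer`, and `num_data` / `latent_dim` are placeholder sizes. The two GP layers and the likelihood layer would then be stacked on top of this layer as in the rest of the notebook.

```python
# Hedged sketch of the latent-variable first layer, assuming standard GPflux classes.
import numpy as np
import tensorflow_probability as tfp
import gpflux

num_data, latent_dim = 100, 1  # placeholder sizes

# One variational posterior q(w_n) = N(m_n, diag(s_n^2)) per training point;
# these per-datapoint latents are what let the model explain heteroscedastic noise.
encoder = gpflux.encoders.DirectlyParameterizedNormalDiag(num_data, latent_dim)

# Standard-normal prior over the latent variable w.
prior = tfp.distributions.MultivariateNormalDiag(
    loc=np.zeros(latent_dim), scale_diag=np.ones(latent_dim)
)

# The layer concatenates a sample of w onto the inputs, so the GP layers that
# follow (two of them, to capture the sharp transitions) see [x, w].
lv_layer = gpflux.layers.LatentVariableLayer(prior, encoder)
```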

@@ -380,7 +381,7 @@
 "source": [
 "### Fit\n",
 "\n",
-"We can now fit the model. Because of the `DirectlyParameterizedEncoder`, which stores a sorted array of means and std. dev. for each point in the dataset, it is important to set the `batch_size` to the number of datapoints and set `shuffle` to `False`."
+"We can now fit the model. Because of the `DirectlyParameterizedEncoder` it is important to set the batch size to the number of datapoints and turn off shuffle. This is so that we use the associated latent variable for each datapoint. If we would use an Amortized Encoder network this would not be necessary."
@sebastianober (Collaborator) commented:
Minor but probably "Amortized Encoder" shouldn't have caps.
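As a hedged illustration of the point being made in this cell (not the notebook's exact code): assuming `model` is the compiled Keras model built earlier in the notebook and `X`, `Y`, `num_data` are the training arrays and dataset size, the fit call would look something like this.

```python
# Hypothetical names: `model`, `X`, `Y`, `num_data` come from earlier notebook cells.
history = model.fit(
    X,
    Y,
    batch_size=num_data,  # full batch, so row n of every batch is always datapoint n
    shuffle=False,        # keep ordering fixed so datapoint n uses its own latent w_n
    epochs=1000,          # illustrative number of training epochs
    verbose=0,
)
```

With an amortized encoder network, which maps each input to its latent distribution on the fly rather than storing one set of parameters per datapoint, mini-batching and shuffling would be fine again, as the edited text notes.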

@vdutor merged commit 36932c2 into develop on Aug 24, 2021
@vdutor deleted the vincent/notebooks/hybrid branch on August 24, 2021 at 15:03