docs/tutorials/gp.rst (+8 −8)
@@ -13,7 +13,7 @@ We assume that our samples are in a vector called ``samples`` and that our obser
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines: 75-84
+   :lines: 75-85
 
 Basic usage
 ------------
@@ -23,14 +23,14 @@ We first create a basic GP with an Exponential kernel (``kernel::Exp<Params>``)
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines:58-62
+   :lines: 60-63
 
 The type of the GP is defined by the following lines:
 
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines: 87-89
+   :lines: 87-90
 
 To use the GP, we need:
 
@@ -40,7 +40,7 @@ To use the GP, we need:
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines:91-98
+   :lines: 92-99
 
 Here we assume that the noise is the same for all samples and that it is equal to 0.01.
 
@@ -57,7 +57,7 @@ To visualize the predictions of the GP, we can query it for many points and reco
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines: 101-111
+   :lines: 101-112
 
 
 Hyper-parameter optimization
@@ -71,21 +71,21 @@ A new GP type is defined as follows:
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines:115-117
+   :lines: 116-118
 
 It uses the default values for the parameters of ``SquaredExpARD``:
 
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines:63-64
+   :lines: 64-65
 
 After calling the ``compute()`` method, the hyper-parameters can be optimized by calling the ``optimize_hyperparams()`` function. The GP does not need to be recomputed, and we pass ``false`` as the last parameter of ``compute()`` because the kernel matrix does not need to be computed again (it will be recomputed during the hyper-parameter optimization).
 
 .. literalinclude:: ../../src/tutorials/gp.cpp
    :language: c++
    :linenos:
-   :lines:121-122
+   :lines: 122-123
 
 
 We can have a look at the difference between the two GPs: