diff --git a/posts/2014-07-Conv-Nets-Modular/index.html b/posts/2014-07-Conv-Nets-Modular/index.html index 212075f..ee1c192 100644 --- a/posts/2014-07-Conv-Nets-Modular/index.html +++ b/posts/2014-07-Conv-Nets-Modular/index.html @@ -279,7 +279,7 @@

Formalizing Convolutional Neural Networks

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.
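To make the naive, window-by-window formulation concrete before moving on, here is a minimal sketch of a one-dimensional convolutional layer built directly from \(A(x) = \sigma(Wx + b)\). It is not code from the post: the NumPy-based names, the sigmoid choice for \(\sigma\), and the toy shapes are all illustrative assumptions.

```python
import numpy as np

def sigma(z):
    # Sigmoid nonlinearity; one plausible choice for sigma in A(x) = sigma(Wx + b).
    return 1.0 / (1.0 + np.exp(-z))

def naive_conv_layer(x, W, b, stride=1):
    """Apply the same neurons A(x) = sigma(Wx + b) to every window of the input.

    x: 1D signal, shape (n,)
    W: weights shared across all positions, shape (num_units, window)
    b: biases, shape (num_units,)
    Returns activations of shape (num_positions, num_units).
    """
    window = W.shape[1]
    positions = range(0, len(x) - window + 1, stride)
    # The bookkeeping the previous paragraph alludes to lives in this indexing of x.
    return np.array([sigma(W @ x[i:i + window] + b) for i in positions])

x = np.arange(8, dtype=float)  # toy input signal
W = np.random.randn(2, 3)      # 2 units, each looking at a window of 3 inputs
b = np.zeros(2)
print(naive_conv_layer(x, W, b).shape)  # (6, 2): 6 window positions, 2 units
```

The defining property of the layer is visible in the code: the same \(W\) and \(b\) are reused at every position, so the explicit loop over windows is the only thing distinguishing it from a fully-connected layer.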

diff --git a/posts/tags/convolutional neural networks.xml b/posts/tags/convolutional neural networks.xml index dded6f6..1888952 100644 --- a/posts/tags/convolutional neural networks.xml +++ b/posts/tags/convolutional neural networks.xml @@ -435,7 +435,7 @@ Filters learned by the first convolutional layer. The top half corresponds to th

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.

diff --git a/posts/tags/deep learning.xml b/posts/tags/deep learning.xml index bb76d44..d67d452 100644 --- a/posts/tags/deep learning.xml +++ b/posts/tags/deep learning.xml @@ -959,7 +959,7 @@ Filters learned by the first convolutional layer. The top half corresponds to th

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.

diff --git a/posts/tags/modular neural networks.xml b/posts/tags/modular neural networks.xml index baaf228..ff7fc45 100644 --- a/posts/tags/modular neural networks.xml +++ b/posts/tags/modular neural networks.xml @@ -194,7 +194,7 @@ Filters learned by the first convolutional layer. The top half corresponds to th

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.

diff --git a/posts/tags/neural networks.xml b/posts/tags/neural networks.xml index b4be67f..682d793 100644 --- a/posts/tags/neural networks.xml +++ b/posts/tags/neural networks.xml @@ -1200,7 +1200,7 @@ Filters learned by the first convolutional layer. The top half corresponds to th

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.

diff --git a/posts/temp1/index.html b/posts/temp1/index.html index 5c6d8c3..24f258d 100644 --- a/posts/temp1/index.html +++ b/posts/temp1/index.html @@ -513,7 +513,7 @@

Formalizing Convolutional Neural Networks

If one combines this with the equation for \(A(x)\),

\[A(x) = \sigma(Wx + b)\]

one has everything they need to implement a convolutional neural network, at least in theory.

-

In practice, this is often not best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

+

In practice, this is often not the best way to think about convolutional neural networks. There is an alternative formulation, in terms of a mathematical operation called convolution, that is often more helpful.

The convolution operation is a powerful tool. In mathematics, it comes up in diverse contexts, ranging from the study of partial differential equations to probability theory. In part because of its role in PDEs, convolution is very important in the physical sciences. It also has an important role in many applied areas, like computer graphics and signal processing.

For us, convolution will provide a number of benefits. Firstly, it will allow us to create much more efficient implementations of convolutional layers than the naive perspective might suggest. Secondly, it will remove a lot of messiness from our formulation, handling all the bookkeeping presently showing up in the indexing of \(x\)s – the present formulation may not seem messy yet, but that’s only because we haven’t got into the tricky cases yet. Finally, convolution will give us a significantly different perspective for reasoning about convolutional layers.
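As a sketch of the efficiency payoff just described, here is the same layer expressed through the convolution operation itself, so that each unit's pre-activations at every position come from a single library call instead of an explicit loop over windows. As before, the NumPy names and toy shapes are illustrative assumptions, not anything from the post.

```python
import numpy as np

def conv_layer(x, W, b):
    """Same layer as naive_conv_layer above, with the window loop replaced by
    one convolution per unit over the whole signal. np.convolve flips its
    second argument, so reversing each row of W makes the result match the
    explicit sliding-window sums W @ x[i:i + window] exactly."""
    pre = np.stack([np.convolve(x, w[::-1], mode="valid") for w in W], axis=1)
    return 1.0 / (1.0 + np.exp(-(pre + b)))  # same sigmoid nonlinearity as before

x = np.arange(8, dtype=float)
W = np.random.randn(2, 3)
b = np.zeros(2)
print(conv_layer(x, W, b).shape)  # (6, 2), agreeing with the naive version
```

Up to floating-point error the two versions agree. The kernel reversal is also why the operation deep learning libraries call convolution is, strictly speaking, cross-correlation: they simply skip the flip.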