W3 Ch13 #15


Open
JSchuurmans opened this issue Jan 13, 2018 · 1 comment

@JSchuurmans

Intro to computer vision
'While still mainly focusing on image classification, figuring out what an image shows, we will work with larger, more complex images as you would encounter them in the real world.'
This sentence does not flow smoothly. Take out the definition of image classification.
For example:
We keep the focus on image classification, figuring out what an image shows. We start working with larger, more complex images as you would encounter them in the real world.

Images in finance
'... images. But ...' → '... images, but ...' or '... images. However, ...'

'A slightly less fancy but never the less important application...' (insert commas, delete 'but', and fix 'never the less' → 'nevertheless')
... less fancy, nevertheless important, application ...

Now, enough .... (add this sentence to the previous paragraph)

ConvNets
... ConvNets but also ... (replace 'but' with 'and')

Filters on MNIST
A nine for contrast is made up of four rounded lines that form a circle at the top and a straight, vertical line.
Why not just a circle and a straight line?

'When detecting numbers, there are a few lower level features that make a number. A seven for example is a combination of one vertical straight line, one straight horizontal line on the top and one straight horizontal line through the middle. A nine for contrast is made up of four rounded lines that form a circle at the top and a straight, vertical line.' (This passage appears twice in the paragraph; delete one.)

Filters on Color images
Filters always capture the whole depth of the previous layer. They are never slid depth-wise, only along the height and width of the image. (What exactly do you mean?)
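To illustrate what "filters capture the whole depth" could mean, here is a minimal NumPy sketch (mine, not from the book): on a 3-channel image, one filter has shape (height, width, channels), it slides only over height and width, and each position produces a single scalar, so one filter yields one 2D feature map with no channel axis left.

```python
import numpy as np

# Hypothetical sizes for illustration only.
image = np.random.rand(28, 28, 3)   # H x W x C color image
kernel = np.random.rand(3, 3, 3)    # kh x kw x C -- depth matches the input depth

out_h = image.shape[0] - kernel.shape[0] + 1
out_w = image.shape[1] - kernel.shape[1] + 1
output = np.zeros((out_h, out_w))   # one filter -> one 2D feature map

for i in range(out_h):
    for j in range(out_w):
        # The patch spans ALL channels: there is no sliding along depth.
        patch = image[i:i + 3, j:j + 3, :]
        output[i, j] = np.sum(patch * kernel)  # one scalar per spatial position
```

The output here is 26 x 26 with no depth dimension, which is the point of the sentence: depth is consumed entirely at every spatial position.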

@JCKrick

JCKrick commented Jan 14, 2018

To add further:
Images currently do not load in the Jupyter notebook.

This also helps get the outer pixels of the image get included in the filters as often as the inner pixels, since the outer pixels of the actual image are no longer the outer pixels of the padded image.

This sentence must be changed. Suggestion: "This also helps to include the outer pixels just as often as the inner pixels of the image in the filters, as the former outer pixels are now inner pixels and the new outer pixels are dummies of zero, also called padding."

Padding:

Valid padding ensures that the filter actually fits on the image and does not 'stand over' at some side. Same padding additionally insures that the output of the convolutional layer has the same size as the input.

Actually, valid padding means that no padding is applied, and same padding means that padding is applied so that the whole image is taken into consideration.
Valid padding drops values so that the filter fits the image, which is an important piece of information when choosing which padding to use.
Same padding adds zeros so that the filter can cover the whole picture without dropping any values, but the cost is that it considers dummies (zeros) at the edges. I would include this information here.
