
W4 Ch17 #20

Open
JSchuurmans opened this issue Jan 15, 2018 · 0 comments


@JSchuurmans

Ch. 17 - NLP and Word Embeddings
'financial industry where large amounts'
financial industry, where large amounts

'analyst reports all the way'
analyst reports, all the way

' social media, text is '
social media. Text is

Tokenizing text
' This prevents us from assigning tokens to words that are hardly ever used, mostly because of typos or because they are not actual words or because they are just very uncommon. This prevents us from over fitting to texts that contain strange words or wired spelling errors.'
(Although you are trying to say something different in each, the two sentences read the same. Try rewording one of them.)
over fitting -> overfitting
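
For reference, the behaviour the quoted paragraph describes is the tokenizer's vocabulary cap. A minimal sketch, assuming the chapter's Keras-based workflow; the sample texts and the 10,000-word limit are illustrative, not taken from the book:

```python
# Illustrative only: assumed Keras tokenizer workflow with a made-up corpus and limit.
from tensorflow.keras.preprocessing.text import Tokenizer

texts = [
    "The financial industry generates large amounts of text.",
    "Analyst reports and social media posts get tokenized here.",
]

# num_words caps the vocabulary at the most frequent words; words outside that
# cap (often typos or very rare terms) receive no token of their own.
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
print(sequences)
```

Dropping rare words this way is what limits overfitting to texts with unusual spellings.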

'Through backpropagation we can'
Through backpropagation, we can

Embeddings
over fits (can be changed to overfits, as overfitting is a concept)
