[it] cs-229-deep-learning #78
base: master
Conversation
Thank you for your work @proceduralia! It seems that questions 50-54 of the template are missing; could you please also take a look at them?
I added them.
Thank you very much for your hard work @proceduralia! Now let's wait for another native speaker to come and review the translation.
**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:**
⟶ Dato i i-esimo livello della rete e j j-esima unità nascosta del livello, abbiamo:
Commas could be added:
Dati i, i-esimo livello della rete, e j, j-esima unità nascosta del livello, abbiamo:
Or (perhaps better) rephrase it:
Dati i e j, rispettivamente i-esimo livello della rete e j-esima unità nascosta del livello, abbiamo:
Perhaps it would be more appropriate to use:
**6. Osservando i, il livello i-esimo della rete neurale, e j come l'unità j-esima nascosta del livello, otteniamo:**
That said, the translation given above is still valid.
**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:**
⟶ Funzione di perdita entropia incrociata ― In ambito reti neurali, l'entropia incrociata L(z,y) è comunemente usata ed è definita come segue: |
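The formula that "defined as follows" points to is not part of the quoted diff; for reference, the standard binary cross-entropy definition this line introduces is:

```latex
L(z,y) = -\left[\, y\log(z) + (1-y)\log(1-z) \,\right]
```

Here z is the predicted probability and y the true label, so the loss penalizes confident wrong predictions heavily.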
This would also be fine:
Nel contesto delle reti neurali, l'entropia incrociata L(z,y) è ...
Agreed.
Hi @proceduralia, please feel free to take a look at @giuseppe-testa's comments and incorporate any changes, if applicable. That way, we'll be able to move forward with the merge!
Hi all, I could review them if you are still waiting for reviews. Let me know how to proceed: should I open a new pull request for both topics ('Deep Learning', etc.) or confirm the changes on each open [it] pull request? Thanks.
Reviewed some anomalies in the use of Italian, as well as some incorrect translations.
**15. Step 1: Take a batch of training data.**
⟶ Passo 1: Prendere un gruppo di dati per l'apprendimento.
This is incorrect: "training data" does not correspond to "apprendimento" (learning) but to the training data themselves ("dati dell'addestramento"):
**15. Passo 1: Prendere un gruppo di dati dell'addestramento.**
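For context, step 1 under discussion is the sampling step of minibatch gradient descent. A minimal sketch (array and function names here are illustrative, not from the cheatsheet):

```python
import numpy as np

def sample_batch(X, y, batch_size, rng):
    """Step 1: take a random batch of training data (without replacement)."""
    idx = rng.choice(len(X), size=batch_size, replace=False)
    return X[idx], y[idx]

# Toy training set: 50 examples with 2 features each.
X = np.arange(100, dtype=float).reshape(50, 2)
y = np.arange(50)

rng = np.random.default_rng(0)
Xb, yb = sample_batch(X, y, batch_size=8, rng=rng)
print(Xb.shape, yb.shape)  # (8, 2) (8,)
```

Each training iteration would then compute the loss and gradients on `Xb`, `yb` only, rather than on the full dataset.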
**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**
⟶ È solitamente utilizzata dopo un livello completamente connesso o convoluzionale e prima di una non linearità e mira a consentire maggiori tassi di apprendimento e a ridurre la forte dipendenza dall'inizializzazione.
È di solito utilizzata dopo un livello convoluzionale o completamente connesso, e prima di un livello non lineare, e mira a consentire maggiori tassi di apprendimento e a ridurre la forte dipendenza dall'inizializzazione.
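The operation item 23 describes is batch normalization. A minimal sketch of the forward pass it refers to (function name and shapes are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize activations over the batch, then scale and shift.

    x: shape (batch_size, features), e.g. the output of a fully
    connected layer, applied before the non-linearity.
    """
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta            # learnable scale (gamma) and shift (beta)

# Activations with an arbitrary mean and scale.
x = np.random.default_rng(1).normal(loc=5.0, scale=3.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With `gamma=1` and `beta=0`, each feature of `y` has (approximately) zero mean and unit variance over the batch, which is what makes the layer less sensitive to how the preceding weights were initialized.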