
[it] cs-229-deep-learning #78

Open
proceduralia wants to merge 3 commits into master

Conversation

proceduralia

No description provided.

@shervinea
Owner

Thank you for your work @proceduralia!

It seems that questions 50-54 of the template are missing; could you please also take a look at them?

@shervinea added the "in progress" (Work in progress) label on Oct 19, 2018
@proceduralia
Author

I added them.
Thank you for noticing it and for all your efforts, @shervinea!

@shervinea added the "reviewer wanted" (Looking for a reviewer) label and removed the "in progress" (Work in progress) label on Oct 19, 2018
@shervinea
Owner

Thank you very much for your hard work @proceduralia!

Now, let's wait for another native speaker to come and review the translation.


**6. By noting i the ith layer of the network and j the jth hidden unit of the layer, we have:**

⟶ Dato i i-esimo livello della rete e j j-esima unità nascosta del livello, abbiamo:

@giuseppe-testa Mar 7, 2019


Commas could be added:
Dati i, i-esimo livello della rete, e j, j-esima unità nascosta del livello, abbiamo:

Or (perhaps better) rephrase:

Dati i e j, rispettivamente i-esimo livello della rete e j-esima unità nascosta del livello, abbiamo:



Perhaps it would be more appropriate to use:

**6. Osservando i, il livello i-esimo della rete neurale, e j come l'unità j-esima nascosta del livello, otteniamo:**

That said, the translation given above is still valid.
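For context: in the cheatsheet this sentence precedes a formula that is not reproduced in this thread. As a minimal illustration of the kind of expression such per-layer, per-unit notation introduces (an assumption for illustration only, not taken from the cheatsheet):

```latex
% Illustrative only: pre-activation of hidden unit j in layer i
z_j^{[i]} = {w_j^{[i]}}^{\top} x + b_j^{[i]}
```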


**10. Cross-entropy loss ― In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:**

⟶ Funzione di perdita entropia incrociata ― In ambito reti neurali, l'entropia incrociata L(z,y) è comunemente usata ed è definita come segue:


This would also work:

Nel contesto delle reti neurali, l'entropia incrociata L(z,y) è ...



Agreed.
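For reference, a minimal sketch of the binary form of the cross-entropy loss that the sentence refers to (the general textbook expression; the exact rendering in the cheatsheet is not reproduced in this thread):

```latex
% Binary cross-entropy between a prediction z and a label y
L(z, y) = -\left[\, y \log(z) + (1 - y) \log(1 - z) \,\right]
```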

@shervinea
Owner

Hi @proceduralia, please feel free to take a look at @giuseppe-testa's comments and incorporate any changes, if applicable. That way, we'll be able to move forward with the merge!

@shervinea changed the title from "[it] Deep learning" to "[it] cs-229-deep-learning" on Oct 6, 2020
@gguzzy commented Sep 30, 2024

Hi all, I could review these if you are still waiting for reviewers. Let me know how I should proceed.

Should I open new pull requests for both topics ("Deep Learning", etc.) or confirm the changes on each open [it] pull request? Thanks.


@gguzzy left a comment


Reviewed some anomalies in the usage of Italian and some incorrect translations.



**15. Step 1: Take a batch of training data.**

⟶ Passo 1: Prendere un gruppo di dati per l'apprendimento.


Incorrect: "training data" does not refer to learning ("apprendimento") but to the training data ("dati dell'addestramento"):

**15. Passo 1: Prendere un gruppo di dati dell'addestramento.**
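To ground the step being translated, a minimal sketch of taking a batch of training data (illustrative only; the array names, sizes, and batch size are assumptions, not taken from the cheatsheet):

```python
import numpy as np

# Hypothetical training set: 1,000 examples with 20 features each
X_train = np.random.randn(1000, 20)
y_train = np.random.randint(0, 2, size=1000)

batch_size = 32

# Step 1: take a (random) batch of training data
idx = np.random.choice(len(X_train), size=batch_size, replace=False)
X_batch, y_batch = X_train[idx], y_train[idx]
```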


**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶ È solitamente utilizzata dopo un livello completamente connesso o convoluzionale e prima di una non linearità e mira a consentire maggiori tassi di apprendimento e a ridurre la forte dipendenza dall'inizializzazione.


È di solito utilizzata dopo un livello convoluzionale o completamente connesso, e prima di un livello non lineare, e mira a consentire maggiori tassi di apprendimento e a ridurre la forte dipendenza dall'inizializzazione.
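As a concrete illustration of the ordering described in this sentence, a minimal PyTorch sketch (the layer sizes are arbitrary assumptions, not taken from the cheatsheet):

```python
import torch.nn as nn

# Batch normalization is typically placed after a fully connected
# (or convolutional) layer and before the non-linearity.
block = nn.Sequential(
    nn.Linear(128, 64),   # fully connected layer
    nn.BatchNorm1d(64),   # batch normalization over the 64 features
    nn.ReLU(),            # non-linearity
)
```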
