The paper is very interesting. My only concern is about the evaluation stage. I think it is
very important to describe the subjects of the pictures used to evaluate the system.
Indeed, the authors only say that they asked participants to take 5 pictures, resulting in a
benchmark of 150 pictures.
Knowing the characteristics of these pictures is fundamental to understanding the
obtained results: for instance, the number of photos with text, the number of photos with people,
outdoor vs. indoor, etc. This would allow readers to better appreciate the contribution.
I propose that we explain that we have all the photos and all the tags, but that we left re-checking the photos as future work, and that we did not try to propose it as a game to involve as many people as possible.
I think it is very important to describe the subjects of the pictures used to evaluate the system. Indeed, authors only say that they asked participants to take 5 pictures, resulting in a benchmark of 150 pictures. The knowledge of the characteristics of these pictures is fundamental to understand the obtained results.
Here I think we can get by with saying that there were no particular constraints on the subjects.
For instance, number of photos with text, number of photos with people, outdoor vs. indoor, etc. This would allow users to better appreciate the contribution.
For indoor vs. outdoor, parsing the tags might be enough; quite a few photos had the "indoor" tag or something similar, so with a script we can count them and have it flag the photos that need to be checked manually.
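A minimal sketch of what that script could look like, assuming the photos' tags are available as a dict from photo ID to tag list (all names and tag vocabularies here are illustrative, not the actual dataset):

```python
# Hypothetical tag-parsing sketch: count indoor/outdoor photos from their tags
# and flag ambiguous ones for manual review. Tag sets are assumptions.

INDOOR_TAGS = {"indoor", "indoors", "interior", "room"}
OUTDOOR_TAGS = {"outdoor", "outdoors", "exterior", "street", "nature"}

def classify_photos(photo_tags):
    """Split photo IDs into (indoor, outdoor, to_review) based on tag matches."""
    indoor, outdoor, to_review = [], [], []
    for photo_id, tags in photo_tags.items():
        lowered = {t.lower() for t in tags}
        matches_indoor = bool(lowered & INDOOR_TAGS)
        matches_outdoor = bool(lowered & OUTDOOR_TAGS)
        if matches_indoor and not matches_outdoor:
            indoor.append(photo_id)
        elif matches_outdoor and not matches_indoor:
            outdoor.append(photo_id)
        else:
            # No match, or conflicting tags: flag for a manual check
            to_review.append(photo_id)
    return indoor, outdoor, to_review

if __name__ == "__main__":
    sample = {
        "photo_01": ["Indoor", "kitchen"],
        "photo_02": ["street", "outdoor"],
        "photo_03": ["portrait"],  # no location tag: flagged for review
    }
    print(classify_photos(sample))
```

Running it over the full benchmark would give the indoor/outdoor counts directly and a short list of photos to inspect by hand, which is exactly the breakdown the reviewer is asking for.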