> On our environment, each run of the 10-fold takes nearly 3.75 hours.
So, is the total training time 3.75 hours or 37.5 hours?
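For reference, here is a quick back-of-the-envelope check of the arithmetic behind my question (assuming the 3.75-hour figure refers to a single fold):

```python
# If one fold takes 3.75 h, a full 10-fold run should take ~37.5 h.
hours_per_fold = 3.75
folds = 10
expected_total = hours_per_fold * folds   # 37.5 h

observed_total = 140.0                    # hours measured on my machine

slowdown = observed_total / expected_total
print(f"expected ~{expected_total} h, observed {observed_total} h "
      f"({slowdown:.1f}x slower)")       # roughly 3.7x slower
```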
In fact, I have run classification.py on the GCJ dataset to train the model. However, it took about 140 hours, which is a significant performance degradation. Below is my hardware configuration:
- Intel i7 6700, 3.4 GHz
- NVIDIA Titan Xp, 11 GB
- DDR4 2400 MHz, 32 GB
My CPU clock speed and memory frequency are only slightly lower than yours, so in my opinion such a large slowdown is unreasonable.
Could you give more details about how the reported training time was calculated? Thanks!