Thanks for your code, it really helps.
I noticed line 60 in AdaBoost.py:
votes[i, :] = np.array(list(pool.map(partial(_get_feature_vote, image=images[i]), features)))
This may cause an error if pool.map mixes up the order of the features, because line 89 in AdaBoost.py indexes into features as shown below, and that order may then differ from the columns of votes[i, :]:
best_feature = features[best_feature_idx]
I found that using pool.map(partial(_cal_feature, features), image_faces) and pool.map(partial(_cal_feature, features), image_no_faces) avoids this problem and makes full use of the CPU. Training time dropped from 30 s to 8 s on my 6-core machine.
You can see the details in my pull request.
dongxijia changed the title from "Mutiprocessing speed can be faster" to "Mutiprocessing could be faster" on Oct 11, 2019.