I noticed that after searching we get a .config and a .inherited file, but for validation we need a .config and a .init file. So the only way to get the .init file is to fine-tune with the .inherited weights?
The accuracy reported during search is predicted, right? And to get the real test results, we have to fine-tune for 450 epochs, right? You call it fine-tuning, but it looks more like a retraining process.
Since we already have the .inherited file, how can we use it directly to run a real test on the ImageNet validation set?
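For context, this is roughly what I have in mind for using the .inherited weights directly; a minimal sketch, assuming the net can be rebuilt from the .config file with an OFA-style `NSGANetV2.build_from_config` (that builder, the file names `net.config`/`net.inherited`, and the 224-crop preprocessing are my assumptions, not the repo's documented API):

```python
import json
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from codebase.networks import NSGANetV2  # from this repo

# Rebuild the searched architecture from its .config file.
config = json.load(open("net.config"))
net = NSGANetV2.build_from_config(config)  # assumed OFA-style builder

# Load the weights inherited from the supernet.
ckpt = torch.load("net.inherited", map_location="cpu")
if "state_dict" in ckpt:  # some checkpoints wrap the weights in a dict
    ckpt = ckpt["state_dict"]
net.load_state_dict(ckpt)
net.eval()

# Standard ImageNet validation loop (path, crop size, and normalization
# are the usual defaults; the searched config may use another resolution).
val_data = datasets.ImageFolder(
    "/path/to/imagenet/val",
    transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ]))
loader = torch.utils.data.DataLoader(val_data, batch_size=256, num_workers=4)

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        preds = net(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
print(f"top-1 accuracy: {correct / total:.4f}")
```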
Did you ever encounter this kind of error when searching? The error below is reported when the configured ofa version is 0.1.0-202012082159. I then tried ofa 0.0.4-2012082155, but the same error still occurs.
```
Traceback (most recent call last):
  File "msunas.py", line 8, in <module>
    from evaluator import OFAEvaluator, get_net_info
  File "/data8T/nsganetv2-master/evaluator.py", line 8, in <module>
    from codebase.networks import NSGANetV2
  File "/data8T/nsganetv2-master/codebase/networks/__init__.py", line 1, in <module>
    from ofa.imagenet_codebase.networks.proxyless_nets import ProxylessNASNets, proxyless_base, MobileNetV2
ModuleNotFoundError: No module named 'ofa.imagenet_codebase'
```
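In case it helps: as far as I can tell, the newer ofa releases renamed the `imagenet_codebase` package to `imagenet_classification`, so the old import path no longer exists. A possible fix, assuming the rename is the only change (verify the names exist in your installed ofa version), is to update the import in this repo:

```python
# codebase/networks/__init__.py -- old import (pre-rename ofa layout):
# from ofa.imagenet_codebase.networks.proxyless_nets import ProxylessNASNets, proxyless_base, MobileNetV2

# Possible replacement for newer ofa releases:
from ofa.imagenet_classification.networks.proxyless_nets import (
    ProxylessNASNets,
    proxyless_base,
    MobileNetV2,
)
```

Alternatively, pinning an older ofa release that still ships `imagenet_codebase` should also work.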
Quote from the paper: "An alternative approach to solve the bi-level NAS problem, i.e., simultaneously optimizing the architecture and learn the optimal model weights."
After working with the source code and reading the paper again, I feel like this repo is not an exact implementation of the paper. The trained weights of the candidates are dropped after the KPI is obtained; the result is a list of architecture codes plus their KPIs, and you have to retrain the candidate. Sure, you could easily save the weights of a candidate and continue training it, but does it make sense to update the weights of the supernet the way gradient-based algorithms do?
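To make the decoupling concrete, here is a toy sketch of the search loop as I read the code (all names are illustrative, not the repo's API): subnets inherit the supernet weights, get evaluated for the KPI, and only the (architecture, KPI) pair is kept.

```python
import random

# Toy illustration of the decoupled search loop (illustrative names only):
# candidate weights are inherited and then dropped, and the supernet is
# never updated, unlike gradient-based NAS.

def quick_eval(arch):
    # Stand-in for the (predicted) accuracy used as the KPI during search.
    return random.random()

supernet_weights = {"frozen": True}  # read-only during the whole search
archive = []                         # only (architecture, KPI) pairs survive

for _ in range(100):
    arch = tuple(random.choice([3, 5, 7]) for _ in range(4))  # toy encoding
    # A real run would slice supernet_weights into a subnet here, evaluate
    # it, and then discard those subnet weights.
    archive.append((arch, quick_eval(arch)))

best_arch, best_kpi = max(archive, key=lambda pair: pair[1])
print(best_arch, f"{best_kpi:.3f}")  # this arch is then re-trained separately
```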