Hello. When I trained the network I did not add the BN layers; all other structures were the same as the original. The training set is only BSD40, with a patch size of 40×40. On Set12 the PSNR is higher than the original, and the loss also converges faster than with the BN layers.
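The PSNR comparison mentioned above can be computed with a small helper. This is a minimal sketch, assuming 8-bit images (peak value 255); the function name `psnr` and the `data_range` parameter are illustrative, not from the original thread:

```python
import numpy as np

def psnr(clean, denoised, data_range=255.0):
    # Peak signal-to-noise ratio in dB; higher means the
    # denoised image is closer to the clean reference.
    clean = clean.astype(np.float64)
    denoised = denoised.astype(np.float64)
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Averaging this over the 12 images of Set12 gives the benchmark number being compared.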
But my network does not converge when I remove BN. What was your solution, please?
Hmm, my network structure is Conv + ReLU + BN. When I remove BN it still converges, with similar performance, so at first I thought BN was an unnecessary module. In my experiment I removed the BN modules and optimized with Adam; at noise level 25 the result reaches 29.15.
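The Conv + ReLU + BN structure with an optional BN toggle can be sketched as a DnCNN-style residual denoiser. This is a hypothetical sketch, assuming PyTorch; the class name `DnCNNLike` and the `use_bn` flag are illustrative, and the depth/channel counts are the commonly used defaults, not values stated in the thread:

```python
import torch
import torch.nn as nn

class DnCNNLike(nn.Module):
    # Hypothetical DnCNN-style denoiser: first layer Conv + ReLU,
    # middle layers Conv (+ optional BN) + ReLU, last layer Conv.
    def __init__(self, depth=17, channels=64, use_bn=True):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            # bias is redundant when followed by BN, so drop it then
            layers.append(nn.Conv2d(channels, channels, 3,
                                    padding=1, bias=not use_bn))
            if use_bn:
                layers.append(nn.BatchNorm2d(channels))
            layers.append(nn.ReLU(inplace=True))
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts the noise,
        # which is subtracted from the noisy input.
        return x - self.body(x)
```

Setting `use_bn=False` gives the BN-free variant being discussed; both variants can then be trained with Adam on 40×40 patches.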