Luvidia changed the title from "Results over epochs around 100%" to "Results over epochs: error around 100%" on Apr 30, 2019.
Hello everybody,
I am very new to GitHub and Python, and I need to train a U-Net to segment the lungs in X-ray scans. I have my own dataset of around 850 image pairs (input and output), all in the same folder. Inputs are named like '562.png' and the corresponding output is named '562_segmented.png'.
I tried to implement my code following the demo_toy_problem example, but when I train the model, the minibatch error stays around 100% (the minimum is 98.8%), and nothing improves when I change the training parameters (optimizer, number of epochs and iterations, ...).
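To rule out filename-pairing problems, I also checked that every input has a matching mask with a small throwaway script (`check_pairs` is just my own helper, not part of tf_unet):

```python
import glob
import os

def check_pairs(folder):
    # throwaway helper: list inputs whose '<name>_segmented.png' mask is missing
    inputs = [f for f in glob.glob(os.path.join(folder, '*.png'))
              if not f.endswith('_segmented.png')]
    missing = [f for f in inputs
               if not os.path.exists(f[:-len('.png')] + '_segmented.png')]
    return len(inputs), missing
```

Every input in my folder has its mask, so the pairing itself seems fine.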
Here is the code that I implemented:
```python
import tf_unet.unet as unet
import tf_unet.image_gen as image_gen
import tf_unet.image_util as image_util
import tf_unet.init as init
import tf_unet.layers as layers
import tf_unet.util as util

generator = image_util.ImageDataProvider('/workspace/segment_project/dataset_resize/*',
                                         data_suffix='.png',
                                         mask_suffix='_segmented.png')

net = unet.Unet(channels=generator.channels,
                n_class=generator.n_class,
                layers=3,
                features_root=124,
                cost=u'dice_coefficient')

trainer = unet.Trainer(net, optimizer=u'adam')
path = trainer.train(generator, "./unet_trained",
                     training_iters=36, epochs=10,
                     dropout=0.5, display_step=2)
```
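Since a near-100% error on a dice cost made me suspect my masks rather than the network, I also inspected the distinct pixel values of a few masks with a small debugging helper (`mask_classes` is my own throwaway name). My masks were exported from an editor, so they may contain anti-aliased gray edge values instead of clean 0/255, which I imagine could confuse a two-class setup:

```python
import numpy as np

def mask_classes(mask):
    # debugging helper: return the distinct pixel values of a mask array;
    # many intermediate gray values would suggest the mask is not cleanly binary
    return np.unique(np.asarray(mask))
```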
Does anyone know why my error rate isn't decreasing over the epochs? I saw in other issues that the data provider can be the cause, but I can't find another way to handle my data without using ImageDataProvider.
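In case I end up writing my own provider: my understanding is that the labels need to be in a (ny, nx, n_class) layout. Here is a minimal sketch of how I would convert my grayscale masks (`to_one_hot` is a hypothetical helper of mine, and 127 is just an assumed binarization threshold):

```python
import numpy as np

def to_one_hot(mask, threshold=127):
    # hypothetical helper: grayscale mask -> (ny, nx, 2) float labels,
    # channel 0 = background, channel 1 = lung (pixels above the threshold)
    fg = np.asarray(mask) > threshold
    return np.stack([~fg, fg], axis=-1).astype(np.float32)
```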
Thank you so much for your help,
Luvidia