what are the outputs? #3
Comments
I have faced similar situations where there were artifacts. It may help to use more training images and adjust the hyperparameters. You can also try training the networks separately.
After adjusting several things, the quality of the image improved considerably (see image), yet the artifact remained. The artifact looks like the result of something like an RGB grid misalignment. Or could it be caused by saturation?
It is probably caused by the transposed convolution, which produces a grid-like (checkerboard) pattern on the output tensor.
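A common remedy for transposed-convolution checkerboard artifacts is to replace the transposed convolution with nearest-neighbour upsampling followed by a regular convolution. The sketch below is a hypothetical drop-in block (the names `UpsampleConv`, `in_ch`, `out_ch` are illustrative, not from this repo):

```python
import torch
import torch.nn as nn

class UpsampleConv(nn.Module):
    """Hypothetical replacement for a ConvTranspose2d upsampling block.

    Nearest-neighbour upsampling followed by a stride-1 convolution avoids
    the uneven kernel overlap that produces grid/checkerboard artifacts.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 64, 32, 32)
y = UpsampleConv(64, 32)(x)
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

The spatial size doubles and the channel count changes exactly as a stride-2 `ConvTranspose2d` would, so the block can usually be swapped in without touching the rest of the architecture.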
Can you solve the problem?
You can first change the variable
extrinsics is a tuple. I modified extrinsics = torch.stack(tuple(extrinsics), dim=0) to extrinsics = torch.stack(tuple(torch.from_numpy(extrinsics)), dim=0), but it still doesn't work. @SSRSGJYD
You can simply write: extrinsics = torch.FloatTensor(extrinsics)
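For illustration, a minimal sketch of that conversion, assuming `extrinsics` is a tuple of per-view 4x4 NumPy camera matrices (the dummy identity matrices below are placeholders, not real camera data):

```python
import numpy as np
import torch

# Placeholder: a tuple of three 4x4 extrinsic matrices, as NumPy arrays.
extrinsics = tuple(np.eye(4, dtype=np.float32) for _ in range(3))

# Stacking into one ndarray first, then constructing a float32 tensor,
# sidesteps the per-element torch.from_numpy calls tried above.
extrinsics = torch.FloatTensor(np.stack(extrinsics, axis=0))
print(extrinsics.shape)  # torch.Size([3, 4, 4])
```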
Thank you very much for your help, it works now. @SSRSGJYD
@IQ17 Hi, I saw your results with this neural renderer. I am quite curious about what the neural texture map looks like. I mean the texture map itself (in RGB space with a parameterized UV space), not the rendered image. Could you please share this neural texture data or a figure of it? I cannot even find such results in the deferred neural rendering paper. Could you provide some of your neural textures as well, @SSRSGJYD? Thanks again!
@ChenFengYe Hi, here are some of my results from training the neural renderer. These are slices of the neural texture converted to an RGB image. The first 3 'layers' (first image) of the neural texture were forced to learn RGB values, so they look similar to a common RGB texture.
@oKatanaaa Thanks for these inspiring results! By the way, how did you force the first 3 channels to learn RGB? It is quite similar to my current work.
@ChenFengYe Sorry for the late response. When the texture is sampled, the first three channels are forced to learn RGB values by minimizing an L2 loss between those sampled channels and the target image. I will try to illustrate that with pseudo code:
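The pseudo code itself did not survive in this copy of the thread. As a hypothetical sketch of the approach described above (not the poster's original code; `total_loss`, `sampled`, `rendered` and the tensor shapes are all assumptions):

```python
import torch
import torch.nn.functional as F

def total_loss(rendered, sampled, target):
    """Sketch of a loss that ties the first 3 neural-texture channels to RGB.

    rendered: network output image, (B, 3, H, W)
    sampled:  neural texture sampled at screen-space UVs, (B, C, H, W)
    target:   ground-truth image, (B, 3, H, W)
    """
    # Usual photometric loss on the final rendering.
    render_loss = F.mse_loss(rendered, target)
    # Extra L2 term forcing the first 3 sampled channels to match the
    # image directly, so they converge to an ordinary RGB texture.
    rgb_loss = F.mse_loss(sampled[:, :3], target)
    return render_loss + rgb_loss

B, C, H, W = 2, 16, 8, 8
rendered = torch.rand(B, 3, H, W)
sampled = torch.rand(B, C, H, W)
target = torch.rand(B, 3, H, W)
loss = total_loss(rendered, sampled, target)
```

Because the auxiliary term is applied to the sampled texture rather than the texture map directly, only texels actually visible during training receive the RGB supervision.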
@oKatanaaa Thanks for sharing. This code is very helpful!
Hi, I tried to visualize the neural texture, but it looks like this. I seem to have run into trouble that I cannot resolve. Could you share your visualization code? Thanks!
Hi, I have the same question. What was your final solution?
Same here.
@oKatanaaa I modified the up layer as follows: class up(nn.Module): I'm wondering if you might have any suggestions for resolving this issue?
Hi, thanks for the code!
I just started to learn this paper with your code.
As far as I understand, there are two neural networks, and I trained them jointly using train.py with 410 basketball images.
When I run render.py with the trained model, there is something I don't understand in the output image. What are these artifacts, and how can I remove them?
Thanks!