Artifacts in 3D-TV Denoised Images #594
Interesting. Is that ghost you get in the first slice? This was coded 10 years ago, so I don't exactly remember the implementation, but it could be caused by periodic boundary conditions.
In any case, you can find the maths and the relevant paper in my PhD thesis (on my GitHub profile), and the code is in
Fantastic experiment; indeed, this is why I asked about GPU size etc. It seems that something is broken in the memory in/out, where either the memory doesn't get appropriately reset, or the chunk of memory being copied is wrong. I will investigate.
Great! Looking forward to your solution.
A possibility: https://github.com/CERN/TIGRE/blob/master/Common/CUDA/tvdenoising.cu#L378-L389 may need to be changed to:
I cannot try it now, but if you want to try that, great!
I have tested your new code, but the result is the same as before; the same ghost is still there.
Okay, I will investigate further, but this may be hard to fix: in all my years I have never seen this happen, which means it's a relatively niche case, and I am not sure I will be able to reproduce it. I'll try!
Can you tell me exactly which GPU you have? I can try to simulate your machine if I know exactly how much memory your GPU has. |
Hello @AnderBiguri: TIGRE/Common/CUDA/tvdenoising.cu Line 375 in f0e9100
The first one may be the main cause of the ghost. I tried to insert the following new code in TIGRE/Common/CUDA/tvdenoising.cu Lines 402 to 434 in f0e9100
The new code:
It just adds a line to set d_u, d_px, d_py, d_pz and d_src to zero before setting the new values. And finally this attempt works!!! There is no ghost whatever parameters I use. Of course, there should be a more efficient solution, but I haven't tried it yet.
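The stale-buffer mechanism described above can be sketched on the host side. This is a hypothetical analogue (the function `process_chunks` and its shapes are illustrative, not TIGRE's actual API): a scratch buffer is reused across image chunks, and when a later chunk does not overwrite the whole allocation, leftover values from the previous chunk survive and appear as a "ghost". Zeroing the buffer before each copy (what `cudaMemset` would do on the GPU) guarantees a clean state.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical host-side analogue of the fix discussed above: a scratch
// buffer (standing in for d_u/d_px/d_py/d_pz/d_src) is reused across
// chunks. If zero_first is false and a chunk is smaller than the buffer,
// stale data from the previous chunk leaks into what the "kernel" sees.
std::vector<float> process_chunks(const std::vector<std::vector<float>>& chunks,
                                  std::size_t buf_len, bool zero_first) {
    std::vector<float> scratch(buf_len, 0.0f);
    std::vector<float> seen;  // record what each chunk's kernel would read
    for (const auto& chunk : chunks) {
        if (zero_first)
            std::fill(scratch.begin(), scratch.end(), 0.0f);  // the fix
        std::copy(chunk.begin(), chunk.end(), scratch.begin());
        seen.insert(seen.end(), scratch.begin(), scratch.end());
    }
    return seen;
}
```

With a 3-element buffer and chunks `{1,1,1}` then `{2}`, the unzeroed run leaves the old `1`s behind after the second copy, while the zeroed run does not.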
I have tried using flipped copies of the first and last slices as the buffer data, and here is the new code (note: multi-GPU is not considered here): TIGRE/Common/CUDA/tvdenoising.cu Lines 378 to 389 in f0e9100
TIGRE/Common/CUDA/tvdenoising.cu Lines 402 to 434 in f0e9100
Here is the result:
Hi @zezisme, I am not sure I understood the second issue or these last images very well. Can you edit the text to explain them in a bit more detail?
This is the flow chart of the second problem, but I am not sure whether this problem has a significant impact on the denoising.
Right, let me rephrase it to see if I fully understood: the issue you are highlighting is that after the first split of the image is processed, the second one will use updated data as a buffer, rather than the original data. Yes, you are correct; this is what happens. Looking at my notes, I seem to have tested this when I coded the multi-GPU code, and I decided that even if this is wrong, it has a limited impact on the denoising, so I was not going to do much about it. The solution otherwise requires a full extra copy of the image in memory, which I decided was too costly for the little effect it had.

However, this should have no effect on the end slices, I think. I believe the end slices are a problem of the boundary condition. This code assumes Neumann boundary conditions (i.e. the gradient is zero in the slice direction at the first/last slice). This means that the gradient at the edges will always be smaller than everywhere else, and so will the denoising strength. Perhaps using periodic boundary conditions would fix the issue (if this is the issue). Critically, it would mean changing these lines to access the next (rather than previous) pixel value: https://github.com/CERN/TIGRE/blob/master/Common/CUDA/tvdenoising.cu#L70-L123
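The Neumann boundary effect described above can be seen with a one-dimensional sketch (hypothetical helper, not TIGRE's actual kernel code): clamping the out-of-range index forces the z-gradient to zero at the last slice, so the TV term, and hence the denoising strength, is systematically weaker there.

```cpp
#include <vector>

// Forward difference in z with Neumann (clamped) boundary handling,
// mirroring the behaviour described in the comment above: at the last
// slice the neighbour index clamps to itself, so the gradient is zero.
float grad_z(const std::vector<float>& u, int nz, int z) {
    int zn = (z + 1 < nz) ? z + 1 : z;  // Neumann: clamp at the boundary
    return u[zn] - u[z];                // exactly zero when z == nz - 1
}
```

For `u = {1, 4, 9}` the interior gradients are 3 and 5, but the last slice always reports 0, which is why the edge slices end up less denoised (noisier) than the rest.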
Yes, I agree that the second problem is not the cause of the ghost! The main reason is the first problem I mentioned before.
I just updated TIGRE to solve the ghosting issue. Does the change (it's from your code) solve it?
And for the boundary-buffer expansion method, I have a new method to solve the noise-enhancement problem. Here is the new code (it just copies the boundary slice as the buffer slice, to avoid the negative z-axis gradient caused by the mirrored slice) for TIGRE/Common/CUDA/tvdenoising.cu Lines 402 to 443 in cf42782
The following change may only work on a single GPU. If you use multiple GPUs, it may go wrong!
I have read your modification, and it can indeed solve the ghost problem. However, the assumption that the boundary gradient is 0 (Neumann boundary conditions) will make the noise in the boundary slices larger. But this may not be a very important problem.
Hi @zezisme. I only added the memory-zeroing part, not the part that you mentioned is only for one GPU. For now, you have shared 3 pieces of code:
So, questions for you:
Answers to the three questions:
@zezisme Thanks a lot for the explanation! I think the best way to tackle the fact that the edge slices have a different denoising strength is to change the maths, not the memory. Can you check whether changing the lines I linked before to the following fixes the problem? Essentially, this changes the boundary conditions from Neumann to mirror.
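The Neumann-to-mirror change amounts to a different rule for out-of-range slice indices. A minimal sketch of the indexing (the helper `mirror_index` is illustrative; it assumes the index is at most one step out of range, as is the case for a one-pixel stencil):

```cpp
// Mirror (reflect) boundary handling: an out-of-range z index is
// reflected back inside the volume, so the edge slice sees a mirrored
// neighbour instead of a clamped copy of itself. With Neumann (clamp)
// the edge gradient is forced to zero; with mirror it generally is not.
int mirror_index(int z, int nz) {
    if (z < 0)   return -z;              // reflect below slice 0
    if (z >= nz) return 2 * nz - 2 - z;  // reflect above slice nz-1
    return z;
}
```

Under this rule, `z = -1` maps to slice 1 and `z = nz` maps to slice `nz - 2`, while in-range indices pass through unchanged.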
Hello @AnderBiguri |
Ah, you may be right... Let me think about it a little more. Indeed, I think we always call the kernels with a z value of size_slice+buffer*2, which perhaps we should not do for the edge slices (first and last).
Yes. Do you have a better solution now? I have tried passing a parameter to the kernels to identify the real boundary slice, but I have met a new problem, and I am a little confused now.
@zezisme Maybe your solution is the best, but I need some time to test and play with multi-GPU cases to ensure it's right. An alternative for now is to add some slices to the input image before the TV denoising.
OK, thank you very much! I will choose the solution of adding some slices to the image before TV denoising, and I look forward to your subsequent, more effective solutions!
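The workaround agreed on above, padding the volume with extra slices before denoising so the weaker boundary handling acts on throwaway data, can be sketched as follows. This is a hypothetical helper (`pad_slices` and its flat-array layout are assumptions, not TIGRE code): it replicates the first and last slices `pad` times; after denoising, the same number of slices would be cropped off each end.

```cpp
#include <vector>

// Replicate-pad a volume stored as nz contiguous slices of slice_len
// values each: prepend `pad` copies of the first slice and append `pad`
// copies of the last one. Denoise the padded volume, then crop.
std::vector<float> pad_slices(const std::vector<float>& vol,
                              int nz, int slice_len, int pad) {
    std::vector<float> out;
    out.reserve(static_cast<std::size_t>(nz + 2 * pad) * slice_len);
    for (int p = 0; p < pad; ++p)  // copies of the first slice
        out.insert(out.end(), vol.begin(), vol.begin() + slice_len);
    out.insert(out.end(), vol.begin(), vol.end());  // the volume itself
    for (int p = 0; p < pad; ++p)  // copies of the last slice
        out.insert(out.end(), vol.end() - slice_len, vol.end());
    return out;
}
```

Replicating (rather than zero-filling) the edge slices keeps the z-gradient at the true boundary near zero, so the padding does not introduce artificial edges of its own.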
Hi @zezisme, I will be quite busy over the next weeks, so if you want to try to produce a nice piece of code that would also be multi-GPU compatible to fix the last error, feel free to give it a shot and make a Pull Request. If not, no worries; I'll try to tackle this later on. It's just that I won't have time for it in the next few weeks.
Hello, I found that when using im3DDenoise to denoise 3D images, as the number of iterations increases, the edge slices produce image artifacts, and these artifacts seem to come from other slices. Is there any reference document for this algorithm? Can you provide the corresponding mathematical derivation? I want to know whether the issue comes from the mathematical principle or its implementation.
Actual Behavior
TV_lambda = 200;
FDK
FDK+TVdenoise(50 iterations)
FDK+TVdenoise(100 iterations)
FDK+TVdenoise(200 iterations)
The artifact seems to come from the 267th slice (or near this slice; the total slice count is 400).
It seems to be related to a ratio of 1/3 ((400 - 267)/400 ≈ 1/3).
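The 1/3 ratio is consistent with the memory-chunking hypothesis discussed later in the thread: if the 400-slice volume were split into three roughly equal chunks, the last chunk boundary would land near slice 267. A tiny arithmetic sketch (the split into equal chunks is an assumption, not TIGRE's actual partitioning logic):

```cpp
// Hypothetical equal-chunk split: start slice of chunk k when n_slices
// are divided into n_chunks pieces. For 400 slices in 3 chunks, the
// third chunk starts at slice 266 -- right where the artifact appears.
int chunk_start(int n_slices, int n_chunks, int k) {
    return n_slices * k / n_chunks;
}
```

This is only a plausibility check; the actual split in TIGRE depends on the available GPU memory.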
Code to reproduce the problem (If applicable)
Specifications