
Higher precision float (decimal) support? #1143

Open
chrisaliotta opened this issue Jan 1, 2024 · 2 comments
Comments

@chrisaliotta

@m4rs-mt and team, excellent work on the library! Is it currently possible to support higher-precision (128-bit) floats with the current library?

Is this something we can implement:
https://developer.nvidia.com/blog/implementing-high-precision-decimal-arithmetic-with-cuda-int128/

I would like to help, but I have no idea where to start. I could certainly work on a DMath helper library, but as for compiling to PTX and implementing the __int128 data type, I lack sufficient knowledge.

Thanks.
Chris

@chrisaliotta chrisaliotta changed the title Higher precision floats (decimal) support? Higher precision float (decimal) support? Jan 2, 2024
@m4rs-mt
Owner

m4rs-mt commented Jan 9, 2024

Hi @chrisaliotta, welcome to the ILGPU community, and a happy new year! This would be really amazing to have, and we would love to assist you with integrating these types. As for the PTX-lowering part, that is something @MoFtZ and I can focus on.

@IsaMorphic
Contributor

Hi @chrisaliotta !! I'm happy to let you know that my software implementation of IEEE 754 binary128 for .NET Core now fully supports ILGPU! I've even successfully used it with my own Mandelbrot fractal rendering library, also written in C#.

Check it out!
https://github.com/IsaMorphic/QuadrupleLib

3 participants