I found that the current implementation of normalize:

```cpp
return v * inversesqrt(dot(v, v));
```

is less numerically stable than:

```cpp
return v / sqrt(dot(v, v));
```

even for double-precision vectors.

By this I mean that if you repeatedly call the original implementation, the result can perpetually oscillate between two values. The division version converges and is very stable with double precision. A simple test: generate a series of random vectors, normalize each one, normalize the result again, and compare the first and second outputs. In my case this caused hard-to-diagnose issues when using GLM to solve computational geometry problems that require high precision.

EDIT: there are still rare cases where normalizing double-precision vectors doesn't converge, but the division version does appear to be more stable.
I also found this link that points out the same issue:
https://stackoverflow.com/questions/23303598/3d-vector-normalization-issue