Capturing an idea of a potential performance improvement: currently we use a fixed hash table size for the hull, `sqrt(n)`. This generally works very well, but can be too low for certain kinds of input where hull sizes tend to be big.
For optimal performance, hash size should depend on the size of the convex hull, but we don't know that size beforehand. So what we could try to do is to reinitialize the hash from scratch multiple times during triangulation, adjusting it to the hull size. If we don't do it often (e.g. once per 1000 processed points), it shouldn't have a noticeable overhead.
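A minimal sketch of what the periodic rebuild could look like (in TypeScript rather than the library's JS; `rebuildHullHash`, `HullState`, and `REBUILD_INTERVAL` are made-up names for illustration, and only the pseudo-angle trick mirrors what Delaunator actually does):

```ts
const REBUILD_INTERVAL = 1000; // hypothetical tuning constant: rebuild once per 1000 processed points

function pseudoAngle(dx: number, dy: number): number {
    // monotonically maps an angle to [0, 1); same trick Delaunator uses instead of atan2
    const p = dx / (Math.abs(dx) + Math.abs(dy));
    return (dy > 0 ? 3 - p : 1 + p) / 4;
}

interface HullState {
    hullStart: number;    // index of some vertex on the current hull
    hullNext: Int32Array; // hullNext[i] = next hull vertex after i (CCW)
    coords: Float64Array; // flat [x0, y0, x1, y1, ...] coordinate array
    cx: number;           // center used for angular hashing
    cy: number;
}

// Rebuild the angular hash from scratch, sized to the current hull.
function rebuildHullHash(state: HullState): { hash: Int32Array; size: number } {
    // count the current hull vertices by walking the linked list
    let hullSize = 0;
    let e = state.hullStart;
    do {
        hullSize++;
        e = state.hullNext[e];
    } while (e !== state.hullStart);

    // size the table to the hull itself rather than to sqrt(n)
    const size = Math.max(16, hullSize);
    const hash = new Int32Array(size).fill(-1);

    // re-insert every hull vertex under its pseudo-angle key
    e = state.hullStart;
    do {
        const dx = state.coords[2 * e] - state.cx;
        const dy = state.coords[2 * e + 1] - state.cy;
        const key = Math.floor(pseudoAngle(dx, dy) * size) % size;
        hash[key] = e;
        e = state.hullNext[e];
    } while (e !== state.hullStart);

    return { hash, size };
}
```

In the main triangulation loop this would be triggered every `REBUILD_INTERVAL` processed points, replacing the old hash and its size in one go, so lookups between rebuilds work exactly as they do now, just against a better-sized table.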
The paper explains an algorithm to generate "simple polygons for characterizing the shape of a set of points in the plane". I don't know whether we are already using that algorithm, but it builds on a pre-existing Delaunay triangulation and runs in O(n log n). I realize we already have that hull; just throwing this in.