Ship speed sanity check #73

Open
squaregoldfish opened this issue Feb 18, 2016 · 0 comments

@squaregoldfish
Member

Message thread of ideas between Ansley, Karl and Steve:

Hi Steve,

I was having a discussion about ship speeds in SOCAT and how we have to be rather lenient on the acceptable speeds. Ansley brought up the idea that errors in position are going to cause larger errors in the ship speed when the time between measurements is smaller. In other words, an error of, say, 50 meters is not very significant when the measurements are taken a couple of hours apart, but is very significant when they are taken a couple of minutes apart. I'm not sure I had fully realized that before. And, of course, the error in longitude becomes less significant the closer you get to the poles. But I would assume the error in position is some number of meters (not in deg lon and deg lat), so that would not matter.
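
For illustration, a minimal sketch of how a fixed position error propagates into a speed error as the measurement interval shrinks. The 50 m figure comes from the message above; the straight-line treatment and the conversion to knots are assumptions for the example, not existing code.

```python
def speed_error_knots(position_error_m, interval_seconds):
    """Worst-case speed error (knots) from a fixed position error over an interval."""
    # Each of the two positions can be off by position_error_m, so the
    # distance between them can be wrong by up to twice that.
    error_m_per_s = (2 * position_error_m) / interval_seconds
    return error_m_per_s * 1.94384  # convert m/s to knots

for minutes in (2, 30, 120):
    print(f"{minutes:>3} min apart -> speed uncertain by "
          f"±{speed_error_knots(50, minutes * 60):.2f} knots")
```

With a 50 m position error, two minutes between measurements gives a speed uncertainty of roughly ±1.6 knots, while two hours gives only about ±0.03 knots.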

Just something to think about when error-checking computed values. Maybe check whether the value plus/minus its error overlaps with the acceptable range, and if so consider it acceptable? Of course that means knowing what the error is on these values - not sure if that is fairly standard for positions. Or maybe you have something worked out already.
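
For illustration, a minimal sketch of that overlap check; the acceptable speed range, the units (knots), and the function name are hypothetical.

```python
def speed_acceptable(speed, speed_error, min_speed=0.0, max_speed=30.0):
    """True if the interval [speed - error, speed + error] overlaps the acceptable range."""
    return (speed - speed_error) <= max_speed and (speed + speed_error) >= min_speed

print(speed_acceptable(31.0, 1.6))  # True: the error band still reaches into the range
print(speed_acceptable(45.0, 1.6))  # False: outside the range even allowing for the error
```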

Karl

Hi Karl,

Interesting thoughts. Maybe we should consider the average speed over a given period, rather than only between consecutive points. If you find something suspicious you could look at the individual pairs of points to work out which ones look wrong. Or you could look at the standard deviation of all the speeds to find big outliers.
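
For illustration, a minimal sketch of flagging point-to-point speeds by their distance from the mean; the example speeds, the threshold, and the function name are assumptions, not existing code.

```python
from statistics import mean, stdev

def flag_speed_outliers(speeds, n_sigma=2.0):
    """Return indices of speeds more than n_sigma standard deviations from the mean."""
    if len(speeds) < 2:
        return []
    m, s = mean(speeds), stdev(speeds)
    if s == 0:
        return []
    # A single large outlier also inflates the standard deviation, so on short
    # series a modest threshold (or a robust statistic such as the median
    # absolute deviation) works better than a strict 3-sigma rule.
    return [i for i, v in enumerate(speeds) if abs(v - m) > n_sigma * s]

# Point-to-point speeds in knots; the 42.0 is a suspicious jump.
speeds = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 42.0, 10.1]
print(flag_speed_outliers(speeds))  # [8]
```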

Either way it requires looking at the data set as a whole, as you suggest should be the revised strategy. I'm inclined to agree, as the speed gains from the line-by-line approach are negligible and yours allows for much greater flexibility and a properly unified API.

We can look into this properly when I've finished revising the API. In the meantime I'll record these messages as an issue on GitHub so we don't forget.

Cheers,
Steve.
