
New Close() method, for use when making many New() filemutexes. #2

Merged: 1 commit into alexflint:master on Oct 28, 2017

Conversation

@sb10 (Contributor) commented Jul 31, 2017

Added test for real simultaneous lock attempt.
Removed debug Printf from test script.

The motivation for the new Close() method is that I create many different filemutex lock files, and with only the normal Unlock() I eventually hit the error:

too many open files

So Close() is needed to clean up the file descriptors.
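
To illustrate (a sketch; the loop and the lockPaths variable are mine, not code from this PR):

// Sketch only. Each New() opens a file descriptor; with only Unlock()
// available, nothing ever closes them, so a loop like this eventually
// fails with "too many open files".
for _, path := range lockPaths { // lockPaths: hypothetical slice of lock file paths
    m, err := filemutex.New(path)
    if err != nil {
        return err
    }
    m.Lock()
    // ... critical section ...
    m.Close() // proposed: unlock, close the fd, and remove the file
}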

The original TestSimultaneousLock test was not actually testing simultaneous behaviour. What happened was a Lock() followed by an Unlock(), followed by another Lock() and Unlock(). That is sequential, and didn't prove that two locks can't be held simultaneously.
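
For comparison, a genuinely simultaneous test looks something like this (a sketch; the test name, path, and timeout are illustrative, not necessarily the test added in this PR):

func TestReallySimultaneousLock(t *testing.T) {
    const path = "/tmp/filemutex_test.lock" // illustrative path
    m1, err := New(path)
    if err != nil {
        t.Fatal(err)
    }
    m2, err := New(path) // separate fd on the same file
    if err != nil {
        t.Fatal(err)
    }

    m1.Lock()
    acquired := make(chan struct{})
    go func() {
        m2.Lock() // must block: m1 holds the flock via its own fd
        close(acquired)
    }()

    select {
    case <-acquired:
        t.Fatal("second Lock() succeeded while the first was held")
    case <-time.After(100 * time.Millisecond):
        // still blocked, as a simultaneous attempt should be
    }

    m1.Unlock()
    <-acquired // the blocked Lock() should now complete
    m2.Unlock()
}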

@alexflint (Owner) left a comment

Thanks for putting this together @sb10. I had two small comments but otherwise I'm ready to merge this.

        panic(err)
    }
    syscall.Close(m.fd)
    syscall.Unlink(m.path)
@alexflint (Owner)

I'm a bit worried about a function called "Close" also deleting the file. If I were using this API, I'd expect Close() to have similar semantics to os.File.Close, which closes the file descriptor but does not actually delete the file. How about having Close() simply close the file descriptor, and then adding a new function CloseAndRemove() to additionally unlink the path?
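
A sketch of that split (assuming the fd and path fields from this PR; this is not the merged code):

// Close mirrors os.File.Close: it releases the descriptor (which also
// drops any flock held through it) but leaves the file on disk.
func (m *FileMutex) Close() error {
    return syscall.Close(m.fd)
}

// CloseAndRemove additionally unlinks the lock file.
func (m *FileMutex) CloseAndRemove() error {
    if err := syscall.Close(m.fd); err != nil {
        return err
    }
    return syscall.Unlink(m.path)
}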

@sb10 (Contributor, Author)

I can't think of a reason anyone would want the lock file to remain once no lock will be held on it, so I don't think two methods make sense. I could either rename Close() to CloseAndRemove(), or come up with some other shorter name that conveys this better. Finish()? That makes it clearer you're not supposed to use it again.

@alexflint (Owner)

The case I'm thinking of is where there are multiple processes that grab and release locks on the same file (which is one of the use cases this library was designed for).

@sb10 (Contributor, Author) commented Aug 1, 2017

Yes, the idea is that each process does New() -> Lock() -> Close(), and only one of the processes holds the lock at any one time. This works (I think; not strictly tested) if Close() unlinks. I don't see the use case for needing the lock file to remain on disk once it has been Close()d.
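
A sketch of that pattern (the path and error handling here are illustrative, not code from this PR):

// Run by each process; the flock on the shared file serializes them.
m, err := filemutex.New("/var/lock/app.lock") // hypothetical shared path
if err != nil {
    log.Fatal(err)
}
m.Lock()        // blocks until no other process holds the lock
defer m.Close() // as proposed: unlock, close the fd, unlink the file
// ... work that must be exclusive across processes ...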

@rosenhouse (Contributor) commented Oct 22, 2017

@sb10 As an API user, I would be very surprised to find that Close()ing a file handle would delete the file.

In our use case for CNI plugins, we are opening the lock on a directory containing data that needs to outlive multiple cycles of Lock and Close.

We would not be able to use this library if Close caused our data directory to be deleted.

    }
    syscall.Close(m.fd)
    syscall.Unlink(m.path)
    m.mu.Unlock()
@alexflint (Owner)

Hmm, what if you call Close() after Unlock()? The Go docs say of sync.Mutex:

"It is a run-time error if m is not locked on entry to Unlock."

It seems to me that the mu member is not even needed; perhaps just remove it entirely in this PR?
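
For reference, the quoted run-time error is easy to reproduce in a standalone program (illustrative, not code from this PR):

package main

import "sync"

func main() {
    var mu sync.Mutex
    mu.Unlock() // fatal error: sync: unlock of unlocked mutex
}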

@sb10 (Contributor, Author)

If you call Unlock() or Lock() after Close(), you get a panic due to a bad file descriptor. The documentation for Close() says not to do that. The alternative of handling the closed state in the other methods doesn't seem worthwhile, since you need to know you're using it wrong regardless.

If I don't unlock at the end of Close(), however, and another Lock() is attempted, you'd get a deadlock, which is much more difficult to debug than a panic.

Leave as is? Make the documentation more explicit? "Do not call other methods after calling Close()."?

@sb10 (Contributor, Author) commented Aug 1, 2017

As for preventing Unlock() before Close(): again, I think the only solution is explicit documentation, and allowing the panic to happen. If sync.Mutex panics on Unlock() -> Unlock(), then FileMutex should panic on Unlock() -> Close(); they're the same user mistake.

@alexflint (Owner)

The case I'm thinking of is where I have a FileMutex that I repeatedly Lock and Unlock, and then at the end of all this I want to Close it. This means the mutex would be in the unlocked state when I want to close it, and it would feel weird to be required to lock it before closing it.

@sb10 (Contributor, Author)

I think the user has to be aware of the last time they wish to release the lock; that last time, they call Close() instead of Unlock().

I did consider having Close() not do what Unlock() does, so that strictly correct usage would always be Unlock() followed by Close(), but that wasn't nice for my use case of locking once and then closing in a simple defer.

Contributor

I agree with @alexflint that we should just drop the mu member. The existing file locking semantics are strictly stronger than those of mu. Dropping it would simplify these things a lot.

@sb10 (Contributor, Author) commented Aug 7, 2017

Any further feedback? Ultimately it's your choice on how best to go ahead, so if you say how you'd like the final API to be, I'll implement it that way.

@alexflint (Owner)
@sb10 Sorry for the delay. I'd be happy to merge this if you remove the path field and drop the call to Unlink.

@rakelkar commented Oct 21, 2017

Hey @sb10, I'm interested in your PR. Do you think you could update it per the review comments so this can be merged?

// using Close().
func (m *FileMutex) Close() {
    if err := syscall.Flock(m.fd, syscall.LOCK_UN); err != nil {
        panic(err)


Please return err instead of panic (other panics have been removed and method signatures have been updated to return err instead)
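
For example (a sketch of the requested signature; whether Close() should also unlink is the separate question discussed above):

func (m *FileMutex) Close() error {
    if err := syscall.Flock(m.fd, syscall.LOCK_UN); err != nil {
        return err
    }
    return syscall.Close(m.fd)
}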

Contributor

👍

@sb10 (Contributor, Author) commented Oct 23, 2017

Should I work on this, or is this now superseded by #9?

@alexflint merged commit 55ed66a into alexflint:master on Oct 28, 2017