SGX protected files possible vulnerability #2360
Could you give a more complete example? It's unclear what your program tries to do next with these two file handles. Do you try to read from them? Write to them? Simply close them immediately?
Just opening and closing the files is enough to trigger the crash. For completeness, we configured the tmp dir in the manifest to be protected.
Thanks @shmeni! This is indeed a bug in our code (not sure if this could be exploited). The root cause is that we don't check whether the FD returned by the host is already used by another handle. We do have such checks at the LibOS layer (which has its own FD map), but there are no such checks at the PAL layer. This was probably OK for "simple" handles, but leads to corner cases with Protected Files. I guess we should introduce a map of FDs at the PAL level as well, and return errors when we detect such a case? What do you think @mkow @boryspoplawski @pwmarcz @yamahata ? P.S. To be honest, I didn't debug why it segfaults as Meni mentioned in the top comment. It looks like the crash happens inside ipf_close, on the line linked in the issue description.
I'm not that familiar with our pf implementation, but why can't we allow two handles to have the very same fd? If we break the underlying file on the host, that's fine (the host is messing with us anyway). To me it doesn't look like the duplicated fd is the culprit...
I agree that this is hardly the root cause of this issue (it looks like some checks are missing in the Protected Files code?). But shouldn't this be considered part of the sanitization of OCALL return values?
Just sounds like a useless thing - it doesn't stop any attack whatsoever, and can only hide issues in other places.
Yup, and the host can always duplicate an FD to create the illusion of separate descriptors that are in fact mapped to the same resource. It's just that our code should work correctly also in such cases.
@shmeni I cannot reproduce your crash. Here is my diff on the latest Graphene master branch:
Then I built this and ran it in debug mode. Here is the output:
So, as expected, I got a bad-write error.
I also failed to reproduce it.
Sorry about the lack of details @dimakuv. Let me try again. Here's the strace output for the example (I've modified the same helloworld test as you did, so I got the same fd numbers):
sys_open changed return value from 16 to 15
Just so it'll be easy to reproduce without GDB, I created a dummy hack in the open call that returns the wrong value in this particular case (it should be safe to ignore my commenting out of the /dev/kmsg mount; that was just due to some trouble I had with the latest master branch). FYI, I tested it with an empty tmp dir, so the files didn't exist beforehand.
Thanks for the details, I was able to reproduce the crash. |
There might be a vulnerability in the implementation of the Protected Files feature. Specifically, we tested the following code, which opens two protected files (we configured the tmp dir to be protected following the very intuitive examples you provide):
int fd1 = open("tmp/sec1.txt", O_CREAT | O_RDWR, 0644);
int fd2 = open("tmp/sec2.txt", O_CREAT | O_RDWR, 0644);
Then we tested what would happen if an attacker changed the return value of open so that the same file descriptor value was used for both files. Interestingly, that led to a crash inside the enclave; debugging showed that it was due to accessing invalid memory in the ipf_close function. The specific culprit is the following line:
https://github.com/oscarlab/graphene/blob/1d5dfb4018d865894e0cd959a1c2a91ebfe8749d/Pal/src/host/Linux-SGX/protected-files/protected_files.c#L354