
How about sybil attacks? #2

Open
fdietze opened this issue Jul 2, 2020 · 2 comments

Comments

fdietze commented Jul 2, 2020

Hi, just read your blog post. Great work!

I didn't find an email address to reach out to, so I'm creating an issue here.

My first thought when reading was that this system seems to be vulnerable to Sybil attacks. I guess you've thought about it, but I didn't find any mention of it in your blog post or in the paper. Is your system made resistant by something I'm not seeing?

I'm doing some research in this area as well, especially Sybil-attack-resistant content curation systems. ;)
If you'd like to chat about it, please reach out.

cblgh (Owner) commented Mar 10, 2021

@fdietze Hi! Sorry for the (very) belated reply; I was a bit worn out after publication :)

Would you mind detailing further how you consider trustnet to be vulnerable to sybil attacks? Did you also see the note on Appleseed's bottleneck property in the paper? Page 69, excerpt included below for convenience :)

[Image: excerpt from the thesis (p. 69) on Appleseed's bottleneck property]
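For readers who haven't seen the thesis: the bottleneck property can be illustrated with a rough sketch of Appleseed-style spreading activation (Ziegler & Lausen). All names and parameters below are illustrative, not trustnet's actual API, and the sketch omits details of the real algorithm (e.g. backward edges to the source and its convergence criterion). The point it demonstrates is that the total trust energy entering a sybil region is capped by the trust assigned to the bottleneck edge leading into it, no matter how many sybil nodes sit behind that edge.

```python
def appleseed(graph, source, energy=200.0, spread=0.85, threshold=0.01):
    """Spreading-activation sketch.

    graph: {node: {neighbor: trust_weight}} with non-negative weights.
    Each node keeps a (1 - spread) share of incoming energy as rank and
    forwards the rest along its outgoing edges, proportional to weight.
    """
    rank = {}
    incoming = {source: energy}
    while incoming:
        nxt = {}
        for node, e in incoming.items():
            if e < threshold:  # drop negligible energy so the loop terminates
                continue
            rank[node] = rank.get(node, 0.0) + (1 - spread) * e
            edges = graph.get(node, {})
            total = sum(edges.values())
            if total == 0:
                continue
            for neighbor, weight in edges.items():
                nxt[neighbor] = nxt.get(neighbor, 0.0) + spread * e * (weight / total)
        incoming = nxt
    return rank

# Hypothetical graph: alice trusts bob (4.0) and a sybil gateway (1.0);
# the gateway forwards to two sybils with arbitrarily high internal weights.
graph = {
    "alice": {"bob": 4.0, "gate": 1.0},
    "gate": {"s1": 10.0, "s2": 10.0},
}
rank = appleseed(graph, "alice", energy=100.0)
sybil_total = rank["gate"] + rank["s1"] + rank["s2"]
# Only 1/5 of alice's outgoing energy ever crosses the bottleneck edge,
# so the sybil region's combined rank stays below bob's alone.
```

Inflating the internal sybil weights or adding more sybil nodes only redistributes the same bottleneck-limited energy among them; it cannot raise the region's total.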

fdietze (Author) commented May 25, 2021

And now apologies from my side for taking so long to reply. 😄

I took the time to read the relevant parts of your thesis. I appreciate your work on that topic.

Furthermore, I think your trust system is safe because it is used only to "hide" information, not to promote it (I think I overlooked that fact last time). And in the context of a chat, people actually interact with others they have interacted with before.

Interestingly, this is a different use case than the one I'm working on: people accepting changes (like PRs) to an argument map / knowledge database. So it is about collaboratively adding information instead of hiding it, for which I haven't found a good solution yet. I still see some attack vectors against it. Here is a writeup of my current state of thinking:

https://felx.me/2021/02/20/collaborative-artifact-creation.html

Do you have any insights in that direction?
