Question about obtaining clustering_adj.txt #2
I use the code in persona.py below to obtain the partitions of the egonets:

egonets = CreateEgonets(graph)
for u, egonet in egonets.items():
When I first used the original persona algorithm to partition the egonets, I found that some nodes ended up with too many partitions, which heavily hurt the performance of the program. The size of each partition, on the other hand, was relatively small. It made no sense to retain the original partitions, so I modified the original persona algorithm: I set a maximum partition number and merge small partitions together whenever the number of partitions exceeds that threshold. If you want to process partitions with different sizes, you can either modify the persona algorithm as described above and set the maximum number large enough, or modify the model so that it can deal with them (I tried the latter, but it lost performance because of my poor coding skill. Maybe you can find a more efficient way!).
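The merging step described above could be sketched roughly as follows. This is a hypothetical helper, not the actual modified persona.py; `partitions` is assumed to be a list of node sets for one egonet, and `max_parts` is the threshold:

```python
def cap_partitions(partitions, max_parts):
    """Merge the smallest partitions of an egonet until at most
    max_parts partitions remain (a sketch, not the original script).

    partitions: list of sets of node ids from the persona algorithm.
    max_parts:  maximum number of partitions to keep.
    """
    # Largest partitions first, so only the small ones get merged.
    parts = sorted(partitions, key=len, reverse=True)
    if len(parts) <= max_parts:
        return parts
    kept, rest = parts[:max_parts - 1], parts[max_parts - 1:]
    merged = set().union(*rest)  # fold all remaining small partitions into one
    return kept + [merged]
```

A coarser alternative would be to merge small partitions pairwise by some quality score (e.g. modularity gain) instead of folding them all into one bucket, at the cost of more computation per egonet.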
Thank you for your answer. Could you please share your modified persona algorithm? I have actually used a modularity-based method to merge the small partitions together, but I ran into an error when a node is an isolated point in the egonet, so the modularity cannot be calculated.
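The isolated-node error described above typically comes from modularity being undefined when the (ego)graph has no edges, or from isolated nodes not being covered by any community. A minimal, dependency-free sketch that guards both cases (the adjacency-dict representation here is an assumption for illustration, not CFANE's actual data structure):

```python
def safe_modularity(adj, communities):
    """Modularity of a partition of an undirected graph given as an
    adjacency dict {node: set(neighbours)}. Returns 0.0 for an
    edgeless graph, where modularity is undefined (sketch only).
    """
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2 * number of edges
    if m2 == 0:
        return 0.0  # convention: edgeless egonet contributes nothing
    # Put any node not covered by a community into its own singleton,
    # so isolated points do not break the computation.
    covered = set().union(*communities) if communities else set()
    comms = list(communities) + [{n} for n in adj if n not in covered]
    q = 0.0
    for comm in comms:
        # intra-community edge endpoints (each edge counted twice)
        e_in = sum(1 for u in comm for v in adj[u] if v in comm)
        deg = sum(len(adj[u]) for u in comm)
        q += e_in / m2 - (deg / m2) ** 2
    return q
```

Singleton communities of isolated nodes have degree 0, so they add exactly 0 to the score rather than raising an error.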
All right! However, since I have graduated, I may have lost the script. I will search for the file. You can email me your WeChat ID for further communication if that is convenient.
Hello, can you provide your email address? I have the same doubts and hope to communicate with you further. Thanks!
Thanks a lot!
@Gmrylbx You can refer to this reply first. If you have any problem, email me your WeChat ID (***@***.***). ^_^
Hi, I have the same problem. Has it been solved? orz
Excuse me, I want to ask how to obtain clustering_adj.txt. I have used persona.py to partition the egonets, but the partitions have different sizes. However, clustering_adj.txt shows that each node's egonet has 3 partitions.
Therefore, I want to ask how to process the partitions with different sizes. Thank you!