The analytical algorithm's performance doesn't meet expectations #2898
Comments
One reason is that it may include the compilation time and the projection time.
The first ten lines of the edge file:
We found that the timing method includes the Python code that assembles the op, the round-trip time of the RPC, and the dynamic loading of libraries. This adds significant overhead to the query time (which is less than 1 second in this experiment), so the relative overhead is huge. We added a new log that prints the actual evaluation time of the application inside the grape_engine, which should serve as the reliable metric.
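To illustrate why client-side end-to-end timing is misleading here, the sketch below (plain Python; the sleeps are hypothetical stand-ins for op assembly, the RPC round trip, and dynamic library loading, not measurements from GraphScope) separates the wall-clock time the client observes from the engine's own compute time:

```python
import time

def run_query_end_to_end(compute_seconds):
    """Simulate a client-timed query where fixed overheads dominate a sub-second compute."""
    t0 = time.perf_counter()
    time.sleep(0.05)   # stand-in for assembling the op in Python
    time.sleep(0.10)   # stand-in for the RPC round trip
    time.sleep(0.20)   # stand-in for dynamically loading the app library
    t_engine = time.perf_counter()
    time.sleep(compute_seconds)          # the actual evaluation in the engine
    engine_time = time.perf_counter() - t_engine
    total_time = time.perf_counter() - t0
    return total_time, engine_time

total, engine = run_query_end_to_end(0.1)
print(f"client-observed: {total:.2f}s, engine-only: {engine:.2f}s")
```

Because the fixed overheads do not shrink as the query gets faster, the shorter the actual evaluation, the worse the end-to-end number looks, which is why an engine-side log is the right thing to compare.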
I tested the PageRank and SSSP algorithms on GraphScope and Gemini, and found that Gemini performed much faster than GraphScope. Here is the result of the PageRank test on the soc-LiveJournal1 dataset, with 20 iterations:
The GraphScope script:
The command for running libgrape-lite:
mpirun -n 1 ./run_app --vfile ../../data_set/live_journal/soc-livejournal.vertex.csv --efile ../../data_set/live_journal/soc-livejournal.mtx --application pagerank --out_prefix ./output_pagerank --directed -pr_mr 20
The command for running Gemini:
./toolkits/pagerank ./data_set/live_journal/soc-livejournal.binarye 4033137 20
Each of the three tests above uses a single partition.
The problem is that GraphScope is 10 times slower than libgrape-lite. I don't know whether my test script is wrong; please advise. Thanks!
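For reference, the computation all three systems run is ordinary power-iteration PageRank. A minimal self-contained sketch over an edge list (20 iterations as in the experiment; the 0.85 damping factor is an assumed common default, not something stated in this issue) looks like:

```python
def pagerank(edges, num_vertices, iterations=20, damping=0.85):
    """Plain power-iteration PageRank over an edge list of (src, dst) pairs."""
    out_degree = [0] * num_vertices
    for src, _ in edges:
        out_degree[src] += 1
    rank = [1.0 / num_vertices] * num_vertices
    for _ in range(iterations):
        contrib = [0.0] * num_vertices
        for src, dst in edges:
            contrib[dst] += rank[src] / out_degree[src]
        # dangling vertices (no out-edges) spread their rank uniformly
        dangling = sum(rank[v] for v in range(num_vertices) if out_degree[v] == 0)
        base = (1.0 - damping) / num_vertices + damping * dangling / num_vertices
        rank = [base + damping * contrib[v] for v in range(num_vertices)]
    return rank

# toy 4-vertex graph standing in for the real edge file
edges = [(0, 1), (1, 2), (2, 0), (2, 1), (3, 2)]
ranks = pagerank(edges, 4)
print([round(r, 3) for r in ranks])
```

Since the per-iteration work is identical across systems, a 10x gap at equal iteration counts points at framework overhead (or a configuration difference) rather than the algorithm itself.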