
Improve data retrieval performance for large data chunks #2

Closed
maciejlach opened this issue Jun 6, 2014 · 1 comment

@maciejlach
Collaborator

Performance of QReader.Read() does not scale linearly with the size of the data being retrieved.

Data sample:

gen: {[x] sample::([] ti:09:30:00.0 +x?06:00:00.0 ;a:100 +x?100f ; ap:x?50;b:101 +x?100f ; ab:x?50; id: x?10)}
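To reproduce timings like the ones below, a minimal benchmark sketch along these lines could be used. This is an illustration only: the host, port, and the use of qSharp's QBasicConnection/Sync API here are assumptions, not part of the original report, and each Sync round-trip includes network transfer as well as the deserialization path the issue refers to.

```csharp
using System;
using System.Diagnostics;
using qSharp;

class ReadBenchmark
{
    static void Main()
    {
        // Assumption: a q process on localhost:5001 with the `gen` function above already defined.
        var c = new QBasicConnection("localhost", 5001, null, null);
        c.Open();
        try
        {
            foreach (var size in new[] { 50000, 75000, 100000, 500000, 1000000, 1500000 })
            {
                // Rebuild the `sample` table with the requested row count on the q side.
                c.Sync("gen", size);

                // Time the retrieval of the whole table; deserialization happens in QReader.
                var sw = Stopwatch.StartNew();
                var result = c.Sync("sample");
                sw.Stop();

                Console.WriteLine("{0,8} rows: {1} ms", size, sw.ElapsedMilliseconds);
            }
        }
        finally
        {
            c.Close();
        }
    }
}
```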

Sample results:

| sample size | qSharp [ms] | c.cs [ms] |
| ----------: | ----------: | --------: |
|       50000 |          47 |        47 |
|       75000 |          62 |        78 |
|      100000 |         140 |       172 |
|      500000 |         483 |       390 |
|     1000000 |        1591 |       687 |
|     1500000 |        3385 |      1030 |
@maciejlach maciejlach added this to the qSharp 2.0.2 milestone Jun 6, 2014
@maciejlach
Collaborator Author

Sample retrieval results after applying the fix:

| sample size | qSharp [ms] | c.cs [ms] |
| ----------: | ----------: | --------: |
|       50000 |          33 |        32 |
|       75000 |          47 |        52 |
|      100000 |          58 |        64 |
|      500000 |         259 |       320 |
|     1000000 |         504 |       638 |
|     1500000 |         782 |       931 |
