Add support for compressed BAI files #542
Do you have a link for this particular discussion? Here is another discussion that takes a different approach: http://www.hammerlab.org/2015/01/23/faster-pileup-loading-with-bai-indices/
I think this approach from BioDalliance is an excellent alternative, especially as there is then no need to cache 20-30 MB of index for every BAM track. If JBrowse could be configured to fall back to the native .bai if the bai.json isn't found, it would be very flexible.
One alternative that was discussed here is actually breaking up the BAM file: http://gmod.827538.n3.nabble.com/Gmod-ajax-Speed-up-VCF-loads-td4053337.html
So if we added code to the BAI loading to detect whether the BAI is compressed, and decompress it, that would probably take care of this, right?
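The detection step suggested above is cheap, since gzip streams always begin with the two magic bytes `0x1f 0x8b` while an uncompressed BAI begins with the magic `BAI\1` from the SAM/BAM spec. A minimal sketch in Python (the `load_bai` helper name is hypothetical, not JBrowse code):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"   # first two bytes of any gzip stream
BAI_MAGIC = b"BAI\x01"     # magic of an uncompressed BAI index (SAM/BAM spec)

def load_bai(path):
    """Read a BAI index, transparently decompressing it if gzip-compressed.

    Hypothetical helper illustrating the detect-then-decompress idea.
    """
    with open(path, "rb") as fh:
        data = fh.read()
    if data[:2] == GZIP_MAGIC:
        data = gzip.decompress(data)
    if data[:4] != BAI_MAGIC:
        raise ValueError("not a BAI index: bad magic bytes")
    return data
```

The same magic-byte check would work for server-compressed `.tbi` files, except that TBI indexes are already BGZF (gzip) streams, so decompression alone does not distinguish them.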
@keiranmraine it kind of sounds like having .csi support pretty much obviates this, do you think we can close it?
@rbuels I agree
Woo :)
htsget support is also a possible alternative; see #1142
Hi,
There is some discussion on the samtools mailing list about setting up your nginx server to automatically compress BAI files and then having the receiving tool decompress them when storing locally. This can reduce the time it takes to add a new BAM track quite considerably for whole-genome sequencing, where these files can reach 20-30 MB.
I think this can also be applied to *.tbi index files (assuming they are not already compressed).
Regards,
Keiran
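On the server side, the mailing-list suggestion could look roughly like this in nginx — a sketch, assuming the index files are served under nginx's default application/octet-stream MIME type:

```nginx
# Hypothetical sketch: gzip .bai/.tbi responses on the fly.
# gzip_types must list the MIME type nginx serves these files under
# (application/octet-stream is the default for unrecognized extensions).
gzip            on;
gzip_types      application/octet-stream;
gzip_min_length 1024;   # skip tiny files where gzip overhead dominates
```

Clients would then need to send `Accept-Encoding: gzip` and decompress the body before writing the index to disk.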