There are often cases where multiple reporters, with different skill sets, will be working on analysis in parallel. Often the common language is a SQL database. It would be cool to be able to replace the disk-based Pickle cache with a cache that stores the data in SQL tables. This is outside the scope of proof, but parameterizing the cache would let people write their own cache implementations.
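A minimal sketch of what "parameterizing the cache" could look like: a small interface that the existing disk behavior already satisfies, so alternative backends just implement the same two methods. All names here (`Cache`, `PickleFileCache`, `get`/`set`) are illustrative, not proof's actual API.

```python
import os
import pickle
from abc import ABC, abstractmethod


class Cache(ABC):
    """Hypothetical contract a pluggable cache backend might satisfy."""

    @abstractmethod
    def get(self, key):
        """Return the cached value for key, or None if absent."""

    @abstractmethod
    def set(self, key, value):
        """Store value under key."""


class PickleFileCache(Cache):
    """Mirrors the current disk-based behavior: one pickle file per key."""

    def __init__(self, directory='.proof'):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def _path(self, key):
        return os.path.join(self.directory, '%s.pickle' % key)

    def get(self, key):
        try:
            with open(self._path(key), 'rb') as f:
                return pickle.load(f)
        except FileNotFoundError:
            return None

    def set(self, key, value):
        with open(self._path(key), 'wb') as f:
            pickle.dump(value, f)
```

With an interface like that, swapping in a SQL-backed cache would just mean passing a different `Cache` instance to the analysis.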
Actually, that use case is a bad idea. It would be better to have an analysis step that does the database refresh. Still, parameterizing the cache doesn't seem like a bad idea.
This sounds like a nifty idea, though I think it might be a little hard to bolt on a backend as radically different as SQL. The data stored in proof is not, strictly speaking, tables. It's just a dict of whatever. I suppose you could simply say by convention that each key is a table. In that case your custom cache layer could use agate-sql for the heavy lifting.
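The "each key is a table" convention might look something like this. Plain sqlite3 stands in here for the agate-sql heavy lifting mentioned above; the function name, fixed column layout, and table-per-key scheme are all invented for illustration.

```python
import sqlite3


def store_analysis(conn, data):
    """Write each key of `data` out as its own SQL table.

    Assumes (for the sketch) that every value is a list of (n, value)
    pairs; a real backend would have to derive a schema per key.
    """
    for key, rows in data.items():
        conn.execute('DROP TABLE IF EXISTS "%s"' % key)
        conn.execute('CREATE TABLE "%s" (n INTEGER, value TEXT)' % key)
        conn.executemany(
            'INSERT INTO "%s" (n, value) VALUES (?, ?)' % key, rows
        )
    conn.commit()
```

The awkward part is visible even in the sketch: because the cached data is "a dict of whatever," a real implementation would need per-key schema inference, which is exactly where agate-sql could help.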
I think for common usage it probably makes more sense for backends to be generic binary stores. That doesn't rule out alternative implementations, though super-useful ones don't spring immediately to mind.
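The generic-binary-store idea sidesteps the schema problem entirely: keep pickled blobs in a single key/blob table and let SQL be just the transport. A hedged sketch, with invented names and no connection to proof's internals:

```python
import pickle
import sqlite3


class SQLiteBlobCache:
    """Stores pickled values as blobs in one SQLite table, keyed by name."""

    def __init__(self, path=':memory:'):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, blob BLOB)'
        )

    def get(self, key):
        row = self.conn.execute(
            'SELECT blob FROM cache WHERE key = ?', (key,)
        ).fetchone()
        return pickle.loads(row[0]) if row else None

    def set(self, key, value):
        self.conn.execute(
            'INSERT OR REPLACE INTO cache (key, blob) VALUES (?, ?)',
            (key, pickle.dumps(value)),
        )
        self.conn.commit()
```

Note the trade-off: this keeps the backend dumb and generic, but the blobs are opaque to anyone querying the database directly, so it doesn't serve the original "common language is SQL" use case.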