
Releases: ropensci/arkdb

v0.0.14

19 Oct 19:20

arkdb 0.0.14

  • Patch for the test suite on Solaris. The arrow package installs on
    Solaris, but its functions do not actually run correctly, since the C++
    libraries have not been set up properly on that platform.

arkdb 0.0.13

  • Added the ability to name output files directly.
  • Added a warning when users specify compression for parquet files.
  • Added callback functionality to the ark() function, allowing users to
    perform transformations or recodes on each chunked data.frame before it
    is saved to disk.
  • Added the ability to filter databases by allowing users to specify a
    "WHERE" clause.
  • Added parquet as a streamable_table format, allowing users to ark to
    parquet instead of a text format.
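Taken together, the additions in this release might be used as sketched below. This is an illustrative example rather than package documentation: the argument names `filter_statement` and `callback` are assumed from the notes above, so check `?ark` in your installed version before relying on them.

```r
library(arkdb)
library(DBI)
library(RSQLite)

# Toy in-memory database, purely for illustration
db <- dbConnect(SQLite(), ":memory:")
dbWriteTable(db, "flights", data.frame(year = c(2019, 2020), n = c(10, 20)))

dir.create("archive", showWarnings = FALSE)

# Archive only matching rows, recoding each chunk before it is written out
ark(db, "archive",
    lines = 50000,
    filter_statement = "WHERE year >= 2020",   # assumed argument name
    callback = function(chunk) {               # assumed argument name
      chunk$n <- as.integer(chunk$n)           # example recode
      chunk
    })
```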

arkdb: Archive and Unarchive Databases Using Flat Files

16 Mar 15:43

arkdb 0.0.11

  • Made the cached connection opt-out instead of applying only to read_only
    connections. This allows the cache to work on read-write connections by
    default, and also avoids a connection being garbage-collected when
    functions call local_db internally.

arkdb 0.0.10

  • Better handling of read_only vs read_write connections. Only read_only
    connections are cached.
  • Includes optional support for MonetDBLite.

arkdb 0.0.8

  • Another bugfix for the dplyr 2.0.0 release.

arkdb 0.0.7

  • Bugfix for the upcoming dplyr 2.0.0 release.

arkdb 0.0.6

  • Support vroom as an opt-in streamable table.
  • Export process_chunks().
  • Add a mechanism to attempt a bulk importer, when available (#27).
  • Bugfix for the case when text contains # characters in the base
    parser (#28).
  • Lighten core dependencies. The fully recursive dependencies now include
    only 4 non-base packages, as progress is now optional.
  • Use "magic numbers" instead of extensions to guess the compression type.
    (NOTE: this requires that the file is local and not a URL.)
  • Now that duckdb is on CRAN and MonetDBLite isn't, drop built-in
    support for MonetDBLite in favor of duckdb alone.
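Since process_chunks() is newly exported in this release, a rough sketch of chunkwise processing follows. The exact signature (in particular the name and calling convention of the processing function) is assumed here, not taken from the package reference, so consult `?process_chunks` before use.

```r
library(arkdb)

# Stream a large compressed tsv in fixed-size chunks, applying a function
# to each chunk as it is read. The file path is hypothetical, and the
# process_fn interface is assumed to receive each chunk as a data.frame.
process_chunks("archive/flights.tsv.bz2",
               process_fn = function(chunk) {
                 message("rows in this chunk: ", nrow(chunk))
               },
               lines = 50000)
```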

arkdb: Archive and Unarchive Databases Using Flat Files

31 Oct 23:02


The goal of arkdb is to provide a convenient way to move data from large compressed text files (tsv, csv, etc) into any DBI-compliant database connection (e.g. MYSQL, Postgres, SQLite; see DBI), and move tables out of such databases into text files. The key feature of arkdb is that files are moved between databases and text files in chunks of a fixed size, allowing the package functions to work with tables that would be much too large to read into memory all at once.
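A minimal round trip might look like the following sketch. SQLite is used purely for illustration (any DBI back end works), and the directory and file names are hypothetical.

```r
library(arkdb)
library(DBI)
library(RSQLite)

# Connect to any DBI-compliant database
db <- dbConnect(SQLite(), "local.sqlite")

# Archive every table to compressed text files, streaming in fixed-size chunks
dir.create("archive", showWarnings = FALSE)
ark(db, "archive", lines = 50000)

# Later, restore the text archives into a fresh database
db2 <- dbConnect(SQLite(), "restored.sqlite")
unark(list.files("archive", full.names = TRUE), db2, lines = 50000)
```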

v0.0.5 Changes

  • ark()'s default keep-open method would cut off header names for Postgres connections (due to variation in the behavior of SQL queries with LIMIT 0). The issue is now resolved by accessing the header in a more robust, general way.

arkdb: Archive and Unarchive Databases Using Flat Files

27 Sep 16:46



v0.0.4 Changes

  • unark() will strip out non-compliant characters in table names by default.
  • unark() gains the optional argument tablenames, allowing the user to specify the corresponding table names manually, rather than requiring that they correspond with the incoming file names (#18).
  • unark() gains the argument encoding, allowing users to directly set the encoding of incoming files. Previously this could only be set via options(encoding), which still works as well. See the FAO.R example in examples for an illustration.
  • unark() will now attempt to guess which streaming parser to use (e.g. csv or tsv) based on the file extension pattern, rather than defaulting to a tsv parser. (ark() still defaults to exporting in the more portable tsv format.)
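The tablenames and encoding arguments described above can be combined as in the sketch below. The archive file paths and table names are hypothetical; only the argument names come from these release notes.

```r
library(arkdb)
library(DBI)
library(RSQLite)

db <- dbConnect(SQLite(), ":memory:")

# Restore two archives under explicit table names, overriding the default
# (file-name-derived) names, and read the files as latin1 rather than
# relying on options(encoding)
unark(c("archive/a.tsv.bz2", "archive/b.tsv.bz2"),
      db,
      tablenames = c("alpha", "beta"),
      encoding = "latin1",
      lines = 50000)
```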

arkdb: Archive and Unarchive Databases Using Flat Files

26 Sep 23:01



v0.0.3 Changes

  • Remove dependency on utils::askYesNo for backward compatibility (#17).

arkdb: Archive and Unarchive Databases Using Flat Files

07 Sep 15:53



v0.0.2 Changes

  • Initial CRAN release
  • Ensure the suggested dependency MonetDBLite is available before running the unit tests that use it.

arkdb: Archive and Unarchive Databases Using Flat Files

20 Aug 16:43



arkdb: Archive and Unarchive Databases Using Flat Files

11 Aug 19:57

