Releases: gilbertchen/duplicacy
Duplicacy 2.2.3 Command Line Version
Duplicacy 2.2.2 Command Line Version
- Better handling of B2 authorization failures (045be39)
- Fixed a bug that caused 'check -files' to download the same chunk multiple times if shared by multiple small files (4da7f7b)
- Updated github.com/gilbertchen/go.dbus to fix a double-close bug when accessing the keyring on Linux (41668d4)
- Don't compare hashes of empty files in the diff command (9d4ac34)
- Retry on broken pipe errors in the Azure backend (6efcd37)
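Several of the fixes above (B2 authorization failures, broken pipes in the Azure backend) follow the same retry-on-transient-error pattern. A minimal sketch of that pattern, with illustrative names and error types rather than Duplicacy's actual Go code:

```python
import time

# Errors worth retrying; a real backend would match its SDK's error types.
TRANSIENT_ERRORS = (BrokenPipeError, ConnectionResetError, TimeoutError)

def with_retries(operation, max_retries=4, base_delay=1.0):
    """Retry a storage operation on transient network errors,
    doubling the delay after each failed attempt."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except TRANSIENT_ERRORS:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Permanent errors (bad credentials, missing bucket) fall outside `TRANSIENT_ERRORS` and surface immediately instead of being retried.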
Duplicacy 2.2.1 Command Line Version
This release is mostly a bug-fix version for 2.2.0, which had a bug that prevented restoring individual files because parent directories were not created correctly: 47c4c25
Other changes:
Duplicacy 2.2.0 Command Line Version
- Allow the filters file to include other filters files (#514)
- On Windows, add the \\?\ prefix to paths to support UNC paths in the storage URL (53548a8)
- Support ssh private key files encrypted by passphrases (8aa67c8)
- Add a Sync call before closing the file when uploading to local storage (bb652d0)
- Ignore false malware warnings from Google Drive that prevent some chunks from being downloaded (#447)
- Retry on EOF errors in the SFTP backend (#489)
- Replace special characters in environment variable name with underscores (#495)
- Fixed a webdav compatibility issue with rclone (2b56d57)
- Set the content length for upload in the webdav backend (d16273f)
- Fixed a bug where a wrong variable is used as the number of threads causing incorrect rate limits (43a5ffe)
- Fixed a bug where filenames starting with i or e were mistakenly interpreted as regex patterns (abcb4d7)
- Fixed a memory issue that caused check -tabular to use too much memory with hundreds of revisions (4b69c11)
- Add an additional lookup for a chunk not in the known chunk list (1da151f)
- Fixed a MoveFile bug in Wasabi when the storage is at the root of a bucket (a6fe3d7)
- Retry on 408 errors from Google Drive (#529)
- The cat command no longer loads the entire file into memory (458687d)
- Rework the Backblaze B2 backend (57a408a)
- All API calls, including UploadFile, now go through the call() function
- New retry mechanism limiting the maximum backoff each time to 1 minute
- Add an env var DUPLICACY_B2_RETRIES to specify the number of retries
- Handle special/unicode characters in repository ids
- Allow a directory in a bucket to be used as the storage destination
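Two of the B2 changes above lend themselves to a short sketch: the new retry mechanism capping each backoff at one minute with the count overridable via DUPLICACY_B2_RETRIES, and the substitution of special characters in environment variable names with underscores. The exact substitution rule and the default retry count below are assumptions for illustration, not Duplicacy's code:

```python
import os
import re

def sanitize_env_name(name):
    """Replace any character that isn't alphanumeric with '_' so a
    storage name can be embedded in an environment variable name.
    (Assumed rule; the real substitution may differ in detail.)"""
    return re.sub(r"[^A-Za-z0-9]", "_", name).upper()

def b2_backoff_schedule(retries=None, base=1.0, cap=60.0):
    """Exponential backoff delays for B2 calls, each capped at one
    minute, with the retry count overridable via DUPLICACY_B2_RETRIES
    (the default of 10 is an assumption)."""
    if retries is None:
        retries = int(os.environ.get("DUPLICACY_B2_RETRIES", "10"))
    return [min(base * 2 ** i, cap) for i in range(retries)]
```

With the cap in place, a long retry sequence degrades to one attempt per minute instead of backing off into hours.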
Duplicacy 2.1.2 Command Line Version
- Fixed a bug in calculating the padding size during chunk encryption: 21b3d9e
- Print the number of files if available in the snapshot file before downloading the file list: 244b797
- Don't list snapshots whose tags don't match the given one when the -tag option is provided: 0732920
- Show more statistics in the check command (for the new web-based GUI): 15f15aa
- In some backends the benchmark command may incorrectly list the chunks directory when looking for previous temporary files: d8e13d8
- Optimized restore to avoid reading newly created sparse files: bfb4b44
- Align snapshot times to the beginning of days when calculating the time differences so that prune operations running on the same day will prune the same set of old snapshots: 22a0b22
- Make B2 backend work with application keys (based on #475 by @bekriebel): 674d35e
- Restore UID and GID of symlinks: a7d2a94
- Fixed a divide by zero bug when the repository has only zero-byte files: 39d71a3
- Do not update the Windows keyring file if the password remains unchanged: 9d10cc7
- Continue to check other snapshots when one snapshot has missing chunks: e8b8922
- Record deleted snapshots in the fossil collection and if any deleted snapshot still exists then nuke the fossil collection: 93cc632
- Add Git commit numbers to version info: 48cc5ea
- Removed a redundant call to manager.chunkOperator.Resurrect (which can cause a crash): f304b64
- Remove extra newline in the PRUNE_NEWSNAPSHOT log message: 8ae7d2a
- Fix crashes on 32 bit machines caused by misaligned 64 bit integers: fce4234
- Fix "Failed to fossilize chunk" errors in wasabi backend: #459 (by @jtackaberry)
- Add a -storage option to the benchmark command: 89769f3
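The snapshot-time alignment change above (22a0b22) is easy to get wrong: ages must be computed from day boundaries, not raw timestamps, or two prune runs hours apart on the same day disagree about which snapshots fall outside a retention window. A sketch of the idea, using UTC for determinism (an assumption; the real code may use local time):

```python
from datetime import datetime, timezone

def days_between(earlier_ts, later_ts):
    """Number of calendar days between two snapshot times, aligning
    both to the start of their UTC day first so that prune runs on
    the same day compute the same snapshot ages."""
    def day_start(ts):
        d = datetime.fromtimestamp(ts, tz=timezone.utc)
        return d.replace(hour=0, minute=0, second=0, microsecond=0)
    return (day_start(later_ts) - day_start(earlier_ts)).days
```

Two timestamps two hours apart can still be one "day" apart if they straddle midnight, and two timestamps 22 hours apart can be zero days apart; that is exactly what makes same-day prune runs agree.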
Duplicacy 2.1.1 Command Line Version
- Fixed a bug causing certain new snapshots to be not counted when deciding which fossils can be deleted (72dfaa8)
- Added a benchmark command to test disk and transfer performance (#449)
- Support multi-threaded pruning (#441)
- Fixed restoration of basic UNIX file permissions (#417)
- Added macOS APFS snapshot support (#415)
- Fixed a crashing bug when showing the history of excluded files (0e585e4)
- Add unreferenced fossils to the fossil collection instead of deleting them immediately (e03cd2a)
- Added an -enum-only option to the backup command to enumerate the repository only (aadd2aa)
- Added a -repository option to the init and add command to specify an alternate repository path (72239a3)
- Implemented the WebDAV backend (#394)
- Added a -nobackup-file option to the set command to skip directories containing the specified file (#392)
- Add an environment variable DUPLICACY_DECRYPT_WITH_HMACSHA256 to force compatibility with Vertical Backup (b1c1b47)
- Skipped chunks should not be counted when calculating downloading percentage during restore (23a2d91)
- Added a global option -comment to allow Duplicacy processes to be identified by arguments (#391)
- Follow symlinks that point to UNC paths on Windows (b99f4bf)
- Added a -vss-timeout option to set VSS creation timeout (be2856e)
- Reduced memory consumption for prune operation (#329)
- Added a new Wasabi storage backend largely based on S3 but optimized to reduce storage cost for deleted objects (#322)
- Print git commit number in version string (48cc5ea)
- Record deleted snapshots in the fossil collection and if any deleted snapshot still exists, nuke the fossil collection (93cc632)
- Continue to check other snapshots when one snapshot has missing chunks (e8b8922)
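The -nobackup-file option added to the set command above prunes entire subtrees from enumeration when a marker file is present. A sketch of that walk, where the marker name `.nobackup` is only an example (the option accepts any filename):

```python
import os

def enumerate_files(root, nobackup_file=".nobackup"):
    """List files under root, skipping any directory that contains
    the marker file configured via `set -nobackup-file`."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        if nobackup_file in filenames:
            dirnames[:] = []   # don't descend into subdirectories
            continue           # and skip this directory's files too
        found.extend(os.path.join(dirpath, name) for name in filenames)
    return sorted(found)
```

Clearing `dirnames` in place is what stops `os.walk` from descending, so one marker file excludes the whole subtree, not just its own directory.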
Duplicacy 2.1.0 Command Line Version
- Retry on temporary network errors in the Azure backend
- Added an OpenStack Swift backend based on github.com/ncw/swift
- Fixed a bug when both a tag and a retention policy are specified for the prune command
- Fixed bugs in restoring extended attributes
- Unload the extended attributes from last snapshot in order to save memory
- Limited derivation keys to 64 bytes since snapshot file paths used as keys may be longer
- Fixed a bug that caused -hash to have no effect
- Correctly handle spaces in file paths for the B2 backend
- Improved the Hubic backend to retry on various errors
- Don't download a fossil directly; turn it back to a chunk and download the chunk instead
- Add a -storage-name option to the init command to specify the storage name
- Add the global -profile option to enable profiling via http
- Allow the -bit-identical option to the add command to copy the config file as it is
- Disable caching when restoring files in SnapshotManager
- Removed aes128-cbc from the supported ciphers by HiDrive
- Refresh expired tokens unconditionally on authorization errors for Hubic and OneDrive
- Fixed a bug that prevents the file specifying the chunk nesting levels from being loaded and parsed
- Fix the GCD directory creation bug; only save directories in the id cache
- Remove existing config and save a local copy when changing the storage password
- Create the storage folder on gcd storage if it doesn't exist
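The 64-byte limit on derivation keys above exists because HMAC-SHA256 pre-hashes any key longer than its 64-byte block size, so a snapshot file path used directly as a key would produce a different value than its truncated form. A hedged sketch of the truncation (illustrative, not Duplicacy's actual derivation code):

```python
import hashlib
import hmac

def derive(key: bytes, data: bytes) -> bytes:
    """HMAC-SHA256 derivation with the key truncated to 64 bytes
    (the SHA-256 block size). HMAC pre-hashes longer keys, which
    would silently change the derived value, so long keys such as
    snapshot file paths are cut down first."""
    return hmac.new(key[:64], data, hashlib.sha256).digest()
```

After truncation, a long path and its first 64 bytes derive identically, which keeps the scheme deterministic regardless of path length.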
Duplicacy 2.0.10 Command Line Version
- Optimize the copy command to skip chunks existing on the destination storage
- Add the -dry-run option to the backup command
- Include storage name when looking up passwords for non-default storages
- Fix a prune bug when the last snapshot is removed
- Add regex matching to include/exclude filters
- Add the -tabular option to the check command to show tabular usage and deduplication statistics
- Improve the backoff algorithm for the Google Drive storage
- Use b2_download_file_by_name rather than b2_list_file_names to check file existence
- Retry downloads with corrupted content up to three times
- Use random salt and make the number of iterations configurable for storage key derivation
- Add an -ignore-owner option to skip setting uid/gid on restored files
- Unify the chunk nesting level to 1 for all storages
- Fix a bug in splitting the existing file that caused all chunks to be redownloaded
- Add a -bit-identical option to the add command to make a bit-identical copy of the config file
- Increase the timeout for shadow copy creation on Windows
- Various changes to improve password management
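The regex matching added to include/exclude filters above uses the `i:` and `e:` prefixes of Duplicacy's filter syntax. A simplified sketch of that evaluation, where first-match-wins and the default for unmatched paths are simplifications of the real rules:

```python
import re

def build_filter(lines):
    """Compile filter lines into a predicate; lines starting with
    'i:' or 'e:' are regex include/exclude rules as in Duplicacy's
    filter syntax. Simplified: the first matching rule wins, and
    unmatched paths are excluded."""
    rules = []
    for line in lines:
        if line.startswith("i:"):
            rules.append((True, re.compile(line[2:])))
        elif line.startswith("e:"):
            rules.append((False, re.compile(line[2:])))
    def included(path):
        for include, pattern in rules:
            if pattern.match(path):
                return include
        return False
    return included
```

Because rules are evaluated in order, placing `e:.*\.tmp$` before `i:src/.*` excludes temporary files even inside an included directory.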
Duplicacy 2.0.9 Command Line Version
- Fixed OneDrive 503 errors by sending GET requests with a nil body
- Fixed symbolic link handling on Windows
- The copy command now skips chunks already on destination
- Fixed a bug in setting the upload/download rate limit for the copy command
- Fixed a bug in setting the number of threads for the copy command
- Update aws/aws-sdk-go to version 1.10.41
- Don't save passwords/credentials to keyring if they are retrieved from environment/preference
- Don't ask for the ssh password if an ssh key file is available
- Fixed a bug in retrieving passwords from gnome-keyring
- In GCD backend each thread should have its own backoff value
- Fixed a bug in storage passwords in preferences for non-default storages
Duplicacy 2.0.7 Command Line Version
- Updated Azure storage backend to support retrying on temporary errors
- The restore command now preserves empty directories
- Fixed a chunk not found error caused by Windows drives with data deduplication on
- Fixed a bug that caused truncated files not to be restored correctly
- Added a flat:// storage backend that can take a flat chunks directory on local or networked drives
- Added a samba:// storage backend that is basically a local drive backend but with caching enabled (for networked drives)
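The flat:// backend above differs from the default layout mainly in how a chunk hash maps to a path: the default nests chunks one directory level deep, while flat:// keeps a single chunks directory. A sketch of that mapping, where the two-hex-characters-per-level scheme is an assumption for illustration:

```python
def chunk_path(chunk_id, nesting_level=1):
    """Map a chunk hash to a storage path. The default layout nests
    chunks one directory level deep (assumed here to be two hex
    characters per level); a flat:// storage uses nesting level 0,
    i.e. one big directory."""
    parts = [chunk_id[2 * i : 2 * i + 2] for i in range(nesting_level)]
    return "/".join(["chunks"] + parts + [chunk_id[2 * nesting_level:]])
```

Nesting spreads millions of chunks across subdirectories for filesystems that slow down on huge directories, while the flat layout suits backends where listing one directory is cheap.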