
Releases: mmuckley/torchkbnufft

Support PyTorch 1.13.0 + new build system

23 Nov 18:15
606212b

This release supports the latest version of PyTorch (1.13.0) and adds a new build and versioning system. A summary of the work in this release:

  • Changed the default normalization for the Toeplitz NUFFT to "ortho" (PR #43); a short usage sketch follows this list
  • From @ckolbPTB, fixed the size of the weights in calc_toeplitz_kernel (PR #47)
  • From @ckolbPTB, allowed a single sensitivity map to be used across a batch for the Toeplitz NUFFT (PR #48)
  • Updates for internal type system (PR #77)
  • Updates for package dependencies and build system (PR #78, #79)
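
For reference, a minimal Toeplitz usage sketch under these changes (tensor sizes and the trajectory are illustrative, and the calling pattern assumes the package's standard ToepNufft/calc_toeplitz_kernel interface):

```python
import math
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
# illustrative 2D k-space trajectory in radians/voxel, shape [ndims, klength]
ktraj = torch.rand(2, 4096) * 2 * math.pi - math.pi

# Build the Toeplitz kernel once, then apply the forward/adjoint NUFFT pair
# as an FFT-based filter; per this release, the Toeplitz path now defaults
# to "ortho" normalization.
toep_ob = tkbn.ToepNufft()
kernel = tkbn.calc_toeplitz_kernel(ktraj, im_size)

image = torch.randn(1, 1, *im_size, dtype=torch.complex64)
image_filtered = toep_ob(image, kernel)
```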

Fixes for Toeplitz NUFFT scaling

09 Nov 19:40
  • Fixes a Toeplitz NUFFT scaling issue (PR #39)
  • Updates requirements for the current version of PyTorch (PR #40)

Documentation Formatting Update

13 Jul 16:55

Fixes formatting for a reference on the main documentation page.

Documentation Update

25 May 01:34
336cfab

This pure-documentation release fixes an issue with how documentation headers were rendered on the ReadTheDocs website. See PR #29.

Update for PyTorch 1.8

09 Apr 23:22
4d98054

This release updates torchkbnufft for PyTorch 1.8. It uses the new version of index_add, which operates natively on complex tensors. The update also fixes a performance regression, identified in Issue #25, that arose from thread management.

Most changes came from PR #27, which includes the following modifications:

  • Updated requirements.txt and dev-requirements.txt to the latest packages.
  • Removed calc_split_sizes - we can now use tensor_split (see the sketch after this list).
  • Removed some calls to tensor attributes - these can be expensive.
  • Removed kwarg usage for some torch.jit.script functions - kwargs can behave strangely with scripted functions.
  • Removed index_put for accumulation; we now only use index_add.
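
For context, a generic sketch of the two PyTorch primitives mentioned above (this is illustrative, not code from the package):

```python
import torch

# torch.tensor_split (new in PyTorch 1.8) divides a tensor into
# near-equal chunks, replacing manual split-size bookkeeping.
omega = torch.rand(2, 1000)
chunks = torch.tensor_split(omega, 4, dim=-1)

# index_add_ in PyTorch 1.8 accumulates natively into complex tensors,
# so real and imaginary parts no longer need separate handling.
accum = torch.zeros(8, dtype=torch.complex64)
inds = torch.tensor([0, 3, 3, 5])
vals = torch.randn(4, dtype=torch.complex64)
accum.index_add_(0, inds, vals)
```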

Batched NUFFT

16 Feb 21:43
abca3f2

This adds support for a new batched NUFFT, which is substantially faster than using a Python for loop over the batch dimension when applying a NUFFT with many small k-space trajectories. It also updates the documentation and includes a new page for performance tips. See PR #24 and Issue #24 for details and testing.
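
As a rough sketch of the batched calling pattern (shapes are illustrative; the key point is that the trajectory carries a leading batch dimension instead of being looped over in Python):

```python
import math
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
nufft_ob = tkbn.KbNufft(im_size=im_size)

# one small k-space trajectory per batch element, shape [batch, ndims, klength]
batch_size = 16
ktraj = torch.rand(batch_size, 2, 512) * 2 * math.pi - math.pi
image = torch.randn(batch_size, 1, *im_size, dtype=torch.complex64)

# a single call processes the whole batch instead of a Python for loop
kdata = nufft_ob(image, ktraj)
```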

Bug fixes and documentation updates

04 Feb 23:50
b5f6581
  • Fixes a strange bug in the autograd for forward NUFFTs caused by an unsqueeze operation (PR #23).
  • Removes references to the batch dimension in the notebooks (PR #20).
  • Removes unnecessary adjoint NUFFT objects from the docs (PR #19).
  • Adds a test for CPU/GPU forward matching (PR #17).

Major Revision - Complex Number Support, Speed Improvements, Updated Documentation

27 Jan 20:48
be0cd5b

This release includes a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. It also includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.

A summary of changes follows:

  • Support for PyTorch complex tensors. The user is now expected to pass in tensors of shape [batch_size, num_chans, height, width] for a 2D imaging problem. It's still possible to pass in real tensors - just use [batch_size, num_chans, height, width, 2]. The backend uses complex values for efficiency. A usage sketch follows this list.
  • A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is asynchronous task forking via torch.jit.fork - see interp.py for details and the generic fork/wait sketch after this list.
  • The backend has been substantially rewritten to a higher code quality, adding type annotations and compiling performance-critical functions with torch.jit.script to avoid the Python GIL.
  • A much-improved density compensation function, calc_density_compensation_function, thanks to a contribution from @chaithyagr based on a suggestion from @zaccharieramzi.
  • Simplified utility functions for calc_toeplitz_kernel and calc_tensor_spmatrix.
  • The documentation has been completely rewritten: it upgrades to the Read the Docs template, improves the table of contents, adds mathematical descriptions of the core operators, and includes a dedicated basic usage section.
  • Dedicated SENSE-NUFFT operators have been removed. Wrapping these with torch.autograd.Function didn't give us any benefits, so there's no need to have them. Users will now pass their sensitivity maps into the forward function of KbNufft and KbNufftAdjoint directly.
  • Rewritten notebooks and README files.
  • New CONTRIBUTING.md.
  • Removed mrisensesim.py as it is not a core part of the package.
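
As a rough illustration of the new interface described above (a sketch with made-up tensor sizes and trajectory, following the package's documented calling pattern):

```python
import math
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
batch_size, num_coils = 2, 8

# complex image with shape [batch_size, num_chans, height, width];
# a real tensor shaped [..., height, width, 2] is still accepted
image = torch.randn(batch_size, 1, *im_size, dtype=torch.complex64)
smaps = torch.randn(batch_size, num_coils, *im_size, dtype=torch.complex64)

# k-space trajectory in radians/voxel, shape [ndims, klength]
ktraj = torch.rand(2, 4096) * 2 * math.pi - math.pi

nufft_ob = tkbn.KbNufft(im_size=im_size)
adjnufft_ob = tkbn.KbNufftAdjoint(im_size=im_size)

# sensitivity maps go straight into forward(); there is no separate
# SENSE-NUFFT operator anymore
kdata = nufft_ob(image, ktraj, smaps=smaps)

# density compensation via the new utility function, applied before the adjoint
dcomp = tkbn.calc_density_compensation_function(ktraj=ktraj, im_size=im_size)
recon = adjnufft_ob(kdata * dcomp, ktraj, smaps=smaps)
```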
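
The fork-based parallelism mentioned above uses PyTorch's standard torch.jit.fork/torch.jit.wait pattern; a generic sketch of that pattern (not the actual interp.py code) is:

```python
import torch
from torch import Tensor
from typing import List


def process_chunk(x: Tensor) -> Tensor:
    # stand-in for the per-chunk interpolation work
    return x * 2.0


@torch.jit.script
def parallel_apply(chunks: List[Tensor]) -> Tensor:
    # launch each chunk asynchronously; in scripted code the forked tasks
    # can run in parallel without the Python GIL
    futures = [torch.jit.fork(process_chunk, chunk) for chunk in chunks]
    return torch.cat([torch.jit.wait(fut) for fut in futures])


x = torch.randn(1024)
y = parallel_apply(list(torch.tensor_split(x, 4)))
```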

Small compatibility fixes

14 Jan 18:29
568a859

This fixes a few compatibility issues, mentioned in Issue #7, that could arise with newer versions of PyTorch. Specifically:

  • A NumPy array was converted to a tensor without a copy - the conversion now copies explicitly.
  • A new fft_compatibility.py file handles changes to torch.fft in newer PyTorch releases. Use of the torch.fft function is being deprecated in favor of torch.fft.fft, so we now check the PyTorch version to figure out which call to use, keeping the code runnable on older versions of PyTorch. A minimal sketch of the idea follows this list.
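
A minimal sketch of the version-dispatch idea (illustrative only, not the package's actual fft_compatibility.py; the cutoff version and the helper name are assumptions):

```python
import torch

# parse major/minor from torch.__version__ to decide which FFT API to call
_major, _minor = (int(v) for v in torch.__version__.split(".")[:2])
NEW_FFT = (_major, _minor) >= (1, 7)  # torch.fft became a module in 1.7


def fft_last_dim(x: torch.Tensor) -> torch.Tensor:
    """FFT over the signal dimension of a real tensor shaped [..., n, 2]."""
    if NEW_FFT:
        # module-style API on a complex view of the input
        return torch.view_as_real(torch.fft.fft(torch.view_as_complex(x), dim=-1))
    # legacy function API, removed in later PyTorch releases
    return torch.fft(x, signal_ndim=1)
```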

Documentation updates, 3D radial density compensation

14 Jan 00:37

This release includes a few minor improvements that had not previously been published to PyPI. It also tests the new GitHub Action that should handle publishing automatically going forward.