Releases: mmuckley/torchkbnufft
Support PyTorch 1.13.0 + new build system
This release supports the latest version of PyTorch (1.13.0) and adds a new build and versioning system. A summary of the work in this release:
- Changed default normalization for Toeplitz to `"ortho"` (PR #43)
- From @ckolbPTB, fixed size of weights in `calc_toeplitz_kernel` (PR #47)
- From @ckolbPTB, allowed use of a single smap for batches for the Toeplitz NUFFT (PR #48; a sketch follows this list)
- Updates for internal type system (PR #77)
- Updates for package dependencies and build system (PR #78, #79)
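A minimal sketch, not taken from the release, of how the Toeplitz pieces above might fit together; the shapes and the `smaps`/`norm` keyword arguments reflect my reading of the current `torchkbnufft` API and should be checked against the installed version.

```python
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
batch_size, num_coils, klength = 4, 8, 1024

# Single-channel complex image batch plus one shared set of coil sensitivity
# maps; PR #48 lets a single smap (batch dimension of 1) serve the whole batch.
image = torch.randn(batch_size, 1, *im_size, dtype=torch.complex64)
smaps = torch.randn(1, num_coils, *im_size, dtype=torch.complex64)

# k-space trajectory in radians/voxel, shape (2, klength) for 2D imaging.
ktraj = (torch.rand(2, klength) - 0.5) * 2 * torch.pi

# The Toeplitz kernel now defaults to "ortho" normalization (PR #43); it is
# passed explicitly here to make the intent clear.
kernel = tkbn.calc_toeplitz_kernel(ktraj, im_size, norm="ortho")

# Apply the Toeplitz-embedded normal operator (approximately A^H A x).
toep_ob = tkbn.ToepNufft()
out = toep_ob(image, kernel, smaps=smaps)
```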
Fixes for Toeplitz NUFFT scaling
Documentation Formatting Update
Fixes formatting for a reference on the main documentation page.
Documentation Update
This pure-documentation release fixes an issue with how documentation headers were rendered on the ReadTheDocs website. See PR #29.
Update for PyTorch 1.8
This updates `torchkbnufft` for PyTorch version 1.8. It uses a new version of `index_add` that operates natively on complex tensors. The update also fixes a performance regression that arose due to thread management, as identified in Issue #25.
Most changes came from PR #27, which includes the following modifications:
- Update `requirements.txt` and `dev-requirements.txt` to the latest packages.
- Remove `calc_split_sizes` - we can now use `tensor_split`.
- Removed some calls to tensor attributes - these can be expensive.
- Removal of kwarg usage for some `torch.jit.script` functions - these can behave strangely with scripted functions.
- Removal of `index_put` for accumulation. We now only use `index_add` (a sketch of these PyTorch 1.8 features follows this list).
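A small, self-contained illustration (not from the PR) of the two PyTorch 1.8 features the list above relies on:

```python
import torch

# index_add_ operates natively on complex tensors in PyTorch >= 1.8, so
# gridding can accumulate complex k-space samples without an index_put-based
# workaround or separate real/imaginary channels.
grid = torch.zeros(8, dtype=torch.complex64)
indices = torch.tensor([0, 2, 2, 5])
values = torch.tensor([1 + 1j, 2 + 0j, 0 + 3j, 4 - 1j], dtype=torch.complex64)
grid.index_add_(0, indices, values)  # repeated index 2 accumulates to 2+3j

# tensor_split (also introduced in 1.8) replaces a hand-rolled
# calc_split_sizes helper when dividing work into nearly equal chunks.
chunks = torch.tensor_split(torch.arange(10), 3)  # chunk sizes 4, 3, 3
```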
Batched NUFFT
This adds support for a new batched NUFFT, which is substantially faster than using a Python for loop over the batch dimension when applying a NUFFT with many small k-space trajectories. It also updates the documentation and includes a new page for performance tips. See PR #24 and Issue #24 for details and testing.
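A rough sketch of the difference, assuming the current API in which `KbNufft` accepts a trajectory with a leading batch dimension; the shapes here are illustrative only.

```python
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
batch_size, num_coils, klength = 16, 1, 512

image = torch.randn(batch_size, num_coils, *im_size, dtype=torch.complex64)
nufft_ob = tkbn.KbNufft(im_size=im_size)

# One small k-space trajectory per batch element, stacked along a leading dim.
ktraj = (torch.rand(batch_size, 2, klength) - 0.5) * 2 * torch.pi

# Batched NUFFT: the whole stack is handled in a single call.
kdata_batched = nufft_ob(image, ktraj)

# Equivalent Python loop over the batch dimension, which is much slower when
# there are many small trajectories.
kdata_loop = torch.cat(
    [nufft_ob(image[b : b + 1], ktraj[b]) for b in range(batch_size)], dim=0
)
```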
Bug fixes and documentation updates
Major Revision - Complex Number Support, Speed Improvements, Updated Documentation
This release includes a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. The release includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.
A summary of changes follows:
- Support for PyTorch complex tensors. The user is now expected to pass in tensors of shape `[batch_size, num_chans, height, width]` for a 2D imaging problem. It's still possible to pass in real tensors - just use `[batch_size, num_chans, height, width, 2]`. The backend uses complex values for efficiency. (A sketch of the new calling conventions follows this list.)
- A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is forking asynchronous tasks via `torch.jit.fork` - see interp.py for details.
- The backend has been substantially rewritten to a higher code quality, adding type annotations and compiling performance-critical functions with `torch.jit.script` to get around the Python GIL.
- A much improved density compensation function, `calc_density_compensation_function`, thanks to a contribution from @chaithyagr on the suggestion of @zaccharieramzi.
- Simplified utility functions for `calc_toeplitz_kernel` and `calc_tensor_spmatrix`.
- The documentation has been completely rewritten, upgrading to the Read the Docs template, improving the table of contents, adding mathematical descriptions of core operators, and adding a dedicated basic usage section.
- Dedicated SENSE-NUFFT operators have been removed. Wrapping these with `torch.autograd.Function` didn't give us any benefits, so there's no need to have them. Users now pass their sensitivity maps into the `forward` function of `KbNufft` and `KbNufftAdjoint` directly.
- Rewritten notebooks and README files.
- New `CONTRIBUTING.md`.
- Removed `mrisensesim.py` as it is not a core part of the package.
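To make the new conventions concrete, here is a minimal sketch of the 1.0-style API as described above; the shapes and keyword arguments (notably `smaps=`) reflect my reading of the documentation and should be verified against the release itself.

```python
import torch
import torchkbnufft as tkbn

im_size = (64, 64)
batch_size, num_coils, klength = 2, 8, 1024

# Complex image of shape [batch_size, num_chans, height, width]; when using
# sensitivity maps, the channel dimension is 1 and the coils live in smaps.
image = torch.randn(batch_size, 1, *im_size, dtype=torch.complex64)
smaps = torch.randn(batch_size, num_coils, *im_size, dtype=torch.complex64)
ktraj = (torch.rand(2, klength) - 0.5) * 2 * torch.pi

nufft_ob = tkbn.KbNufft(im_size=im_size)
adjnufft_ob = tkbn.KbNufftAdjoint(im_size=im_size)

# Sensitivity maps are passed to forward() instead of a dedicated SENSE-NUFFT
# operator.
kdata = nufft_ob(image, ktraj, smaps=smaps)

# Density-compensated adjoint using the function mentioned above.
dcomp = tkbn.calc_density_compensation_function(ktraj, im_size)
recon = adjnufft_ob(kdata * dcomp, ktraj, smaps=smaps)
```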
Small compatibility fixes
This fixes a few compatibility issues that could arise with newer versions of PyTorch, as mentioned in Issue #7. Specifically:
- A NumPy array was converted to a tensor without a copy - this has been modified to explicitly copy.
- A new `fft_compatibility.py` file to handle changes to `torch.fft` in newer versions of PyTorch (see here). Use of the `torch.fft` function was slated for deprecation in favor of `torch.fft.fft`. We now check the PyTorch version to figure out which one to call, so the code still runs on older versions of PyTorch. (A sketch of this kind of dispatch follows below.)
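A dependency-free sketch of the sort of version dispatch described above; `fft2_compat` and `_NEW_FFT_MODULE` are hypothetical names, not the actual contents of `fft_compatibility.py`.

```python
import torch

# On newer PyTorch, torch.fft is a module exposing fft/fftn; on older PyTorch
# it is a function taking signal_ndim. Decide once at import time.
_NEW_FFT_MODULE = hasattr(torch.fft, "fft")

def fft2_compat(x: torch.Tensor) -> torch.Tensor:
    if _NEW_FFT_MODULE:
        # New API: operates directly on complex tensors.
        return torch.fft.fftn(x, dim=(-2, -1))
    # Old API: expects a trailing dimension of size 2 for real/imaginary parts.
    return torch.fft(x, signal_ndim=2)
```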
Documentation updates, 3D radial density compensation
This includes a few minor improvements that haven't previously been released to PyPI. This release also tests the new GitHub Action that should handle PyPI releases automatically.
- Documentation and package install for profiling (PR #3)
- 3D radial density compensation and stack of spirals density compensation (f9ac098c8f122026e8e8866828cb5957118a5679)