Fix ffnet build for Python 3.13+ and mamba 2.x compatibility #41
Conversation
- Remove `--no-use-pep517` flag (dropped in pip 23.1)
- Add `--no-build-isolation` for Python 3.13+ so numpy is visible to pip's build backend during the ffnet Fortran compilation
- Fix misleading echo message in ffnet install block
- Remove `--show-channel-urls` from mamba list calls (unsupported in mamba 2.x)
- Update example Miniforge version to 26.1.0-0
- Update CHANGELOG
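The Python-version gate behind the `--no-build-isolation` bullet can be sketched as follows. This is an illustrative Python helper, not the installer's actual bash code; the function name `ffnet_pip_flags` is hypothetical:

```python
import sys

def ffnet_pip_flags(version_info=sys.version_info):
    """Illustrative helper: extra pip flags for the ffnet install.

    On Python 3.13+, pip's isolated PEP 517 build env would not see the
    conda-installed numpy that f2py needs, so build isolation is disabled.
    """
    flags = []
    if version_info[:2] >= (3, 13):
        flags.append("--no-build-isolation")
    return flags

print(ffnet_pip_flags((3, 14, 0)))  # → ['--no-build-isolation']
print(ffnet_pip_flags((3, 12, 4)))  # → []
```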
Walkthrough

Installer and docs updated to Miniforge/Python 3.14, ffnet build flags and mamba listing flags adjusted, basemap moved to conda and python-docx added, TensorFlow install made conditional for Python 3.14+, and new CUDA/MPS-aware PyTorch test scripts added.
- Update example versions to Python 3.14 and Miniforge 26.1.0-0
- Fix install path format (prefix/version/date, not prefix/version_pyX.Y/date)
- Add missing `--ffnet-hack` option to usage in README.md
- Fix NOTE 1 in README.md (path is no longer based on Python version)
- Update README.GMAO, README.NAS, README.NCCS to use install_miniforge.bash (replace stale references to install_miniconda.bash)
🧹 Nitpick comments (3)
tests/torch_example.py (1)
**8-18**: Minor redundancy in GPU info output. Lines 10 and 15 both print the device name via `torch.cuda.get_device_name(0)`. Consider removing the duplicate.

💡 Remove duplicate device name print

```diff
 if torch.cuda.is_available():
     device = torch.device("cuda:0")
     print(f"Using GPU: {torch.cuda.get_device_name(0)}")
     print(f"GPU memory allocated: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
     print(f"GPU memory cached: {torch.cuda.memory_reserved(0) / 1024**2:.2f} MB")
     print(f"CUDA device count: {torch.cuda.device_count()}")
     print(f"Current CUDA device: {torch.cuda.current_device()}")
-    print(f"CUDA device name: {torch.cuda.get_device_name(0)}")
 else:
     device = torch.device("cpu")
     print("CUDA not available, using CPU")
```

Otherwise, the CUDA detection and fallback pattern is correct and follows standard PyTorch idioms.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/torch_example.py` around lines 8-18, remove the duplicate GPU device-name print by keeping a single call to `torch.cuda.get_device_name(0)`; update the block that sets `device` so it prints the device name only once (remove one of the two `print(...)` lines calling `torch.cuda.get_device_name(0)`), leaving the other diagnostic prints (memory_allocated, memory_reserved, device_count, current_device) intact.

`install_miniforge.bash` (1)
**683-683**: Minor: Echo message could be more specific. The message "We have a Fortran compiler. Installing ffnet" is generic. Consider making it more informative by including the Python version context, since the build flags differ based on Python version.
💡 Optional improvement

```diff
- echo "We have a Fortran compiler. Installing ffnet"
+ echo "We have a Fortran compiler. Installing ffnet (Python $PYTHON_VER)"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `install_miniforge.bash` at line 683, update the echo message that announces installing ffnet to include the Python version context so it's clearer which build flags are being used; locate the echo statement (`echo "We have a Fortran compiler. Installing ffnet"`) in install_miniforge.bash and change it to include the output of the Python version command (e.g., `python --version` or `$(python -c 'import sys; print(sys.version.split()[0])')`) so the log shows both the Fortran compiler presence and the specific Python version being targeted.

`tests/torch_example_like_tflow.py` (1)
**50-50**: Rename unused loop variable per Python convention. The `batch_idx` variable is not used within the loop body. Per Python convention and the static analysis hint, rename it to `_batch_idx` or just `_`.

💡 Rename unused variable

```diff
- for batch_idx, (data, target) in enumerate(train_loader):
+ for _, (data, target) in enumerate(train_loader):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/torch_example_like_tflow.py` at line 50, The loop in the training code uses an unused variable `batch_idx` in the `for batch_idx, (data, target) in enumerate(train_loader):` line; rename it to `_` or `_batch_idx` to follow Python convention and silence linter warnings (update the `for` header only—no other changes needed since `batch_idx` is not referenced elsewhere).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 8254ffd9-bbba-4345-b43f-8fccdd9ef7d5
📒 Files selected for processing (4)
- CHANGELOG.md
- install_miniforge.bash
- tests/torch_example.py
- tests/torch_example_like_tflow.py
🧹 Nitpick comments (1)
README.md (1)
**17-23**: LGTM on the content updates! The example command and resulting path format correctly reflect the actual script behavior. The path structure `/opt/GEOSpyD/26.1.0-0/YYYY-MM-DD` matches the implementation in `install_miniforge.bash` (lines 370-374), which constructs the path as `$MINIFORGE_DIR/${MINIFORGE_VER}/$DATE`.

Minor nit: The code block at lines 21-22 is missing a language specifier (static analysis hint MD040). Consider adding a language identifier for consistency.
📝 Optional: Add language specifier to code block
````diff
 will create an install at:
-```
+```text
 /opt/GEOSpyD/26.1.0-0/YYYY-MM-DD
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `README.md` around lines 17-23, add a language specifier to the fenced code block that shows the resulting path to satisfy MD040; update the block that contains `/opt/GEOSpyD/26.1.0-0/YYYY-MM-DD` (the code block following the example command `./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.0-0 --prefix /opt/GEOSpyD`) to use a language tag like `text` (or `bash`) instead of a bare fence so the markdown linter stops flagging MD040.

ℹ️ Review info

⚙️ Run configuration

**Configuration used**: Repository UI
**Review profile**: CHILL
**Plan**: Pro
**Run ID**: `27a56af4-203d-4c8b-89d2-a99e0a08bb0a`

📥 Commits

Reviewing files that changed from the base of the PR and between f1fd3ff82db6d24933ce3a3332631f4a9d610792 and 563698067e201fc9522ab18abf9820de07aca98f.

📒 Files selected for processing (4)

- README.GMAO
- README.NAS
- README.NCCS
- README.md
Actionable comments posted: 2
🧹 Nitpick comments (5)
tests/torch_example_like_tflow.py (3)
**10-31**: Move dataset setup and execution under `main()` to avoid import-time side effects. Lines 10-31 and 120-129 currently run on import (including download and training). Wrap runtime flow in `main()` and invoke via `if __name__ == "__main__":`.

Suggested refactor shape

```diff
-def run_mnist(device_name):
+def run_mnist(device_name, train_loader, test_loader):
     ...
-# Run CPU
-run_mnist("cpu")
-
-# Determine and run accelerated device
-if torch.cuda.is_available():
-    run_mnist("cuda:0")
-elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-    run_mnist("mps")
-else:
-    print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+def main():
+    print("PyTorch version:", torch.__version__)
+    print("CUDA available:", torch.cuda.is_available())
+    if hasattr(torch.backends, "mps"):
+        print("MPS available:", torch.backends.mps.is_available())
+
+    transform = transforms.Compose([transforms.ToTensor()])
+    print("\nDownloading/Loading MNIST dataset...")
+    train_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
+    test_dataset = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
+    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
+    test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
+
+    run_mnist("cpu", train_loader, test_loader)
+    if torch.cuda.is_available():
+        run_mnist("cuda:0", train_loader, test_loader)
+    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
+        run_mnist("mps", train_loader, test_loader)
+    else:
+        print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+
+if __name__ == "__main__":
+    main()
```

Also applies to: 120-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/torch_example_like_tflow.py` around lines 10 - 31, The MNIST dataset download/loader creation and training/testing flow are executed at import time (train_dataset, test_dataset, train_loader, test_loader and any code around lines 120-129); wrap all runtime setup and execution into a main() function and move the dataset creation, DataLoader setup, and any training/evaluation calls into that main(), then call main() under if __name__ == "__main__": so imports no longer trigger downloads or training. Locate the DataLoader and datasets.MNIST usage and the top-level prints (e.g., torch version/CUDA/MPS checks) and move them inside main() while leaving only function/class definitions at module scope.
**61-61**: Remove unused `batch_idx` at Line 61. `batch_idx` is unused; iterate directly over the loader to satisfy Ruff B007.

Suggested change

```diff
- for batch_idx, (data, target) in enumerate(train_loader):
+ for data, target in train_loader:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/torch_example_like_tflow.py` at line 61, The loop currently uses enumerate over train_loader and binds an unused variable batch_idx (for batch_idx, (data, target) in enumerate(train_loader)); remove enumerate and the batch_idx binding and iterate directly over train_loader (for (data, target) in train_loader) to eliminate the unused variable and satisfy Ruff B007 while keeping the same data/target iteration semantics; update any references to batch_idx in the surrounding scope if present.
**77-77**: Avoid `.data` for predictions; use `argmax()` for cleaner, idiomatic PyTorch. At lines 77 and 100, using `.data` is non-idiomatic in modern PyTorch. Since you only need the predicted class indices, replace `torch.max(output.data, 1)` with `output.argmax(dim=1)`. This is cleaner and avoids unnecessary detachment from the computation graph.

Suggested change

```diff
- _, predicted = torch.max(output.data, 1)
+ predicted = output.argmax(dim=1)
```

🤖 Prompt for AI Agents
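The per-row argmax the suggestion relies on is easy to sanity-check without torch; here is a pure-Python stand-in with toy class scores (values are hypothetical; on a real tensor this is `output.argmax(dim=1)`):

```python
# Toy "logits" batch: 2 samples, 3 classes.
output = [[0.1, 0.7, 0.2],
          [0.9, 0.05, 0.05]]

# Index of the per-row maximum, the same quantity torch.argmax(dim=1)
# (or the old torch.max(output.data, 1) index) returns.
predicted = [max(range(len(row)), key=row.__getitem__) for row in output]
print(predicted)  # → [1, 0]
```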
Verify each finding against the current code and only fix it if needed. In `tests/torch_example_like_tflow.py` at line 77, replace the use of `.data` with argmax to get predicted class indices: locate the assignments that set `predicted` via `torch.max(output.data, 1)` (e.g., the line creating `_, predicted = torch.max(output.data, 1)` and the similar occurrence later around line 100) and change them to use `output.argmax(dim=1)` so `predicted` receives `output.argmax(dim=1)`; this removes `.data` and uses the idiomatic way to compute class predictions from the tensor named `output`.

`tests/torch_example.py` (2)
**26-26**: Rename unused loop variable at Line 26. `t` is not used in the loop body; rename to `_t` (or `_`) to satisfy Ruff B007 and reduce lint noise.

Suggested change

```diff
- for t in range(2000):
+ for _t in range(2000):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/torch_example.py` at line 26, The for-loop in tests/torch_example.py uses an unused loop variable named "t" (for t in range(2000)); rename it to "_t" (or simply "_") to silence Ruff B007 and reduce lint noise. Update the loop header so the rest of the loop body remains unchanged but references no new variable, ensuring no behavioral change.
**55-68**: Guard top-level execution behind `__main__`. Lines 55-68 execute immediately on import. Wrap these calls to avoid accidental runtime side effects when this module is imported by tooling/tests.

Suggested change

```diff
-print(f"PyTorch version: {torch.__version__}")
-
-# Always run CPU first
-run_polynomial_regression("cpu")
-
-# Determine and run accelerated device
-if torch.cuda.is_available():
-    print(f"\nFound CUDA: {torch.cuda.get_device_name(0)}")
-    run_polynomial_regression("cuda:0")
-elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-    print("\nFound Apple Metal Performance Shaders (MPS)")
-    run_polynomial_regression("mps")
-else:
-    print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+def main():
+    print(f"PyTorch version: {torch.__version__}")
+    run_polynomial_regression("cpu")
+
+    if torch.cuda.is_available():
+        print(f"\nFound CUDA: {torch.cuda.get_device_name(0)}")
+        run_polynomial_regression("cuda:0")
+    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
+        print("\nFound Apple Metal Performance Shaders (MPS)")
+        run_polynomial_regression("mps")
+    else:
+        print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+
+if __name__ == "__main__":
+    main()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/torch_example.py` around lines 55 - 68, The top-level execution in tests/torch_example.py prints the PyTorch version and calls run_polynomial_regression immediately on import; wrap that behavior in a if __name__ == "__main__": guard so importing the module doesn't run the CPU/CUDA/MPS runs. Move the print(f"PyTorch version: {torch.__version__}") and the subsequent device detection and run_polynomial_regression("cpu"/"cuda:0"/"mps") calls into a main block (or a small main() function that you call from that guard) and leave run_polynomial_regression and any helper definitions at module level.
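The guard pattern being asked for can be illustrated independently of torch; the `run_polynomial_regression` stub below is a placeholder, not the script's real training routine:

```python
def run_polynomial_regression(device_name):
    # Placeholder for the real routine in tests/torch_example.py.
    return f"ran on {device_name}"

def main():
    # Device selection is elided; the real script checks CUDA/MPS here.
    return [run_polynomial_regression("cpu")]

if __name__ == "__main__":
    # Runs only when executed directly, not when imported by tooling/tests.
    print(main())
```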
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@install_miniforge.bash`:
- Around line 774-775: The two mamba list invocations use unquoted variable
expansions ($MINIFORGE_BINDIR and $MINIFORGE_ENVNAME) which can break when paths
or environment names contain spaces; update both commands (the lines invoking
mamba list -n $MINIFORGE_ENVNAME --explicit > distribution_spec_file.txt and
mamba list -n $MINIFORGE_ENVNAME > mamba_list_packages.txt) to quote the
variables (e.g. "$MINIFORGE_BINDIR/mamba" and -n "$MINIFORGE_ENVNAME") so the
binder path and env name are handled safely.
In `@README.md`:
- Around line 21-23: The fenced code block containing the path
"/opt/GEOSpyD/26.1.1-3/YYYY-MM-DD" lacks a language identifier; update the
opening fence from ``` to a language tag (e.g., ```text) so markdownlint MD040
is satisfied and the snippet is properly highlighted; locate the triple-backtick
block in README.md and add the identifier to the opening fence.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: cbac2571-f94d-41f9-a995-ebd0f676b584
📒 Files selected for processing (9)
- .gitignore
- CHANGELOG.md
- README.GMAO
- README.NAS
- README.NCCS
- README.md
- install_miniforge.bash
- tests/torch_example.py
- tests/torch_example_like_tflow.py
✅ Files skipped from review due to trivial changes (2)
- .gitignore
- README.NAS
🚧 Files skipped from review as they are similar to previous changes (3)
- README.NCCS
- README.GMAO
- CHANGELOG.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary

This PR prepares the `26.1.1-3` release and fixes the `install_miniforge.bash` script when run with Python 3.14 and Miniforge 26.1.1-3, where the install would fail during the `ffnet` build step.

Changes
Bug Fixes
- Remove `--no-use-pep517`: This pip flag was dropped in pip 23.1. With modern pip (26.x), passing it caused an immediate fatal error: `no such option: --no-use-pep517`.
- Add `--no-build-isolation` for Python 3.13+: When pip builds a PEP 517 package in its default isolated sandbox, it does not inherit packages from the conda environment. The `ffnet` Fortran build requires `numpy` at compile time via `f2py`, but numpy was invisible inside the sandbox, causing `ModuleNotFoundError: No module named 'numpy'`. Passing `--no-build-isolation` tells pip to use the already-installed conda env packages instead.
- Remove `--show-channel-urls` from `mamba list`: This flag is not supported by mamba 2.x and caused the end-of-install package listing (`mamba_list_packages.txt`, `distribution_spec_file.txt`) to be skipped with a spurious warning.

Features & Updates
- `tests/torch_example.py` and `tests/torch_example_like_tflow.py`: test both CPU and accelerated backend (MPS/CUDA) side-by-side, including timing output.
- `torchvision`: included in the `pip install` list in `install_miniforge.bash`
- Example versions bumped to Miniforge `26.1.1-3` and Python `3.14` in scripts/READMEs
- `CHANGELOG.md` updated for the `26.1.1-3` release

Testing
Verified by running the full install command end-to-end on macOS arm64:

- `ffnet-0.8.6` built and installed successfully (`ffnet-0.8.6-cp314-cp314-macosx_15_0_arm64.whl`). All other packages installed without error.
- The `torch_example` runs successfully execute and identify the Apple MPS backend.
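The side-by-side timing mentioned in the test scripts follows a simple wall-clock pattern; a minimal sketch with a stand-in workload (not the scripts' actual training loop, and `timed_run` is a hypothetical helper name):

```python
import time

def timed_run(label, workload):
    # Time one device's workload and report it, the way the test scripts
    # compare their "cpu" run against "cuda:0"/"mps".
    start = time.perf_counter()
    result = workload()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f} s")
    return result, elapsed

result, seconds = timed_run("cpu", lambda: sum(i * i for i in range(100_000)))
```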