Fix ffnet build for Python 3.13+ and mamba 2.x compatibility #41

Merged: @mathomp4 merged 9 commits into main from update-2025Dec04 on Apr 6, 2026

Conversation

@mathomp4 (Member) commented Mar 11, 2026

Summary

This PR prepares the 26.1.1-3 release and fixes the install_miniforge.bash script for Python 3.14 with Miniforge 26.1.1-3, where the install previously failed during the ffnet build step.

Changes

Bug Fixes

  • Remove --no-use-pep517: This pip flag was dropped in pip 23.1. With modern pip (26.x), passing it caused an immediate fatal error: no such option: --no-use-pep517.

  • Add --no-build-isolation for Python 3.13+: When pip builds a PEP 517 package in its default isolated sandbox, it does not inherit packages from the conda environment. The ffnet Fortran build requires numpy at compile time via f2py, but numpy was invisible inside the sandbox, causing ModuleNotFoundError: No module named 'numpy'. Passing --no-build-isolation tells pip to use the already-installed conda env packages instead.

  • Remove --show-channel-urls from mamba list: This flag is not supported by mamba 2.x and caused the end-of-install package listing (mamba_list_packages.txt, distribution_spec_file.txt) to be skipped with a spurious warning.
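
The pip-flag fixes above amount to a simple version gate. A minimal Python sketch of that logic (the ffnet_pip_flags helper is hypothetical for illustration; the actual install_miniforge.bash implements this in shell):

```python
def ffnet_pip_flags(python_version):
    """Extra pip flags for building ffnet under a given Python version.

    Mirrors the fixes described above: the removed --no-use-pep517 flag is
    never emitted, and --no-build-isolation is added for Python >= 3.13 so
    f2py can see the numpy already installed in the conda environment.
    """
    major, minor = python_version
    flags = []
    if (major, minor) >= (3, 13):
        flags.append("--no-build-isolation")
    return flags
```

For example, ffnet_pip_flags((3, 14)) yields ["--no-build-isolation"], while ffnet_pip_flags((3, 12)) yields an empty list.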

Features & Updates

  • Add PyTorch Apple MPS Support: Updated tests/torch_example.py and tests/torch_example_like_tflow.py to test both CPU and accelerated backend (MPS/CUDA) side-by-side, including timing output.
  • Add torchvision: Included in the pip install list in install_miniforge.bash
  • Update example Miniforge version to 26.1.1-3 and Python version to 3.14 in scripts/READMEs
  • Update CHANGELOG.md for the 26.1.1-3 release
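
The CPU-plus-accelerator testing described above boils down to a fixed backend preference order. A minimal sketch (pick_accelerator is a hypothetical helper; the actual test scripts call torch.cuda.is_available() and torch.backends.mps.is_available() inline):

```python
def pick_accelerator(cuda_available, mps_available):
    """Choose the accelerated backend to run alongside the CPU baseline.

    CUDA is preferred when present, then Apple MPS; None means the
    accelerated run is skipped entirely.
    """
    if cuda_available:
        return "cuda:0"
    if mps_available:
        return "mps"
    return None
```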

Testing

Verified by running the full install command end-to-end on macOS arm64:

./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /Users/mathomp4/installed/Core/GEOSpyD

ffnet-0.8.6 built and installed successfully (ffnet-0.8.6-cp314-cp314-macosx_15_0_arm64.whl). All other packages installed without error. The torch_example runs execute successfully and identify the Apple MPS backend.

- Remove --no-use-pep517 flag (dropped in pip 23.1)
- Add --no-build-isolation for Python 3.13+ so numpy is visible to
  pip's build backend during the ffnet Fortran compilation
- Fix misleading echo message in ffnet install block
- Remove --show-channel-urls from mamba list calls (unsupported in mamba 2.x)
- Update example Miniforge version to 26.1.0-0
- Update CHANGELOG
@coderabbitai (bot, Contributor) commented Mar 11, 2026


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2d3b3691-80f4-4de6-a5ce-02a4465bbdac

📥 Commits

Reviewing files that changed from the base of the PR and between 23738db and 50be101.

📒 Files selected for processing (4)
  • README.md
  • install_miniforge.bash
  • tests/torch_example.py
  • tests/torch_example_like_tflow.py

Walkthrough

Installer and docs updated to Miniforge/Python 3.14, ffnet build flags and mamba listing flags adjusted, basemap moved to conda and python-docx added, TensorFlow install made conditional for Python 3.14+, and new CUDA/MPS-aware PyTorch test scripts added.

Changes

  • Changelog & Installer (CHANGELOG.md, install_miniforge.bash): Bumped example Python to 3.14 and Miniforge to 26.1.x; removed --show-channel-urls from a mamba list call; replaced ffnet --no-use-pep517 with --no-build-isolation for Python ≥3.13; moved basemap to conda-forge and removed pip basemap; added python-docx; conditional TensorFlow installs for Python 3.14+.
  • Tests (tests/torch_example.py, tests/torch_example_like_tflow.py): tests/torch_example.py refactored into run_polynomial_regression(device_name); runs on CPU and optionally on an accelerator with timing. New tests/torch_example_like_tflow.py: MNIST training/eval script, device-aware (CPU/CUDA/MPS), simple FFN, 5 epochs, timing and sample softmax outputs.
  • Documentation & READMEs (README.md, README.GMAO, README.NAS, README.NCCS): Replaced Miniconda examples with Miniforge installer usage; updated example Python and Miniforge versions to 3.14 / 26.1.x; adjusted install paths/dates; added --ffnet-hack CLI flag and reflowed usage docs.
  • Repository config (.gitignore): Added data/ to ignore rules.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

bug, add package

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title accurately and concisely captures the main bug fixes: ffnet build compatibility for Python 3.13+ and mamba 2.x compatibility.
  • Description check ✅ Passed: The description comprehensively explains the changes, bug fixes, features, and testing verification, all directly related to the changeset.


- Update example versions to Python 3.14 and Miniforge 26.1.0-0
- Fix install path format (prefix/version/date, not prefix/version_pyX.Y/date)
- Add missing --ffnet-hack option to usage in README.md
- Fix NOTE 1 in README.md (path is no longer based on Python version)
- Update README.GMAO, README.NAS, README.NCCS to use install_miniforge.bash
  (replace stale references to install_miniconda.bash)
@coderabbitai (bot) left a comment

🧹 Nitpick comments (3)
tests/torch_example.py (1)

8-18: Minor redundancy in GPU info output.

Lines 10 and 15 both print the device name via torch.cuda.get_device_name(0). Consider removing the duplicate.

💡 Remove duplicate device name print
 if torch.cuda.is_available():
     device = torch.device("cuda:0")
     print(f"Using GPU: {torch.cuda.get_device_name(0)}")
     print(f"GPU memory allocated: {torch.cuda.memory_allocated(0) / 1024**2:.2f} MB")
     print(f"GPU memory cached: {torch.cuda.memory_reserved(0) / 1024**2:.2f} MB")
     print(f"CUDA device count: {torch.cuda.device_count()}")
     print(f"Current CUDA device: {torch.cuda.current_device()}")
-    print(f"CUDA device name: {torch.cuda.get_device_name(0)}")
 else:
     device = torch.device("cpu")
     print("CUDA not available, using CPU")

Otherwise, the CUDA detection and fallback pattern is correct and follows standard PyTorch idioms.

install_miniforge.bash (1)

683-683: Minor: Echo message could be more specific.

The message "We have a Fortran compiler. Installing ffnet" is generic. Consider making it more informative by including the Python version context since the build flags differ based on Python version.

💡 Optional improvement
-   echo "We have a Fortran compiler. Installing ffnet"
+   echo "We have a Fortran compiler. Installing ffnet (Python $PYTHON_VER)"
tests/torch_example_like_tflow.py (1)

50-50: Rename unused loop variable per Python convention.

The batch_idx variable is not used within the loop body. Per Python convention and the static analysis hint, rename it to _batch_idx or just _.

💡 Rename unused variable
-    for batch_idx, (data, target) in enumerate(train_loader):
+    for _, (data, target) in enumerate(train_loader):

ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between 39545ce and f1fd3ff.

📒 Files selected for processing (4)
  • CHANGELOG.md
  • install_miniforge.bash
  • tests/torch_example.py
  • tests/torch_example_like_tflow.py

@coderabbitai (bot) left a comment

🧹 Nitpick comments (1)
README.md (1)

17-23: LGTM on the content updates!

The example command and resulting path format correctly reflect the actual script behavior. The path structure /opt/GEOSpyD/26.1.0-0/YYYY-MM-DD matches the implementation in install_miniforge.bash (lines 370-374) which constructs the path as $MINIFORGE_DIR/${MINIFORGE_VER}/$DATE.

Minor nit: The code block at lines 21-22 is missing a language specifier (static analysis hint MD040). Consider adding a language identifier for consistency.


📝 Optional: Add language specifier to code block
 will create an install at:
-```
+```text
 /opt/GEOSpyD/26.1.0-0/YYYY-MM-DD

ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between f1fd3ff and 5636980.

📒 Files selected for processing (4)
  • README.GMAO
  • README.NAS
  • README.NCCS
  • README.md

@coderabbitai (bot) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
tests/torch_example_like_tflow.py (3)

10-31: Move dataset setup and execution under main() to avoid import-time side effects.

Lines 10-31 and 120-129 currently run on import (including download and training). Wrap runtime flow in main() and invoke via if __name__ == "__main__":.

Suggested refactor shape
-def run_mnist(device_name):
+def run_mnist(device_name, train_loader, test_loader):
     ...
-# Run CPU
-run_mnist("cpu")
-
-# Determine and run accelerated device
-if torch.cuda.is_available():
-    run_mnist("cuda:0")
-elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-    run_mnist("mps")
-else:
-    print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+def main():
+    print("PyTorch version:", torch.__version__)
+    print("CUDA available:", torch.cuda.is_available())
+    if hasattr(torch.backends, "mps"):
+        print("MPS available:", torch.backends.mps.is_available())
+
+    transform = transforms.Compose([transforms.ToTensor()])
+    print("\nDownloading/Loading MNIST dataset...")
+    train_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
+    test_dataset = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
+    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
+    test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
+
+    run_mnist("cpu", train_loader, test_loader)
+    if torch.cuda.is_available():
+        run_mnist("cuda:0", train_loader, test_loader)
+    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
+        run_mnist("mps", train_loader, test_loader)
+    else:
+        print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+
+if __name__ == "__main__":
+    main()

Also applies to: 120-129


61-61: Remove unused batch_idx at Line 61.

batch_idx is unused; iterate directly over the loader to satisfy Ruff B007.

Suggested change
-        for batch_idx, (data, target) in enumerate(train_loader):
+        for data, target in train_loader:

77-77: Avoid .data for predictions; use argmax() for cleaner, idiomatic PyTorch.

At lines 77 and 100, using .data is non-idiomatic in modern PyTorch. Since you only need the predicted class indices, replace torch.max(output.data, 1) with output.argmax(dim=1). This is cleaner and avoids unnecessary detachment from the computation graph.

Suggested change
-            _, predicted = torch.max(output.data, 1)
+            predicted = output.argmax(dim=1)
tests/torch_example.py (2)

26-26: Rename unused loop variable at Line 26.

t is not used in the loop body; rename to _t (or _) to satisfy Ruff B007 and reduce lint noise.

Suggested change
-    for t in range(2000):
+    for _t in range(2000):

55-68: Guard top-level execution behind __main__.

Lines 55-68 execute immediately on import. Wrap these calls to avoid accidental runtime side effects when this module is imported by tooling/tests.

Suggested change
-print(f"PyTorch version: {torch.__version__}")
-
-# Always run CPU first
-run_polynomial_regression("cpu")
-
-# Determine and run accelerated device
-if torch.cuda.is_available():
-    print(f"\nFound CUDA: {torch.cuda.get_device_name(0)}")
-    run_polynomial_regression("cuda:0")
-elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-    print("\nFound Apple Metal Performance Shaders (MPS)")
-    run_polynomial_regression("mps")
-else:
-    print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+def main():
+    print(f"PyTorch version: {torch.__version__}")
+    run_polynomial_regression("cpu")
+
+    if torch.cuda.is_available():
+        print(f"\nFound CUDA: {torch.cuda.get_device_name(0)}")
+        run_polynomial_regression("cuda:0")
+    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
+        print("\nFound Apple Metal Performance Shaders (MPS)")
+        run_polynomial_regression("mps")
+    else:
+        print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")
+
+if __name__ == "__main__":
+    main()
Inline (actionable) comments:
In `@install_miniforge.bash`:
- Around line 774-775: The two mamba list invocations use unquoted variable
expansions ($MINIFORGE_BINDIR and $MINIFORGE_ENVNAME) which can break when paths
or environment names contain spaces; update both commands (the lines invoking
mamba list -n $MINIFORGE_ENVNAME --explicit > distribution_spec_file.txt and
mamba list -n $MINIFORGE_ENVNAME > mamba_list_packages.txt) to quote the
variables (e.g. "$MINIFORGE_BINDIR/mamba" and -n "$MINIFORGE_ENVNAME") so the
binder path and env name are handled safely.

In `@README.md`:
- Around line 21-23: The fenced code block containing the path
"/opt/GEOSpyD/26.1.1-3/YYYY-MM-DD" lacks a language identifier; update the
opening fence from ``` to a language tag (e.g., ```text) so markdownlint MD040
is satisfied and the snippet is properly highlighted; locate the triple-backtick
block in README.md and add the identifier to the opening fence.


ℹ️ Review info

📥 Commits

Reviewing files that changed from the base of the PR and between 5636980 and 4593e93.

📒 Files selected for processing (9)
  • .gitignore
  • CHANGELOG.md
  • README.GMAO
  • README.NAS
  • README.NCCS
  • README.md
  • install_miniforge.bash
  • tests/torch_example.py
  • tests/torch_example_like_tflow.py
✅ Files skipped from review due to trivial changes (2)
  • .gitignore
  • README.NAS
🚧 Files skipped from review as they are similar to previous changes (3)
  • README.NCCS
  • README.GMAO
  • CHANGELOG.md

mathomp4 and others added 2 commits April 3, 2026 09:19
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@mathomp4 merged commit d9b86b2 into main on Apr 6, 2026 (2 checks passed)
@mathomp4 deleted the update-2025Dec04 branch April 6, 2026 16:13