Merged
2 changes: 2 additions & 0 deletions .gitignore
@@ -11,3 +11,5 @@ Miniconda2/
xor*
tmp-for-ffnet/
*.png

data/
37 changes: 36 additions & 1 deletion CHANGELOG.md
@@ -7,15 +7,50 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added

### Changed

### Deprecated

### Removed

### Fixed

## [26.1.1-3] - 2026-04-03

### Fixed

- Fix `ffnet` installation for Python 3.13+ and modern pip (26.x):
  - Remove `--no-use-pep517` flag, which was dropped in pip 23.1
  - Add `--no-build-isolation` for Python 3.13+ so that `numpy` (required at
    Fortran compile time) is visible to pip's build backend instead of being
    hidden inside an isolated sandbox
  - Simplify misleading echo message in the ffnet install block
- Remove `--show-channel-urls` flag from `mamba list` calls; the flag is not
  supported by mamba 2.x and caused the end-of-install package listing to be
  skipped with a spurious warning
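The fixed ffnet install sequence can be sketched roughly as follows (the repo URL comes from the script's own comments; the exact variable names and flags the script composes may differ). The snippet only verifies that modern pip still recognizes `--no-build-isolation`:

```shell
# Sketch of the corrected ffnet install step on Python 3.13+.
# --no-use-pep517 is gone (removed in pip 23.1); --no-build-isolation lets
# the build see the conda env's numpy instead of an isolated sandbox:
#
#   pip install setuptools wheel
#   pip install --no-build-isolation git+https://github.com/mrkwjc/ffnet.git
#
# Confirm the replacement flag exists in the installed pip:
python3 -m pip install --help | grep -q -- '--no-build-isolation' && echo "flag supported"
```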

### Changed

- Update PyTorch examples to support Apple Metal Performance Shaders (MPS)
- Update example Python version to 3.14
- Update example Miniforge version to 26.1.1-3
- Disable TensorFlow installation for Python 3.14 (see https://github.com/tensorflow/tensorflow/issues/102890)
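The MPS-aware device selection added to the PyTorch examples follows roughly this CUDA-then-MPS-then-CPU preference. This is a sketch; `pick_device` is a hypothetical helper, not a function in this repo:

```python
def pick_device() -> str:
    """Return the best available torch device string, falling back to CPU.

    Sketch only: mirrors the CUDA -> MPS -> CPU preference used by the
    updated examples; pick_device itself is not part of the repo.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # torch not installed: nothing to accelerate
    if torch.cuda.is_available():
        return "cuda:0"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


print(pick_device())  # prints the chosen device, e.g. "cpu"
```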

### Added

- Add `torchvision` to the `pip install` list in `install_miniforge.bash`
- Add CPU vs. Accelerated (CUDA/MPS) comparisons to PyTorch tests (`tests/torch_example.py` and `tests/torch_example_like_tflow.py`)
- Explicit Conda Packages
  - basemap (the latest release, needed for Python 3.14 support, is only on conda-forge)
- Explicit Pip Packages
  - python-docx

### Removed

### Deprecated

- Explicit Pip Packages
  - basemap (the latest release, needed for Python 3.14 support, is only on conda-forge)

## [25.3.1] - 2025-10-02

2 changes: 1 addition & 1 deletion README.GMAO
@@ -1,5 +1,5 @@
The command for installing on ford1 is:

```
$ ./install_miniconda.bash --python_version 3.12 --miniconda_version 24.5.0-0 --prefix /ford1/share/gmao_SIteam/GEOSpyD
$ ./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /ford1/share/gmao_SIteam/GEOSpyD
```
2 changes: 1 addition & 1 deletion README.NAS
@@ -1,5 +1,5 @@
The command for installing on NAS systems is:

```
$ ./install_miniconda.bash --python_version 3.12 --miniconda_version 24.5.0-0 --prefix /nobackup/gmao_SIteam/GEOSpyD
$ ./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /nobackup/gmao_SIteam/GEOSpyD
```
2 changes: 1 addition & 1 deletion README.NCCS
@@ -1,5 +1,5 @@
The command for installing on NCCS systems is:

```
$ ./install_miniconda.bash --python_version 3.12 --miniconda_version 24.5.0-0 --prefix /usr/local/other/python/GEOSpyD/
$ ./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /usr/local/other/python/GEOSpyD/
```
22 changes: 12 additions & 10 deletions README.md
@@ -14,12 +14,12 @@ to prevent infection from the Anaconda `defaults` channel, the script at the end
To use the install script, run:

```
./install_miniforge.bash --python_version 3.12 --miniforge_version 24.5.0-0 --prefix /opt/GEOSpyD
./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /opt/GEOSpyD
```

This will create an install at:
```
/opt/GEOSpyD/24.5.0-0_py3.12/YYYY-MM-DD
```text
/opt/GEOSpyD/26.1.1-3/YYYY-MM-DD
```

where YYYY-MM-DD is the date of the install. We use a date so that if
@@ -28,17 +28,19 @@ the stack is re-installed, the previous install is not overwritten.
## Usage

```
Usage: ./install_miniforge.bash --python_version <python version> --miniforge_version <miniforge> --prefix <prefix> [--micromamba | --mamba] [--blas <blas>]
Usage: ./install_miniforge.bash --python_version <python version> --miniforge_version <miniforge> --prefix <prefix>
[--micromamba | --mamba] [--blas <blas>] [--ffnet-hack]

Required arguments:
--python_version <python version> (e.g., 3.12)
--miniforge_version <miniforge_version version> (e.g., 24.5.0-0)
--python_version <python version> (e.g., 3.14)
--miniforge_version <miniforge version> (e.g., 26.1.1-3)
--prefix <full path to installation directory> (e.g., /opt/GEOSpyD)

Optional arguments:
--blas <blas> (default: accelerate, options: mkl, openblas, accelerate, blis)
--micromamba: Use micromamba installer (default)
--mamba: Use mamba installer
--ffnet-hack: Install ffnet from a fork (used on Bucy, where the build otherwise fails to find gfortran)
--help: Print this message

By default we use the micromamba installer on both Linux and macOS
@@ -47,17 +49,17 @@ Usage: ./install_miniforge.bash --python_version <python version> --miniforge_ve
NOTE 1: This script installs within /opt/GEOSpyD with a path based on:

1. The Miniforge version
2. The Python version
3. The date of the installation
2. The date of the installation

For example: ./install_miniforge.bash --python_version 3.12 --miniforge_version 24.5.0-0 --prefix /opt/GEOSpyD
For example: ./install_miniforge.bash --python_version 3.14 --miniforge_version 26.1.1-3 --prefix /opt/GEOSpyD

will create an install at:
/opt/GEOSpyD/24.5.0-0_py3.12/2024-08-29
/opt/GEOSpyD/26.1.1-3/2026-03-11

NOTE 2: This script will create or substitute a .mambarc
and .condarc file in the user's home directory. If you
have an existing .mambarc and/or .condarc file, it will be
restored after installation. We do this to ensure that the
installation uses conda-forge as the default channel.
```
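The path scheme described in NOTE 1 can be sketched as below (the variable names are illustrative, not the script's own):

```shell
# Illustrative derivation of the install path from NOTE 1:
#   <prefix>/<miniforge version>/<install date>
PREFIX=/opt/GEOSpyD
MINIFORGE_VERSION=26.1.1-3
INSTALL_DATE=$(date +%F)   # YYYY-MM-DD, so reinstalls never overwrite
echo "${PREFIX}/${MINIFORGE_VERSION}/${INSTALL_DATE}"
```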

36 changes: 26 additions & 10 deletions install_miniforge.bash
@@ -72,8 +72,8 @@ fi
# Usage
# -----

EXAMPLE_PY_VERSION="3.13"
EXAMPLE_MINI_VERSION="25.3.1-0"
EXAMPLE_PY_VERSION="3.14"
EXAMPLE_MINI_VERSION="26.1.1-3"
EXAMPLE_INSTALLDIR="/opt/GEOSpyD"
EXAMPLE_DATE=$(date +%F)
usage() {
@@ -605,6 +605,8 @@ $PACKAGE_INSTALL uxarray

$PACKAGE_INSTALL rasterio contextily

$PACKAGE_INSTALL basemap

# Only install pythran on linux. On mac it brings in an old clang
if [[ $MINIFORGE_ARCH == Linux ]]
then
@@ -651,26 +653,34 @@ $PIP_INSTALL PyRTF3 pipenv pymp-pypi h5py
$PIP_INSTALL pycircleci metpy siphon questionary xgrads
$PIP_INSTALL ruamel.yaml
$PIP_INSTALL xgboost
$PIP_INSTALL tensorflow evidential-deep-learning silence_tensorflow
$PIP_INSTALL torch

# Tensorflow does not support Python 3.14 yet
# https://github.com/tensorflow/tensorflow/issues/102890
if [[ $PYTHON_VER_WITHOUT_DOT -ge 314 ]]
then
echo "Skipping tensorflow installation as Python $PYTHON_VER is 3.14 or higher"
else
$PIP_INSTALL tensorflow evidential-deep-learning silence_tensorflow
fi
$PIP_INSTALL torch torchvision
$PIP_INSTALL yaplon
$PIP_INSTALL lxml
$PIP_INSTALL juliandate
$PIP_INSTALL pybufrkit
$PIP_INSTALL pyephem
$PIP_INSTALL basemap
$PIP_INSTALL redis
$PIP_INSTALL Flask
$PIP_INSTALL goes2go
$PIP_INSTALL nco
$PIP_INSTALL cdo
$PIP_INSTALL ecmwf-opendata
$PIP_INSTALL python-docx

# some packages require a Fortran compiler. This sometimes isn't available
# on macs (though usually is)
if [[ $FORTRAN_AVAILABLE == TRUE ]]
then
echo "We have a Fortran compiler and are Python 3.12 or older. Installing ffnet"
echo "We have a Fortran compiler. Installing ffnet (Python $PYTHON_VER)"
# we need to install ffnet from https://github.com/mrkwjc/ffnet.git
# This is because the version in PyPI is not compatible with Python 3
# and latest scipy
@@ -682,8 +692,14 @@ then
if [[ $PYTHON_VER_WITHOUT_DOT -ge 313 ]]
then
$PIP_INSTALL setuptools wheel
# We also need a new flag for Python 3.13
EXTRA_PIP_FLAGS='--no-use-pep517'
fi
# For Python 3.13+, pip's isolated build environment does not inherit the
# conda env packages (e.g. numpy), which ffnet needs at build time. Passing
# --no-build-isolation tells pip to use the already-installed packages from
# the conda env instead of creating a fresh isolated sandbox.
if [[ $PYTHON_VER_WITHOUT_DOT -ge 313 ]]
then
EXTRA_PIP_FLAGS='--no-build-isolation'
else
EXTRA_PIP_FLAGS=''
fi
@@ -755,8 +771,8 @@ $PIP_INSTALL prompt_toolkit
# Use mamba to output list of packages installed
# ----------------------------------------------
cd $MINIFORGE_ENVDIR
$MINIFORGE_BINDIR/mamba list -n $MINIFORGE_ENVNAME --show-channel-urls --explicit > distribution_spec_file.txt
$MINIFORGE_BINDIR/mamba list -n $MINIFORGE_ENVNAME --show-channel-urls > mamba_list_packages.txt
"$MINIFORGE_BINDIR"/mamba list -n "$MINIFORGE_ENVNAME" --explicit > distribution_spec_file.txt
"$MINIFORGE_BINDIR"/mamba list -n "$MINIFORGE_ENVNAME" > mamba_list_packages.txt
./bin/pip freeze > pip_freeze_packages.txt

# Restore User's .mambarc and .condarc using cleanup function
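The `$PYTHON_VER_WITHOUT_DOT` checks above compare versions as integers after stripping the dot. The script's actual derivation of that variable is outside this diff; this is one plausible, minimal way to do it:

```shell
# Sketch: integer-compare Python versions by deleting the dot, as the
# script's $PYTHON_VER_WITHOUT_DOT checks do (derivation assumed, not shown
# in this diff).
PYTHON_VER=3.14
PYTHON_VER_WITHOUT_DOT=$(echo "$PYTHON_VER" | tr -d .)   # "314"
if [ "$PYTHON_VER_WITHOUT_DOT" -ge 314 ]
then
    echo "skip tensorflow"      # Python 3.14+: no TensorFlow wheels yet
else
    echo "install tensorflow"
fi
```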
90 changes: 59 additions & 31 deletions tests/torch_example.py
@@ -2,44 +2,72 @@

import torch
import math
import time


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU
def run_polynomial_regression(device_name):
    device = torch.device(device_name)
    print(f"\n========== Running on {device} ==========")
    dtype = torch.float

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
    # Create random input and output data
    x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
    y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)
    # Randomly initialize weights
    a = torch.randn((), device=device, dtype=dtype)
    b = torch.randn((), device=device, dtype=dtype)
    c = torch.randn((), device=device, dtype=dtype)
    d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    learning_rate = 1e-6

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)
    start_time = time.time()
    for _t in range(2000):
        # Forward pass: compute predicted y
        y_pred = a + b * x + c * x**2 + d * x**3

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()
        # Compute and print loss
        loss = (y_pred - y).pow(2).sum().item()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d
        # Backprop to compute gradients of a, b, c, d with respect to loss
        grad_y_pred = 2.0 * (y_pred - y)
        grad_a = grad_y_pred.sum()
        grad_b = (grad_y_pred * x).sum()
        grad_c = (grad_y_pred * x**2).sum()
        grad_d = (grad_y_pred * x**3).sum()

        # Update weights using gradient descent
        a -= learning_rate * grad_a
        b -= learning_rate * grad_b
        c -= learning_rate * grad_c
        d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
    end_time = time.time()

    print(f"Time taken: {end_time - start_time:.4f} seconds")
    print(f"Final loss: {loss:.4f}")
    print(
        f"Result eq: y = {a.item():.4f} + {b.item():.4f} x + {c.item():.4f} x^2 + {d.item():.4f} x^3"
    )


def main():
    print(f"PyTorch version: {torch.__version__}")

    # Always run CPU first
    run_polynomial_regression("cpu")

    # Determine and run accelerated device
    if torch.cuda.is_available():
        print(f"\nFound CUDA: {torch.cuda.get_device_name(0)}")
        run_polynomial_regression("cuda:0")
    elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        print("\nFound Apple Metal Performance Shaders (MPS)")
        run_polynomial_regression("mps")
    else:
        print("\nNo accelerated device (CUDA/MPS) found. Skipping accelerated run.")


if __name__ == "__main__":
    main()