# PyTorch High Sierra CUDA Revival
This project restores full CUDA compute support on macOS High Sierra using NVIDIA CUDA 10.2, cuDNN 7.6.5, and a custom‑built PyTorch 1.7.0.
A completely discontinued ecosystem — resurrected and preserved.
- Installation: docs/installation.md
- Troubleshooting: docs/troubleshooting.md
- Compatibility Matrix: docs/compatibility_matrix.md
- Architecture Overview: docs/architecture_overview.md
- Rebuild From Source: docs/rebuild_from_source.md
- Performance Benchmarks: docs/performance_benchmarks.md
- Historical Notes: docs/historical_notes.md
- Environment Specs: docs/environment_specs.md
- Examples: examples/
CUDA on macOS is dead.
Apple killed NVIDIA support.
NVIDIA discontinued macOS drivers.
PyTorch removed CUDA for macOS entirely.
But High Sierra (10.13.6) is the last remaining macOS that can still run:
- CUDA 10.2
- cuDNN 7.6.5
- NVIDIA WebDrivers
- Pascal GPUs at full power
This repository preserves the only functioning deep learning GPU environment for macOS.
This is not just a wheel —
It is a technical museum piece that still works.
| Component | Version | Status |
|---|---|---|
| macOS | 10.13.6 High Sierra | ✔ Supported |
| NVIDIA Drivers | WebDriver 387/387+ | ✔ Required |
| CUDA Toolkit | 10.2 | ✔ Fully Compatible |
| cuDNN | 7.6.5 | ✔ Final Version for macOS |
| GPU Architecture | Pascal (SM_61) | ✔ Optimal |
| PyTorch | 1.7.0a0 (Custom CUDA Build) | ✔ Working |
| Python | 3.8.x | ✔ Compatible |
- GeForce GTX 1050 / 1050 Ti
- GeForce GTX 1060
- GeForce GTX 1070
- GeForce GTX 1080
- GeForce GTX 1080 Ti
- Titan X / Titan Xp
- GT 710 / GT 730
- GTX 770 / 780 / 780 Ti
- GTX 680
- GTX 285
- GT 120
- 8800 GT
- Quadro K5000 (Mac)
- Quadro 4000 (Mac)
- FX 4800
- FX 5600
If WebDriver works → CUDA PyTorch works.
- Architecture overview: docs/architecture_overview.md
- Troubleshooting (WebDriver, CUDA, cuDNN, sm_61, nvcc, SIP): docs/troubleshooting.md
- Performance benchmarks (matmul, transformer, kernels): docs/performance_benchmarks.md
- Compatibility matrix (GPU list & tiers): docs/compatibility_matrix.md
- Rebuild from source (wheel reproduction): docs/rebuild_from_source.md
- Historical notes (macOS CUDA deprecation timeline): docs/historical_notes.md
Preservation artifacts: archive/ (build logs, NVCC flags, CMake cache, wheel hashes, CUDA lib hashes, test outputs)
Environment reproducibility: environment/ (pip JSON, YAML, full CUDA/cuDNN dumps, system specs)
Schemas for CI validation: schemas/
Direct wheel download:
https://github.com/careunix/PyTorch-HighSierra-CUDA-Revival/releases/download/v0.1.0/torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl
Option A — install directly from URL:

```bash
pip install "https://github.com/careunix/PyTorch-HighSierra-CUDA-Revival/releases/download/v0.1.0/torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl"
```

Option B — download then install:

```bash
curl -L -o torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl \
  https://github.com/careunix/PyTorch-HighSierra-CUDA-Revival/releases/download/v0.1.0/torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl
pip install ./torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl
```

SHA256 should be:
38da4acfe780a041b1f73f67c66efcdb37e9773615446f6a02ed2586f3cff9c7
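The digest check can also be scripted; a minimal sketch using only the Python standard library (the filename and expected digest are the ones from the release above):

```python
# Minimal SHA256 verification sketch using only the standard library.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so a large wheel never has to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "38da4acfe780a041b1f73f67c66efcdb37e9773615446f6a02ed2586f3cff9c7"
# digest = sha256_of("torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl")
# assert digest == EXPECTED, "hash mismatch; do not install this wheel"
```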
Check locally:

```bash
shasum -a 256 torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl | awk '{print $1}'
```

Verify GPG signature (public key is in repo):

```bash
# Import public key (from repo)
gpg --import archive/keys/careunix.pub.asc

# Or fetch raw key without cloning
curl -L -o careunix.pub.asc \
  https://raw.githubusercontent.com/careunix/PyTorch-HighSierra-CUDA-Revival/main/archive/keys/careunix.pub.asc
gpg --import careunix.pub.asc

# Verify detached signature (from repo)
curl -L -o SIGNATURE.asc \
  https://raw.githubusercontent.com/careunix/PyTorch-HighSierra-CUDA-Revival/main/wheel/SIGNATURE.asc
gpg --verify SIGNATURE.asc torch-1.7.0a0-cp38-cp38-macosx_10_13_x86_64.whl
```

Smoke-test the install:

```python
import torch
print("CUDA:", torch.cuda.is_available())
print("GPU:", torch.cuda.get_device_name(0))
```

Expected:

```
CUDA: True
GPU: GeForce GTX 1060
```
```bash
$ nvcc --version
Cuda compilation tools, release 10.2, V10.2.89

$ python --version
Python 3.8.20

$ conda --version
conda 23.5.2

$ python -m pip --version
pip 24.2 (Python 3.8)
```
This wheel was built and validated inside the environment above. Any system matching these versions (or close variants) should work.
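For comparing a machine against these numbers, here is a simplified, hypothetical stand-in for the repo's `scripts/system_probe.py` (the real script also reports GPU, driver, CUDA, and cuDNN details) built from the standard library alone:

```python
# Simplified environment probe: interpreter, OS, and pip versions as JSON.
# This is a sketch, not the repository's actual system_probe.py.
import json
import platform
import subprocess
import sys

def probe() -> dict:
    info = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    try:
        pip = subprocess.run(
            [sys.executable, "-m", "pip", "--version"],
            capture_output=True, text=True, check=True,
        )
        info["pip"] = pip.stdout.strip()
    except Exception:
        info["pip"] = None  # pip missing or broken; record that instead of crashing
    return info

print(json.dumps(probe(), indent=2))
```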
This section documents the exact Python package list used during the successful compilation of:
- PyTorch 1.7.0a0 (custom CUDA build)
- CUDA 10.2
- cuDNN 7.6.5
- Python 3.8.20
- Conda 23.5.2
Environment name: gpt2env
Primary developer: careunix
These versions are confirmed 100% compatible and form a fully reproducible build environment.
You can generate the same format with:
```bash
python -m pip list --format=json > pip_environment.json
```

(Direct JSON output from careunix’s functional build environment)
[
{"name": "accelerate", "version": "1.0.1"},
{"name": "autocommand", "version": "2.2.2"},
{"name": "backports.tarfile", "version": "1.2.0"},
{"name": "Brotli", "version": "1.0.9"},
{"name": "certifi", "version": "2024.8.30"},
{"name": "cffi", "version": "1.17.1"},
{"name": "charset-normalizer", "version": "3.3.2"},
{"name": "click", "version": "8.1.8"},
{"name": "filelock", "version": "3.16.1"},
{"name": "future", "version": "0.18.3"},
{"name": "huggingface-hub", "version": "0.0.12"},
{"name": "idna", "version": "3.7"},
{"name": "importlib_metadata", "version": "8.0.0"},
{"name": "importlib_resources", "version": "6.4.0"},
{"name": "inflect", "version": "7.3.1"},
{"name": "jaraco.collections", "version": "5.1.0"},
{"name": "jaraco.context", "version": "5.3.0"},
{"name": "jaraco.functools", "version": "4.0.1"},
{"name": "jaraco.text", "version": "3.12.1"},
{"name": "joblib", "version": "1.4.2"},
{"name": "mkl-fft", "version": "1.3.8"},
{"name": "mkl-random", "version": "1.2.4"},
{"name": "mkl-service", "version": "2.4.0"},
{"name": "more-itertools", "version": "10.3.0"},
{"name": "numpy", "version": "1.23.5"},
{"name": "packaging", "version": "25.0"},
{"name": "pip", "version": "24.2"},
{"name": "platformdirs", "version": "4.2.2"},
{"name": "pycparser", "version": "2.21"},
{"name": "PySocks", "version": "1.7.1"},
{"name": "PyYAML", "version": "6.0.2"},
{"name": "regex", "version": "2024.11.6"},
{"name": "requests", "version": "2.32.3"},
{"name": "sacremoses", "version": "0.1.1"},
{"name": "setuptools", "version": "75.1.0"},
{"name": "six", "version": "1.16.0"},
{"name": "tokenizers", "version": "0.10.3"},
{"name": "tomli", "version": "2.0.1"},
{
"name": "torch",
"version": "1.7.0a0",
"editable_project_location": "/Users/careunix/pytorch"
},
{"name": "tqdm", "version": "4.67.1"},
{"name": "transformers", "version": "4.8.2"},
{"name": "typeguard", "version": "4.3.0"},
{"name": "typing_extensions", "version": "4.11.0"},
{"name": "urllib3", "version": "2.2.3"},
{"name": "wheel", "version": "0.44.0"},
{"name": "zipp", "version": "3.19.2"}
]

- This list ensures your build can be identically recreated.
- This project is part of the CUDA Revival effort led by careunix.
- If you fork the main repo, keep this section for scientific reproducibility.
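To check a live environment against `pip_environment.json`, a small diff sketch (standard library only; `importlib.metadata` ships with Python 3.8):

```python
# Sketch: compare installed package versions against a pip JSON dump.
# Assumes the format produced by `pip list --format=json`.
import json
from importlib import metadata

def diff_against_dump(dump: list) -> dict:
    """Return {package: (expected, found)} for every mismatch or absence."""
    mismatches = {}
    for entry in dump:
        name, expected = entry["name"], entry["version"]
        try:
            found = metadata.version(name)
        except metadata.PackageNotFoundError:
            found = None  # package not installed at all
        if found != expected:
            mismatches[name] = (expected, found)
    return mismatches

# Usage against the real dump:
# with open("pip_environment.json") as f:
#     print(diff_against_dump(json.load(f)))
```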
- Run all tests and capture outputs: `bash scripts/run_all_tests.sh`
- Export environment to JSON + YAML: `bash scripts/export_env.sh`
- System probe (GPU, driver, CUDA, cuDNN → JSON): `python scripts/system_probe.py --out environment/system_specs.json`
- Validate wheel integrity and tags: `python scripts/validate_whl.py`
CI pipelines: link checks, style (black + flake8), CPU fallback tests, JSON schema validation.
```python
import torch

a = torch.randn(10000, 10000, device="cuda")
b = torch.randn(10000, 10000, device="cuda")
c = torch.mm(a, b)
torch.cuda.synchronize()
print("OK:", c.shape)
```

Confirms:
- GPU kernels
- Full BLAS acceleration
- CUDA stability
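Putting a wall-clock number on the matmul above takes care, because CUDA kernel launches are asynchronous: without a synchronize step the clock stops before the GPU finishes. A hedged timing helper (plain `time.perf_counter`, with an optional sync callback for the GPU case):

```python
# Timing sketch: CUDA launches are asynchronous, so a synchronize call
# must run before the clock stops or the measurement is meaningless.
import time

def time_op(fn, repeats: int = 5, sync=None) -> float:
    """Best-of-N wall time in seconds; pass sync=torch.cuda.synchronize on GPU."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        if sync is not None:
            sync()  # wait for queued GPU work before reading the clock
        best = min(best, time.perf_counter() - start)
    return best

# Usage against the block above (assumes torch and CUDA are present):
# t = time_op(lambda: torch.mm(a, b), sync=torch.cuda.synchronize)
# print(f"10000x10000 matmul: {t:.3f} s")
```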
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

device = "cuda"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").to(device)

inp = tokenizer.encode("Once upon a time", return_tensors="pt").to(device)
out = model.generate(inp, max_length=60)
print(tokenizer.decode(out[0]))
```

- libcudart
- libcublas
- libcufft
- libcurand
- libcusolver
- libcusparse
- nvrtc
- nvToolsExt
- cuDNN 7.6.5
- MKLDNN
- QNNPACK
- NNPACK
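A quick way to confirm which of these backends a given torch build actually exposes is a small probe; this is a sketch that degrades gracefully when torch is not installed:

```python
# Backend probe sketch: reports what the imported torch build links against.
import importlib.util

def torch_backend_report() -> dict:
    if importlib.util.find_spec("torch") is None:
        return {"torch": None}  # torch not installed in this environment
    import torch
    report = {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "cuda_version": torch.version.cuda,  # None on a CPU-only build
    }
    try:
        report["cudnn"] = torch.backends.cudnn.version()
    except Exception:
        report["cudnn"] = None
    return report

print(torch_backend_report())
```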
```
CUDA version: 10.2
cuDNN version: 7.6.5
GPU arch: sm_61
USE_CUDA: ON
USE_CUDNN: ON
USE_MKLDNN: ON
USE_NNPACK: ON
USE_QNNPACK: ON
Build Type: Release
```
If CUDA is not available:
- The wheel still imports normally
- PyTorch works fully (CPU backend)
- No configuration needed
This makes the package safe for any High Sierra installation.
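Because of this fallback, scripts can select the device unconditionally; a minimal sketch of the usual pattern:

```python
# Device-selection sketch: the same script runs whether or not CUDA is usable.
def pick_device(cuda_available: bool) -> str:
    """Factored into a function so the choice is testable without a GPU."""
    return "cuda" if cuda_available else "cpu"

# Usage (assumes torch is installed):
# import torch
# device = pick_device(torch.cuda.is_available())
# x = torch.randn(8, 8, device=device)
```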
- Archive & preserve the last NVIDIA macOS CUDA stack
- Keep High Sierra CUDA systems alive
- Support researchers using Pascal GPUs
- Provide a fully reproducible environment
- Document a disappeared ecosystem
This repository is digital preservation.
You may contribute via:
- GPU compatibility reports
- Benchmarks
- Documentation improvements
- Mirrors / backups / alternative builds
MIT License
Preserving the final NVIDIA‑powered AI stack ever possible on macOS.
Pascal lives. careunix ;)


