From e115c24d455ccff1da74e40732369e0a23c2b70e Mon Sep 17 00:00:00 2001 From: Vivek Bharadwaj Date: Mon, 23 Feb 2026 23:47:49 -0800 Subject: [PATCH 1/2] One more attempt at modifying the license and readme. --- CHANGELOG.md | 4 +- openequivariance/LICENSE | 29 ++++- openequivariance/README.md | 221 ++++++++++++++++++++++++++++++++++++- 3 files changed, 250 insertions(+), 4 deletions(-) mode change 120000 => 100644 openequivariance/LICENSE mode change 120000 => 100644 openequivariance/README.md diff --git a/CHANGELOG.md b/CHANGELOG.md index dd37f051..dfe58c69 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,9 +1,9 @@ ## Latest Changes ### v0.6.1 (2025-02-23) -OpenEquivariance v0.6.1 brings long-needed improvements to the +OpenEquivariance v0.6.2 brings long-needed improvements to the PyTorch frontend. We strongly encourage all users to upgrade -to PyTorch 2.10 and OEQ v0.6.1. +to PyTorch 2.10 and OEQ v0.6.2. **Added**: - OpenEquivariance triggers a build of the CUDA extension module diff --git a/openequivariance/LICENSE b/openequivariance/LICENSE deleted file mode 120000 index ea5b6064..00000000 --- a/openequivariance/LICENSE +++ /dev/null @@ -1 +0,0 @@ -../LICENSE \ No newline at end of file diff --git a/openequivariance/LICENSE b/openequivariance/LICENSE new file mode 100644 index 00000000..2b293818 --- /dev/null +++ b/openequivariance/LICENSE @@ -0,0 +1,28 @@ +BSD 3-Clause License + +Copyright (c) 2025, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +2. 
Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/openequivariance/README.md b/openequivariance/README.md
deleted file mode 120000
index 32d46ee8..00000000
--- a/openequivariance/README.md
+++ /dev/null
@@ -1 +0,0 @@
-../README.md
\ No newline at end of file
diff --git a/openequivariance/README.md b/openequivariance/README.md
new file mode 100644
index 00000000..3efe59a7
--- /dev/null
+++ b/openequivariance/README.md
@@ -0,0 +1,220 @@
+# OpenEquivariance
+[![OEQ C++ Extension Build Verification](https://github.com/PASSIONLab/OpenEquivariance/actions/workflows/verify_extension_build.yml/badge.svg?event=push)](https://github.com/PASSIONLab/OpenEquivariance/actions/workflows/verify_extension_build.yml)
+[![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
+
+[[PyTorch Examples]](#pytorch-examples)
+[[JAX Examples]](#jax-examples)
+[[Citation and Acknowledgements]](#citation-and-acknowledgements)
+
+OpenEquivariance is a CUDA and HIP kernel generator for the Clebsch-Gordan tensor product,
+a key kernel in rotation-equivariant deep neural networks.
+It implements a subset of the tensor products
+that [e3nn](https://e3nn.org/) supports,
+including those commonly found in graph neural networks
+(e.g. [NequIP](https://github.com/mir-group/nequip) or
+[MACE](https://github.com/ACEsuit/mace)). To get
+started with PyTorch, ensure that you have PyTorch
+and GCC 9+ available before installing our package via
+
+```bash
+pip install openequivariance
+```
+
+We provide up to an order of magnitude acceleration over e3nn and perform on par with the latest
+version of [NVIDIA cuEquivariance](https://github.com/NVIDIA/cuEquivariance),
+which has a closed-source kernel package.
+We also offer fused equivariant graph
+convolutions that can reduce
+computation and memory consumption significantly.
+
+For detailed instructions on tests, benchmarks, MACE / NequIP, and our API,
+check out the [documentation](https://passionlab.github.io/OpenEquivariance).
+ +⭐️ **JAX**: Our latest update brings +support for JAX. For NVIDIA GPUs, +install it (after installing JAX) +with the following two commands strictly in order: + +``` bash +pip install openequivariance[jax] +pip install openequivariance_extjax --no-build-isolation +``` + +For AMD GPUs: +``` bash +pip install openequivariance[jax] +JAX_HIP=1 pip install openequivariance_extjax --no-build-isolation +``` + +See the section below for example usage and +our [API page](https://passionlab.github.io/OpenEquivariance/api/) for more details. + +## PyTorch Examples +Here's a CG tensor product implemented by e3nn: + +```python +import torch +import e3nn.o3 as o3 + +gen = torch.Generator(device='cuda') + +batch_size = 1000 +X_ir, Y_ir, Z_ir = o3.Irreps("1x2e"), o3.Irreps("1x3e"), o3.Irreps("1x2e") +X = torch.rand(batch_size, X_ir.dim, device='cuda', generator=gen) +Y = torch.rand(batch_size, Y_ir.dim, device='cuda', generator=gen) + +instructions=[(0, 0, 0, "uvu", True)] + +tp_e3nn = o3.TensorProduct(X_ir, Y_ir, Z_ir, instructions, + shared_weights=False, internal_weights=False).to('cuda') +W = torch.rand(batch_size, tp_e3nn.weight_numel, device='cuda', generator=gen) + +Z = tp_e3nn(X, Y, W) +print(torch.norm(Z)) +``` + +And here's the same tensor product using openequivariance. We require that your +tensors are stored on a CUDA device for this to work: + +```python +import openequivariance as oeq + +problem = oeq.TPProblem(X_ir, Y_ir, Z_ir, instructions, shared_weights=False, internal_weights=False) +tp_fast = oeq.TensorProduct(problem, torch_op=True) + +Z = tp_fast(X, Y, W) # Reuse X, Y, W from earlier +print(torch.norm(Z)) +``` + +Our interface for `oeq.TPProblem` is almost a strict superset of +`o3.TensorProduct` (two key differences: we +impose `internal_weights=False` and add support for multiple datatypes). +You can pass e3nn `Irreps` instances directly or +use `oeq.Irreps`, which is identical. 
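+To make the `Irreps` notation above concrete, here is a minimal, hypothetical
+helper (not part of `openequivariance` or e3nn — illustration only) showing how
+the dimension of an irreps string arises: each term `NxLp` (multiplicity `N`,
+degree `L`, parity `p`) contributes `N * (2L + 1)` components.
+
+```python
+def irreps_dim(irreps: str) -> int:
+    """Illustrative only: dimension of an irreps string such as
+    "1x2e" or "2x1o + 1x0e". Each term NxLp contributes N * (2L + 1)."""
+    total = 0
+    for term in irreps.split("+"):
+        mult, ir = term.strip().split("x")
+        degree = int(ir[:-1])  # drop the trailing parity character ('e' or 'o')
+        total += int(mult) * (2 * degree + 1)
+    return total
+
+print(irreps_dim("1x2e"))  # 5, matching X_ir.dim in the example above
+print(irreps_dim("1x3e"))  # 7, matching Y_ir.dim
+```
+
+This is why `X` above has 5 columns per batch element: a single l=2 irrep has
+2(2) + 1 = 5 components.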
+
+We recommend reading the [e3nn documentation and API reference](https://docs.e3nn.org/en/latest/) first, then using our kernels
+as drop-in replacements. We support most "uvu" and "uvw" tensor products;
+see [this section](#tensor-products-we-accelerate) for an up-to-date list of supported configurations.
+
+**Important**: For many configurations, our kernels return results identical to
+e3nn up to floating point roundoff (this includes all "uvu" problems with
+multiplicity 1 for all irreps in the second input). For other configurations
+(e.g. any "uvw" connection modes), we return identical
+results up to a well-defined reordering of the weights relative to e3nn.
+
+If you're executing tensor products as part of a message passing graph
+neural network, we offer fused kernels that save both memory and compute time:
+
+```python
+from torch_geometric import EdgeIndex
+
+node_ct, nonzero_ct = 3, 4
+
+# Receiver, sender indices for message passing GNN
+edge_index = EdgeIndex(
+    [[0, 1, 1, 2],  # Receiver
+     [1, 0, 2, 1]], # Sender
+    device='cuda',
+    dtype=torch.long)
+
+X = torch.rand(node_ct, X_ir.dim, device='cuda', generator=gen)
+Y = torch.rand(nonzero_ct, Y_ir.dim, device='cuda', generator=gen)
+W = torch.rand(nonzero_ct, problem.weight_numel, device='cuda', generator=gen)
+
+tp_conv = oeq.TensorProductConv(problem, torch_op=True, deterministic=False)  # Reuse problem from earlier
+Z = tp_conv.forward(X, Y, W, edge_index[0], edge_index[1])  # Z has shape [node_ct, Z_ir.dim]
+print(torch.norm(Z))
+```
+
+If you can guarantee `EdgeIndex` is sorted by receiver index and supply the transpose
+permutation, we can provide even greater speedup (and deterministic results)
+by avoiding atomics:
+
+```python
+_, sender_perm = edge_index.sort_by("col")             # Sort by sender index
+edge_index, receiver_perm = edge_index.sort_by("row")  # Sort by receiver index
+
+# Now we can use the faster deterministic algorithm
+tp_conv = oeq.TensorProductConv(problem, torch_op=True,
deterministic=True)
+Z = tp_conv.forward(X, Y[receiver_perm], W[receiver_perm], edge_index[0], edge_index[1], sender_perm)
+print(torch.norm(Z))
+```
+**Note**: You don't need PyTorch Geometric to use our kernels. When
+`deterministic=False`, the `sender` and `receiver` indices can have
+arbitrary order.
+
+## JAX Examples
+After installation, use the library
+as follows. Set `OEQ_NOTORCH=1`
+in your environment to avoid the PyTorch import in
+the regular `openequivariance` package.
+```python
+import jax
+import os
+
+os.environ["OEQ_NOTORCH"] = "1"
+import openequivariance as oeq
+
+seed = 42
+key = jax.random.PRNGKey(seed)
+
+batch_size = 1000
+X_ir, Y_ir, Z_ir = oeq.Irreps("1x2e"), oeq.Irreps("1x3e"), oeq.Irreps("1x2e")
+problem = oeq.TPProblem(X_ir, Y_ir, Z_ir, [(0, 0, 0, "uvu", True)], shared_weights=False, internal_weights=False)
+
+node_ct, nonzero_ct = 3, 4
+edge_index = jax.numpy.array(
+    [
+        [0, 1, 1, 2],
+        [1, 0, 2, 1],
+    ],
+    dtype=jax.numpy.int32,  # NOTE: This is int32, not int64
+)
+
+X = jax.random.uniform(key, shape=(node_ct, X_ir.dim), minval=0.0, maxval=1.0, dtype=jax.numpy.float32)
+Y = jax.random.uniform(key, shape=(nonzero_ct, Y_ir.dim),
+                       minval=0.0, maxval=1.0, dtype=jax.numpy.float32)
+W = jax.random.uniform(key, shape=(nonzero_ct, problem.weight_numel),
+                       minval=0.0, maxval=1.0, dtype=jax.numpy.float32)
+
+tp_conv = oeq.jax.TensorProductConv(problem, deterministic=False)
+Z = tp_conv.forward(
+    X, Y, W, edge_index[0], edge_index[1]
+)
+print(jax.numpy.linalg.norm(Z))
+
+# Test JAX JIT
+jitted = jax.jit(lambda X, Y, W, e1, e2: tp_conv.forward(X, Y, W, e1, e2))
+print(jax.numpy.linalg.norm(jitted(X, Y, W, edge_index[0], edge_index[1])))
+```
+
+## Citation and Acknowledgements
+If you find this code useful, please cite our paper:
+
+```bibtex
+@inbook{openequivariance,
+author={Vivek Bharadwaj and Austin Glover and Aydin Buluc and James Demmel},
+title={An Efficient Sparse Kernel Generator for O(3)-Equivariant Deep Networks},
+booktitle =
{SIAM Conference on Applied and Computational Discrete Algorithms (ACDA25)},
+chapter = {},
+url={https://arxiv.org/abs/2501.13986},
+publisher={Society for Industrial and Applied Mathematics},
+year={2025}
+}
+```
+
+Our codebase includes a lightweight clone of
+[e3nn](https://e3nn.org/)'s frontend interface (in particular, the
+`TensorProduct` and `Irreps` classes). We removed references to PyTorch
+and separated the implementation from the problem description (for future
+frontend support outside of torch). We also extracted the Wigner 3j
+tensor generation code from QuTiP. Thank you to the current
+developers and maintainers!
+
+## Copyright
+
+Copyright (c) 2025, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
+
+If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at IPO@lbl.gov.
+
+NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.
\ No newline at end of file

From d0fb5a386488bb5154bb1817f170c0ed4498777e Mon Sep 17 00:00:00 2001
From: Vivek Bharadwaj
Date: Mon, 23 Feb 2026 23:54:22 -0800
Subject: [PATCH 2/2] Updated release.yaml.
--- .github/workflows/release.yaml | 4 ---- CHANGELOG.md | 2 +- 2 files changed, 1 insertion(+), 5 deletions(-) diff --git a/.github/workflows/release.yaml b/.github/workflows/release.yaml index a88e2f5c..0a8d3315 100644 --- a/.github/workflows/release.yaml +++ b/.github/workflows/release.yaml @@ -16,10 +16,6 @@ jobs: - name: install dependencies, then build source tarball run: | cd openequivariance - rm LICENSE - rm README.md - cp ../LICENSE . - cp ../README.md . python3 -m pip install build python3 -m build --sdist - name: store the distribution packages diff --git a/CHANGELOG.md b/CHANGELOG.md index dfe58c69..00d3156b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,6 @@ ## Latest Changes -### v0.6.1 (2025-02-23) +### v0.6.2 (2025-02-23) OpenEquivariance v0.6.2 brings long-needed improvements to the PyTorch frontend. We strongly encourage all users to upgrade to PyTorch 2.10 and OEQ v0.6.2.