
v2.0.0


@ktangsali ktangsali released this 10 Mar 23:47
1ca85d6

PhysicsNeMo General Release v2.0.0

📝 NVIDIA PhysicsNeMo v2.0 contains a significant reorganization of all features, with easier installation and integration with external packages. See the migration guide for more details!

Added

  • Refactored diffusion preconditioners in
    physicsnemo.diffusion.preconditioners, now built on a new abstract base
    class, BaseAffinePreconditioner, for preconditioning schemes that use
    affine transformations. Existing preconditioners (VPPrecond, VEPrecond,
    iDDPMPrecond, EDMPrecond) are reimplemented on this new interface.
  • New physicsnemo.experimental.nn.symmetry module that implements building
    blocks preserving 2D and 3D rotational equivariance, using a grid-based
    layout for efficient GPU parallelization and an emphasis on compact
    einsum operations.
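
The affine preconditioning pattern behind the refactor can be illustrated with a minimal, self-contained sketch. This is not the actual `physicsnemo` API — the class names below mirror the release notes but the method names and signatures are hypothetical — it only shows the idea: each preconditioner supplies the affine coefficients, while the base class applies the shared transformation D(x, σ) = c_skip(σ)·x + c_out(σ)·F(c_in(σ)·x, c_noise(σ)). The EDM coefficients follow Karras et al. (2022).

```python
import math
from abc import ABC, abstractmethod


class BaseAffinePreconditionerSketch(ABC):
    """Hypothetical sketch of an affine preconditioner base class.

    Applies D(x, sigma) = c_skip * x + c_out * F(c_in * x, c_noise),
    where the concrete subclass defines the four coefficients.
    """

    def __init__(self, model):
        self.model = model  # the raw network F(x, noise_label)

    @abstractmethod
    def coefficients(self, sigma):
        """Return (c_skip, c_out, c_in, c_noise) for noise level sigma."""

    def __call__(self, x, sigma):
        c_skip, c_out, c_in, c_noise = self.coefficients(sigma)
        return c_skip * x + c_out * self.model(c_in * x, c_noise)


class EDMPrecondSketch(BaseAffinePreconditionerSketch):
    """EDM-style coefficients (Karras et al., 2022); scalar x for brevity."""

    def __init__(self, model, sigma_data=0.5):
        super().__init__(model)
        self.sigma_data = sigma_data

    def coefficients(self, sigma):
        s2 = sigma**2 + self.sigma_data**2
        c_skip = self.sigma_data**2 / s2
        c_out = sigma * self.sigma_data / math.sqrt(s2)
        c_in = 1.0 / math.sqrt(s2)
        c_noise = math.log(sigma) / 4.0
        return c_skip, c_out, c_in, c_noise
```

With this split, adding a new preconditioning scheme only requires implementing `coefficients`, while the affine application logic lives once in the base class.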

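The core property the new symmetry module targets — rotational equivariance — can be demonstrated with a small numpy sketch. This is an illustrative example, not code from `physicsnemo.experimental.nn.symmetry`: mixing the channel axis of vector-valued features with a weight matrix commutes with any rotation acting on the spatial axis, which is the kind of building block an equivariant layer composes via compact einsum operations.

```python
import numpy as np


def channel_mix(v, W):
    """Mix vector features across channels.

    v: (channels, dim) array of vector features.
    W: (out_channels, channels) weight matrix.
    Because W acts only on the channel axis and a rotation R acts only
    on the dim axis, the two operations commute: mix(v @ R.T) == mix(v) @ R.T.
    """
    return np.einsum("oc,cd->od", W, v)


# Verify equivariance under a 2D rotation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 2))   # 4 channels of 2D vectors
W = rng.normal(size=(3, 4))   # mix 4 channels down to 3

rotate_then_mix = channel_mix(v @ R.T, W)
mix_then_rotate = channel_mix(v, W) @ R.T
assert np.allclose(rotate_then_mix, mix_then_rotate)
```

The same commutation argument extends to 3D rotations, since it relies only on the weight matrix and the rotation acting on different tensor axes.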
Changed

  • PhysicsNeMo v2.0 contains a significant reorganization of tools. Please
    see the v2.0-MIGRATION-GUIDE.md to understand what has changed and why.
  • DiT (Diffusion Transformer) has been moved from physicsnemo.experimental.models.dit
    to physicsnemo.models.dit.

Fixed

  • Shape mismatch bug in the Lennard-Jones example

Dependencies

  • CUDA backend is now selected via orthogonal cu12 / cu13 extras rather
    than being hardcoded to CUDA 13. Feature extras (nn-extras, utils-extras,
    etc.) are now CUDA-agnostic and can be combined with either backend, e.g.
    pip install "nvidia-physicsnemo[cu13,nn-extras]". When neither cu12 nor
    cu13 is specified, PyTorch is installed from PyPI using its default build
    (currently CUDA 12.8 on Linux). For development with uv, use
    uv sync --extra cu13 (or --extra cu12) to select the backend.
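
The backend selection described above boils down to a few command variants (the `cu13,nn-extras` combination is taken from the notes; the `cu12` variant follows from the extras being orthogonal):

```shell
# Pick exactly one CUDA backend extra; feature extras combine freely with it.
pip install "nvidia-physicsnemo[cu12,nn-extras]"   # CUDA 12 backend
pip install "nvidia-physicsnemo[cu13,nn-extras]"   # CUDA 13 backend

# No backend extra: PyTorch comes from PyPI with its default build
# (currently CUDA 12.8 on Linux).
pip install nvidia-physicsnemo

# Development with uv: select the backend explicitly.
uv sync --extra cu13   # or: uv sync --extra cu12
```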

Contributors

We’re grateful to everyone who contributed issues, feature ideas, fixes, and documentation updates — your input is what helps us continuously improve PhysicsNeMo for the whole community!
A special shout-out to the authors of the pull requests listed above, in no particular order:

@jleinonen @dran-dev @aayushg55 @saikrishnanc-nv @jeis4wpi @albertocarpentieri @paveltomin @weilr @giprayogo @tonishi-nv @younes-abid @dakhare-creator @Alexey-Kamenev

Thank you ❤️ — we truly appreciate your contributions and hope to see more from you in the future!