feat(tidy3d): FXC-3693-triangle-mesh-support-for-adjoint #2962
base: develop
Conversation
5 files reviewed, 4 comments
Force-pushed 862e0c2 to 427fe09
Diff Coverage
Diff: origin/develop...HEAD, staged and unstaged changes
Summary:
- tidy3d/components/data/data_array.py
- tidy3d/components/geometry/mesh.py
- tidy3d/components/geometry/primitives.py
Force-pushed 0f34a50 to 3a8d240
Force-pushed 26bbd4e to e124a1e
Force-pushed e124a1e to ac9b7f5
10 files reviewed, 4 comments
yaugenst-flex left a comment
Thanks @marcorudolphflex this is awesome!
```python
@staticmethod
def _maybe_convert_object_boxes(data):
    """Convert object arrays of autograd boxes into ArrayBox instances."""
    if isinstance(data, np.ndarray) and data.dtype == np.object_ and data.size:
        # only convert if at least one element is an autograd tracer
        if any(isbox(item) for item in data.flat):
            return anp.array(data.tolist())
    return data
```
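For illustration, here is a numpy-only sketch of what this conversion does. Autograd is deliberately left out: in the real method `anp.array` plays the role of `np.array` so that ArrayBox elements stay traced, but the object-array mechanics are the same.

```python
import numpy as np

# Build an object array whose elements are row arrays, mimicking the shape
# the validator receives (a sketch without autograd boxes).
rows = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
data = np.empty(len(rows), dtype=object)
for i, row in enumerate(rows):
    data[i] = row

# dtype('O') has no numeric layout (and no native HDF5 equivalent, as the
# traceback below shows); round-tripping through tolist() rebuilds a
# regular numeric array.
converted = np.array(data.tolist())
```

After the round trip, `converted` is a plain `(2, 3)` float array that downstream code (hashing, HDF5 export) can handle.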
Why is this necessary?
don't really get it either. In the context of hashing in run(), this happens with the old code:
```text
tidy3d/components/base.py:922: in add_data_to_file
    value.to_hdf5(fname=f_handle, group_path=subpath)
tidy3d/components/data/data_array.py:237: in to_hdf5
    self.to_hdf5_handle(f_handle=fname, group_path=group_path)
tidy3d/components/data/data_array.py:243: in to_hdf5_handle
    sub_group[DATA_ARRAY_VALUE_NAME] = get_static(self.data)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../.cache/pypoetry/virtualenvs/tidy3d-fpwg7McD-py3.12/lib/python3.12/site-packages/h5py/_hl/group.py:493: in __setitem__
    ds = self.create_dataset(None, data=obj)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../.cache/pypoetry/virtualenvs/tidy3d-fpwg7McD-py3.12/lib/python3.12/site-packages/h5py/_hl/group.py:193: in create_dataset
    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../.cache/pypoetry/virtualenvs/tidy3d-fpwg7McD-py3.12/lib/python3.12/site-packages/h5py/_hl/dataset.py:90: in make_new_dset
    tid = h5t.py_create(dtype, logical=1)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
h5py/h5t.pyx:1674: in h5py.h5t.py_create
    ???
h5py/h5t.pyx:1698: in h5py.h5t.py_create
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>   ???
E   TypeError: Object dtype dtype('O') has no native HDF5 equivalent
h5py/h5t.pyx:1765: TypeError
```
```python
# geometry parameters. ``trimesh`` expects plain ``float`` values, so strip any
# tracing information before constructing the mesh.
triangles = np.array(triangles)
if triangles.dtype == np.object_:
```
Why are we checking against np.object_ type here instead of for ArrayBox? Isn't this too broad?
Also, I'm wondering when this happens. Generally we shouldn't even be dealing with object arrays in the first place...
This is the strange pydantic bug I mentioned one time. It only happens on validation of the full simulation, not when validating the mesh by itself. At that point this function gets a plain np.ndarray, so validation would fail with a check against ArrayBox:
```text
test_autograd_box_polyslab_numerical.py:232: in run_parameter_simulations
    sim = base_sim.updated_copy(structures=[structure])
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../../../tidy3d/components/base.py:378: in updated_copy
    return self._updated_copy(**kwargs, deep=deep, validate=validate)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../../../tidy3d/components/base.py:423: in _updated_copy
    return self.copy(update=kwargs, deep=deep, validate=validate)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../../../tidy3d/components/base.py:354: in copy
    return self.validate(new_copy.dict())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
virtualenvs/tidy3d-fpwg7McD-py3.12/lib/python3.12/site-packages/pydantic/v1/main.py:717: in validate
    return cls(**value)
    ^^^^^^^^^^^^
../../../../tidy3d/components/base.py:187: in __init__
    super().__init__(**kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
__pydantic_self__ = Simulation()
data = {'attrs': {}, 'boundary_spec': {'attrs': {}, 'type': 'BoundarySpec', 'x': {'attrs': {}, 'minus': {'attrs': {}, 'name':...rs': {}, 'name': None, 'type': 'Periodic'}, 'type': 'Boundary'}, ...}, 'center': (0.0, 0.0, 0.0), 'courant': 0.99, ...}
values = {'attrs': {}, 'boundary_spec': BoundarySpec(attrs={}, x=Boundary(attrs={}, plus=Periodic(attrs={}, name=None, type='Pe...lpha_min=0.0, alpha_max=0.0)), type='Boundary'), type='BoundarySpec'), 'center': (0.0, 0.0, 0.0), 'courant': 0.99, ...}
fields_set = {'attrs', 'boundary_spec', 'center', 'courant', 'grid_spec', 'internal_absorbers', ...}
validation_error = ValidationError(model='Simulation', errors=[{'loc': ('structures',), 'msg': 'setting an array element with a sequence.', 'type': 'value_error'}])

    def __init__(__pydantic_self__, **data: Any) -> None:
        """
        Create a new model by parsing and validating input data from keyword arguments.
        Raises ValidationError if the input data cannot be parsed to form a valid model.
        """
        # Uses something other than `self` the first arg to allow "self" as a settable attribute
        values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
        if validation_error:
>           raise validation_error
E           pydantic.v1.error_wrappers.ValidationError: 1 validation error for Simulation
E           structures
E             setting an array element with a sequence. (type=value_error)

virtualenvs/tidy3d-fpwg7McD-py3.12/lib/python3.12/site-packages/pydantic/v1/main.py:347: ValidationError
```
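The numpy error that pydantic re-surfaces here can be reproduced in isolation (a minimal sketch, unrelated to tidy3d itself): when a nested sequence has no rectangular layout, coercion to a numeric array fails with exactly this message.

```python
import numpy as np

# A ragged nested sequence has no rectangular numeric layout, so numpy
# raises the same "setting an array element with a sequence" ValueError
# that pydantic wraps into the validation error above.
message = ""
try:
    np.array([1.0, [2.0, 3.0]], dtype=float)
except ValueError as err:
    message = str(err)
```

This is presumably why the validator sees an object array in the first place: mixing traced and untraced row entries defeats numpy's rectangular coercion.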
```python
def _get_barycentric_samples(self, subdivisions: int, dtype: np.dtype) -> np.ndarray:
    """Return barycentric sample coordinates for a subdivision level."""
    cache = self._barycentric_samples
```
Would it make sense to use a cached property here instead of a new private model field?
doesn't work with the subdivisions and dtype arguments, right? Alternatively, do we want to build a cache variant that takes arguments?
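One possibility (a sketch with invented names and an invented sampling scheme, not the PR's code): move the computation to a module-level helper and key it on the hashable arguments with `functools.lru_cache`, passing the dtype by name so the cache key stays simple.

```python
from functools import lru_cache

import numpy as np

@lru_cache(maxsize=None)
def barycentric_samples(subdivisions: int, dtype_name: str) -> np.ndarray:
    """Return barycentric sample coordinates for a subdivision level.

    Cached per (subdivisions, dtype_name); both arguments are hashable,
    which is what a plain cached_property cannot express.
    """
    dtype = np.dtype(dtype_name)
    n = subdivisions + 2  # points per triangle edge (illustrative choice)
    coords = []
    for i in range(n):
        for j in range(n - i):
            coords.append((i, j, n - 1 - i - j))
    # normalize so each row of barycentric weights sums to 1
    return np.array(coords, dtype=dtype) / (n - 1)
```

Repeated calls with the same arguments return the identical cached array, so the lookup cost disappears after the first use.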
```python
    vjps[("mesh_dataset", "surface_mesh")] = triangle_grads
    return vjps

def _collect_surface_samples(
```
This method is preeetty complex, can we split it up? At least it should be possible to split out the 2d and 3d sampling?
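A possible shape for that split (a hypothetical sketch with invented names and placeholder bodies, not the PR's code): a thin dispatcher that routes to one helper per case, so the 2D (collapsed-axis) and 3D sampling paths can be read and tested independently.

```python
from typing import Optional

import numpy as np

def collect_surface_samples(triangles: np.ndarray, collapsed_axis: Optional[int]) -> dict:
    """Dispatch to the 2D or 3D sampling helper based on the simulation."""
    if collapsed_axis is not None:
        return _sample_plane_segments(triangles, collapsed_axis)
    return _sample_triangles_3d(triangles)

def _sample_plane_segments(triangles: np.ndarray, axis: int) -> dict:
    # 2D case: the real code would intersect each triangle with the
    # collapsed plane and sample along the segments; centroids projected
    # onto the plane stand in as a placeholder here.
    points = triangles.mean(axis=1)
    points[:, axis] = 0.0
    return {"points": points, "mode": "2d"}

def _sample_triangles_3d(triangles: np.ndarray) -> dict:
    # 3D case: the real code would do adaptive barycentric subdivision;
    # centroids stand in as a placeholder here.
    return {"points": triangles.mean(axis=1), "mode": "3d"}
```

Each helper then has a single sampling strategy and a single set of edge cases, which also makes unit-testing the 2D path without a full 3D setup straightforward.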
Force-pushed ac9b7f5 to 253795f
Force-pushed 253795f to a052631
Greptile Overview
Greptile Summary
This PR implements autograd support for `TriangleMesh` geometries, enabling gradient-based optimization of arbitrary mesh surfaces in inverse design workflows.

Key changes:
- `_compute_derivatives` in `TriangleMesh` samples triangle surfaces adaptively, evaluates boundary gradients via the existing permittivity-gradient infrastructure, and accumulates sensitivities onto per-vertex data arrays using barycentric interpolation
- `DataArray.__init__` converts numpy object arrays containing autograd boxes, enabling traced geometry parameters to flow through mesh construction
- `to_stl` export method for `TriangleMesh` with binary/ASCII format support

Issues found:
- some floating-point comparisons should use a tolerance-based check (e.g. `np.isclose` or `<=`) instead of `==`

Confidence Score: 4/5
Important Files Changed
File Analysis
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant Autograd
    participant TriangleMesh
    participant DerivativeInfo
    participant Sampling
    participant Interpolator
    participant GradAccum
    User->>Autograd: Define objective with TriangleMesh parameters
    Autograd->>TriangleMesh: Forward pass (construct geometry)
    TriangleMesh->>TriangleMesh: _triangles_to_trimesh (strip autograd boxes)
    Note over TriangleMesh: Convert ArrayBox to static arrays<br/>for trimesh compatibility
    Autograd->>TriangleMesh: Backward pass (_compute_derivatives)
    TriangleMesh->>TriangleMesh: Validate mesh_dataset exists
    TriangleMesh->>DerivativeInfo: Check simulation_bounds
    alt Mesh outside simulation
        TriangleMesh-->>Autograd: Return zero gradients
    else Mesh intersects simulation
        TriangleMesh->>Sampling: _collect_surface_samples(triangles, spacing, bounds)
        loop For each triangle face
            Sampling->>Sampling: Compute area and normal
            alt Collapsed axis (2D simulation)
                Sampling->>Sampling: _triangle_plane_segments (intersect with plane)
                Sampling->>Sampling: Sample along intersection segments
            else Full 3D
                Sampling->>Sampling: _subdivision_count (adaptive based on area + edges)
                Sampling->>Sampling: _get_barycentric_samples (cached patterns)
            end
            Sampling->>Sampling: Filter samples inside simulation bounds
        end
        Sampling-->>TriangleMesh: Return {points, normals, perps, weights, faces, barycentric}
        TriangleMesh->>DerivativeInfo: create_interpolators if needed
        DerivativeInfo-->>Interpolator: Build field interpolators
        TriangleMesh->>DerivativeInfo: evaluate_gradient_at_points(samples, interpolators)
        DerivativeInfo->>Interpolator: Interpolate fields at sample points
        Interpolator-->>DerivativeInfo: Return gradient values g
        DerivativeInfo-->>TriangleMesh: Boundary gradient g
        TriangleMesh->>GradAccum: Accumulate per-vertex contributions
        loop For each vertex (0, 1, 2)
            GradAccum->>GradAccum: scaled = (weights * g * normals) * barycentric[:, vertex]
            GradAccum->>GradAccum: np.add.at(triangle_grads[:, vertex], faces, scaled)
        end
        GradAccum-->>TriangleMesh: triangle_grads array
        TriangleMesh-->>Autograd: vjps[("mesh_dataset", "surface_mesh")]
    end
    Autograd-->>User: Final gradients w.r.t. mesh vertices
```

Context used:
- dashboard: Use a tolerance-based comparison (e.g., `np.isclose`) for floating-point numbers instead of direct equ... (source)
- dashboard: In changelogs, enclose code identifiers (class, function names) in backticks and use specific names ... (source)
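The per-vertex accumulation step from the sequence diagram (`np.add.at(triangle_grads[:, vertex], faces, scaled)`) can be illustrated with toy shapes; the array names mirror the diagram, but the values here are made up.

```python
import numpy as np

n_faces, n_samples = 2, 4
faces = np.array([0, 0, 1, 1])                  # owning face index per sample
g = np.array([1.0, 2.0, 3.0, 4.0])              # boundary gradient per sample
weights = np.full(n_samples, 0.5)               # quadrature weight per sample
normals = np.tile([0.0, 0.0, 1.0], (n_samples, 1))
barycentric = np.full((n_samples, 3), 1.0 / 3.0)

# (face, vertex, xyz): each sample's contribution is split across the
# triangle's three vertices by its barycentric coordinates; np.add.at
# scatters with accumulation, so repeated face indices add up instead
# of overwriting each other.
triangle_grads = np.zeros((n_faces, 3, 3))
for vertex in range(3):
    scaled = (weights * g)[:, None] * normals * barycentric[:, vertex, None]
    np.add.at(triangle_grads[:, vertex], faces, scaled)
```

Note that `triangle_grads[:, vertex]` is a basic-indexing view, so `np.add.at` writes through to the full gradient array.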