autograd compatible s-matrix calculation #2572


Merged: 1 commit into develop, Aug 8, 2025

Conversation

tylerflex (Collaborator) commented Jun 13, 2025

Objective functions involving s-matrix calculations using tidy3d.plugins.smatrix.ComponentModeler are now differentiable with autograd.

Here is an example:

import autograd as ag
import autograd.numpy as np
from tidy3d.plugins.smatrix import ComponentModeler, Port

port_in = Port(
    size=sim.sources[0].size,
    center=sim.sources[0].center,
    direction='+',
    name='in',
)

port_out = Port(
    size=sim.monitors[0].size,
    center=sim.monitors[0].center,
    direction='-',
    name='out',
)

modeler = ComponentModeler(
    simulation=sim.updated_copy(monitors=[], sources=[]),
    ports=[port_in, port_out],
    freqs=[freq0]
)

# ...

def power(center: tuple, size: tuple, eps: float) -> float:
    """Compute power transmitted into 0th order mode given a set of scatterer parameters."""
    sim = make_simulation(center=center, size=size, eps=eps)
    traced_modeler = modeler.updated_copy(simulation=sim.updated_copy(monitors=[], sources=[]))
    smatrix = traced_modeler.run()
    amp = smatrix.sel(port_in='in', port_out='out')
    return np.sum(np.abs(amp)**2).item()

grad = ag.grad(power)(...)
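The gradient function returned by ag.grad plugs into any gradient-based optimizer. As an illustration only — power_toy and its closed-form gradient below are stand-ins for the power objective and ag.grad(power), since evaluating the real objective requires running a tidy3d simulation — a minimal gradient-ascent loop looks like:

```python
def power_toy(eps: float) -> float:
    # toy stand-in for the power objective, peaked at eps = 2.5
    return -(eps - 2.5) ** 2

def grad_power_toy(eps: float) -> float:
    # closed-form gradient, standing in for ag.grad(power)
    return -2.0 * (eps - 2.5)

eps = 1.0
for _ in range(100):
    # gradient ascent: step the scatterer permittivity uphill in transmitted power
    eps = eps + 0.1 * grad_power_toy(eps)

print(round(eps, 3))  # → 2.5
```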

Greptile Summary

Added autograd compatibility to S-matrix calculations in ComponentModeler, enabling gradient-based optimization for photonic component design through automatic differentiation.

  • Modified plugins/smatrix/component_modelers/base.py to use autograd.numpy and run_async instead of non-differentiable batch.run
  • Updated plugins/smatrix/component_modelers/modal.py to use mask-based indexing for S-matrix construction instead of loc indexing
  • Added example in plugins/autograd/README.md demonstrating gradient computation for power transmission objectives
  • Documented feature in CHANGELOG.md under [Unreleased] section
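The mask-based indexing change can be sketched with plain numpy (set_by_mask is an illustrative helper, not tidy3d's actual code): writing S-matrix entries through an out-of-place np.where keeps construction a pure function of its inputs, whereas in-place .loc assignment mutates a traced array and is opaque to autograd.

```python
import numpy as np

def set_by_mask(matrix: np.ndarray, mask: np.ndarray, value: complex) -> np.ndarray:
    """Return a copy of `matrix` with `value` written where `mask` is True.

    Out-of-place updates like this keep the computation a pure function of
    its inputs, which is what autograd needs; in-place `matrix[mask] = value`
    would mutate a traced array and break the gradient chain.
    """
    return np.where(mask, value, matrix)

# toy 2x2 "S-matrix" indexed by (port_out, port_in)
ports = ["in", "out"]
smatrix = np.zeros((2, 2), dtype=complex)

# write S[out, in] via a boolean mask instead of .loc assignment
mask = np.zeros((2, 2), dtype=bool)
mask[ports.index("out"), ports.index("in")] = True
smatrix = set_by_mask(smatrix, mask, 0.5 + 0.1j)

print(smatrix[1, 0])  # → (0.5+0.1j)
```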

@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch from 4951e4c to 44cd50b on June 13, 2025 18:50
tylerflex (Collaborator, Author):

NOTE: this doesn't cover the terminal component modeler; that will require some modifications inspired by this. I didn't want to mess with it in case I broke something.

@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch from 44cd50b to ed68d39 on June 13, 2025 18:56
@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch from ed68d39 to bfc34ba on June 13, 2025 19:22
github-actions bot (Contributor) commented Jun 13, 2025

Diff Coverage

Diff: origin/develop...HEAD, staged and unstaged changes

  • tidy3d/components/data/data_array.py (100%)
  • tidy3d/plugins/smatrix/component_modelers/base.py (50.0%): Missing lines 260-261,272
  • tidy3d/plugins/smatrix/component_modelers/modal.py (100%)
  • tidy3d/plugins/smatrix/component_modelers/terminal.py (100%)

Summary

  • Total: 38 lines
  • Missing: 3 lines
  • Coverage: 92%

tidy3d/plugins/smatrix/component_modelers/base.py

Lines 256-265

  256     def batch_data(self) -> BatchData:
  257         """The :class:`.BatchData` associated with the simulations run for this component modeler."""
  258 
  259         # NOTE: uses run_async because Batch is not differentiable.
! 260         batch = self.batch
! 261         run_async_kwargs = batch.dict(
  262             exclude={
  263                 "type",
  264                 "path_dir",
  265                 "attrs",

Lines 268-276

  268                 "num_workers",
  269                 "simulations",
  270             }
  271         )
! 272         return run_async(
  273             batch.simulations,
  274             **run_async_kwargs,
  275             local_gradient=LOCAL_GRADIENT,
  276             path_dir=self.path_dir,
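The excerpt above serializes the Batch to a dict, excludes the fields that run_async does not accept, and forwards the rest as keyword arguments. A generic sketch of that serialize-exclude-forward pattern with a dataclass (every name here is a hypothetical stand-in, not tidy3d's actual API):

```python
from dataclasses import dataclass, asdict

@dataclass
class BatchConfig:
    # hypothetical stand-in for Batch fields that the runner also accepts
    folder_name: str = "default"
    verbose: bool = True
    # fields the runner does not accept and must be excluded
    path_dir: str = "."
    num_workers: int = 4

def run_async_stub(simulations, **kwargs):
    """Stand-in for run_async: just echoes the forwarded kwargs."""
    return {"simulations": simulations, **kwargs}

batch = BatchConfig()
exclude = {"path_dir", "num_workers"}
# mirror batch.dict(exclude={...}) from the excerpt above
run_async_kwargs = {k: v for k, v in asdict(batch).items() if k not in exclude}

result = run_async_stub(["sim1", "sim2"], **run_async_kwargs)
print(result)
```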

@greptile-apps bot left a comment:

LGTM

4 files reviewed, no comments

@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch 2 times, most recently from 9915820 to 69bbee5 on June 19, 2025 19:43
@yaugenst-flex yaugenst-flex force-pushed the tyler/autograd_/smatrix branch from 4d65bb6 to 548953f on June 23, 2025 15:45
@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch 4 times, most recently from 79ccc3a to 9449fe4 on July 17, 2025 18:43
@tylerflex tylerflex requested a review from yaugenst-flex July 17, 2025 18:43
@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch 3 times, most recently from a4d1737 to 4c6732a on July 17, 2025 18:53
groberts-flex (Contributor) commented Aug 5, 2025

Testing with the terminal modeler in my patch antenna optimization, one other spot where we need a change is lines 319/320 in terminal.py: replace

V_matrix.loc[indexer] = V_out
I_matrix.loc[indexer] = I_out

with

V_matrix = V_matrix._with_updated_data(data=V_out.data, coords=indexer)
I_matrix = I_matrix._with_updated_data(data=I_out.data, coords=indexer)

I'm re-running the whole optimization currently, but so far it looks like it is working with this change!

groberts-flex (Contributor) left a comment:
I was able to verify with my optimization that uses the terminal component modeler, after locally adding the other _with_updated_data call. Initially, though, the optimization wasn't working, and I realized it was because I needed local_gradient=True so that the gradient gets computed through my branch (for PEC changes). I'm not sure of the best way to enable this, but I added another field to AbstractComponentModeler, local_gradient, with a default of False, and then when the run_async call gets made, I pass in local_gradient=self.local_gradient.

tylerflex (Collaborator, Author):

@groberts-flex thanks, I'll revisit your other comments after my meetings this morning, but I added the initial fix in 44ff766.

tylerflex (Collaborator, Author):

I think we could consider either adding local_gradient: bool = False as a ComponentModeler field, or adding it as a kwarg to ComponentModeler.run(local_gradient=False). We could also just add **run_kwargs there, so that all extra kwargs get passed to run_async. Unless @yaugenst-flex objects.

yaugenst-flex (Collaborator):

I think it definitely should not be a model field but rather an argument to the run function. The default can be set via config soon, but it should not be part of the schema.

tylerflex (Collaborator, Author):

> i think it definitely should not be a model field but rather an argument to the run function. the default can be set via config soon. but it should not be part of the schema

@yaugenst-flex unfortunately, the way things are organized right now, it seems like it needs to be a model field unless we want to break backwards compatibility. Maybe in light of the RF changes to the TerminalComponentModeler we could consider waiting, but here's my stab at it: 47130cf

Basically, .batch and .batch_data are currently @properties of the modeler, so we can't pass arguments to them without changing the syntax or messing around with mutability, as far as I can tell.
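For comparison, the run-argument design discussed here can be sketched in plain Python (class and function names are illustrative stand-ins; the real ComponentModeler.run and run_async signatures differ): extra keyword arguments to run() are forwarded untouched to the async runner, so local_gradient never enters the model schema.

```python
def run_async_stub(simulations, local_gradient=False, **kwargs):
    """Stand-in for run_async; records whether local gradients were requested."""
    return {"n_sims": len(simulations), "local_gradient": local_gradient}

class ModelerSketch:
    """Toy component modeler: run() forwards extra kwargs to the runner."""

    def __init__(self, simulations):
        self.simulations = simulations  # schema fields only; no local_gradient here

    def run(self, **run_kwargs):
        # anything the caller passes (e.g. local_gradient=True) goes straight
        # to the runner without becoming part of the model schema
        return run_async_stub(self.simulations, **run_kwargs)

modeler = ModelerSketch(simulations=["sim_a", "sim_b"])
result = modeler.run(local_gradient=True)
print(result)  # → {'n_sims': 2, 'local_gradient': True}
```

The trade-off raised above still applies: because .batch and .batch_data are properties, threading a run-time argument through them requires restructuring, which is why the merged PR keeps a field instead.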

@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch from 3463a9f to 81a39e4 on August 6, 2025 14:03
@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch 4 times, most recently from 5a6647f to ce3fe38 on August 7, 2025 18:16
tylerflex (Collaborator, Author):

@yaugenst-flex want to take a final look?

yaugenst-flex (Collaborator) left a comment:
LGTM!

@tylerflex tylerflex enabled auto-merge August 8, 2025 13:21
@tylerflex tylerflex force-pushed the tyler/autograd_/smatrix branch from ce3fe38 to 207d118 on August 8, 2025 13:23
@tylerflex tylerflex added this pull request to the merge queue Aug 8, 2025
Merged via the queue into develop with commit 9a9f1f5 Aug 8, 2025
10 checks passed
@tylerflex tylerflex deleted the tyler/autograd_/smatrix branch August 8, 2025 14:46