Welcome and one-sentence summary: unified GPU-ready differentiable path tracing for reflection and diffraction sequences.
Motivate the paradigm shift and stress why differentiability matters for inverse localization and material calibration demos.
Motivation bullet
Motivation bullet
Motivation bullet
Motivation bullet
Motivation bullet
We observe a paradigm shift: ray tracing (RT) is becoming differentiable and GPU-friendly, unlocking new applications but also requiring new methods.
Quick map of the presentation and pacing.
Recall prior work from the paper and highlight Fermat-based path formulation as the unifying physical principle.
State of the art bullet
State of the art bullet
State of the art bullet
State of the art bullet
Qualitative comparison of the different methods in terms of generality and speed.
Explain why a general formulation removes branching and mention this is where your contribution starts.
Limits and approach bullet
Limits and approach bullet
Limits and approach bullet
Limits and approach bullet
Limits and approach bullet
First method slide: focus on the optimization problem and the unified parameterization.
Methodology (1) bullet
Methodology (1) bullet
Methodology (1) bullet
Methodology (1) bullet
Methodology (1) bullet
Key equation: path as the solution of a convex optimization problem. Emphasize the unified formulation and how it enables batching.
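A minimal sketch of the key idea, under assumed coordinates (not the paper's implementation): for a single specular reflection on the plane z = 0, Fermat's principle says the valid path minimizes the total segment length, a convex function of the reflection point's in-plane coordinates. The `source`/`receiver` positions here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup (assumed): one specular reflection on the plane z = 0.
source = np.array([0.0, 0.0, 1.0])
receiver = np.array([4.0, 0.0, 1.0])

def path_length(xy):
    """Total length source -> reflection point -> receiver."""
    p = np.array([xy[0], xy[1], 0.0])  # point constrained to the plane
    return np.linalg.norm(source - p) + np.linalg.norm(p - receiver)

# Unconstrained minimization over the in-plane coordinates (x, y).
res = minimize(path_length, x0=np.zeros(2))
print(res.x)  # the image method predicts the midpoint (2, 0)
```

Because the objective is smooth in the unknown interaction points, the same solve can be batched over many candidate paths on a GPU, which is the point the slide makes.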
Short parenthesis: mention this extension as a strong direction without overloading details.
Apart bullet
Apart bullet
Equations: the same formulation holds with a weighted sum of segment lengths, where weights are the refractive indices.
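The weighted extension can be illustrated with a 2D refraction toy (values assumed, not from the paper): weighting each segment length by its medium's refractive index and minimizing recovers Snell's law at the interface crossing point.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative 2D refraction: interface at height 0, crossing point (x, 0).
n1, n2 = 1.0, 1.5  # assumed refractive indices of the two media
s, r = np.array([0.0, 1.0]), np.array([2.0, -1.0])

def optical_length(x):
    """Weighted sum of segment lengths, weights = refractive indices."""
    p = np.array([x, 0.0])
    return n1 * np.linalg.norm(s - p) + n2 * np.linalg.norm(p - r)

x = minimize_scalar(optical_length, bounds=(0.0, 2.0), method="bounded").x

# At the optimum, Snell's law holds: n1 * sin(theta1) = n2 * sin(theta2).
sin1 = x / np.hypot(x, 1.0)
sin2 = (2.0 - x) / np.hypot(2.0 - x, 1.0)
print(n1 * sin1, n2 * sin2)  # the two sides agree at the minimizer
```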
Second method slide: summarize the BFGS solver and why it is more robust than mixed Newton/GD when Hessians are ill-conditioned.
Methodology (2) bullet
Methodology (2) bullet
Methodology (2) bullet
Methodology (2) bullet
Methodology (2) bullet
Right panel: explain CA's sensitivity to ill-conditioning caused by zero padding, and emphasize that BFGS avoids Hessian inversion while enabling a better line search.
Introduce reverse-mode AD on this toy graph and display the two-output function definition.
Forward pass: computation graph flows from left to right.
Reverse pass: adjoint flow propagates from right to left.
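The forward/reverse flow can be written out by hand for a hypothetical two-output toy graph (this is not necessarily the slide's exact function): the forward pass evaluates left to right, and the reverse pass propagates output adjoints right to left, summing adjoints where a node fans out.

```python
import math

# Toy two-output graph (assumed): v = x1 * x2; y1 = v + sin(x1); y2 = v
def forward(x1, x2):
    v = x1 * x2
    return v + math.sin(x1), v

def reverse(x1, x2, ybar1, ybar2):
    """Reverse pass: propagate output adjoints (ybar1, ybar2)
    right-to-left to input adjoints (xbar1, xbar2)."""
    vbar = ybar1 + ybar2               # v feeds both outputs, adjoints add
    xbar1 = vbar * x2 + ybar1 * math.cos(x1)
    xbar2 = vbar * x1
    return xbar1, xbar2

# One reverse sweep with seed ybar = (1, 0) yields the gradient of y1:
# d y1 / d x1 = x2 + cos(x1),  d y1 / d x2 = x1.
print(reverse(0.5, 2.0, 1.0, 0.0))
```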
After reverse-mode AD, explain why unrolling iterative solvers is expensive in both memory and backward-pass time.
Implicit differentiation motivation bullet
Implicit differentiation motivation bullet
Implicit differentiation motivation bullet
Implicit differentiation motivation bullet
Implicit differentiation motivation bullet
Use the optimality condition and implicit function theorem to compute gradients without storing all iterations.
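A minimal sketch of the idea on an assumed toy inner problem (not the paper's): the solver's iterates never need to be stored, because the implicit function theorem differentiates the converged solution through its optimality condition alone.

```python
# Toy inner problem (illustrative): minimize over x
#   f(x, theta) = x**4 / 4 + x**2 / 2 - theta * x
# Optimality condition: g(x, theta) = x**3 + x - theta = 0.
def solve(theta, iters=100):
    """Inner solver (Newton on g); its iterates are never stored."""
    x = 0.0
    for _ in range(iters):
        x -= (x**3 + x - theta) / (3 * x**2 + 1)
    return x

def grad_implicit(theta):
    """Implicit function theorem: dx*/dtheta = -(dg/dx)^-1 * dg/dtheta,
    evaluated only at the converged x*."""
    x = solve(theta)
    return 1.0 / (3 * x**2 + 1)        # dg/dtheta = -1, dg/dx = 3x^2 + 1

# Sanity check against central finite differences through the solver.
theta, eps = 2.0, 1e-6
fd = (solve(theta + eps) - solve(theta - eps)) / (2 * eps)
print(grad_implicit(theta), fd)  # the two gradients agree
```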
Results setup slide to make the benchmark conditions explicit before the plots.
Setup bullet
Setup bullet
Setup bullet
Setup bullet
Main benchmark figure, split into two panels: reflection-only on the left and diffraction-only on the right.
Draw GD for n=1 on both panels.
Draw CA for n=1 on both panels.
Draw L-BFGS for n=1 on both panels.
Draw ours for n=1 on both panels.
Draw ours-64 for n=1 on both panels.
Update both panels to n=2 while preserving solver ordering and style.
Update both panels to n=3 while preserving solver ordering and style.
Update both panels to n=4 while preserving solver ordering and style.
Update both panels to n=5 while preserving solver ordering and style.
End with a balanced message: method works now, but solver ecosystem is the next frontier.
Future bullet
Future bullet
Future bullet
Future bullet
Final note on the solver bottleneck and the need for more open implementations to bridge theory and practice.
Closing slide with thanks, and QR codes for the paper and code repository.