In Reality Composer Pro (for visionOS), you can create material shader graph nodes that expose a geometry modifier's model-to-view transform. This transform will be different for each eye during each render pass, correct? Also, one could infer the pupillary distance from it (at least in the shader). Eye tracking itself does not modify this, I assume.
You're right that the model-to-view transforms for the left and right eyes are different. Strange as it may sound, though, Reality Composer Pro 2.0's Shader Graph node composition doesn't yet take the user's inter-pupillary distance (IPD) into account. Instead, the 4x4 transform matrices for the left and right eyes are calculated automatically from the known, fixed distance between the headset's two main cameras, so eye tracking doesn't affect them. For more comfortable viewing on the Vision Pro headset, there is a key setting that adjusts the relative X-axis positions of the left and right displays:
Settings > Eyes & Hands > Realign Displays.
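As a practical illustration of the per-eye transforms themselves: when rendering with Metal via Compositor Services on visionOS, each frame's drawable exposes one view per eye, and you can estimate the effective eye separation from their transforms. The sketch below is a minimal example under that assumption; the helper name `estimatedEyeSeparation` is hypothetical, and, per the point above, the value it yields reflects the fixed camera/display geometry rather than a per-user measured IPD.

```swift
import CompositorServices
import simd

// A minimal sketch, assuming a Metal render loop built on Compositor Services.
// Estimates the separation between the two rendering viewpoints from the
// per-eye view transforms in a LayerRenderer.Drawable.
func estimatedEyeSeparation(in drawable: LayerRenderer.Drawable) -> Float? {
    // Stereo rendering provides one view per eye; bail out otherwise.
    guard drawable.views.count == 2 else { return nil }

    // Each view's transform positions that eye relative to the device anchor;
    // the translation lives in the fourth column of the 4x4 matrix.
    let leftOrigin  = drawable.views[0].transform.columns.3
    let rightOrigin = drawable.views[1].transform.columns.3

    // Distance between the two eye origins, in meters.
    let delta = SIMD3(rightOrigin.x - leftOrigin.x,
                      rightOrigin.y - leftOrigin.y,
                      rightOrigin.z - leftOrigin.z)
    return simd_length(delta)
}
```

If you log this value across frames, you should see it stay constant, which is consistent with the transforms being derived from the device geometry rather than from live eye tracking.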