Using the RealityKit API on Vision Pro, I can programmatically apply a texture map to a mesh:
let planeMeshResource = MeshResource.generatePlane(width: W, height: H)
var planeMaterial = UnlitMaterial(color: .white)
let texture = try! await TextureResource(named: "LeftEyeImage")
planeMaterial.baseColor = MaterialColorParameter.texture(texture)
let viewPlaneEntity = ModelEntity(mesh: planeMeshResource,
                                  materials: [planeMaterial])
How can I programmatically specify a different texture map for each eye for stereo vision? I could imagine something like the following, where I specify a different material depending on the render pass:
planeMaterial.leftEyeBaseColor = MaterialColorParameter.texture(LeftTexture)
planeMaterial.rightEyeBaseColor = MaterialColorParameter.texture(RightTexture)
Then in the left eye render pass the left texture would be used and similarly for the right eye render pass. How can I do this? I am hoping I can do this without custom material/shader.
Update: I did find Camera Index Switch (RealityKit) under RealityKit Nodes at this random site. Maybe this is something that can be used in a surface shader or Reality Composer.
The shader graph found at https://developer.apple.com/forums/thread/733813 works perfectly for me. Sample code for that approach is at https://github.com/halmueller/ShaderGraphStereo.
Late edit: use an UnlitSurface node, not a PreviewSurface, and disable Tone Mapping. This gives a much better render.
Define a shader with that graph, give it a name, and assign that name as the `Material` of an object in your RCP scene. You'll see your stereo pair rendered onto that object.
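If you'd rather attach the material from code instead of assigning it in the RCP editor, you can load the shader-graph material by name from your Reality Composer Pro package. A minimal sketch, assuming the material lives at `/Root/StereoMaterial` in a scene file named `Scene.usda` (both names are placeholders; substitute your own):

```swift
import RealityKit
import RealityKitContent  // the Swift package Xcode generates for your RCP project

// Load the shader-graph material by its path within the RCP scene file.
// "/Root/StereoMaterial" and "Scene.usda" are placeholders for your
// project's actual material path and scene file name.
let stereoMaterial = try await ShaderGraphMaterial(
    named: "/Root/StereoMaterial",
    from: "Scene.usda",
    in: realityKitContentBundle
)

// Apply it to a programmatically generated plane, as in the question.
let plane = ModelEntity(
    mesh: .generatePlane(width: 0.5, height: 0.5),
    materials: [stereoMaterial]
)
```

`ShaderGraphMaterial(named:from:in:)` is async and throwing, so call it from an async context (for example, a `RealityView` closure or a `Task`).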
The thing that tripped me up is that, in the Reality Composer Pro editor, the shader preview is solid black. In the RCP edit window, the surface of the object is almost solid black (although you can just barely see through it to what's behind the object).
Preview On Device renders the scene very nicely on the AVP. If you don't have an actual Apple Vision Pro to test with, you're stuck. The `Mono` input of the Camera Index Switch renders its image correctly in the RCP editor (if the `Left` and `Right` inputs aren't set). So some programmatic switching might be needed while you're developing.
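One way to do that switching: if you promote the image nodes' inputs to material parameters in the graph, you can set them from code with `setParameter(name:value:)`. A sketch, assuming the graph exposes inputs named "LeftImage" and "RightImage" (these parameter names, plus the material path and scene file, are assumptions; match them to your own graph):

```swift
import RealityKit
import RealityKitContent

// Load the shader-graph material ("/Root/StereoMaterial" and "Scene.usda"
// are placeholders for your project's names).
var material = try await ShaderGraphMaterial(
    named: "/Root/StereoMaterial",
    from: "Scene.usda",
    in: realityKitContentBundle
)

let left = try await TextureResource(named: "LeftEyeImage")
let right = try await TextureResource(named: "RightEyeImage")

// Feed each eye's texture to the promoted graph inputs. The names
// "LeftImage"/"RightImage" must match the promoted inputs in your graph.
try material.setParameter(name: "LeftImage", value: .textureResource(left))
try material.setParameter(name: "RightImage", value: .textureResource(right))

// While developing without a headset, you could instead promote the Mono
// input and swap textures there to check each eye's image in the editor.
```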
I've filed feedback FB13688396 requesting a more obvious placeholder in the editor, and requesting documentation.
Edit: I just tripped over this documentation for ShaderGraphNodes, which a friend tells me was released as part of the last visionOS 1.1/Xcode 15.3 beta. It documents every ShaderGraphNode, with illustrated examples for many of them.