I have an app that displays 3D content using SceneKit. It is not an AR app; the 3D content is completely created programmatically.
I want to port it to visionOS. The basic step is simple: I can already display the app in a 2D window on visionOS.
Now I want to display the 3D scene in a full immersive space.
Apparently, it is recommended to use RealityKit to render visionOS scenes. However, the documentation says "RealityKit is an AR-first 3D framework that leverages ARKit to seamlessly integrate virtual objects into the real world." Thus, I am not sure if RealityKit is the right choice for me, since my app is not AR. But maybe it is required or recommended to use RealityKit anyway.
The problem then seems to be converting the SceneKit content to RealityKit content. I found this post, which seems to indicate that SceneKit and RealityKit are completely different technologies whose content cannot easily be converted from one to the other.
So what is the right way to go: should I keep the iOS app based on SceneKit and reprogram the visionOS app in RealityKit?
The SceneKit app can only run as a 2D window on visionOS. An iOS RealityKit app, on the other hand, has a .nonAR camera mode that lets you disable the AR/MR capabilities entirely, turning your app into a pure VR app.
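For context, here is a minimal sketch of that iOS setup; the view controller name and the entities are hypothetical and only illustrate the .nonAR camera mode:

```swift
import UIKit
import RealityKit

// Hypothetical view controller; shows RealityKit on iOS with the camera feed
// and world tracking switched off via the .nonAR camera mode.
final class VirtualSceneViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: view.bounds,
                            cameraMode: .nonAR,                  // pure VR: no AR session
                            automaticallyConfigureSession: false)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)

        // In .nonAR mode you supply your own camera instead of the device camera.
        let camera = PerspectiveCamera()
        camera.position = [0, 0, 1]

        // Purely programmatic content, roughly the RealityKit counterpart of an SCNNode.
        let box = ModelEntity(mesh: .generateBox(size: 0.2),
                              materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])

        let anchor = AnchorEntity(world: .zero)
        anchor.addChild(camera)
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
    }
}
```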
A visionOS RealityKit app, however, has no .nonAR option (and most likely never will), so it cannot be a full-fledged VR app: Vision Pro must always understand the user's position and orientation in the real-world environment, for the user's safety.
So the answer is obvious: you'll need to rewrite your app from scratch using the RealityKit framework.
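As a rough starting point for that rewrite, here is a minimal sketch of a visionOS app that shows programmatic RealityKit content in a full immersive space; the space identifier and view names are made up for the example:

```swift
import SwiftUI
import RealityKit

@main
struct PortedApp: App {
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        // The regular 2D window the app already has on visionOS.
        WindowGroup {
            ContentView()
        }

        // A separate scene for the fully immersive presentation.
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter immersive space") {
            Task { _ = await openImmersiveSpace(id: "Immersive") }
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Programmatic geometry, comparable to an SCNSphere attached to an SCNNode.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.2),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.position = [0, 1.5, -1]   // roughly eye height, one meter in front of the user
            content.add(sphere)
        }
    }
}
```

The code inside RealityView is where the SceneKit node hierarchy has to be rebuilt by hand as entities and components; there is no automatic SCNNode-to-Entity conversion.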