Tags: swiftui, camera, position, virtual-reality, realitykit

How to place and orient RealityKit camera


I'm really struggling to get the camera to behave even remotely as expected in a RealityKit app (where I have set RealityViewCameraContent.camera to .virtual).

I have a programmatically generated model that is built and positioned correctly. In the SceneKit version of this code base I simply placed a (player) node at a location within the model, with a camera node as a child of that node, and it just worked. I could then animate the position of the player node and the camera would give the illusion of the player moving through the model.
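
The SceneKit setup was roughly this (simplified and reconstructed from memory; scene here stands for the SCNScene that holds the model):

import SceneKit

// A player node positioned somewhere inside the model.
let playerNode = SCNNode()
playerNode.position = SCNVector3(84, 0, 56)

// A camera node attached as a child, so it inherits the player's transform.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
playerNode.addChildNode(cameraNode)

scene.rootNode.addChildNode(playerNode)

// Animating playerNode's position moves the camera along with it.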

In the RealityKit version the model is there, in the same coordinates; however, no matter what I do, I cannot get the camera to be positioned in the same way.

I've tried:

  1. An Entity representing the player (à la what I did in SceneKit) and another Entity with a PerspectiveCameraComponent as a child of that player Entity (sketched after this list). Positioning the player Entity does not affect the view from the camera, which seems to be at the world origin ([0, 0, 0]).

  2. Similar to 1, but without a child Entity of the player, and just setting a PerspectiveCameraComponent on the player itself.

  3. Abandoning any direct relationship between the player Entity and the camera and just using the template code that Xcode gave me for this RealityKit target. This looks like:

let camera = Entity()
camera.components.set(PerspectiveCameraComponent())
content.add(camera)

// Set the camera position and orientation.
let cameraLocation: SIMD3<Float> = [84.0, 0, 56.0]
camera.position = cameraLocation

No matter what I do, even though I know the correct position should be [84, 0, 56], the camera seems to show me a view from [0, 0, 56] or somewhere else far far away.
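
To be concrete, attempt 1 looked roughly like this (simplified; content is the RealityViewCameraContent passed into the RealityView closure):

// Player entity positioned inside the model.
let player = Entity()
player.position = [84, 0, 56]

// Camera entity as a child of the player.
let playerCamera = Entity()
playerCamera.components.set(PerspectiveCameraComponent())
player.addChild(playerCamera)

content.add(player)

// Moving the player was supposed to carry the camera with it,
// but the rendered view still appears to come from the world origin.
player.position = [84, 0, 60]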

I've also tried to use something like:

let camera = PerspectiveCamera()
camera.camera.fieldOfViewInDegrees = 60

let cameraAnchor = AnchorEntity(world: .zero)
cameraAnchor.addChild(camera)

content.add(cameraAnchor)

camera.transform.translation = [84, 0, 56]

camera.look(at: [84, 0, 57], from: [84, 0, 56], relativeTo: nil)

in an attempt to place the camera in the correct position and make it look "south" (towards positive Z). But all this gets me is a view looking towards [84, 0, 57] from what appears to be a position like [-84, 0, -56].

I strongly suspect there is something stupidly simple I am missing.

Really, all I want is to have a camera associated with my player Entity that will follow it around showing a view of the model as if viewed by the player Entity.

In SceneKit there was also SCNLookAtConstraint, but RealityKit seems to be lacking this obvious feature, which means, I think, that I have to manually update the camera orientation via an update of some sort.


Solution

  • This code allows you to implement a simple RealityKit camera "constraint", similar to SceneKit's SCNLookAtConstraint. When you launch the app, the virtual camera tracks (looks at) the moving cube, keeping the object in the center of the frame, for as long as the animation performed by the move(to:relativeTo:duration:) method lasts.

    Regular implementation using the subscribe(to:) method:

    import SwiftUI
    import RealityKit
    
    struct ContentView : View {
        var body: some View {
            RealityView { rvcc in
                self.sceneSetup(rvcc)
            }
            .background(.black)
        }
        
        func sceneSetup(_ rvcc: RealityViewCameraContent) {
            // The box the camera will track.
            let box = ModelEntity(mesh: .generateBox(size: .one * 0.2))
            box.position = [84, 0, 57]
            rvcc.add(box)
            
            // Animate the box away from its starting position over 9 seconds.
            var transform: Transform = box.transform
            transform.translation += [-5.0, 4.0, -1.0]
            box.move(to: transform, relativeTo: nil, duration: 9)
            
            // A virtual camera placed one unit away from the box along Z.
            let cam = PerspectiveCamera()
            cam.camera.fieldOfViewInDegrees = 60.0
            cam.position = [84, 0, 56]
            rvcc.add(cam)
    
            // Re-aim the camera at the box on every scene update (per frame).
            let _ = rvcc.subscribe(to: SceneEvents.Update.self) { _ in
                cam.look(
                    at: box.position,
                    from: cam.position,
                    relativeTo: nil
                )
            }
        }
    }
    

    Custom implementation using a Timer object:

    import SwiftUI
    import RealityKit
    
    struct ContentView : View {
        let duration: Double = 5.0
        
        var body: some View {
            RealityView { rvc in
                // The box the camera will track.
                let box = ModelEntity(mesh: .generateBox(size: .one * 0.2))
                box.position = [84, 0, 57]
                rvc.add(box)
        
                // A virtual camera, initially aimed at the box.
                let cam = PerspectiveCamera()
                cam.camera.fieldOfViewInDegrees = 60.0
                cam.position = [84, 0, 56]
                cam.look(
                    at: box.position, from: cam.position, relativeTo: nil
                )
                rvc.add(cam)
                
                // Animate the box away from its starting position.
                var transform: Transform = box.transform
                transform.translation += [-2.0, 1.0, 0.0]
                box.move(to: transform, relativeTo: nil, duration: duration)
                
                // Re-aim the camera at the box every 10 milliseconds...
                let timer = Timer.scheduledTimer(
                    withTimeInterval: 0.01, repeats: true
                ) { _ in
                    cam.look(
                        at: box.position, from: cam.position, relativeTo: nil
                    )
                }
                // ...and stop the timer once the animation has finished.
                Task(priority: .high) {
                    try await Task.sleep(
                        nanoseconds: UInt64(duration) * UInt64(1e9)
                    )
                    timer.invalidate()
                }
            }
            .background(.black)
        }
    }
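
    As an alternative to re-running look(at:) from a SceneEvents.Update subscription or a Timer, the same per-frame update could live in a custom System driven by a custom Component that marks which entity the camera should track. This is only a sketch of that idea; the LookAtTargetComponent and LookAtSystem names are made up for illustration:

    import RealityKit
    
    // Marks an entity (here, the camera) that should keep facing a target.
    struct LookAtTargetComponent: Component {
        var target: Entity
    }
    
    // Re-orients every marked entity towards its target once per frame.
    struct LookAtSystem: System {
        static let query = EntityQuery(where: .has(LookAtTargetComponent.self))
    
        init(scene: RealityKit.Scene) { }
    
        func update(context: SceneUpdateContext) {
            for entity in context.entities(
                matching: Self.query, updatingSystemWhen: .rendering
            ) {
                guard let lookAt = entity.components[LookAtTargetComponent.self]
                else { continue }
                entity.look(
                    at: lookAt.target.position(relativeTo: nil),
                    from: entity.position(relativeTo: nil),
                    relativeTo: nil
                )
            }
        }
    }

    Register both once, e.g. at app launch, with LookAtTargetComponent.registerComponent() and LookAtSystem.registerSystem(), then attach the component to the camera with cam.components.set(LookAtTargetComponent(target: box)). The same component would also work on a camera that is a child of a player entity, which is closer to the SceneKit setup described in the question.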