Tags: swift, arkit, arcore, realitykit, reality-composer

What is ARAnchor exactly?


I'm trying to understand and use ARKit. But there is one thing that I cannot fully understand.

Apple said about ARAnchor:

A real-world position and orientation that can be used for placing objects in an AR scene.

But that's not enough. So my questions are:


Solution

  • Updated: June 11, 2024.

    TL;DR



    ARAnchor (iOS)

    ARAnchor is an invisible trackable object that holds a 3D model at the anchor's position. Think of ARAnchor as a parent transform node of your model that you can translate, rotate and scale like any other node in SceneKit or RealityKit. Every 3D model has a pivot point, right? So, in an AR app this pivot point must match the location of an ARAnchor.

    If you don't use anchors in an ARKit or ARCore app (in RealityKit for iOS, however, it's impossible not to use anchors, because they are an integral part of the scene), your 3D models may drift from where they were placed, which dramatically impacts the app's realism and user experience. Anchors are therefore crucial elements of any AR scene.


    According to the 2017 ARKit documentation:

    ARAnchor is a real-world position and orientation that can be used for placing objects in AR Scene. Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.

    ARAnchor is the parent class of 10 other anchor types in ARKit, so all of those subclasses inherit from ARAnchor. Usually you don't use ARAnchor directly. I must also say that ARAnchor and feature points have nothing in common – feature points are special visual elements used for tracking and debugging.

    ARAnchor doesn't automatically track a real-world target. When you need automation, you have to use the renderer(_:didAdd:for:) or session(_:didAdd:) delegate methods, which you can implement if you conform to the ARSCNViewDelegate or ARSessionDelegate protocol, respectively.

    Keep in mind: by default you can neither see a detected plane nor its corresponding ARPlaneAnchor. So, if you want to see an anchor in the scene, you have to "visualize" it yourself – for example with three thin SCNCylinder primitives, where each cylinder's color represents a particular axis (RGB corresponds to XYZ).
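    Here's a minimal SceneKit sketch of such an axes gizmo (the function name, sizes and exact rotations are my own assumptions, not part of the original answer):

    import UIKit
    import SceneKit
    
    // Builds a small RGB = XYZ gizmo out of three thin cylinders.
    // Attach the returned node to the SCNNode that ARKit created for the anchor,
    // e.g. inside renderer(_:didAdd:for:).
    func makeAxesNode(length: CGFloat = 0.2, radius: CGFloat = 0.0025) -> SCNNode {
        let axesNode = SCNNode()
        let colors: [UIColor] = [.red, .green, .blue]         // X, Y, Z
    
        for (index, color) in colors.enumerated() {
            let cylinder = SCNCylinder(radius: radius, height: length)
            cylinder.firstMaterial?.diffuse.contents = color
            let node = SCNNode(geometry: cylinder)
    
            switch index {
            case 0:                                            // X axis (red)
                node.eulerAngles.z = -.pi / 2
                node.position.x = Float(length) / 2
            case 1:                                            // Y axis (green)
                node.position.y = Float(length) / 2
            default:                                           // Z axis (blue)
                node.eulerAngles.x = .pi / 2
                node.position.z = Float(length) / 2
            }
            axesNode.addChildNode(node)
        }
        return axesNode
    }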




    In ARKit, ARAnchors can be added to your scene automatically in different scenarios – for example, when plane detection, image detection, object detection, face tracking or body tracking is enabled in the session configuration.



    There are also regular ways to create anchors manually in an AR session – for instance, from a ray-cast result or from an arbitrary transform, as in the sketch below.
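    Here's a minimal sketch of the manual approach (the function and its parameter names are mine, assuming a standard ARSCNView world-tracking setup):

    import ARKit
    
    // Creates an ARAnchor where a screen-space ray hits an estimated plane
    // and registers it with the running session.
    func addAnchor(at screenPoint: CGPoint, in sceneView: ARSCNView) {
        guard let query = sceneView.raycastQuery(from: screenPoint,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let result = sceneView.session.raycast(query).first
        else { return }
    
        let anchor = ARAnchor(name: "manualAnchor", transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }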


    This code snippet shows you how to use an ARPlaneAnchor in the delegate method renderer(_:didAdd:for:):

    func renderer(_ renderer: SCNSceneRenderer, 
                 didAdd node: SCNNode, 
                  for anchor: ARAnchor) {
        
        guard let planeAnchor = anchor as? ARPlaneAnchor 
        else { return }
    
        let grid = Grid(anchor: planeAnchor)
        node.addChildNode(grid)
    }
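
    Note that Grid in the snippet above is not an ARKit class – it's the author's custom SCNNode subclass. A minimal assumed version could look like this:

    import UIKit
    import SceneKit
    import ARKit
    
    // A hypothetical flat node that matches the detected plane's size and center.
    class Grid: SCNNode {
    
        init(anchor: ARPlaneAnchor) {
            super.init()
            let plane = SCNPlane(width: CGFloat(anchor.extent.x),
                                 height: CGFloat(anchor.extent.z))
            plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.4)
    
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2                 // lay the plane flat
            planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
            addChildNode(planeNode)
        }
    
        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
    }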
    


    Anchor (visionOS)

    The ARKit 2023–2024 API has been significantly reworked for visionOS. The main innovation is the introduction of three protocols (composition over inheritance, remember?): Anchor, its refinement TrackableAnchor, and DataProvider. Here are the ten types of visionOS ARKit anchors – they are structs now, not classes. Each anchor type has a corresponding provider object; for example, ImageAnchor has an ImageTrackingProvider, HandAnchor has a HandTrackingProvider, and so on.


    @available(visionOS 1.0, *)
    public struct WorldAnchor : TrackableAnchor, @unchecked Sendable
    

    @available(visionOS 1.0, *)
    // The position and orientation of Apple Vision Pro headset.
    public struct DeviceAnchor : TrackableAnchor, @unchecked Sendable
    

    @available(visionOS 1.0, *)
    public struct PlaneAnchor : Anchor, @unchecked Sendable
    

    @available(visionOS 1.0, *)
    public struct HandAnchor : TrackableAnchor, @unchecked Sendable
    

    @available(visionOS 1.0, *)
    public struct MeshAnchor : Anchor, @unchecked Sendable
    

    @available(visionOS 1.0, *)
    public struct ImageAnchor : TrackableAnchor, @unchecked Sendable
    

    @available(visionOS 2.0, *)
    // Represents a tracked room.
    public struct RoomAnchor : Anchor, @unchecked Sendable, Equatable
    

    @available(visionOS 2.0, *)
    // Represents a detected barcode or QR-code.
    public struct BarcodeAnchor : Anchor, @unchecked Sendable
    

    @available(visionOS 2.0, *)
    // Represents a tracked reference object (like ARObjectAnchor in iOS).
    public struct ObjectAnchor : TrackableAnchor, @unchecked Sendable, Equatable
    

    @available(visionOS 2.0, *)
    // Represents an environment probe in the world.
    public struct EnvironmentProbeAnchor : Anchor, @unchecked Sendable, Equatable
    

    Other innovations worth noting in ARKit on visionOS are the ARKitSession object, which is capable of running several providers at once, and the Pose structure (XYZ position and XYZ rotation), an analogue of which ARCore developers have had at their disposal for a long time. The visionOS ARKit API also brings a new level of automation – WorldAnchors now include an auto-persistence feature.
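
    Here's a hedged sketch of that session-plus-provider pattern on visionOS (the chosen alignments and the printout are my own):

    import ARKit
    
    // Runs plane detection via ARKitSession and reacts to PlaneAnchor updates.
    func runPlaneDetection() async throws {
        let session = ARKitSession()
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    
        try await session.run([planeData])
    
        for await update in planeData.anchorUpdates {
            switch update.event {
            case .added, .updated:
                print("Plane \(update.anchor.id): \(update.anchor.classification)")
            case .removed:
                print("Plane \(update.anchor.id) removed")
            }
        }
    }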


    AnchorEntity (iOS and visionOS)

    AnchorEntity is the alpha and omega of RealityKit. According to the 2019 RealityKit documentation:

    AnchorEntity is an anchor that tethers virtual content to a real-world object in an AR session.

    The RealityKit framework and the Reality Composer app were announced at WWDC'19. They introduced a new class named AnchorEntity. You can use AnchorEntity as the root of any entity hierarchy, and you must add it to the Scene's anchors collection. AnchorEntity automatically tracks its real-world target. In RealityKit and Reality Composer, AnchorEntity sits at the top of the hierarchy. A single anchor is able to hold a hundred models, and in that case it's more stable than using 100 separate anchors, one for each model.

    Let's see how it looks in code:

    func makeUIView(context: Context) -> ARView {
        
        let arView = ARView(frame: .zero)
        // Experience is the Swift code that Reality Composer generates for the .rcproject
        let modelAnchor = try! Experience.loadModel()
        arView.scene.anchors.append(modelAnchor)         // add the anchor to the scene
        return arView
    }
    

    AnchorEntity has three building blocks: an AnchoringComponent, a Transform and a SynchronizationComponent.

    To find out the difference between ARAnchor and AnchorEntity, look at THIS POST.

    Here are AnchorEntity's anchoring targets and initializers available in RealityKit for iOS:

    // Fixed position in the AR scene
    AnchorEntity(.world(transform: mtx)) 
    
    // For body tracking (a.k.a. Motion Capture)
    AnchorEntity(.body)
    
    // Pinned to the tracking camera
    AnchorEntity(.camera)
    
    // For face tracking (Selfie Camera config)
    AnchorEntity(.face)
    
    // For image tracking config
    AnchorEntity(.image(group: "GroupName", name: "forModel"))
    
    // For object tracking config
    AnchorEntity(.object(group: "GroupName", name: "forObject"))
    
    // For plane detection with surface classification
    AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1, 1]))
    
    // When you use ray-casting
    AnchorEntity(raycastResult: myRaycastResult)
    
    // When you use ARAnchor with a given identifier
    AnchorEntity(.anchor(identifier: uuid))
    
    // Creates anchor entity on a basis of ARAnchor
    AnchorEntity(anchor: arAnchor) 
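
    As a usage sketch, here's how one of these targets typically appears in code (the box, its size and color are my own choices):

    import UIKit
    import RealityKit
    
    // Parks a simple generated box on the first suitable horizontal plane.
    func placeBox(in arView: ARView) {
        let planeAnchor = AnchorEntity(.plane(.horizontal,
                                              classification: .any,
                                              minimumBounds: [0.2, 0.2]))
    
        let box = ModelEntity(mesh: .generateBox(size: 0.1),
                              materials: [SimpleMaterial(color: .orange, isMetallic: false)])
        planeAnchor.addChild(box)
    
        // The anchor must be appended to the scene's anchors collection.
        arView.scene.anchors.append(planeAnchor)
    }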
    

    And here are the only two AnchorEntity cases available in RealityKit for macOS:

    // Fixed world position in VR scene
    AnchorEntity(.world(transform: mtx))
    
    // Camera transform
    AnchorEntity(.camera)
    

    👓 In addition to the above, visionOS allows you to use three more anchors: 👓

    // Camera Position anchor
    AnchorEntity(.head)
    
    // User's hand anchor, taking into account a chirality
    AnchorEntity(.hand(.right, location: .thumbTip))
    
    // Real-world's Object anchor
    AnchorEntity(.referenceObject(from: .init(name: "...")))
    

    Usually, the term chirality refers to a lack of symmetry between the left and right sides. In RealityKit, however, chirality is simply about three cases: .either, .left and .right. There are also five cases describing an anchor's location: .wrist, .palm, .thumbTip, .indexFingerTip and .aboveHand.

    If you want access to all 27 joints of the left/right hand's skeletal structure, you should use visionOS ARKit's trackable HandAnchor, as sketched below.
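
    Here's a hedged sketch of reading a joint through HandTrackingProvider (the chosen joint and the printout are my own):

    import ARKit
    
    // Streams HandAnchor updates and computes the world transform of the index fingertip.
    func trackIndexFingerTip() async throws {
        let session = ARKitSession()
        let handTracking = HandTrackingProvider()
    
        try await session.run([handTracking])
    
        for await update in handTracking.anchorUpdates {
            let handAnchor = update.anchor
            guard handAnchor.isTracked,
                  let skeleton = handAnchor.handSkeleton else { continue }
    
            let fingerTip = skeleton.joint(.indexFingerTip)
            // world transform = origin_from_anchor × anchor_from_joint
            let worldTransform = handAnchor.originFromAnchorTransform * fingerTip.anchorFromJointTransform
            print(handAnchor.chirality, worldTransform.columns.3)
        }
    }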



    You can use any ARAnchor subclass for AnchorEntity's needs:

    var anchor = AnchorEntity()
    
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    
        // The face anchor is already part of the ARKit session
        // by the time this delegate method is called.
        guard let faceAnchor = anchors.first as? ARFaceAnchor 
        else { return }
    
        // Wrap the ARFaceAnchor in an AnchorEntity and hand it over to RealityKit.
        self.anchor = AnchorEntity(anchor: faceAnchor)
        anchor.addChild(model)                           // model is a ModelEntity declared elsewhere
        arView.scene.anchors.append(self.anchor)         // RealityKit Scene
    }
    

    RealityKit gives you the ability to reanchor your model. Imagine the scenario where you started your scene with image or body tracking but need to continue with world tracking.
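
    A minimal sketch of that, assuming modelAnchor is an AnchorEntity that is already in the scene:

    // Re-target the existing anchor to world tracking,
    // keeping the model exactly where the user currently sees it.
    modelAnchor.reanchor(.world(transform: modelAnchor.transformMatrix(relativeTo: nil)),
                         preservingWorldTransform: true)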


    iOS Reality Composer 1.5 anchors

    At the moment (June 2024), iOS Reality Composer offers just four types of AnchorEntities:


    // 1a
    AnchorEntity(plane: .horizontal)
    
    // 1b
    AnchorEntity(plane: .vertical)
    
    // 2
    AnchorEntity(.image(group: "GroupName", name: "forModel"))
    
    // 3
    AnchorEntity(.face)
    
    // 4
    AnchorEntity(.object(group: "GroupName", name: "forObject"))
    


    visionOS Reality Composer Pro 2.0 anchors

    The paradox of anchors in RealityKit for visionOS is that they are optional there – not a required element of the scene, as they are in iOS. In this respect it works the same way as an ARKit + SceneKit powered app. So, there is no rule without an exception.

    At the moment (June 2024), visionOS Reality Composer Pro offers just five types of AnchorEntities:


    // 1
    AnchorEntity(.world(transform: .init()))
    
    // 2a
    AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.1,0.1]))
                
    // 2b
    AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [0.1,0.1]))
    
    // 3
    AnchorEntity(.head)
    
    // 4a
    AnchorEntity(.hand(.left, location: .palm))
                
    // 4b
    AnchorEntity(.hand(.right, location: .thumbTip))
    
    // 5
    AnchorEntity(.referenceObject(from: .init(name: "string")))
    


    AR USD Schemas

    And of course, I should say a few words about preliminary anchors. There are three preliminary anchoring types (as of July 2022) for those who prefer Python scripting for USDZ models – the plane, image and face preliminary anchors. Look at this .usda snippet to see how such a schema is implemented.

    def Cube "ImageAnchoredBox" (prepend apiSchemas = ["Preliminary_AnchoringAPI"])
    {
        uniform token preliminary:anchoring:type = "image"
        rel preliminary:imageAnchoring:referenceImage = <ImageReference>
    
        def Preliminary_ReferenceImage "ImageReference"
        {
            uniform asset image = @somePicture.jpg@
            uniform double physicalWidth = 45
        }
    }
    

    If you want to know more about AR USD Schemas, read this story on Medium.


    Visualizing AnchorEntity

    Here's an example of how to visualize anchors in RealityKit (mac version).

    import AppKit
    import RealityKit
    
    class ViewController: NSViewController {
        
        @IBOutlet var arView: ARView!
        var model = Entity()
        let anchor = AnchorEntity()
    
        // Builds an RGB = XYZ axis gizmo out of three thin boxes.
        fileprivate func visualAnchor() -> Entity {
    
            let colors: [SimpleMaterial.Color] = [.red, .green, .blue]
    
            for index in 0...2 {
                
                let box: MeshResource = .generateBox(size: [0.20, 0.005, 0.005])
                let material = UnlitMaterial(color: colors[index])
                let entity = ModelEntity(mesh: box, materials: [material])
    
                if index == 0 {                    // X axis (red)
                    entity.position.x += 0.1
    
                } else if index == 1 {             // Y axis (green)
                    entity.transform = Transform(pitch: 0, yaw: 0, roll: .pi/2)
                    entity.position.y += 0.1
    
                } else if index == 2 {             // Z axis (blue)
                    entity.transform = Transform(pitch: 0, yaw: -.pi/2, roll: 0)
                    entity.position.z += 0.1
                }
                self.model.addChild(entity)
            }
            model.scale *= 1.5                     // enlarge the whole gizmo once
            return self.model
        }
    
        override func awakeFromNib() {
            super.awakeFromNib()
            anchor.addChild(self.visualAnchor())
            arView.scene.addAnchor(anchor)
        }
    }
    



    About ArAnchors in ARCore

    At the end of my post, I'd like to talk about the five types of anchors used in ARCore 1.40+. Google's official documentation says the following about anchors: "ArAnchor describes a fixed location and orientation in the real world". ARCore anchors work similarly to ARKit anchors in many respects.

    Let's take a look at the ArAnchor types: Local anchors (attached to the session or to a trackable), Cloud Anchors, Geospatial anchors, Terrain anchors and Rooftop anchors.


    These Kotlin code snippets show you how to use Geospatial anchors and Rooftop anchors.

    Geospatial anchors

    fun configureSession(session: Session) {
        session.configure(
            session.config.apply {
                geospatialMode = Config.GeospatialMode.ENABLED
            }
        )
    }
    

    val earth = session?.earth ?: return
    
    if (earth.trackingState != TrackingState.TRACKING) { return }
    

    // Detach the previous anchor before creating a new one.
    earthAnchor?.detach()
    
    // Place the anchor one meter below the camera's altitude, with an identity rotation.
    val altitude = earth.cameraGeospatialPose.altitude - 1
    val qx = 0f; val qy = 0f; val qz = 0f; val qw = 1f
    
    earthAnchor = earth.createAnchor(latLng.latitude, 
                                     latLng.longitude, 
                                     altitude, 
                                     qx, qy, qz, qw)
    

    Rooftop anchors

    // Enable Streetscape Geometry in the session config, then render all streetscape trackables.
    streetscapeGeometryMode = Config.StreetscapeGeometryMode.ENABLED
    val streetscapeGeo = session.getAllTrackables(StreetscapeGeometry::class.java)
    streetscapeGeometryRenderer.render(render, streetscapeGeo)
    

    // Hit-test the screen center and keep only hits on building geometry.
    val centerHits = frame.hitTest(centerCoords[0], centerCoords[1])
    
    val hit = centerHits.firstOrNull {
        val trackable = it.trackable
        trackable is StreetscapeGeometry && 
                     trackable.type == StreetscapeGeometry.Type.BUILDING
    } ?: return
    
    // Create a regular anchor on the building's surface at the hit pose.
    val transformedPose = ObjectPlacementHelper.createStarPose(hit.hitPose)
    val anchor = hit.trackable.createAnchor(transformedPose)
    starAnchors.add(anchor)
    

    // Convert the local pose to a geospatial pose and resolve a Rooftop anchor there.
    val earth = session?.earth ?: return
    val geospatialPose = earth.getGeospatialPose(transformedPose)
    
    earth.resolveAnchorOnRooftopAsync(geospatialPose.latitude, 
                                      geospatialPose.longitude,
                                      0.0,                          // altitude above the rooftop
                                      transformedPose.qx(), 
                                      transformedPose.qy(), 
                                      transformedPose.qz(), 
                                      transformedPose.qw() ) { anchor, state ->
    
        if (!state.isError) {
            balloonAnchors.add(anchor)
        }
    }