8thwall-xr

Use 8th Wall XR image detection size to scale target object in Unity3D


Is it possible to adjust the size of the placed model based on the size of the detected real-world image? I have a painting that I'm augmenting with an AR model that replaces the painting once the image is detected. It should perfectly overlay the painting. The painting is 45 centimeters wide, and this width is provided to the XRImageDetectionController script. When I run my application with the target image visible at its true dimensions (45 cm × 28 cm), the effect is as expected. Ideally, I'd like to be able to demo this augmented painting in a variety of settings where the real-world image may be a different size (keeping the aspect ratio the same). My particular device is an ARCore-compatible Android phone.


Solution

  • I've started using 8th Wall lately, but I haven't created my own project yet (I've just toyed with the demo projects and checked out the source code), so I can't say for certain that this will work, but here goes:

    If you look in the 8th Wall XRDataTypes.cs file, you can find the data types XRDetectionTexture, XRDetectionImage, and XRDetectedImageTarget. Each of these data types has fields describing its dimensions.

    XRDetectionTexture:

    /**
     * A unity Texture2D that can be used as a source for image-target detection.
     */
    [Serializable] public struct XRDetectionTexture {
    
    [...]
    
      /**
       * The expected physical width of the image-target, in meters.
       */
      public float widthInMeters;
    
    [...]
    }
    

    XRDetectionImage:

    /**
     * Source image data for an image-target to detect. This can either be
     * constructed manually, or from a Unity Texture2D.
     */
    public struct XRDetectionImage {
      /**
       * The width of the source binary image-target, in pixels.
       */
      public readonly int widthInPixels;
    
      /**
       * The height of the source binary image-target, in pixels.
       */
      public readonly int heightInPixels;
    
      /**
       * The expected physical width of the image-target, in meters.
       */
      public readonly float targetWidthInMeters;
      [...]
    }
    

    XRDetectedImageTarget:

    /**
     * An image-target that was detected by an AR Engine.
     */
    public struct XRDetectedImageTarget {
      [...]
    
      /**
       * Width of the detected image-target, in unity units.
       */
      public readonly float width;
    
      /**
       * Height of the detected image-target, in unity units.
       */
      public readonly float height;
      [...]
    }
    

    Not having done this myself, I can't give you working code examples, but the 8th Wall documentation on the basics of image detection is pretty decent, and it does indicate that an instance of XRDetectedImageTarget is passed into the callback method specified on the detected model (image copied from the 8th Wall documentation, 2019-01-18):

    [Image: excerpt from the 8th Wall image-detection documentation showing a callback that receives an XRDetectedImageTarget]
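
    As a rough sketch of that callback (the class name, method name, and how it gets registered are my assumptions rather than the documented API; only XRDetectedImageTarget and its fields come from XRDataTypes.cs):

    using UnityEngine;

    public class ImageTargetHandler : MonoBehaviour {
      //Hypothetical detection callback, invoked by the engine with the
      //detected image-target (the 8th Wall type shown above).
      public void OnImageDetected(XRDetectedImageTarget detectedTarget) {
        Debug.Log("Detected target: " + detectedTarget.width + " x "
            + detectedTarget.height + " (unity units)");
      }
    }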

    So if you know the model-to-image ratio that you want (e.g. "the width of the model should be half the width of the detected image"), then in the callback you should be able to do something like:

    //Pseudocode: calculating the size ratio may be more involved than this.
    //xrDetectedImageTarget.width is the size the engine actually detected (in
    //unity units), while xrDetectionImage.targetWidthInMeters is the physical
    //width the target was configured with (0.45m in your case).
    var sizeRatio = xrDetectedImageTarget.width / xrDetectionImage.targetWidthInMeters;

    var placedModel = Instantiate(prefabModel, newPosition, newRotation, parentTransform);
    placedModel.transform.localScale *= sizeRatio;
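
    Putting that together, here is a minimal sketch of a component that scales the placed model to match the detected size. The field names, the callback name, and the wiring are again my assumptions; only XRDetectedImageTarget.width comes from XRDataTypes.cs, and positioning the model is left out:

    using UnityEngine;

    public class PaintingOverlay : MonoBehaviour {
      //Model authored to overlay the painting at its original 45cm width.
      public GameObject paintingPrefab;

      //The physical width the target was configured with (the same value
      //given to the XRImageDetectionController script), in meters.
      public float configuredWidthInMeters = 0.45f;

      //Hypothetical callback, invoked when the image-target is detected.
      public void OnImageDetected(XRDetectedImageTarget detectedTarget) {
        //Assumes 1 unity unit == 1 meter (the usual Unity AR convention):
        //how much bigger (or smaller) the real-world print is than the
        //configured 45cm original.
        float sizeRatio = detectedTarget.width / configuredWidthInMeters;

        var placed = Instantiate(paintingPrefab, transform.position, transform.rotation);
        placed.transform.localScale *= sizeRatio;
      }
    }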
    

    Hope that works/helps!