Tags: c++, 3d, irrlicht

Irrlicht: draw 2D image in 3D space based on four corner coordinates


I would like to create a function to position a free-floating 2D raster image in space with the Irrlicht engine. The inspiration for this is the function rgl::show2d in the R package rgl. An example implementation in R can be found here.

[screenshot: "chewie in space" (rgl::show2d example output)]

The input data should be limited to the path to the image and a table with the four corner coordinates of the respective plot rectangle.

My first, rather primitive and ultimately unsuccessful approach to realizing this with Irrlicht:

Create a cube:

ISceneNode * picturenode = scenemgr->addCubeSceneNode();

Flatten one side:

picturenode->setScale(vector3df(1, 0.001, 1));

Add image as texture:

picturenode->setMaterialTexture(0, driver->getTexture("path/to/image.png"));

Place flattened cube at the center position of the four corner coordinates. I just calculate the mean coordinates on all three axes with a small function position_calc().

vector3df position = position_calc(rcdf);
picturenode->setPosition(position);

Determine the object rotation by calculating the normal of the plane defined by the four corner coordinates, normalizing the result and trying to somehow translate the resulting vector to rotation angles.

vector3df normal = normal_calc(rcdf);
vector3df angles = (normal.normalize()).getSphericalCoordinateAngles();
picturenode->setRotation(angles);

This solution doesn't produce the expected result. The rotation calculation is wrong, and with this approach I'm also unable to scale the image correctly to its corner coordinates.

How can I fix my workflow? Or is there a much better way to achieve this with Irrlicht that I'm not aware of?


Edit: Thanks to @spug I believe I'm almost there. I tried to implement his method 2, because quaternions are already available in Irrlicht. Here's what I came up with to calculate the rotation:

#include <Rcpp.h>
#include <irrlicht.h>
#include <math.h>

using namespace Rcpp;

core::vector3df rotation_calc(DataFrame rcdf) {

  NumericVector x = rcdf["x"];
  NumericVector y = rcdf["y"];
  NumericVector z = rcdf["z"];

  // Z-axis
  core::vector3df zaxis(0, 0, 1);
  // resulting image's normal
  core::vector3df normal = normal_calc(rcdf);

  // calculate the rotation from the original image's normal (i.e. the Z-axis) 
  // to the resulting image's normal => quaternion P.
  core::quaternion p;
  p.rotationFromTo(zaxis, normal);

  // take the midpoint of AB from the diagram in method 1, and rotate it with 
  // the quaternion P => vector U.
  core::vector3df MAB(0, 0.5, 0);
  core::quaternion m(MAB.X, MAB.Y, MAB.Z, 0);
  // makeInverse() modifies the quaternion in place, so invert a copy;
  // otherwise p is mutated before it is reused in q * p below
  core::quaternion pInv(p);
  pInv.makeInverse();
  core::quaternion rot = p * m * pInv;
  core::vector3df u(rot.X, rot.Y, rot.Z);

  // calculate the rotation from U to the midpoint of DE => quaternion Q
  core::vector3df MDE(
      (x(0) + x(1)) / 2,
      (y(0) + y(1)) / 2,
      (z(0) + z(1)) / 2
  );
  core::quaternion q;
  q.rotationFromTo(u, MDE);

  // multiply in the order Q * P, and convert to Euler angles
  core::quaternion f = q * p;
  core::vector3df euler;
  f.toEuler(euler);

  // to degrees
  core::vector3df degrees(
    euler.X * (180.0 / M_PI),
    euler.Y * (180.0 / M_PI),
    euler.Z * (180.0 / M_PI)
  );

  Rcout << "degrees: " <<  degrees.X << ", " << degrees.Y << ", " << degrees.Z << std::endl;

  return degrees;
}

The result is almost correct, but the rotation on one axis is wrong. Is there a way to fix this or is my implementation inherently flawed?

That's what the result looks like now. The points mark the expected corner points.

[screenshot: "berries in space" (current result, one axis still misrotated)]


Solution

  • I've thought of two ways to do this; neither is very graceful - not helped by Irrlicht restricting us to spherical polar angles.

    NB. the below assumes rcdf is centered at the origin; this is to make the rotation calculation a bit more straightforward. Easy to fix though:

    1. Compute the center point (the translational offset) of rcdf
    2. Subtract this from all the points of rcdf
    3. Perform the procedures below
    4. Add the offset back to the result points.

    Pre-requisite: scaling

    This is easy; simply calculate the ratios of width and height in your rcdf to your original image, then call setScale.



    Method 1: matrix inversion

    For this we need an external library which supports 3x3 matrices, since Irrlicht only has 4x4 (I believe).

    We need to solve the matrix equation which rotates the image from X-Y to rcdf. For this we need 3 points in each frame of reference. Two of these we can immediately set to adjacent corners of the image; the third must point out of the plane of the image (since we need data in all three dimensions to form a complete basis) - so to calculate it, simply multiply the normal of each image by some offset constant (say 1).

    [diagram: points A, B on the original image and D, E on the target rectangle, with the out-of-plane normal points]

    (Note the points on the original image have been scaled)

    The equation to solve is therefore:

    R * [A B N] = [D E N']  =>  R = [D E N'] * [A B N]^-1

    (using column notation, i.e. A, B, N and D, E, N' are the columns of the matrices). The Eigen library provides 3x3 matrices and their inverses.

    Then convert this matrix to Euler angles: https://www.learnopencv.com/rotation-matrix-to-euler-angles/


    Method 2:

    To calculate the quaternion to rotate from direction vector A to B: Finding quaternion representing the rotation from one vector to another

    1. Calculate the rotation from the original image's normal (i.e. the Z-axis) to rcdf's normal => quaternion P.

    2. Take the midpoint of AB from the diagram in method 1, and rotate it with the quaternion P (http://www.geeks3d.com/20141201/how-to-rotate-a-vertex-by-a-quaternion-in-glsl/) => vector U.

    3. Calculate the rotation from U to the midpoint of DE => quaternion Q

    4. Multiply in the order Q * P, and convert to Euler angles: https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles

    (Not sure if Irrlicht has support for quaternions)