Tags: python, 3d, camera, 2d, hud

Have a 3D point move on a HUD depending on the direction of the camera


My problem is the following:

Imagine I'm at position (x, y, z) and I have several points (xn, yn, zn). Depending on my view direction - assuming I have the vertical, horizontal and roll angles - I want my HUD to identify those points if they fall within my field of view, and to move them around whenever any of the angles changes. Basically, I want to turn each point into (x, y) coordinates on the screen.

Like the quest-marker tracking behavior in this game: https://www.youtube.com/watch?v=I_LlEC-xB50

How would I do this?

Edit: I get the coordinates using:

import math
import numpy as np

def convert_to_xyz(point):
  # Lat / Lon / Alt -> point[0] / point[1] / point[2] (lat/lon in radians)
  # The radial distance is earth radius + altitude
  # _earth_radius is defined elsewhere, in the same unit as the altitude

  r = point[2] + _earth_radius
  x = math.cos(point[0]) * math.cos(point[1]) * r
  y = math.cos(point[0]) * math.sin(point[1]) * r
  z = math.sin(point[0]) * r  # z is 'up'

  return np.array([x, y, z])
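
For reference, a minimal usage sketch, assuming the latitude and longitude arrive in degrees (so they are converted to radians first) and that _earth_radius is given in the same unit as the altitude; the coordinates and radius below are made-up values:

import math

_earth_radius = 6371000.0  # metres, illustrative value

# Hypothetical point: latitude 48.0 deg, longitude 11.0 deg, altitude 500 m
world_point = convert_to_xyz([math.radians(48.0), math.radians(11.0), 500.0])
print(world_point)  # -> array([x, y, z]) measured from the earth's centre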

Getting the Camera Matrix:

def get_camera_matrix(fovx, fovy, height, width):
  # FOVX is the full horizontal FOV angle of the camera
  # FOVY is the full vertical FOV angle of the camera
  x = width / 2
  y = height / 2
  fx = x / math.tan(fovx / 2)  # focal length in pixels, from half the FOV angle
  fy = y / math.tan(fovy / 2)
  return np.array([[fx, 0, x],
                   [0, fy, y],
                   [0,  0, 1]])

Transform to camera space:

def transform_to_camera_space(point, camera_matrix):
  # Column-vector convention: the camera matrix multiplies the point from the left
  return np.dot(camera_matrix, point)

Then I follow @spug's answer and get values like:

array([ 133.99847154,  399.15007301])
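
For context, here is roughly how the pieces chain together. The field of view, screen size and camera-space point below are hypothetical, and the point is assumed to already be reordered into the (right, down, forward) order the camera matrix expects; the division by the third component is the perspective divide described in Step 2 of the answer below:

import math
import numpy as np

camera_matrix = get_camera_matrix(math.radians(90), math.radians(60), 600, 800)

# Hypothetical camera-space point: 2 m to the right, 1 m below eye level, 10 m ahead
cam_space_point = np.array([2.0, 1.0, 10.0])

projected = transform_to_camera_space(cam_space_point, camera_matrix)
screen_xy = projected[:2] / projected[2]  # perspective divide -> pixel (x, y)
print(screen_xy)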

Solution

  • Step 1:

    Transform the point from world space to camera space by multiplying it by the camera matrix. You should read up on constructing this matrix; there are countless resources on the web. With (pitch, yaw, roll) angles, the rotations must happen in the order roll -> pitch -> yaw, which corresponds to:

    1. Rotation about the X-axis through angle roll -> matrix R

    2. Rotation about the Y-axis through angle pitch -> matrix P

    3. Rotation about the Z-axis through angle yaw -> matrix Y

    The rotational part of the camera matrix is thus given by (YPR)^T, in that order of multiplication. The X, Y and Z rotation matrices are given on this page: https://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations.

    The point in camera space is given by q = (YPR)^T * (p - c), where p = (xn, yn, zn) is the point in world space and c = (x, y, z) is your camera position. The alternative is to construct a 4x4 matrix whose rotational part is (YPR)^T and whose 4th column is -(YPR)^T * c - again, plenty of references are available on the internet.

    At this point, discard the point q if its X-value is below some limit (called the near clipping plane - set this to some positive value). This ensures points behind the camera are not shown.
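
    A minimal sketch of this step, assuming the angles are in radians and using the basic rotation matrices from the Wikipedia page linked above; the camera pose and the point below are made-up values:

import numpy as np

def world_to_camera_space(p, c, yaw, pitch, roll):
  # q = transpose(Y P R) * (p - c)
  cr, sr = np.cos(roll), np.sin(roll)
  cp, sp = np.cos(pitch), np.sin(pitch)
  cy, sy = np.cos(yaw), np.sin(yaw)
  R = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
  P = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
  Y = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
  return (Y @ P @ R).T @ (np.asarray(p) - np.asarray(c))

camera_pos = np.array([0.0, 0.0, 0.0])
point_world = np.array([10.0, 3.0, 1.0])
yaw, pitch, roll = np.radians([20.0, 5.0, 0.0])

q = world_to_camera_space(point_world, camera_pos, yaw, pitch, roll)
near_plane = 0.1                 # any small positive value works
if q[0] > near_plane:            # X is the forward axis in this convention
    pass                         # keep the point and hand it to the projection step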


  • Step 2:

    Below is a diagram illustrating the process behind perspective projection:

    [diagram: perspective projection of the X screen coordinate - image not available]

    Similarly for Y:

    [diagram: perspective projection of the Y screen coordinate - image not available]
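
    Since the diagrams are not reproduced here, the following sketch spells out the similar-triangles projection they illustrate. It assumes the camera-space convention from Step 1 (X forward, Y to the left, Z up, because the world Z axis is 'up'); the field of view, screen size and point are illustrative:

import math
import numpy as np

def project_to_screen(q, fovx, fovy, width, height):
  # q is a camera-space point ordered (forward, left, up)
  fx = (width / 2) / math.tan(fovx / 2)    # horizontal focal length in pixels
  fy = (height / 2) / math.tan(fovy / 2)   # vertical focal length in pixels
  if q[0] <= 0:                            # behind the camera, nothing to draw
    return None
  screen_x = width / 2 - fx * q[1] / q[0]  # similar triangles: offset shrinks with distance
  screen_y = height / 2 - fy * q[2] / q[0] # screen y grows downwards, camera z grows upwards
  return np.array([screen_x, screen_y])

# Hypothetical point: 10 m ahead, 1 m to the left, 0.5 m above eye level
print(project_to_screen(np.array([10.0, 1.0, 0.5]),
                        math.radians(90), math.radians(60), 800, 600))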