The following is my case:
left = 0, right = 1824, top = 0, bottom = 989
with aspect ratio 1.84f and a uniform z axis = -3f,
-1.84 to 1.84 on the x axis,
and 1 to -1 on the y axis,
and all the small rectangles lie in those ranges. Then, to render those rectangles in VR, I simply applied:
view = perspective(zNear = 0.1f, zFar = 100f) * eye.getEyeView()
model = orthographic transformation to map into the range (-1.84f to 1.84f on X, 1 to -1 on Y)
modelView = view * model
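In code, this per-eye setup looks roughly like the following minimal sketch (assuming the GVR Eye API with eye.getPerspective() and eye.getEyeView(), plus android.opengl.Matrix):

import android.opengl.Matrix;

// Per-eye matrices; 'model' is the orthographic-style transform described above.
float[] perspective = eye.getPerspective(0.1f, 100f); // zNear, zFar
float[] eyeView = eye.getEyeView();
float[] view = new float[16];
float[] modelView = new float[16];
Matrix.multiplyMM(view, 0, perspective, 0, eyeView, 0); // view = perspective * eyeView
Matrix.multiplyMM(modelView, 0, view, 0, model, 0);     // modelView = view * model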
To emulate the controller, I used a simple circle with (centerX = 0, centerY = 0, radius = 0.04f, z = -3f).
I applied the following transformation to the circle, using the controller rotation matrix (https://developers.google.com/vr/reference/android/com/google/vr/sdk/controller/Orientation.html#toRotationMatrix(float[])):
view = perspective(zNear = 0.1f, zFar = 100f) * eye.getEyeView()
controllerMatrix = controller.toRotationMatrix
modelView = view * controllerMatrix
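As a minimal sketch (assuming the GVR Controller API, where the linked Orientation.toRotationMatrix(float[]) fills a column-major 4x4 matrix, and 'controller' exposes its orientation as a public field):

float[] controllerMatrix = new float[16];
controller.orientation.toRotationMatrix(controllerMatrix); // rotation only, no translation
Matrix.multiplyMM(modelView, 0, view, 0, controllerMatrix, 0); // modelView = view * controllerMatrix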
My assumption was the following:
So, my question is: for this purpose, should I go for ray casting, or is there a simpler solution for selecting the small rectangles with the VR controller?
I am using the GVR controller (Google VR SDK version 1.80.0) and Android for app development.
Thank you.
The Google VR SDK comes with some useful example projects which demonstrate how to achieve this. If you inspect the Treasure Hunt project, the TreasureHuntActivity class has a method isLookingAtObject():
private boolean isLookingAtObject() {
  // Convert object space to camera space. Use the headView from onNewFrame.
  Matrix.multiplyMM(modelView, 0, headView, 0, modelCube, 0);
  Matrix.multiplyMV(tempPosition, 0, modelView, 0, POS_MATRIX_MULTIPLY_VEC, 0);
  float pitch = (float) Math.atan2(tempPosition[1], -tempPosition[2]);
  float yaw = (float) Math.atan2(tempPosition[0], -tempPosition[2]);
  return Math.abs(pitch) < PITCH_LIMIT && Math.abs(yaw) < YAW_LIMIT;
}
This calculates the angle between the camera's line-of-sight vector and the object position defined by the modelCube matrix. The modelCube matrix is a 4x4 matrix defining the scale, rotation and translation of the centre point of an object in the 3D scene. In your case this will be the centre point of the rectangle in 3D space.
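For example, a minimal sketch of such a model matrix for one of your rectangles (rectX and rectY are hypothetical centre coordinates, following your z = -3f setup):

float[] modelCube = new float[16];
Matrix.setIdentityM(modelCube, 0);
Matrix.translateM(modelCube, 0, rectX, rectY, -3f); // centre of the rectangle at z = -3f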
You must also calculate the angles at which the vector defining the camera's pointing direction intersects the rectangle. This is depicted in the following diagram:
Here the origin O is your camera position, the point C is the centroid of the rectangle rendered in 3D space, and the vector OC is the direction vector your camera points in when it is focused directly on the centre of the rectangle. θ is the angle from the X axis to the point C in the X-Z plane. Points P1-P4 are the vertices of your rectangle in 3D space.
The pitch φ is defined in the following diagram:
Here φ is the angle from the centre point C to the new axis e.
The PITCH_LIMIT is the angle in the Y-e plane between two vectors: the vector from the camera position to the bottom edge of the object's bounding box, and the vector from the camera position to the upper edge of the object's bounding box. In this case it can be calculated as either the angle between OP3 and OP4 or the angle between OP2 and OP1. This is depicted in the following 2D cross section:
Assuming the camera is looking directly at the rectangle, this limit defines how much the camera must be tilted up before the centre of the camera is no longer pointing directly at it.
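A minimal sketch of computing such a per-object PITCH_LIMIT, assuming topVertex and bottomVertex are hypothetical camera-space positions (float[4]) on the top and bottom edges of the rectangle, and treating the centre as bisecting the angular extent:

// Half the vertical angular extent of the object as seen from the camera,
// using the same pitch convention as isLookingAtObject().
float pitchTop = (float) Math.atan2(topVertex[1], -topVertex[2]);
float pitchBottom = (float) Math.atan2(bottomVertex[1], -bottomVertex[2]);
float pitchLimit = Math.abs(pitchTop - pitchBottom) / 2f;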
The YAW_LIMIT is the angle in the X-Z plane between two vectors: the vector from the camera position to the left-most edge of the object's bounding box, and the vector from the camera position to the right-most edge of the object's bounding box. In this case it can be calculated as either the angle between OP1 and OP4 or the angle between OP2 and OP3. This is depicted in the following 2D cross section:
Likewise, this limit defines how much the camera can be rotated left or right before the centre of the camera is no longer pointing directly at the rectangle.
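And similarly for the YAW_LIMIT, with hypothetical camera-space leftVertex and rightVertex on the left and right edges of the rectangle:

// Half the horizontal angular extent, matching the yaw convention above.
float yawLeft = (float) Math.atan2(leftVertex[0], -leftVertex[2]);
float yawRight = (float) Math.atan2(rightVertex[0], -rightVertex[2]);
float yawLimit = Math.abs(yawLeft - yawRight) / 2f;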
Note that the YAW_LIMIT and PITCH_LIMIT values get smaller as you move further away from the object and larger as you get closer. You must calculate these limits for the object you want to hit-test, based on the current camera position and the edge vertices of the object. The coordinate conversions from object space to camera space are handled in the isLookingAtObject() method.
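For completeness, a sketch of converting one object-space vertex into camera space, following the same pattern as isLookingAtObject() (vertexObjSpace is a hypothetical {x, y, z, 1} float[4]):

float[] modelView = new float[16];
float[] vertexCamSpace = new float[4];
Matrix.multiplyMM(modelView, 0, headView, 0, modelCube, 0);     // object space -> camera space matrix
Matrix.multiplyMV(vertexCamSpace, 0, modelView, 0, vertexObjSpace, 0);

You can then feed the resulting camera-space edge vertices into the limit calculations above.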