I would like to collect the magnetic field vector (i.e. x, y, z in microteslas) from the TYPE_MAGNETIC_FIELD position sensor and put it in the same coordinate system as the Frames we get from ARCore.
The magnetic field vector is reported in the sensor coordinate system, and we need to get it into the camera coordinate system. I believe I can use two APIs that are provided on every camera frame, getAndroidSensorPose() and Camera.getPose() (I prefer the NDK version of the docs).
Below, I compute the magnetic vector in the camera coordinate system (magneticVectorInCamera). When I test it (by passing a weak magnet around the phone, and by comparing it to the raw x, y, z values of iOS's CLHeading), I don't get the values I expect. Any suggestions?
scene.addOnUpdateListener(frameTime -> processFrame(this.sceneView.getArFrame()));
public void processFrame(Frame frame) {
    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
        return;
    }
    // Get the magnetic vector in sensor coordinates that we stored in the onSensorChanged() delegate
    float[] magneticVectorInSensor = {x, y, z};
    // Get sensor to world
    Pose sensorToWorldPose = frame.getAndroidSensorPose();
    // Get world to camera
    Pose cameraToWorldPose = frame.getCamera().getPose();
    Pose worldToCameraPose = cameraToWorldPose.inverse();
    // Get sensor to camera
    Pose sensorToCameraPose = sensorToWorldPose.compose(worldToCameraPose);
    // Get the magnetic vector in camera coordinate space
    float[] magneticVectorInCamera = sensorToCameraPose.rotateVector(magneticVectorInSensor);
}
@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    int sensorType = sensorEvent.sensor.getType();
    switch (sensorType) {
        case Sensor.TYPE_MAGNETIC_FIELD:
            mMagnetometerData = sensorEvent.values.clone();
            break;
        default:
            return;
    }
    x = mMagnetometerData[0];
    y = mMagnetometerData[1];
    z = mMagnetometerData[2];
}
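For completeness, the magnetometer listener is registered roughly like this (a minimal sketch; SENSOR_DELAY_GAME and the surrounding Activity are assumptions, not shown in the code above):

// Register for TYPE_MAGNETIC_FIELD updates; values arrive in microteslas in the
// Android sensor coordinate system. Assumes `this` implements SensorEventListener.
SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
if (magnetometer != null) {
    sensorManager.registerListener(this, magnetometer, SensorManager.SENSOR_DELAY_GAME);
}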
Here is an example of the log output I get from this:
V/processFrame: magneticVectorInSensor: [-173.21014, -138.63983, 54.873657]
V/processFrame: sensorToWorldPose: t:[x:-1.010, y:-0.032, z:-0.651], q:[x:-0.28, y:-0.62, z:-0.21, w:0.71]
V/processFrame: cameraToWorldPose: t:[x:-0.941, y:0.034, z:-0.610], q:[x:-0.23, y:0.62, z:0.66, w:-0.35]
V/processFrame: worldToCameraPose: t:[x:-0.509, y:0.762, z:-0.647], q:[x:0.23, y:-0.62, z:-0.66, w:-0.35]
V/processFrame: sensorToCameraPose: t:[x:-0.114, y:0.105, z:-1.312], q:[x:0.54, y:-0.46, z:-0.08, w:-0.70]
V/processFrame: magneticVectorInCamera: [15.159668, 56.381603, 220.96408]
One thing I'm confused about is why my sensorToCameraPose changes as I move my phone:
sensorToCameraPose: t:[x:0.068, y:-0.014, z:0.083], q:[x:0.14, y:-0.65, z:-0.25, w:-0.70]
sensorToCameraPose: t:[x:0.071, y:-0.010, z:0.077], q:[x:0.11, y:-0.66, z:-0.23, w:-0.70]
sensorToCameraPose: t:[x:0.075, y:-0.007, z:0.070], q:[x:0.08, y:-0.68, z:-0.20, w:-0.70]
sensorToCameraPose: t:[x:0.080, y:-0.007, z:0.061], q:[x:0.05, y:-0.69, z:-0.18, w:-0.70]
sensorToCameraPose: t:[x:0.084, y:-0.008, z:0.052], q:[x:0.01, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.091, y:-0.011, z:0.045], q:[x:-0.03, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.094, y:-0.017, z:0.037], q:[x:-0.09, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.098, y:-0.026, z:0.027], q:[x:-0.16, y:-0.67, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.100, y:-0.037, z:0.020], q:[x:-0.23, y:-0.65, z:-0.19, w:-0.70]
sensorToCameraPose: t:[x:0.098, y:-0.046, z:0.012], q:[x:-0.30, y:-0.62, z:-0.20, w:-0.70]
sensorToCameraPose: t:[x:0.096, y:-0.055, z:0.005], q:[x:-0.35, y:-0.59, z:-0.19, w:-0.70]
sensorToCameraPose: t:[x:0.092, y:-0.061, z:-0.003], q:[x:-0.41, y:-0.56, z:-0.18, w:-0.70]
sensorToCameraPose: t:[x:0.086, y:-0.066, z:-0.011], q:[x:-0.45, y:-0.52, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.080, y:-0.069, z:-0.018], q:[x:-0.49, y:-0.49, z:-0.16, w:-0.70]
sensorToCameraPose: t:[x:0.073, y:-0.071, z:-0.025], q:[x:-0.53, y:-0.45, z:-0.15, w:-0.70]
sensorToCameraPose: t:[x:0.065, y:-0.072, z:-0.031], q:[x:-0.56, y:-0.42, z:-0.13, w:-0.70]
sensorToCameraPose: t:[x:0.059, y:-0.072, z:-0.038], q:[x:-0.59, y:-0.38, z:-0.13, w:-0.70]
sensorToCameraPose: t:[x:0.053, y:-0.071, z:-0.042], q:[x:-0.61, y:-0.35, z:-0.12, w:-0.70]
sensorToCameraPose: t:[x:0.047, y:-0.069, z:-0.046], q:[x:-0.63, y:-0.32, z:-0.11, w:-0.70]
sensorToCameraPose: t:[x:0.041, y:-0.067, z:-0.048], q:[x:-0.64, y:-0.28, z:-0.10, w:-0.70]
sensorToCameraPose: t:[x:0.037, y:-0.064, z:-0.050], q:[x:-0.65, y:-0.26, z:-0.10, w:-0.70]
sensorToCameraPose: t:[x:0.032, y:-0.060, z:-0.052], q:[x:-0.67, y:-0.23, z:-0.09, w:-0.70]
sensorToCameraPose: t:[x:0.027, y:-0.057, z:-0.054], q:[x:-0.68, y:-0.20, z:-0.08, w:-0.70]
Note: there are several other questions about converting the magnetic field vector into a global coordinate space (e.g. this and this), but I haven't been able to find anything about going to the camera coordinate space.
There were two issues with my code above.
First, I was using compose incorrectly. To transform by A and then B, you do B.compose(A). With that fix I started getting consistent sensorToCameraPose values.
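A minimal sketch of that convention, using Pose.makeTranslation / Pose.makeRotation from the ARCore SDK (the specific poses are only illustrative, not from my app):

// "Apply A, then B" is B.compose(A): transforming a point by the composed
// pose applies A first, then B.
Pose a = Pose.makeTranslation(1, 0, 0);                                            // A: translate +1 along x
Pose b = Pose.makeRotation(0, 0, (float) Math.sqrt(0.5), (float) Math.sqrt(0.5));  // B: +90° about Z
float[] p = b.compose(a).transformPoint(new float[] {0f, 0f, 0f});
// The origin first moves to (1, 0, 0) via A, then rotates to roughly (0, 1, 0) via B.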
Second, after that fix, I still had a 90° rotation between x and y. From u/inio on Reddit:
So usually for phone form-factor devices there will be a 90° rotation between the camera coordinate system (which is defined to have +x point in the direction of the horizontal axis of the physical camera image, typically the long axis of the device) and the android sensor coordinate system (which has +y pointing away from the android navigation buttons, and +x thus along the short axis of the device). The difference you describe is an 88.8° rotation. Maybe you want the virtual camera pose? Source
I tested using getDisplayOrientedPose(). With it, I get what I expect when I am in portrait mode, but if I flip to landscape the coordinate system changes and I am off by a 90° rotation. So I instead applied the rotation myself:
public void processFrame(Frame frame) {
    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
        return;
    }
    // Get the magnetic vector in sensor coordinates that we stored in the onSensorChanged() delegate
    float[] magneticVectorInSensor = {x, y, z};
    // Get sensor to world
    Pose sensorToWorldPose = frame.getAndroidSensorPose();
    // Get camera to world
    Pose cameraToWorldPose = frame.getCamera().getPose();
    // +90° rotation about Z to account for the axis difference between the camera and sensor frames
    // https://github.com/google-ar/arcore-android-sdk/issues/535#issuecomment-418845833
    Pose CAMERA_POSE_FIX = Pose.makeRotation(0, 0, (float) Math.sqrt(0.5), (float) Math.sqrt(0.5));
    Pose rotatedCameraToWorldPose = cameraToWorldPose.compose(CAMERA_POSE_FIX);
    // Get world to camera
    Pose worldToCameraPose = rotatedCameraToWorldPose.inverse();
    // Get sensor to camera (apply sensorToWorld first, then worldToCamera)
    Pose sensorToCameraPose = worldToCameraPose.compose(sensorToWorldPose);
    // Get the magnetic vector in camera coordinate space
    float[] magneticVectorInCamera = sensorToCameraPose.rotateVector(magneticVectorInSensor);
}
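One quick sanity check I'd add inside processFrame (my own suggestion, not required for the fix): rotateVector() applies only the rotation part of the pose, so the field magnitude in microteslas should be the same before and after the transform. If the two magnitudes differ, something other than a pure rotation is being applied.

// Sanity check: a rotation preserves vector length, so the magnitudes should match.
float magSensor = (float) Math.sqrt(
        magneticVectorInSensor[0] * magneticVectorInSensor[0]
      + magneticVectorInSensor[1] * magneticVectorInSensor[1]
      + magneticVectorInSensor[2] * magneticVectorInSensor[2]);
float magCamera = (float) Math.sqrt(
        magneticVectorInCamera[0] * magneticVectorInCamera[0]
      + magneticVectorInCamera[1] * magneticVectorInCamera[1]
      + magneticVectorInCamera[2] * magneticVectorInCamera[2]);
Log.v("processFrame", "field magnitude (uT): sensor=" + magSensor + ", camera=" + magCamera);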