There is a rotation vector sensor in Android. As per my understanding, it uses the accelerometer and gyroscope for sensor fusion. There is one more fused sensor, called the geomagnetic rotation vector, which uses the magnetometer instead of the gyroscope.
But I am not able to find the logic behind the sensor fusion of these two virtual sensors. Could you please explain how these two sensors are implemented, or what algorithms are used?
The rotation vector in Android has four data entries (a unit quaternion), though often only the three vector components are populated, and you will usually see it converted into the 9-entry rotation-matrix representation. The accelerometer (measuring acceleration, a) and the gyroscope (measuring angular rate, ω) are different things; don't confuse them. One problem at a time.
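For concreteness, here is a minimal sketch of reading the rotation vector and expanding it into the 9-entry rotation matrix. getRotationMatrixFromVector and getQuaternionFromVector are real SensorManager helpers; the listener class itself is just illustrative:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class RotationVectorListener implements SensorEventListener {
    private final float[] rotationMatrix = new float[9]; // 9-entry matrix form
    private final float[] quaternion = new float[4];     // [w, x, y, z]

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // event.values carries the quaternion's vector part (x, y, z),
            // plus the scalar part w on newer API levels.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            SensorManager.getQuaternionFromVector(quaternion, event.values);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused */ }
}
```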
For the rotation vector in your title, go read the following links; nobody is going to write a whole essay here:
https://en.wikipedia.org/wiki/Euler_angles
https://en.wikipedia.org/wiki/Rotation_matrix
https://en.wikipedia.org/wiki/Quaternion
The math theory involved:
Riemannian geometry
Lie algebras and Lie groups
Rodrigues' rotation formula, etc.
If you read them all thoroughly, it will take you roughly half a year.
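To give you a taste of what Rodrigues' formula does: it maps an axis-angle pair, which is essentially what the rotation vector encodes, to a rotation matrix. A plain-Java sketch (a hypothetical helper of my own, not part of any Android API):

```java
/**
 * Rodrigues' rotation formula: R = I + sin(t)*K + (1 - cos(t))*K^2,
 * where K is the skew-symmetric matrix of the unit axis (ux, uy, uz)
 * and t is the rotation angle in radians. Returns a 3x3 rotation matrix.
 */
static double[][] rodrigues(double ux, double uy, double uz, double theta) {
    double c = Math.cos(theta), s = Math.sin(theta), t = 1.0 - c;
    return new double[][] {
        { c + ux * ux * t,      ux * uy * t - uz * s,  ux * uz * t + uy * s },
        { uy * ux * t + uz * s, c + uy * uy * t,       uy * uz * t - ux * s },
        { uz * ux * t - uy * s, uz * uy * t + ux * s,  c + uz * uz * t      }
    };
}
```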
For a programmer, in short:
Euler angles: three variables, so a compact representation. Rotations about the same axis concatenate by simple angle addition, but general concatenation is awkward, it is difficult to combine with motion/accelerometer data, and it suffers from gimbal lock.
Rotation matrix: concatenation is direct matrix multiplication, and it drops straight into the transformation matrix [R | t; 0 | 1], which also concatenates by matrix multiplication. No gimbal lock, but it needs 9 numbers to represent 3 degrees of freedom.
Quaternion: concatenation is quaternion (Hamilton) multiplication. Four variables, no gimbal lock, but not as easy to concatenate directly with motion. A concatenation sketch for the latter two representations follows below.
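As promised above, here is what "concatenation" looks like in code for the last two representations. These helpers are hypothetical, written out only to make the contrast concrete:

```java
/** Hamilton product a * b: the rotation b applied first, then a.
 *  Quaternions stored as [w, x, y, z]. */
static float[] quatMultiply(float[] a, float[] b) {
    return new float[] {
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],  // w
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],  // x
        a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],  // y
        a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]   // z
    };
}

/** Rotation-matrix concatenation: a plain 3x3 matrix product R = A * B,
 *  on row-major 9-entry arrays as Android's SensorManager uses them. */
static float[] matMultiply3x3(float[] a, float[] b) {
    float[] r = new float[9];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[3*i + j] = a[3*i] * b[j] + a[3*i + 1] * b[3 + j] + a[3*i + 2] * b[6 + j];
    return r;
}
```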
Fusing is another topic entirely: either loosely coupled or tightly coupled. Loosely coupled models usually use an EKF (Extended Kalman Filter); tightly coupled ones usually use graph-optimization methods.
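Android's actual fusion inside the rotation-vector sensors is vendor-specific, but the textbook pattern behind the loosely coupled idea is: let the gyroscope predict the orientation over short time scales, then correct its drift with the accelerometer (and the magnetometer, for the geomagnetic variant). The simplest cousin of the EKF that captures this is a complementary filter; here is a toy sketch on pitch and roll only (my own illustration, not Android's implementation):

```java
/**
 * One update of a toy complementary filter on pitch and roll (radians).
 * Gyro integration is smooth short-term but drifts; the accelerometer's
 * gravity direction is long-term stable but noisy under motion.
 * angles = {pitch, roll}; gyro = angular rates (rad/s); accel = m/s^2.
 */
static void fuseStep(float[] angles, float[] gyro, float[] accel,
                     float dt, float alpha) {
    // Predict: integrate the angular rate (drifts over time).
    float pitchGyro = angles[0] + gyro[0] * dt;
    float rollGyro  = angles[1] + gyro[1] * dt;

    // Measure tilt from gravity (stable on average, noisy when moving).
    float pitchAcc = (float) Math.atan2(-accel[0],
            Math.sqrt(accel[1] * accel[1] + accel[2] * accel[2]));
    float rollAcc  = (float) Math.atan2(accel[1], accel[2]);

    // Blend: alpha near 1 trusts the gyro short-term; the remainder
    // lets the accelerometer pull the drift back.
    angles[0] = alpha * pitchGyro + (1 - alpha) * pitchAcc;
    angles[1] = alpha * rollGyro  + (1 - alpha) * rollAcc;
}
```

An EKF replaces the fixed blend factor alpha with a gain computed from the predicted and measured uncertainties, but the predict/correct structure is the same.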
After this you will want to fuse in other measurements, such as the accelerometer, and then it becomes more complicated, e.g. full-integration versus partial-integration models. Up to here, a lot of the work is at research level and hard to explain in plain words. I suggest you read some recent papers such as VINS-Mono [4] and OKVIS [1][3].
[1] Stefan Leutenegger, Simon Lynen, Michael Bosse, Roland Siegwart and Paul Timothy Furgale. Keyframe-based visual-inertial odometry using nonlinear optimization. The International Journal of Robotics Research, 2015.
[2] Stefan Leutenegger. Unmanned Solar Airplanes: Design and Algorithms for Efficient and Robust Autonomous Operation. Doctoral dissertation, 2014.
[3] Stefan Leutenegger, Paul Timothy Furgale, Vincent Rabaud, Margarita Chli, Kurt Konolige and Roland Siegwart. Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization. In Proceedings of Robotics: Science and Systems, 2013.
[4] Tong Qin, Peiliang Li, Zhenfei Yang and Shaojie Shen. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Transactions on Robotics, 2017.