Tags: machine-learning, sensors, drift

How do I correct sensor drift caused by the external environment?


I have an electromagnetic sensor that reports the electromagnetic field strength at its location in space, and a device that emits an electromagnetic field covering roughly a 1-meter area.

I want to predict the position of the sensor from its readings. However, the sensor is affected by nearby metal, which makes the position prediction drift.

For example, if the true reading is 1 and you place the sensor near metal, you get 2. It's not just noise; it's a persistent offset: unless you remove the metal, the sensor will always read 2.

What techniques or topics should I learn, in general, to recover the true reading of 1 from the measured 2? Assume the metal is fixed somewhere in space and that I can calibrate the sensor by placing it near the metal first.

Any suggestions about removing the drift in general are welcome. Also, please consider that I can place another emitter somewhere, which should make it easier to recover the true reading.


Solution

  • Let me suggest that you view your sensor output as a combination of two factors:

    sensor_output = emitter_effect + environment_effect
    

    You want to obtain emitter_effect without the contribution of environment_effect, so of course you need to subtract:

    emitter_effect = sensor_output - environment_effect 
    

    Subtracting the environment's effect on your sensor is usually called compensation. To compensate, you need to be able to model or predict the effect your environment (the extra metal floating around) is having on the sensor. The model of this environment effect can be very simple or very complex.
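
    As a minimal sketch of that idea in Python (the numbers and function names here are invented for illustration; this assumes you can take a few calibration readings at a spot where the true value is known):

    import statistics

    def calibrate_environment(readings_near_metal, true_value):
        # The persistent offset is whatever the calibration readings
        # show beyond the known true value at that spot.
        return statistics.mean(readings_near_metal) - true_value

    def compensate(sensor_output, environment_effect):
        # Recover the emitter-only reading by subtracting the offset.
        return sensor_output - environment_effect

    # Calibration: near the metal the sensor reads ~2 although the truth is 1.
    offset = calibrate_environment([2.02, 1.98, 2.01], true_value=1.0)
    print(compensate(2.0, offset))  # ~1.0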

    Simple methods generally use a separate sensor to estimate environment_effect. I'm not sure exactly what your scenario is, but you may be able to select a sensor that independently measures the amount of interference (metal) in your setup.
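
    For example (a sketch under assumed names; the scale factor mapping the reference sensor's units to an offset in the main sensor's units would come from your own calibration):

    def compensate_with_reference(sensor_output, reference_reading, scale):
        # reference_reading: from an independent sensor that responds to the
        # interference (metal) but not to the emitter.
        # scale: calibration factor from reference units to main-sensor units.
        environment_effect = scale * reference_reading
        return sensor_output - environment_effect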

    More complex methods can perform compensation without an independent sensor for measuring interference. For example, if you expect the distance to average 10.0 with only occasional deviations, you could use that fact to estimate how much interference is present. In my experience, this type of method is less reliable; systems with independent sensors for measuring interference are more predictable and reliable.
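
    A sketch of that sensor-free approach (the expected value of 10.0 and the smoothing factor are assumptions for illustration): slowly track the sustained deviation from the expected value and treat it as the interference estimate:

    def estimate_interference(readings, expected=10.0, alpha=0.01):
        # Exponentially weighted average of the deviation from the expected
        # value: brief deviations mostly cancel out, while a sustained shift
        # (the metal's persistent offset) accumulates into the estimate.
        bias = 0.0
        for r in readings:
            bias += alpha * ((r - expected) - bias)
        return bias

    # corrected_reading = raw_reading - estimate_interference(history)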

    You can start reading about Kalman filtering if you're interested in model-based estimation:

    https://en.wikipedia.org/wiki/Kalman_filter

    It's a complex topic, so you should expect a steep learning curve. Kalman filtering (and related Bayesian estimation methods) is the formal way to convert a "bad sensor reading" into a "corrected sensor reading".
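
    To give a flavor of what that looks like, here is a toy linear Kalman filter sketch (not production code). It assumes a second, bias-free but noisier measurement of the same quantity, for example one derived from your additional emitter; that second measurement is what makes the metal's offset observable, so the filter can separate the true reading from the bias:

    import numpy as np

    # State: [true_reading, bias]. The main sensor measures their sum;
    # the second measurement (assumed bias-free) sees only the true reading.
    F = np.eye(2)                         # both states modeled as slowly varying
    H = np.array([[1.0, 1.0],             # main sensor: true + bias
                  [1.0, 0.0]])            # second measurement: true only
    Q = np.diag([1e-3, 1e-6])             # process noise; bias drifts very slowly
    R = np.diag([0.01, 0.25])             # second measurement is noisier

    def kalman_step(x, P, z):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement vector z = [main_sensor, second_sensor].
        y = z - H @ x                     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Toy run: true reading is 1.0, metal adds a persistent +1.0 to the main sensor.
    rng = np.random.default_rng(0)
    x, P = np.zeros(2), np.eye(2)
    for _ in range(200):
        z = np.array([2.0 + rng.normal(0.0, 0.1),   # biased main sensor
                      1.0 + rng.normal(0.0, 0.5)])  # unbiased second measurement
        x, P = kalman_step(x, P, z)
    print(x)  # roughly [1.0, 1.0]: recovered true reading and estimated bias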