I'm trying to create a simple AR game, and I want to be able to detect whether the user's position is inside the ADF. I'm not sure how to do this. I've tried loading a selected ADF and then comparing the device frame with the ADF frame, but it isn't working. I could derive a walkable area instead, but I'm not sure how to do that either.
The pose of the device frame with respect to the ADF frame (I'll use the notation adf_T_device) should work, provided an ADF is loaded and the device has relocalized.
In learning mode, the adf_T_device pose becomes valid right after the service starts, and it represents the pose after optimization (loop closure). So you will see this pose gradually build up an offset relative to the start_service_T_device pose; this is because the underlying system is correcting the pose for drift.
However, when an ADF is loaded, the adf_T_device pose will not be valid until the device has relocalized against that ADF. If the device still hasn't relocalized after a long time, the environment may have changed too much for the system to recognize it anymore. This is very common due to object or lighting changes. I would suggest recording a new ADF and trying again. Also, when you record an ADF, try to capture the area from all angles. I think of the recording process as spray painting: once you have "painted" all the areas, the ADF is properly constructed. In Unity, we also have an area learning example scene that shows how to build an ADF.
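To make the relocalization check concrete, here is a minimal sketch of the logic in Python. The `Pose` class, the frame-pair tuples, and the `on_pose_available` callback are hypothetical stand-ins for the real Tango pose callback and its data; only the decision logic (adf_T_device becomes valid only after relocalization) reflects the behavior described above.

```python
class Pose:
    """Hypothetical stand-in for a Tango pose sample."""
    def __init__(self, frame_pair, valid, position=(0.0, 0.0, 0.0)):
        self.frame_pair = frame_pair  # e.g. ("AREA_DESCRIPTION", "DEVICE")
        self.valid = valid            # True once the pose estimate is valid
        self.position = position

class RelocalizationWatcher:
    """Tracks whether the device has relocalized against the loaded ADF."""
    def __init__(self):
        self.relocalized = False

    def on_pose_available(self, pose):
        # With an ADF loaded, adf_T_device only becomes valid after the
        # device recognizes the recorded area, so a valid sample of this
        # frame pair signals relocalization.
        if pose.frame_pair == ("AREA_DESCRIPTION", "DEVICE") and pose.valid:
            self.relocalized = True

watcher = RelocalizationWatcher()
watcher.on_pose_available(Pose(("START_OF_SERVICE", "DEVICE"), valid=True))
print(watcher.relocalized)  # False: start-of-service poses don't count
watcher.on_pose_available(Pose(("AREA_DESCRIPTION", "DEVICE"), valid=True))
print(watcher.relocalized)  # True once adf_T_device is valid
```

In the real SDK you would query the corresponding coordinate frame pair through the Tango pose API rather than this mock class, but the validity check is the same.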
To construct a walkable area, I would suggest dividing the world into small cubes, technically an octree. The cube size depends on your application's use case. In learning mode, each valid adf_T_device position will fall into one cube of the octree; after walking around (learning the area), you will have the set of cubes that are walkable. On the next run, once the ADF is loaded and the device has relocalized, you can use this octree to test whether a given position lies inside the walkable area of the ADF.
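Here is a small sketch of that idea in Python. For simplicity it uses a uniform voxel grid backed by a hash set instead of a full octree; the principle (bucket visited adf_T_device positions into cells, then test membership) is the same, and all names here are my own, not part of any SDK.

```python
import math

class WalkableGrid:
    """Walkable-area map: a uniform voxel grid (a simplified stand-in
    for an octree). Cell size depends on the application's use case."""
    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.cells = set()  # voxel indices visited during learning

    def _key(self, position):
        # Map a 3D position (in ADF coordinates) to its voxel index.
        return tuple(math.floor(c / self.cell_size) for c in position)

    def mark_visited(self, position):
        # Call with each valid adf_T_device position while learning the area.
        self.cells.add(self._key(position))

    def is_walkable(self, position):
        # After the ADF is loaded and the device has relocalized, test
        # whether a position falls in a cell that was walked during learning.
        return self._key(position) in self.cells

grid = WalkableGrid(cell_size=0.5)
for x in (0.0, 0.3, 0.6, 0.9):           # a short walk along the x axis
    grid.mark_visited((x, 0.0, 0.0))
print(grid.is_walkable((0.4, 0.0, 0.0)))  # True: same cell as 0.3
print(grid.is_walkable((5.0, 0.0, 0.0)))  # False: never visited
```

An actual octree buys you memory efficiency over large spaces and lets you vary resolution, but for a room-scale game a flat grid like this is often enough.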