Project Summary:
I'm working on a computer vision project for the HoloLens 2. The goal is to process frames from a live camera feed and then overlay the processed frames in AR within my Unity app.
For example, the user looks at a crack in a surface, and the crack is then highlighted in AR.
The image processing is done in C++ via OpenCV and will be built into a .dll accessed from within Unity. The problem I'm currently running into is getting hold of a raw camera feed from the HoloLens that I can use for processing.
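For reference, the Unity side of the native interop looks roughly like the sketch below; the DLL name CrackDetection and the ProcessFrame export are placeholders for whatever the C++ build actually exposes:

```csharp
using System;
using System.Runtime.InteropServices;

public static class NativeVision
{
    // Placeholder export: the real DLL name and signature depend on the C++ build.
    // The idea is to hand a raw pixel buffer to OpenCV and get the processed
    // (e.g. crack-highlighted) frame back in the same buffer.
    [DllImport("CrackDetection", CallingConvention = CallingConvention.Cdecl)]
    public static extern void ProcessFrame(IntPtr pixels, int width, int height);
}
```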
Possibilities:
I've looked into Mixed Reality Capture, but it seems to be intended mainly for recording/streaming the headset view rather than exposing a raw camera feed. The rendered holograms would also be composited into the frames and interfere with the image processing.
I've also looked into Research Mode, which gives access to the sensor streams (such as the cameras and depth sensors) used for tracking the headset. This is a possibility, but I'm wondering if there is a better way.
Camera stream requirements:
Answer:
I think Research Mode can meet your needs, but note that Research Mode is not suitable for production environments.
In addition to Research Mode, you can try the PhotoCapture and VideoCapture APIs provided by Unity; you can choose whether holograms are captured via the showHolograms parameter of CreateAsync. Please refer to the Unity and Mixed Reality documentation for details; a minimal sketch follows below.
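As a minimal sketch (assuming a recent Unity where these types live in UnityEngine.Windows.WebCam; older versions use UnityEngine.XR.WSA.WebCam), this captures a single hologram-free BGRA32 frame and copies the raw bytes into a buffer that could be handed to the native DLL:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.WebCam;

public class RawPhotoCapture : MonoBehaviour
{
    PhotoCapture photoCapture;

    void Start()
    {
        // false = do not render holograms into the captured frames
        PhotoCapture.CreateAsync(false, OnCaptureCreated);
    }

    void OnCaptureCreated(PhotoCapture capture)
    {
        photoCapture = capture;

        // Pick the highest resolution the PV camera supports
        Resolution resolution = PhotoCapture.SupportedResolutions
            .OrderByDescending(r => r.width * r.height)
            .First();

        CameraParameters parameters = new CameraParameters
        {
            hologramOpacity = 0.0f, // fully transparent holograms, just in case
            cameraResolutionWidth = resolution.width,
            cameraResolutionHeight = resolution.height,
            pixelFormat = CapturePixelFormat.BGRA32
        };

        photoCapture.StartPhotoModeAsync(parameters, OnPhotoModeStarted);
    }

    void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
    {
        if (result.success)
            photoCapture.TakePhotoAsync(OnPhotoCaptured);
    }

    void OnPhotoCaptured(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame frame)
    {
        // Raw BGRA32 bytes of the frame, suitable for a native OpenCV DLL
        List<byte> buffer = new List<byte>();
        frame.CopyRawImageDataIntoBuffer(buffer);
        // ... hand the buffer (plus width/height) to the C++ code here ...

        frame.Dispose();
        photoCapture.StopPhotoModeAsync(OnPhotoModeStopped);
    }

    void OnPhotoModeStopped(PhotoCapture.PhotoCaptureResult result)
    {
        photoCapture.Dispose();
        photoCapture = null;
    }
}
```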
You can also try WebCamTexture, which doesn't capture holograms; see https://docs.unity3d.com/ScriptReference/WebCamTexture.html and the sketch below.
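A minimal sketch along these lines, grabbing each new frame as raw Color32 pixels (the hand-off to the native DLL is left as a placeholder):

```csharp
using UnityEngine;

public class WebCamFeed : MonoBehaviour
{
    WebCamTexture webcamTexture;
    Color32[] pixelBuffer;

    void Start()
    {
        // Opens the default camera (the PV camera on HoloLens);
        // no holograms are rendered into this feed
        webcamTexture = new WebCamTexture();
        webcamTexture.Play();
    }

    void Update()
    {
        if (!webcamTexture.didUpdateThisFrame)
            return;

        // (Re)allocate the buffer to match the current frame size
        int pixelCount = webcamTexture.width * webcamTexture.height;
        if (pixelBuffer == null || pixelBuffer.Length != pixelCount)
            pixelBuffer = new Color32[pixelCount];

        // Raw RGBA32 pixels of the latest frame
        webcamTexture.GetPixels32(pixelBuffer);
        // ... pass pixelBuffer (plus width/height) to the C++ DLL here ...
    }

    void OnDestroy()
    {
        if (webcamTexture != null)
            webcamTexture.Stop();
    }
}
```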