Tags: computer-vision, hdr

Are there any comparison studies of the advantages and drawbacks of High vs Standard Dynamic Range imaging for machine vision techniques?


My intuition says that a High Dynamic Range image would provide more stable features and edges for image segmentation and other low-level vision algorithms to work with. But it could go the other way: the larger number of bits might lead to sparser features, and there is the extra cost of generating the HDR image if it has to be derived through exposure fusion or similar rather than captured directly in hardware.
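
For concreteness, this is roughly the extra step I have in mind when the HDR input has to be derived in software rather than captured natively: a minimal sketch using OpenCV's Mertens exposure fusion, with placeholder file names and Canny thresholds, just to show where the added cost sits.

```python
# Derive an exposure-fused image from a bracketed LDR stack, then compare
# edge maps on a single LDR frame versus the fused result.
import cv2
import numpy as np

# Bracketed exposures of the same scene (under-, mid-, over-exposed); the
# file names are placeholders.
paths = ["exposure_low.jpg", "exposure_mid.jpg", "exposure_high.jpg"]
ldr_stack = [cv2.imread(p) for p in paths]

# Mertens fusion needs no exposure-time metadata and returns a float image
# in roughly [0, 1]; rescale to 8 bits for the downstream edge detector.
fused = cv2.createMergeMertens().process(ldr_stack)
fused_8u = np.clip(fused * 255, 0, 255).astype("uint8")

def edge_density(bgr):
    """Fraction of pixels flagged as edges by a fixed-threshold Canny pass."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150).mean() / 255.0

print("edge density, single LDR frame:", edge_density(ldr_stack[1]))
print("edge density, fused image     :", edge_density(fused_8u))
```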

Can anyone point out research on this topic? Ideally I would like to find a comparison study covering various machine vision techniques on Standard and High Dynamic Range images.


Solution

  • Since High Dynamic Range (HDR) images encode information captured at multiple exposure levels, they provide more visual information than traditional Low Dynamic Range (LDR) image sequences for computer-vision tasks such as image segmentation.

    HDR inputs help improve the accuracy of vision models through better feature learning and low-level feature extraction, since HDR images contain fewer saturated (over-exposed or under-exposed) regions than their LDR counterparts (see the sketch after the cited article below).

    However, HDR inputs also bring challenges: processing them requires more computational resources, and their higher precision means more data may be needed to avoid learning sparse features.

    Here is a research article that compares LDR vs HDR inputs for a machine vision task: Comparative Analysis between LDR and HDR Images for Automatic Fruit Recognition and Counting. Quoting from the research article: "The obtained results show that the use of HDR images improves the detection performance to more than 30% when compared to LDR".
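
    As a rough, self-contained illustration of the saturation point above (not taken from the cited paper), the sketch below counts clipped pixels in a single LDR frame versus a tonemapped HDR image; the file names, clipping thresholds, and choice of Reinhard tonemapping are placeholder assumptions.

    ```python
    # Compare the fraction of clipped (nearly black or nearly white) pixels
    # in an LDR image and in a tonemapped HDR image of the same scene.
    import cv2
    import numpy as np

    def clipped_fraction(gray_8u, low=5, high=250):
        """Fraction of pixels that sit close to the black or white limits."""
        return float(np.mean((gray_8u <= low) | (gray_8u >= high)))

    # Placeholder file names for one LDR exposure and its HDR counterpart.
    ldr = cv2.imread("scene_ldr.jpg", cv2.IMREAD_GRAYSCALE)
    hdr = cv2.imread("scene.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)

    # HDR data is float-valued, so tonemap it to 8 bits first and compare
    # both images on the same scale.
    tonemapped = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
    hdr_8u = np.clip(tonemapped * 255, 0, 255).astype("uint8")
    hdr_gray = cv2.cvtColor(hdr_8u, cv2.COLOR_BGR2GRAY)

    print("clipped fraction, LDR:", clipped_fraction(ldr))
    print("clipped fraction, HDR (tonemapped):", clipped_fraction(hdr_gray))
    ```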

    Below are a few more related research articles you might find useful: