Tags: python, opencv, eye-tracking, iris-recognition

Live Iris Detection with OpenCV - Thresholding vs. Hough Transform


I am trying to create an application that can detect and track the iris of an eye in a live video stream, using Python and OpenCV. While researching this, I found that there seem to be multiple ways to do it.

First Way:

Run a Canny filter to get the edges, and then use cv2.HoughCircles() to find the iris.
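A rough sketch of what I have in mind (the filename and all parameter values are illustrative placeholders, not tuned; note that cv2.HoughCircles runs Canny internally, so it is fed the blurred image rather than an explicit edge map):

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale crop around the eye region
# ("eye_roi.png" is a placeholder filename).
eye = cv2.imread("eye_roi.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(eye, (7, 7), 0)  # suppress noise first

# param1 is the upper threshold of the internal Canny stage,
# param2 the accumulator threshold; both values are illustrative.
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0] // 2,
    param1=150, param2=20, minRadius=5, maxRadius=40)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest candidate
    cv2.circle(eye, (x, y), r, 255, 1)
```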

Second Way:

Use Otsu's method to find a suitable threshold automatically, and then use cv2.findContours() to find the iris.
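Again as a rough sketch (same placeholder input as above; this assumes the iris/pupil is the darkest large blob in the crop, which will not always hold):

```python
import cv2

eye = cv2.imread("eye_roi.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
blurred = cv2.GaussianBlur(eye, (7, 7), 0)

# Otsu picks the threshold automatically; the iris/pupil is darker than
# the surrounding sclera, so THRESH_BINARY_INV makes it the white
# foreground of the mask.
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)  # assume iris = biggest blob
    (x, y), r = cv2.minEnclosingCircle(largest)
    cv2.circle(eye, (int(x), int(y)), int(r), 255, 1)
```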

Since I want this to run on a Raspberry Pi (4B), my question is which of these methods is better, especially in terms of reliability and performance?


Solution

  • I would take a third path and start from a well-established method for facial landmark detection (e.g. dlib). You can use a pre-trained model to get a reliable estimate of the eye's position.

    [image: example output from a facial landmark detector]

    Then you go on from there to find the iris, using edge detection, the Hough transform, or whatever works best.

    You can probably even use a simple heuristic, since the iris can be assumed to sit roughly at the center of mass of the landmarks around each eye (see the sketch after this answer).

    There are also some good tutorials online for a similar setup (even for the Raspberry Pi), for example this one or this other one from PyImageSearch.
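
    A minimal sketch of the center-of-mass heuristic, assuming dlib's pre-trained 68-point shape predictor (the model file shape_predictor_68_face_landmarks.dat must be downloaded separately):

    ```python
    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    # Landmark indices of each eye in dlib's 68-point scheme.
    LEFT_EYE = range(36, 42)
    RIGHT_EYE = range(42, 48)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            for eye in (LEFT_EYE, RIGHT_EYE):
                pts = np.array([(shape.part(i).x, shape.part(i).y) for i in eye])
                cx, cy = pts.mean(axis=0).astype(int)  # center of mass of the eye landmarks
                cv2.circle(frame, (cx, cy), 2, (0, 255, 0), -1)
        cv2.imshow("iris", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```

    The eye-landmark centroid is only a coarse estimate of the iris position (it drifts when the gaze is off-center), but it gives you a small, stable ROI in which to run the Hough or thresholding step from the question.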