opencv, image-processing, image-segmentation, wavelet, wavelet-transform

Wound Segmentation using Wavelet Transform in OpenCV


We tried a Local Histogram approach for wound segmentation, which didn't work well for all kinds of images, and then we thought of using the Wavelet transform for wound segmentation.

Which wavelet transform would be good for wound segmentation, and do you have any tips for implementing it?

Is there any better way than the wavelet transform to segment wounds under all lighting conditions?

We also tried image clustering, which didn't go that well.

Here are some test cases and the clustering program we used.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

#include <iostream>
void show_result(const cv::Mat& labels, const cv::Mat& centers, int height, int width);
int main(int argc, const char * argv[])
{    
        cv::Mat image = cv::imread("kmean.jpg");
        if ( image.empty() ) {
                std::cout << "unable to load an input image\n";
                return 1;
        }
        //cv::cvtColor(image, image, cv::COLOR_BGR2HSV);
        std::cout << "image: " << image.rows << ", " << image.cols << std::endl;
        assert(image.type() == CV_8UC3);
        cv::imshow("image", image);

        // Flatten to an N x 3 single-channel matrix: one row per pixel, one column per channel.
        cv::Mat reshaped_image = image.reshape(1, image.cols * image.rows);
        std::cout << "reshaped image: " << reshaped_image.rows << ", " << reshaped_image.cols << std::endl;
        assert(reshaped_image.type() == CV_8UC1);
        //check0(image, reshaped_image);

        // k-means expects floating-point samples, so convert to CV_32F in [0, 1].
        cv::Mat reshaped_image32f;
        reshaped_image.convertTo(reshaped_image32f, CV_32FC1, 1.0 / 255.0);
        std::cout << "reshaped image 32f: " << reshaped_image32f.rows << ", " << reshaped_image32f.cols << std::endl;
        assert(reshaped_image32f.type() == CV_32FC1);

        // Cluster the pixels with k-means++ seeding; labels holds one cluster index per pixel.
        cv::Mat labels;
        int cluster_number = 4;
        cv::TermCriteria criteria(cv::TermCriteria::COUNT, 100, 1);
        cv::Mat centers;
        cv::kmeans(reshaped_image32f, cluster_number, labels, criteria, 1, cv::KMEANS_PP_CENTERS, centers);

        show_result(labels, centers, image.rows,image.cols);

        return 0;
}

void show_result(const cv::Mat& labels, const cv::Mat& centers, int height, int width)
{
        std::cout << "===\n";
        std::cout << "labels: " << labels.rows << " " << labels.cols << std::endl;
        std::cout << "centers: " << centers.rows << " " << centers.cols << std::endl;
        assert(labels.type() == CV_32SC1);
        assert(centers.type() == CV_32FC1);

        cv::Mat rgb_image(height, width, CV_8UC3);
        cv::MatIterator_<cv::Vec3b> rgb_first = rgb_image.begin<cv::Vec3b>();
        cv::MatIterator_<cv::Vec3b> rgb_last = rgb_image.end<cv::Vec3b>();
        cv::MatConstIterator_<int> label_first = labels.begin<int>();

        // Convert the cluster centres back to 8-bit 3-channel colours for display.
        cv::Mat centers_u8;
        centers.convertTo(centers_u8, CV_8UC1, 255.0);
        cv::Mat centers_u8c3 = centers_u8.reshape(3);

        while ( rgb_first != rgb_last ) {
                const cv::Vec3b& rgb = centers_u8c3.ptr<cv::Vec3b>(*label_first)[0];
                *rgb_first = rgb;
                ++rgb_first;
                ++label_first;
        }
        cv::imshow("tmp", rgb_image);


        cv::waitKey();
}
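Note that the program above only recolours each pixel with its cluster centre; to get an actual segmentation you would keep the pixels of a single cluster (chosen, say, by inspecting the centre colours) as a binary mask. A minimal sketch of that last step in plain C++ (`clusterMask` is our own illustrative helper, not an OpenCV call; it takes the per-pixel labels in row-major order, as `cv::kmeans` produces them):

```cpp
#include <cassert>
#include <vector>

// Given per-pixel cluster labels (row-major) from k-means, build a binary
// mask: 255 where the pixel belongs to the chosen cluster, 0 elsewhere.
std::vector<unsigned char> clusterMask(const std::vector<int>& labels, int cluster)
{
    std::vector<unsigned char> mask(labels.size(), 0);
    for (std::size_t i = 0; i < labels.size(); ++i)
        if (labels[i] == cluster)
            mask[i] = 255;
    return mask;
}
```

With OpenCV the equivalent is a single comparison, `labels == cluster`, reshaped back to the image size.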

Wound-1 with background (two clusters):

Wound-1 with background

Wound-1 without background:

Wound-1 without background

Wound-2 with background:

Wound-2 with background

Wound-2 without background (three clusters):

Wound-2 without background

When we remove the background we get somewhat better segmentation, but to remove the background we are using GrabCut, which relies on manual interaction. So we need either a substitute for k-means clustering for segmenting the image, or some improvement to the above code, to handle all cases reliably.

So, is there any better way to segment the wounds?


Solution

  • Instead of the traditional wavelet transform, you may want to try Haar-like wavelets tuned for object detection tasks, similar to the basis of the integral-image features used in the Viola-Jones face detector. This paper by Lienhart et al., used for generic object detection, would be a good start.
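    For intuition, the Haar-like features Lienhart describes boil down to differences of rectangle sums, each evaluated in constant time over an integral image. A minimal sketch in plain C++ (no OpenCV dependency; the function names here are illustrative, not from any library, and OpenCV provides `cv::integral` for the same purpose):

    ```cpp
    #include <cassert>
    #include <vector>

    // Summed-area (integral) image: ii[y][x] holds the sum of img over the
    // rectangle [0..y) x [0..x). The extra zero row/column simplifies lookups.
    std::vector<std::vector<long>> integralImage(const std::vector<std::vector<int>>& img)
    {
        int h = (int)img.size(), w = (int)img[0].size();
        std::vector<std::vector<long>> ii(h + 1, std::vector<long>(w + 1, 0));
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x];
        return ii;
    }

    // Sum of pixels in the rectangle with top-left (x, y) and size w x h, in O(1).
    long rectSum(const std::vector<std::vector<long>>& ii, int x, int y, int w, int h)
    {
        return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x];
    }

    // A two-rectangle ("edge") Haar-like feature: left half minus right half.
    long haarEdgeFeature(const std::vector<std::vector<long>>& ii, int x, int y, int w, int h)
    {
        return rectSum(ii, x, y, w / 2, h) - rectSum(ii, x + w / 2, y, w / 2, h);
    }
    ```

    Sliding such features over the image at several scales is exactly what the Viola-Jones framework does cheaply, which is why the integral image matters.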

    From the looks of your example images, the variance of intensities within small pixel neighbourhoods in the wound is much higher, whereas the unbruised skin appears fairly uniform in small neighbourhoods. The Lienhart features should be able to detect such variations - you can either feed the features into a machine-learning setup, or just make manual observations and define the search windows and related heuristics.
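    As a cheap way to test that observation before committing to the full feature framework, you could compute a local variance map and threshold it: wound pixels should light up while smooth skin stays near zero. A minimal sketch in plain C++ using the identity Var = E[x^2] - E[x]^2 over a square neighbourhood (in OpenCV the same map can be built from `cv::boxFilter` applied to the image and to its elementwise square):

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <vector>

    // Local variance at each pixel over a (2r+1) x (2r+1) neighbourhood,
    // clipped at the image border, via Var = E[x^2] - E[x]^2.
    std::vector<std::vector<double>> localVariance(const std::vector<std::vector<double>>& img, int r)
    {
        int h = (int)img.size(), w = (int)img[0].size();
        std::vector<std::vector<double>> var(h, std::vector<double>(w, 0.0));
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                double sum = 0.0, sumSq = 0.0;
                int n = 0;
                for (int dy = -r; dy <= r; ++dy) {
                    for (int dx = -r; dx <= r; ++dx) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                        sum += img[yy][xx];
                        sumSq += img[yy][xx] * img[yy][xx];
                        ++n;
                    }
                }
                double mean = sum / n;
                var[y][x] = sumSq / n - mean * mean;
            }
        }
        return var;
    }
    ```

    Thresholding this map gives a rough candidate mask for the wound; the threshold itself would have to be tuned (or learned) per lighting condition, which is where the machine-learning route comes in.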

    Hope this helps.