Tags: ios, opencv, objective-c++, hsl

Editing an RGB color space image via HSL conversion fails


I'm making an app to edit an image's HSL color space via OpenCV 2 and some conversion code from the Internet.

I assume the original image's color space is RGB, so here is my plan:

  1. Convert the UIImage to cv::Mat.
  2. Convert the color space from BGR to HLS.
  3. Loop through all the pixel points to get the corresponding HLS values.
  4. Apply the custom adjustment algorithms.
  5. Write the changed HLS values back to the cv::Mat.
  6. Convert the cv::Mat back to UIImage.

Here is my code:

Conversion between UIImage and cv::Mat

Reference: https://stackoverflow.com/a/10254561/1677041

#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>

UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
        // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
        // this means using the "32Little" byte order, and potentially
        // skipping the first pixel. These may need to be adjusted if the
        // input matrix uses a different pixel format.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
#else
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                 // width
        cvMat.rows,                 // height
        8,                          // bits per component
        8 * cvMat.elemSize(),       // bits per pixel
        cvMat.step[0],              // bytesPerRow
        colorSpace,                 // colorspace
        bitmapInfo,                 // bitmap info
        provider,                   // CGDataProviderRef
        NULL,                       // decode
        false,                      // should interpolate
        kCGRenderingIntentDefault   // intent
    );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}

cv::Mat cvMatWithImage(UIImage *image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4);  // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;

    // check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1);  // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }

    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,         // Pointer to backing data
        cols,               // Width of bitmap
        rows,               // Height of bitmap
        8,                  // Bits per component
        cvMat.step[0],      // Bytes per row
        colorSpace,         // Colorspace
        bitmapInfo          // Bitmap info flags
    );

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

I tested these two functions on their own and confirmed that they work.

Core conversion operations:

/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        Mat original = cvMatWithImage(self.originalImage);
        Mat image;

        cvtColor(original, image, COLOR_BGR2HLS);
        // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way

        // accept only char type matrices
        CV_Assert(image.depth() == CV_8U);

        int channels = image.channels();

        int nRows = image.rows;
        int nCols = image.cols * channels;

        int y, x;

        for (y = 0; y < nRows; ++y) {
            for (x = 0; x < nCols; ++x) {
                // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                Vec3b hls = original.at<Vec3b>(y, x);
                uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];

//              h = MAX(0, MIN(360, h + h_delta));
//              s = MAX(0, MIN(100, s + s_delta));
//              l = MAX(0, MIN(100, l + l_delta));

                 printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1

                 original.at<Vec3b>(y, x)[0] = h;
                 original.at<Vec3b>(y, x)[1] = l;
                 original.at<Vec3b>(y, x)[2] = s;
            }
        }

        cvtColor(image, image, COLOR_HLS2BGR);
        UIImage *resultImage = UIImageFromCVMat(image);

        dispatch_async(dispatch_get_main_queue(), ^ {
            if (completion) {
                completion(resultImage);
            }
        });
    });
}

The questions are:

  1. Why are the HLS values outside my expected range? They show up in [0, 255], like the RGB range; am I using cvtColor incorrectly?
  2. Should I use Vec3b within the two for loops, or Vec3i instead?
  3. Is anything wrong with my approach above?

Update:

Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];

// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;

fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));

// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;

printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);

original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);

Solution

  • 1) Take a look at this, specifically the part on RGB->HLS. When the source image is 8-bit, the values will be in 0-255, but if you use a float image they may have different ranges.

    8-bit images: V ← 255·V, S ← 255·S, H ← H/2 (to fit into 0 to 255)

    V should be L; there is a typo in the documentation.

    You can convert the RGB/BGR image to a floating-point image and then you will have the full ranges, i.e. S and L go from 0 to 1 and H from 0 to 360.

    But you have to be careful converting it back.

    2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i for 32-bit signed integer images (CV_32S). Knowing this, it depends on the type of your image. Since, as you said, the values go from 0 to 255, it should be an unsigned 8-bit image, so you should use Vec3b. If you use the other one, it will read 32 bits per channel and use that size to compute the position in the pixel array, so you may get out-of-bounds reads, segmentation faults, or other seemingly random problems.

    If you have any questions, feel free to comment.