I have a UITextView and I need to detect if a user enters an emoji character.
I would have thought that just checking the Unicode value of the newest character would suffice, but with the newer emoji additions, the characters are scattered all over the Unicode range (e.g. Apple's emoji-styled copyright and registered-trademark symbols).
Perhaps something to do with checking the character's language via NSLocale or LocalizedString values?
Does anyone know a good solution?
Thanks!
Over the years these emoji-detecting solutions keep breaking as Apple adds new emojis with new construction methods (such as skin-toned emojis, built by pairing a base character with an additional modifier character).
I finally broke down and just wrote the following method, which works for all current emojis and should work for all future emojis.
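To see why per-character code-point checks keep breaking, consider that a skin-toned emoji is a single composed character sequence built from multiple Unicode scalars. A quick illustration (the thumbs-up emoji here is just an example, not part of the method below):

    NSString *thumbsUp = @"\U0001F44D\U0001F3FD"; // U+1F44D THUMBS UP SIGN + U+1F3FD skin-tone modifier
    NSLog(@"%lu", (unsigned long)thumbsUp.length); // logs 4: two surrogate pairs, not one "character"

    // NSString sees one composed character sequence spanning all four code units:
    [thumbsUp enumerateSubstringsInRange:NSMakeRange(0, thumbsUp.length)
                                 options:NSStringEnumerationByComposedCharacterSequences
                              usingBlock:^(NSString *substring, NSRange range, NSRange enclosing, BOOL *stop) {
        NSLog(@"%@ spans %lu code units", substring, (unsigned long)range.length);
    }];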
The solution renders the character into a UILabel with a black background, takes a Core Graphics snapshot of the label, and then scans every pixel in the snapshot for anything that is not solid black. The black background is there to avoid false coloring caused by subpixel rendering.
The solution runs very fast on my device; I can check hundreds of characters per second. Note, though, that this is a Core Graphics solution, so it should not be hammered the way a regular text method could be: graphics processing is data-heavy, and checking thousands of characters at once could cause noticeable lag.
- (BOOL)isEmoji:(NSString *)character {
    // Render the character into a label on a black background. Black-on-black
    // makes plain glyphs invisible, and the black background also removes the
    // false colors that subpixel rendering would otherwise introduce.
    UILabel *characterRender = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
    characterRender.text = character;
    characterRender.textColor = [UIColor blackColor]; // the default, made explicit
    characterRender.backgroundColor = [UIColor blackColor];
    [characterRender sizeToFit];
    CGRect rect = characterRender.bounds;

    // Snapshot the label.
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef contextSnap = UIGraphicsGetCurrentContext();
    [characterRender.layer renderInContext:contextSnap];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Draw the snapshot into a raw RGBA buffer so individual pixels can be read.
    CGImageRef imageRef = capturedImage.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    unsigned char *rawData = (unsigned char *)calloc(height * bytesPerRow, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Scan for any pixel that is not solid black. Plain text renders black on
    // black, so any brightness at all means the glyph drew itself in color.
    BOOL colorPixelFound = NO;
    for (NSUInteger y = 0; y < height && !colorPixelFound; y++) {
        for (NSUInteger x = 0; x < width && !colorPixelFound; x++) {
            NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
            CGFloat red   = rawData[byteIndex]     / 255.0f; // normalize the 0-255 bytes
            CGFloat green = rawData[byteIndex + 1] / 255.0f; // to the 0.0-1.0 range
            CGFloat blue  = rawData[byteIndex + 2] / 255.0f; // that UIColor expects
            CGFloat h, s, b, a;
            UIColor *c = [UIColor colorWithRed:red green:green blue:blue alpha:1.0f];
            [c getHue:&h saturation:&s brightness:&b alpha:&a];
            if (b > 0) {
                colorPixelFound = YES;
            }
        }
    }
    free(rawData); // release the pixel buffer
    return colorPixelFound;
}
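To answer the original question directly, here is one way this might be wired up (a sketch, assuming isEmoji: is defined on your UITextView's delegate class): test each composed character sequence of the replacement text as the user types.

- (BOOL)textView:(UITextView *)textView
        shouldChangeTextInRange:(NSRange)range
        replacementText:(NSString *)text {
    __block BOOL emojiFound = NO;
    // Walk composed character sequences so multi-scalar emojis stay intact.
    [text enumerateSubstringsInRange:NSMakeRange(0, text.length)
                             options:NSStringEnumerationByComposedCharacterSequences
                          usingBlock:^(NSString *substring, NSRange r, NSRange e, BOOL *stop) {
        if ([self isEmoji:substring]) {
            emojiFound = YES;
            *stop = YES;
        }
    }];
    if (emojiFound) {
        NSLog(@"User entered an emoji");
    }
    return YES;
}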
*Note: If Apple were ever to roll out a solid-black emoji, this technique could be hardened by running the process twice: once with a black font on a black background, then again with a white font on a white background, and OR'ing the results.
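A minimal sketch of that two-pass idea, assuming a hypothetical helper -isEmoji:renderedInColor: (not implemented above) that renders the character with the given color as both the text color and the background color, and returns YES if any snapshot pixel differs from that color:

- (BOOL)isEmojiRobust:(NSString *)character {
    // The black pass catches every emoji except a solid-black one;
    // the white pass would catch that case. OR the two results together.
    return [self isEmoji:character renderedInColor:[UIColor blackColor]]
        || [self isEmoji:character renderedInColor:[UIColor whiteColor]];
}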