I'm trying out the Metal Performance Shaders for the first time and ran into a runtime problem. The MTLTexture that MTKTextureLoader returns seems to be incompatible with the MPSImageFindKeypoints encoder.
The only hint I have found so far is @warrenm's sample code on MPS, which specifies MTKTextureLoaderOptions just like I did. I did not find any other mentions in the docs.
Any help is highly appreciated.
/BuildRoot/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-121.0.2/MPSImage/Filters/MPSKeypoint.mm:166: failed assertion `Source 0x282ce8fc0 texture type (80) is unsupported
where 0x282ce8fc0 is the MTLTexture returned by the texture loader.
As far as I can see there is no MTLTextureType with a raw value of 80; the enum only goes up to about 8 (and the value does not look like hex either).
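My guess (unverified) is that the value in the assertion is actually the texture's pixelFormat rather than its textureType: 80 should be the raw value of MTLPixelFormatBGRA8Unorm, which is what MTKTextureLoader typically produces from a CGImage, and which MPSImageFindKeypoints apparently does not accept as a source. Logging both properties before encoding shows which one the assertion refers to:

NSLog(@"textureType = %lu, pixelFormat = %lu",
      (unsigned long)texture.textureType,
      (unsigned long)texture.pixelFormat);

The code that triggers the assertion: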
CGFloat w = CGImageGetWidth(_image);
CGFloat h = CGImageGetHeight(_image);

id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

NSDictionary *textureOptions = @{ MTKTextureLoaderOptionSRGB: @NO };
id<MTLTexture> texture = [[[MTKTextureLoader alloc] initWithDevice:device] newTextureWithCGImage:_image
                                                                                          options:textureOptions
                                                                                            error:nil];

// (these buffers are never actually allocated here; see the update below)
id<MTLBuffer> keypointDataBuffer;
id<MTLBuffer> keypointCountBuffer;

MTLRegion region = MTLRegionMake2D(0, 0, w, h);

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

MPSImageKeypointRangeInfo rangeInfo = { 100, 0.5 };
MPSImageFindKeypoints *imageFindKeypoints = [[MPSImageFindKeypoints alloc] initWithDevice:device
                                                                                      info:&rangeInfo];
[imageFindKeypoints encodeToCommandBuffer:commandBuffer
                            sourceTexture:texture
                                  regions:&region
                          numberOfRegions:1
                      keypointCountBuffer:keypointCountBuffer
                keypointCountBufferOffset:0
                       keypointDataBuffer:keypointDataBuffer
                 keypointDataBufferOffset:0];
[commandBuffer commit];

NSLog(@"%@", keypointCountBuffer);
NSLog(@"%@", keypointDataBuffer);
After converting my image to the correct pixel format I am now initialising the buffers like so:
id<MTLBuffer> keypointDataBuffer = [device newBufferWithLength:maxKeypoints * sizeof(MPSImageKeypointData) options:MTLResourceStorageModeShared]; // shared storage so the CPU can read the results back
id<MTLBuffer> keypointCountBuffer = [device newBufferWithLength:sizeof(int) options:MTLResourceStorageModeShared];
There is no error anymore, but how can I read the contents now?
((MPSImageKeypointData *)[keypointDataBuffer contents])[0].keypointCoordinate
returns (0,0) for every index. I also don't know how to read keypointCountBuffer: its contents interpreted as an int give a value higher than the defined maxKeypoints, and I don't see where the docs specify what format the count buffer has.
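A minimal readback sketch, assuming the missing piece is simply waiting for the GPU before touching the buffers (the count is read here as a 32-bit integer, which is also what the working code further down assumes):

[commandBuffer commit];
[commandBuffer waitUntilCompleted]; // buffer contents are undefined until the GPU work has finished

// assumption: the count buffer holds a single 32-bit integer
int keypointCount = *(int *)keypointCountBuffer.contents;
NSLog(@"found %d keypoints (capped at %d)", keypointCount, maxKeypoints);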
Finally the code is running, and just for completeness' sake I thought I should post the whole code as an answer.
// (assumes MetalKit and MetalPerformanceShaders are imported)
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

// image dimensions, as in the question
CGFloat w = CGImageGetWidth(_lopoImage);
CGFloat h = CGImageGetHeight(_lopoImage);

// init textures
NSDictionary *textureOptions = @{ MTKTextureLoaderOptionSRGB: @NO };
id<MTLTexture> texture = [[[MTKTextureLoader alloc] initWithDevice:device] newTextureWithCGImage:_lopoImage
                                                                                          options:textureOptions
                                                                                            error:nil];
// single-channel destination texture for the keypoint finder
MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm width:w height:h mipmapped:NO];
descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
id<MTLTexture> unormTexture = [device newTextureWithDescriptor:descriptor];
// init buffers for the keypoint finder
int maxKeypoints = w * h;
id<MTLBuffer> keypointDataBuffer = [device newBufferWithLength:sizeof(MPSImageKeypointData) * maxKeypoints
                                                        options:MTLResourceStorageModeShared]; // shared so the CPU can read the results back
id<MTLBuffer> keypointCountBuffer = [device newBufferWithLength:sizeof(int)
                                                         options:MTLResourceStorageModeShared];
MTLRegion region = MTLRegionMake2D(0, 0, w, h);
// init colorspace converter (sRGB -> linear gray, so the keypoint finder gets a single-channel image)
CGColorSpaceRef srcColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceLinearGray);
CGColorConversionInfoRef conversionInfo = CGColorConversionInfoCreate(srcColorSpace, dstColorSpace);
MPSImageConversion *conversion = [[MPSImageConversion alloc] initWithDevice:device
                                                                    srcAlpha:MPSAlphaTypeAlphaIsOne
                                                                   destAlpha:MPSAlphaTypeNonPremultiplied
                                                             backgroundColor:nil
                                                              conversionInfo:conversionInfo];
CGColorSpaceRelease(srcColorSpace);
CGColorSpaceRelease(dstColorSpace);
// init keypoint finder
MPSImageKeypointRangeInfo rangeInfo = { maxKeypoints, 0.75 };
MPSImageFindKeypoints *imageFindKeypoints = [[MPSImageFindKeypoints alloc] initWithDevice:device
                                                                                      info:&rangeInfo];
// encode command buffer
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
[conversion encodeToCommandBuffer:commandBuffer sourceTexture:texture destinationTexture:unormTexture];
[imageFindKeypoints encodeToCommandBuffer:commandBuffer
                            sourceTexture:unormTexture
                                  regions:&region
                          numberOfRegions:1
                      keypointCountBuffer:keypointCountBuffer
                keypointCountBufferOffset:0
                       keypointDataBuffer:keypointDataBuffer
                 keypointDataBufferOffset:0];
// run command buffer
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
// read keypoints back on the CPU
int count = ((int *)[keypointCountBuffer contents])[0];
MPSImageKeypointData *keypointDataArray = (MPSImageKeypointData *)[keypointDataBuffer contents];
for (int i = 0; i < count; i++) {
    simd_ushort2 coordinate = keypointDataArray[i].keypointCoordinate;
    NSLog(@"color:%f | at:(%u,%u)", keypointDataArray[i].keypointColorValue, coordinate[0], coordinate[1]);
}
I guess there is a more clever way to allocate the keypoint buffers with [device newBufferWithBytesNoCopy:...], so that you would not need to copy the contents back into your own arrays. I just didn't figure out how to correctly align the buffer.
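For reference, a minimal sketch of how the alignment could look, assuming the only requirements are a page-aligned pointer and a length that is a multiple of the page size (the rounding and the use of posix_memalign are my own additions, not something from the code above):

// needs <unistd.h> for getpagesize()
size_t pageSize = getpagesize();
size_t dataLength = sizeof(MPSImageKeypointData) * maxKeypoints;
size_t alignedLength = ((dataLength + pageSize - 1) / pageSize) * pageSize; // round up to a full page

void *keypointMemory = NULL;
posix_memalign(&keypointMemory, pageSize, alignedLength); // page-aligned allocation

id<MTLBuffer> noCopyDataBuffer =
    [device newBufferWithBytesNoCopy:keypointMemory
                              length:alignedLength
                             options:MTLResourceStorageModeShared
                         deallocator:^(void *pointer, NSUInteger length) { free(pointer); }];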
Also, I should mention that you will usually already be working with a grayscale texture when doing any kind of feature detection, so the image conversion part will often not be necessary.
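A small sketch of how the conversion pass could be made conditional (my own guard, not part of the code above), assuming the source can be used directly when it is already single-channel:

id<MTLTexture> keypointSource = texture;
if (texture.pixelFormat != MTLPixelFormatR8Unorm) {
    // only convert when the source is not already a single-channel R8Unorm texture
    [conversion encodeToCommandBuffer:commandBuffer sourceTexture:texture destinationTexture:unormTexture];
    keypointSource = unormTexture;
}
// keypointSource would then be passed as the sourceTexture of the encode call above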