I am developing a simple drawing app with UIKit, based on the idea shared in Ray Wenderlich's tutorial. The difference is that I need a feature that lets me zoom/scale into my drawing and draw finer lines. I am able to zoom in using CGAffineTransformScale (with, of course, a UIPinchGestureRecognizer) and move the UIImage around using a CGAffineTransform. The problem is that once I am zoomed in, there is a huge offset between the UITouch points that are detected and the points actually touched, and this offset grows as I keep scaling the image.
In the code:
drawingImage - the image view the user interacts with
savingImage - the image view where drawn lines are saved
transform_translate - a CGAffineTransform
lastScale - a CGFloat storing the last zoom scale value
lastPoint - a CGPoint storing the last touch point
lastPointForPinch - a CGPoint storing the last pinch point
The pinch gesture is initialized in viewDidLoad as follows:
pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGestureDetected:)];
[self.drawingImage addGestureRecognizer:pinchGestureRecognizer];
The method for pinch gesture detection is:
- (void)pinchGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [recognizer scale];
        lastPointForPinch = [recognizer locationInView:self.drawingImage];
    }

    if ([recognizer state] == UIGestureRecognizerStateBegan ||
        [recognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[recognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];

        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 2.0;
        const CGFloat kMinScale = 1.0;

        CGFloat newScale = 1 - (lastScale - [recognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);

        // Apply the clamped scale to both image views
        CGAffineTransform transform = CGAffineTransformScale([[recognizer view] transform], newScale, newScale);
        self.savingImage.transform = transform;
        self.drawingImage.transform = transform;

        lastScale = [recognizer scale]; // Store the previous scale factor for the next pinch gesture call

        // Translate both image views by the distance the pinch location has moved
        CGPoint point = [recognizer locationInView:self.drawingImage];
        transform_translate = CGAffineTransformTranslate([[recognizer view] transform], point.x - lastPointForPinch.x, point.y - lastPointForPinch.y);
        self.savingImage.transform = transform_translate;
        self.drawingImage.transform = transform_translate;

        lastPointForPinch = [recognizer locationInView:self.drawingImage];
    }
}
The methods for drawing lines (FYI, this is a fairly standard procedure taken from the above-mentioned tutorial; I am including it here in case I made a mistake that someone can catch):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = NO;
    UITouch *touch = [touches anyObject];
    lastPoint = [touch locationInView:self.drawingImage];
    UIGraphicsBeginImageContext(self.savingImage.frame.size);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.drawingImage];

    // Stroke a segment from the last touch point to the current one
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(ctxt, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(ctxt, currentPoint.x, currentPoint.y);
    CGContextSetLineCap(ctxt, kCGLineCapRound);
    CGContextSetLineWidth(ctxt, brush);
    CGContextSetRGBStrokeColor(ctxt, red, green, blue, opacity);
    CGContextSetBlendMode(ctxt, kCGBlendModeNormal);
    CGContextSetShouldAntialias(ctxt, YES);
    CGContextSetAllowsAntialiasing(ctxt, YES);
    CGContextStrokePath(ctxt);

    self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
    lastPoint = currentPoint;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!mouseSwiped) {
        // A single tap: draw a dot at the touch point
        UIGraphicsEndImageContext();
        UITouch *touch = [touches anyObject];
        CGPoint currentPoint = [touch locationInView:self.drawingImage];
        UIGraphicsBeginImageContext(self.drawingImage.frame.size);
        CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
        CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush);
        CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, opacity);
        CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
        CGContextStrokePath(UIGraphicsGetCurrentContext());
        self.drawingImage.image = UIGraphicsGetImageFromCurrentImageContext();
        [self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    }
    UIGraphicsEndImageContext();

    // Merge the new stroke from drawingImage into savingImage, then clear drawingImage
    UIGraphicsBeginImageContext(self.savingImage.frame.size);
    [self.savingImage.image drawInRect:CGRectMake(0, 0, self.savingImage.frame.size.width, self.savingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    [self.drawingImage.image drawInRect:CGRectMake(0, 0, self.drawingImage.frame.size.width, self.drawingImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    self.savingImage.image = UIGraphicsGetImageFromCurrentImageContext();
    self.drawingImage.image = nil;
    UIGraphicsEndImageContext();
}
I have tried applying CGPointApplyAffineTransform(point, transform_translate) to the touch point, but the huge offset remains.
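For reference, the attempt looked roughly like this (a minimal sketch of what I tried; the exact place I applied it in the touch handlers is not shown above, so treat this as illustrative):

CGPoint rawPoint = [touch locationInView:self.drawingImage];
// Attempted correction: map the raw touch point through the stored transform
CGPoint adjustedPoint = CGPointApplyAffineTransform(rawPoint, transform_translate);
// ...then use adjustedPoint in place of lastPoint/currentPoint in the drawing code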
I hope my question is explained clearly and someone can help me. I have been struggling to make progress on this. Thanks in advance.
I finally found the solution... one silly mistake made again and again: locationInView needed to be called on self.view, not on the image view.
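A minimal sketch of the fix (same touch handlers as above; the only change is the coordinate space passed to locationInView:):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = NO;
    UITouch *touch = [touches anyObject];
    // Use self.view, which is never transformed, as the coordinate space
    lastPoint = [touch locationInView:self.view];
    UIGraphicsBeginImageContext(self.savingImage.frame.size);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view]; // was self.drawingImage
    // ...the rest of the drawing code is unchanged
}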
@davidkonard thanks for the suggestion - actually, I did not realize that (in the context of a drawing app) the user touches the screen with the intent of drawing exactly at that point, so even if the UIImageView is moved, the user still wants to draw a point/line/whatever under their finger. So locationInView is supposed to use self.view (and self.view in my case was never transformed).
I hope this explains why I was making the mistake and how I came up with the solution.