I need to implement functionality in my app that allows the user to select an area of an image view by touching and dragging. I tried using `touchesBegan`, but as I'm new to Swift I ran into some difficulty. How can I do that? I got this far, but what now?
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    touch = touches.first
    lastPoint = touch.location(in: imageView)
    for touch in touches {
        print(touch.location(in: imageView))
    }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    touch = touches.first
    currentPoint = touch.location(in: imageView)
    self.imageView.setNeedsDisplay()
    lastPoint = currentPoint
}
If you want to do something when you’re done selecting the portion of the image, implement `touchesEnded`, too.
Let’s imagine, for example, that you want to show a rectangle of the prospective area as you drag and you want to make an image snapshot of the selected portion when you’re done dragging. You could then do something like:
@IBOutlet weak var imageView: UIImageView!

var startPoint: CGPoint?

let rectShapeLayer: CAShapeLayer = {
    let shapeLayer = CAShapeLayer()
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.fillColor = UIColor.clear.cgColor
    shapeLayer.lineWidth = 3
    return shapeLayer
}()

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    startPoint = nil

    guard let touch = touches.first else { return }

    startPoint = touch.location(in: imageView)

    // you might want to initialize whatever you need to begin showing selected rectangle below, e.g.

    rectShapeLayer.path = nil
    imageView.layer.addSublayer(rectShapeLayer)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first, let startPoint = startPoint else { return }

    let currentPoint: CGPoint
    if let predicted = event?.predictedTouches(for: touch), let lastPoint = predicted.last {
        currentPoint = lastPoint.location(in: imageView)
    } else {
        currentPoint = touch.location(in: imageView)
    }

    let frame = rect(from: startPoint, to: currentPoint)

    // you might do something with `frame`, e.g. show bounding box

    rectShapeLayer.path = UIBezierPath(rect: frame).cgPath
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first, let startPoint = startPoint else { return }

    let currentPoint = touch.location(in: imageView)
    let frame = rect(from: startPoint, to: currentPoint)

    // you might do something with `frame`, e.g. remove bounding box but take snapshot of selected `CGRect`

    rectShapeLayer.removeFromSuperlayer()
    let image = imageView.snapshot(rect: frame, afterScreenUpdates: true)

    // do something with this `image`
}

private func rect(from: CGPoint, to: CGPoint) -> CGRect {
    return CGRect(x: min(from.x, to.x),
                  y: min(from.y, to.y),
                  width: abs(to.x - from.x),
                  height: abs(to.y - from.y))
}
Where you have this `UIView` extension for creating a snapshot image:
extension UIView {

    /// Create image snapshot of view.
    ///
    /// - Parameters:
    ///   - rect: The coordinates (in the view's own coordinate space) to be captured. If omitted, the entire `bounds` will be captured.
    ///   - afterScreenUpdates: A Boolean value that indicates whether the snapshot should be rendered after recent changes have been incorporated. Specify the value false if you want to render a snapshot in the view hierarchy’s current state, which might not include recent changes.
    /// - Returns: The `UIImage` snapshot.

    func snapshot(rect: CGRect? = nil, afterScreenUpdates: Bool = true) -> UIImage {
        return UIGraphicsImageRenderer(bounds: rect ?? bounds).image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
    }
}
Now, if you want to do something other than capture a snapshot of the image, that’s fine; do whatever you want. But this illustrates the basic idea.
A couple of minor things about my example above:
Note that I limit my ivars to those things I absolutely need. E.g., the current `touch` should probably be a local variable, not an ivar. We should always limit our variables to the narrowest possible scope to avoid unintended consequences, etc.
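As a minimal sketch of that point, your original `touchesBegan` could keep `touch` as a local constant (still writing to your `lastPoint` property), something like:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // `touch` is local to this method rather than stored in a property,
    // because nothing outside this method needs it
    guard let touch = touches.first else { return }
    lastPoint = touch.location(in: imageView)
}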
I added a minor refinement to your `touchesMoved` to use predicted touches. That’s not necessary, but it can help minimize any perceived lagginess when dragging one’s finger.
I’m not at all sure why you called `setNeedsDisplay`. It seems unnecessary unless there was something else you were intending to do there.
I’m not sure what content mode you were using for your image view. If you were using “scale aspect fit”, for example, and wanted to do a snapshot of that, you might choose a different snapshotting algorithm, such as the one outlined in https://stackoverflow.com/a/54191120/1271826.
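For instance, with `contentMode = .scaleAspectFit` one option is to map the selection rect from the view’s coordinate space into the underlying image’s coordinate space and crop the `UIImage` directly instead of snapshotting the view. The helper below, `croppedImage(for:)`, is just a hypothetical sketch of that idea (it is not the code from the linked answer) and assumes an upright image with no orientation metadata:

import UIKit

extension UIImageView {
    /// Crop the underlying image to a rect expressed in the image view’s coordinate space.
    /// Rough sketch: assumes `contentMode = .scaleAspectFit` and an upright image.
    func croppedImage(for viewRect: CGRect) -> UIImage? {
        guard let image = image, contentMode == .scaleAspectFit else { return nil }

        // how much the image was scaled to fit within the view
        let scale = min(bounds.width / image.size.width, bounds.height / image.size.height)

        // where the scaled image actually sits within the view (it is centered)
        let displaySize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let displayOrigin = CGPoint(x: (bounds.width - displaySize.width) / 2,
                                    y: (bounds.height - displaySize.height) / 2)

        // convert the selection from view coordinates to image coordinates (in points) …
        let imageRect = CGRect(x: (viewRect.minX - displayOrigin.x) / scale,
                               y: (viewRect.minY - displayOrigin.y) / scale,
                               width: viewRect.width / scale,
                               height: viewRect.height / scale)

        // … then to pixels, because `CGImage.cropping(to:)` works in pixel coordinates
        let pixelRect = CGRect(x: imageRect.minX * image.scale,
                               y: imageRect.minY * image.scale,
                               width: imageRect.width * image.scale,
                               height: imageRect.height * image.scale)

        guard let cgImage = image.cgImage?.cropping(to: pixelRect) else { return nil }
        return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
    }
}

You could then call something like `let image = imageView.croppedImage(for: frame)` from `touchesEnded` in place of the snapshot call, if cropping the actual image is what you’re after.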