Obviously, we can use Core Image to convert an image to grayscale, denoise it, and blur it, but when it comes to Canny edge detection, contour detection, and drawing the contour of a shape, it seems this isn't possible using Swift features alone. Am I right, or is there something I'm missing here?
In June 2020, Apple added a new contour detection feature (`VNDetectContoursRequest`) to the Vision framework in iOS 14, so you can now use Swift (without OpenCV) to detect contours.
You can find more details in this documentation from Apple: https://developer.apple.com/documentation/vision/vndetectcontoursrequest
Also relevant is this WWDC 2020 session video: https://developer.apple.com/videos/play/wwdc2020/10673/
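As a rough sketch of how the API fits together (the helper function name and the tuning values for contrast and image size are my own choices, not anything Apple prescribes):

```swift
import Vision
import CoreGraphics

// Hypothetical helper: run contour detection on a CGImage and return the
// outermost contours as CGPaths in Vision's normalized coordinate space.
func detectContours(in cgImage: CGImage) throws -> [CGPath] {
    let request = VNDetectContoursRequest()
    request.contrastAdjustment = 1.0      // boost contrast before detection (assumed value)
    request.detectsDarkOnLight = true     // expect dark shapes on a light background
    request.maximumImageDimension = 512   // downscale large images for speed (assumed value)

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first as? VNContoursObservation else {
        return []
    }
    // topLevelContours holds the outermost shapes; each VNContour exposes a
    // normalizedPath ((0,0)–(1,1), origin bottom-left) that you can scale to
    // image or view coordinates and stroke with Core Graphics or a CAShapeLayer.
    return observation.topLevelContours.map { $0.normalizedPath }
}
```

You'd typically pre-process with Core Image first (grayscale, noise reduction) as you describe, then hand the result to the request handler; the returned paths can be drawn directly once scaled to your view's size.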