I'm currently working on a GUI in Python that displays camera and processed images. For this, I implemented an ROI option that lets the user select a region in the QLabel holding the image so that only this region is displayed. The image itself is a numpy array, so I transform it with the following code:
self.spec = fft2d.dft2(image)
trnsfImg = np.log(np.abs(self.spec) + 0.00000000001) # to prevent log(0) (non-optimal solution)
trnsfImg = trnsfImg[self.s_top:self.s_top+self.s_hei, self.s_left:self.s_left+self.s_wid]
trnsfImg = (255*(trnsfImg-trnsfImg.min())/(trnsfImg.max()-trnsfImg.min()))
trnsfImg = trnsfImg.astype(np.uint8) # QImage expects uint8 data here
qimg = QImage(trnsfImg, trnsfImg.shape[1], trnsfImg.shape[0], QImage.Format_Indexed8)
self.spec_pix = QtGui.QPixmap.fromImage(qimg).scaled(self._ui.specLB.width(), self._ui.specLB.height())
self._ui.specLB.setPixmap(self.spec_pix)
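(Aside: the magic epsilon and the bare min-max division in the scaling step above can both misbehave. A more defensive sketch of just that step, using np.log1p and a guard for flat ROIs; this is an illustration, not the code the rest of this question uses:)

```python
import numpy as np

def spectrum_to_uint8(spec: np.ndarray) -> np.ndarray:
    """Log-scale a complex spectrum and normalize it to 0-255 uint8."""
    mag = np.log1p(np.abs(spec))   # log(1 + |x|): well-defined at zero, no epsilon needed
    lo, hi = mag.min(), mag.max()
    if hi == lo:                   # constant ROI: avoid division by zero
        return np.zeros(mag.shape, dtype=np.uint8)
    return (255 * (mag - lo) / (hi - lo)).astype(np.uint8)
```

np.log1p(x) computes log(1 + x) accurately even for tiny x, and the hi == lo guard covers the case where the selected region is completely uniform.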
The code above works fine as long as the selected area is larger than a certain size. But when I select a "small" area, the displayed image is distorted in an unexpected way (see: Distorted area of image). As a reference, I saved the numpy image (see: Saved numpy image) before passing it through QImage() and QtGui.QPixmap.fromImage(qimg).scaled(), using PIL:
im = Image.fromarray(trnsfImg).convert('L')
im.save("PathtoImage/img.jpeg")
Can anyone tell why this is happening?
The answer has been given here: https://forum.qt.io/topic/138087/qpixmap-scaled-shows-unexpected-transformed-image. Specifying bytesPerLine in QImage() as the width of the image does the trick, i.e. QImage(trnsfImg, trnsfImg.shape[1], trnsfImg.shape[0], trnsfImg.shape[1], QImage.Format_Indexed8). The reason: when bytesPerLine is omitted, QImage assumes each scanline is padded to a 32-bit boundary, so a contiguous uint8 numpy array is only read correctly when its width is a multiple of 4, which my larger selections apparently always were.