c++, qt, qgraphicsview

How does the QGraphicsView coordinate system work?


Following this tutorial, I wrote a little program with a QGraphicsView, and a paint function in a class that inherits QGraphicsItem, which can paint to that QGraphicsView. But here's where I'm confused. If I say:

painter->drawLine(-50,-50,50,50);

in what coordinate system am I working? What determines where the origin is, and what counts as a unit? I know from messing around that the dimensions of boundingRect() have something to do with it. But they clearly don't determine the area. If I return:

return QRectF(0,0,200,200);

I can still draw the line above, and it doesn't even appear at the edge of my drawing space! So where does the coordinate system arise and how can I control it?

Edit: To make this a bit more clear, I have a QGraphicsView and QGraphicsScene which I place in it. I then have a class that inherits QGraphicsItem, and I add the class to the scene. So now I have a paint function in the class and whatever I paint there using QPainter should be rendered in the QGraphicsView as though it were a mini window. At least that's my understanding.

However, in the paint function, I wrote stuff relying on a system of coordinates. My problem is I'm not sure how to figure out the coordinate system within my item that is within the scene that is within the view. In order to draw, I need to know where the origin is and how many points there are in each direction. I can't seem to figure that out in a consistent way.


Solution

  • Relative coordinate system

    Everything in the graphics view framework uses coordinates relative to its context. More precisely, almost any geometry-related class in Qt primarily uses a local coordinate system that may eventually be translated to its parent's, if necessary.

    This is not that different from what is normally done with widgets, and even windows. Suppose that you have a custom widget that also happens to be a top level window without borders and title bar, and you draw a 10x10 square at (10, 10) pixels from its origin (its top left corner, aka, (0, 0)[1]). You place that window on the top left corner of your screen, and it will happen to draw the square exactly at (10, 10) pixels away from the top left of your screen. If you move that window, though, the square will be shown at a different position, relative to the screen, but still at (10, 10) relative to the window.

    If that widget is then shown in a parent window, the concept will be the same, with a further level: if the window is aligned to the top left of the screen, and the widget is placed at (0, 0), you'll again see a 10x10 square shown 10 pixels from the top and left of the screen. If the widget is then moved within its parent to (10, 10), the square will be shown at (20, 20) from the top left corner of the screen, but if you move the window, that position will also change in screen coordinates.

    Still, in all these cases, the widget will always be drawing a 10x10 square at its own (10, 10) coordinates.
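    For a concrete version of that analogy, here is a minimal sketch in Python/PyQt5 syntax (the same pseudo-code style used for the snippets below; SquareWidget is just an illustrative name):

    from PyQt5.QtWidgets import QApplication, QWidget
    from PyQt5.QtGui import QPainter

    class SquareWidget(QWidget):
        def paintEvent(self, event):
            painter = QPainter(self)
            # always drawn at (10, 10) in the widget's own coordinates
            painter.drawRect(10, 10, 10, 10)

    app = QApplication([])
    parent = QWidget()
    parent.resize(200, 200)

    child = SquareWidget(parent)
    child.resize(100, 100)
    child.move(10, 10)   # the square now shows at (20, 20) within the parent

    parent.show()
    app.exec_()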

    The common misunderstanding with the graphics view framework is that, at first sight, everything seems to exist in scene coordinates, including the bounding rect.

    All this is because, by default, all item positions are always initialized to (0, 0) (just like widgets!), or, more precisely, to QPointF(), which is implicitly QPointF(0.0, 0.0) (or the integer-based QPoint() for widgets, aka QPoint(0, 0)).

    For instance, when adding a QGraphicsRectItem with the QGraphicsScene::addRect() helper function, one might believe that the item exists at the given coordinates.

    Take for instance this:

    scene.addRect(10, 10, 10, 10)
    

    Note, I'm no C++ dev, so I won't attempt to write C++ code that will probably be wrong. For simplicity, I'll use Python syntax, which can be considered as a form of pseudo code.

    The above will show a rectangle at 10, 10, relative to the scene rect.

    Now consider this:

    rect = scene.addRect(0, 0, 10, 10)
    rect.setPos(10, 10)
    

    The above will show exactly the same thing, but with two important differences: in the first case the item's position is still (0, 0) and the rectangle starts at (10, 10) in the item's own coordinate system; in the second, the rectangle starts at (0, 0) of the item's local coordinates, and it is the item itself that has been moved to (10, 10).
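    A quick sketch (assuming PyQt5) that makes the difference visible by printing the item position and the local rect separately:

    from PyQt5.QtWidgets import QApplication, QGraphicsScene

    app = QApplication([])
    scene = QGraphicsScene()

    r1 = scene.addRect(10, 10, 10, 10)
    print(r1.pos())    # (0.0, 0.0): the item itself was never moved
    print(r1.rect())   # the rect starts at (10, 10) in local coordinates

    r2 = scene.addRect(0, 0, 10, 10)
    r2.setPos(10, 10)
    print(r2.pos())    # (10.0, 10.0): the item was moved
    print(r2.rect())   # the rect starts at (0, 0) in local coordinates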

    So, since they're identical, why should one use the second? Well, it depends. There are some cases for which using apparently absolute coordinates might work well. But there are many cases for which always using relative coordinates is necessary. Consider the case of drawing a control point (for example, to resize a rectangle), which normally is centered on its reference position.

    If you use a pseudo-absolute approach, you should always consider the reference position and subtract half the size:

    # show a 10x10 control point and then "center" it at 100, 50
    size = 10
    cp = scene.addRect(0, 0, size, size)

    # ... later, somewhere else

    x = 100
    y = 50
    r = cp.rect()
    cp.setRect(x - r.width() / 2, y - r.height() / 2, r.width(), r.height())
    

    Using proper relative coordinates, this is much easier and more logical:

    size = 10
    cp = scene.addRect(-size / 2, -size / 2, size, size)
    
    # ...
    
    cp.setPos(x, y)
    

    All this becomes even more complicated when using pseudo-absolute coordinates if the item is also a child of another one[3].
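    To give an idea of footnote [3], here is a small sketch (again assuming PyQt5) where a child item's pos() is relative to its parent rather than to the scene:

    from PyQt5.QtWidgets import QApplication, QGraphicsScene

    app = QApplication([])
    scene = QGraphicsScene()

    parent = scene.addRect(0, 0, 100, 100)
    parent.setPos(200, 200)                # the parent is placed in scene coordinates

    child = scene.addRect(-5, -5, 10, 10)  # 10x10 square centered on its own origin
    child.setParentItem(parent)
    child.setPos(50, 50)                   # relative to the parent, not the scene

    print(child.pos())       # (50.0, 50.0): parent coordinates
    print(child.scenePos())  # (250.0, 250.0): actual position in the scene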

    The bounding rect

    The main purpose of the boundingRect() is to define the boundaries in which painting should happen[2]. To simplify things, the default behavior of QGraphicsItem (which is slightly different from that of the primitive items Qt provides) is to also use the bounding rect for collision detection, for instance to allow item selection or moving the item with the mouse.

    In reality, its primary use is to optimize painting: when there is a small change in a graphics scene, the view will only update the portion of the scene affected by that change. That's the reason for this note in the documentation:

    all painting must be restricted to inside an item's bounding rect. QGraphicsView uses this to determine whether the item requires redrawing.

    Now, your observation is legitimate: if the bounding rect is QRectF(0,0,200,200), how can you do painter->drawLine(-50,-50,50,50);?

    The reality is that the bounding rect does not actually restrict painting outside of it.

    By default, the graphics view will draw everything that any of its items' paint() functions ask for; specifically, that's what normally happens the first time it's shown.

    But here's the catch: note the redrawing in the quote above. The main optimization of the graphics view is to only draw what is actually necessary. If you change a small part of a huge scene, there's no point in redrawing everything else: the renderer will cache everything as much as possible, and only clear/redraw the small changes.
    And here comes the bounding rect: if a change does not affect the bounding rect, the scene will not update that area. If your item draws outside of its bounding rect, anything that has been drawn outside of it will not be cleared when the item is moved or updated, resulting in "ghost artifacts".

    Try setting setFlag(QGraphicsItem::ItemIsMovable) on your item and moving it around the scene; you'll probably see something like the following:

    [image: bad bounding rect result]
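    For reference, a sketch reproducing the issue with the exact values from the question (assuming PyQt5; BadItem is just an illustrative name):

    from PyQt5.QtWidgets import (QApplication, QGraphicsView, QGraphicsScene,
                                 QGraphicsItem)
    from PyQt5.QtCore import QRectF

    class BadItem(QGraphicsItem):
        def boundingRect(self):
            # the declared bounds...
            return QRectF(0, 0, 200, 200)

        def paint(self, painter, option, widget=None):
            # ...but the line starts outside of them, at (-50, -50):
            # dragging the item around leaves "ghost" trails behind
            painter.drawLine(-50, -50, 50, 50)

    app = QApplication([])
    scene = QGraphicsScene()
    item = BadItem()
    item.setFlag(QGraphicsItem.ItemIsMovable)
    scene.addItem(item)

    view = QGraphicsView(scene)
    view.show()
    app.exec_()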

    The scene rect

    By default, a graphics scene has a sceneRect() that includes the bounding rects of all the items it contains. Similarly, by default, the graphics view uses the sceneRect() of the scene. Still, you must also consider that if the view is larger than the scene rect, the view aligns the contents within its viewport according to its alignment(), which is centered by default.

    The above also explains why your line doesn't appear at the top of the view: the scene rect is small and the view is large, so the view displays your item "centered". Note, though, that the centering is based on the bounding rects of the items in the scene, so the displayed line is not actually centered.
    Add a painter->drawRect(boundingRect()) within paint() and you'll see that that rect is what is actually centered in the view.
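    In the pseudo-code style used above, the paint() of the BadItem sketch from the previous section would become something like:

    def paint(self, painter, option, widget=None):
        # debugging aid: outline the area the item declares as its own
        painter.drawRect(self.boundingRect())
        painter.drawLine(-50, -50, 50, 50)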

    Coordinate system and units

    Note that all this is not a peculiar behavior of the graphics view framework. In reality, it just uses common geometry concepts: the Cartesian coordinate system and its related reference system.

    As in basic algebra, the concept of a unit is completely abstract and follows the principle of the standard basis.
    They are logical units, where "one unit" is just one: it's not a pixel, an inch, a millimeter, a fingernail, an atom, a potato or anything else. But it can become any of those as soon as a reference is given to that unit.

    When a scene is drawn, though, it uses the device pixel on which it's painted as reference for the coordinate system, meaning that 1 is usually one "pixel", unless transformations are applied or any other reference system is used (consider a physical printer).

    For instance, drawing a QRectF(0, 0, 10, 10) will display a 10x10 square that actually covers a 10x10 pixel area on a standard screen. On a HighDPI screen it will look the same as on a screen with the same physical size, but many more physical pixels will be used to show it.

    If any transformation is applied, though (for instance, by using fitInView()), the same rectangle may be much larger or smaller, in pixel size; yet, it will still be a "10x10 square" in logical coordinates.
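    For instance, something along these lines (assuming PyQt5) keeps the rectangle at 10x10 logical units while stretching it over a much larger pixel area:

    from PyQt5.QtWidgets import QApplication, QGraphicsView, QGraphicsScene
    from PyQt5.QtCore import Qt, QRectF

    app = QApplication([])
    scene = QGraphicsScene()
    scene.addRect(0, 0, 10, 10)   # 10x10 in logical coordinates

    view = QGraphicsView(scene)
    view.resize(400, 400)
    view.show()

    # scale the view so that the 10x10 logical square fills most of the
    # 400x400 pixel viewport; for the scene it is still a 10x10 rectangle
    view.fitInView(QRectF(0, 0, 10, 10), Qt.KeepAspectRatio)

    app.exec_()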

    Final notes and further reading

    Related documentation and SO posts (for which you should read all answers):

    Just take your time with the above, and also do a basic search engine query with related keywords (QGraphicsScene and/or QGraphicsView, coordinate system, bounding rect, etc.).

    [1] Cartesian coordinates use positive values for "above" and negative for "below", as opposed to common screen coordinates: a standard screen shows y = 0 at its top, meaning that while 10 in Cartesian coordinates would show something "up from the bottom" (with 0 on the horizontal axis), the same value will be displayed as "down from the top" on a common screen layout; QGraphicsScene follows the same screen convention too;
    [2] In reality, as the documentation suggests, the bounding rect of an item that uses a QPen to draw its contents should always consider the width of that pen; specifically, at least half of the pen width for orthogonal shapes (rectangles), but that margin might increase for different shapes, or when the pen's joinStyle can extend beyond the edges due to the angles at which lines touch them;
    [3] The local coordinate system used by items is always relative to their parent; if the item is a "top level" (it has no QGraphicsItem parent), the parent is implicitly "the scene", but remember that, in theory, a graphics item could exist and "work" without being added to a scene at all; this follows one of the principles of OOP modularity, for which an object should be able to work on its own, no matter its external context;