android, opengl-es, egl, grafika

Unable to understand the BufferQueue sync logic that enables async display


I am reading about the Android graphics architecture from this link.

On my first attempt I could not understand everything, only bits and pieces here and there.

What I understand:

1)

There is something called a 'sync framework'. - OK

2)

This sync framework can be used between processes and between user space and kernel space. - OK

3)

It is used as an asynchronous mechanism. - OK.

4)

Assuming there is no Sync framework.

   void display_buffer(struct dma_buf *buf);

There is a display system which displays the given buffer.

 while (1) {
    fill_buffer(buf);       /* fill the buffer                */
    display_buffer(buf);    /* displays it onto the screen    */
 }

There could be latency because it is synchronous: the caller blocks until the display system is done with the buffer.

5)

Assuming we have sync framework support, it is something like: I give the display system the buffer and a start fence; it must display the buffer when the start fence signals. When it is done, it must notify me via a done fence, which the display system provides to me. [I do not understand it completely, but I can feel it helps the async model], so that I can fill the next buffer while the system is rendering.

struct sync_fence* display_buffer(struct dma_buf *buf,
    struct sync_fence *fence);

Gray Area: But still, I am not able to write pseudocode showing how display_buffer would be used in async mode.

Reading this paragraph from the official Android link:

"Most recent Android devices support the "sync framework". This allows the system to do some nifty thing when combined with hardware components that can manipulate graphics data asynchronously. For example, a producer can submit a series of OpenGL ES drawing commands and then enqueue the output buffer before rendering completes. The buffer is accompanied by a fence that signals when the contents are ready. A second fence accompanies the buffer when it is returned to the free list, so that the consumer can release the buffer while the contents are still in use. This approach improves latency and throughput as the buffers move through the system."

QUESTION:

I am particularly confused by this statement:

"A second fence accompanies the buffer when it is returned to the free list, so that the consumer can release the buffer while the contents are still in use. This approach improves latency and throughput as the buffers move through the system."

A second fence on the same buffer, or on a different buffer? As I see it, there are two buffer queues: one is the filled list and the other is the empty list.


Solution

  • If you want to get deeper into the sync framework, you should also read this document, and especially its "explicit synchronization" section.

    Your description is close but not quite right. Each buffer has an "acquire" fence and a "release" fence. The "acquire" fence indicates when the producer, such as OpenGL ES, has finished rendering. It doesn't tell the consumer (HardwareComposer, which feeds the display) that it needs to display the buffer, but rather that it is now allowed to display the buffer because rendering has completed. The "release" fence is signaled by the HWC when the buffer is no longer being accessed by the display hardware, which means a producer is again allowed to write to it.

    The latency reduction is the result of not tying the state of the buffer's contents to its position in the queue. The BufferQueue can consider a buffer to be full of data before rendering completes, and can have it on the "free" list before the display is done showing it. This allows the IPC queue mechanisms to do their thing without blocking on the GPU or display.
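    As a sketch of the gray area in the question: with two buffers the producer never has to block on the display. This is only my pseudocode, and it assumes a hypothetical fill_buffer_async() that kicks off rendering and returns a fence which signals when the contents are ready (sync_fence_wait() is from the old staging sync driver):

        struct dma_buf *bufs[2];
        struct sync_fence *start, *done[2] = { NULL, NULL };
        int i = 0;

        while (1) {
            if (done[i])                       /* display still using bufs[i]?  */
                sync_fence_wait(done[i], -1);  /* release fence: safe to rewrite */
            start = fill_buffer_async(bufs[i]);       /* start render, get fence */
            done[i] = display_buffer(bufs[i], start); /* returns immediately; the
                                                         display itself waits on
                                                         the 'start' fence       */
            i = 1 - i;               /* fill the other buffer in the meantime    */
        }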

    Once a fence is signaled, it never becomes un-signaled, so you can't use a single fence for multiple events. That's why you need two different fences for "data is ready" and "data is no longer needed".

    FWIW, the name "BufferQueue" is slightly misleading. Filled buffers are in a queue, empty buffers are in a pool. (The pool has a FIFO policy, so it's essentially a queue, but that's not guaranteed.)