Tags: android, android-5.0-lollipop, android-4.4-kitkat, vsync, surfaceflinger

Understanding the necessity of Android VSYNC signals


I'm trying to get a better understanding of the Android display subsystem, but one item that's still confusing to me is how VSYNC signals are handled, and why so many exist in the first place.

Android is designed to use VSYNC at its core, but it employs multiple VSYNC signals. In the "VSYNC Offset" section of https://source.android.com/devices/graphics/implement.html there is a flow diagram that shows three VSYNC signals: HW_VSYNC_0, VSYNC, and SF_VSYNC. I understand that HW_VSYNC is used to update the timing in DispSync, and that VSYNC and SF_VSYNC are used by apps and SurfaceFlinger, but why are these individual signals necessary at all? Furthermore, how do the offsets affect these signals? Is there a timing diagram available anywhere that explains this better?

Thanks for any help you can offer.


Solution

  • To understand this stuff, it's best to start with the System-Level Graphics Architecture document, taking particular note of the "The Need for Triple-Buffering" section and the associated diagram (which ideally would be an animated GIF). The sentence that begins, "If the app starts rendering halfway between VSYNC signals" is talking specifically about DispSync. Once you've read that, hopefully the DispSync section of the device graphics doc makes more sense.

    Most devices don't have DispSync offsets configured, so there is really only one VSYNC signal. In what follows I'm assuming DispSync is enabled.

    The hardware provides only one VSYNC signal, corresponding to the primary display's refresh. The others are generated in software by SurfaceFlinger's DispSync code, firing at fixed offsets from the actual hardware VSYNC. Some clever software keeps the timings from slipping out of phase.
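    As a rough sketch of the idea (my own illustration, not the actual DispSync implementation; the names and offset values here are hypothetical), each software signal is just the hardware VSYNC timestamp plus a fixed, per-device phase offset:

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Hypothetical sketch: the software VSYNC and SF_VSYNC signals are
    // modeled as the hardware VSYNC time plus a fixed phase offset.
    constexpr int64_t kAppPhaseOffsetNs = 7500000;  // hypothetical app offset
    constexpr int64_t kSfPhaseOffsetNs  = 5000000;  // hypothetical SF offset

    int64_t appVsyncNs(int64_t hwVsyncNs) { return hwVsyncNs + kAppPhaseOffsetNs; }
    int64_t sfVsyncNs(int64_t hwVsyncNs)  { return hwVsyncNs + kSfPhaseOffsetNs; }

    int main() {
        const int64_t hw = 0;  // pretend HW_VSYNC_0 fired at t = 0
        std::printf("HW_VSYNC_0: %lld ns\n", static_cast<long long>(hw));
        std::printf("VSYNC:      %lld ns\n", static_cast<long long>(appVsyncNs(hw)));
        std::printf("SF_VSYNC:   %lld ns\n", static_cast<long long>(sfVsyncNs(hw)));
    }
    ```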

    The signals are used to trigger SurfaceFlinger composition and app rendering. If you work through the section in the architecture document, you can see that this establishes two frames of latency between when the app renders its content and when that content appears on the screen. Think of it like this: given three occurrences of VSYNC, the app draws at V0, the system does composition at V1, and the composed frame is sent to the display at V2.

    If you're trying to track touch input, perhaps dragging a map around under the user's finger, any latency will be perceived as sluggish touch response. The goal is to minimize that latency to improve the user experience. Suppose we delay the events slightly, so the app draws at V0.5, we composite at V1.2, and then swap to the display at V2. By offsetting the app and SurfaceFlinger activity we reduce the total latency from 2 frames to 1.5, as shown in the diagram and worked example below.

    [Timing diagram: with DispSync offsets, app rendering and SurfaceFlinger composition are shifted within the refresh interval, cutting input-to-display latency from 2 frames to 1.5.]
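    To make the arithmetic concrete, here's a tiny worked example (my own, reusing the V0.5/V1.2 offsets from above) at a ~60 Hz refresh rate:

    ```cpp
    #include <cstdio>

    int main() {
        const double frameMs = 16.7;  // one refresh interval at ~60 Hz

        // Default pipeline: app draws at V0, composition happens at V1,
        // and the frame reaches the screen at V2 -> 2 full frames of latency.
        const double defaultLatencyMs = 2.0 * frameMs;

        // Offset pipeline: app draws at V0.5, composition at V1.2, and the
        // frame still reaches the screen at V2 -> 1.5 frames of latency.
        const double offsetLatencyMs = (2.0 - 0.5) * frameMs;

        std::printf("default: %.1f ms, with offsets: %.1f ms\n",
                    defaultLatencyMs, offsetLatencyMs);
    }
    ```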

    That's what DispSync is for. In the feedback diagram on the page you linked, HW_VSYNC_0 is the hardware refresh for the physical display, VSYNC causes the app to render, and SF_VSYNC causes SurfaceFlinger to perform composition. Referring to them as "VSYNC" is a bit of a misnomer, but on an LCD panel referring to anything as "VSYNC" is probably a misnomer.

    The "retire fence timestamps" noted in the feedback loop diagram refers to a clever optimization. Since we're not doing any work on the actual hardware VSYNC, we can be slightly more efficient if we turn the refresh signal off. The DispSync code will instead use the timestamps from retire fences (which is a whole other discussion) to see if it is falling out of sync, and will temporarily re-enable the hardware signal until it's back on track.

    Edit: you can see how the values are configured in the Nexus 5 BoardConfig.mk. Note the settings for VSYNC_EVENT_PHASE_OFFSET_NS and SF_VSYNC_EVENT_PHASE_OFFSET_NS.
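    For reference, those settings are plain make variables; the relevant lines in a device BoardConfig.mk take roughly this form (the offset values below are illustrative, in nanoseconds; check the actual file for the shipped numbers):

    ```make
    # Phase offsets (in ns) for the software-generated VSYNC signals.
    # VSYNC_EVENT_PHASE_OFFSET_NS shifts the app VSYNC;
    # SF_VSYNC_EVENT_PHASE_OFFSET_NS shifts SurfaceFlinger's.
    VSYNC_EVENT_PHASE_OFFSET_NS := 7500000
    SF_VSYNC_EVENT_PHASE_OFFSET_NS := 5000000
    ```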