Composition is the process of combining multiple content sources together into a single image.
It's what happens when you have multiple visible windows to be shown on one display. Screen performs the composition while ensuring that everything that's supposed to be visible is visible and everything that's supposed to be covered is covered. As you can probably guess, when you consider the number of properties for windows and displays alone, multiplied across several windows, there's a nearly infinite number of ways to combine the images.
Because the final image needs to be correctly constructed, Screen can't necessarily favor one composition option over another. For example, a gain in performance isn't justified if the resulting image is inaccurate.
Screen uses a composition strategy that's optimal for the current scene while taking memory bandwidth and available hardware blocks into consideration. When the device driver supports multiple hardware layers (pipelines) and buffers, Screen takes advantage of these capabilities, rendering to individual pipelines and combining them at display time. For applications that require complex graphical operations, hardware-accelerated options such as 3D hardware with OpenGL ES and/or 2D bit-blitting hardware are also used. And there are times when even using the CPU for composition provides some advantages.
When Screen is tasked to show your windows' content, it determines how many windows are visible, which portions of each window are visible, and, based on the windows' properties, whether scaling, rotation, or image adjustments need to be applied. Screen decides which hardware blocks, or whether the CPU, to use for composition based on all these factors.
The Screen API allows the application to limit composition options by setting the following properties:
Both of these properties can be set by calling screen_set_window_property_iv().
2D bit-blitting and 3D hardware require the appropriate hardware drivers to have started before Screen can use them. If, at the time that Screen needs the drivers for composition, the drivers haven't already been started by an OpenGL ES call or by a call to screen_blit(), Screen starts these drivers. Therefore, the first time that Screen uses 2D bit-blitting for composition may take longer, because Screen must load and start the drivers before the actual task of compositing.
Screen uses a framebuffer to save the composition results if it chooses 2D bit-blitting, 3D hardware, or the CPU, because these options need to write to graphics memory. If a framebuffer isn't already available, Screen creates one at the point of use. Because a framebuffer typically isn't used for any purpose other than displaying content, Screen can reuse the same framebuffer without having to do any sort of buffer chaining.
The composition strategy (whether one option or a combination of options is used) taken by Screen to produce the final image may vary, even on the same platform. It could be that in one case only hardware layers are used, but at another time, on the same system, a completely different combination of hardware blocks and CPU is used. Screen makes the decision on composition based on the windows visible at the time. Different applications doing different things at different times can cause Screen to perform composition in different ways. Regardless of the combination of composition options that Screen chooses, the goal is an accurate final image where performance load and memory use are optimized.
Screen tries to minimize the number of display updates required. That's why, if multiple visible windows update not all at once but within the same vertical synchronization (vsync) interval, the updates are batched together and displayed on the next vsync interval. This strategy is more efficient than updating the display every time there's a change in a window.
For example, if there are three visible windows that all have content to show at approximately the same time (within one vsync interval), then Screen takes the updates of all three windows and composites them to produce only one display update. If only one of the windows has a content change, while the other two have content changes outside of the same vsync interval, then Screen may update the first and batch the other two in a second update for the next vsync interval.
Screen still needs to account for invisible windows when they update. Even though changes in invisible windows don't result in a display update, they still use and hold resources. If left unchecked, invisible windows could eventually block resources needed by visible windows. Therefore, even invisible window updates are constrained by the vsync intervals.