These comments were a bit misleading:
- "GL_TEXTURE_2D == mutable": not really, imported non-external-only
DMA-BUFs would also use this target, but are not mutable.
- "Only affects target == GL_TEXTURE_2D": same here.
- "If imported from a wlr_buffer": not really, would be NULL if
imported from a shm wlr_buffer.
Adjust these comments to better reflect reality and adjust the check
in gles2_texture_update_from_buffer().
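A minimal sketch of the adjusted guard, with illustrative names rather
than the exact wlroots code: mutability is keyed off whether the
texture has a backing EGLImage, not off its GL target.

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    struct gles2_texture_sketch {
        // EGL_NO_IMAGE_KHR when the texture was created from raw pixels
        EGLImageKHR image;
    };

    // Imported non-external-only DMA-BUFs also use GL_TEXTURE_2D, so the
    // target alone cannot tell us whether pixel updates are allowed.
    static bool texture_is_mutable(const struct gles2_texture_sketch *texture) {
        return texture->image == EGL_NO_IMAGE_KHR;
    }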
We can end up importing a dmabuf twice if we use it both as a texture
and as a render target. Instead, let's unify render targets and texture
dmabuf imports to use wlr_gles2_buffer, which manages the EGLImageKHR.

Since imported textures will be based on a gles2_buffer, we have to
destroy textures first, or else they will hold an invalid reference
to the buffers they were imported from.
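A sketch of the teardown order this implies, with assumed names (the
real renderer tracks these objects in its own lists):

    #include <stdlib.h>
    #include <wayland-util.h>

    struct texture_sketch { struct wl_list link; };
    struct buffer_sketch { struct wl_list link; };

    // Textures hold a reference to the wlr_gles2_buffer they were
    // imported from, so they must be destroyed before the buffers.
    static void renderer_teardown(struct wl_list *textures,
            struct wl_list *buffers) {
        struct texture_sketch *tex, *tex_tmp;
        wl_list_for_each_safe(tex, tex_tmp, textures, link) {
            wl_list_remove(&tex->link);
            free(tex);
        }
        struct buffer_sketch *buf, *buf_tmp;
        wl_list_for_each_safe(buf, buf_tmp, buffers, link) {
            wl_list_remove(&buf->link);
            free(buf);
        }
    }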
If the external-only flag is set, then the EGLImage is only
supported for use with GL_TEXTURE_EXTERNAL_OES texture targets.
In particular, the EGLImage cannot be bound to an RBO.
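A sketch of how the flag steers the import path; the OES entry points
are assumed to have been loaded via eglGetProcAddress beforehand.

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture;
    static PFNGLEGLIMAGETARGETRENDERBUFFERSTORAGEOESPROC image_target_rbo;

    static void bind_image(EGLImageKHR image, bool external_only,
            GLuint tex, GLuint rbo) {
        // external-only images are restricted to GL_TEXTURE_EXTERNAL_OES
        GLenum target = external_only ?
            GL_TEXTURE_EXTERNAL_OES : GL_TEXTURE_2D;
        glBindTexture(target, tex);
        image_target_texture(target, image);

        if (!external_only) {
            // Only non-external-only images may back a renderbuffer
            glBindRenderbuffer(GL_RENDERBUFFER, rbo);
            image_target_rbo(GL_RENDERBUFFER, image);
        }
    }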
It's possible that we don't have an EGLDevice if we created the
EGL context from a GBM device. Let's ensure all GPU-accelerated
renderers always have a DRM FD to return by falling back to GBM's
FD.
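A minimal sketch of the fallback, assuming a renderer struct that keeps
both the EGLDevice-derived FD and its gbm_device around:

    #include <gbm.h>

    struct renderer_sketch {
        int drm_fd;                    // derived from EGLDeviceEXT, or -1
        struct gbm_device *gbm_device; // set when EGL came from GBM
    };

    static int renderer_get_drm_fd(struct renderer_sketch *renderer) {
        if (renderer->drm_fd >= 0) {
            return renderer->drm_fd;
        }
        if (renderer->gbm_device != NULL) {
            // Fall back to the FD backing the GBM device
            return gbm_device_get_fd(renderer->gbm_device);
        }
        return -1;
    }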
When a texel from the Vulkan format VK_FORMAT_B8G8R8A8_SRGB is read,
the sRGB to linear conversion is applied independently to the R, G,
and B channels; the A channel has no influence on this. However,
DRM_FORMAT_ARGB8888 buffers are, per Wayland protocol, not encoded
in this fashion; one must first unpremultiply the color channels
before doing sRGB to linear conversion. This commit switches to
handling ARGB8888 and ABGR8888 formats using the general fragment
shader conversion from electrical to optical values.
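For reference, the per-texel math the shader now performs, written out
in C (a sketch of the conversion, not the actual GLSL):

    #include <math.h>

    static float srgb_to_linear(float e) {
        // sRGB EOTF, applied to each color channel independently
        return e <= 0.04045f ? e / 12.92f : powf((e + 0.055f) / 1.055f, 2.4f);
    }

    static void texel_to_linear(float rgba[4]) {
        float a = rgba[3];
        for (int i = 0; i < 3; i++) {
            // Unpremultiply first: the stored channels are alpha-scaled
            float unpremult = a > 0.0f ? rgba[i] / a : 0.0f;
            // Decode, then re-premultiply in linear space
            rgba[i] = srgb_to_linear(unpremult) * a;
        }
    }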
No need to set GL_DEPTH_TEST: when rendering to a framebuffer that
has no depth buffer, depth testing always behaves as though the test
is disabled. Besides, the initial value of each capability, with the
exception of GL_DITHER, is GL_FALSE.
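An illustration of the spec guarantee this relies on (a standalone
check, not code from the repository):

    #include <assert.h>
    #include <GLES2/gl2.h>

    static void check_initial_state(void) {
        // All capabilities except GL_DITHER start out disabled
        assert(glIsEnabled(GL_DEPTH_TEST) == GL_FALSE);
        assert(glIsEnabled(GL_DITHER) == GL_TRUE);
    }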
We could potentially leak a display here, but not really: the display
acts as a singleton that will be returned the next time a renderer for
the same device is created.
If a compositor tries to handle a GPU reset from within the lost
signal handler (by recreating the renderer), we must avoid referencing
renderer resources after emitting the lost signal. This prevents a
use-after-free for such compositors.
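A sketch of the ordering this requires, assuming a wlroots-style
events.lost wl_signal on the renderer:

    #include <stddef.h>
    #include <wayland-server-core.h>

    struct renderer_sketch {
        struct {
            struct wl_signal lost;
        } events;
    };

    static void mark_renderer_lost(struct renderer_sketch *renderer) {
        wl_signal_emit_mutable(&renderer->events.lost, NULL);
        // Do not touch `renderer` past this point: a compositor's lost
        // handler may have destroyed it while recreating the renderer.
    }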