This requires functions without a prototype definition to be static.
This allows us to detect dead code, export fewer symbols and put
shared functions in headers.
Prior to this commit, compositors needed to render the texture to an
intermediate off-screen buffer using wlr_renderer APIs if they wanted to
use a custom rendering path (e.g. render to a 3D scene).
A new wlr_gles2_texture_get_attribs function exposes the GL texture target and ID
so that compositors can render wlr_textures with their own shaders. An
example of a compositor doing so is available at [1].
[1]: 3db905b784/src/render.c (L227)
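As a rough illustration, a compositor's custom path could look like
this (the attribs fields follow wlr_gles2_texture_attribs; the shader
program and draw code are hypothetical):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <wlr/render/gles2.h>

static void render_texture_custom(struct wlr_texture *texture,
		GLuint prog) {
	struct wlr_gles2_texture_attribs attribs;
	wlr_gles2_texture_get_attribs(texture, &attribs);

	glUseProgram(prog);
	glActiveTexture(GL_TEXTURE0);
	/* target is GL_TEXTURE_2D or GL_TEXTURE_EXTERNAL_OES */
	glBindTexture(attribs.target, attribs.tex);
	/* ... set uniforms and draw into the 3D scene ... */
	glBindTexture(attribs.target, 0);
}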
We don't need our own enum for types. Instead we just use
GL_TEXTURE_{2D,EXTERNAL_OES}, which already describes usage.
Also fixes a case where we were using GL_TEXTURE_2D when we should not
have: wl_drm buffers are always GL_TEXTURE_EXTERNAL_OES, regardless of
whether they're RGB or any other format.
When a texture is destroyed between wlr_egl_make_current and
wlr_egl_swap_buffers, the current EGL surface is reset to NULL, which
makes wlr_egl_swap_buffers fail.
If the EGL context is already current, there's no need to reset it.
According to the spec:
> If <n_rects> is 0 then <rects> is ignored and the entire
> surface is implicitly damaged and the behaviour is equivalent
> to calling eglSwapBuffers.
When we want to swap with an empty damage region, set the damage to a single
empty rectangle.
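A minimal sketch of the workaround (egl->display and surface stand in
for the wlr_egl's display and the target surface; error handling
omitted):

/* n_rects == 0 would damage the whole surface, so pass one
 * empty rectangle instead */
EGLint rect[4] = {0, 0, 0, 0};
eglSwapBuffersWithDamageEXT(egl->display, surface, rect, 1);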
The deleted includes are redundant, because other headers will include
the necessary files. Additionally, they cause build failures, because
including EGL/egl.h or EGL/eglext.h directly, instead of through
wlr/render/egl.h or wlr/render/interface.h, will mean that
MESA_EGL_NO_X11_HEADERS will not have been defined, and so the EGL
headers will attempt to pull in unnecessary X11 headers that may not
exist on the system.
For the headers produced by glgen.sh, the includes couldn't simply be
deleted, because no other header would include the EGL headers. Neither
wlr/render/egl.h nor wlr/render/interface.h felt appropriate to include,
so I opted instead to copy the MESA_EGL_NO_X11_HEADERS definition before
the EGL includes.
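Concretely, the generated headers now carry something along these
lines before the EGL includes (the #ifndef guard is my sketch, not
the exact generated text):

#ifndef MESA_EGL_NO_X11_HEADERS
#define MESA_EGL_NO_X11_HEADERS
#endif
#include <EGL/egl.h>
#include <EGL/eglext.h>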
This type adds a container for formats + modifiers.
A list of the form [format [modifier]] was chosen instead of
[format modifier] because that is how GBM accepts them.
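For illustration, the shape of the container is roughly as follows
(names are illustrative, not the exact wlroots declarations):

#include <stddef.h>
#include <stdint.h>

struct format_entry {
	uint32_t format;     /* DRM_FORMAT_* fourcc */
	size_t n_modifiers;
	uint64_t *modifiers; /* modifiers supported for this format */
};

struct format_set {
	size_t n_formats;
	struct format_entry **formats; /* [format [modifier]] layout */
};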
Co-Authored-By: emersion <contact@emersion.fr>
The read format is dependent on the output, so we first need to make it
current. This fixes a race condition in wlr-screencopy-v1 where a dmabuf
client would cause EGL_NO_SURFACE to be bound at the time when
screencopy needs to query for the preferred format, causing GL errors.
We were assuming GL_BGRA_EXT was always supported.
We now check that it's supported for rendering. We fail if it isn't, because
this format is specified as "always supported" by the Wayland protocol.
We also check if it's supported for reading pixels. A new preferred_read_format
function returns the preferred format that can be used to read pixels. This is
used by the screencopy protocol.
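GLES2 only guarantees reading back GL_RGBA + GL_UNSIGNED_BYTE;
anything else has to be queried. A sketch of the check, assuming the
output has already been made current (GL_BGRA_EXT comes from
GLES2/gl2ext.h):

GLint fmt, type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
/* prefer BGRA when the implementation can read it back directly */
bool can_read_bgra = fmt == GL_BGRA_EXT && type == GL_UNSIGNED_BYTE;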
This removes any assumptions about how the libdrm headers are installed,
and uses the pkg-config include directories as we're "supposed to".
This only adds a partial dependency, since we don't actually need to
link against libdrm.
This PR broke a private nixpkgs definition I have for wlroots: https://github.com/swaywm/wlroots/pull/1304
It is fixed by changing `#include <drm_fourcc.h>` to `#include <libdrm/drm_fourcc.h>`, which follows what is already done in the dmabuf example.
If a client uses an older version of the dmabuf protocol, use the
`formats` event instead of `modifiers` (since that didn't exist in older
versions).
With a bit of necessary guessing, support dmabuf importing even when
EGL_EXT_image_dma_buf_import_modifiers isn't present instead of
failing up front.
Detecting whether eglSwapBuffersWithDamageEXT or
eglSwapBuffersWithDamageKHR is used should be based on the extension
string, not only on the availability of the function.
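A sketch of extension-string-based selection (check_ext is a
hypothetical exact-match helper; see the strstr fix below):

const char *exts = eglQueryString(display, EGL_EXTENSIONS);
PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC swap_with_damage = NULL;
if (check_ext(exts, "EGL_EXT_swap_buffers_with_damage")) {
	swap_with_damage = (PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC)
		eglGetProcAddress("eglSwapBuffersWithDamageEXT");
} else if (check_ext(exts, "EGL_KHR_swap_buffers_with_damage")) {
	/* both entry points share a signature */
	swap_with_damage = (PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC)
		eglGetProcAddress("eglSwapBuffersWithDamageKHR");
}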
Compositors now have more control over how the backend creates its
renderer. Currently all backends create an EGL/GLES2 renderer, so
the necessary attributes for creating the context are passed to a
user-provided callback function. It is responsible for initializing
the provided wlr_egl and returning a renderer. On failure, it returns NULL.
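A hedged sketch of such a callback, assuming a function type along
the lines of what the backends pass (exact parameters may differ
between versions):

static struct wlr_renderer *create_my_renderer(struct wlr_egl *egl,
		EGLenum platform, void *remote_display,
		EGLint *config_attribs, EGLint visual_id) {
	if (!wlr_egl_init(egl, platform, remote_display,
			config_attribs, visual_id)) {
		return NULL; /* initialization failed */
	}
	return wlr_gles2_renderer_create(egl);
}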
Fixes #987
../render/gles2/renderer.c: In function ‘gles2_render_texture_with_matrix’:
../render/gles2/renderer.c:140:2: error: ‘target’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
glBindTexture(target, tex_id);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../render/gles2/renderer.c:145:2: error: ‘prog’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
glUseProgram(prog);
- Textures are now immutable (apart from those created from raw
pixels), no more invalid textures
- Move all wl_drm stuff in wlr_renderer
- Most of wlr_texture fields are now private
- Remove some duplicated DMA-BUF code in the DRM backend
- Add more assertions
- Stride is now always given as bytes rather than pixels
- Drop wl_shm functions
Fun fact: this patch has been written 10,000 meters up in the air.
By using the same vertex shader and adding alpha to the fragment shader
for external textures we can:
- use alpha blending
- have wlr_gles2_render_texture_with_matrix work with
GL_TEXTURE_EXTERNAL_OES textures. Previously this failed
because we passed in alpha, which was unknown to fragment_src_external
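A minimal sketch of such a fragment shader, embedded the way the
GLES2 renderer stores its shader sources (the exact source differs):

static const char fragment_src_external[] =
	"#extension GL_OES_EGL_image_external : require\n"
	"precision mediump float;\n"
	"varying vec2 v_texcoord;\n"
	"uniform samplerExternalOES tex;\n"
	"uniform float alpha;\n"
	"void main() {\n"
	"	gl_FragColor = texture2D(tex, v_texcoord) * alpha;\n"
	"}\n";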
Allow setting the texture target type when generating/binding the
texture. This allows us to attach the texture type to the texture so we
don't have to keep the logic elsewhere.
Tested with
./weston-simple-dmabuf-drm
./weston-simple-dmabuf-drm --import-immediate=1
./weston-simple-dmabuf-drm --y-inverted=1
(and combinations)
Supports only single-plane XRGB dmabufs for now.
Due to the strstr prefix match, EGL_EXT_foo would be incorrectly
matched if EGL_EXT_foobar were available but EGL_EXT_foo was not.
This doesn't matter for the currently checked extensions, but will
matter for EGL_EXT_image_dma_buf_import_modifiers vs
EGL_EXT_image_dma_buf_import
Code borrowed from weston
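Sketched, the exact-match check bounds each hit with separators
instead of accepting any substring:

#include <stdbool.h>
#include <string.h>

static bool check_ext(const char *exts, const char *ext) {
	size_t len = strlen(ext);
	const char *found = strstr(exts, ext);
	while (found != NULL) {
		/* accept only matches delimited by string
		 * boundaries or spaces */
		if ((found == exts || found[-1] == ' ') &&
				(found[len] == '\0' || found[len] == ' ')) {
			return true;
		}
		found = strstr(found + len, ext);
	}
	return false;
}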
This backports some changes to #319 to fix the screenshooter data
format. This also adds wlr_backend_get_renderer, which will be
useful for supporting multiple renderers.
Since wlroots shaders only use one texture at a time (i.e. there is only one
sampler2D variable in any shader), it is unnecessary to switch between active
texture units at this time.
Add a signal for wlr_surface destruction on the wlr_surface that compositors
can listen to in order to remove the surface from their state.
Implement a listener for this in the example wl_compositor to remove the
surface from its internal list of surfaces.
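A sketch of that listener pattern on the compositor side (struct and
field names are illustrative, and the signal's location on
wlr_surface may differ in this revision):

static void handle_surface_destroy(struct wl_listener *listener,
		void *data) {
	struct my_surface *surface =
		wl_container_of(listener, surface, destroy_listener);
	wl_list_remove(&surface->link); /* drop from compositor state */
	wl_list_remove(&surface->destroy_listener.link);
	free(surface);
}

/* at surface creation: */
surface->destroy_listener.notify = handle_surface_destroy;
wl_signal_add(&wlr_surface->events.destroy, &surface->destroy_listener);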
Destroy the surface in the compositor's destroy_surface callback, which is
given when the surface resource is created.
Add a reference to the surface resource to the wlr_surface so a compositor can
find it in its list of resources upon wlr_resource destruction.
Implement the surface_attach method. This is called when a client attaches a
shm buffer with wl_surface_attach().
Implement the GLES2 interface for attaching shm buffers. This creates an
OpenGL texture with the shm buffer contents for the surface.
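Roughly, the GLES2 attach path uploads the wl_shm buffer like this
(a fragment; format selection and error handling omitted):

struct wl_shm_buffer *shm = wl_shm_buffer_get(buffer_resource);
wl_shm_buffer_begin_access(shm);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
	wl_shm_buffer_get_width(shm), wl_shm_buffer_get_height(shm),
	0, GL_RGBA, GL_UNSIGNED_BYTE, wl_shm_buffer_get_data(shm));
wl_shm_buffer_end_access(shm);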
This commit also includes some working code to render the surfaces onto the
screen for demonstration purposes.