Hi Andi,
> with that list of people Cc'ed it's probable that the series
> won't reach the right people.
>
> Please Cc the people you have marked as "CC:" in your commit,
> including the kernel stable mailing list (git-send-email would
> take care of it unless you have explicitly added the
> "sup
On Tue, Sep 16, 2025 at 04:56:44AM -, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915/display: Use DISPLAY_VER over GRAPHICS_VER (rev2)
> URL : https://patchwork.freedesktop.org/series/153973/
> State : success
>
> == Summary ==
>
> CI Bug Log - changes from CI_DRM_17208_full
On Tue, Sep 16, 2025 at 05:43:18PM +, Jonathan Cavitt wrote:
> Remove the 'reg >= 0' check from reg_is_mmio because it is always
> true and therefore unnecessary. Also, fix the kernel docs for
> intel_vgpu_gpa_to_mmio_offset, as they incorrectly assert it returns
> 'Zero on success, negative
== Series Details ==
Series: User readable error codes on atomic_ioctl failure (rev4)
URL : https://patchwork.freedesktop.org/series/152275/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_17209 -> Patchwork_152275v4
Summary
On Mon, 08 Sep 2025, Ville Syrjälä wrote:
> It's aligning stride, not the size. So doesn't make sense. The only
> time you need page alignment for stride is for remapping, which is
> handled correctly by i915 in the dumb bo codepath and not handled at all
> by xe as usual.
>
> I suspect what we re
Replace the manual cpufreq_cpu_put() with __free(put_cpufreq_policy)
annotation for policy references. This reduces the risk of reference
counting mistakes and aligns the code with the latest kernel style.
No functional change intended.
Signed-off-by: Zihuan Zhang
Reviewed-by: Jonathan Cameron
== Series Details ==
Series: Some shmem fixes (rev2)
URL : https://patchwork.freedesktop.org/series/154599/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_17218 -> Patchwork_154599v2
Summary
---
**FAILURE**
Serious
From: pengdonglin
Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side function
definitions")
there is no difference between rcu_read_lock(), rcu_read_lock_bh() and
rcu_read_lock_sched() in terms of RCU read section and the relevant grace
period. That means that spin_lock(), which
On Tue, Sep 16, 2025 at 02:44:53PM +0300, Jani Nikula wrote:
> On Mon, 08 Sep 2025, Ville Syrjälä wrote:
> > It's aligning stride, not the size. So doesn't make sense. The only
> > time you need page alignment for stride is for remapping, which is
> > handled correctly by i915 in the dumb bo codep
On Tue, Sep 16, 2025 at 12:47:30PM +0800, pengdonglin wrote:
> From: pengdonglin
>
> Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side function
> definitions")
> there is no difference between rcu_read_lock(), rcu_read_lock_bh() and
> rcu_read_lock_sched() in terms of RCU read
Remove the 'reg >= 0' check from reg_is_mmio because it is always
true and therefore unnecessary. Also, fix the kernel docs for
intel_vgpu_gpa_to_mmio_offset, as they incorrectly assert it returns
'Zero on success, negative error code if failed', when in reality it
returns 'the MMIO offset of the
On Mon, Sep 15, 2025 at 08:24:06PM +0300, Ilpo Järvinen wrote:
On Mon, 15 Sep 2025, Lucas De Marchi wrote:
On Mon, Sep 15, 2025 at 12:13:47PM +0300, Ilpo Järvinen wrote:
> pci.c has been used as a catch-all for everything that doesn't fit elsewhere
> within PCI core and thus resizable BAR code has been
intel_vgpu_gpa_to_mmio_offset states that it returns
'Zero on success, negative error code if failed'
in the kernel docs.
This is false. The function actually returns
'The MMIO offset of the given GPA'.
Correct the docs.
Signed-off-by: Jonathan Cavitt
---
drivers/gpu/drm/i915/gvt/mmio.c | 2 +-
On Tue, Sep 16, 2025 at 10:57:24AM +0200, Christian König wrote:
> On 16.09.25 10:12, Jani Nikula wrote:
> > On Mon, 15 Sep 2025, Rodrigo Vivi wrote:
> >> On Mon, Sep 15, 2025 at 07:24:10PM +0200, Andi Shyti wrote:
> >>> Hi,
> >>>
> >>> On Mon, Sep 15, 2025 at 03:42:23PM +0300, Jani Nikula wrote:
> -Original Message-
> From: Intel-gfx On Behalf Of Luca
> Coelho
> Sent: 09 September 2025 14:00
> To: intel-gfx@lists.freedesktop.org
> Subject: [PATCH] drm/i915/dmc: explicitly sanitize num_entries from
> package_header
>
> num_entries comes from package_header, which is read from a
On 2025-09-16 06:47, pengdonglin wrote:
From: pengdonglin
Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side
function definitions")
there is no difference between rcu_read_lock(), rcu_read_lock_bh() and
rcu_read_lock_sched() in terms of RCU read section and the relevant
grace
From: pengdonglin
Per Documentation/RCU/rcu_dereference.rst [1], since Linux 4.20's RCU
consolidation [2][3], RCU read-side critical sections include:
- Explicit rcu_read_lock()
- BH/interrupt/preemption-disabling regions
- Spinlock critical sections (including CONFIG_PREEMPT_RT kernels [4]
Hello pengdonglin,
Thank you for the patch, looks reasonable and justified.
On Tue, Sep 16, 2025 at 12:47:32PM +0800, pengdonglin wrote:
> From: pengdonglin
>
> Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side function
> definitions")
> there is no difference between rcu_rea
On 9/15/2025 6:02 PM, Ville Syrjälä wrote:
On Sun, Sep 14, 2025 at 11:29:10AM +0530, Nautiyal, Ankit K wrote:
On 9/11/2025 7:55 PM, Ville Syrjälä wrote:
On Thu, Sep 11, 2025 at 08:15:50AM +0530, Ankit Nautiyal wrote:
When VRR TG is always enabled and an optimized guardband is used, the pipe
Hi Daniel,
On 2025-09-12 at 15:19:28 -0300, Daniel Almeida wrote:
> We will be adding tests for Panthor in a subsequent patch, so first add
> the ability to open the device.
>
> In particular, these will be used to test both Panthor[0] and Tyr[1],
> i.e.: the new Rust GPU driver that implements Pa
== Series Details ==
Series: Some shmem fixes
URL : https://patchwork.freedesktop.org/series/154599/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_17213 -> Patchwork_154599v1
Summary
---
**SUCCESS**
No regressions
Hi Taotao,
now that we have fixed the drm-intel-gt-next branch we can
apply both the patches. I'm resending them both for CI tests.
Thanks for your patience,
Andi
Taotao Chen (2):
drm/i915: set O_LARGEFILE in __create_shmem()
drm/i915: Fix incorrect error handling in shmem_pwrite()
dri
From: Taotao Chen
Without O_LARGEFILE, file->f_op->write_iter calls
generic_write_check_limits(), which enforces a 2GB (MAX_NON_LFS) limit,
causing -EFBIG on large writes.
In shmem_pwrite(), this error is later masked as -EIO due to the error
handling order, leading to igt failures like gen9_exe
From: Taotao Chen
shmem_pwrite() currently checks for short writes before negative error
codes, which can overwrite real errors (e.g., -EFBIG) with -EIO.
Reorder the checks to return negative errors first, then handle short
writes.
Signed-off-by: Taotao Chen
Reviewed-by: Andi Shyti
Link: https
== Series Details ==
Series: drm/{i915,xe}/dsb: refactor DSB buffer allocation
URL : https://patchwork.freedesktop.org/series/154591/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_17212 -> Patchwork_154591v1
Summary
---
Hi Daniel,
On 2025-09-12 at 15:19:29 -0300, Daniel Almeida wrote:
> Add the necessary code needed to compile panthor tests as well as the
> basic infrastructure that will be used by the Panthor tests themselves.
>
> To make sure everything is in order, add a basic test in
> panthor_query.c.
>
> S
The DSB buffer implementation is really independent of display. Pass
struct drm_device instead of struct intel_crtc to
intel_dsb_buffer_create(), and drop the intel_display_types.h include.
Signed-off-by: Jani Nikula
---
drivers/gpu/drm/i915/display/intel_dsb.c| 2 +-
drivers/gpu/drm/i91
Now that struct intel_dsb_buffer is opaque, it can be made unique to
both drivers, and we can drop the unnecessary struct i915_vma part. Only
the struct xe_bo part is needed.
Signed-off-by: Jani Nikula
---
drivers/gpu/drm/xe/display/xe_dsb_buffer.c | 28 +++---
1 file changed, 8
Move the definitions of struct intel_dsb_buffer to the driver specific
files, hiding the implementation details from the shared DSB code.
Signed-off-by: Jani Nikula
---
drivers/gpu/drm/i915/display/intel_dsb_buffer.c | 6 ++
drivers/gpu/drm/i915/display/intel_dsb_buffer.h | 8 +---
drive
Better separate i915/xe drivers specific details from display.
Jani Nikula (4):
drm/{i915,xe}/dsb: make {intel,xe}_dsb_buffer.c independent of display
drm/{i915,xe}/dsb: allocate struct intel_dsb_buffer dynamically
drm/{i915,xe}/dsb: make struct intel_dsb_buffer opaque
drm/xe/dsb: drop the
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> The watermark algorithm sometimes produces results where higher
> watermark levels have smaller blocks/lines watermarks than the lower
> levels. That doesn't really make sense as the corresponding latencies
> are su
Hi Krzysztof,
with that list of people Cc'ed it's probable that the series
won't reach the right people.
Please Cc the people you have marked as "CC:" in your commit,
including the kernel stable mailing list (git-send-email would
take care of it unless you have explicitly added the
"suppress-cc=
Hi Krzysztof,
On Tue, Sep 16, 2025 at 06:34:06AM +, Krzysztof Karas wrote:
> Fields hdisplay and vdisplay are defined as u16 and their
> multiplication causes implicit promotion to signed 32-bit value,
> which may overflow and cause undefined behavior.
>
> Prevent possible undefined behavior
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Some systems (eg. LNL Lenovo Thinkpad X1 Carbon) declare
> semi-bogus non-monotonic WM latency values:
> WM0 latency not provided
> WM1 latency 100 usec
> WM2 latency 100 usec
> WM3 latency 100 usec
> WM4 laten
On Tue, 16 Sep 2025, Andi Shyti wrote:
> Hi Krzysztof,
>
> On Tue, Sep 16, 2025 at 06:33:00AM +, Krzysztof Karas wrote:
>> There are two unsafe scenarios in that function:
>> 1) drm_format_info_block_width/height() may return 0 and cause
>> division by 0 down the line. Return early if any of
Hi Krzysztof,
On Tue, Sep 16, 2025 at 06:33:33AM +, Krzysztof Karas wrote:
> Due to the nature of round_up(), its first argument is
> decremented by one. drm_suballoc_hole_soffset() may return 0,
> which is then passed to round_up() and may wrap around.
> Remedy that by adding a guard against
Hi Krzysztof,
On Tue, Sep 16, 2025 at 06:33:00AM +, Krzysztof Karas wrote:
> There are two unsafe scenarios in that function:
> 1) drm_format_info_block_width/height() may return 0 and cause
> division by 0 down the line. Return early if any of these values
> are 0.
> 2) dma_addr calculation
== Series Details ==
Series: drm: Add GPU frequency tracepoint (rev2)
URL : https://patchwork.freedesktop.org/series/154231/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_17211 -> Patchwork_154231v2
Summary
---
**SUC
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> In order to help with debugging print both the original wm
> latencies read from the mailbox/etc., and the final fixed/adjusted
> values.
>
> Signed-off-by: Ville Syrjälä
> ---
Reviewed-by: Luca Coelho
--
Cheer
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Flatten the loop inside sanitize_wm_latency() a bit
> by using 'continue'.
>
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/display/skl_watermark.c | 12 ++--
> 1 file changed, 6 insertions(+
On 16.09.25 10:12, Jani Nikula wrote:
> On Mon, 15 Sep 2025, Rodrigo Vivi wrote:
>> On Mon, Sep 15, 2025 at 07:24:10PM +0200, Andi Shyti wrote:
>>> Hi,
>>>
>>> On Mon, Sep 15, 2025 at 03:42:23PM +0300, Jani Nikula wrote:
On Mon, 15 Sep 2025, Ilpo Järvinen wrote:
> PCI core provides pci_r
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Pull the "zero out invalid WM latencies" stuff into a helper.
> Mainly to avoid mixing higher level and lower level stuff in
> the same adjust_wm_latency() function.
>
> Signed-off-by: Ville Syrjälä
> ---
Reviewe
On Tue, 16 Sep 2025, S Sebinraj wrote:
> Moved the trace file header to the appropriate path
> "include/drm" and updated the code accordingly.
You're not supposed to address code review in independent patches, but
rather modify the original patches. This is kernel development basics.
BR,
Jani.
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Bump the latency for all watermark levels in the
> 16Gb+ DIMM w/a. The spec does ask us to do it only for level
> 0, but it seems more sane to bump all the levels. If the actual
> memory access is slower then the wa
On Mon, 2025-09-08 at 10:35 +0300, Luca Coelho wrote:
> There's no need to have a forward-declaration for skl_sagv_disable(),
> so move the intel_sagv_init() function below the called function to
> prevent it.
>
> Signed-off-by: Luca Coelho
> ---
> drivers/gpu/drm/i915/display/skl_watermark.c |
On Mon, 2025-09-08 at 10:35 +0300, Luca Coelho wrote:
> Some of the ops in struct intel_wm_funcs are used only for legacy
> watermark management, while others are only for SKL+ or both. Clarify
> that in the struct definition.
>
> Signed-off-by: Luca Coelho
> ---
> drivers/gpu/drm/i915/display/
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> If WM0 latency is zero we need to bump it (and the WM1+ latencies)
> but a fixed amount. But any WM1+ level with zero latency must
> not be touched since that indicates that corresponding WM level
> isn't supported.
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Extract the "increase wm latencies by some amount" code into
> a helper that can be reused.
>
> Signed-off-by: Ville Syrjälä
> ---
Reviewed-by: Luca Coelho
--
Cheers,
Luca.
> drivers/gpu/drm/i915/display/skl
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> I want skl_read_wm_latency() to just do what it says on
> the tin, ie. read the latency values from the pcode mailbox.
> Move the DG2 "multiply by two" trick elsewhere.
>
> Signed-off-by: Ville Syrjälä
> ---
Revi
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> {mtl,skl}_read_wm_latency() are doing way too many things for
> my liking. Move the adjustment stuff out into the caller.
> This also gives us one place where we specify the 'read_latency'
> for all the platforms, i
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> We always operate on i915->display.wm.skl_latency in
> {skl,mtl}_read_wm_latency(). No real need for the caller
> to have to pass that in explicitly.
>
> Signed-off-by: Ville Syrjälä
> ---
Reviewed-by: Luca Coelh
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Currently the code assumes that every platform except dg2 need the
> 16Gb DIMM w/a, while in reality it's only needed by skl and icl (and
> derivatives). Switch to a more specific platform check.
>
> Signed-off-by:
On Fri, 2025-09-05 at 17:58 +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> While the spec only asks us to do the WM0 latency bump for 16Gb
> DRAM devices I believe we should apply it for larger DRAM chips.
> At the time the w/a was added there were no larger chips on
> the market, but I th
On Mon, 15 Sep 2025, Rodrigo Vivi wrote:
> On Mon, Sep 15, 2025 at 07:24:10PM +0200, Andi Shyti wrote:
>> Hi,
>>
>> On Mon, Sep 15, 2025 at 03:42:23PM +0300, Jani Nikula wrote:
>> > On Mon, 15 Sep 2025, Ilpo Järvinen wrote:
>> > > PCI core provides pci_rebar_size_supported() that helps in checki
Many callers of pci_rebar_get_possible_sizes() are interested in
finding out if a particular BAR Size (PCIe r6.2 sec. 7.8.6.3) is
supported by the particular BAR.
Add pci_rebar_size_supported() into PCI core to make it easy for the
drivers to determine if the BAR Size is supported or not.
Use the
On Mon, 15 Sep 2025, Andi Shyti wrote:
> Hi Ilpo,
>
>> +/**
>> + * pci_rebar_size_supported - check if size is supported for BAR
>> + * @pdev: PCI device
>> + * @bar: BAR to check
>> + * @size: size as defined in the PCIe spec (0=1MB, 31=128TB)
>> + *
>> + * Return: %true if @bar is resizable and
Quoting Gustavo Sousa (2025-09-15 10:40:31-03:00)
>Quoting Dnyaneshwar Bhadane (2025-09-11 17:55:39-03:00)
>>- Add WCL as subplatform and update the definition struct.
>>- Update condition required to distinguish WCL C10 PHY selection
>>on port B.
>
>I have added comments in individual patches. Mo
For failures in async flip atomic check/commit path return user readable
error codes in struct drm_atomic_state.
Signed-off-by: Arun R Murthy
---
drivers/gpu/drm/i915/display/intel_display.c | 25 ++---
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/drivers/g
== Series Details ==
Series: drm: Miscellaneous fixes in drm code (rev3)
URL : https://patchwork.freedesktop.org/series/154173/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_17209 -> Patchwork_154173v3
Summary
---
**
Moved the trace file header to appropriate path
"include/drm" and updated the code as per the same.
Signed-off-by: S Sebinraj
---
drivers/gpu/drm/drm_gpu_frequency_trace.c | 2 +-
drivers/gpu/drm/xe/xe_gpu_freq_trace.h | 2 +-
{drivers/gpu => include}/drm/drm_gpu_fre
Integrate xe PMU with the DRM-level GPU frequency tracepoint to provide
efficient frequency monitoring with change detection.
Key changes:
- Add frequency change detection
- Implement per-GT frequency tracking using last_act_freq array
- Only trace when GPU frequency actually changes per GT
The i
Add a GPU frequency tracepoint at the DRM subsystem level
The implementation includes:
- DRM-level tracepoint exposed at
/sys/kernel/debug/tracing/events/power/gpu_frequency/
- CONFIG_DRM_GPU_FREQUENCY_TRACE Kconfig option (default=n)
The tracepoint follows kernel tracing and provides kHz freque
Add a GPU frequency tracepoint at the DRM subsystem level.
Integrates with the Xe PMU to provide frequency tracing.
The tracepoint is exposed at:
/sys/kernel/debug/tracing/events/power/gpu_frequency
Format: {unsigned int state, unsigned int gpu_id}
- state: GPU frequency in kHz
- gpu_id: GPU
Now that a proper error code will be returned to the user on any failure
in atomic_ioctl() via struct drm_mode_atomic, add a new element
error_code in the struct drm_atomic_state so as to hold the error code
during the atomic_check() and atomic_commit() phases.
New function added to print the error
Move the atomic_state allocation to the beginning of the atomic_ioctl
to accommodate drm_mode_atomic_err_code usage for returning an error
code on failures.
Signed-off-by: Arun R Murthy
---
drivers/gpu/drm/drm_atomic_uapi.c | 21 +++--
1 file changed, 11 insertions(+), 10 deletions(-)
Add user readable error codes for failure cases in drm_atomic_ioctl() so
that the user can decode the error code and take corrective measures.
Signed-off-by: Arun R Murthy
---
drivers/gpu/drm/drm_atomic_uapi.c | 71 ---
1 file changed, 52 insertions(+), 19 del
There can be multiple reasons for a failure in atomic_ioctl. Most often
in these error conditions -EINVAL is returned. The user/compositor would
have to blindly decide, on a failure of this ioctl, whether to retry
with ALLOW_MODESET or similar. It would be good if the user/compositor
gets a readable error code on fail
The series focuses on providing a user readable error value on a failure
in drm_atomic_ioctl(). Usually -EINVAL is returned in most of the error
cases and it is difficult for the user to decode the error and get to
know the real cause for the error. If user gets to know the reason for
the error the