> -----Original Message-----
> From: Almahallawy, Khaled <[email protected]>
> Sent: Wednesday, February 7, 2024 11:58 AM
> To: Murthy, Arun R <[email protected]>; Nikula, Jani
> <[email protected]>; [email protected]
> Cc: Shankar, Uma <[email protected]>; Deak, Imre
> <[email protected]>; Syrjala, Ville <[email protected]>
> Subject: Re: [RFC 2/4] drm/i915/display/dp: Dont send hotplug event on LT
> failure
> 
> On Tue, 2024-02-06 at 15:06 +0000, Murthy, Arun R wrote:
> > > -----Original Message-----
> > > From: Nikula, Jani <[email protected]>
> > > Sent: Tuesday, February 6, 2024 5:10 PM
> > > To: Murthy, Arun R <[email protected]>;
> > > [email protected]
> > > Cc: Deak, Imre <[email protected]>; Syrjala, Ville <
> > > [email protected]>; Shankar, Uma <[email protected]>;
> > > Murthy, Arun R <[email protected]>
> > > Subject: Re: [RFC 2/4] drm/i915/display/dp: Dont send hotplug event
> > > on LT failure
> > >
> > > On Tue, 06 Feb 2024, Arun R Murthy <[email protected]> wrote:
> > > > On link training failure, the fallback sequence sent a hotplug
> > > > event to the user, but this is not required as we are not changing
> > > > the mode and instead only changing the link rate and lane count.
> > > > Userspace has no dependency on these parameters.
> > > >
> > > > Signed-off-by: Arun R Murthy <[email protected]>
> > > > ---
> > > >  drivers/gpu/drm/i915/display/intel_dp_link_training.c | 5 +----
> > > >  1 file changed, 1 insertion(+), 4 deletions(-)
> > > >
> > > > diff --git
> > > > a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > index 1abfafbbfa75..242cb08e9fc4 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> > > > @@ -1074,8 +1074,6 @@ intel_dp_link_train_phy(struct intel_dp *intel_dp,
> > > >  static void intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
> > > >                                                       const struct intel_crtc_state *crtc_state)
> > > >  {
> > > > -       struct intel_connector *intel_connector = intel_dp->attached_connector;
> > > > -       struct drm_i915_private *i915 = dp_to_i915(intel_dp);
> > > >
> > > >         if (!intel_digital_port_connected(&dp_to_dig_port(intel_dp)->base)) {
> > > >                 lt_dbg(intel_dp, DP_PHY_DPRX, "Link Training failed on disconnected sink.\n");
> > > > @@ -1092,8 +1090,7 @@ static void intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
> > > >                 return;
> > > >         }
> > > >
> > > > -       /* Schedule a Hotplug Uevent to userspace to start modeset */
> > > > -       queue_work(i915->unordered_wq, &intel_connector->modeset_retry_work);
> > > > +       /* TODO: Re-visit, sending hotplug is not required. No need to
> > > > +        * notify user as we are not changing the mode */
> > >
> > > Yeah, we're not changing the mode, we're asking the userspace to
> > > change the mode.
> > As far as I can see, a mode change is not a necessity. The link rate
> > and lane count change is internal to the KMD.
> 
> Userspace may need to reprobe in order to ensure that the
> resolution/refresh rate still fits within the bandwidth provided by the
> new LR/LC. Also, I believe this path worked with the DP 1.4 LT fallback
> when we tested it recently.
> 
That's right, I missed it. When we are shifting from a UHBR to an HBR rate,
we might have to check whether the current mode is still supported and, if
not, trigger a hotplug. A table mapping link rates to the maximum supported
resolutions would help here: if the current mode still fits within the
fallback link configuration, proceed; otherwise, trigger a hotplug.

Will take this change in my patch series.

Thanks and Regards,
Arun R Murthy
-------------------

> Thanks
> Khaled
> >
> > Thanks and Regards,
> > Arun R Murthy
> > --------------------
> > > BR,
> > > Jani.
> > >
> > > >  }
> > > >
> > > >  /* Perform the link training on all LTTPRs and the DPRX on a
> > > > link. */
> > >
> > > --
> > > Jani Nikula, Intel
