** Also affects: gdm3 (Ubuntu Jammy)
   Importance: Undecided
       Status: New

** Changed in: gdm3 (Ubuntu Jammy)
   Importance: Undecided => High

** Changed in: gdm3 (Ubuntu Jammy)
     Assignee: (unassigned) => Ghadi Rahme (ghadi-rahme)

** Changed in: gdm3 (Ubuntu Jammy)
       Status: New => In Progress

** Also affects: gdm3 (Ubuntu Mantic)
   Importance: High
     Assignee: Ghadi Rahme (ghadi-rahme)
       Status: In Progress

** Also affects: gdm3 (Ubuntu Lunar)
   Importance: Undecided
       Status: New

** Changed in: gdm3 (Ubuntu Lunar)
   Importance: Undecided => High

** Changed in: gdm3 (Ubuntu Lunar)
     Assignee: (unassigned) => Ghadi Rahme (ghadi-rahme)

** Changed in: gdm3 (Ubuntu Lunar)
       Status: New => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to gdm3 in Ubuntu.
https://bugs.launchpad.net/bugs/2020641

Title:
  Installing or removing apps through snap-store launches another gdm
  session

Status in gdm3 package in Ubuntu:
  In Progress
Status in gdm3 source package in Jammy:
  In Progress
Status in gdm3 source package in Lunar:
  In Progress
Status in gdm3 source package in Mantic:
  In Progress

Bug description:
  gdm3 version: 42.0-1ubuntu7.22.04.2
  Ubuntu version: Ubuntu 22.04.2 LTS

  [Description]
  Installing or removing snap packages through NICE DCV on AWS causes the
  user to be kicked out of their session. The issue occurs on machines with
  NVIDIA GPUs (passed through to a VM) running either the GRID driver or
  the regular NVIDIA driver.

  [Steps to reproduce]
  Simply installing or removing any snap through the snap store will
  trigger the issue, for example:
  $ snap install skype

  Running any of the following commands will also trigger the issue:
  $ snap connect skype:opengl :opengl
  $ snap disconnect skype:opengl :opengl
  $ snap connect skype:camera :camera
  $ snap disconnect skype:camera :camera

  After further investigation I was able to pin the issue down to udev,
  and could reproduce it by running the following command:
  $ sudo udevadm trigger --parent-match=/sys/devices/pci0000:00/0000:00:1e.0/drm/card0

  where "/sys/devices/pci0000:00/0000:00:1e.0/drm/card0" corresponds to
  the NVIDIA GPU of my instance.

  A more generic way of triggering the issue would be running:
  $ sudo udevadm trigger
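
  For context: GDM listens for these device events through a handler
  connected to GUdevClient's "uevent" signal, which is exactly what the
  "g_clear_signal_handler" call discussed below is meant to disconnect.
  The following standalone listener (my own illustration using the
  gudev-1.0 library, not GDM code) shows the events that "udevadm
  trigger" replays:

  /* uevent-listen.c -- minimal GUdev listener (illustrative, not GDM code).
   * Build: gcc uevent-listen.c -o uevent-listen \
   *            $(pkg-config --cflags --libs gudev-1.0)
   */
  #include <gudev/gudev.h>

  static void
  on_uevent (GUdevClient *client, const char *action,
             GUdevDevice *device, gpointer user_data)
  {
          g_print ("uevent: %s %s\n", action,
                   g_udev_device_get_sysfs_path (device));
  }

  int
  main (void)
  {
          const char *subsystems[] = { "drm", NULL };  /* same subsystem as the GPU */
          GUdevClient *client = g_udev_client_new (subsystems);
          gulong handler_id = g_signal_connect (client, "uevent",
                                                G_CALLBACK (on_uevent), NULL);
          GMainLoop *loop = g_main_loop_new (NULL, FALSE);

          /* Run "sudo udevadm trigger" in another terminal: each replayed
           * event for the drm subsystem is printed here. Stop with Ctrl-C. */
          g_main_loop_run (loop);

          /* Never reached in this toy, but this is the disconnect pattern
           * GDM uses once udev has settled. */
          g_clear_signal_handler (&handler_id, client);
          g_object_unref (client);
          return 0;
  }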

  [Solution]

  I have investigated the issue and found that it lies in GDM3's
  "udev_is_settled" function (daemon/gdm-local-display-factory.c).
  When udev has settled, the line "g_clear_signal_handler
  (&factory->uevent_handler_id, factory->gudev_client);" at the end of
  the function disconnects the uevent handler, but that cleanup is
  skipped when the function returns early. In the current implementation
  there are three return points before "g_clear_signal_handler" is
  executed at which udev would already have settled, leaving the handler
  connected and leading to the user being logged out.
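
  In outline, the problematic control flow looks roughly like this (a
  paraphrase of the function's shape, not the verbatim GDM source; the
  field names are illustrative):

  static gboolean
  udev_is_settled (GdmLocalDisplayFactory *factory)
  {
          gboolean is_settled = FALSE;

          if (factory->seat0_has_platform_graphics)
                  return TRUE;     /* early return: handler never cleared */

          if (factory->seat0_has_boot_up_graphics)
                  return TRUE;     /* early return: handler never cleared */

          if (factory->seat0_graphics_check_timed_out)
                  return TRUE;     /* early return: handler never cleared */

          /* ... enumerate devices and possibly set is_settled = TRUE ... */

          if (is_settled)
                  g_clear_signal_handler (&factory->uevent_handler_id,
                                          factory->gudev_client);

          return is_settled;
  }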

  I have written a patch that fixes this issue by making sure
  "g_clear_signal_handler" is executed on every path for which udev is
  settled.
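
  Sketched against the shape above, the fix can route every settled case
  through a single exit point (one possible form of the change, not
  necessarily the exact patch):

  static gboolean
  udev_is_settled (GdmLocalDisplayFactory *factory)
  {
          gboolean is_settled = FALSE;

          if (factory->seat0_has_platform_graphics) {
                  is_settled = TRUE;     /* was: return TRUE; */
                  goto out;
          }

          if (factory->seat0_has_boot_up_graphics) {
                  is_settled = TRUE;
                  goto out;
          }

          if (factory->seat0_graphics_check_timed_out) {
                  is_settled = TRUE;
                  goto out;
          }

          /* ... device enumeration may also set is_settled ... */

  out:
          if (is_settled)
                  g_clear_signal_handler (&factory->uevent_handler_id,
                                          factory->gudev_client);

          return is_settled;
  }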

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdm3/+bug/2020641/+subscriptions

