[Bug 68808] Re: slow kdm/kde startup since upgrade to edgy
Same on my notebook. I upgraded from Dapper to Edgy by changing every entry in /etc/apt/sources.list from dapper to edgy and running sudo apt-get dist-upgrade, on both my desktop and my notebook. On the desktop everything works fine. The notebook needs around one minute from the boot screen closing to the first sign of X, and another minute to start KDE. Starting sudo Xorg from the command line (without running kdm) takes only one or two seconds. I started kdm via /etc/init.d/kdm with -debug 255; the output is attached.
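For reference, this is roughly how such a debug log can be captured; kdm accepts the xdm-style -debug option, but the -nodaemon flag and the log path below are my assumptions, not taken from the attached run:

$ sudo /etc/init.d/kdm stop                             # stop the running display manager first
$ sudo kdm -debug 255 -nodaemon 2> /tmp/kdm-debug.log   # run kdm in the foreground and capture the debug output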
[Bug 103603] Re: X server at 100% CPU
After upgrading to Feisty I have the same problem. When I leave my computer alone for several hours, the desktop still seems to work (the clock changes, KNewsTicker keeps running), but as soon as I move the mouse the display freezes and the keyboard stops working. In some cases the effect can already happen after a few minutes. Two days ago I disabled the screensaver, but the bug still occurs. The Xorg process is running with nearly 100% CPU time. An strace shows that it loops on this:

--- SIGALRM (Alarm clock) @ 0 (0) ---
sigreturn()                             = ? (mask now [])

dmesg shows something interesting:

[  153.572298] NET: Registered protocol family 4
[  153.636382] NET: Registered protocol family 3
[  153.727573] NET: Registered protocol family 5
[  159.658624] ISO 9660 Extensions: Microsoft Joliet Level 3
[  159.931681] ISO 9660 Extensions: RRIP_1991A
[36590.032279] NETDEV WATCHDOG: eth1: transmit timed out
[36590.032286] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=2035.
[36590.480003] NVRM: Xid (0005:00): 16, Head Count 005eaf26
[36598.474546] NVRM: Xid (0005:00): 16, Head Count 005eaf27
[36606.469088] NVRM: Xid (0005:00): 16, Head Count 005eaf28
[36614.463632] NVRM: Xid (0005:00): 16, Head Count 005eaf29
[36622.458174] NVRM: Xid (0005:00): 16, Head Count 005eaf2a
[36630.452719] NVRM: Xid (0005:00): 16, Head Count 005eaf2b
[36638.447264] NVRM: Xid (0005:00): 16, Head Count 005eaf2c
[36646.441807] NVRM: Xid (0005:00): 16, Head Count 005eaf2d
[36654.436351] NVRM: Xid (0005:00): 16, Head Count 005eaf2e
[36662.430896] NVRM: Xid (0005:00): 16, Head Count 005eaf2f
[36670.425442] NVRM: Xid (0005:00): 16, Head Count 005eaf30
[36678.419985] NVRM: Xid (0005:00): 16, Head Count 005eaf31
[36686.414530] NVRM: Xid (0005:00): 16, Head Count 005eaf32
[36694.409075] NVRM: Xid (0005:00): 16, Head Count 005eaf33
[36702.403618] NVRM: Xid (0005:00): 16, Head Count 005eaf34
[36710.398162] NVRM: Xid (0005:00): 16, Head Count 005eaf35
[36936.795626] NETDEV WATCHDOG: eth1: transmit timed out
[36936.795633] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=2036.
[37134.660593] NETDEV WATCHDOG: eth1: transmit timed out
[37134.660600] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=46288.
[37144.653773] NETDEV WATCHDOG: eth1: transmit timed out

Take a look at the lag between [ 159.931681] and [36590.032279]. After killing Xorg with kill -9 there are new entries in dmesg:

[36936.795626] NETDEV WATCHDOG: eth1: transmit timed out
[36936.795633] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=2036.
[37134.660593] NETDEV WATCHDOG: eth1: transmit timed out
[37134.660600] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=46288.
[37144.653773] NETDEV WATCHDOG: eth1: transmit timed out
[37144.653780] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=1289.
[37450.445078] NETDEV WATCHDOG: eth1: transmit timed out
[37450.445086] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=74039.
[37460.438258] NETDEV WATCHDOG: eth1: transmit timed out
[37460.438265] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=1542.
[37722.259577] NETDEV WATCHDOG: eth1: transmit timed out
[37722.259584] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=63042.
[37741.246615] NETDEV WATCHDOG: eth1: transmit timed out
[37741.246622] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=3294.
[38081.014820] NETDEV WATCHDOG: eth1: transmit timed out
[38081.014827] eth1: Tx timed out, excess collisions. TSR=0x1e, ISR=0x8, t=2045.
[38150.386362] NVRM: RmInitAdapter failed! (0x12:0x2b:1544)
[38150.386371] NVRM: rm_init_adapter(0) failed

I tried to restart X, but it doesn't work:

(II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
(II) NVIDIA(0):     enabled.
(EE) NVIDIA(0): The NVIDIA kernel module does not appear to be receiving
(EE) NVIDIA(0):     interrupts generated by the NVIDIA graphics device
(EE) NVIDIA(0):     PCI:5:0:0. Please see Chapter 5: Common Problems in the
(EE) NVIDIA(0):     README for additional information.
(EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device!
(II) UnloadModule: "nvidia"

Chapter 5 says something about IRQ sharing. Perhaps the root of this problem is a conflict between the network card and the NVIDIA adapter? Does somebody have an idea how I can check this? Compare lspci and ifconfig, perhaps?
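One way to check whether the two devices share an IRQ (the eth1/nvidia labels below are what I would expect on this box, so treat them as assumptions rather than verified output):

$ cat /proc/interrupts                      # one row per IRQ; if eth1 and nvidia appear on the same row, they share that interrupt
$ lspci -v | grep -iE 'irq|ethernet|vga'    # cross-check which IRQ each PCI device reports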
[Bug 107080] Re: Wrong RAID UUID on PATA RAID5 partitions after Feisty Upgrade
As far as I know, the UUID in mdadm.conf is the device UUID for md0, which is used in /etc/fstab to mount the drive. The UUID reported by mdadm --examine is the UUID of the RAID partition, which must be the same for all members of a RAID.

Could you please attach:
- a copy of /etc/mdadm/mdadm.conf
- the output of sudo mdadm --query --detail /dev/md0
- the output of sudo mdadm --examine for every member of the RAID
- the output of cat /proc/mdstat

Could you please also describe your issues in more detail?
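A minimal way to compare the different UUIDs (assuming the array is /dev/md0 and /dev/sda4 is one of its members):

$ grep -i uuid /etc/mdadm/mdadm.conf               # array UUID recorded in the config
$ sudo mdadm --detail /dev/md0 | grep -i UUID      # array UUID of the running array
$ sudo mdadm --examine /dev/sda4 | grep -i UUID    # array UUID stored in this member's superblock
$ sudo blkid /dev/md0                              # filesystem UUID, which is what /etc/fstab usually refers to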
[Bug 110304] Re: Feisty crash possibly related to mdadm/raid5
Could you please give some more information on the crash? Did your computer while booting? Or was your system up and running? After fixing bug 107080 on my system, it has run normally, aside from bug 103603, which is not connected to my RAID problems.
[Bug 110304] Re: Feisty crash possibly related to mdadm/raid5
Sorry. This means: "Did your computer crash while booting?"
[Bug 110304] Re: Feisty crash possibly related to mdadm/raid5
The output of mdadm --query --detail shows that your RAID is currently rebuilding (Rebuild Status : 4% complete). The UUID and the status of the drives seem to be OK. It's hard to guess what the reason for the crash was.
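If you want to follow the rebuild, watching the kernel's md status works well:

$ watch cat /proc/mdstat    # shows rebuild progress and the estimated time to finish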
[Bug 103603] Re: X server at 100% CPU
This is now fixed for me. After removing the network card (eth1), the system has been running stable.
[Bug 73012] Re: Wrong Refresh Rate after installing NVIDIA driver (1.0-9629)
Same on my computer with Kubuntu Feisty Fawn (latest updates) and an NVIDIA 6600. Please find the requested information attached. I'm using the binary-only NVIDIA driver 1.0.9755. The xorg.conf is an unmodified, autogenerated version from nvidia-settings.

$ sudo dpkg -l nvidia\* | grep ii
ii  nvidia-glx-new        1.0.9755+2.6.20.5-15.20   NVIDIA binary XFree86 4.x/X.Org 'new' driver
ii  nvidia-kernel-common  20051028+1ubuntu7         NVIDIA binary kernel module common files

** Attachment added: "xorg.conf"
   http://librarian.launchpad.net/7542439/xorg.conf
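For comparison, this is how I would check which refresh rate X actually uses; the DynamicTwinView option below is an assumption on my part, based on the driver README's note that dynamic TwinView makes RandR report a placeholder rate:

$ xrandr -q    # lists the available modes and the refresh rate X currently reports

If xrandr shows an implausible rate (e.g. 50 Hz), adding Option "DynamicTwinView" "False" to the Device section of xorg.conf and restarting X is said to make the real rate visible again; I have not verified this on my card.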
[Bug 107080] Wrong RAID UUID on PATA RAID5 partitions after Feisty Upgrade
Public bug reported:

Binary package hint: mdadm

I had created a RAID5 array on Edgy:

/dev/sda4 - SATA
/dev/sdb1 - SATA
/dev/hda1 - PATA
/dev/hdb1 - PATA

It worked flawlessly for two months with several reboots. On Friday, 13 April 2007 I upgraded to Feisty (bad day?) and ran into the problems with linux-image-2.6.14-23, which freezes on boot when scanning for the SATA drives. I waited until Saturday and upgraded to linux-image-2.6.15-20 from a rescue system. After the upgrade the system recognized my SATA drives, but failed to boot because it could not assemble the RAID array. The /dev/ entries had changed from /dev/hd* to /dev/sd* as documented. I removed the mdadm start script completely from the initial ramdisk so my system could start. Then I tried to assemble my RAID array manually:

mdadm --assemble /dev/md0 /dev/sda4 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 misses out due to wrong homehost
mdadm: superblock on /dev/sdd1 doesn't match others - assembly aborted

So I checked the superblock of the partitions:

/dev/sda4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 0b4ff2c9:63c85083:5d9fb26f:21588147 (local to host desaster-area)
  Creation Time : Thu Feb 15 19:56:28 2007
     Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
     Array Size : 468864768 (447.14 GiB 480.12 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Fri Apr 13 16:47:21 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2c9413b7 - correct
         Events : 0.598129
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2 25442   active sync
   0     0 000     active sync
   1     1 3 651   active sync
   2     2 25442   active sync
   3     3 8 173   active sync   /dev/sdb1

/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 0b4ff2c9:63c85083:5d9fb26f:21588147 (local to host desaster-area)
  Creation Time : Thu Feb 15 19:56:28 2007
     Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
     Array Size : 468864768 (447.14 GiB 480.12 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Thu Apr 12 22:06:53 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2c930b41 - correct
         Events : 0.598128
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3 8 173   active sync   /dev/sdb1
   0     0 310     active sync
   1     1 3 651   active sync
   2     2 842     active sync   /dev/sda4
   3     3 8 173   active sync   /dev/sdb1

/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : df53562e:f0eccc5e:a85502b6:021653a8
  Creation Time : Thu Feb 15 19:56:28 2007
     Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
     Array Size : 468864768 (447.14 GiB 480.12 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Fri Apr 13 16:47:21 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b92f13a4 - correct
         Events : 0.598129
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0 000     active sync
   0     0 000     active sync
   1     1 3 651   active sync
   2     2 842     active sync   /dev/sda4
   3     3 8 173   active sync   /dev/sdb1

/dev/sdd1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : df53562e:f0eccc5e:5d9fb26f:21588147 (local to host desaster-area)
  Creation Time : Thu Feb 15 19:56:28 2007
     Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
     Array Size : 468864768 (447.14 GiB 480.12 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Thu Apr 12 22:06:53 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 8dbaeaa9 - correct
         Events : 0.598128
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1 3 651   active sync
   0     0 310     active sync
   1     1 3 651   active sync
   2     2 842     active sync   /dev/sda4
   3     3 8 173   active sync   /dev/sdb1

The SATA Drives ha
[Bug 107080] Re: Wrong RAID UUID on PATA RAID5 partitions after Feisty Upgrade
I succeeded in recovering my partition. It's not clearly stated in the manual, but you can recreate a RAID whose superblocks are messed up. The most important part is to store the RAID information of each partition in a safe place first:

$ sudo mdadm --examine /dev/*    # where * is every member of your RAID (e.g. /dev/sda4)

See above for examples. Then collect the information needed to create a new RAID with exactly the old layout:

1. Order of drives

         Number   Major   Minor   RaidDevice State
   this     1       3       65        1      active sync

   where Number is the number of the drive, starting at zero. In my case, /dev/sdc1 was the first member (Number = 0).

2. RAID level

   Raid Level : raid5

3. Layout

   Layout : left-symmetric

4. Chunk size

   Chunk Size : 64K

With this information, I can recreate the RAID:

$ sudo mdadm /dev/md0 --stop    # if it is running
$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hda1 /dev/hdb1 /dev/sda4 /dev/sdb1

Then it should start. You can watch the progress with

$ watch cat /proc/mdstat

After around an hour my RAID was recovered. All the data was still accessible. Phew. I hope nobody else will run into this problem.

BTW: With the last kernel update (2.6.20.15-27) the naming scheme changed back to /dev/hd* for PATA drives.
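One note on the --create call above: it relies on mdadm's defaults matching the old array (64K chunk size, left-symmetric layout). If your old array used different values, I would expect you have to pass them explicitly, something like:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 --layout=left-symmetric /dev/hda1 /dev/hdb1 /dev/sda4 /dev/sdb1

with the devices listed in their original order (Number 0 first).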
[Bug 68808] Re: slow kdm/kde startup since upgrade to edgy
As there seems to be no real solution so far, a workaround is to install gdm with 'sudo apt-get install gdm' and select it instead of kdm.
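If the install doesn't ask which display manager should be the default (it normally shows a debconf prompt), the choice can be changed afterwards:

$ sudo apt-get install gdm        # usually asks whether gdm or kdm should be the default
$ sudo dpkg-reconfigure gdm       # re-runs the default display manager question later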
[Bug 123460] Re: Rhythmbox:Error while saving song information:Internal GStreamer problem
I had Rhythmbox open in a background window while I was editing tags with EasyTAG. That crashed Rhythmbox, and since then, whenever I want to open it, there is a fairly long delay and it comes up with the pop-up: "Error while saving song information. Internal GStreamer problem; file a bug." I am using Ubuntu 10.04.
[Bug 123460] Re: Rhythmbox:Error while saving song information:Internal GStreamer problem
A re-install of Rhythmbox didn't help, but removing the XML file rhythmdb.xml (~/.local/share/rhythmbox/rhythmdb.xml) let me start the application again (after which it re-scans the music database automatically).
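If someone wants to try the same without losing the old database, moving the file aside instead of deleting it is enough (the backup name is just my choice):

$ mv ~/.local/share/rhythmbox/rhythmdb.xml ~/rhythmdb.xml.bak    # Rhythmbox rebuilds the database on the next start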
[Bug 546105] Re: Simple Scan failed to scan: Error communicating with scanner
I am running Lucid and I also get this error message. My scanner is an Epson Perfection 610, which can't take multiple pages. The scan works as expected; it's only that I get a pop-up window after the scan is complete: "Failed to scan. Error communicating with scanner", with the buttons "Change Scanner" and "Close".