Re: problems with cbl.abuseat.org

2022-08-03 Thread Claudio Kuenzler
On Thu, Jun 30, 2022 at 9:55 AM Jeremy Ardley wrote:
> I'm using postfix as my MTA and lately I've been missing a significant
> fraction from my usual mail
>
> e.g. email from linkedin and spamassassin list.
>
> Tracking it down I see they are all getting rejected by abuseat. e.g.
>
> Jun 30 14:2
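For context: the CBL (cbl.abuseat.org) was retired in 2021 and its data folded into the Spamhaus XBL, which is included in zen.spamhaus.org, so lookups against the old zone can misbehave. A hedged sketch of the usual remedy, assuming the list is configured via `reject_rbl_client` in main.cf (the surrounding restriction list here is an assumption, not taken from the thread):

```
# /etc/postfix/main.cf -- sketch; only the reject_rbl_client line is the point
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    # drop the retired "reject_rbl_client cbl.abuseat.org" entry;
    # the CBL data lives on in Spamhaus XBL, part of zen.spamhaus.org
    reject_rbl_client zen.spamhaus.org
```

Followed by `postfix reload`. Note that Spamhaus refuses queries arriving via large public resolvers, which can itself produce unexpected rejects.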

Re: wtf just happened to my local staging web server

2022-05-04 Thread Claudio Kuenzler
On Wed, May 4, 2022 at 7:18 PM Gary Dale wrote:
> May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP
> Server...
> May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use:
> AH00072: make_sock: could not bind to addre
> May 04 12:16:55 TheLibrarian apachectl[7935]
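The AH00072 / make_sock error means another process already holds the address Apache wants to bind. A minimal sketch for identifying the culprit (port 80 is an assumption; the actual address is truncated in the log above):

```shell
# List the listening TCP socket on port 80 together with the owning
# process (-p shows process details; requires root for other users'
# sockets). ss is part of iproute2 on Debian.
ss -tlnp 'sport = :80'
```

Once the conflicting process (often a second web server or a stale Apache instance) is stopped or rebound, `systemctl start apache2` should succeed.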

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-09-16 Thread Claudio Kuenzler
On Wed, Jun 30, 2021 at 9:51 AM Paul Wise wrote:
> Claudio Kuenzler wrote:
>
> > I currently suspect a Kernel bug in 5.10.

Thanks to everyone for hints and suggestions! In the end it turned out to be an issue with the hpwdt module. After blacklisting this module, no boot or s
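For reference, blacklisting a module on Debian is done with a drop-in file under /etc/modprobe.d/ (the file name is arbitrary):

```
# /etc/modprobe.d/blacklist-hpwdt.conf
# hpwdt is the HP ProLiant hardware watchdog driver
blacklist hpwdt
```

Rebuilding the initramfs afterwards with `update-initramfs -u` makes the blacklist effective during early boot as well.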

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-29 Thread Claudio Kuenzler
> I tend to suspect it's unrelated, but if you add "nomodeset nofb" to
> your boot command line it will turn off the graphics drivers.

Yes, I guess it is indeed unrelated. With buster I can see the same messages during boot:
- *ERROR* Failed to load firmware! on drm
- pcc_cpufreq_init: Too many CP

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-29 Thread Claudio Kuenzler
> Trace dump suggests that crash occurs while executing cpuidle module.
> Try to boot with "intel_pstate=force" kernel parameter [1] to force a
> different CPU driver (if CPU supports it) and/or "cpuidle.off=1" to disable
> the cpuidle subsystem.

Thank you Alexander and Georgi (thanks for the link!
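The suggested parameters go on the kernel command line; on Debian the usual place is /etc/default/grub (the existing value of the variable here is an assumption):

```
# /etc/default/grub -- sketch; keep whatever options are already present
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=force cpuidle.off=1"
```

Run `update-grub` and reboot to apply. For a one-off test, the same parameters can be typed at the GRUB menu by pressing `e` on the boot entry.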

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-29 Thread Claudio Kuenzler
Hi Georgi

> I noticed that the kernel logs you posted are between the 62nd - 64th second
> after kernel loading. Why is the boot process so slow?

Due to a disabled SATA device in the BIOS, the kernel tries to do an ERST and SRST and does this until 60s after boot. That's OK, it's been the same on Buster, too

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-29 Thread Claudio Kuenzler
I continue to troubleshoot, but if anyone has experienced something similar, has some hints, or can point to existing bugs, please let me know.

On Tue, Jun 29, 2021 at 10:04 AM Claudio Kuenzler wrote:
> Meanwhile I was able to identify more by removing "quiet" from the grub
> load

Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-29 Thread Claudio Kuenzler
8, 2021 at 8:32 PM Claudio Kuenzler wrote:
> Hello!
>
> Currently testing the new Bullseye release (using
> firmware-bullseye-DI-rc2-amd64-netinst.iso) and see a strange phenomenon on
> a HP Proliant DL380 G7 server.
>
> During boot, the following messages show up in the conso

Bullseye (mostly) not booting on Proliant DL380 G7

2021-06-28 Thread Claudio Kuenzler
Hello!

Currently testing the new Bullseye release (using firmware-bullseye-DI-rc2-amd64-netinst.iso) and see a strange phenomenon on a HP Proliant DL380 G7 server.

During boot, the following messages show up in the console:

[63.063844] pcc_cpufreq_init: Too many CPUs, dynamic performance scaling

Re: How to make dhclient reread its config? (Debian 10)

2020-08-25 Thread Claudio Kuenzler
On Wed, Aug 26, 2020 at 5:56 AM Victor Sudakov wrote:
> Dear Colleagues,
>
> I've made some changes to /etc/dhcp/dhclient.conf, now I need to make
> dhclient reread it (and apply the changes to /etc/resolv.conf).
>
> There seems to be no dhclient service in systemd, and I don't find
> any info ab
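On Debian, dhclient is normally started by ifup (ifupdown) rather than by a systemd unit of its own, and it reads /etc/dhcp/dhclient.conf only at startup. A hedged sketch of the usual approaches (the interface name is an assumption):

```
# Option 1: bounce the interface; ifup starts a fresh dhclient
ifdown eth0 && ifup eth0

# Option 2: release and re-acquire the lease directly;
# the restarted dhclient reads the new dhclient.conf
dhclient -r eth0
dhclient eth0
```

Either way, resolv.conf handling is reapplied from the new configuration when the lease is re-acquired.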

Re: lost dig

2019-02-19 Thread Claudio Kuenzler
On Tue, Feb 19, 2019 at 12:55 PM tony wrote:
> > Isn't the alias defined in '~/.bashrc' or '~/.bash_aliases'?
>
> no...

Maybe it's not an alias at all but rather an "alternative". Check "update-alternatives --get-selections" to see if there is an entry for dig.

Re: lost dig

2019-02-19 Thread Claudio Kuenzler
On 2/19/2019 12:10 PM, tony wrote:
> > In my fiddling with DNS, I installed (as su) a python package from pypi
> > called 'dig'. It turned out to not be what I expected, so I abandoned it.
> >
> > However, now when I enter 'dig' on the command line, it runs this python
> > thing. So I uninstalled d
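When a pip-installed script shadows a system binary, PATH order and the shell's command cache explain the behaviour. A sketch (the /usr/local/bin location is the typical Debian default for a root pip install, not confirmed by the thread):

```shell
# Show every executable named 'dig' in PATH lookup order; a root
# 'pip install' typically lands in /usr/local/bin, which precedes
# /usr/bin on Debian, so the pip script wins the lookup.
type -a dig || echo "no dig on PATH"

# After removing the stray script (e.g. 'pip uninstall dig'),
# clear the shell's cached command location so the real
# /usr/bin/dig from bind9-dnsutils is found again:
hash -r
```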

Status of LXC in Stretch?

2019-02-18 Thread Claudio Kuenzler
Dear all, LXC maintainers,

It seems that there hasn't been much going on concerning the LXC package(s) in Debian 9 Stretch. The version is stuck at 2.0.7 without any patches backported since Jan 2018. Yet there are known (important) bugs which break LXC on Stretch. For example when using cgroup re

Re: OT: Current_Pending_Sector on /dev/sd?

2019-02-14 Thread Claudio Kuenzler
> ./check_smart.pl -g /dev/sd[a-z] -i ata
> OK: [/dev/sda] - Device is clean|
>
> Is it ok that is only return one drive?

You have to use double quotes because it's a regular expression within the perl plugin:

./check_smart -g "/dev/sd[a-z]" -i ata
OK: [/dev/sda] - Device is clean --- [/dev/sd
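The quoting matters because, unquoted, the shell expands the glob before the plugin ever sees it. A self-contained illustration of the difference (using plain printf in place of the plugin):

```shell
# Unquoted: the shell expands /dev/sd[a-z] against existing device
# files, so the program receives only the matching path(s) -- here
# possibly just /dev/sda.
printf '%s\n' /dev/sd[a-z]

# Quoted: the pattern reaches the program as a literal string,
# letting the plugin do its own matching across all drives.
printf '%s\n' "/dev/sd[a-z]"
```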

Re: OT: Current_Pending_Sector on /dev/sd?

2019-02-13 Thread Claudio Kuenzler
On Wed, Feb 13, 2019 at 2:22 PM basti wrote:
> hello,
> I have a raid6 with 4 disks. 2 of them show Current_Pending_Sector 1.

Hi Basti, are you using mdadm for the raid-6 or a hardware raid controller?

> The disks has warranty till Apr. 2019 so I decide to replace them.

If there's only 1
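For reference, the attribute discussed here can be read with smartctl from the smartmontools package (the device path is an example, not taken from the thread):

```
# Print the SMART attribute table and filter the sector health counters
smartctl -A /dev/sda | grep -E 'Current_Pending_Sector|Reallocated_Sector'
```

A pending sector is one the drive could not read but has not yet remapped; it is resolved (remapped or cleared) the next time the sector is written.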

Re: Bug with soft raid?

2019-02-13 Thread Claudio Kuenzler
Hello Steve,

As some of the other responders already said, check your drives' SMART values. But a disk may fail without any indication in the SMART table. I saw this a couple of years ago and documented it here: https://www.claudiokuenzler.com/blog/301/disk-failure-not-detected-by-smart-ata1-f

Re: hp server hardware monitoring

2014-07-30 Thread Claudio Kuenzler
On Wed, Jul 30, 2014 at 9:50 PM, Bonno Bloksma wrote:
> Hi,
>
> > [...]
> > What may be relevant too is that on the g6 server Debian uses the
> > CCISS drivers for the raid hardware, the volume shows up as
> > /dev/cciss/c0d0
> > On the g7 and g8 hardware the raid volume simply shows up as /de