Dragon Player doesn't

2021-09-07 Thread Gary Dale
I don't use Dragon Player normally but I was looking at it just now. 
When I right-click on a video file, select play with then choose Dragon 
Player to play it, it launches Dragon Player but doesn't play the file. 
When I select Play File from within Dragon Player, I can select a video 
to play, but again it doesn't play it.


Has anyone else experienced this problem and/or come across a fix for it?



Re: Dragon Player doesn't

2021-09-08 Thread Gary Dale

On 2021-09-07 21:21, piorunz wrote:

On 07/09/2021 18:04, Gary Dale wrote:

I don't use Dragon Player normally but I was looking at it just now.
When I right-click on a video file, select play with then choose Dragon
Player to play it, it launches Dragon Player but doesn't play the file.
When I select Play File from within Dragon Player, I can select a video
to play, but again it doesn't play it.

Has anyone else experienced this problem and/or come across a fix for 
it?


I don't use this program, but if you run it via terminal, what does it 
say?


Good point. I get a lot of error messages that say "WARNING: bool 
Phonon::FactoryPrivate::createBackend() phonon backend plugin could not 
be loaded". There are other messages that also mention phonon, like this 
sequence:
WARNING: Phonon::createPath: Cannot connect  Phonon::MediaObject ( no 
objectName ) to  Phonon::VideoWidget ( no objectName ).
WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend 
plugin could not be loaded


WARNING: Phonon::createPath: Cannot connect  Phonon::MediaObject ( no 
objectName ) to  Phonon::AudioOutput ( no objectName ).


Along with numerous other messages, including several similar to:

kf.coreaddons: no metadata found in 
"/usr/lib/x86_64-linux-gnu/qt5/plugins/kf5/kio/metainfo.so" "Failed to 
extract plugin meta data from 
'/usr/lib/x86_64-linux-gnu/qt5/plugins/kf5/kio/metainfo.so'"






is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale
I've got a Yahoo mail account (among others) that I use for a particular 
purpose. However it's been a while since I've been able to send e-mail 
from it using Yahoo's smtp servers. Instead I've been sending e-mail via 
another smtp server so the "From" address doesn't match the login 
domain. Gmail apparently now considers that to be sufficient reason to 
bounce the e-mail so I've been trying to get Thunderbird to use the 
Yahoo server to send mail for this account.


So far I haven't been able to come up with any combination of settings, 
including removing the account and recreating it in Thunderbird, that 
allow the mail to go through.


The messages I get either complain about the password or tell me "An 
error occurred while sending mail. The mail server responded: Request 
failed; Mailbox unavailable. Please check the message and try again."


Does anyone have Thunderbird and Yahoo working together?




Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale

On 2021-09-10 18:32, jeremy ardley wrote:


On 11/09/2021 6:26 am, Jeremy Ardley wrote:


On 11/9/21 5:39 am, Gary Dale wrote:



Does anyone have Thunderbird and Yahoo working together?




I have it running on thunderbird. Both imap and smtp use ssl/tls and 
oauth2


smtp uses port 465 while imap uses port 993

I have some memory that getting oauth2 to work may have been a bit of effort.


This may be relevant. You need to remove any stored passwords after 
you apply oauth2 to an account.


https://www.supertechcrew.com/thunderbird-oauth2-gmail/

Jeremy

I've tried that already but I'll give it another go. I actually removed 
all my stored Thunderbird passwords for Yahoo before I recreated the 
account in Thunderbird.




Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale

On 2021-09-10 18:26, Jeremy Ardley wrote:


On 11/9/21 5:39 am, Gary Dale wrote:
I've got a Yahoo mail account (among others) that I use for a 
particular purpose. However it's been a while since I've been able to 
send e-mail from it using Yahoo's smtp servers. Instead I've been 
sending e-mail via another smtp server so the "From" address doesn't 
match the login domain. Gmail apparently now considers that to be 
sufficient reason to bounce the e-mail so I've been trying to get 
Thunderbird to use the Yahoo server to send mail for this account.


So far I haven't been able to come up with any combination of 
settings, including removing the account and recreating it in 
Thunderbird, that allow the mail to go through.


The messages I get either complain about the password or tell me "An 
error occurred while sending mail. The mail server responded: Request 
failed; Mailbox unavailable. Please check the message and try again."


Does anyone have Thunderbird and Yahoo working together?




I have it running on thunderbird. Both imap and smtp use ssl/tls and 
oauth2


smtp uses port 465 while imap uses port 993

I have some memory that getting oauth2 to work may have been a bit of effort.

I had a similar problem with a Rogers account that stopped working. It's 
one of the reasons I stopped using Rogers... They turned their e-mail 
over to Yahoo and Yahoo doesn't really support e-mail.





Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale

On 2021-09-10 18:11, Alexander V. Makartsev wrote:

On 11.09.2021 02:39, Gary Dale wrote:
I've got a Yahoo mail account (among others) that I use for a 
particular purpose. However it's been a while since I've been able to 
send e-mail from it using Yahoo's smtp servers. Instead I've been 
sending e-mail via another smtp server so the "From" address doesn't 
match the login domain. Gmail apparently now considers that to be 
sufficient reason to bounce the e-mail so I've been trying to get 
Thunderbird to use the Yahoo server to send mail for this account.


So far I haven't been able to come up with any combination of 
settings, including removing the account and recreating it in 
Thunderbird, that allow the mail to go through.


The messages I get either complain about the password or tell me "An 
error occurred while sending mail. The mail server responded: Request 
failed; Mailbox unavailable. Please check the message and try again."


Does anyone have Thunderbird and Yahoo working together?



Pretty sure it's an issue with Yahoo¹ ², nothing to do with Thunderbird.
Yahoo is following the same path as GMail, forcing their users to use 
web browsers as mail clients.
In the case of GMail you have to use the same generated "app-password" 
for both IMAP and SMTP services.

I guess the same principle applies to Yahoo.


[1] 
https://help.yahoo.com/kb/account/temporary-access-insecure-sln27791.html
[2] 
https://help.yahoo.com/kb/account/generate-manage-third-party-passwords-sln15241.html

--
Yes, I've tried the app passwords without success. As near as I can 
tell, they are just a randomly generated secure password that is linked 
to a particular application as well as the account.




Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale

On 2021-09-10 20:51, Gary Dale wrote:

On 2021-09-10 18:32, jeremy ardley wrote:


On 11/09/2021 6:26 am, Jeremy Ardley wrote:


On 11/9/21 5:39 am, Gary Dale wrote:



Does anyone have Thunderbird and Yahoo working together?




I have it running on thunderbird. Both imap and smtp use ssl/tls and 
oauth2


smtp uses port 465 while imap uses port 993

I have some memory that getting oauth2 to work may have been a bit of 
effort.



This may be relevant. You need to remove any stored passwords after 
you apply oauth2 to an account.


https://www.supertechcrew.com/thunderbird-oauth2-gmail/

Jeremy

I've tried that already but I'll give it another go. I actually 
removed all my stored Thunderbird passwords for Yahoo before I 
recreated the account in Thunderbird.



Found it. I also needed to remove the smtp server for Yahoo. When 
Thunderbird set up the account, it simply reused the existing 
smtp.mail.yahoo.com server definition. Even when I set that up 
correctly, it wasn't working. However when I deleted it along with the 
pop account, it recreated it from scratch and now it seems to work.


I note that I now have 3 saved passwords for the account:
1) mailbox:// ... for the pop server
2) oauth:// 
3) smtp:// ...

The passwords are all the same - the very long computer generated one.




Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?

2021-09-10 Thread Gary Dale

On 2021-09-10 20:51, Gary Dale wrote:

On 2021-09-10 18:32, jeremy ardley wrote:


On 11/09/2021 6:26 am, Jeremy Ardley wrote:


On 11/9/21 5:39 am, Gary Dale wrote:



Does anyone have Thunderbird and Yahoo working together?




I have it running on thunderbird. Both imap and smtp use ssl/tls and 
oauth2


smtp uses port 465 while imap uses port 993

I have some memory that getting oauth2 to work may have been a bit of effort.


This may be relevant. You need to remove any stored passwords after 
you apply oauth2 to an account.


https://www.supertechcrew.com/thunderbird-oauth2-gmail/

Jeremy

I've tried that already but I'll give it another go. I actually 
removed all my stored Thunderbird passwords for Yahoo before I 
recreated the account in Thunderbird.


I may have spoken too soon. I got this message "An error occurred while 
sending mail. The mail server responded:  Request failed; Mailbox 
unavailable. Please check the message and try again." when I tried to 
send an e-mail from the account. The only change I made to the settings 
was setting a reply-to...  The message went into my sent folder after I 
cleared the error (for the second time), but it isn't showing up in the 
online (yahoo.com) sent folder, although an earlier test message I sent is there.


When I removed the reply-to header, the message went out without 
problems and showed up in the online sent folder as well.


It looks like at least part of my problem was the use of a reply-to address.

>>>>>>>>>

Found it. I also needed to remove the smtp server for Yahoo. When 
Thunderbird set up the account, it simply reused the existing 
smtp.mail.yahoo.com server definition. Even when I set that up 
correctly, it wasn't working. However when I deleted it along with the 
pop account, it recreated it from scratch and now it seems to work.


I note that I now have 3 saved passwords for the account:
1) mailbox:// ... for the pop server
2) oauth:// 
3) smtp:// ...

The passwords are all the same - the very long computer generated one.





Jitsi-meet fails intermittently

2022-01-20 Thread Gary Dale
I tried to send this message a month ago but couldn't get it accepted by 
the list server due to the way it authenticates. I'm trying again now 
that I've switched hosts to one that claims it accepts the list server's 
null-message test.


I've also had to do the uninstall, reboot and reinstall one more time 
since then. This problem is becoming annoying because I expect stuff 
running on stable to work reliably.




I've been running a jitsi meet server for about 16 months now on 
Debian/Stable. Last Christmas it stopped working.



The server was running and accepting connections but participants 
couldn't see or hear each other. I fixed the problem by doing apt remove 
jitsi-meet && apt autoremove then rebooting and reinstalling. The 
removal and reinstall seem necessary, as neither a simple reinstall nor a 
reboot alone fixes the problem.
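For reference, the recovery sequence described above, run as root (a 
sketch of what I did, not a guaranteed fix):

    # remove jitsi-meet and its now-orphaned dependencies
    apt remove jitsi-meet && apt autoremove
    reboot
    # after the reboot, reinstall from scratch
    apt install jitsi-meet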


It had been working fine ever since, up until last night when I 
tested it prior to another meeting (I'd used it monthly between the 
previous failure and last night). I got the same symptoms, so I applied 
the same fix and it started working again.


The jicofo.log and other jitsi logs show bridge failures, which explain 
the symptoms but don't point to a cause. I note that jitsi-meet depends 
on at least:

    jitsi-meet-prosody jitsi-meet-web jitsi-meet-web-config
    jitsi-videobridge2 lua-bitop lua-event lua-expat
    lua-filesystem lua-sec lua-socket lua5.2 prosody

Probably other packages are involved as well but these are the ones that 
are unique to jitsi-meet on my server.


Since the autoremove appears to be a necessary part of the fix, I 
suspect that something is getting corrupted in a dependency. I think it 
is likely in the Debian package management (I do full-upgrades and 
autoremoves roughly every week but otherwise rarely touch the server 
software). However this is as far as my problem tracking skills take me.


Any ideas?





mdadm and whole disk array members

2021-03-22 Thread Gary Dale
I've spent a few days experimenting with using whole disks in a RAID 5 
array and have come to the conclusion that it simply doesn't work well 
enough to be used.


The main problem I had was that mdadm seems to have problems assembling 
the array when it uses entire disks instead of partitions. Each time I 
restarted my computer, I would have to recreate the array. This causes 
the boot process to halt because /etc/mdadm/mdadm.conf and /etc/fstab 
both identify an array that should be started and mounted. Fortunately 
the create command was still in the bash history so I got the create 
parameters right.
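For illustration, the create command was along these lines (device names 
hypothetical; a three-disk RAID 5 over whole disks):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sda /dev/sdb /dev/sdc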


However, after I added another disk to the array, that made the original 
create command obsolete. Plus the kernel assigned different drive 
letters to the drives once I plugged in a new drive, so that I couldn't 
simply add the new drive to the create command.


Fortunately I still had a decade-old script that would cycle through all 
combinations until it found one that would result in a mountable array 
(I had the script due to some problems I was having back in 2010). 
Unfortunately it didn't find any it could mount no matter what the order 
of the drives (which included one "missing").


I've found many other people complaining about similar issues when using 
whole disks to create mdadm RAID arrays. Some of these complaints go 
back many years, so this isn't new.


I suggest that, since it appears the developers can't get this to work 
reliably, the option to use a whole disk be removed and mdadm should 
insist on using partitions. At the very least, mdadm --create should 
issue a warning that using a whole device instead of a partition may 
create problems.




Re: mdadm and whole disk array members

2021-03-22 Thread Gary Dale

On 2021-03-22 18:49, Andy Smith wrote:

Hi Gary,

On Mon, Mar 22, 2021 at 06:20:56PM -0400, Gary Dale wrote:

I suggest that, since it appears the developers can't get this to work
reliably, the option to use a whole disk be removed and mdadm should insist
on using partitions. At the very least, mdadm --create should issue a
warning that using a whole device instead of a partition may create
problems.

I've been using whole disks in mdadm arrays for more than 15 years
across many many servers on Debian stable and have never experienced
what you describe. There must be something else at play here.

I suggest you post a detailed description of your problem to the
linux-raid mailing list and hopefully someone can help debug it.

 https://raid.wiki.kernel.org/index.php/Linux_Raid#Mailing_list

Cheers,
Andy

It's not just me; a lot of other people have been having the same 
problem. It's been reported many times, as I discovered after trying to 
use whole disks. Moreover, the fixes that I'd used in the past don't 
seem to work reliably without partitions.


There doesn't seem to be a downside to using partitions considering that 
partition tables have never taken up any significant amount of space. It 
was an interesting experiment / learning experience but I've decided 
that it's not worth going further so I'm going back to using disks with 
a single partition.





Re: mdadm and whole disk array members

2021-03-25 Thread Gary Dale

On 2021-03-23 08:29, deloptes wrote:

Gary Dale wrote:


It's not just me; a lot of other people have been having the same
problem. It's been reported many times, as I discovered after trying to
use whole disks. Moreover, the fixes that I'd used in the past don't
seem to work reliably without partitions.

A friend told me that he found out it is a problem in some BIOSes with UEFI
that can not handle a boot of md UEFI partition.
Perhaps it also depends how they handle the raid of a whole disk.
Are you trying to boot from that raid?


No.



Re: mdadm and whole disk array members

2021-03-25 Thread Gary Dale

On 2021-03-23 08:44, deloptes wrote:

deloptes wrote:


A friend told me that he found out it is a problem in some BIOSes with
UEFI that can not handle a boot of md UEFI partition.
Perhaps it also depends how they handle the raid of a whole disk.
Are you trying to boot from that raid?

Forgot to ask what is in your /etc/mdadm/mdadm.conf and the IDs of the disks

IMO the problem is that if it is not a partition the mdadm can not assemble
as it is looking for a partition, but not sure how grub or whatever handle
it when you boot off the drive.


The drives use normal /dev/sd* ids. They are not being booted from. I 
had updated /etc/mdadm/mdadm.conf with the new information for the array 
after creating it.
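The mdadm.conf update was the usual scan-and-append (a sketch, assuming 
no stale entries for the old array remain in the file):

    # append the new array's definition and refresh the initramfs
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u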


When I did exactly the same thing after creating a single FD00 partition 
on the drives, everything worked.




Re: mdadm and whole disk array members

2021-03-25 Thread Gary Dale

On 2021-03-23 11:45, Reco wrote:

Hi.

On Tue, Mar 23, 2021 at 01:44:23PM +0100, deloptes wrote:

IMO the problem is that if it is not a partition the mdadm can not
assemble as it is looking for a partition,

My mdadm.conf says:

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan,
# using wildcards if desired.
#DEVICE partitions containers


And /proc/partitions always had whole disks, their partitions, lvm
volumes and whatever else can be presented as a block device by the
kernel.
So mdadm is perfectly capable of assembling whole disk arrays, and it
does so for me for more than 10 years.


but not sure how grub or whatever handle it when you boot off the
drive.

GRUB2 can definitely boot from mdadm's RAID1 as it has an appropriate
module for this specific task. Installing GRUB2 on mdadm array made of
whole disks is tricky though.

UEFI itself, on the other hand - definitely can not, unless you resort
to some dirty hacks. After all, UEFI requires so-called "EFI System
Partition" aka ESP.

Reco


From what I read while looking for solutions, the problem is common. I 
even tried one workaround of zapping any existing partition table on the 
drives. Nothing worked.


Perhaps it only works with virgin drives? Mine had been removed from 
another machine where they had been part of a different array. I zeroed 
the superblocks before creating the new array.
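The cleanup before the new create was roughly this (device name 
hypothetical):

    # wipe old RAID metadata, then any leftover partition tables
    mdadm --zero-superblock /dev/sdX
    sgdisk --zap-all /dev/sdX    # clears both GPT copies and the MBR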




Re: mdadm and whole disk array members

2021-03-25 Thread Gary Dale

On 2021-03-25 21:14, Gary Dale wrote:

On 2021-03-23 08:44, deloptes wrote:

deloptes wrote:


A friend told me that he found out it is a problem in some BIOSes with
UEFI that can not handle a boot of md UEFI partition.
Perhaps it also depends how they handle the raid of a whole disk.
Are you trying to boot from that raid?
Forgot to ask what is in your /etc/mdadm/mdadm.conf and the IDs of 
the disks


IMO the problem is that if it is not a partition the mdadm can not 
assemble
as it is looking for a partition, but not sure how grub or whatever 
handle

it when you boot off the drive.


The drives use normal /dev/sd* ids. They are not being booted from. I 
had updated /etc/mdadm/mdadm.conf with the new information for the 
array after creating it.


When I did exactly the same thing after creating a single FD00 
partition on the drives, everything worked.


When I say "the same thing", I mean creating the array from the 
partitions instead of the whole drives.




Re: mdadm and whole disk array members

2021-03-26 Thread Gary Dale

On 2021-03-26 00:04, Felix Miata wrote:

Gary Dale composed on 2021-03-25 21:19 (UTC-0400):


  From what I read in looking for solutions, the problem is common. I
even tried one workaround of zapping any existing partition table on the
drives. Nothing worked.


"Zapped" exactly how? GPT tables are on both ends of the disks. Wiping the first
sectors won't get the job done.

sgdisk --zap wipes the partition tables.



Re: mdadm and whole disk array members

2021-03-26 Thread Gary Dale

On 2021-03-26 02:59, deloptes wrote:

Gary Dale wrote:


Perhaps it only works with virgin drives? Mine had been removed from
another machine where they had been part of a different array. I zeroed
the superblocks before creating the new array.

I doubt that - IMO should be either the BIOS or the drives, or a combination
of both

It's a Gigabyte ROG STRIX B550-F board. The drives are Seagate Ironwolf 
and WD Red.




Re: upgrade to testing

2021-06-15 Thread Gary Dale

On 2021-06-15 13:26, Wil wrote:

How do I upgrade from Debian stable to Debian testing?


It's not really an upgrade. It's more a switch in priorities. However to 
answer your question directly, as root do either


    sed -i -s 's/buster/bullseye/g' /etc/apt/sources.list

or

    sed -i -s 's/stable/testing/g' /etc/apt/sources.list

depending on how your sources.list file refers to the current stable 
distribution.


After that, do

    apt update
    apt full-upgrade
    apt autoremove

then reboot.




wtf just happened to my local staging web server

2022-05-04 Thread Gary Dale
My Apache2 file/print/web server is running Bullseye. I had to restart 
it yesterday evening to replace a disk drive. Otherwise the last reboot 
was a couple of weeks ago - I recall some updates to Jitsi - but I don't 
think there were any updates since then.


Today I find that I can't get through to any of the sites on the server. 
Instead I get the Apache2 default web page. This happens with both 
Firefox and Chromium. This happens for all the staging sites (that I 
access as ".loc" through entries in my hosts file). My jitsi and 
nextcloud servers simply report failure to get to the server.


I verified that the site files (-available and -enabled) haven't changed 
in months.


I tried restarting the apache2 service and got an error so I tried 
stopping it then starting it again - same error:


root@TheLibrarian:~# service apache2 start
Job for apache2.service failed because the control process exited with 
error code.

See "systemctl status apache2.service" and "journalctl -xe" for details.
root@TheLibrarian:~# systemctl status apache2.service
● apache2.service - The Apache HTTP Server
 Loaded: loaded (/lib/systemd/system/apache2.service; enabled; 
vendor preset: enabled)
 Active: failed (Result: exit-code) since Wed 2022-05-04 12:16:55 
EDT; 5s ago

  Docs: https://httpd.apache.org/docs/2.4/
   Process: 7932 ExecStart=/usr/sbin/apachectl start (code=exited, 
status=1/FAILURE)

   CPU: 29ms

May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP Server...
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: no listening sockets 
available, shutting down

May 04 12:16:55 TheLibrarian apachectl[7935]: AH00015: Unable to open logs
May 04 12:16:55 TheLibrarian apachectl[7932]: Action 'start' failed.
May 04 12:16:55 TheLibrarian apachectl[7932]: The Apache error log may 
have more information.
May 04 12:16:55 TheLibrarian systemd[1]: apache2.service: Control 
process exited, code=exited, status=1/FAILURE
May 04 12:16:55 TheLibrarian systemd[1]: apache2.service: Failed with 
result 'exit-code'.
May 04 12:16:55 TheLibrarian systemd[1]: Failed to start The Apache HTTP 
Server.


also

root@TheLibrarian:/var/log# journalctl -xe
░░The job identifier is 4527.
May 04 12:50:49 TheLibrarian apachectl[8232]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>
May 04 12:50:49 TheLibrarian apachectl[8232]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>
May 04 12:50:49 TheLibrarian apachectl[8232]: no listening sockets 
available, shutting down

May 04 12:50:49 TheLibrarian apachectl[8232]: AH00015: Unable to open logs
May 04 12:50:49 TheLibrarian apachectl[8229]: Action 'start' failed.
May 04 12:50:49 TheLibrarian apachectl[8229]: The Apache error log may 
have more information.
May 04 12:50:49 TheLibrarian systemd[1]: apache2.service: Control 
process exited, code=exited, status=1/FAILURE

░░Subject: Unit process exited
░░Defined-By: systemd
░░Support: https://www.debian.org/support
░░
░░An ExecStart= process belonging to unit apache2.service has exited.
░░
░░The process' exit code is 'exited' and its exit status is 1.
May 04 12:50:49 TheLibrarian systemd[1]: apache2.service: Failed with 
result 'exit-code'.

░░Subject: Unit failed
░░Defined-By: systemd
░░Support: https://www.debian.org/support
░░
░░The unit apache2.service has entered the 'failed' state with result 
'exit-code'.
May 04 12:50:49 TheLibrarian systemd[1]: Failed to start The Apache HTTP 
Server.

░░Subject: A start job for unit apache2.service has failed
░░Defined-By: systemd
░░Support: https://www.debian.org/support
░░
░░A start job for unit apache2.service has finished with a failure.
░░
░░The job identifier is 4527 and the job result is failed.


As I said, I do get the default Apache2 page saying "It works" but that 
appears to be optimistic. ps aux | grep apache2 fails to show the 
service, which confirms the systemctl message that it isn't running.


There is nothing in /var/log/apache2/error.log. The .1 log ends 
yesterday but only contains complaints about php7. Systemctl does report 
(above) "unable to open logs" so that would explain the lack of 
additional messages.  The apache2 directory and its files are root:adm 
with only root having write privileges.


I tried giving the adm group write privileges but that didn't work. 
Turns out the group is empty. Adding www-data to it didn't work either.


Any ideas on how to track down the cause of the failure(s)?

Thanks.


Re: wtf just happened to my local staging web server

2022-05-04 Thread Gary Dale

On 2022-05-04 13:21, Greg Wooledge wrote:

On Wed, May 04, 2022 at 01:01:58PM -0400, Gary Dale wrote:

May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP Server...
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use:
AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use:
AH00072: make_sock: could not bind to addre>

Something else is using the ports that Apache wants to use.

Assuming those ports are 80 and 443, you could use commands like this
to see what's using them:

lsof -i :80
lsof -i :443

If your configuration is telling Apache to use some other ports, then
substitute your port numbers.

Thanks. Somehow nginx got installed. I'm wondering if jitsi or nextcloud 
did that, because I certainly didn't (it doesn't seem likely though, 
because they both failed).


I guess I should pay more attention to the packages that get installed 
when I do apt full-upgrade... Usually I just scan to see if there is 
anything that I should reboot over.




Re: wtf just happened to my local staging web server

2022-05-06 Thread Gary Dale

On 2022-05-05 03:57, Stephan Seitz wrote:

Am Do, Mai 05, 2022 at 09:30:42 +0200 schrieb Klaus Singvogel:

I think there are more.


Yes, I only know wtf as „what the fuck”.

Stephan

Actually, it's "what the frack" - a nod to the Battlestar Galactica 
TV/movie franchise, which uses frack as the expletive of choice.


These days "frack" also refers to a gas extraction process with terrible 
environmental consequences, thereby justifying its use as an expletive 
in the broader world. Fracking is derived from fracturing, the breaking 
of something, which is appropriate in the case of my staging server 
suddenly being broken.




Re: wtf just happened to my local staging web server

2022-05-07 Thread Gary Dale

On 2022-05-05 02:37, Erwan David wrote:

Le 04/05/2022 à 19:01, Gary Dale a écrit :
My Apache2 file/print/web server is running Bullseye. I had to 
restart it yesterday evening to replace a disk drive. Otherwise the 
last reboot was a couple of weeks ago - I recall some updates to 
Jitsi - but I don't think there were any updates since then.


Today I find that I can't get through to any of the sites on the 
server. Instead I get the Apache2 default web page. This happens with 
both Firefox and Chromium. This happens for all the staging sites 
(that I access as ".loc" through entries in my hosts file). My jitsi 
and nextcloud servers simply report failure to get to the server.


I verified that the site files (-available and -enabled) haven't 
changed in months.


I tried restarting the apache2 service and got an error so I tried 
stopping it then starting it again - same error:


root@TheLibrarian:~# service apache2 start


It looks like you started it rather than restarting it, so the running 
apache was not killed


[...]



May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP 
Server...
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in 
use: AH00072: make_sock: could not bind to addre>


This is consistent with former apache still running at that time, and 
using the wanted ports.


If you read my original e-mail, I tried both restarting it and starting 
it. Also, in my original e-mail, I identified that Apache2 wasn't 
running by running ps aux output through grep. Again, this confirms the 
systemctl message - as Greg Wooledge mentions in his reply to you.


Greg Wooledge showed me how to diagnose the problem by identifying the 
process (nginx in this case) that was grabbing the ports Apache2 needed. 
Claudio Kuenzler also provided an alternative method of diagnosing the 
problem.


My problem is I'm not all that conversant in tracking down network 
issues, such as ports. I didn't know that lsof even had a port option. 
And I'm still getting used to systemctl / journalctl.
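For my own notes, the diagnosis boils down to this (assuming Apache is 
on the default ports 80 and 443):

    # who is holding the HTTP/HTTPS ports?
    lsof -i :80
    lsof -i :443
    # alternative: list all listening TCP sockets with owning process
    ss -ltnp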


Anyway, thanks for your attempt to help.



Re: Debian license issue

2022-06-01 Thread Gary Dale

On 2022-06-01 09:14, Lidiya Pecherskaya wrote:

Hello,
Is it possible to get information on the type of license under which 
the Debian software is available?

Thanks in advance.


Most of the packages are distributed under a free license - usually GPL 
or MIT but sometimes others. Packages under the "non-free" section 
usually aren't - which is often because the source is not available.





Why can't I move the document root for a site in Apache 2?

2020-08-30 Thread Gary Dale
I'm running Apache 2.4.38-3+deb10u3 on a Debian/Stable server on an 
AMD64 machine.


When I create a virtual host under /var/www, everything works as 
expected. However, if I change the virtual host's document root to 
another folder on the same machine, I get


Forbidden: You don't have permission to access this resource. 
Apache/2.4.38 (Debian) Server at .local Port 80


where I use .local instead of the live site's actual TLD to refer to my 
local server.


I get the same thing if I replace the public_html folder with a link to 
the other folder. To be clear, the folder and files in it are owned by 
the same account & group. And I can cd to the other folder through the 
link, so it's working.


Also to be clear, when I go to .local with the site in 
/var/www/.local/public_html, it works.


A reason I want to move the sites is that /var is in my system 
partition, which runs off of a small SSD, while the other folder is on a 
RAID-6 array with lots of space.


When I search for the problem, I see a lot of "solutions" that say just 
change the document root of the vhost and restart Apache 2. However that 
isn't working in my case. This is likely Debian specific but all the 
Debian stuff only shows vhosts under /var/www, which isn't what I want.


Any ideas?



Re: Why can't I move the document root for a site in Apache 2?

2020-08-31 Thread Gary Dale

On 2020-08-30 13:24, john doe wrote:

On 8/30/2020 7:08 PM, john doe wrote:

On 8/30/2020 6:27 PM, Gary Dale wrote:

I'm running Apache 2.4.38-3+deb10u3 on a Debian/Stable server on an
AMD64 machine.

When I create a virtual host under /var/www, everything works as
expected. However, if I change the virtual host's document root to
another folder on the same machine, I get

Forbidden: You don't have permission to access this resource.
Apache/2.4.38 (Debian) Server at .local Port 80

where I use .local instead of the live site's actual TLD to refer to my
local server.

I get the same thing if I replace the public_html folder with a link to
the other folder. To be clear, the folder and files in it are owned by
the same account & group. And I can cd to the other folder through the
link, so it's working.

Also to be clear, when I go to .local with the site in
/var/www/.local/public_html, it works.

A reason I want to move the sites is that /var is in my system
partition, which runs off of a small SSD, while the other folder is on a
RAID-6 array with lots of space.

When I search for the problem, I see a lot of "solutions" that say just
change the document root of the vhost and restart Apache 2. However 
that

isn't working in my case. This is likely Debian specific but all the
Debian stuff only shows vhosts under /var/www, which isn't what I want.

Any ideas?




What are the permissions of the directory in question?

Do you have a directory directive for that location in apache2?

--
John Doe



That is, if you change the 'DocumentRoot' directive you also need to
modify or add a corresponding directory directive in apache2.
Look at '/etc/apache2/apache2.conf' for how it is done for '/var/www':

"
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
"

I would change the path of '/var/www/' to your new document root.

OK. I had done that already. However I noticed that my AllowOverride was 
set to "None" - the same as /var/www/. When I change it to All, I get


Forbidden

You don't have permission to access this resource. Server unable to read 
htaccess file, denying access to be safe

Apache/2.4.38 (Debian) Server at lionsclub.local Port 80


In fact I don't have a .htaccess file anywhere on my sites (as per 
Apache's recommendations). The AllowOverride directive apparently allows 
Apache to look for one, so the problem remains the same.


In response to your first question, the permissions are u:rwx g:rwx 
o:rx. This is slightly looser than the folder in /var/www which removes 
g:w. In both cases the files are owned by me and group:www-data.





Re: Why can't I move the document root for a site in Apache 2? [SOLVED]

2020-08-31 Thread Gary Dale

On 2020-08-31 15:14, Gary Dale wrote:

On 2020-08-30 13:24, john doe wrote:

On 8/30/2020 7:08 PM, john doe wrote:

On 8/30/2020 6:27 PM, Gary Dale wrote:

I'm running Apache 2.4.38-3+deb10u3 on a Debian/Stable server on an
AMD64 machine.

When I create a virtual host under /var/www, everything works as
expected. However, if I change the virtual host's document root to
another folder on the same machine, I get

Forbidden: You don't have permission to access this resource.
Apache/2.4.38 (Debian) Server at .local Port 80

where I use .local instead of the live site's actual TLD to refer 
to my

local server.

I get the same thing if I replace the public_html folder with a 
link to

the other folder. To be clear, the folder and files in it are owned by
the same account & group. And I can cd to the other folder through the
link, so it's working.

Also to be clear, when I go to .local with the site in
/var/www/.local/public_html, it works.

A reason I want to move the sites is that /var is in my system
partition, which runs off of a small SSD, while the other folder is on a
RAID-6 array with lots of space.

When I search for the problem, I see a lot of "solutions" that say 
just
change the document root of the vhost and restart Apache 2. However 
that

isn't working in my case. This is likely Debian specific but all the
Debian stuff only shows vhosts under /var/www, which isn't what I 
want.


Any ideas?




What are the permissions of the directory in question?

Do you have a directory directive for that location in apache2?

--
John Doe



That is, if you change the 'DocumentRoot' directive you also need to
modify or add a corresponding directory directive in apache2.
Look at '/etc/apache2/apache2.conf' for how it is done for '/var/www':

"
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
"

I would change the path of '/var/www/' to your new document root.

OK. I had done that already. However I noticed that my AllowOverride 
was set to "None" - the same as /var/www/. When I change it to All, I get


Forbidden

You don't have permission to access this resource. Server unable to 
read htaccess file, denying access to be safe

Apache/2.4.38 (Debian) Server at lionsclub.local Port 80


In fact I don't have a .htaccess file anywhere on my sites (as per 
Apache's recommendations). The AllowOverride directive apparently 
allows Apache to look for one, so the problem remains the same.


In response to your first question, the permissions are u:rwx g:rwx 
o:rx. This is slightly looser than the folder in /var/www which 
removes g:w. In both cases the files are owned by me and group:www-data.



OK. Found it. The folder containing all the sites needed to have the 
www-data group. I'm not sure why, since my directory structure is 
something like:


/<sites folder>
    /<site 1>
    /<site 2>
    /<site 3>
    /<site 4>
    /<site 5>

My document root for the site is the full tree. For testing, I used the 
<document root> in the <Directory> directive. When I 
changed the group ownership on the folder above it, things started 
working. However the <document root> still has the old 
permissions...


The reason I do things this way is that I work on the sites locally in 
place - so my site/project folder contains all the files for that 
site/project, while the files that are needed on the public site reside 
in a single folder below the <site/project folder> (which usually 
contains multiple subfolders).


Thanks John!




Re: Why can't I move the document root for a site in Apache 2? [SOLVED]

2020-08-31 Thread Gary Dale

On 2020-08-31 15:59, Gary Dale wrote:

On 2020-08-31 15:14, Gary Dale wrote:

On 2020-08-30 13:24, john doe wrote:

On 8/30/2020 7:08 PM, john doe wrote:

On 8/30/2020 6:27 PM, Gary Dale wrote:

I'm running Apache 2.4.38-3+deb10u3 on a Debian/Stable server on an
AMD64 machine.

When I create a virtual host under /var/www, everything works as
expected. However, if I change the virtual host's document root to
another folder on the same machine, I get

Forbidden: You don't have permission to access this resource.
Apache/2.4.38 (Debian) Server at .local Port 80

where I use .local instead of the live site's actual TLD to refer 
to my

local server.

I get the same thing if I replace the public_html folder with a 
link to
the other folder. To be clear, the folder and files in it are 
owned by
the same account & group. And I can cd to the other folder through 
the

link, so it's working.

Also to be clear, when I go to .local with the site in
/var/www/.local/public_html, it works.

A reason I want to move the sites is that /var is in my system
partition, which runs off of a small SSD, while the other folder is on a
RAID-6 array with lots of space.

When I search for the problem, I see a lot of "solutions" that say 
just
change the document root of the vhost and restart Apache 2. 
However that

isn't working in my case. This is likely Debian specific but all the
Debian stuff only shows vhosts under /var/www, which isn't what I 
want.


Any ideas?




What are the permissions of the directory in question?

Do you have a directory directive for that location in apache2?

--
John Doe



That is, if you change the 'DocumentRoot' directive you also need to
modify or add a corresponding directory directive in apache2.
Look at '/etc/apache2/apache2.conf' for how it is done for '/var/www':

"
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
"

I would change the path of '/var/www/' to your new document root.

OK. I had done that already. However I noticed that my AllowOverride 
was set to "None" - the same as /var/www/. When I change it to All, I get


Forbidden

You don't have permission to access this resource. Server unable to 
read htaccess file, denying access to be safe

Apache/2.4.38 (Debian) Server at lionsclub.local Port 80


In fact I don't have a .htaccess file anywhere on my sites (as per 
Apache's recommendations). The AllowOverride directive apparently 
allows Apache to look for one, so the problem remains the same.


In response to your first question, the permissions are u:rwx g:rwx 
o:rx. This is slightly looser than the folder in /var/www which 
removes g:w. In both cases the files are owned by me and group:www-data.



OK. Found it. The folder containing all the sites needed to have the 
www-data group. I'm not sure why, since my directory structure is 
something like:


/<sites folder>
    /<site 1>
    /<site 2>
    /<site 3>
    /<site 4>
    /<site 5>

My document root for the site is the full tree. For testing, I used 
the <document root> in the <Directory> directive. When I 
changed the group ownership on the folder above it, things started 
working. However the <document root> still has the old 
permissions...


The reason I do things this way is that I work on the sites locally in 
place - so my site/project folder contains all the files for that 
site/project, while the files that are needed on the public site 
reside in a single folder below the <site/project folder> 
(which usually contains multiple subfolders).


Thanks John!

Just to be clear, the folder I had to change permissions on is the 
document root.




weird behaviour of quotes in bash variable assignments

2020-09-20 Thread Gary Dale
I have the same bash script on two different Debian/Buster AMD64 
servers. However on one it refused to run. I tracked it down quickly to 
a variable substitution problem.


The line causing the problem reads: report="/root/clamscan-report"

On one server  echo $report  prints  /root/clamscan-report  while on the 
other it prints  "/root/clamscan-report".


Needless to say clamscan can't print to the latter. I fixed it by 
removing the quotes on the one server but now the scripts are different 
between the two servers, which isn't what I want. More importantly, I 
don't understand why it refuses to remove the quotes.


Where does this behaviour (keeping the quotes) get set?



Re: weird behaviour of quotes in bash variable assignments

2020-09-20 Thread Gary Dale

On 2020-09-20 18:14, The Wanderer wrote:

On 2020-09-20 at 17:27, Gary Dale wrote:


I have the same bash script on two different Debian/Buster AMD64
servers. However on one it refused to run. I tracked it down quickly
to a variable substitution problem.

The line causing the problem reads: report="/root/clamscan-report"

On one server  echo $report  prints  /root/clamscan-report  while on
the other it prints  "/root/clamscan-report".

Needless to say clamscan can't print to the latter. I fixed it by
removing the quotes on the one server but now the scripts are
different between the two servers, which isn't what I want.

Given the lack of spaces or other potentially-problematic characters in
the path, why not remove them on both servers? Will there potentially be
cases where the path is different, and such characters may be present?


I'm in the habit of always quoting string constants in scripts. It's 
clearer and avoids potential problems with future edits...






More importantly, I don't understand why it refuses to remove the
quotes.

Where does this behaviour (keeping the quotes) get set?

First up: are you sure it's actually bash (rather than some other shell)
that's running the script in both cases?

Second, can you confirm that bash is the same version on both servers?
As reported by e.g 'bash --version'.

If both of those are confirmed, then it may be worth digging deeper. I'm
a bit reluctant to delve deep into the bash man page looking for
something like that without first ruling out other possibilities.

My first guess would be that one of the two might be using the shell
builtin command 'echo' and the other might be using /bin/echo, but
that's only a guess.


It's the same version of bash, as you would expect since both servers 
are running the same up-to-date version of Debian/Stable.


The echo command is reporting things accurately. I uncovered the problem 
by running the script manually and seeing the messages coming up showing 
that it was looking for a file enclosed in quotes. The other server was 
running the script without errors.




Re: weird behaviour of quotes in dash variable assignments

2020-09-20 Thread Gary Dale

On 2020-09-20 20:36, David Christensen wrote:

On 2020-09-20 15:14, The Wanderer wrote:

On 2020-09-20 at 17:27, Gary Dale wrote:


I have the same bash script on two different Debian/Buster AMD64
servers. However on one it refused to run. I tracked it down quickly
to a variable substitution problem.

The line causing the problem reads: report="/root/clamscan-report"

On one server  echo $report  prints  /root/clamscan-report while on
the other it prints  "/root/clamscan-report".

Needless to say clamscan can't print to the latter. I fixed it by
removing the quotes on the one server but now the scripts are
different between the two servers, which isn't what I want.


Given the lack of spaces or other potentially-problematic characters in
the path, why not remove them on both servers? Will there potentially be
cases where the path is different, and such characters may be present?


More importantly, I don't understand why it refuses to remove the
quotes.

Where does this behaviour (keeping the quotes) get set?


First up: are you sure it's actually bash (rather than some other shell)
that's running the script in both cases?

Second, can you confirm that bash is the same version on both servers?
As reported by e.g 'bash --version'.

If both of those are confirmed, then it may be worth digging deeper. I'm
a bit reluctant to delve deep into the bash man page looking for
something like that without first ruling out other possibilities.

My first guess would be that one of the two might be using the shell
builtin command 'echo' and the other might be using /bin/echo, but
that's only a guess.


And:


Is the environment identical on the two servers?

    server1$ env > env1.out

    server2$ env > env2.out

    (move one file to the other host)

    $ diff env1.out env2.out



The environments are identical. The servers' uses are similar - both run 
Samba 4 and have Exim 4 set up for sending mail to me. Both run KVM 
virtual machines. The server that works also runs bacula and borg while 
the problem server runs NFS and Apache2.






Do you have a  "shebang" line as the first line of the script? Please 
post.



I find that Bourne syntax shell scripts are more portable than Bash 
syntax shell scripts.  Have you tried '#!/bin/sh' ?


That's my shebang line. /bin/sh in both cases is a symlink to dash 
(sorry, I keep forgetting that Debian has been using dash instead of bash 
for years now). The version of dash is identical between the two servers.






Have you tried single quotes?


Wouldn't want to. Single quotes alter the behaviour.
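To spell out the difference I mean, with a two-line example (not from 
the script):

    f="report"
    echo "file-$f"    # double quotes expand variables: file-report
    echo 'file-$f'    # single quotes do not: file-$f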



Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale

On 2020-09-20 22:55, David Christensen wrote:

On 2020-09-20 18:18, Gary Dale wrote:

On 2020-09-20 20:36, David Christensen wrote:



The environments are identical.



Have you tried '#!/bin/sh' ?


That's my shebang line.



Have you tried single quotes?


Wouldn't want to. Single quotes alter the behaviour.


Double quotes, single quotes, and no quotes have the same behavior on 
my machine:


2020-09-20 19:50:55 dpchrist@tinkywinky ~/sandbox/sh
$ cat debian-user-20200920-1727-gary-dale.sh
#!/bin/sh

doublequote="/root/clamscan-report"
echo $doublequote

singlequote='/root/clamscan-report'
echo $singlequote

noquote=/root/clamscan-report
echo $noquote

2020-09-20 19:53:02 dpchrist@tinkywinky ~/sandbox/sh
$ /bin/sh -x debian-user-20200920-1727-gary-dale.sh
+ doublequote=/root/clamscan-report
+ echo /root/clamscan-report
/root/clamscan-report
+ singlequote=/root/clamscan-report
+ echo /root/clamscan-report
/root/clamscan-report
+ noquote=/root/clamscan-report
+ echo /root/clamscan-report
/root/clamscan-report


Please run the above script and the following commands, and post your 
console session -- prompts, commands, output:


2020-09-20 19:52:29 dpchrist@tinkywinky ~/sandbox/sh
$ cat /etc/debian_version
9.13

2020-09-20 19:52:38 dpchrist@tinkywinky ~/sandbox/sh
$ uname -a
Linux tinkywinky 4.9.0-13-amd64 #1 SMP Debian 4.9.228-1 (2020-07-05) 
x86_64 GNU/Linux


2020-09-20 19:52:43 dpchrist@tinkywinky ~/sandbox/sh
$ dpkg-query --show dash
dash    0.5.8-2.4

2020-09-20 19:52:51 dpchrist@tinkywinky ~/sandbox/sh
$ dpkg --verify dash

2020-09-20 19:52:56 dpchrist@tinkywinky ~/sandbox/sh
$ sha256sum /bin/sh
e803088e7938b328b0511957dcd0dd7b5600ec1940010c64dbd3814e3d75495f /bin/sh


David

In the simple case, the quotes are the same, but there are times when 
they have different behaviours. I avoid single-quotes so that when I use 
them, it's a clue that there is a reason.


Here's what I got with your script:

/root/clamscan-report
/root/clamscan-report
/root/clamscan-report

When I retried my script with the quotes, it started working. I have no 
idea what changed from earlier today. I certainly didn't update anything 
on either server...




Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale

On 2020-09-21 07:51, Greg Wooledge wrote:

On Sun, Sep 20, 2020 at 09:18:33PM -0400, Gary Dale wrote:

The line causing the problem reads: report="/root/clamscan-report"

There is nothing wrong with that alleged line.

There is an incredible lack of openness and forthrightness in this
thread.  What are you hiding?  Why are you hiding it?


Nothing. Just not sharing extraneous details.



Have you tried single quotes?

Wouldn't want to. Single quotes alter the behaviour.

False.  Either type of quotes is perfectly fine in this case.


The key words being "in this case". As I explained elsewhere in this 
thread, I avoid using single-quotes except when I need them. Otherwise 
anyone reading the script wouldn't know if the single quotes were needed 
or not. This is fairly standard practice, as far as I've seen, when 
writing scripts.





The fact that you make this claim tells me one of the following two
things is true:

  * You have no idea what you're doing, and you are GUESSING that
changing the quotes would cause an issue, but you did not actually
try it.

  * The whole discussion is founded on falsehoods.  There's something
you're not telling us.  The "line" in question is falsified, or it's
part of some larger context which is being hidden, and this context
is vitally important to understanding the problem.

I'm leaning toward the latter.

What you should do is produce the smallest possible script that still
exhibits the problem, and post that script, plus its output.  Obviously
you'd add something like  echo "$report"  to the script.  Even better
would be  printf '%s\n' "$report"  .  Better still would be
printf %s "$report" | hd  .  Or you could just add set -x to the script.

The smallest possible script was what I posted - the single line copied 
and pasted from the script it was running in. It was the first line 
after the shebang.





Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale



On 2020-09-21 08:18, Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 07:55:45AM -0400, Cindy Sue Causey wrote:

'…' and "…" are known as neutral, vertical, straight, typewriter,
dumb, or ASCII quotation marks.

‘…’ and “…” are known as typographic, curly, curved, book, or smart
quotation marks.

Yes.  This is one of the possible causes for the behavior the OP was
reporting.  But if this is true, then it reveals that they were lying
when they claimed that the scripts were the same on both servers.


They function differently somehow, too. I don't remember if that
difference in functioning was by design or just bad placement in a
file. Maybe it's about the above reference to ASCII. Discovering the
difference between them was another one of those ah-ha moments tripped
over via a terminal window.

Remember, the computer can't actually *see* the characters the way
that you do.  To the computer, every character is just a number, or
a sequence of numbers.

To bash, the character " (byte value 0x22) has a special meaning, and
so does the character ' (byte value 0x27).  However, the characters
“ (byte values 0xe2809c) and ” (byte values 0xe2809d) have no special
meaning.  They're just some random data that the shell doesn't interpret.

unicorn:~$ x="foo"; echo "$x"
foo
unicorn:~$ x='foo'; echo "$x"
foo
unicorn:~$ x=“foo”; echo "$x"
“foo”

To beat a dead horse some more, if *this* was the OP's problem, then they
told multiple lies about it.  They did not paste the actual failing line
from the failing script (probably retyped it instead), and they did not
ACTUALLY COMPARE the two scripts to see whether they were different,
instead simply ASSUMING that the two scripts were identical, even though
they very clearly weren't.

An actual troubleshooting would have done something like using md5sum
on the script on each machine, and pasting the md5sum commands (including
the full script pathname) and their output to the mailing list.  Openness.

Or, hell, even "ls -l /full/pathname" would probably have revealed that
the scripts were not the same SIZE.  That would also have shown immediately
that the scripts were not "the same".

As an FYI, the scripts were the same on both servers because I had ssh 
sessions on both and copy/pasted the script from one to the other. (cat 

Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale



On 2020-09-21 12:43, Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 12:34:20PM -0400, Gary Dale wrote:

As an FYI, the scripts were the same on both servers because I had ssh
sessions on both and copy/pasted the script from one to the other. (cat

Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale



On 2020-09-21 13:12, Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 12:59:32PM -0400, Gary Dale wrote:

Did you try it with Konsole? I got

$ hd
a   b
  61 20 20 20 20 20 20 20  62 0a |a   b.|
000a

In other words, konsole *did* alter the contents.


Most shells (all that I am aware of) treat tabs and blanks as generic white
space - not relevant to the execution of the code.

Between arguments, sure.  Tabs and spaces are equivalent in those places.
But inside a quoted string argument?  No, they're not equivalent.

Who knows what other conversions might have taken place?

You were trying to troubleshoot a script that wasn't doing what you
expected.  And yet you just hand-waved everything, rather than getting
in there and doing the work to actually *check* things.

That's where you are making unwarranted assumptions. I tried the line 
outside of the script, hand entered rather than copied & pasted, and got 
the same result. There was something weird going on in the dash shell at 
the time. Just because I didn't report all the testing I'd done doesn't 
mean you should assume that I didn't do it.





Re: weird behaviour of quotes in bash variable assignments

2020-09-21 Thread Gary Dale



On 2020-09-21 16:28, Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 01:19:22PM -0700, David Christensen wrote:

On 2020-09-21 00:52, to...@tuxteam.de wrote:

What does the builtin "shopt" say? Especially the value of `compat42'
is involved in quote removal.

Joy! Joy! Happy! Happy!  Another whole dimension for potential problems!

There is great confusion here, because the OP was so vague and misleading
about *everything*.  Including which shell was being used.

"compat42" and "shopt" are bashisms.  They are not present nor relevant
in dash.


RTFM dash(1) I don't see any 'compat' keywords... (?)

Correct.


STFW https://html.duckduckgo.com/html?q=dash%20compat42 I don't see 'compat'
anywhere on the first page of hits... (?)

Where are the above settings documented?

In bash(1).  They are bash settings.

To the best of my knowledge, there is no bash shopt or compat* setting
that will change the behavior of quotes in such a profound way as to
cause the problem that the OP imagined he had.

At this point, I'm still convinced that it was a curly-quote issue.  Of
course, we'll never know for sure, because the OP is incapable of presenting
simple facts like "here is the script which fails" and "here is its output".

I presented the line that failed, copied and pasted from the Konsole 
session. What more do you want, other than to complain?





Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale



On 2020-09-21 10:30, David Wright wrote:

On Mon 21 Sep 2020 at 08:18:52 (-0400), Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 07:55:45AM -0400, Cindy Sue Causey wrote:

'…' and "…" are known as neutral, vertical, straight, typewriter,
dumb, or ASCII quotation marks.

‘…’ and “…” are known as typographic, curly, curved, book, or smart
quotation marks.

Yes.  This is one of the possible causes for the behavior the OP was
reporting.  But if this is true, then it reveals that they were lying
when they claimed that the scripts were the same on both servers.

[…]

To beat a dead horse some more, if *this* was the OP's problem, then they
told multiple lies about it.  They did not paste the actual failing line
from the failing script (probably retyped it instead), and they did not
ACTUALLY COMPARE the two scripts to see whether they were different,
instead simply ASSUMING that the two scripts were identical, even though
they very clearly weren't.

Actual troubleshooting would have involved something like running md5sum
on the script on each machine, and pasting the md5sum commands (including
the full script pathname) and their output to the mailing list.  Openness.

Or, hell, even "ls -l /full/pathname" would probably have revealed that
the scripts were not the same SIZE.  That would also have shown immediately
that the scripts were not "the same".

I think we should apply Hanlon's razor rather than saying the OP lied.
After all, "compare" means diff or cmp to us, whereas many might just
use their eyeballs. And we all know that authors are the worst people
to check their own work. Proof-reading is a special skill.

Even their fix is poorly described. Did they just type the quotes back
in with an editor, in which case there's no guarantee that the scripts
are identical between machines, or did they transfer a working script
to the failing machine? The best line is saved until last: "I certainly
didn't update anything on either server...". Well, yes, that's
*precisely* what you did: you updated the script.

Cheers,
David.

You are taking my quote out of context. I didn't change anything on the 
server to make the script start working. I updated the script to see if 
it would work after trying Greg's test. There were no program or setting 
updates on the server, and certainly nothing that updated dash. This is 
Debian/Stable we're talking about, after all.


Since it is a file server, there probably were changes to the files on 
its shares, but I'd hardly count that as an "update". Similarly, it was 
running cron jobs for backups and virus scans (unsuccessfully) but again 
I wouldn't call those "updates".





Re: weird behaviour of quotes in dash variable assignments

2020-09-21 Thread Gary Dale

On 2020-09-21 16:18, David Christensen wrote:

On 2020-09-21 09:34, Gary Dale wrote:

As an FYI, the scripts were the same on both servers because I had 
ssh sessions on both and copy/pasted the script from one to the 
other. (cat 

Re: weird behaviour of quotes in dash variable assignments

2020-09-22 Thread Gary Dale

On 2020-09-22 09:29, David Wright wrote:

On Mon 21 Sep 2020 at 20:50:29 (-0400), Gary Dale wrote:

On 2020-09-21 10:30, David Wright wrote:

On Mon 21 Sep 2020 at 08:18:52 (-0400), Greg Wooledge wrote:

On Mon, Sep 21, 2020 at 07:55:45AM -0400, Cindy Sue Causey wrote:

'…' and "…" are known as neutral, vertical, straight, typewriter,
dumb, or ASCII quotation marks.

‘…’ and “…” are known as typographic, curly, curved, book, or smart
quotation marks.

Yes.  This is one of the possible causes for the behavior the OP was
reporting.  But if this is true, then it reveals that they were lying
when they claimed that the scripts were the same on both servers.

[…]

To beat a dead horse some more, if *this* was the OP's problem, then they
told multiple lies about it.  They did not paste the actual failing line
from the failing script (probably retyped it instead), and they did not
ACTUALLY COMPARE the two scripts to see whether they were different,
instead simply ASSUMING that the two scripts were identical, even though
they very clearly weren't.

Actual troubleshooting would have involved something like running md5sum
on the script on each machine, and pasting the md5sum commands (including
the full script pathname) and their output to the mailing list.  Openness.

Or, hell, even "ls -l /full/pathname" would probably have revealed that
the scripts were not the same SIZE.  That would also have shown immediately
that the scripts were not "the same".

I think we should apply Hanlon's razor rather than saying the OP lied.
After all, "compare" means diff or cmp to us, whereas many might just
use their eyeballs. And we all know that authors are the worst people
to check their own work. Proof-reading is a special skill.

Even their fix is poorly described. Did they just type the quotes back
in with an editor, in which case there's no guarantee that the scripts
are identical between machines, or did they transfer a working script
to the failing machine? The best line is saved until last: "I certainly
didn't update anything on either server...". Well, yes, that's
*precisely* what you did: you updated the script.

   ↑↑

You are taking my quote out of context. I didn't change anything on
the server to make the script start working. I updated the script to



see if it would work after trying Greg's test. There were no program
or setting updates on the server, and certainly nothing that updated
dash. This is Debian/Stable we're talking about, after all.

Sorry, I thought you wrote that on Sunday afternoon, "I fixed it by
removing the quotes on the one server but now the scripts are
different between the two servers, which isn't what I want."
Then on Sunday evening, you wrote "When I retried my script with the
quotes, it started working."

The general opinion is that the script was faulty, probably in the
quotes used. The narrative says that you removed the quotes, and
later put them back. It seems fair to suggest that the quotes you
put back were not the same ones that you removed. They were replaced
in the same location, but you didn't put the old (removed) quotes
into a little two-character file, so that you could put precisely
the same ones back into the script, did you?

I thought about that but then there would be no way I could demonstrate 
it other than what I did - post the offending line via cut & paste, a 
method people have been arguing can change the quotes.



Since it is a file server, there probably were changes to the files on
its shares, but I'd hardly count that as an "update". Similarly, it
was running cron jobs for backups and virus scans (unsuccessfully) but
again I wouldn't call those "updates".

Nor I. No, I'm only talking about your script. Does it bear any
relation to the one posted in your blog? The first line (after
the shebang) of the one in the blog is the same line that's under
discussion here, and has curly quotes. I can't parse the second
line's curly quotes, and the fourth line uses an n-dash for a
hyphen (though the other hyphens are ok). The fifth line uses
curly single-quotes. More curly quotes follow.

I don't see any cause for our wasting time pondering on dash
without your posting an MWE that unambiguously demonstrates a
problem.

Yes, that's the script - copy-pasted from the working server then with 
the e-mail addresses changed (they are actually parameters to the 
working script, but why complicate things when explaining a basic 
script). As you noted, it has changed things - hopefully not to the 
point that people won't be able to make it work. I haven't found a way 
to stop Wordpress from doing the substitution but I note the raw text is 
still correct (once you remove the html).




Re: weird behaviour of quotes in dash variable assignments

2020-09-22 Thread Gary Dale

On 2020-09-22 00:25, David Christensen wrote:

On 2020-09-21 18:04, Gary Dale wrote:

The two servers are for different customers. I would not want to 
create a tunnel between them. Instead I have my normal ssh tunnels to 
each server from my workstation. However the script is only readable 
by root while my tunnels are for my non-root account. While I could 
copy the file to my non-root account (while root), chown it, copy it 
to my workstation then to the other server, where I'd move it to 
/root, that's a lot more work than cat, copy, paste, save.


Again, the method I used should not have created any changes in the 
script that would affect its operation. And to date I've seen no 
indication that it did. I still don't know why the script was leaving 
the quotes in nor why it started working.


You might want to consider ssh-agent and SSH agent forwarding. These 
allow you to access your version control server over SSH from remote 
hosts by using your workstation credentials; no credentials required 
on the remote host:



https://dev.to/levivm/how-to-use-ssh-and-ssh-agent-forwarding-more-secure-ssh-2c32 




David

I'm not sure that does anything for me. I would need to create a "root" 
key to get access to the file, which is something I refuse to do.


Right now the ssh tunnel requires a key on the remote server and there 
are no root keys so even if someone gains access, they still don't have 
root access.


There are other tools that work better for pushing things to multiple 
servers but all of these tools assume you are doing it often enough or 
to enough machines to make it worthwhile. That's not my situation.




Re: weird behaviour of quotes in bash variable assignments

2020-09-22 Thread Gary Dale

On 2020-09-22 01:48, Andrei POPESCU wrote:

On Lu, 21 sep 20, 17:22:26, Gary Dale wrote:

I presented the line that failed, copied and pasted from the Konsole
session. What more do you want, other than to complain?

In such cases it is best to attach[1] the smallest complete[2] script
demonstrating the behaviour.

Based on the information provided so far it is highly likely the issue
was caused by some detail you omitted because you decided it is not
relevant.

[1] as demonstrated, copy-pasting can alter content
[2] as discussed, the shebang line can make a big difference

Kind regards,
Andrei


Your first point makes it impossible for me to present anything because 
this list doesn't (AFAIK) allow attachments. I can only present a 
copy-pasted example.


Your second point would assume that I'm using a non-standard shebang. I 
use #!/bin/sh unless I need something that is only available in bash 
(can't recall a case of that). Moreover, the consensus seems to be that 
if there was an error on the shebang line, the default shell would be used.


The smallest complete script then is

report=”/root/clamscan-report”
echo $report

but as I explained, that works on one server but not the other. Hence my 
question wasn't "what's wrong with this line" but rather "how do I 
change the behaviour". I'm not expecting anyone else to be able to 
reproduce the problem because I can't even do it except on the one server.


And to do that, I didn't even need the shebang line. I could enter the 
line at the command prompt then echo it and see the same result. Since 
it wasn't the shebang line, and didn't even require being in a script, 
why should I have included the shebang line?
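If the quotes in the failing copy were curly, the symptom is exactly
reproducible, since dash only treats ASCII " as a quoting character;
curly quotes are ordinary data. A minimal demonstration (the quotes in
the second command are U+201D characters):

    $ dash -c 'report="/root/clamscan-report"; echo $report'
    /root/clamscan-report
    $ dash -c 'report=”/root/clamscan-report”; echo $report'
    ”/root/clamscan-report”
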




Re: ssh session times out annoyingly fast, why?

2020-09-22 Thread Gary Dale

On 2020-09-21 19:38, Britton Kerin wrote:

I'm using ssh from a debian box to a rasberry pi (sorta debian also :).

For some reason ssh sessions seem to time out pretty quickly.  I've
tried setting ClientAliveInterval and ClientAliveCountMax and also
ServerAliveInterval  and ServerAliveCountMax, but it doesn't seem to
make any difference.  Is there some other setting somewhere that
affects this?

Thanks,
Britton

My money is on a network issue. Lately my connection to a remote server 
seems to lock up quickly while I have a stable connection to a local 
server. Both servers are running Debian/Stable and I haven't fiddled 
with the ssh settings in a long time.
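For anyone tuning this, a minimal client-side keepalive sketch for
~/.ssh/config (values illustrative):

    Host *
        ServerAliveInterval 60    # probe the server every 60 seconds
        ServerAliveCountMax 3     # drop the session after 3 missed replies

The ClientAlive* equivalents go in the server's sshd_config. Note that a
NAT router with a short idle timeout can still kill the connection if the
probes are spaced wider than its timeout.
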




apt, update-initramfs - amdgpu firmware warning messages

2020-10-17 Thread Gary Dale
I'm running Bullseye on an AMD64 system with an older AMD HD7850 video 
card. I keep getting these messages from update-initramfs during my 
daily apt update && apt full-upgrade:


W: Possible missing firmware /lib/firmware/amdgpu/arcturus_gpu_info.bin 
for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_ta.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_asd.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_sos.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_rlc.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_mec2.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_mec.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_sdma.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/navi10_mes.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_vcn.bin for 
module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/arcturus_smc.bin for 
module amdgpu


Apparently this refers to a current or future AMD GPU and has nothing to 
do with my 8 year old card. I can't actually find any mention of 
Arcturus firmware in the Debian packages pages so it may be that the 
firmware hasn't made it into a package yet. It also looks to me like 
something recognizes that Arcturus GPUs exist but doesn't notice that my 
system doesn't have one or that the firmware may not exist.


I gather that update-initramfs is trying to throw everything in that may 
be needed should I change hardware, but am I expecting too much from it 
to not throw up spurious warnings like this?
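If the warnings bother you, one hedged workaround (assuming the Arcturus
files exist in the upstream linux-firmware tree, which Debian had not yet
packaged at the time) is to drop them in place manually:

    git clone --depth 1 \
      https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
    cp linux-firmware/amdgpu/arcturus_*.bin /lib/firmware/amdgpu/
    update-initramfs -u

The warnings themselves are harmless on hardware that never loads those
files; the initramfs hook simply lists every firmware blob the amdgpu
module *could* request.
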




ssh tunnelling testing

2020-12-07 Thread Gary Dale
I'm running Debian/Buster on various servers, including my home server. 
I'm trying to set up an ssh tunnel that I can use post-pandemic in case 
I need to access my home network remotely. I'm already doing this to 
various remote servers so I thought this should just work, since I can 
already access my home server locally using its 192.168... address 
(actually through the /etc/hosts file using the server's name).


I've set up port forwarding on both my routers (I have an inner network 
and an outer one, using the outer network for devices I don't really 
control). I can access my Apache2 server on the inner network by 
forwarding port 80 on the outer network to the WAN address of the inner 
router and forwarding that to my server. Pointing my browser to the 
external IP address of the outer router brings up the default page - 
which I can change so I know it's the actual local page.


However, when I try to ssh to the same address, it just times out.

I've compared the sshd.conf file on my local server to one on a remote 
server and they are identical. The only uncommented lines are:


PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?
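A minimal reachability test, run from a host outside both routers (or a
phone hotspot); EXTERNAL_IP is a placeholder for the outer router's
public address:

    nc -vz EXTERNAL_IP 22     # does anything answer on the port at all?
    ssh -v user@EXTERNAL_IP   # -v shows where the handshake stalls

If nc times out, the problem is in the forwarding chain or at the ISP,
not in sshd.
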



Re: ssh tunnelling testing

2020-12-07 Thread Gary Dale

On 2020-12-07 13:24, john doe wrote:

On 12/7/2020 6:38 PM, Gary Dale wrote:

I'm running Debian/Buster on various servers, including my home server.
I'm trying to set up an ssh tunnel that I can use post-pandemic in case
I need to access my home network remotely. I'm already doing this to
various remote servers so I thought this should just work, since I can
already access my home server locally using its 192.168... address


Is it a class C private IPv4 address?


I thought that was obvious.





(actually through the /etc/hosts file using the server's name).

I've set up port forwarding on both my routers (I have an inner network
and an outer one, using the outer network for devices I don't really
control). I can access my Apache2 server on the inner network by
forwarding port 80 on the outer network to the WAN address of the inner
router and forwarding that to my server. Pointing my browser to the
external IP address of the outer router brings up the default page -
which I can change so I know it's the actual local page. However,
when I try to ssh to the same address, it just times out.


I've compared the sshd.conf file on my local server to one on a remote
server and they are identical. The only uncommented lines are:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?


- This looks like your port forwarding is not working...
- What are the logs saying?
- Is the SSH server allowing access from the outside?


Note that it is unclear to me how you can test outside access from the
inside.

Your first point is what I am complaining about. The outer router 
doesn't have a log function and an ssh attempt never shows up on the 
inner router. As I explained in the initial post, I've set up the port 
forwarding to allow it and the sshd.conf file is identical to one that 
allows access from the outside.


I can test outside access from the inside by trying to connect to the 
external address. As with my browser example, the request goes to the 
device that has the particular IP address being sought. That is the 
external port on the outer router. I can also ssh to the external port 
on the inner router (which I can't think of a reason to do except for 
testing). Interestingly, this works but doesn't get logged.






Re: ssh tunnelling testing [solved]

2020-12-07 Thread Gary Dale

On 2020-12-07 14:03, john doe wrote:

On 12/7/2020 7:54 PM, Gary Dale wrote:

On 2020-12-07 13:24, john doe wrote:

On 12/7/2020 6:38 PM, Gary Dale wrote:





(actually through the /etc/hosts file using the server's name).

I've set up port forwarding on both my routers (I have an inner 
network

and an outer one, using the outer network for devices I don't really
control). I can access my Apache2 server on the inner network by
forwarding port 80 on the outer network to the WAN address of the 
inner

router and forwarding that to my server. Pointing my browser to the
external IP address of the outer router brings up the default page -
which I can change so I know it's the actual local page. However,
when I try to ssh to the same address, it just times out.

I've compared the sshd.conf file on my local server to one on a remote
server and they are identical. The only uncommented lines are:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?


- This looks like your port forwarding is not working...
- What are the logs saying?
- Is the SSH server allowing access from the outside?


Note that it is unclear to me how you can test outside access from the
inside.


Your first point is what I am complaining about. The outer router
doesn't have a log function and an ssh attempt never shows up on the
inner router. As I explained in the initial post, I've set up the port
forwarding to allow it and the sshd.conf file is identical to one that
allows access from the outside.

I can test outside access from the inside by trying to connect to the
external address. As with my browser example, the request goes to the
device that has the particular IP address being sought. That is the
external port on the outer router. I can also ssh to the external port
on the inner router (which I can't think of a reason to do except for
testing). Interestingly, this works but doesn't get logged.





Sorry, I'm lost at your setup; the only thing that I can say is that
something looks to be wrong with regard to your firewall config.



The thing is the forwarding setup is the same for port 22 as it is for 
port 80. I know that the port 80 forwarding is working so why isn't the 
port 22 forwarding?


I still don't know the answer to that one, but when I changed the 
external port to something else (on the outer router), it started 
working. Now I just have to remember to set the -p option in ssh to connect.
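For example, assuming the new external port were 2222 (hypothetical):

    ssh -p 2222 user@EXTERNAL_IP
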





Re: ssh tunnelling testing [solved]

2020-12-07 Thread Gary Dale

On 2020-12-07 14:23, john doe wrote:

On 12/7/2020 8:11 PM, Gary Dale wrote:

On 2020-12-07 14:03, john doe wrote:

On 12/7/2020 7:54 PM, Gary Dale wrote:

On 2020-12-07 13:24, john doe wrote:

On 12/7/2020 6:38 PM, Gary Dale wrote:





(actually through the /etc/hosts file using the server's name).

I've set up port forwarding on both my routers (I have an inner
network
and an outer one, using the outer network for devices I don't really
control). I can access my Apache2 server on the inner network by
forwarding port 80 on the outer network to the WAN address of the
inner
router and forwarding that to my server. Pointing my browser to the
external IP address of the outer router brings up the default page -
which I can change so I know it's the actual local page. However,
when I try to ssh to the same address, it just times out.

I've compared the sshd.conf file on my local server to one on a 
remote

server and they are identical. The only uncommented lines are:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?


- This looks like your port forwarding is not working...
- What are the logs saying?
- Is the SSH server allowing access from the outside?


Note that it is unclear to me how you can test outside access from 
the

inside.


Your first point is what I am complaining about. The outer router
doesn't have a log function and an ssh attempt never shows up on the
inner router. As I explained in the initial post, I've set up the port
forwarding to allow it and the sshd.conf file is identical to one that
allows access from the outside.

I can test outside access from the inside by trying to connect to the
external address. As with my browser example, the request goes to the
device that has the particular IP address being sought. That is the
external port on the outer router. I can also ssh to the external port
on the inner router (which I can't think of a reason to do except for
testing). Interestingly, this works but doesn't get logged.





Sorry, I'm lost at your setup; the only thing that I can say is that
something looks to be wrong with regard to your firewall config.



The thing is the forwarding setup is the same for port 22 as it is for
port 80. I know that the port 80 forwarding is working so why isn't the
port 22 forwarding?

I still don't know the answer to that one, but when I changed the
external port to something else (on the outer router), it started
working.


Something is wrong if it works that way.

You did not use the same rule for both port 80 and 22, if yes, this
would mean that port 22 and 80 are redirected to port 80, which is not
what you want.

In other words, you need one rule per redirect port.


I didn't say I used the same rule. I said the setup is the same. Any 
external traffic on that port is directed to the same port on the inner 
router. It's kind of difficult to get that wrong.


I suspect that my ISP is using port 22 for their own purposes but didn't 
bother excluding it in the router's programming.






Now I just have to remember to set the -p option in ssh to
connect.




To avoid the -p option:

$ cat ~/.ssh/config
Host sshserver
    HostName 
    Port 

$ ssh sshserver

I could, but it's not something I'm using often. If I forget, I'll be 
reminded when it fails to connect.




Re: ssh tunnelling testing

2020-12-07 Thread Gary Dale

On 2020-12-07 13:55, der.hans wrote:

Am 07. Dec, 2020 schwätzte Gary Dale so:

moin moin,

First off, try one or more -v to your ssh command to get more verbosity.

The -v will show you the step in building the connection that failed.

Also, try -G to see what configuration will be used without actually
opening a connection.

I'm running Debian/Buster on various servers, including my home 
server. I'm trying to set up an ssh tunnel that I can use 
post-pandemic in case I need to access my home network remotely. I'm 
already doing this to various remote servers so I thought this should 
just work, since I can already access my home server locally using 
its 192.168... address (actually through the /etc/hosts file using 
the server's name).


You can access it locally, so the ssh daemon is listening to the external
IP on your system rather than just localhost and basic authentication is
working.

Do you have a firewall on the ssh server? If so, does it allow ssh
connections from your internal router?

I've set up port forwarding on both my routers (I have an inner 
network and an outer one, using the outer network for devices I don't 
really control). I can access my Apache2 server on the inner network 
by forwarding port 80 on the outer network to the WAN address of the 
inner router and forwarding that to my server. Pointing my browser to 
the external IP address of the outer router brings up the default 
page - which I can change so I know it's the actual local page.


However, when I try to ssh to the same address, it just times out.


Internet <--> Outer Router <--> Inner Router <--> ssh/apache server

That's what you have?

You have port forwarding from 80 and 22 on the Outer Router going to the
Inner Router and from the Inner Router to your server?

Can you see the connection transverse your routers?

Also, if you have a reliable shell at a provider that allows incoming SSH
connections and SSH tunnels, you could set up an autossh connection to
that which builds a reverse tunnel to your internal server without
needing to open any firewall ports.

ciao,

der.hans

I've compared the sshd.conf file on my local server to one on a 
remote server and they are identical. The only uncommented lines are:


PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?


All good advice - just a little late. Thanks.
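A sketch of der.hans's reverse-tunnel suggestion, with hypothetical host
names; the home server keeps a tunnel open to a shell provider, and port
2222 on the provider then leads back to the home box:

    # On the home server (runs unattended, e.g. from a systemd unit):
    autossh -M 0 -N -R 2222:localhost:22 user@shellprovider.example

    # Later, from anywhere, hop through the provider into the tunnel:
    ssh -J user@shellprovider.example -p 2222 user@localhost

No inbound firewall ports need to be opened at home.
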




Can't print to CUPS printer on my server

2020-12-08 Thread Gary Dale
I'm running Debian/Bullseye on my workstation and Debian/Buster on my 
server. I have an old HP CP-1215 color laserjet attached to the server 
by a USB cable. I can print a CUPS test page from the server but not 
from my workstation. When I try to print anything from my workstation to 
that printer, I get "No suitable destination host found by cups-browsed".


I've deleted and re-added the printer on the server and rebooted my 
workstation but I still get the same problem.


The printer is using the foomatic drivers. CUPS reports that it is 
2.3.3op1 on my workstation and 2.2.10 on my server.




Re: Can't print to CUPS printer on my server

2020-12-09 Thread Gary Dale

On 2020-12-08 13:29, Brian wrote:

On Tue 08 Dec 2020 at 12:27:18 -0500, Gary Dale wrote:


I'm running Debian/Bullseye on my workstation and Debian/Buster on my
server. I have an old HP CP-1215 color laserjet attached to the server by a
USB cable. I can print a CUPS test page from the server but not from my

The server is not the problem if printing from it is successful. Please
do

   avahi-browse -art > log1


-bash: avahi-browse: command not found - for both regular user and root



on the server and post log here as an attachment. avahi-browse is in the
avahi-utils package. Also give 'lpstat -t'.

device for CP1215: hp:/usb/HP_Color_LaserJet_CP1215?serial=LJ090T7
device for EPSON_Stylus_Photo_R300: usb://EPSON/Stylus%20Photo%20R300
device for ML-1210: usb://Samsung/ML-1210
device for PDF: cups-pdf:/
device for Samsung_C410_Series: 
usb://Samsung/C410%20Series?serial=ZEVQB8GF3A00HFJ

CP1215 accepting requests since Tue 08 Dec 2020 10:46:27 AM EST
EPSON_Stylus_Photo_R300 accepting requests since Tue 28 Apr 2015 
05:49:05 PM EDT

ML-1210 accepting requests since Thu 12 Jul 2012 03:12:20 PM EDT
PDF accepting requests since Mon 16 May 2016 05:35:30 PM EDT
Samsung_C410_Series accepting requests since Thu 20 Aug 2020 04:44:57 PM EDT
printer CP1215 is idle.  enabled since Tue 08 Dec 2020 10:46:27 AM EST
printer EPSON_Stylus_Photo_R300 is idle.  enabled since Tue 28 Apr 2015 
05:49:05 PM EDT

printer ML-1210 is idle.  enabled since Thu 12 Jul 2012 03:12:20 PM EDT
printer PDF is idle.  enabled since Mon 16 May 2016 05:35:30 PM EDT
printer Samsung_C410_Series is idle.  enabled since Thu 20 Aug 2020 
04:44:57 PM EDT





workstation. When I try to print anything from my workstation to that
printer, I get "No suitable destination host found by cups-browsed".

This is a cups-browsed issue. Give 'lpstat -t' on the client.

scheduler is running
no system default destination
members of class ColourLaser:
    unknown
device for ColourLaser: ///dev/null
device for EPSON_Stylus_Photo_R300_TheLibrarian: 
implicitclass://EPSON_Stylus_Photo_R300_TheLibrarian/
device for EPSON_XP-820_Series: 
usb://EPSON/XP-820%20Series?serial=554638593032343867&interface=1
device for HP_Color_LaserJet_CP1215_TheLibrarian: 
implicitclass://HP_Color_LaserJet_CP1215_TheLibrarian/

device for PDF_TheLibrarian: implicitclass://PDF_TheLibrarian/
device for Samsung_C410_Series: 
dnssd://Samsung%20C410%20Series%20(SEC30CDA71CB48A)._printer._tcp.local/

device for Samsung_C410_Series_SEC30CDA71CB48A_: ///dev/null
device for Samsung_C410_Series_TheLibrarian: 
implicitclass://Samsung_C410_Series_TheLibrarian/
device for Samsung_ML_1210_TheLibrarian: 
implicitclass://Samsung_ML_1210_TheLibrarian/

ColourLaser accepting requests since Fri Dec 11 23:27:13 2015
EPSON_Stylus_Photo_R300_TheLibrarian accepting requests since Wed Dec  9 
00:00:30 2020

EPSON_XP-820_Series accepting requests since Tue Dec  8 11:50:18 2020
HP_Color_LaserJet_CP1215_TheLibrarian accepting requests since Wed Dec  
9 00:00:28 2020

PDF_TheLibrarian accepting requests since Wed Dec  9 00:00:29 2020
Samsung_C410_Series accepting requests since Fri Sep 11 17:44:23 2020
Samsung_C410_Series_SEC30CDA71CB48A_ not accepting requests since Fri 
Aug 21 00:00:10 2020 -

    reason unknown
Samsung_C410_Series_TheLibrarian accepting requests since Wed Dec  9 
00:00:28 2020
Samsung_ML_1210_TheLibrarian accepting requests since Wed Dec  9 
00:00:29 2020

printer ColourLaser is idle.  enabled since Fri Dec 11 23:27:13 2015
printer EPSON_Stylus_Photo_R300_TheLibrarian is idle.  enabled since Wed 
Dec  9 00:00:30 2020

printer EPSON_XP-820_Series is idle.  enabled since Tue Dec  8 11:50:18 2020
printer HP_Color_LaserJet_CP1215_TheLibrarian is idle.  enabled since 
Wed Dec  9 00:00:28 2020

printer PDF_TheLibrarian is idle.  enabled since Wed Dec  9 00:00:29 2020
printer Samsung_C410_Series is idle.  enabled since Fri Sep 11 17:44:23 2020
printer Samsung_C410_Series_SEC30CDA71CB48A_ disabled since Fri Aug 21 
00:00:10 2020 -

    reason unknown
printer Samsung_C410_Series_TheLibrarian is idle.  enabled since Wed 
Dec  9 00:00:28 2020
printer Samsung_ML_1210_TheLibrarian is idle.  enabled since Wed Dec  9 
00:00:29 2020



I've deleted and re-added the printer on the server and rebooted my
workstation but I still get the same problem.

The printer is using the foomatic drivers. CUPS reports that it is 2.3.3op1
on my workstation and 2.2.10 on my server.

Executing

avahi-browse -art > log2

on the client and sending log2 here could be useful.


-bash: avahi-browse: command not found - for both regular user and root.



Re: Can't print to CUPS printer on my server

2020-12-09 Thread Gary Dale

On 2020-12-08 16:19, Joe Pfeiffer wrote:

Gary Dale  writes:


I'm running Debian/Bullseye on my workstation and Debian/Buster on my server. I 
have an old HP CP-1215 color laserjet attached to the server by a USB cable. I 
can print a CUPS test
page from the server but not from my workstation. When I try to print anything from my 
workstation to that printer, I get "No suitable destination host found by 
cups-browsed."

I've deleted and re-added the printer on the server and rebooted my workstation 
but I still get the same problem.

The printer is using the foomatic drivers. CUPS reports that it is 2.3.3op1 on 
my workstation and 2.2.10 on my server.

Did you set the printer to be shared?


Yes



Re: running microsoft team on debian 10.3

2020-12-09 Thread Gary Dale

On 2020-12-08 22:37, Dan Hitt wrote:
One of the local government agencies that i would like to interact 
with communicates using Microsoft Teams.  The software actually has a 
debian package, which i have downloaded, but not installed yet.


I have a computer running debian 10.3, but it does not have a web cam 
or a mic.


So presumably i need to set up both of those items to make this work.

Does anybody have any experience using Microsoft Teams on debian, and 
is there anything i need to be cautious about (of course apart from 
running software from a giant software company)?


Any advice about the web cam or mic?

TIA for any pointers.

dan


Any of the Logitech cameras with an integrated mic should work fine. 
They are readily available, reliable and reasonably priced. I've been 
using them for years without problems.




Re: ssh tunnelling testing [solved]

2020-12-09 Thread Gary Dale

On 2020-12-07 16:02, Gary Dale wrote:

On 2020-12-07 14:23, john doe wrote:

On 12/7/2020 8:11 PM, Gary Dale wrote:

On 2020-12-07 14:03, john doe wrote:

On 12/7/2020 7:54 PM, Gary Dale wrote:

On 2020-12-07 13:24, john doe wrote:

On 12/7/2020 6:38 PM, Gary Dale wrote:





(actually through the /etc/hosts file using the server's name).

I've set up port forwarding on both my routers (I have an inner
network
and an outer one, using the outer network for devices I don't 
really

control). I can access my Apache2 server on the inner network by
forwarding port 80 on the outer network to the WAN address of the
inner
router and forwarding that to my server. Pointing my browser to the
external IP address of the outer router brings up the default 
page -

which I can change so I know it's the actual local page. However,
when I try to ssh to the same address, it just times out.

I've compared the sshd.conf file on my local server to one on a 
remote

server and they are identical. The only uncommented lines are:

PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem   sftp    /usr/lib/openssh/sftp-server


Any ideas on what's going wrong?


- This looks like your port forwarding is not working...
- What are the logs saying?
- Is the SSH server allowing access from the outside?


Note that it is unclear to me how you can test outside access 
from the

inside.


Your first point is what I am complaining about. The outer router
doesn't have a log function and an ssh attempt never shows up on the
inner router. As I explained in the initial post, I've set up the 
port
forwarding to allow it and the sshd.conf file is identical to one 
that

allows access from the outside.

I can test outside access from the inside by trying to connect to the
external address. As with my browser example, the request goes to the
device that has the particular IP address being sought. That is the
external port on the outer router. I can also ssh to the external 
port

on the inner router (which I can't think of a reason to do except for
testing). Interestingly, this works but doesn't get logged.





Sorry, I'm lost at your setup; the only thing that I can say is that
something looks to be wrong with regard to your firewall config.



The thing is the forwarding setup is the same for port 22 as it is for
port 80. I know that the port 80 forwarding is working so why isn't the
port 22 forwarding?

I still don't know the answer to that one, but when I changed the
external port to something else (on the outer router), it started
working.


Something is wrong if it works that way.

You did not use the same rule for both port 80 and 22, if yes, this
would mean that port 22 and 80 are redirected to port 80, which is not
what you want.

In other words, you need one rule per redirect port.


I didn't say I used the same rule. I said the setup is the same. Any 
external traffic on that port is directed to the same port on the 
inner router. It's kind of difficult to get that wrong.


I suspect that my ISP is using port 22 for their own purposes but 
didn't bother excluding it in the router's programming.






Now I just have to remember to set the -p option in ssh to
connect.




To avoid the -p option:

$ cat ~/.ssh/config
Host sshserver
    HostName 
    Port 

$ ssh sshserver

I could, but it's not something I'm using often. If I forget, I'll be 
reminded when it fails to connect.



My ISP has confirmed this seems to be a generic issue with that 
modem/router. While I suspect that a firmware update could fix it, I 
can't find any way to upgrade the firmware, which is another good reason 
for having an inner and outer network. I think it is utterly 
irresponsible for anyone to sell internet-connected hardware that can't 
have its firmware upgraded.




Re: Can't print to CUPS printer on my server

2020-12-09 Thread Gary Dale

On 2020-12-09 10:25, Brian wrote:

On Wed 09 Dec 2020 at 10:04:14 -0500, Gary Dale wrote:


On 2020-12-08 13:29, Brian wrote:

avahi-browse -art > log1

-bash: avahi-browse: command not found - for both regular user and root


on the server and post log here as an attachment. avahi-browse is in the
avahi-utils package.

How about installing avahi-utils? :)


I don't seem to need it for anything else. I'm leery of installing 
packages, especially on a server, that I don't use.


+br0 IPv6 THELIBRARIAN  Microsoft Windows 
Network local
+br0 IPv4 THELIBRARIAN  Microsoft Windows 
Network local
+br0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   Internet Printer
 local
+br0 IPv6 Samsung C410 Series @ TheLibrarianInternet Printer
 local
+br0 IPv6 PDF @ TheLibrarianInternet Printer
 local
+br0 IPv6 Samsung ML-1210 @ TheLibrarianInternet Printer
 local
+br0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianInternet Printer
 local
+br0 IPv4 HP Color LaserJet CP1215 @ TheLibrarian   Internet Printer
 local
+br0 IPv4 Samsung C410 Series @ TheLibrarianInternet Printer
 local
+br0 IPv4 PDF @ TheLibrarianInternet Printer
 local
+br0 IPv4 Samsung ML-1210 @ TheLibrarianInternet Printer
 local
+br0 IPv4 EPSON Stylus Photo R300 @ TheLibrarianInternet Printer
 local
+ enp5s0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   Internet Printer
 local
+ enp5s0 IPv6 Samsung C410 Series @ TheLibrarianInternet Printer
 local
+ enp5s0 IPv6 PDF @ TheLibrarianInternet Printer
 local
+ enp5s0 IPv6 Samsung ML-1210 @ TheLibrarianInternet Printer
 local
+ enp5s0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianInternet Printer
 local
+br0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   Secure Internet 
Printer local
+br0 IPv6 Samsung C410 Series @ TheLibrarianSecure Internet 
Printer local
+br0 IPv6 PDF @ TheLibrarianSecure Internet 
Printer local
+br0 IPv6 Samsung ML-1210 @ TheLibrarianSecure Internet 
Printer local
+br0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianSecure Internet 
Printer local
+br0 IPv4 HP Color LaserJet CP1215 @ TheLibrarian   Secure Internet 
Printer local
+br0 IPv4 Samsung C410 Series @ TheLibrarianSecure Internet 
Printer local
+br0 IPv4 PDF @ TheLibrarianSecure Internet 
Printer local
+br0 IPv4 Samsung ML-1210 @ TheLibrarianSecure Internet 
Printer local
+br0 IPv4 EPSON Stylus Photo R300 @ TheLibrarianSecure Internet 
Printer local
+ enp5s0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   Secure Internet 
Printer local
+ enp5s0 IPv6 Samsung C410 Series @ TheLibrarianSecure Internet 
Printer local
+ enp5s0 IPv6 PDF @ TheLibrarianSecure Internet 
Printer local
+ enp5s0 IPv6 Samsung ML-1210 @ TheLibrarianSecure Internet 
Printer local
+ enp5s0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianSecure Internet 
Printer local
+br0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianUNIX Printer
 local
+br0 IPv6 Samsung ML-1210 @ TheLibrarianUNIX Printer
 local
+br0 IPv6 PDF @ TheLibrarianUNIX Printer
 local
+br0 IPv6 Samsung C410 Series @ TheLibrarianUNIX Printer
 local
+br0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   UNIX Printer
 local
+br0 IPv4 EPSON Stylus Photo R300 @ TheLibrarianUNIX Printer
 local
+br0 IPv4 Samsung ML-1210 @ TheLibrarianUNIX Printer
 local
+br0 IPv4 PDF @ TheLibrarianUNIX Printer
 local
+br0 IPv4 Samsung C410 Series @ TheLibrarianUNIX Printer
 local
+br0 IPv4 HP Color LaserJet CP1215 @ TheLibrarian   UNIX Printer
 local
+ enp5s0 IPv6 EPSON Stylus Photo R300 @ TheLibrarianUNIX Printer
 local
+ enp5s0 IPv6 Samsung ML-1210 @ TheLibrarianUNIX Printer
 local
+ enp5s0 IPv6 PDF @ TheLibrarianUNIX Printer
 local
+ enp5s0 IPv6 Samsung C410 Series @ TheLibrarianUNIX Printer
 local
+ enp5s0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   UNIX Printer
 local
+br0 IPv4 RT-ACRH13-54A4 [88:d7:f6:a7:54:a4]Workstation 
 local
=br0 IPv4 RT-ACRH13-54A4 [88:d7:f6:a7:54:a4]Workstation 
 local
   hostname = [RT-ACRH13-54A4.local]
   address = [192.168.1.1]
   port = [9]
   txt = []
+br0

Re: Can't print to CUPS printer on my server

2020-12-11 Thread Gary Dale

On 2020-12-09 12:31, Brian wrote:

On Wed 09 Dec 2020 at 11:29:28 -0500, Gary Dale wrote:


On 2020-12-09 10:25, Brian wrote:

On Wed 09 Dec 2020 at 10:04:14 -0500, Gary Dale wrote:


On 2020-12-08 13:29, Brian wrote:

 avahi-browse -art > log1

-bash: avahi-browse: command not found - for both regular user and root


on the server and post log here as an attachment. avahi-browse is in the
avahi-utils package.

How about installing avahi-utils? :)

I don't seem to need it for anything else. I'm leery of installing packages,
especially on a server, that I don't use.

I bet you have cups-browsed installed on the server. It's completely
unneeded and doesn't do anything to enhance the printing system there.
But that is BTW. :)
Perhaps I shouldn't trust package maintainers to not install unnecessary 
stuff? However, doesn't it actually discover network printers? They seem 
to be more common these days, with both Wifi and wired versions. Not all 
my printers are plugged directly into the server.



= enp5s0 IPv6 HP Color LaserJet CP1215 @ TheLibrarian   Secure Internet 
Printer local
hostname = [TheLibrarian.local]
address = [fe80::feaa:14ff:fe9b:b835]
port = [631]
txt = ["printer-type=0x80901E" "printer-state=3" "Duplex=T" "Color=T" "TLS=1.2" "UUID=bd713eeb-c38d-39f4-40b6-a997738b33d1" "URF=DM3" 
"pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HP Color LaserJet CP1215)" "priority=0" "note=family room" 
"adminurl=https://TheLibrarian.local.:631/printers/CP1215"; "ty=HP Color LaserJet CP1215 Foomatic/foo2hp (recommended)" "rp=printers/CP1215" "qtotal=1" "txtvers=1"]

The printer is found at TheLibrarian.local and its resource path is
printers/CP1215, giving a URI of

   ipp://TheLibrarian.local:631/printers/CP1215

At present you are relying on cups-browsed on the client to discover,
auto-setup and manage the CP1215; it appears to be having an off day.
Let's manage the print queue ourselves. Execute

   lpadmin -p  -v  -E -m raw

The -p option can be anything you want, for example, cp1215.
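With the URI discovered above, the filled-in command would be something
like (run as root on the client):

    lpadmin -p cp1215 -v ipp://TheLibrarian.local:631/printers/CP1215 -E -m raw

-E (placed after -p) enables the queue and accepts jobs; -m raw passes
jobs through unfiltered, letting the server-side foomatic driver do the
rasterising.
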


cups-browsed seems to be having a bad week or two at least. It doesn't 
let me delete the erroneous path to the CP1215 in addition to not 
detecting the correct path.


I ran the lpadmin command and CP1215 is now showing in my list of 
printers on my workstation. However it still isn't printing.


After a second, I remembered that this is because I had to unplug the printer 
to set up a temporary powerline networking connection to my server 
yesterday (I'm finally getting around to replacing the last of my CAT-5 
cable with CAT-6 and having an issue with pulling a line to this 
particular room). This had nothing to do with my earlier issue however.


Anyway, hopefully a new CUPS update will get things working soon.

Thanks.



unable to connect via ftp to my sites

2021-01-11 Thread Gary Dale

I'm running Debian/Bullseye on an AMD64 machine.

I'm trying to update a site using FileZilla with the same settings I've 
been using but cannot get a connection. I've tried this on several sites 
with the same results. Here's the FileZilla dialogue of a session 
connect attempt:


Status:    Resolving address of 
Status:    Connecting to :21...
Status:    Connection established, waiting for welcome message...
Status:    Initializing TLS...
Status:    Verifying certificate...
Status:    TLS connection established.
Status:    Server does not support non-ASCII characters.
Status:    Logged in
Status:    Retrieving directory listing of "/"...
Command:    CWD /
Response:    250 OK. Current directory is /
Command:    PWD
Response:    257 "/" is your current location
Command:    TYPE I
Response:    200 TYPE is now 8-bit binary
Command:    PASV
Response:    227 Entering Passive Mode (,141,8).
Command:    MLSD
Response:    150 Accepted data connection
Error:    GnuTLS error -15: An unexpected TLS packet was received.
Error:    The data connection could not be established: ECONNABORTED - 
Connection aborted

Response:    226 72 matches total
Error:    Failed to retrieve directory listing

at which point the connection seems to be severed by FileZilla.

When I try a command line ftp session, I also find that I cannot do an 
"ls" after logging in.


However I can connect from my server which is running Debian/Buster. 
Something seems to be going wrong with GnuTLS once the connection is 
established on Bullseye. This is a new behaviour as it wasn't doing this 
last week.




FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-12 Thread Gary Dale

I'm running Debian/Bullseye on an AMD64 machine.

I'm trying to update a site using FileZilla with the same settings I've 
been using but cannot get a connection. I've tried this on several sites 
with the same results. Here's the FileZilla dialogue of a session 
connect attempt:


Status:    Resolving address of 
Status:    Connecting to :21...
Status:    Connection established, waiting for welcome message...
Status:    Initializing TLS...
Status:    Verifying certificate...
Status:    TLS connection established.
Status:    Server does not support non-ASCII characters.
Status:    Logged in
Status:    Retrieving directory listing of "/"...
Command:    CWD /
Response:    250 OK. Current directory is /
Command:    PWD
Response:    257 "/" is your current location
Command:    TYPE I
Response:    200 TYPE is now 8-bit binary
Command:    PASV
Response:    227 Entering Passive Mode (,141,8).
Command:    MLSD
Response:    150 Accepted data connection
Error:    GnuTLS error -15: An unexpected TLS packet was received.
Error:    The data connection could not be established: ECONNABORTED - 
Connection aborted

Response:    226 72 matches total
Error:    Failed to retrieve directory listing

at which point the connection seems to be severed by FileZilla.

When I try a command line ftp session, I also find that I cannot do an 
"ls" after logging in.


However I can connect from my server which is running Debian/Buster. 
Something seems to be going wrong with GnuTLS once the connection is 
established on Bullseye. This is a new behaviour as it wasn't doing this 
last week.





Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-13 Thread Gary Dale

On 2021-01-12 22:53, Philip Wyett wrote:

On Tue, 2021-01-12 at 21:27 -0500, Gary Dale wrote:

I'm running Debian/Bullseye on an AMD64 machine.

I'm trying to update a site using FileZilla with the same settings
I've
been using but cannot get a connection. I've tried this on several
sites
with the same results. Here's the FileZilla dialogue of a session
connect attempt:

Status:Resolving address of 
Status:Connecting to :21...
Status:Connection established, waiting for welcome message...
Status:Initializing TLS...
Status:Verifying certificate...
Status:TLS connection established.
Status:Server does not support non-ASCII characters.
Status:Logged in
Status:Retrieving directory listing of "/"...
Command:CWD /
Response:250 OK. Current directory is /
Command:PWD
Response:257 "/" is your current location
Command:TYPE I
Response:200 TYPE is now 8-bit binary
Command:PASV
Response:227 Entering Passive Mode (,141,8).
Command:MLSD
Response:150 Accepted data connection
Error:GnuTLS error -15: An unexpected TLS packet was received.
Error:The data connection could not be established: ECONNABORTED
-
Connection aborted
Response:226 72 matches total
Error:Failed to retrieve directory listing

at which point the connection seems to be severed by FileZilla.

When I try a command line ftp session, I also find that I cannot do
an
"ls" after logging in.

However I can connect from my server which is running Debian/Buster.
Something seems to be going wrong with GnuTLS once the connection is
established on Bullseye. This is a new behaviour as it wasn't doing
this
last week.



Hi Gary,

I can confirm this issue.

Please file a bug report against filezilla and it will be looked into
by myself once 3.52.0.5 has transitioned into unstable (imminent).

Regards

Phil

I don't think it is just a FileZilla problem as it also seems to crop up 
with the command-line ftp program.




Can't connect to workstation using ssh from a remote machine

2021-01-13 Thread Gary Dale
I'm running Bullseye on an AMD64 system on my workstation and Buster on 
an AMD64 system on my server.


I'm trying to establish an ssh connection from my server to my 
workstation to facilitate a nightly pull backup of /home run by the 
server. However the ssh request times out.


I can connect via ssh to my workstation locally, just not from other 
machines (i.e. ssh  works when run on  but not 
when run on a different computer).


Debugging output isn't any help. It shows the connection being attempted 
then eventually timing out.


Any ideas on what is going on? I've noted that Bullseye seems to have a 
problem with GnuTLS right now (for ftp operations). Could that be related?




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-13 Thread Gary Dale

On 2021-01-13 14:54, Eike Lantzsch wrote:

On Wednesday, 13 January 2021 16:42:17 -03 Gary Dale wrote:

On 2021-01-12 22:53, Philip Wyett wrote:

On Tue, 2021-01-12 at 21:27 -0500, Gary Dale wrote:

I'm running Debian/Bullseye on an AMD64 machine.

I'm trying to update a site using FileZilla with the same settings
I've
been using but cannot get a connection. I've tried this on several
sites
with the same results. Here's the FileZilla dialogue of a session
connect attempt:

Status:Resolving address of 
Status:Connecting to :21...
Status:Connection established, waiting for welcome message...
Status:Initializing TLS...
Status:Verifying certificate...
Status:TLS connection established.
Status:Server does not support non-ASCII characters.
Status:Logged in
Status:Retrieving directory listing of "/"...
Command:CWD /
Response:250 OK. Current directory is /
Command:PWD
Response:257 "/" is your current location
Command:TYPE I
Response:200 TYPE is now 8-bit binary
Command:PASV
Response:227 Entering Passive Mode (,141,8).
Command:MLSD
Response:150 Accepted data connection
Error:GnuTLS error -15: An unexpected TLS packet was received.
Error:The data connection could not be established:
ECONNABORTED
-
Connection aborted
Response:226 72 matches total
Error:Failed to retrieve directory listing

at which point the connection seems to be severed by FileZilla.

When I try a command line ftp session, I also find that I cannot do
an
"ls" after logging in.

However I can connect from my server which is running
Debian/Buster.
Something seems to be going wrong with GnuTLS once the connection
is
established on Bullseye. This is a new behaviour as it wasn't doing
this
last week.

Hi Gary,

I can confirm this issue.

Please file a bug report against filezilla and it will be looked
into
by myself once 3.52.0.5 has transitioned into unstable (imminent).

Regards

Phil

I don't think it is just a FileZilla problem as it also seems to crop
up with the command-line ftp program.

You might also try lftp. But since it seems to be a TLS problem the
result might be the same.
Does TLS work when you download mail with your mail-client?

Kind regards
Eike
--
Eike Lantzsch ZP6CGE


I already did the ftp command, as noted in the initial e-mail. Ftp 
connects but can't get a remote directory listing, without which it 
can't seem to transfer files. Things work with Buster but not with Bullseye.


I'm not having any problems with Thunderbird (my e-mail client) with 
accounts connecting through SSL/TLS and StartTLS.




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-13 Thread Gary Dale

On 2021-01-13 15:59, Eike Lantzsch wrote:

On Wednesday, 13 January 2021 17:33:12 -03 Gary Dale wrote:

On 2021-01-13 14:54, Eike Lantzsch wrote:

On Wednesday, 13 January 2021 16:42:17 -03 Gary Dale wrote:

On 2021-01-12 22:53, Philip Wyett wrote:

On Tue, 2021-01-12 at 21:27 -0500, Gary Dale wrote:

I'm running Debian/Bullseye on an AMD64 machine.

I'm trying to update a site using FileZilla with the same
settings
I've
been using but cannot get a connection. I've tried this on
several
sites
with the same results. Here's the FileZilla dialogue of a session
connect attempt:

Status:Resolving address of 
Status:Connecting to :21...
Status:Connection established, waiting for welcome message...
Status:Initializing TLS...
Status:Verifying certificate...
Status:TLS connection established.
Status:Server does not support non-ASCII characters.
Status:Logged in
Status:Retrieving directory listing of "/"...
Command:CWD /
Response:250 OK. Current directory is /
Command:PWD
Response:257 "/" is your current location
Command:TYPE I
Response:200 TYPE is now 8-bit binary
Command:PASV
Response:227 Entering Passive Mode (,141,8).
Command:MLSD
Response:150 Accepted data connection
Error:GnuTLS error -15: An unexpected TLS packet was
received.
Error:The data connection could not be established:
ECONNABORTED
-
Connection aborted
Response:226 72 matches total
Error:Failed to retrieve directory listing

at which point the connection seems to be severed by FileZilla.

When I try a command line ftp session, I also find that I cannot
do
an
"ls" after logging in.

However I can connect from my server which is running
Debian/Buster.
Something seems to be going wrong with GnuTLS once the connection
is
established on Bullseye. This is a new behaviour as it wasn't
doing
this
last week.

Hi Gary,

I can confirm this issue.

Please file a bug report against filezilla and it will be looked
into
by myself once 3.52.0.5 has transitioned into unstable (imminent).

Regards

Phil

I don't think it is just a FileZilla problem as it also seems to
crop
up with the command-line ftp program.

You might also try lftp. But since it seems to be a TLS problem the
result might be the same.
Does TLS work when you download mail with your mail-client?

Kind regards
Eike
--
Eike Lantzsch ZP6CGE

I already did the ftp command, as noted in the initial e-mail.

No, I don't think you did what I recommended. I wrote lftp.
ELL-EFF-TEE-PEE
That is a totally different program and far more potent than ftp.

Sorry, old eyes and a 4k monitor. I installed lftp and got the same 
problem I had with ftp. I can log in then when I try to "ls", I get an 
"unexpected TLS packet was received" error.





Re: Can't connect to workstation using ssh from a remote machine

2021-01-13 Thread Gary Dale

On 2021-01-13 15:48, Dan Ritter wrote:

Gary Dale wrote:

I'm running Bullseye on an AMD64 system on my workstation and Buster on an
AMD64 system on my server.

I'm trying to establish an ssh connection from my server to my workstation
to facilitate a nightly pull backup of /home run by the server. However the
ssh request times out.

I can connect via ssh to my workstation locally, just not from other
machines (i.e. ssh  works when run on  but not
when run on a different computer).

Debugging output isn't any help. It shows the connection being attempted
then eventually timing out.

Assuming their names are workstation and server, give us the
output of the following. Use ctrl-c to cancel things as
necessary.

From workstation:
$ ping -c3 server
$ telnet server 22

From server:
$ ping -c3 workstation
$ telnet workstation 22

-dsr-

I can get to the server from workstation using ssh. I just can't do it 
in the other direction. On server, I can ping workstation but:

$ telnet workstation  22
Trying 192.168.1.20...
telnet: Unable to connect to remote host: Connection timed out

Whereas the other way, I get a protocol mismatch error.



Re: Can't connect to workstation using ssh from a remote machine

2021-01-13 Thread Gary Dale

On 2021-01-13 19:03, Charles Curley wrote:

On Wed, 13 Jan 2021 15:27:07 -0500
Gary Dale  wrote:


I can connect via ssh to my workstation locally, just not from other
machines *i.e. ssh  works when run on  but
not when run on a different computer).

Firewall? Try "tail -f /var/log/syslog" on workstation while trying
from another machine.

No errors showing up related to a connection attempt, although I do get 
a lot of auth errors on  from people trying to break in but 
failing to provide the proper certificate.




Re: Can't connect to workstation using ssh from a remote machine

2021-01-13 Thread Gary Dale

On 2021-01-13 20:20, Dan Ritter wrote:

Gary Dale wrote:

On 2021-01-13 15:48, Dan Ritter wrote:

From server:
$ ping -c3 workstation
$ telnet workstation 22

-dsr-


I can get to the server from workstation using ssh. I just can't do it in
the other direction. On server, I can ping workstation but:
$ telnet workstation  22
Trying 192.168.1.20...
telnet: Unable to connect to remote host: Connection timed out

Options:

. workstation is not running sshd
 ps auwx|grep ssh

It's running


. workstation is not running sshd on port 22
 ss -tlnp|grep 22

On port 22
 
. workstation's DNS is wrong/ that's not the right IP

. firewall or other packet filtering on one or the other or in
   between

-dsr-
The name resolution is correct. Server has a static IP while Workstation 
has a DHCP reservation - just redid it a week ago after installing a new 
MB. There are two network switches between them (the router that 
handles dhcp is plugged into one of the switches as well).




Re: Can't connect to workstation using ssh from a remote machine

2021-01-14 Thread Gary Dale

On 2021-01-14 07:55, Greg Wooledge wrote:

On Wed, Jan 13, 2021 at 08:20:30PM -0500, Dan Ritter wrote:

Gary Dale wrote:

$ telnet workstation  22
Trying 192.168.1.20...
telnet: Unable to connect to remote host: Connection timed out

Options:
. workstation is not running sshd

No.  That would give you "connection refused" immediately, not a timeout.


. workstation is not running sshd on port 22

Same.


. workstation's DNS is wrong/ that's not the right IP
. firewall or other packet filtering on one or the other or in
   between

Those two are possible.

In the *general* case (not here, because there's additional information
here), you also have:

  * Machine is powered off/crashed.
  * Machine's network cable is unplugged or loose.
  * A router is down/malfunctioning between you and the machine.
  * Machine's networking is mis-configured.

These are ruled out in this particular case because the OP is typing
other commands on the target machine, including commands that invoke
network connections in the opposite direction.

IMHO the most likely scenario here is "firewall".

The strongest indicator of a firewall being the cause of the problem is
that you can ping a given IP address (or telnet to port X on that
IP address), but you cannot telnet to port Y on that same IP address.

If the telnet failure is a timeout rather than a connection refused,
while other network connections to the same IP work, then it's 100%
a firewall issue.  Everything else (crashed service, etc.) would give
you a connection refused.
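
A quick way to see the difference in practice, assuming netcat is 
installed (the hostname and the closed port are placeholders):

    $ nc -v -z -w 5 workstation 22      # filtered by a firewall: hangs, then times out
    $ nc -v -z -w 5 workstation 2222    # nothing listening, nothing filtering: refused at once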



You are right. Somehow ufw was running on my workstation. I don't 
remember adding it, but it was blocking all access except for some 
ports in the 1700 range. Allowing ssh seems to have fixed the problem.
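
For anyone else who discovers an unexpected ufw, the check and the fix 
are short (run as root; "allow ssh" is equivalent to "allow 22/tcp"):

    # ufw status verbose
    # ufw allow ssh
    # ufw reload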




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-28 Thread Gary Dale

On 2021-01-20 10:44, songbird wrote:

Gary Dale wrote:
...

   the problem is still there with the recent version of Filezilla
that showed up in testing (3.52.0.5-1).


   i see there is a bug filed via GNUTLS:

   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=980119


   not sure what progress is actually being made.


   songbird

I note that the problem affects all ftp clients I've tried under 
testing but not the ones in the stable release. It's been weeks since I 
first reported this issue yet it still remains.




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-01-28 Thread Gary Dale



On 2021-01-28 11:03, David Wright wrote:

On Thu 28 Jan 2021 at 10:17:50 (-0500), Gary Dale wrote:

On 2021-01-20 10:44, songbird wrote:

Gary Dale wrote:
...
the problem is still there with the recent version of Filezilla
that showed up in testing (3.52.0.5-1).

i see there is a bug filed via GNUTLS:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=980119

not sure what progress is actually being made.


I note that the problem affects all ftp clients I've tried under
testing but not the ones in the stable release. It's been weeks since
I first reported this issue yet it still remains.

As far as I can tell, the timeline is:

2021-01-12 your original report
2021-01-14 opened BTS #980119
back and forth replication
2021-01-20 verbose debug information posted
2021-01-23 forwarded to https://gitlab.com/gnutls/gnutls/-/issues/1152

Would that be reasonable for a Severity: Normal bug in testing?

Cheers,
David.


Given that it renders ftp unusable, I'd rate it as important, not normal.



Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-10 Thread Gary Dale

On 2021-01-28 11:03, David Wright wrote:

On Thu 28 Jan 2021 at 10:17:50 (-0500), Gary Dale wrote:

On 2021-01-20 10:44, songbird wrote:

Gary Dale wrote:
...
the problem is still there with the recent version of Filezilla
that showed up in testing (3.52.0.5-1).

i see there is a bug filed via GNUTLS:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=980119

not sure what progress is actually being made.


I note that the problem affects all ftp clients I've tried under
testing but not the ones in the stable release. It's been weeks since
I first reported this issue yet it still remains.

As far as I can tell, the timeline is:

2021-01-12 your original report
2021-01-14 opened BTS #980119
back and forth replication
2021-01-20 verbose debug information posted
2021-01-23 forwarded to https://gitlab.com/gnutls/gnutls/-/issues/1152

Would that be reasonable for a Severity: Normal bug in testing?

Cheers,
David.


Time passes and the bug is still in Debian/Testing.




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-12 Thread Gary Dale

On 2021-02-12 09:12, songbird wrote:

Gary Dale wrote:
...

Time passes and the bug is still in Debian/Testing.


   please look at the end of:


   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=980119


   it looks like things are happening, but how quickly those
changes are applied and uploads happen and are approved may
take some time yet.

   like you i was kinda wondering if any fix at all was
happening or if anyone was even looking into it.

   i sure don't have the skills or expertise in either
filezilla or gnutls to track something like this down.  :(
all i can be is appreciative for those who do and say
thank you!  :)


   songbird


I appreciate the people doing this, but this is a serious issue. I have 
to resort to firing up a VM or dropping to the command line on my local 
server to update my web sites because I can't do it from Testing. I see 
it also impacts other programs that I (fortunately) don't use as much.


When faced with a major bug, shouldn't there be a procedure to pull back 
the testing version - like restoring the previous version with a 
bumped-up version number while working on the known buggy version in 
experimental (no need to punish people using SID)?




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-12 Thread Gary Dale

On 2021-02-12 14:12, Frank wrote:

Op 12-02-2021 om 18:19 schreef Gary Dale:

I appreciate the people doing this, but this is a serious issue. I have
to resort to firing up a VM or resorting to the command line on my local
server to update my web sites because I can't do it from Testing.

What file manager do you use?

I stopped using FileZilla for ftps years ago and only use MATE's caja
these days. Hasn't stopped working and I keep my (bullseye) system
up-to-date, so whatever TLS library caja is using, this bug doesn't
affect it.

Regards,
Frank

I can do the same with Dolphin but I find it clumsy. FileZilla is made 
to let you transfer files between local and remote directories.




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-12 Thread Gary Dale

On 2021-02-12 14:15, Paul Scott wrote:


On 2/12/21 12:12 PM, Frank wrote:

Op 12-02-2021 om 18:19 schreef Gary Dale:

I appreciate the people doing this, but this is a serious issue. I have
to resort to firing up a VM or resorting to the command line on my 
local

server to update my web sites because I can't do it from Testing.

What file manager do you use?

I stopped using FileZilla for ftps years ago and only use MATE's caja
these days. Hasn't stopped working and I keep my (bullseye) system
up-to-date, so whatever TLS library caja is using, this bug doesn't
affect it.



gFtp seems to fail also.  Do you know what works for Gnome on sid?

Thank you,

Paul

I didn't even know it was still being developed. I used it briefly after 
kBear was dropped, but found that it didn't seem to keep up with the 
state of the protocols. It stopped working for me when the various hosts 
I use improved their security. FileZilla handles more of the wrinkles...




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-12 Thread Gary Dale

On 2021-02-12 16:10, songbird wrote:

Gary Dale wrote:
...

I appreciate the people doing this, but this is a serious issue. I have
to resort to firing up a VM or resorting to the command line on my local
server to update my web sites because I can't do it from Testing. I see
it also impacts other programs that I (fortunately) don't use as much.

   i know it is frustrating to hit something like this when you
are trying to just get a website updated.  all i can say is that
if you are running testing you are taking this sort of happening
as a risk and if you do not want that risk then you should step
back to stable instead.  especially if you are doing this for
something that might be time critical or a production issue.

   i keep a stable partition for this reason and while i rarely
have needed it in the past this year i've had to use it twice.
once for the FileZilla issue as you are facing and another for
a program which hasn't been converted to python3 yet (and for
all i know it may not ever be).


I keep a VM for the same reason - I set it up several years ago after a 
Ghostscript issue caused a lot of pain for me in both Stable and 
Testing. I set the VM up for what was then OldStable as a workaround. I 
also have a laptop running Stable. In 20 years of running Debian, I've 
only encountered 2 issues that weren't fixed fairly quickly.


However, it's all a little clumsy. My main workstation is set up the way 
I like and I'm familiar with using it. The other options I rarely have 
to use.  I often find it easier to ssh to my (stable) server and use the 
command line for file transfers.






When faced with a major bug, shouldn't there be a procedure to pull back
the testing version - like restoring the previous version with a
bumped-up version number while working on the known buggy version in
experimental (no need to punish people using SID)?

   it didn't affect enough people for it to be noticed before
the affected packages went from sid to testing.  that's the
problem when you get particular older packages that only a
few people use once in a while.  it would have been nice to
have caught it in sid before testing, but, well...

Which is why I think it would be useful to have a way to roll back a 
package when you can't fix it quickly. That way you aren't asking all 
the users to do it themselves and track the bug status individually. 
When the maintainers think they have a fix, it can go through the normal 
process...


I don't mind testing things and reporting issues, but I also don't like 
having my workflow disrupted for an extended period.


I admit I don't know what issues were behind the rollout of the current 
"testing" version of GnuTLS but it breaks a lot of programs, including 
(apparently) wget.




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-16 Thread Gary Dale

On 2021-02-13 03:02, Andrei POPESCU wrote:

On Vi, 12 feb 21, 17:00:41, Gary Dale wrote:

Which is why I think it would be useful to have a way to roll back a package
when you can't fix it quickly. That way you aren't asking all the users to
do it themselves and track the bug status individually. When the maintainers
think they have a fix, it can go through the normal process...

Debian doesn't support downgrading of packages.

When dpkg installs another version of a package (typically newer) it
basically overwrites the existing version and runs the corresponding
package scripts from the to be installed version.

A newer package may introduce changes that the older package (scripts)
can't deal with. In practice it does work in many cases, except for
those where it doesn't. Fixing them would require a time machine ;)

A roll-back, especially if automatic, could introduce more issues than
it fixes.

Someone(tm) has to determine on a case by case basis whether rolling
back makes sense and the system administrator is in the best position to
do so.

In theory the package Maintainer could provide a general "hint" that
system administrators could chose to ignore (at their own risk).

Currently the infrastructure for this doesn't exist[1] and, besides, I'd
rather have Maintainers focus on fixing the newer package instead.


 Volunteer time is precious!


[1] it would need support in the Debian archive software and APT at a
minimum.

Besides, there is already an arguably safer (though hackish) way to
achieve that by uploading a package with version+really.the.old.version
instead.

In this case the Maintainer can also take care to adjust the package
scripts accordingly.

Random example found on my system:

$ rmadison fonts-font-awesome
fonts-font-awesome | 4.2.0~dfsg-1                      | oldoldstable     | source, all
fonts-font-awesome | 4.7.0~dfsg-1                      | oldstable        | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-1         | stable           | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4~bpo10+1 | buster-backports | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | testing          | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | unstable         | source, all


Kind regards,
Andrei


I hear you, but the issue is that if I revert to a previous version, 
then I have to hold it to stop the buggy version from clobbering it 
every day. And I have to monitor the Testing version for changes to see 
when a fix is potentially available so I can remove the hold.


Not just me but every user who is experiencing the bug also has to do this.

There is a kludge for this if the buggy version didn't contain critical 
security fixes - re-release the previous version with a slightly higher 
version number than the buggy one (e.g. 3.7.0-5a). When the bug is 
(finally) fixed, give the fixed version a slightly higher number still 
(e.g. 3.7.0-5b).


Again this would only be done where it appears that fixing the bug may 
take time (it's been over a month now). If I were to do the alternative 
- pull packages from Sid - I have no real indication if they fix it or 
introduce even worse problems.


I can only assume that the reason a fix hasn't made it down through Sid 
yet is that it's not simple. My suggestion isn't to make more work for 
maintainers but rather to take the time pressure off them without 
leaving us testers to jump through hoops.
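
For reference, the hold itself is a one-liner; the chore is remembering 
to lift it. A sketch, with illustrative package names:

    # apt-mark hold filezilla libgnutls30
    # apt-mark showhold                       # list what is currently held
    # apt-mark unhold filezilla libgnutls30   # once a fixed version lands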





php-pear etc. on an Apache2 server

2021-02-16 Thread Gary Dale
I'm running Buster on my local server which, among other things, I use 
for developing web sites before copying them onto a public host. I've 
recently been getting into a little php coding because there seems to be a 
lot of sample code out there for things I want to do. Usually the samples 
work right away when I try running them after having installed 
libapache2-mod-php.


Right now I'm trying to get a script working that is actually fairly 
small and seems straightforward. It displays a form that allows you to 
send an e-mail with an attachment. I actually have some similar scripts 
working locally that do pretty much the same thing, but I'm trying to 
get this one to work because the host I use for one site that needs this 
type of form has broken the script I had been using (I also didn't like 
it because it seemed overly complicated and under-featured).


This script uses the Pear libraries to handle the heavy lifting, which 
seems like a reasonable design decision. I installed the php-pear 
package and also php-mail-mime. Unfortunately, the script failed to 
work. It was uploading the file but failing to send the e-mail.


I was able to find the line that was failing but it puzzles me. The line is

        $message = new Mail_mime();

which should be working. I ran the (frequently recommended) php-info 
page (nothing about mime). I'd expect that is something php handles, though.


I got the script to send an e-mail by removing all the mime parts and 
just using the php mail() function. However that's not really useful. I 
need the mime bits to add the attachment.


Anyway, it looks like I need to do something more (besides restarting 
Apache2) to get php to use the php-mail-mime library but I'm not sure 
what. All the Debian information I've found just says "install the 
package php- and it will work". Can anyone help me here?




Re: networking.service fails

2021-02-17 Thread Gary Dale

On 2021-02-16 19:44, Dmitry Katsubo wrote:

Dear Debian community,

I am puzzled with the following problem. When my Debian 10.8 starts, the unit 
"networking.service" is
marked as failed with the following reason:

root@debian:~ # systemctl status networking.service
*— networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor 
preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-02-16 08:56:16 CET; 5h 
27min ago
  Docs: man:interfaces(5)
   Process: 691 ExecStart=/sbin/ifup -a --read-environment (code=exited, 
status=1/FAILURE)
  Main PID: 691 (code=exited, status=1/FAILURE)

however network is working just fine. I looked into 
/usr/lib/systemd/system/networking.service where

TimeoutStartSec=5min

and also set a big timeout for br0 in /etc/network/interfaces:

auto lo
auto eth0
auto eth1
iface lo inet loopback
auto br0
iface br0 inet static
 ...
 bridge_ports eth0 eth1
 bridge_maxwait 60

but still the error occurs each time. Relative dmesg logs are:

Feb 16 08:56:16 debian systemd[1]: Starting Raise network interfaces...
Feb 16 08:56:16 debian ifup[691]: ifup: unknown interface eth0
Feb 16 08:56:16 debian ifup[691]: ifup: unknown interface eth1
Feb 16 08:56:16 debian ifup[691]: Waiting for br0 to get ready (MAXWAIT is 60 
seconds).
Feb 16 08:56:16 debian systemd[1]: networking.service: Main process exited, 
code=exited, status=1/FAILURE
Feb 16 08:56:16 debian systemd[1]: networking.service: Failed with result 
'exit-code'.
Feb 16 08:56:16 debian systemd[1]: Failed to start Raise network interfaces.
Feb 16 08:56:16.113716 debian systemd-udevd[387]: Using default interface 
naming scheme 'v240'.
Feb 16 08:56:16.113796 debian systemd-udevd[387]: link_config: autonegotiation 
is unset or enabled, the speed and duplex are not writable.
Feb 16 08:56:16.113851 debian systemd-udevd[387]: Could not generate persistent 
MAC address for br0: No such file or directory
Feb 16 08:56:16.115750 debian kernel: bridge: filtering via arp/ip/ip6tables is 
no longer available by default. Update your scripts to load br_netfilter if you 
need this
Feb 16 08:56:16.115828 debian kernel: br0: port 1(eth0) entered blocking state
Feb 16 08:56:16.115875 debian kernel: br0: port 1(eth0) entered disabled state
Feb 16 08:56:16.115929 debian kernel: device eth0 entered promiscuous mode
Feb 16 08:56:16.119800 debian kernel: r8169 :02:00.0: firmware: 
direct-loading firmware rtl_nic/rtl8168g-2.fw
Feb 16 08:56:16.120198 debian kernel: Generic PHY r8169-200:00: attached PHY 
driver [Generic PHY] (mii_bus:phy_addr=r8169-200:00, irq=IGNORE)
Feb 16 08:56:16.251795 debian kernel: br0: port 2(eth1) entered blocking state
Feb 16 08:56:16.251990 debian kernel: br0: port 2(eth1) entered disabled state
Feb 16 08:56:16.391879 debian kernel: br0: port 1(eth0) entered blocking state
Feb 16 08:56:16.391913 debian kernel: br0: port 1(eth0) entered forwarding state
Feb 16 08:56:16.516862 debian systemd[1]: Starting Hostname Service...
Feb 16 08:56:16.539520 debian systemd[1]: networking.service: Main process 
exited, code=exited, status=1/FAILURE
Feb 16 08:56:16.539612 debian systemd[1]: networking.service: Failed with 
result 'exit-code'.
Feb 16 08:56:16.539994 debian systemd[1]: Failed to start Raise network 
interfaces.
Feb 16 08:56:16.671750 debian kernel: br0: port 3(wlan0) entered blocking state
Feb 16 08:56:16.671808 debian kernel: br0: port 3(wlan0) entered disabled state
Feb 16 08:56:16.671844 debian kernel: device wlan0 entered promiscuous mode
Feb 16 08:56:16.671878 debian kernel: br0: port 3(wlan0) entered blocking state
Feb 16 08:56:16.671912 debian kernel: br0: port 3(wlan0) entered forwarding 
state
Feb 16 08:56:16.683579 debian hostapd[879]: wlan0: interface state 
UNINITIALIZED->ENABLED
Feb 16 08:56:16.683579 debian hostapd[879]: wlan0: AP-ENABLED

Any ideas where can I take a look? Thanks in advance!



Debian/Buster is still using Network Manager, not systemd, to control the 
network, so I think networking.service shouldn't be used.


I don't know how many interfaces you have but you seem to be using a 
bridge to at least one of them. The network bridge would bring up the 
physical interface. The physical interface should be listed in 
interfaces as something like:


    iface eth0 inet manual

The listing of eth0 and eth1 (and even br0) may be obsolete. Try doing 
ifconfig as root to see what interfaces are actually active (since you 
say the network is running). These days they are frequently something 
like enp4s0.
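
To illustrate, a bridge stanza that ifupdown is happy with looks roughly 
like this (addresses are placeholders). The ports get "manual" stanzas 
and no "auto" lines of their own, since the bridge raises them:

    auto br0
    iface br0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge_ports eth0 eth1
        bridge_maxwait 60

    iface eth0 inet manual
    iface eth1 inet manual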





Re: networking.service fails

2021-02-17 Thread Gary Dale

On 2021-02-17 08:28, Andrei POPESCU wrote:

On Mi, 17 feb 21, 00:01:01, Gary Dale wrote:

On 2021-02-16 19:44, Dmitry Katsubo wrote:

Dear Debian community,

I am puzzled with the following problem. When my Debian 10.8 starts, the unit 
"networking.service" is
marked as failed with the following reason:

root@debian:~ # systemctl status networking.service
*— networking.service - Raise network interfaces
 Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor 
preset: enabled)
 Active: failed (Result: exit-code) since Tue 2021-02-16 08:56:16 CET; 5h 
27min ago
   Docs: man:interfaces(5)
Process: 691 ExecStart=/sbin/ifup -a --read-environment (code=exited, 
status=1/FAILURE)
   Main PID: 691 (code=exited, status=1/FAILURE)

Debian/Buster is still using Network Manager, not systemd, to control the
network, so I think networking.service shouldn't be used.

Well, systemd as init is starting everything so that necessarily
includes starting "the network", which in practice means starting
whatever network management framework is in use[1].

The 'networking.service' service is part of ifupdown, Debian's default
network management framework (Priority: important).

Network Manager is Priority: optional and typically installed as a
Depends/Recommends of Desktop Environments.

[1] this is applicable even for systemd's own network management
framework systemd-networkd, which is included in the 'systemd' Debian
package, but not activated by default.

Kind regards,
Andrei
Sorry, it was midnight when I replied. However the failure is likely 
still due to the interfaces misconfiguration - probably reporting a 
failure to raise a non-existent interface.
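
A quick way to compare what ifupdown intends with what the kernel 
actually has, as a sketch:

    $ ifquery --list    # interfaces ifupdown will try to raise
    $ ip -br link       # interfaces that actually exist
    $ ip -br addr       # ...and their addresses

Any name in the first list that is missing from the second is a 
candidate for an "unknown interface" failure.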




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-17 Thread Gary Dale

On 2021-02-16 17:02, Philip Wyett wrote:

On Tue, 2021-02-16 at 16:45 -0500, Gary Dale wrote:

On 2021-02-13 03:02, Andrei POPESCU wrote:

On Vi, 12 feb 21, 17:00:41, Gary Dale wrote:

Which is why I think it would be useful to have a way to roll back a
package
when you can't fix it quickly. That way you aren't asking all the
users to
do it themselves and track the bug status individually. When the
maintainers
think they have a fix, it can go through the normal process...

Debian doesn't support downgrading of packages.

When dpkg installs another version of a package (typically newer)
it
basically overwrites the existing version and runs the
corresponding
package scripts from the to be installed version.

A newer package may introduce changes that the older package
(scripts)
can't deal with. In practice it does work in many cases, except for
those where it doesn't. Fixing them would require a time machine ;)

A roll-back, especially if automatic, could introduce more issues
than
it fixes.

Someone(tm) has to determine on a case by case basis whether
rolling
back makes sense and the system administrator is in the best
position to
do so.

In theory the package Maintainer could provide a general "hint"
that
system administrators could chose to ignore (at their own risk).

Currently the infrastructure for this doesn't exist[1] and,
besides, I'd
rather have Maintainers focus on fixing the newer package instead.


  Volunteer time is precious!


[1] it would need support in the Debian archive software and APT at
a
minimum.

Besides, there is already an arguably safer (though hackish) way to
achieve that by uploading a package with
version+really.the.old.version
instead.

In this case the Maintainer can also take care to adjust the
package
scripts accordingly.

Random example found on my system:

$ rmadison fonts-font-awesome
fonts-font-awesome | 4.2.0~dfsg-1                      | oldoldstable     | source, all
fonts-font-awesome | 4.7.0~dfsg-1                      | oldstable        | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-1         | stable           | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4~bpo10+1 | buster-backports | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | testing          | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | unstable         | source, all


Kind regards,
Andrei

I hear you, but the issue is that if I revert to a previous version,
then I have to hold it to stop the buggy version from clobbering it
every day. And I have to monitor the Testing version for changes to
see
when a fix is potentially available so I can remove the hold.

Not just me but every user who is experiencing the bug also has to do
this.

There is a kludge for this if the buggy version didn't contain
critical
security fixes - re-release the previous version with a slightly
higher
version number than the buggy one (e.g. 3.7.0-5a). When the bug is
(finally) fixed, give the fixed version a slightly higher number
still
(e.g. 3.7.0-5b).

Again this would only be done where it appears that fixing the bug
may
take time (it's been over a month now). If I were to do the
alternative
- pull packages from Sid - I have no real indication if they fix it
or
introduce even worse problems.

I can only assume that the reason a fix hasn't made it down through
Sid
yet is that it's not simple. My suggestion isn't to make more work
for
maintainers but rather to take the time pressure off them without
leaving us testers to jump through hoops.



Hi,

What appears to be the fixed version is in sid (3.7.0-7). It has to
pass in sid for 10 days before migration to testing, see below link.

https://tracker.debian.org/pkg/gnutls28

https://metadata.ftp-master.debian.org/changelogs//main/g/gnutls28/gnutls28_3.7.0-7_changelog

My testing with filezilla, shows all to be working once more, though
testing has been limited.

Regards

Phil

Confirmed. Seems to work. You need to install libnettle and libgnutls 
from Sid as well.
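
If your sources.list already has an unstable entry, apt can pull just 
those packages by suite. A sketch; the exact library package names here 
are from memory and may differ (e.g. libnettle8):

    # apt install filezilla/unstable libgnutls30/unstable libnettle8/unstable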




Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye

2021-02-17 Thread Gary Dale

On 2021-02-17 04:53, Andrei POPESCU wrote:

On Ma, 16 feb 21, 16:45:13, Gary Dale wrote:

I hear you, but the issue is that if I revert to a previous version, then I
have to hold it to stop the buggy version from clobbering it every day. And
I have to monitor the Testing version for changes to see when a fix is
potentially available so I can remove the hold.

Not just me but every user who is experiencing the bug also has to do this.

This is what 'aptitude forbid-version' is for.

Kind regards,
Andrei

Thanks. I wasn't aware of that option.
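
A sketch of its use (the version string here is illustrative):

    # aptitude forbid-version libgnutls30=3.7.0-5

With no "=version", aptitude forbids the current candidate version; 
either way the package upgrades normally once something newer arrives.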



Re: php-pear etc. on an Apache2 server [resolved]

2021-02-18 Thread Gary Dale

On 2021-02-16 17:56, Gary Dale wrote:
I'm running Buster on my local server which, among other things, I use 
for developing web sites before copying them onto a public host. I've 
recently been getting into a little php coding because there seems to be 
a lot of sample code out there for things I want to do. Usually the 
samples work right away when I try running them after having installed 
libapache2-mod-php.


Right now I'm trying to get a script working that is actually fairly 
small and seems straightforward. It displays a form that allows you to 
send an e-mail with an attachment. I actually have some similar 
scripts working locally that do pretty much the same thing, but I'm 
trying to get this one to work because the host I use for one site 
that needs this type of form has broken the script I had been using (I 
also didn't like it because it seemed overly complicated and 
under-featured).


This script uses the Pear libraries to handle the heavy lifting, which 
seems like a reasonable design decision. I installed the php-pear 
package and also php-mail-mime. Unfortunately, the script failed to 
work. It was uploading the file but failing to send the e-mail.


I was able to find the line that was failing but it puzzles me. The 
line is


        $message = new Mail_mime();

which should be working. I ran the (frequently recommended) php-info 
page (nothing about mime). I'd expect that is something php handles, though.


I got the script to send an e-mail by removing all the mime parts and 
just using the php mail() function. However that's not really useful. 
I need the mime bits to add the attachment.


Anyway, it looks like I need to do something more (besides restarting 
Apache2) to get php to use the php-mail-mime library but I'm not sure 
what. All the Debian information I've found just says "install the 
package php- and it will work". Can anyone help me here?




The issue turned out to be that the script had an incorrect include. It 
asked for Mail_Mime/mime.php when the actual include should have been 
Mail/mime.php. I suspect that the php package names may have changed 
since the author wrote the script and they never bothered to update it.
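
For anyone hitting the same trap, a minimal sketch of the working 
pattern (addresses, subject and attachment path are placeholders):

    <?php
    require_once 'Mail.php';        // PEAR Mail
    require_once 'Mail/mime.php';   // note: Mail/mime.php, not Mail_Mime/mime.php

    $message = new Mail_mime();
    $message->setTXTBody("Please see the attached file.");
    $message->addAttachment('/tmp/upload.pdf', 'application/pdf');

    $body    = $message->get();     // get() must be called before headers()
    $headers = $message->headers(array(
        'From'    => 'webform@example.com',
        'Subject' => 'Form submission',
    ));

    $mail = Mail::factory('mail');
    $mail->send('recipient@example.com', $headers, $body);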




Re: Need Support for Dell XPS 15 7590, Hard Drive Make Micron 2300 NVMe 1024 GB

2021-02-18 Thread Gary Dale

On 2021-02-18 09:48, Steve McIntyre wrote:

zcor...@yahoo.com wrote:

Just received a new laptop, and both Debian Stable, and Debian
testing would not detect the hard drive.  Any possibility this can be
added to the to-do-list for developers?  I bought a Dell XPS 15 7590
(2019) edition.

Check in the BIOS settings - the drive may be configured in "RAID"
mode. If so, switching to "AHCI" will most likely solve your problem.

Good idea if it were a desktop and if the drive weren't NVMe (although 
2020 saw some desktops with dual NVMe slots). Seems a long shot for a 
laptop (even a used one, as this might be).




Re: rsync to NAS for backup

2021-02-18 Thread Gary Dale

On 2021-02-18 10:57, mick crane wrote:

On 2021-02-15 12:39, mick crane wrote:

On 2021-02-13 19:20, David Christensen wrote:

On 2021-02-13 01:27, mick crane wrote:

I made a mistake and instead of getting a PC for backup I got a NAS.
I'm struggling to get to grips with it.
If I rsync from PC to NAS, the NAS changes the owner/group of files to 
me/users, which is probably no good for backing up.

There's that problem then another that it won't let me login as root.
I asked on Synology forum but not getting a lot of joy.
https://community.synology.com/enu/forum/1/post/141137
Anybody used these things can advise ?


What is the model of the Synology NAS?  What options -- CPU, memory,
disks, bays, interfaces, PSU, whatever?  Support page URL?


Reading the forum post, it sounds like you damaged the sudoers file.
The fix would appear to be doing a Mode 2 reset per Synology's
instructions:

https://www.synology.com/en-global/knowledgebase/DSM/tutorial/General_Setup/How_to_reset_my_Synology_NAS 




Once the NAS has been reset, figure out how to meet your needs within
the framework provided by Synology.  Follow the User Guide. Follow
the Admin Guide.  Do not mess around "under the hood" with a terminal
and sudo.  Make Synology earn your money.


But if you want complete control, buy or build an x86_64/amd64 server,
install Debian, and have at it.




thanks for advices folks.
It was indeed user error: being in a rush and with blurred eyesight I
mistook "%" for "#".
We are making progress.
Appears that to retain permissions you need root at both ends of rsync.
Have keys working with ssh for users to the NAS (not helped by the
default permissions for .ssh files being wrong) and can su to root, so
now I need to get ssh working with keys with no passphrase for root and
all should be good.


Further to this, if it helps anybody: you can start sshd with the -d 
switch and at the same time run the client with the -vv switch, then 
you can see where it is falling down. Having telnet available also 
helps: if you break sshd_config you can still telnet in and mend it.

mick
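
Spelled out, that debug recipe looks like this (port 2222 is arbitrary, 
chosen so the running daemon is left alone):

    # /usr/sbin/sshd -d -p 2222    # on the NAS: one-shot foreground debug instance
    $ ssh -vv -p 2222 root@nas     # from the client

Each side then prints its half of the key exchange and authentication, 
which usually shows exactly where things fall down.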


rsync is a quick & dirty backup tactic but it's got limitations.

1) files may stay around forever in the backup even if you've deleted 
them from your main computer because you don't need them.


2) you only have one copy of a file and that only lasts until the next 
rsync. This limits your ability to restore from a backup before it is 
overwritten.



Using a real backup program, which can run on your main computer to 
back up to the NAS, lets you define a retention policy so files no longer 
needed can be purged while you have multiple backups of files you are 
currently working on.


rsync is not a good substitute for backups.






Re: rsync to NAS for backup

2021-02-18 Thread Gary Dale

On 2021-02-18 12:22, to...@tuxteam.de wrote:

On Thu, Feb 18, 2021 at 06:59:03PM +0200, Teemu Likonen wrote:

* 2021-02-18 11:13:25-0500, Gary Dale wrote:


rsync is a quick & dirty backup tactic but it's got limitations.

1) files may stay around forever in the backup even if you've deleted
them from your main computer because you don't need them.

2) you only have one copy of a file and that only lasts until the next
rsync. This limits your ability to restore from a backup before it is
overwritten.
rsync is not a good substitute for backups.

No, it's not. It is a fantastic tool for backups :-)


Rsync is great backup program with "--link-dest" option. Here is the
idea in simplified code:

[...]

Absolutely. Time travel!

Actually, I've implemented this at a customer's place. They were
delighted.

Where rsync shows some weaknesses is on big, fat files (think
videos, one or several GB).

Really huge directories (tens to hundreds of TB) were once a
challenge, too, but I hear that they refined the scanning
part in the meantime. No direct experience, though.

And, oh, Gary: if you want to delete files which disappeared
in the source, check out the --delete option.

But this time-staggered backup with --link-dest is really great.

Cheers


While you can twist any tool to fit a task, real backup programs don't 
need to be twisted and do a better job. For example, backup retention 
policies are intuitive and easy to set. Some backup programs even factor 
out common blocks for de-duplication, which can save a lot of space. 
Hard links only do that if the file name is the same.


And when you need to restore a file, backup programs usually let you see 
when the files changed then let you choose which version to restore.


As for the delete option, it makes the rsync script even more 
complicated. A backup program will simply expire the file at the end of 
the retention period.
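
For reference, the --link-dest pattern discussed above is only a few 
lines. A minimal sketch, with placeholder paths and no retention handling:

    #!/bin/sh
    # each run gets a dated directory; unchanged files become hard links
    today=$(date +%F)
    rsync -a --delete --link-dest=/backup/latest /home/ "/backup/$today/"
    ln -sfn "/backup/$today" /backup/latest

Expiring old backups is then an rm -rf of old dated directories - which 
is exactly the bookkeeping a real backup program does for you.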




Re: Need Support for Dell XPS 15 7590, Hard Drive Make Micron 2300 NVMe 1024 GB

2021-02-18 Thread Gary Dale

On 2021-02-18 12:25, Steve McIntyre wrote:

g...@extremeground.com wrote:

On 2021-02-18 09:48, Steve McIntyre wrote:

zcor...@yahoo.com wrote:

Just received a new laptop, and both Debian Stable, and Debian
testing would not detect the hard drive.  Any possibility this can be
added to the to-do-list for developers?  I bought a Dell XPS 15 7590
(2019) edition.

Check in the BIOS settings - the drive may be configured in "RAID"
mode. If so, switching to "AHCI" will most likely solve your problem.


Good idea if it were a desktop and if the drive weren't NVMe (although
2020 saw some desktops with dual NVMe slots). Seems a long shot for a
laptop (even a used one, as this might be).

I wish you were right, but even in the space year 2020 it's still a
thing! For an example, see:

   
https://www.dell.com/community/XPS/Pros-Cons-AHCI-vs-Raid-On-XPS13-9300-NVMe/td-p/7636984

Interesting. Thanks for the information.



Re: php-pear etc. on an Apache2 server [resolved]

2021-02-19 Thread Gary Dale

On 2021-02-18 09:06, Gary Dale wrote:

On 2021-02-16 17:56, Gary Dale wrote:
I'm running Buster on my local server which, among other things, I 
use for developing web sites before copying them onto a public host. 
I've recently been getting into a little php coding because there 
seems to be a lot of sample code out there for things I want to do. 
Usually the samples work right away when I try running them after having 
installed libapache2-mod-php.


Right now I'm trying to get a script working that is actually fairly 
small and seems straightforward. It displays a form that allows you 
to send an e-mail with an attachment. I actually have some similar 
scripts working locally that do pretty much the same thing, but I'm 
trying to get this one to work because the host I use for one site 
that needs this type of form has broken the script I had been using 
(I also didn't like it because it seemed overly complicated and 
under-featured).


This script uses the Pear libraries to handle the heavy lifting, which 
seems like a reasonable design decision. I installed the php-pear 
package and also php-mail-mime. Unfortunately, the script failed to 
work. It was uploading the file but failing to send the e-mail.


I was able to find the line that was failing but it puzzles me. The 
line is


        $message = new Mail_mime();

which should be working. I ran the (frequently recommended) php-info 
page (nothing about mime). I'd expect that is something php handles, though.


I got the script to send an e-mail by removing all the mime parts and 
just using the php mail() function. However that's not really useful. 
I need the mime bits to add the attachment.


Anyway, it looks like I need to do something more (besides restarting 
Apache2) to get php to use the php-mail-mime library but I'm not sure 
what. All the Debian information I've found just says "install the 
package php- and it will work". Can anyone help me here?




The issue turned out to be that the script had an incorrect include. It 
asked for Mail_Mime/mime.php when the actual include should have been 
Mail/mime.php. I suspect that the php package names may have changed 
since the author wrote the script and they never bothered to update it.


Further to above, when I went to move the script to my host, I 
discovered that the cPanel php-pear installer used the package names 
from the original script. Their Mail_Mime package was actually called 
that while their Mail package doesn't include mime.php.


My host's cPanel php-pear packages appear to be relatively recent as 
some of the documentation has dates from last year. Perhaps the 
different package naming is a Debian thing?




Re: php-pear etc. on an Apache2 server [resolved]

2021-02-19 Thread Gary Dale

On 2021-02-19 09:17, Gary Dale wrote:

On 2021-02-18 09:06, Gary Dale wrote:

On 2021-02-16 17:56, Gary Dale wrote:
I'm running Buster on my local server which, among other things, I 
use for developing web sites before copying them onto a public host. 
I've recently been getting into a little php coding because there 
seems to be a lot of sample code out there for things I want to do. 
Usually the samples work right away when I try running them after having 
installed libapache2-mod-php.


Right now I'm trying to get a script working that is actually fairly 
small and seems straightforward. It displays a form that allows you 
to send an e-mail with an attachment. I actually have some similar 
scripts working locally that do pretty much the same thing, but I'm 
trying to get this one to work because the host I use for one site 
that needs this type of form has broken the script I had been using 
(I also didn't like it because it seemed overly complicated and 
under-featured).


This script uses the Pear libraries to handle the heavy lifting, 
which seems like a reasonable design decision. I installed the 
php-pear package and also php-mail-mime. Unfortunately, the script 
failed to work. It was uploading the file but failing to send the 
e-mail.


I was able to find the line that was failing but it puzzles me. The 
line is


        $message = new Mail_mime();

which should be working. I ran the (frequently recommended) php-info 
page (nothing about mime). I'd expect that is something php handles, though.


I got the script to send an e-mail by removing all the mime parts 
and just using the php mail() function. However that's not really 
useful. I need the mime bits to add the attachment.


Anyway, it looks like I need to do something more (besides 
restarting Apache2) to get php to use the php-mail-mime library but 
I'm not sure what. All the Debian information I've found just says 
"install the package php- and it will work". Can anyone 
help me here?




The issue turned out to be that the script had an incorrect include. It 
asked for Mail_Mime/mime.php when the actual include should have been 
Mail/mime.php. I suspect that the php package names may have changed 
since the author wrote the script and they never bothered to update it.


Further to above, when I went to move the script to my host, I 
discovered that the cPanel php-pear installer used the package names 
from the original script. Their Mail_Mime package was actually called 
that while their Mail package doesn't include mime.php.


My host's cPanel php-pear packages appear to be relatively recent as 
some of the documentation has dates from last year. Perhaps the 
different package naming is a Debian thing?


To make my confusion complete, even though the package is called 
Mail_Mime, to access the mime.php procedure, I need to point the include 
to the Mail directory, just like I had to locally. I'm sure that there 
is a logical reason for this somewhere...


So my initial assumption was close to correct. Where the stuff installs 
isn't actually related to the package name.




Jitsi keeps failing when I want to use it

2021-03-06 Thread Gary Dale
I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It 
usually works fine. However every time I want to actually host a 
meeting, it decides to act up.


Yesterday evening I tested Jitsi with a meeting between my desktop 
system and my Android phone. Everything worked properly. I could see the 
video feed from both devices on both devices. Since it was working, I 
notified people of the meeting address.


Today, I tried it again pre-meeting only to find the usual issue cropped 
up. While both devices can connect to the meeting, I can only see the 
video feed from the local device (I can see my desktop feed on the 
desktop browser and the Android feed on the phone).


I tried creating a new meeting but I get the same problem. I've tried 
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which 
worked in the past, but still no luck. Nor does rebooting the server help.


Any ideas anyone?



Re: Jitsi keeps failing when I want to use it

2021-03-06 Thread Gary Dale

On 2021-03-06 15:36, Henning Follmann wrote:

On Sat, Mar 06, 2021 at 01:38:07PM -0500, Gary Dale wrote:

I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It usually
works fine. However every time I want to actually host a meeting, it decides
to act up.

Yesterday evening I tested Jitsi with a meeting between my desktop system
and my Android phone. Everything worked properly. I could see the video feed
from both devices on both devices. Since it was working, I notified people
of the meeting address.

Today, I tried it again pre-meeting only to find the usual issue cropped up.
While both devices can connect to the meeting, I can only see the video feed
from the local device (I can see my desktop feed on the desktop browser and
the Android feed on the phone).

I tried creating a new meeting but I get the same problem. I've tried
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which
worked in the past, but still no luck. Nor does rebooting the server help.

Any ideas anyone?



As usual, no logs => it didn't happen!

Without any information one could only guess.

-H

# tail -n 32 jicofo.log
Jicofo 2021-03-06 15:38:06.901 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid= 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521, 
bridge=jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876
Jicofo 2021-03-06 15:38:06.902 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, 
conference=22383: [[null, null]]
Jicofo 2021-03-06 15:38:06.906 INFO: [136] 
org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery 
for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:06.906 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid= 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557, 
bridge=jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876
Jicofo 2021-03-06 15:38:06.907 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, 
conference=22383: [[null, null, null]]
Jicofo 2021-03-06 15:38:06.908 INFO: [137] 
org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery 
for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:07.192 INFO: [136] 
org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered 
features for 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521 in 285
Jicofo 2021-03-06 15:38:07.215 INFO: [136] 
org.jitsi.jicofo.AbstractChannelAllocator.log() Using 
jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876 
to allocate channels for: 
Participant[shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521]@423117319
Jicofo 2021-03-06 15:38:08.151 INFO: [137] 
org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered 
features for 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557 in 1243
Jicofo 2021-03-06 15:38:08.155 INFO: [137] 
org.jitsi.jicofo.AbstractChannelAllocator.log() Using 
jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876 
to allocate channels for: 
Participant[shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557]@2134948675
Jicofo 2021-03-06 15:38:08.344 INFO: [136] 
org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending 
session-initiate to: 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:08.366 INFO: [137] 
org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending 
session-initiate to: 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:08.600 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:08.614 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept 
from shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521 with 
accepted sources:Sources{ video: [ssrc=2796851205 ssrc=3366031894 
ssrc=2837324841 ] audio: [ssrc=682686965 ] }@887521678
Jicofo 2021-03-06 15:38:08.619 WARNING: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() No jingle session yet for 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:09.449 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: 
shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:09.458 INFO: [32] 
org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept 
from shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557 with 
accepted sources:Sources{ video: [ssrc=682624931 ssrc=616139617 
ssrc=4223476417 ssrc=2608752843 ssrc=22331310 ssrc=4245847487 ] audio: 
[ssrc=4124579793 ] }@703862439
Jicofo 2021-03-06 15:38:41.565 INFO: [32] 
org.jitsi.jicofo.ChatRoomRoleAndP

Re: Jitsi keeps failing when I want to use it

2021-03-06 Thread Gary Dale
My phone lost its wifi so it only connects via mobile data. In any 
event, the connection is through a public IP address.


I've noticed one thing that puzzles me a little (after I tried removing 
and reinstalling Jitsi) and that is that the Debian/Buster package 
installs jitsi-videobridge while the jitsi install guide for 
Debian/Ubuntu talks about jitsi-videobridge2.



On 2021-03-06 15:16, Dan Ritter wrote:

Gary Dale wrote:
I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It 
usually
works fine. However every time I want to actually host a meeting, it 
decides

to act up.

Yesterday evening I tested Jitsi with a meeting between my desktop system
and my Android phone. Everything worked properly. I could see the 
video feed
from both devices on both devices. Since it was working, I notified 
people

of the meeting address.

Today, I tried it again pre-meeting only to find the usual issue 
cropped up.
While both devices can connect to the meeting, I can only see the 
video feed
from the local device (I can see my desktop feed on the desktop 
browser and

the Android feed on the phone).

I tried creating a new meeting but I get the same problem. I've tried
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which
worked in the past, but still no luck. Nor does rebooting the server 
help.

Is your phone connected via wifi to your LAN or to a mobile data
service? Try it both ways.

I suspect a STUN/TURN NAT issue.

-dsr-




Re: Jitsi keeps failing when I want to use it

2021-03-06 Thread Gary Dale

On 2021-03-06 15:36, Henning Follmann wrote:

On Sat, Mar 06, 2021 at 01:38:07PM -0500, Gary Dale wrote:

I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It usually
works fine. However every time I want to actually host a meeting, it decides
to act up.

Yesterday evening I tested Jitsi with a meeting between my desktop system
and my Android phone. Everything worked properly. I could see the video feed
from both devices on both devices. Since it was working, I notified people
of the meeting address.

Today, I tried it again pre-meeting only to find the usual issue cropped up.
While both devices can connect to the meeting, I can only see the video feed
from the local device (I can see my desktop feed on the desktop browser and
the Android feed on the phone).

I tried creating a new meeting but I get the same problem. I've tried
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which
worked in the past, but still no luck. Nor does rebooting the server help.

Any ideas anyone?



As usual, no logs => it didn't happen!

Without any information one could only guess.

-H


# tail -n 32 jvb.log
2021-03-06 15:40:29.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:40:39.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:40:49.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:40:59.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:41:09.285 INFO: [23] VideobridgeExpireThread.expire#140: 
Running expire()
2021-03-06 15:41:09.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:41:19.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:41:29.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:41:39.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:41:49.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:41:59.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:42:09.285 INFO: [23] VideobridgeExpireThread.expire#140: 
Running expire()
2021-03-06 15:42:09.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:42:19.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:42:29.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:42:39.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:42:49.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:42:59.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:43:09.285 INFO: [23] VideobridgeExpireThread.expire#140: 
Running expire()
2021-03-06 15:43:09.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:43:19.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:43:29.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:43:39.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:43:49.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:43:59.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:44:09.285 INFO: [23] VideobridgeExpireThread.expire#140: 
Running expire()
2021-03-06 15:44:09.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:44:19.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.1S. Sticky failure: false
2021-03-06 15:44:29.306 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:44:39.305 INFO: [24] HealthChecker.run#170: Performed a 
successful health check in PT0.11S. Sticky failure: false
2021-03-06 15:44:49.306 INFO: [24] Hea

Re: Jitsi keeps failing when I want to use it

2021-03-06 Thread Gary Dale

On 2021-03-06 15:44, Gary Dale wrote:
My phone lost its wifi so it only connects via the mobile data. In any 
event, the connection is through a public IP address.


I've noticed one thing that puzzles me a little (after I tried
removing and reinstalling Jitsi): the Debian/Buster package installs
jitsi-videobridge while the jitsi install guide for Debian/Ubuntu
talks about jitsi-videobridge2.



On 2021-03-06 15:16, Dan Ritter wrote:

Gary Dale wrote:
I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It
usually
works fine. However every time I want to actually host a meeting, it 
decides

to act up.

Yesterday evening I tested Jitsi with a meeting between my desktop 
system
and my Android phone. Everything worked properly. I could see the 
video feed
from both devices on both devices. Since it was working, I notified 
people

of the meeting address.

Today, I tried it again pre-meeting only to find the usual issue 
cropped up.
While both devices can connect to the meeting, I can only see the 
video feed
from the local device (I can see my desktop feed on the desktop 
browser and

the Android feed on the phone).

I tried creating a new meeting but I get the same problem. I've tried
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which
worked in the past, but still no luck. Nor does rebooting the server 
help.

Is your phone connected via wifi to your LAN or to a mobile data
service? Try it both ways.

I suspect a STUN/TURN NAT issue.

-dsr-


BTW: I didn't change anything on the jitsi server between it working and 
it not working. I did ssh to it in the morning to check for apt updates 
but there weren't any.
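
For anyone pursuing the STUN/TURN theory above: when the videobridge sits
behind NAT, the Jitsi self-hosting guides of this era have it advertise both
its private and public addresses in
/etc/jitsi/videobridge/sip-communicator.properties. A minimal sketch, with
placeholder addresses (10.0.0.5 and 203.0.113.17 are examples, not values
from this thread):

  # /etc/jitsi/videobridge/sip-communicator.properties
  org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.5
  org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.17

  # then restart the bridge (jitsi-videobridge2, or jitsi-videobridge if
  # that is the package actually installed, as noted above)
  systemctl restart jitsi-videobridge2

Seeing only the local feed on each device is the classic symptom of media
ports failing to traverse NAT, so this is worth ruling out.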





Re: Jitsi keeps failing when I want to use it [RESOLVED]

2021-03-06 Thread Gary Dale

On 2021-03-06 15:54, Gary Dale wrote:

On 2021-03-06 15:44, Gary Dale wrote:
My phone lost its wifi so it only connects via mobile data. In
any event, the connection is through a public IP address.


I've noticed one thing that puzzles me a little (after I tried
removing and reinstalling Jitsi): the Debian/Buster package installs
jitsi-videobridge while the jitsi install guide for Debian/Ubuntu
talks about jitsi-videobridge2.



On 2021-03-06 15:16, Dan Ritter wrote:

Gary Dale wrote:
I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It
usually
works fine. However every time I want to actually host a meeting, 
it decides

to act up.

Yesterday evening I tested Jitsi with a meeting between my desktop 
system
and my Android phone. Everything worked properly. I could see the 
video feed
from both devices on both devices. Since it was working, I notified 
people

of the meeting address.

Today, I tried it again pre-meeting only to find the usual issue 
cropped up.
While both devices can connect to the meeting, I can only see the 
video feed
from the local device (I can see my desktop feed on the desktop 
browser and

the Android feed on the phone).

I tried creating a new meeting but I get the same problem. I've tried
reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), 
which
worked in the past, but still no luck. Nor does rebooting the 
server help.

Is your phone connected via wifi to your LAN or to a mobile data
service? Try it both ways.

I suspect a STUN/TURN NAT issue.

-dsr-


BTW: I didn't change anything on the jitsi server between it working 
and it not working. I did ssh to it in the morning to check for apt 
updates but there weren't any.


OK, it turns out the problem was with my phone. I have no idea why it
worked yesterday but not today; when I tried it with another device,
things worked.




ssh local port forwarding stopped working

2019-05-28 Thread Gary Dale

I'm running Debian/Testing on an AMD64 machine.

I follow what I believe is a fairly conventional way of connecting to 
remote machines. Firstly I establish an SSH tunnel using a command like:


  ssh  -L 5902::5900

where the remote server public IP is that of the router (DD-WRT) with 
port 22 forwarded to the local IP of a remote Debian/Stable server. The 
remote workstation IPs are in the 192.168.1.* range. The SSH connection 
works fine.


Then I connect to localhost:5902 using a VNC viewer (tried a few).  I've 
been doing this for a decade with no significant problems.
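
Written out in full, the pattern looks like the sketch below; the user name,
host name and workstation address are placeholders, not the values elided
above:

  # forward local port 5902 to VNC (port 5900) on a workstation behind the router
  ssh user@remote.example.com -L 5902:192.168.1.10:5900

  # in another terminal, point the viewer at the local end of the tunnel
  # (the double colon tells vncviewer 5902 is a port, not a display number)
  vncviewer localhost::5902

  # if the forwarding itself is suspect, -v makes ssh log each channel it opens
  ssh -v user@remote.example.com -L 5902:192.168.1.10:5900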


However I haven't been able to do this since at least yesterday
(the previous remote login, a week ago, worked). No matter which
remote machine I try to connect to, I never get to the password prompt.
Instead the connection attempt eventually times out.


I can log onto a KVM virtual machine running on the remote server using 
the Virtual Machine Manager GUI. From there I can connect to the other 
(real) machines using the Tight VNC viewer.


Since I can connect to the remote workstations from the VM, the problem 
cannot be with their service setup. And since the problem isn't resolved 
by using a different VNC viewer from my local workstation, the problem 
can't be the VNC client. This just leaves the ssh tunnel - specifically 
the port forwarding - as the only common element.




Re: netinst bad display after first screen

2019-05-31 Thread Gary Dale

On 2019-05-31 4:40 p.m., Blair, Charles E III wrote:

I have downloaded the current netinst and
burned it to a DVD.

When I boot it, the first screen I see shows
the usual beginning with choices "Graphics Install,"
"Install," "Advanced".

I choose "Install" and press F10.

The monitor then shows a row of what look like
tiny screen images at the top, with the rest of
the monitor all black.  Ctrl-Alt-F1, etc. makes
changes in the tiny images at the top, but the
all-black rest of screen is unchanged.

At this point, I'm just guessing.  I tried
repeating the process, except that I changed


linux /install.amd/vmlinuz vga=788 --- quiet

to


linux /install.amd/vmlinuz vga=normal fb=false --- quiet

and pressing F10.  This gave me an all-black screen,
no tiny row.

I really hope somebody can tell me what "magic words"
I should be using. (I tried "nomodeset" but that didn't help)

  
Did you check the sha256 checksum against the netinst.iso file? Also did you
verify that the DVD burned correctly?
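
For reference, one way to do both checks from a shell; the image filename
and /dev/sr0 are examples:

  # verify the image against Debian's published checksum list
  sha256sum -c SHA256SUMS --ignore-missing

  # hash exactly as many bytes from the disc as the image contains,
  # then compare the two hashes by eye
  blocks=$(( $(stat -c %s debian-netinst.iso) / 2048 ))
  dd if=/dev/sr0 bs=2048 count=$blocks 2>/dev/null | sha256sum
  sha256sum debian-netinst.iso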


Finally, why did you select "Install" then press F10? You should just 
hit "enter".





KVM snapshot management

2019-05-31 Thread Gary Dale

I'm running a VM host server with Debian/Stable on AMD64.

I recently converted an important VM over to qcow2 so I could take 
advantage of snapshots. I don't think I need to be able to revert more
than a week, so naming each snapshot with the day-of-week number seemed
reasonable: it is easy to implement and easy to see which day each
snapshot is for. I set up a bash script, run from /etc/crontab, to
handle this.


After shutting down the VM, I delete the previous week's snapshot, which 
should be the oldest. My understanding is this merges it back into the 
base image while the new snapshot that I create next is now 6 snapshots 
away from the base. I think this gives me the ability to revert up
to a week if needed.


Since the snapshots are named using the DOW, the new snapshot has the 
same name as the one I just deleted. The names also cycle over a period 
of a week.


Is my understanding of the way snapshots work correct and is my approach 
reasonable?


The core part of my script is below. This is working in that it does 
what I described above WRT handling snapshots (for the first week, of 
course, the snapshot-delete fails since there isn't one).


# Ask libvirt to shut the VM down, then poll until it is no longer running
virsh shutdown $VM
state=$(virsh list --all | grep " $VM " | awk '{ print $3 }')
while [ "$state" = "running" ]; do
  sleep 10
  state=$(virsh list --all | grep " $VM " | awk '{ print $3 }')
done
# Day-of-week number (1-7) gives snapshot names that cycle weekly
DOW=$(date +%u)
# Drop last week's snapshot of the same name, then take today's
virsh snapshot-delete    --domain $VM --snapshotname ${VM}S$DOW
virsh snapshot-create-as --domain $VM --name ${VM}S$DOW
virsh start $VM
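
For completeness, two commands that go with this rotation scheme; the day
number in the second one is arbitrary:

  # list the snapshots (up to seven) currently held for the VM
  virsh snapshot-list --domain $VM

  # roll back to, e.g., Wednesday's snapshot; shut the VM down first
  virsh snapshot-revert --domain $VM --snapshotname ${VM}S3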



Re: df shows wrong disk size

2019-06-01 Thread Gary Dale
I suggest trying gparted to read the partition table on your drive. 
There may be a problem and gparted is usually pretty good at finding 
partition table errors.



On 2019-06-01 1:41 p.m., Ross Boylan wrote:

df says my volume is 3G, but everything else says it's 4G.  What's
going on and how can I correct it?

This question concerns the total reported space, not the free space.

The volume is an LVM logical volume on buster with an ext4 file
system.  I originally mistakenly created it as 4TB in size.
Then I took it offline, resized the file system to 3G, resized the
logical volume to 4G, and  then auto-resized (that is, ran resize2fs
without specifying an explicit size) the file system to 4G.  When df
showed the size as 3G, I thought it might be temporary, but it
reports the same value after reboot.

(If you're wondering: I resized to 3G first out of concern that the
file system requires a slightly larger "partition" than its own size,
and I didn't want to risk cutting off the end of the file system.)
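
The shrink-then-regrow sequence described above, written out against the
volume named in this post; the filesystem must be unmounted for the shrink,
and resize2fs insists on a clean fsck first:

  umount /var/local/cache
  e2fsck -f /dev/vgbarley/cache       # required before shrinking
  resize2fs /dev/vgbarley/cache 3G    # shrink the fs below the target LV size
  lvresize -L 4G /dev/vgbarley/cache  # shrink the LV from 4T to the intended 4G
  resize2fs /dev/vgbarley/cache       # grow the fs to fill the 4G LV
  mount /var/local/cache              # assumes an fstab entry for the mount point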

I thought there might have been a huge amount of reserved space or
journal from the original 4TB size, but the values in dumpe2fs appear
normal to me.
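
One value in the dumpe2fs output below may still be worth a second look:
the journal is 1024M, far larger than mke2fs would create for a 4G
filesystem, and resize2fs does not shrink the journal when it shrinks the
filesystem. Since ext4 deducts journal and metadata overhead from the size
it reports to statfs(), a leftover 1G journal would plausibly account for
df showing 3.0G on a 4G volume.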

Running buster.

Thanks.
Ross

# df -h /var/local/cache/
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/vgbarley-cache  3.0G  721M  2.1G  26% /var/local/cache
root@barley:~/tempserver/root# lvs vgbarley
   LV  VG   Attr   LSize   Pool Origin Data%  Meta%  Move
Log Cpy%Sync Convert
   cache   vgbarley -wi-ao   4.00g
## etc
# resize2fs /dev/vgbarley/cache
resize2fs 1.44.5 (15-Dec-2018)
The filesystem is already 1048576 (4k) blocks long.  Nothing to do!
# So both LVM and e2fs utilities see 4G, even though df reports 3G

# somewhat later
# dumpe2fs -h /dev/vgbarley/cache
dumpe2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   
Last mounted on:  /var/local/cache
Filesystem UUID:  0601d7dc-2efe-46c7-9cac-205a761b70ef
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent 64bit flex_bg sparse_super large_file
huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options:user_xattr acl
Filesystem state: clean
Errors behavior:  Continue
Filesystem OS type:   Linux
Inode count:  131072
Block count:  1048576
Reserved block count: 52428
Free blocks:  621488
Free inodes:  122857
First block:  0
Block size:   4096
Fragment size:4096
Group descriptor size:64
Reserved GDT blocks:  1024
Blocks per group: 32768
Fragments per group:  32768
Inodes per group: 4096
Inode blocks per group:   256
Flex block group size:16
Filesystem created:   Mon May 27 11:54:50 2019
Last mount time:  Thu May 30 17:06:02 2019
Last write time:  Thu May 30 17:06:02 2019
Mount count:  2
Maximum mount count:  -1
Last checked: Mon May 27 14:17:18 2019
Check interval:   0 ()
Lifetime writes:  35 GB
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
First inode:  11
Inode size:  256
Required extra isize: 32
Desired extra isize:  32
Journal inode:8
Default directory hash:   half_md4
Directory Hash Seed:  24162063-f4a6-4420-b79b-3ad4f9b71ab7
Journal backup:   inode blocks
Checksum type:crc32c
Checksum: 0x48ff013b
Journal features: journal_64bit journal_checksum_v3
Journal size: 1024M
Journal length:   262144
Journal sequence: 0x05be
Journal start:1
Journal checksum type:crc32c
Journal checksum: 0xb7b54059






error when printing to Samsung C410 series colour laser printer

2019-06-11 Thread Gary Dale
I can print fine from Stretch but not from Buster. When I try to print 
to this printer from Buster, all I get is a short text error listing. For
example, I just tried to print a PDF and got


SPL-C ERROR - Undefined Command

    POSITION : 0x21958 (137560)

    SYSTEM : src/xl_pa

    LINE : 298

    VERSION : SPL-C 5.59.01 06-19-2013

I get different errors when printing LibreOffice documents of various 
types. The errors persist whether I print to the printer via its network 
interface or USB cable via a Stretch CUPS server. However I can print 
from a Stretch computer.


My usual workaround is to create a PDF, then ssh to the CUPS server and 
print via lp.
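
Spelled out, that workaround is roughly the following; the user and host
names are placeholders, and the queue name must match what lpstat reports
on the server:

  # copy the PDF to the Stretch CUPS server, then print it there with lp
  scp document.pdf user@cups-server:/tmp/
  ssh user@cups-server lp -d Samsung_C410_Series /tmp/document.pdf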


This only seems to affect this one printer. It's been going on for 
months now, so I figured I should report it since it isn't going away on 
its own. The driver I use is the Samsung C410 Series driver, which is 
probably the one from the HP driver download page.




Re: error when printing to Samsung C410 series colour laser printer

2019-06-13 Thread Gary Dale

On 2019-06-13 5:06 a.m., Brian wrote:

On Tue 11 Jun 2019 at 16:59:35 -0400, Gary Dale wrote:


I can print fine from Stretch but not from Buster. When I try to print to
this printer from Buster, all I get is a short text error listing. For
example, I just tried to print a PDF and got

SPL-C ERROR - Undefined Command

     POSITION : 0x21958 (137560)

     SYSTEM : src/xl_pa

     LINE : 298

     VERSION : SPL-C 5.59.01 06-19-2013

I get different errors when printing LibreOffice documents of various types.
The errors persist whether I print to the printer via its network interface
or USB cable via a Stretch CUPS server. However I can print from a Stretch
computer.

My usual workaround is to create a PDF, then ssh to the CUPS server and
print via lp.

This only seems to affect this one printer. It's been going on for months
now, so I figured I should report it since it isn't going away on its own.
The driver I use is the Samsung C410 Series driver, which is probably the
one from the HP driver download page.

Your setup would appear to be:

1. The C410 is connected to a CUPS server via USB (but the connection
can also be via wireless).

2. The server has the Samsung ULD software installed and is advertising
shared queues.

3. The buster client contacts the server over wireless and is running
cups-browsed.

Please post what you get with 'lpstat -l -e' from the client. You should
be able to recognise your print queue from the output, so follow up with
'lpoptions -p '.

Not quite. The network connection is wired. The C410 only connects 
wirelessly using WPS, which I have disabled on the router.


$ lpstat -l -e
Samsung_C410_Series permanent 
ipp://localhost/printers/Samsung_C410_Series 
dnssd://Samsung%20C410%20Series%20(SEC30CDA71CB48A)._printer._tcp.local/
Samsung_C410_Series_SEC30CDA71CB48A_ network none 
ipp://Samsung%20C410%20Series%20(SEC30CDA71CB48A)._ipp._tcp.local/
Samsung_C410_Series_TheLibrarian network none 
ipps://Samsung%20C410%20Series%20%40%20TheLibrarian._ipps._tcp.local/cups
Samsung_C410_Series_TheLibrarian_3 permanent 
ipp://localhost/printers/Samsung_C410_Series_TheLibrarian_3 file:///dev/null


The printer is defined  on the server (TheLibrarian) twice - once as a 
network printer and once as a USB printer. It's defined once on my 
workstation as a network printer, so I can avoid going through the server.


$ lpoptions -p Samsung_C410_Series
copies=1 
device-uri=dnssd://Samsung%20C410%20Series%20(SEC30CDA71CB48A)._printer._tcp.local/ 
finishings=3 job-cancel-after=10800 job-hold-until=no-hold 
job-priority=50 job-sheets=none,none marker-change-time=1560454516 
marker-colors=#00,#00,#FF00FF,#00,none,none,none,none,none,none 
marker-levels=201,178,74,62,89,60,61,89,50,0 marker-names='Black\ Toner\ 
S/N\ :CRUM-14031169715,Cyan\ Toner\ S/N\ :CRUM-14031169678,Magenta\ 
Toner\ S/N\ :CRUM-14031182177,Yellow\ Toner\ S/N\ 
:CRUM-14031182186,Transfer\ Roller,Transfer\ Belt,Fuser\ Life,Pick-up\ 
Roller,Imaging\ Unit,Waste\ Toner' 
marker-types=toner,toner,toner,toner,other,other,fuser,other,other,other 
number-up=1 PageSize=Letter printer-commands=none printer-info='Samsung 
C410 Series' printer-is-accepting-jobs=true printer-is-shared=false 
printer-is-temporary=false printer-location='family room' 
printer-make-and-model='Samsung C410 Series' printer-state=3 
printer-state-change-time=1560454516 printer-state-reasons=none 
printer-type=2101324 
printer-uri-supported=ipp://localhost/printers/Samsung_C410_Series




strange behaviour with RDP clients connecting to Windows 10 machines

2019-06-13 Thread Gary Dale
I recently had the unpleasant experience of "upgrading" a pair of 
Windows 7/Pro computers to Windows 10/Pro - the 9-month-old version, not
the latest install image. That's when I discovered the Windows 10 
"feature" that it doesn't allow VNC connections when the monitor is 
turned off (or not attached).


I enabled Remote Desktop services on both machines and can connect to 
them when the monitor is off using RDP. However, even though I installed 
the same version of Windows 10/Pro on each, they behave differently. One 
was a long, drawn-out slugfest to get the "upgrade" to work (it is still
having problems with roaming profiles) while the other was
straightforward.


I connect using an ssh tunnel to a stretch server with local port 
forwarding so my connection is to localhost:3388. The ssh command 
specifies the machine I want to connect to (e.g. ssh  -L 
3388:10.0.0.25:3389). This way I never expose a Windows machine to the 
Internet. The only way in is through the server.
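
With placeholder names filled in, the tunnel plus one RDP client invocation
might look like:

  # forward local port 3388 to the Windows box's RDP port via the stretch server
  ssh user@stretch-server.example.com -L 3388:10.0.0.25:3389

  # then connect any RDP client to the local end, e.g. FreeRDP
  xfreerdp /v:localhost:3388 /u:windowsuser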


The machine that gave me the trouble on the upgrade connects without a
problem using KRDC but not Remmina or Vinagre. The other computer
connects without a problem using Vinagre but not KRDC or Remmina.


Remmina gives a message saying "Unable to connect to RDP server 
localhost" when I try connecting to either machine. Vinagre on the first 
machine brings up an authentication dialog then crashes. KRDC does the
same thing when I try to connect to the second machine.


I'm probably going to "upgrade" both to the latest Windows 10 version in 
the coming weeks, but for now I am puzzled about why I have to use 2 
different programs to connect to 2 machines that are largely identical.




Re: strange behaviour with RDP clients connecting to Windows 10 machines

2019-06-14 Thread Gary Dale

On 2019-06-14 1:49 a.m., john doe wrote:

On 6/14/2019 5:46 AM, Gary Dale wrote:

I recently had the unpleasant experience of "upgrading" a pair of
Windows 7/Pro computers to Windows 10/Pro - the 9-month-old version, not
the latest install image. That's when I discovered the Windows 10
"feature" that it doesn't allow VNC connections when the monitor is
turned off (or not attached).

I enabled Remote Desktop services on both machines and can connect to
them when the monitor is off using RDP. However, even though I installed
the same version of Windows 10/Pro on each, they behave differently. One
was a long, drawn-out slugfest to get the "upgrade" to work (it is still
having problems with roaming profiles) while the other was
straightforward.

I connect using an ssh tunnel to a stretch server with local port
forwarding so my connection is to localhost:3388. The ssh command
specifies the machine I want to connect to (e.g. ssh  -L
3388:10.0.0.25:3389). This way I never expose a Windows machine to the
Internet. The only way in is through the server.

The machine that gave me the trouble on the upgrade connects without a
problem using KRDC but not Remmina or Vinagre. The other computer
connects without a problem using Vinagre but not KRDC or Remmina.

Remmina gives a message saying "Unable to connect to RDP server
localhost" when I try connecting to either machine. Vinagre on the first
machine brings up an authentication dialog then crashes. KRDC does the
same thing when I try to connect to the second machine.

I'm probably going to "upgrade" both to the latest Windows 10 version in
the coming weeks, but for now I am puzzled about why I have to use 2
different programs to connect to 2 machines that are largely identical.


Not really an answer, but depending on what you need VNC for, one alternative
would be to use Cygwin as an ssh server on the Windows boxes.

I would not spend my time on non-up-to-date systems when Windows is
involved; one can only hope that the issues you are facing are fixed on
up-to-date Windows systems! :)


1) The Windows boxes are up to date. Microsoft releases a new version of 
Windows 10 every 6 months but continues to release updates for the older 
versions. This is similar to what Ubuntu does except that Microsoft 
doesn't label some versions as LTS and so far hasn't dropped support
for any version of Windows 10.


2) I don't want to directly connect to a Windows box. That would require 
opening up more ports on the router and giving hackers more targets.




--
John Doe






Re: strange behaviour with RDP clients connecting to Windows 10 machines

2019-06-14 Thread Gary Dale

On 2019-06-14 3:45 a.m., Curt wrote:

On 2019-06-14, john doe  wrote:

Not really an answer, but depending on what you need VNC for, one alternative
would be to use Cygwin as an ssh server on the Windows boxes.


He might try removing ~/.config/freerdp/known_hosts (after backing it
up) or commenting out or deleting the offending host key.

On my system it is known_hosts2 but I get the same results. Don't 
forget, I'm connecting to localhost while the ssh tunnel provides the 
route to the real host.
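
A cautious way to test that suggestion, given that the file name varies
between FreeRDP versions as noted:

  # move the host-key cache aside instead of deleting it; FreeRDP will
  # prompt again and recreate the file on the next connection
  mv ~/.config/freerdp/known_hosts2 ~/.config/freerdp/known_hosts2.bak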



