Re: Backup Times on a Linux desktop

2019-11-04 Thread Alessandro Baggi

On 02/11/19 20:24, Konstantin Nebel wrote:

Hi,

this is basically a question about what you guys prefer and do. I have a Linux
desktop, and recently I decided to buy a Raspberry Pi 4 (great device), and
already after a couple of days I don't know how I lived without it. So why the
Raspberry Pi?

In the past I decided not to do backups on purpose. I decided that the data on
my local computer is not important, and to store my important stuff in a
Nextcloud I host for myself and do backups of that. And for a long period of
time I was just fine with it.

Now I attached a 4 TB drive to my Pi and I decided, what the heck, why not
do backups now.

So now I am thinking: how should I approach backups? Windows does backups
magically and reminds me when they didn't run for a while. I like that
attitude.

On Linux, with all that decision freedom, it can be good and bad because you
have to think about things :D

(SKIP THIS IF YOU DON'T WANT TO READ TOO MUCH) ;)
So I could do the backup on logout, for example, but I am not sure whether
that would be annoying, so I'd like to have your opinion. Oh, and yeah: I like
to turn off my computer at night, so a backup running at night is not really
an option unless I do wake-on-LAN, run the backup and then turn off. But right
now I dual boot with Windows as the default (for games, shame on me), and I
might switch, because first, gaming on Linux is really becoming good, and
second, I could buy a second GPU for my Linux system and forward my GPU to a
Windows VM running my games in 3D... especially after buying a Ryzen 3900X
(that's a monster of a CPU).

Whoever read till the end, I'm thankful and ready to hear your opinion.


Cheers
Konstantin



Hi Konstantin,
In my Linux experience I have found several solutions for backup.
First of all: rsync.

Scripted rsync is well suited for your situation. Remember that rsync is
not a backup tool/system on its own; it is very helpful when you need to
sync files between hosts. On top of this you can use the --backup option,
which saves the last copy of a file in a different dir before it is
overwritten by the new copy. You can use SSH to add encryption during
transfer. If you add a catalog and configuration, you can use it for
multiple clients. In the past I ran my own scripted rsync backup tool,
with catalog, prejob/postjob scripts etc.



Then I encountered bacula. Bacula is a beast: complex, hard to configure
the first time, but very powerful. It supports pooling, scheduling,
mailing, encryption, multiple clients, prejob/postjob scripts on server
and on client, storage on tape or disk, volume recycling, a client GUI, a
Windows client, a web interface and much more, and it has its own
cron-like scheduler that works very well.
I used it for several servers and it works great. In some situations I
prefer to run rsync to a local machine before running the backup, because
on large datasets the backup takes more time and more network bandwidth,
plus all the operations like stopping services, creating an LVM snapshot
etc. With large datasets rsync can sync files very quickly, so I can block
my service for a very small amount of time and then perform the backup
locally on the synced dataset.



There are also other backup tools like rsnapshot (based on rsync), and I
think that is the best solution for you. There are also BareOS (a fork of
bacula), amanda, restic, duplicity, BackupPC and borg.


Borg seems very promising, but it performs only push requests at the moment
and I need pull. It offers deduplication, encryption and much more.


One word on deduplication: it is a great feature to save space, and with
deduplication compression ops (which could require much time) are avoided,
but remember that with deduplication across multiple backups only one
version of a file is stored. So if this file gets corrupted (for whatever
reason), it will be compromised in all previously performed backup jobs,
so the file is lost. For this reason I try to avoid deduplication on
important backup datasets.


My 2 cents.





Re: Got a puzzle here

2019-11-04 Thread Greg Wooledge
On Fri, Nov 01, 2019 at 11:06:26PM +0100, to...@tuxteam.de wrote:
> That will depend on whether apache is compiled with tcpwrappers (that's
> the library implementing the hosts.{allow,deny} policies). I don't
> know whether Debian's distribution does that (perhaps others will).

It's not.

arc3:~$ dpkg -l \*apache\* | grep '^.i'
ii  apache2                   2.4.38-3+deb10u3      i386  Apache HTTP Server
ii  apache2-bin               2.4.38-3+deb10u3      i386  Apache HTTP Server (modules and other binary files)
ii  apache2-data              2.4.38-3+deb10u3      all   Apache HTTP Server (common files)
ii  apache2-utils             2.4.38-3+deb10u3      i386  Apache HTTP Server (utility programs for web servers)
ii  libapache2-mod-authnz-pam 1.2.0-1               i386  PAM authorization checker and PAM Basic Authentication provider
ii  libapache2-mod-php        2:7.3+69              all   server-side, HTML-embedded scripting language (Apache 2 module) (default)
ii  libapache2-mod-php5       5.6.30+dfsg-0+deb8u1  i386  server-side, HTML-embedded scripting language (Apache 2 module)
ii  libapache2-mod-php7.0     7.0.33-0+deb9u3       i386  server-side, HTML-embedded scripting language (Apache 2 module)
ii  libapache2-mod-php7.3     7.3.9-1~deb10u1       i386  server-side, HTML-embedded scripting language (Apache 2 module)
arc3:~$ for i in apache2 apache2-bin apache2-data apache2-utils; do apt-cache show "$i" | grep wrap; done
arc3:~$



Re: Got a puzzle here

2019-11-04 Thread Greg Wooledge
On Fri, Nov 01, 2019 at 06:46:25PM -0400, Gene Heskett wrote:
> I'll make sure its installed. Right now. But that is a problem:
> root@coyote:etc$ apt install tcpwrappers

... no, Gene.

TCP wrappers is a *library*, and its package name in Debian is libwrap0.

wooledg:~$ apt-cache search tcp wrappers
fakeroot - tool for simulating superuser privileges
libfakeroot - tool for simulating superuser privileges - shared libraries
libauthen-libwrap-perl - module providing access to the TCP Wrappers library
python-tcpwrap - Python interface for libwrap0 (TCP wrappers)
ruby-proxifier - add support for HTTP or SOCKS proxies
sendmail - powerful, efficient, and scalable Mail Transport Agent (metapackage)
sendmail-bin - powerful, efficient, and scalable Mail Transport Agent
libwrap0 - Wietse Venema's TCP wrappers library
libwrap0-dev - Wietse Venema's TCP wrappers library, development files
ucspi-tcp - command-line tools for building TCP client-server applications
ucspi-tcp-ipv6 - command-line tools for building TCP client-server applications (IPv6)

(At least learn how to use the basic Debian utilities.)

A given program is either built with libwrap, or it isn't.  You can't
just install it and have it affect programs that aren't built to use it.

(It actually has a second mode of operation, though -- in a service
manager like inetd or xinetd, you can use TCP wrappers as an actual
wrapper program that inetd invokes.  Then the wrapper can validate
whether it wants to continue this connection or not, and if it chooses
to allow the connection, it will exec the actual daemon that it's
wrapping, e.g. in.ftpd or in.telnetd or some other relic of the bronze
age.)
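For illustration, that wrapper mode was typically configured with inetd.conf
lines like the following (the service choices are purely examples of that
bronze-age setup, not a recommendation):

```
# /etc/inetd.conf -- illustrative only: inetd runs tcpd, which consults
# /etc/hosts.allow and /etc/hosts.deny, then execs the real daemon.
ftp     stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/sbin/in.ftpd
telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/sbin/in.telnetd
```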

(None of this applies to Apache, which is NOT linked with libwrap0, and
which is NOT launched by a service manager.  It's a standalone daemon
that does its own socket listening, so there's no place to insert a
chain-loading wrapper program.)



Re: Backup Times on a Linux desktop

2019-11-04 Thread Jonathan Dowland



I'll respond on the issue of triggering the backup, rather than the
specific backup software itself, because my solution for triggering
is separate from the backup software I use (rdiff-backup).

I trigger (some) backup jobs via systemd units that are triggered by
the insertion of my removable backup drive. So I would suggest that
instead of doing a network backup to the 4 TB drive on the other side of
your Pi, you could attach the drive directly to your computer when you
want to initiate a backup. This doesn't address your desire to have it
happen in the background, though, because you would still need to
remember (or prompt yourself) to attach the drive. I provide the details
anyway in case they are interesting.

My "backup-exthdd.service" is what performs the actual backup job:

   [Unit]
   OnFailure=status-email-user@%n.service blinkstick-fail.service
   Requires=systemd-cryptsetup@extbackup.service
   After=systemd-cryptsetup@extbackup.service

   [Service]
   Type=oneshot
   ExecStart=/bin/mount /extbackup
   ExecStart=
   ExecStop=/bin/umount /extbackup
   ExecStop=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color green

   [Install]
   WantedBy=dev-disk-by\x2duuid-e0eed9b6\x2d03f1\x2d41ed\x2d80a4\x2dc7cc4ff013c3.device

(The mount and umount Execs there shouldn't be needed; they should be
addressed by systemd unit dependencies, but in practice they were
necessary when I set this up. That was a while ago, and systemd may
behave differently now.)

My external backup disk has an encrypted partition on it, so the job
above actually depends upon the decrypted partition. The job
"systemd-cryptsetup@extbackup.service" handles that. The skeleton of the
job was written automatically by systemd-cryptsetup-generator, based on
the content of /etc/crypttab; I then had to adapt it further. The entirety
of it is:

   [Unit]
   Description=Cryptography Setup for %I
   SourcePath=/etc/crypttab
   DefaultDependencies=no
   Conflicts=umount.target
   BindsTo=dev-mapper-%i.device
   IgnoreOnIsolate=true
   After=systemd-readahead-collect.service systemd-readahead-replay.service cryptsetup-pre.target
   Before=cryptsetup.target
   BindsTo=dev-disk-by\x2duuid-e0eed9b6\x2d03f1\x2d41ed\x2d80a4\x2dc7cc4ff013c3.device
   After=dev-disk-by\x2duuid-e0eed9b6\x2d03f1\x2d41ed\x2d80a4\x2dc7cc4ff013c3.device
   Before=umount.target
   StopWhenUnneeded=true

   [Service]
   Type=oneshot
   RemainAfterExit=yes
   TimeoutSec=0
   ExecStart=/lib/systemd/systemd-cryptsetup attach 'extbackup' '/dev/disk/by-uuid/e0eed9b6-03f1-41ed-80a4-c7cc4ff013c3' '/root/exthdd.key' 'luks,noauto'
   ExecStop=/lib/systemd/systemd-cryptsetup detach 'extbackup'

So when the removable disk device with the UUID
e0eed9b6-03f1-41ed-80a4-c7cc4ff013c3 appears on the system, its
appearance causes systemd to start the "backup-exthdd.service" job,
which depends upon the bits to enable the encrypted volume.

(The "blinkstick-fail.service" and ExecStop=/usr/local/bin/blinkstick…
lines relate to a notification system I have: this is my headless NAS,
and the "blinkstick" is a little multicolour LED attached via USB. In
normal circumstances it is switched off. When a job is running it
changes to a particular colour; when the job finishes successfully, it
turns green, indicating I can unplug the drive (it's all unmounted
etc.); if anything goes wrong, it turns red.)



Re: Backup Times on a Linux desktop

2019-11-04 Thread Jonathan Dowland

On Sun, Nov 03, 2019 at 02:47:46AM -0500, Gene Heskett wrote:

Just 4 or 5 days ago, I had to recover the linuxcnc configs from a backup
of the pi3: making a scratch dir here at home, then scanning my database
for the last level 0 of the pi3b, pulling that out with amrecover, then
copying what I needed back to the rpi4 now living in my Sheldon lathe's
control box. File moving done by an old friend, mc, and sshfs mounts.
Totally painless.


As a former Amanda user in a professional setting (thankfully now deep
in my past), I read most of this with a mixed sense of nostalgia (oh yes,
I remember that) and pleasure that I no longer have to put up with it,
although once I got to "totally painless" I almost spat out my tea.


Since you are all set up already and it's working great for you, I
wouldn't suggest you change anything; but for anyone who isn't already
invested in Amanda, the process you describe is considerably more
awkward than what many more modern tools offer.



Re: neovim and less

2019-11-04 Thread Jonathan Dowland

On Sat, Nov 02, 2019 at 02:27:19PM -0400, R Ransbottom wrote:

Issuing an ex command like

   :! perldoc -f close

or

   :! cat some_file | less

brings me directly to the end of the file output, leaving me with
the nvim message:

   Press ENTER or type command to continue

requiring me to navigate to the start of the output.  Less does not
do this when invoked from bash.


Side-stepping the issue a little, but you could consider opening a new
scratch buffer and then reading the output of your shell command into
it, then just navigating that buffer in nvim directly, rather than using
an external pager. E.g.

   :ene
   :r! perldoc -f close

Or, more simply for your cat|less example, open the file in nvim
directly, in a new pane if you wish:

   :sp
   :e some_file



Re: Got a puzzle here

2019-11-04 Thread Gene Heskett
On Monday 04 November 2019 08:45:42 Greg Wooledge wrote:

> On Fri, Nov 01, 2019 at 11:06:26PM +0100, to...@tuxteam.de wrote:
> > That will depend on whether apache is compiled with tcpwrappers
> > (that's the library implementing the hosts.{allow,deny} policies). I
> > don't know whether Debian's distribution does that (perhaps others
> > will).
>
> It's not.

Oh fudge, no wonder my machinations with /etc/hosts.deny have zero
long-term effect.

Does apache2 have its own module that would prevent its responding to an
IPv4 address presented in a .conf file in "xx.xx.xx.xx/24" format? These
bots are not just indexing the site, they are downloading the whole site
non-stop, repeatedly, and have been for over a week now, burning up what
little upload bandwidth I have and blocking access from folks who might
have a legit reason to want this data. The classic definition of a
DDoS.

I've a request in to join the apache2 mailing list. I've also emailed
postmaster@offender's, but the only answer has been from yandex.ru, in
Russian of course. That, to me, is akin to Swahili.

> arc3:~$ dpkg -l \*apache\* | grep '^.i'
> ii  apache2                   2.4.38-3+deb10u3      i386  Apache HTTP Server
> ii  apache2-bin               2.4.38-3+deb10u3      i386  Apache HTTP Server (modules and other binary files)
> ii  apache2-data              2.4.38-3+deb10u3      all   Apache HTTP Server (common files)
> ii  apache2-utils             2.4.38-3+deb10u3      i386  Apache HTTP Server (utility programs for web servers)
> ii  libapache2-mod-authnz-pam 1.2.0-1               i386  PAM authorization checker and PAM Basic Authentication provider
> ii  libapache2-mod-php        2:7.3+69              all   server-side, HTML-embedded scripting language (Apache 2 module) (default)
> ii  libapache2-mod-php5       5.6.30+dfsg-0+deb8u1  i386  server-side, HTML-embedded scripting language (Apache 2 module)
> ii  libapache2-mod-php7.0     7.0.33-0+deb9u3       i386  server-side, HTML-embedded scripting language (Apache 2 module)
> ii  libapache2-mod-php7.3     7.3.9-1~deb10u1      i386  server-side, HTML-embedded scripting language (Apache 2 module)
> arc3:~$ for i in apache2 apache2-bin apache2-data apache2-utils; do apt-cache show "$i" | grep wrap; done
> arc3:~$




Re: Got a puzzle here

2019-11-04 Thread Greg Wooledge
On Mon, Nov 04, 2019 at 09:08:36AM -0500, Gene Heskett wrote:
> Does apache2 have its own module that would prevent its responding to an 
> ipv4 address presented in a .conf file as "xx.xx.xx.xx/24" format?

Well, looking at your larger issue, you might find it more useful
to block these bots based on their user-agent strings.

The first thing you want to do is actually find a log entry from one
of these bots, so you know what you're dealing with.  If you're not
logging user-agent, then you'll want to turn that on first.

Once you have that information, you can google "apache block user agent"
or whatever search terms work best for you.

I'm using nginx on my (real) web site, so I don't have the Apache-specific
knowledge you're looking for.  I do block one type of bot based on
its user-agent.  It's pretty simple in nginx:

greg@remote:/etc/nginx$ cat sites-enabled/mywiki.wooledge.org 
server {
    listen 80;
    listen 443 ssl;
    server_name mywiki.wooledge.org;

    if ($http_user_agent ~ SemrushBot) {
        return 403;
    }
    ...
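For the Apache side, here is a hedged sketch of a roughly equivalent rule
using mod_setenvif and mod_authz_core (the bot name is just an example, and
this is untested against the Debian default config):

```
<Location "/">
    # Tag requests whose User-Agent matches, then deny tagged requests.
    BrowserMatchNoCase "SemrushBot" bad_bot
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Location>
```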



Re: Want to install info node for elisp in emacs on stretch

2019-11-04 Thread Andrei POPESCU
On Ma, 15 oct 19, 17:10:49, Dan Hitt wrote:
> 
> I will keep my oar out of the water about the complex beast, but since 
> I'm in oldstable, does that mean I need to upgrade before too long?  
> (I've been using Debian 9 since February 2017.)

If your expectation is to receive security updates from Debian, you need 
to upgrade before the end of such support[1].

If your expectation is to receive some security support, possibly not 
for all packages you are using, then stretch will most likely be 
included in the LTS project for extended security support[3].

In this case you can continue using stretch for an additional 2 years.

If you don't expect or need any security support, then you can continue 
using stretch for as long as you like, provided you have working 
compatible hardware.

In that case, please disconnect the system from the internet once it no 
longer receives security updates.


[1] one year after the release of the next stable, or the next stable 
release, whichever comes first[2]

[2] for the avoidance of doubt, the next Debian release will most likely 
happen in 2021

[3] https://www.debian.org/lts


Hope this explains,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Got a puzzle here

2019-11-04 Thread tomas
On Mon, Nov 04, 2019 at 09:08:36AM -0500, Gene Heskett wrote:
> On Monday 04 November 2019 08:45:42 Greg Wooledge wrote:
> 
> > On Fri, Nov 01, 2019 at 11:06:26PM +0100, to...@tuxteam.de wrote:
> > > That will depend on whether apache is compiled with tcpwrappers
> > > (that's the library implementing the hosts.{allow,deny} policies). I
> > > don't know whether Debian's distribution does that (perhaps others
> > > will).
> >
> > It's not.
> 
> Oh fudge, no wonder my machinations with /etc/hosts.deny have zero
> long-term effect.
> 
> Does apache2 have its own module that would prevent its responding to an 
> ipv4 address presented in a .conf file as "xx.xx.xx.xx/24" format?

More or less (your request is too specific, the /24 can be an arbitrary
netmask). This has come up already in this thread.

See, e.g. https://httpd.apache.org/docs/2.4/howto/access.html for several
ways to skin that cat.

I can't tell you how to actually weave those configuration snippets
into the Debian-provided config -- it's a long time since I "did"
Apache myself.

I know that Debian breaks the config down into multiple files to ease
separate package configuration. It all lives somewhere under
/etc/apache2; there are subdirectories for configuration snippets
(conf-available and conf-enabled -- the latter being just a link farm to
the former, to ease disabling and enabling of individual config items;
there are commands for that: a2enconf, a2disconf), and likewise for
different sites (if your Apache is serving several sites).

It's bound to be a panoramic ride. Apache config is a heck of
a dungeon. But I think this is where you should start.
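To make that concrete for the CIDR case asked about earlier, a sketch along
the lines of the access-control howto might look like this (the file name
and address range are only illustrative; the netmask can be whatever you
need):

```
# /etc/apache2/conf-available/block-bots.conf (name is just an example)
<Location "/">
    <RequireAll>
        Require all granted
        Require not ip 192.0.2.0/24
    </RequireAll>
</Location>
```

Then, if I remember the Debian tooling right, `a2enconf block-bots`
followed by a reload of apache2 should activate it.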

Cheers
-- t




Re: Backup Times on a Linux desktop

2019-11-04 Thread deloptes
Alessandro Baggi wrote:

> Borg seems very promising, but it performs only push requests at the
> moment and I need pull. It offers deduplication, encryption and much
> more.
> 
> One word on deduplication: it is a great feature to save space, and with
> deduplication compression ops (which could require much time) are
> avoided, but remember that with deduplication across multiple backups
> only one version of a file is stored. So if this file gets corrupted
> (for whatever reason), it will be compromised in all previously
> performed backup jobs, so the file is lost. For this reason I try to
> avoid deduplication on important backup datasets.

Not sure if that is true. For example, you make daily, weekly and monthly
backups (classical). Let's focus on the daily part. On day 3 the file is
broken; you have to recover from day 2. The file is not broken for day 2,
correct?!

> but remember that with deduplication across multiple backups only one
> version of a file is stored.

I do not know how you came to that conclusion. This is not how
deduplication works, at least not according to my understanding. The
documentation describes the backup and deduplication process such that
file chunks are read and compared. If they are different, the new chunk is
backed up. Remember this is done for each backup. If you want to restore a
previous one, the file will obviously be reconstructed from previously
stored/backed-up information.

regards





Re: Got a puzzle here

2019-11-04 Thread Gene Heskett
On Monday 04 November 2019 08:50:20 Greg Wooledge wrote:

> On Fri, Nov 01, 2019 at 06:46:25PM -0400, Gene Heskett wrote:
> > I'll make sure its installed. Right now. But that is a problem:
> > root@coyote:etc$ apt install tcpwrappers
>
> ... no, Gene.
>
> TCP wrappers is a *library*, and its package name in Debian is
> libwrap0.
>
Already installed, claims latest version.

> wooledg:~$ apt-cache search tcp wrappers
> fakeroot - tool for simulating superuser privileges
> libfakeroot - tool for simulating superuser privileges - shared libraries
> libauthen-libwrap-perl - module providing access to the TCP Wrappers library
> python-tcpwrap - Python interface for libwrap0 (TCP wrappers)
> ruby-proxifier - add support for HTTP or SOCKS proxies
> sendmail - powerful, efficient, and scalable Mail Transport Agent (metapackage)
> sendmail-bin - powerful, efficient, and scalable Mail Transport Agent
> libwrap0 - Wietse Venema's TCP wrappers library
> libwrap0-dev - Wietse Venema's TCP wrappers library, development files
> ucspi-tcp - command-line tools for building TCP client-server applications
> ucspi-tcp-ipv6 - command-line tools for building TCP client-server applications (IPv6)
>
> (At least learn how to use the basic Debian utilities.)
>
> A given program is either built with libwrap, or it isn't.  You can't
> just install it and have it affect programs that aren't built to use
> it.
>
> (It actually has a second mode of operation, though -- in a service
> manager like inetd or xinetd, you can use TCP wrappers as an actual
> wrapper program that inetd invokes.  Then the wrapper can validate
> whether it wants to continue this connection or not, and if it chooses
> to allow the connection, it will exec the actual daemon that it's
> wrapping, e.g. in.ftpd or in.telnetd or some other relic of the bronze
> age.)
>
> (None of this applies to Apache, which is NOT linked with libwrap0,
> and which is NOT launched by a service manager.  It's a standalone
> daemon that does its own socket listening, so there's no place to
> insert a chain-loading wrapper program.)

If it's not built to use libwrap0, then I assume it has its own module to
similarly restrict its responses to a specified incoming source address?

And what is it?

Thanks.

Gene Heskett



Re: Got a puzzle here

2019-11-04 Thread tomas
On Mon, Nov 04, 2019 at 09:44:56AM -0500, Gene Heskett wrote:

[...]

> If its not built to use libwrap0, then I assume it has its own module to 
> similarly restrict its response to a specified incoming source address?
> 
> And it is?

See above :)

Much more flexible than tcpwrappers. And once you have that up and
running, you might escalate to fail2ban (hint: get your Apache to
reliably recognize those clients you don't want, be it by host name, IP
address, user agent string, or a combination thereof; then teach your
Apache to log something fail2ban understands. Then unleash fail2ban).

But first things first -- and perhaps the Apache config is enough
for your needs.
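As a sketch of that escalation step: fail2ban ships a stock
`apache-badbots` jail that can be switched on roughly like this (the log
path and ban time are illustrative and may differ on your system):

```
# /etc/fail2ban/jail.local -- illustrative fragment
[apache-badbots]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/access.log
bantime  = 86400
maxretry = 1
```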

Cheers
-- t




Re: Backup Times on a Linux desktop

2019-11-04 Thread Linux-Fan

deloptes writes:


Alessandro Baggi wrote:

> Borg seems very promising, but it performs only push requests at the
> moment and I need pull. It offers deduplication, encryption and much
> more.
>
> One word on deduplication: it is a great feature to save space, and with
> deduplication compression ops (which could require much time) are
> avoided, but remember that with deduplication across multiple backups
> only one version of a file is stored. So if this file gets corrupted
> (for whatever reason), it will be compromised in all previously
> performed backup jobs, so the file is lost. For this reason I try to
> avoid deduplication on important backup datasets.

Not sure if that is true. For example, you make daily, weekly and monthly
backups (classical). Let's focus on the daily part. On day 3 the file is
broken; you have to recover from day 2. The file is not broken for day 2,
correct?!


[...]

I'd argue that you are both right about this. It just depends on where the
file corruption occurs.

Consider a deduplicating system which stores backups in /fs/backup and
reads the input files from /fs/data. If a file in /fs/data is corrupted,
you could always extract it from the backup successfully. If that file
were changed and corrupted, the backup system would no longer consider it
a "duplicate" and would thus store the corrupted content of the file as a
new version. Effectively, while the newest version of the file is
corrupted and thus not useful, it is still possible to recover the old
version of the file from the (deduplicated or not) backup.

The other consideration is corruption on the backup storage volume, i.e.
some files in /fs/backup go bad. In a deduplicated setting, if a single
piece of data in /fs/backup corresponds to many restored files with the
same contents, none of those files is successfully recoverable any longer,
because the backup's internal structure contains corrupted data.

In a non-deduplicated (so to say: redundant) backup system, if parts of
the backup store become corrupted, the damage is likely (but not
necessarily) restricted to only some files upon restoration, and as there
is no deduplication, the "amount of data non-restorable" is likely
related to the "amount of data corrupted"...

As these considerations about a corrupted backup store remain on such a
blurry level, the benefit of avoiding deduplication because of the risk
of losing more files upon corruption of the backup store is possibly
limited. However, given concrete systems, the picture might change
entirely. A basic file-based (e.g. rsync) backup is as tolerant to
corruption as the original "naked" files. For any system maintaining its
own filesystem, that system needs to be studied extensively to find out
how partial corruption affects restorability. In theory, it could carry
additional redundancy data to restore files even in the presence of a
certain level of corruption (e.g. in percent of bytes changed or similar).

This whole thing was actually a reason for writing my own system:
file-based rsync backup was slow, space-inefficient and did not provide
encryption. However, more advanced systems (like borg, obnam?) split files
into multiple chunks and maintain their own filesystem. For me it is not
really obvious how a partially corrupted backup restores with these
systems. For my tool, I chose an approach in between: I store only "whole"
files and do not deduplicate them in any way. However, I put multiple
small files into archives so that I can compress and encrypt them. In my
case, a partial corruption would lose exactly the files from the corrupted
archives, which establishes a relation between the amount of data
corrupted and the amount lost (although in the worst case, "each archive
slightly corrupted", all is lost... to avoid that one needs error
correction, but my tool does not do that [yet?]).

HTH
Linux-Fan



Re: changing desktop manager from gnome to xfce in debian

2019-11-04 Thread Linux-Fan

David writes:


On Mon, 4 Nov 2019 at 08:11, Dan Hitt  wrote:
>
> I still would like a programmatic/command-line way of detecting what
> the desktop is.

Try this command, shown here with the output I get on my system
which uses the LXDE desktop. Whatever output you get will depend
on whatever environment variables your desktop sets.

$ env | grep -i DESKTOP
DESKTOP_SESSION=LXDE
XDG_SESSION_DESKTOP=lightdm-xsession
XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0
XDG_CURRENT_DESKTOP=LXDE
XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session0


Thanks for sharing this. I did not know such a simple means of getting
information about a variety of window managers / desktop environments
existed. It even works for i3 (although only one variable is set, and it
does not correspond to any of those listed above):

$ env | grep -i DESKTOP
DESKTOP_STARTUP_ID=i3/|usr|bin|materm/2155-5-masysma-9_TIME1931107

Btw, I like writing `grep -i DESKTOP`: `-i` makes it case-insensitive, but
by writing `DESKTOP` it is hinted that this value often occurs in all
caps :)

Thanks again
Linux-Fan



Re: Backup Times on a Linux desktop

2019-11-04 Thread Alessandro Baggi

On 04/11/19 15:41, deloptes wrote:

Not sure if that is true. For example, you make daily, weekly and monthly
backups (classical). Let's focus on the daily part. On day 3 the file is
broken; you have to recover from day 2. The file is not broken for day 2,
correct?!


If I'm not wrong, deduplication "is a technique for eliminating duplicate
copies of repeating data".


I'm not a borg expert, but it performs deduplication on data chunks.

Suppose that you back up 2000 files in a day and inside this backup one
chunk is deduplicated and referenced by 300 files. If the deduplicated
chunk is broken, I think you will lose it in all 300 referencing files.
This is not good for me.


If your main dataset has a broken file, no problem: you can recover it
from backups.


If your saved deduplicated chunk is broken, all files that reference it
could be broken. I think also that the same chunk will be used for
successive backups (again because of deduplication), so this single chunk
could be used from backup 1 to backup N.


Borg also has an integrity check, but I don't know if it checks for this.
I read also that an integrity check on a big dataset could require too
much time.


In my mind a backup is a copy of files at a point in time, and if needed a
copy from another point in time can be picked, but it should not be a
reference to a previous copy. Today there are people who make backups on
tape (expensive) for reliability. I run backups on disks. Disks are cheap,
so compression (which requires time during backup and restore) and
deduplication (which adds complexity) are not needed for me, and they
don't really affect my free disk space because I can add a disk.


Rsnapshot uses hardlinks, which is similar.

All these solutions are valid if they fit your needs. You must decide how
important the data inside your backups is, and whether losing a
deduplicated chunk could damage your backup dataset across the timeline.


Ah, if you have multiple servers to back up, I prefer bacula, because it
can pull data from hosts and can back up multiple servers from the same
point (maybe using a separate bacula-sd daemon with dedicated storage for
each client).




Re: Backup Times on a Linux desktop

2019-11-04 Thread Joel Roth
On Sat, Nov 02, 2019, Konstantin Nebel wrote:
> So now I am thinking: how should I approach backups? Windows does
> backups magically and reminds me when they didn't run for a while. I
> like that attitude.
(...) 
> I like to turn off my computer at night, so a backup running at night
> is not really an option unless I do wake-on-LAN, run the backup and
> then turn off. 
(...)

Someone already recommended setting up a cron job for triggering backups
on a regular schedule. That takes care of the automagic part.

These days I use rsync with the --link-dest option to make
complete Time-Machine(tm) style backups using hardlinks to
avoid file duplication in the common case.  In this
scenario, the top-level directory is typically named based
on date and time, e.g. back-2019.11.04-05:32:06.

I usually make backups while the system is running, although
I'm not sure it's considered kosher. It takes around 10% of
CPU on my i5 system.

> Whoever read till the end Im thankful and ready to hear your opinion.
 
> Cheers
> Konstantin

--
Joel Roth



Re: MP30-AR0 arm64 sdcard slot not detected

2019-11-04 Thread Andrei POPESCU
On Vi, 18 oct 19, 18:18:13, Michael Howard wrote:
> I've just re-installed debian (stretch) on the Gigabyte MP30-AR0 board using
> the installer netinst iso (any later install images fail) and the sdcard
> slot is not showing up. The kernel is vmlinuz-4.9.0-11-arm64 and I have also
> rebuilt it ensuring all the MMC options I should need are selected.

You might want to ask on debian-arm.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: [SOLVED] Buster: any graphical browser not depending on systemd?

2019-11-04 Thread Andrei POPESCU
On Sb, 19 oct 19, 18:02:12, to...@tuxteam.de wrote:
> 
> leaves sysvinit-core in place. I double-checked (with apt -s and capturing
> the output).

I browsed a little bit through firefox-esr's dependencies with aptitude, 
but couldn't find a dependency chain to systemd-sysv.

Could you post that output of 'apt -s'?

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Got a puzzle here

2019-11-04 Thread Gene Heskett
On Monday 04 November 2019 09:29:48 Greg Wooledge wrote:

> On Mon, Nov 04, 2019 at 09:08:36AM -0500, Gene Heskett wrote:
> > Does apache2 have its own module that would prevent its responding
> > to an ipv4 address presented in a .conf file as "xx.xx.xx.xx/24"
> > format?
>
> Well, looking at your larger issue, you might find it more useful
> to block these bots based on their user-agent strings.
>
> The first thing you want to do is actually find a log entry from one
> of these bots, so you know what you're dealing with.  If you're not
> logging user-agent, then you'll want to turn that on first.
>
> Once you have that information, you can google "apache block user
> agent" or whatever search terms work best for you.
>
> I'm using nginx on my (real) web site, so I don't have the
> Apache-specific knowledge you're looking for.  I do block one type of
> bot based on its user-agent.  It's pretty simple in nginx:
>
> greg@remote:/etc/nginx$ cat sites-enabled/mywiki.wooledge.org
> server {
> listen 80;
> listen 443 ssl;
> server_name mywiki.wooledge.org;
>
> if ($http_user_agent ~ SemrushBot) {
> return 403;
> }
> ...

And that looks like nginx is a lot easier to program than apache2.  The 
above makes sense. Once I'm functioning again after tomorrow's heart 
valve work, I'll investigate that, probably on a fresh drive and a 
buster 10.1 install.  Thanks Greg.

Cheers, Gene Heskett




Re: Easiest Way to forward an email Message from Linux to a Mac

2019-11-04 Thread Martin McCormick
Bob Weber  writes:
> Why not create a user on the Linux box to receive such emails and have the
> MAC client connect to that user on the Linux box.  You might have to
> install a pop server (popa3d ... easiest to install and configure) or imap
> server (dovecot-imapd ... harder to configure and probably more than you
> need) on the Linux box if one isn't installed already.

It looked for a bit like this should just work in that I entered:

apt-get install dovecot-imapd

The following NEW packages will be installed:
  dovecot-core dovecot-imapd libexttextcat-2.0-0 libexttextcat-data
  liblua5.3-0 libstemmer0d ssl-cert
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 5,344 kB/5,750 kB of archives.
After this operation, 14.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] yes

Then the wheels flew off:

Err:1 http://ftp.us.debian.org/debian buster/main i386 dovecot-core i386 1:2.3.4.1-5
  404  Not Found [IP: 208.80.154.15 80]
Err:2 http://ftp.us.debian.org/debian buster/main i386 dovecot-imapd i386 1:2.3.4.1-5
  404  Not Found [IP: 208.80.154.15 80]
E: Failed to fetch http://ftp.us.debian.org/debian/pool/main/d/dovecot/dovecot-core_2.3.4.1-5_i386.deb  404  Not Found [IP: 208.80.154.15 80]
E: Failed to fetch http://ftp.us.debian.org/debian/pool/main/d/dovecot/dovecot-imapd_2.3.4.1-5_i386.deb  404  Not Found [IP: 208.80.154.15 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?


I ran the following to be safe:

wb5agz martin tmp $ sudo apt-get purge dovecot-imapd
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'dovecot-imapd' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

At least nothing got partly installed but stuff is
obviously not quite right.

Is it safe to try that suggestion?

Martin



Re: Easiest Way to forward an email Message from Linux to a Mac

2019-11-04 Thread Greg Wooledge
On Mon, Nov 04, 2019 at 12:06:44PM -0600, Martin McCormick wrote:
> Then the wheels flew off:
> 
> Err:1 http://ftp.us.debian.org/debian buster/main i386 dovecot-core i386 1:2.3.4.1-5
>   404  Not Found [IP: 208.80.154.15 80]

Either you didn't run "apt-get update" first, or your mirror is out of
sync.  The current version of dovecot-core in buster is 1:2.3.4.1-5+deb10u1.



Re: Backup Times on a Linux desktop

2019-11-04 Thread deloptes
Alessandro Baggi wrote:

> If I'm not wrong deduplication "is a technique for eliminating duplicate
> copies of repeating data".
> 
> I'm not a borg expert and it performs deduplication on data chunk.
> 
> Suppose that you backup 2000 files in a day and inside this backup a
> chunk is deduped and referenced by 300 files. If the deduped chunk is
> broken I think you will lost it on 300 referenced files/chunks. This is
> not good for me.
> 

Look at the explanation by Linux-Fan; I think it is pretty good. It fits one
scenario. However, if your backup system (disks or whatever) is broken, it
cannot be considered a backup system at all.

I think deduplication is a great thing nowadays - people need to back up
TBs, take care of retention, etc. I do not share your concerns at all.

> if your main dataset has a broken file, no problem, you can recovery
> from backups.
> 
> If your saved deduped chunk is broken all files that has reference to it
> could be broken. I think also that the same chunk will be used for
> successive backups (always for deduplication) so this single chunk could
> be used from backup1 to backupN.
> 

This is not true.

> It has also integrity check but don't know if check this. I read also
> that integrity check on bigsized dataset could require too much time.
> 
> In my mind a backup is a copy of file in window time and if needed in
> another window time another copy could be picked but it could not be a
> reference to a previous copy. Today there are people that make backups
> on tape (expensive) for reliability. I run backups on disks. Disks are
> cheap so compression (that require time in backup and restore) and
> deduplication (that add complexity) are not needed for me and they don't
> affect really my free disk space because I can add a disk.
> 

I think it depends how far you want to go - how precious the data is.
Magnetic disks and tapes can be destroyed by an EMP or similar. An SSD,
despite its price, can fail, and when it fails you cannot recover anything.
So ... there are rules for securely preserving backups - but all of this
is very expensive.

> Rsnapshot uses hardlink that is similar.
> 
> All this solutions are valid if them fit your needs. You must choose how
> important are data inside your backups and if losing a chunk deduped
> could make damage to your backup dataset in a timeline.
> 

No, unless the corruption is on the backup server; but if that happens ...
well, you should consider the backup server broken - I do not think it has
anything to do with deduplication.

> Ah if you have multiple server to backup, I prefer bacula because can
> pull data from hosts and can backup multiple server from the same point
> (maybe using for each client a separated bacula-sd daemon with dedicated
> storage).




Re: Easiest Way to forward an email Message from Linux to a Mac

2019-11-04 Thread Martin McCormick
Greg Wooledge  writes:
> Either you didn't run "apt-get update" first, or your mirror is out of
> sync.  The current version of dovecot-core in buster is 
> 1:2.3.4.1-5+deb10u1.

Thank you.  It was the former.  I failed to run apt-get
update but I didn't just forget.  Ever since I upgraded to
buster, I see a line in syslog that goes:

Nov  4 06:10:01 wb5agz systemd[1]: Starting Daily apt upgrade and clean 
activities...

Before buster, I would run sudo apt-get update and then sudo
apt-get upgrade.  I thought the cron job had done this
automatically this morning when it ran.  After running apt-get
update, all was quite well.

Martin



Re: Backup Times on a Linux desktop

2019-11-04 Thread Joel Roth
On Mon, Nov 04, 2019, Charles Curley wrote:
> On Mon, 4 Nov 2019 06:01:54 -1000
> Joel Roth  wrote:
> 
> > These days I use rsync with the --link-dest option to make
> > complete Time-Machine(tm) style backups using hardlinks to
> > avoid file duplication in the common case.  In this
> > scenario, the top-level directory is typically named based
> > on date and time, e.g. back-2019.11.04-05:32:06.
> 
> Take a look at rsnapshot. You have pretty well described it.

Looks like a featureful, capable, and thoroughly debugged
front end to rsync with the --link-dest option. 

Thanks, I'll fool around with this. 

Also for the explanations about file integrity issues when
databases are involved. 

--
Joel Roth



Re: Backup Times on a Linux desktop

2019-11-04 Thread Charles Curley
On Mon, 4 Nov 2019 06:01:54 -1000
Joel Roth  wrote:

> These days I use rsync with the --link-dest option to make
> complete Time-Machine(tm) style backups using hardlinks to
> avoid file duplication in the common case.  In this
> scenario, the top-level directory is typically named based
> on date and time, e.g. back-2019.11.04-05:32:06.

Take a look at rsnapshot. You have pretty well described it.


> 
> I usually make backups while the system is running, although
> I'm not sure it's considered kosher. It takes around 10% of
> CPU on my i5 system.

It's kosher except in a few places where referential integrity is an
issue. The classic here is a database that extends across multiple
files, which means almost all of them.

Referential integrity means keeping the data consistent. Suppose you
send an INSERT statement to a SQL database, and it affects multiple
files. The database writes to the first file. Then your backup comes
along and grabs the files for backup. Then your database writes the
other files. Your backups are broken, and you won't know it until you
restore and test.

There are work-arounds. Shut the database down during backups, or make
it read only during backups. Or tell it to accept writes from clients
but not actually write them out to the files until the backup is over.

Obviously this requires some sort of co-ordination between the backup
software and the software maintaining the files.

Or use Sqlite, which I believe avoids this issue entirely.
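For databases that expose a snapshot command, the "coordinate with the backup" workaround can be a one-liner run just before the file-level backup. With SQLite, for example, the sqlite3 CLI's `.backup` command uses the online-backup API to take a consistent point-in-time copy of a live database. A sketch (the `live.db` and `snap.db` paths are placeholder names):

```shell
#!/bin/sh
# Take a consistent copy of a live SQLite database before the
# file-level backup runs. Both paths below are placeholders.
DB="live.db"
SNAP="snap.db"

# .backup uses SQLite's online-backup API, so other readers and
# writers can keep using the database while the copy is made; the
# snapshot file is then safe for rsync to pick up.
sqlite3 "$DB" ".backup '$SNAP'"
```

The same pattern applies to other databases with their own dump tools (e.g. a `pg_dump` or `mysqldump` run from the backup script), though the exact invocation differs.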


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Cannot open Chromium browser in same workspace (from which it was launched)

2019-11-04 Thread Cindy Sue Causey
On 11/3/19, Vipul  wrote:
> Hi,
>
> I'm facing an issue with **Chromium** web browser (Debian's official
> build). Chromium browser isn't opening in same workspace (workspace from
> which I launched it). It always opens in workspace in which its last
> window was closed, no matter whether that workspace exist? or not.
>   To fix this problem, I've to launch it twice, in first launch, let it
> open in workspace it wants to open; and if this isn't the same workspace
> in which I'm working, launch it again.
>
> Some instances of problem:
> 1). Let's suppose, last window of Chromium is closed in workspace-X. If
> I launched Chromium from workspace-Y; if **Y != X** (both X and Y are
> not equal), my workspace automatically switched to **workspace-X** and a
> Chromium window opens.
> Fix:
> Launch Chromium twice times from **workspace-Y**.
>
> 2). Let's suppose, last window of Chromium is closed in workspace-X. If
> I launched Chromium from workspace-Y and total number of workspace
> currently you're using is Z and **X > Z+1**, it opens in workspace-Y,
> which is an expected behavior.


Hi, Vipul.. This is just a "wild guess", but try looking at what the
desktops think Chromium should be doing. After you've opened Chromium
in any given desktop, try right-clicking over the top of where it
shows on however you have that panel set up.

As an example, *my* primary panel sits at the top of my screen. I
would right-click over Chromium's active entry there.

Some (but likely not all) releases and desktop environments and all
that good shtuff... let you decide if any given program stays open in
only one particular desktop, e.g. the next one to the right, OR in all
of them *IF* you have more than one desktop available.

What I'm thinking is that it's possible you, or someone else if anyone
else uses your computer, accidentally or otherwise may have chosen
that option *IF* it's available on your setup.

If that *CHOICE* for a program's desktop residence is not available, I
unfortunately can't think of anything else that might be causing that
just this second. :)

Good luck solving what's going on.

PS This is one of the very few things that I've never seen a secondary
way of achieving that same effect. If anyone knows of a different way
to do the above, it's always nice to have at least two ways to get to
whatever needs done on any operating system, really. #ThankYou in
advance if anyone knows of anything!

PPS *IF* there is a secondary way to choose a program's desktop
residence, maybe that's how it got changed, too, *IF* that turns out
to be what happened. I'm imagining an errant cursor going click happy
behind the user's back. Has happened to me on more than one.. or
three.. occasions. *Those darn dogs (again)!*

Cindy :)
-- 
Cindy-Sue Causey
Talking Rock, Pickens County, Georgia, USA

* runs with birdseed *



installed nginx, now what? Need startup tut, nginx site won't let me download any docs.

2019-11-04 Thread Gene Heskett
Greetings;

I guess the subject says it all.

Thanks all.
Cheers, Gene Heskett



Re: installed nginx, now what? Need startup tut, nginx site won't let me download any docs.

2019-11-04 Thread Brian
On Mon 04 Nov 2019 at 16:14:17 -0500, Gene Heskett wrote:

> I guess the subject says it all.

Any other user submitting a mail like this to the list would be slated.
And quite correctly. As usual, you've cocked up somewhere.

-- 
Brian.



Re: Got a puzzle here

2019-11-04 Thread tomas
On Mon, Nov 04, 2019 at 12:48:00PM -0500, Gene Heskett wrote:
> On Monday 04 November 2019 09:29:48 Greg Wooledge wrote:
> 
> > On Mon, Nov 04, 2019 at 09:08:36AM -0500, Gene Heskett wrote:

[...]

> And that looks like nginx is a lot easier to program than apache2.

Nearly anything is -- except perhaps Sendmail.

Cheers
-- t


signature.asc
Description: Digital signature


Re: [SOLVED] Buster: any graphical browser not depending on systemd?

2019-11-04 Thread tomas
On Mon, Nov 04, 2019 at 07:04:34PM +0200, Andrei POPESCU wrote:
> On Sb, 19 oct 19, 18:02:12, to...@tuxteam.de wrote:
> > 
> > leaves sysvinit-core in place. I double-checked (with apt -s and capturing
> > the output).
> 
> I browsed a little bit through firefox-esr's dependencies with aptitude, 
> but couldn't find a dependency chain to systemd-sysv.

There is no dependency. Confusion was due to a decision of Apt's
resolver (which I didn't wholly understand).

Cheers
-- t


signature.asc
Description: Digital signature


auxiliary mail client for HTML

2019-11-04 Thread Russell L. Harris

Several times a week I receive a HTML email with numerous links.  Mutt
(or neoMutt, which I am using until I upgrade my Debian installation)
seems not to be a good solution for such messages.

What is a decent, simple GUI client which I can point at my maildir
structure to read such messages and be able to open on the links with
a click?

I do not require SMTP; I plan to use Mutt for any response I send.



Re: neovim and less

2019-11-04 Thread R Ransbottom
Hi Jonathan,


I was more on the path of confirming a bug.  I got the same behavior on
vim and vim-nox on gnome terminal and the console tty.  Updating
everything updateable and finally rebooting fixed it.

So I suspect some kernel issue, but I don't know.  For all my efforts,
I am no wiser just happier--if I'm happier maybe I am--no that's just
stupid. Ha!

Thanks for the suggestions.

rir

On Mon, Nov 04, 2019 at 02:01:01PM +, Jonathan Dowland wrote:
> On Sat, Nov 02, 2019 at 02:27:19PM -0400, R Ransbottom wrote:
> > Issuing a ex command like
> > 
> >:! perldoc -f close
> > 
> > or
> > 
> >:! cat some_file | less
> > 
> > brings me directly to the end of the file output, leaving me with
> > the nvim message:
> > 
> >Press ENTER or type command to continue
> > 
> > requiring me to navigate to the start of the output.  Less does not
> > do this when invoked from bash.
> 
> Side-stepping the issue a little, but you could consider opening a new
> scratch buffer and then reading the output of your shell command into
> it, then just navigating that buffer in nvim directly, rather than using
> an external pager. E.g.
> 
>:ene
>:r! perldoc -f close
> 
> Or more simply for your cat|less example, simply open the file in nvim
> directly, in a new pane if you wish
> 
>:sp
>:e some_file
> 



Re: auxiliary mail client for HTML

2019-11-04 Thread Jude DaShiell
On Mon, 4 Nov 2019, Russell L. Harris wrote:

> Date: Mon, 4 Nov 2019 18:22:58
> From: Russell L. Harris 
> To: debian-user@lists.debian.org
> Subject: auxiliary mail client for HTML
> Resent-Date: Mon,  4 Nov 2019 23:43:57 + (UTC)
> Resent-From: debian-user@lists.debian.org
>
> Several times a week I receive a HTML email with numerous links.  Mutt
> (or neoMutt, which I am using until I upgrade my Debian installation)
> seems not to be a good solution for such messages.
>
> What is a decent, simple GUI client which I can point at my maildir
> structure to read such messages and be able to open on the links with
> a click?
>
> I do not require SMTP; I plan to use Mutt for any response I send.
>
urlscan and a macro to bring urlscan up once a link got highlighted would
help if you still want to use mutt or neomutt.
>
>

--



Re: auxiliary mail client for HTML

2019-11-04 Thread Russell L. Harris

On Mon, Nov 04, 2019 at 09:46:14PM -0500, Jude DaShiell wrote:

urlscan and a macro to bring urlscan up once a link got highlighted would
help if you still want to use mutt or neomutt.


I am using urlscan.  I would be happy to forward to you one or two
sample messages; each has a dozen links, and urlscan is not much help
in deciding which link to select.  Sometimes I can access the links
displayed by urlscan, but sometimes none of the links work.

I am not looking for a replacement for Mutt.  I simply wish to have
another client which is able to look at the same maildir and display
the message.  Again, any reply I make always is in plain text, via
Mutt.

I installed Thunderbird -- what a huge truck-load of stuff!  But the
configuration wizard would not allow me simply to point Thunderbird to
the maildir to which getmail delivers incoming messages.



Re: auxiliary mail client for HTML

2019-11-04 Thread Jude DaShiell
On Tue, 5 Nov 2019, Russell L. Harris wrote:

> Date: Mon, 4 Nov 2019 22:04:57
> From: Russell L. Harris 
> To: Jude DaShiell 
> Cc: debian-user@lists.debian.org
> Subject: Re: auxiliary mail client for HTML
> Resent-Date: Tue,  5 Nov 2019 03:51:38 + (UTC)
> Resent-From: debian-user@lists.debian.org
>
> On Mon, Nov 04, 2019 at 09:46:14PM -0500, Jude DaShiell wrote:
> >urlscan and a macro to bring urlscan up once a link got highlighted would
> >help if you still want to use mutt or neomutt.
>
> I am using urlscan.  I would be happy to forward to you one or two
> sample messages; each has a dozen links, and urlscan is not much help
> in deciding which link to select.  Sometimes I can access the links
> displayed by urlscan, but sometimes none of the links work.
>
> I am not looking for a replacement for Mutt.  I simply wish to have
> another client which is able to look at the same maildir and display
> the message.  Again, any reply I make always is in plain text, via
> Mutt.
>
> I installed Thunderbird -- what a huge truck-load of stuff!  But the
> configuration wizard would not allow me simply to point Thunderbird to
> the maildir to which getmail delivers incoming messages.
>
>
Could those urls be in different formats in those messages?
That will pose problems for some of these url grabbers.

>

--



Re: Want to install info node for elisp in emacs on stretch

2019-11-04 Thread Dan Hitt
On Mon, Nov 4, 2019 at 6:31 AM Andrei POPESCU 
wrote:

> On Ma, 15 oct 19, 17:10:49, Dan Hitt wrote:
> >
> > I will keep my oar out of the water about the complex beast, but since
> > i'm in oldstable, does that mean i need to upgrade before too long?
> > (I've been using debian 9 since February 2017.)
>
> .
>
> If you don't expect or need any security support then you can continue
> using stretch for as long as you like, provided you have working
> compatible hardware.
>
> In this case please disconnect the system from the internet when it
> doesn't receive security updates anymore.
>
>
Hi Andre,

Thanks for your mail and the references.

I did upgrade to buster.  When buster itself is no longer getting security
support, then i'm assuming sufficient disconnection would be just to make
sure the host itself is not connected to a cable modem, although it might
be on a local network in which some hosts are  connected (provided that all
of the other hosts are current).

dan


Re: auxiliary mail client for HTML

2019-11-04 Thread Mark Rousell
On 05/11/2019 03:04, Russell L. Harris wrote:
> I installed Thunderbird -- what a huge truck-load of stuff!  But the
> configuration wizard would not allow me simply to point Thunderbird to
> the maildir to which getmail delivers incoming messages.

Thunderbird has *experimental* maildir (actually maildir-like) support
but it is only intended to be used as Thunderbird's private mail store.
It is not intended to point to a maildir put on disk by some other program.

I emphasised "experimental" above because the maildir support in TB is
not complete and is not yet reliable in all scenarios.

So I don't think TB would be a solution to access your current maildir
structure.

Set up a local IMAP server instead? :-)

-- 
Mark Rousell
 
 
 



Re: auxiliary mail client for HTML

2019-11-04 Thread Russell L. Harris

On Tue, Nov 05, 2019 at 04:10:17AM +, Mark Rousell wrote:

Set up a local IMAP server instead? :-)


I found a HOWTO:
https://www.linux.com/news/how-build-local-imap-server/
but I have not read though it.

Is it necessary to route all my mail through the local IMAP server?
Mail with getmail and Mutt now is running nicely, and I am hesitant to
monkey with a system which I understand and with which I am comfortable.

I am thinking that, inasmuch as I have web hosting for my weather
station, and the web hosting agreement includes email (which I have
not bothered to set up, because I have not had need for it), the
easiest solution is to set up an email account on the URL of the
weather station web site, forward problematic messages to that
account, then configure a GUI mail client for that mail account, or
else use the webmail interface of the ISP.  



Re: raspberry pi installation with debian installer

2019-11-04 Thread Gunnar Wolf
basti wrote [Thu, Oct 31, 2019 at 09:58:11PM +0100]:
> Hello Mailinglist,
> Hello Gunnar,

Hi, and thanks for the explicit mention :-]

> I got the debian installer running on my rpi3.
> This post is just to inform about the general possibility and for
> documentation purposes on the debian wiki.

OK, this is quite exciting news! It's great to see the Raspberries
being closer to a first-tier architecture in Debian. TBH, I believe
for almost all RPi users it will be easier to use the installed images
— But yes, I can perfectly understand many will feel this to be better
and more official.

Given you already did all this legwork... Could you add this
information to the Wiki yourself? It's always better if the person
that did the work and has the hands-on knowledge does it.

> test with arm64 mode on rpi3b+
> 
> you need:
> - sdcard with binary blob vfat partition (I use it from
> https://wiki.debian.org/RaspberryPiImages)
> - usb stick for arm64 installer
> 
> todo:
> - download arm64-netinstall iso
> - copy iso to usb stick (cp debian-10.1.0-arm64-netinst.iso /dev/sdx)
> - copy vmlinuz and initrd.gz from stick to sdcard
> - edit config.txt to boot vmlinuz and initrd.gz
> - insert sdcard and usb stick to raspi and start

Umh, this looks like quite a bit of "legwork". I understand you are
basically proving it is _possible_ to boot into d-i, but this all
should probably be prepared into a first-blob bit of a hybrid image:

https://www.debian.org/releases/stable/arm64/ch04s03.en.html

> - ignore missing firmware, brcmfmac43455-sdio.bin is wlan, can be
> installed later (firmware-brcm80211)

AIUI, you can also drop this file in your USB drive and have it picked
up by the installer.

> todo:
> - not all languages are shown correctly in installer

This seems quite odd...

Thanks a lot!



Re: auxiliary mail client for HTML

2019-11-04 Thread Mark Rousell
Before I go on, I should say that this is now an area with which I am
not overly familiar in detail. I know Thunderbird very well but I am not
familiar in detail with getmail, Dovecot or maildir structures. However,
I know the principles and I'll do my best to reply usefully below.

On 05/11/2019 04:44, Russell L. Harris wrote:
> On Tue, Nov 05, 2019 at 04:10:17AM +, Mark Rousell wrote:
>> Set up a local IMAP server instead? :-)
>
> I found a HOWTO:
> https://www.linux.com/news/how-build-local-imap-server/
> but I have not read though it.

I should also say that I only suggested setting up a local IMAP server
as a way to let Thunderbird access your email. My smiley was because
this might be overkill solely in order to see emails with HTML content!
But it's not actually that unreasonable, come to think of it.

I just had a quick look at this HOWTO and it seems to focus on using
fetchmail to store mail in mbox format with Dovecot as the IMAP server.
From your comments, you're using getmail and maildir so you could not
follow the HOWTO exactly but you can still apply the same principles to
place email in a place where Dovecot can find it. I have not done it
myself but I understand that you should be able to configure Dovecot to
look at your getmail's maildir structure.

I'm going to refer to Dovecot as your local IMAP server below on the
assumption that you choose Dovecot to do the IMAP job, but other IMAP
servers are available.

> Is it necessary to route all my mail through the local IMAP server?

No (but read on). As I understand it, at the moment you are using
getmail to collect mail from your ISP (presumably using IMAP or POP3)
and store it locally in a maildir structure. Mutt reads from the maildir
structure.

If you were to install Dovecot as an IMAP server alongside of this then
(as I understand it, as I've not done it myself) Dovecot could also read
from the same maildir structure.

Mail clients like Thunderbird could then access your local Dovecot IMAP
server, which in turn would show them the contents of your maildir
structure.

So the Dovecot IMAP server (and any mail clients like Thunderbird that
connect to it) could see all your email (whatever is in your maildir
structure) but email would not be routed through the IMAP server, as
such. It's just that the IMAP server could access it as needed.

> Mail with getmail and Mutt now is running nicely, and I am hesitant to
> monkey with a system which I understand and with which I am comfortable.

Yup, I understand. As above, you need not (as far as I can tell) alter
your working system. Getmail, the maildir structure, and Mutt should
continue to work. The Dovecot (or other IMAP server) would just access
the maildir structure as needed.

Note that there may be complications in adding Dovecot to your existing
getmail, maildir, Mutt set up but in principle it should work.
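The key Dovecot setting for this is `mail_location`. A sketch, assuming getmail delivers to `~/Maildir` (adjust the path to wherever getmail actually writes, and note the config file location can vary between Debian releases):

```
# In /etc/dovecot/conf.d/10-mail.conf (path is typical, not guaranteed):
# point Dovecot at the existing maildir instead of its own mail store.
mail_location = maildir:~/Maildir
```

With that in place, Dovecot serves the same messages over IMAP that Mutt reads directly from disk, without moving or converting anything.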

> I am thinking that, inasmuch as I have web hosting for my weather
> station, and the web hosting agreement includes email (which I have
> not bothered to set up, because I have not had need for it), the
> easiest solution is to set up an email account on the URL of the
> weather station web site, forward problematic messages to that
> account, then configure a GUI mail client for that mail account, or
> else use the webmail interface of the ISP. 

Yup, you could do that. But if you are happy with processing you email
locally then I personally think it would be preferable to keep doing so
with your own local IMAP server.

-- 
Mark Rousell
 
 
 



Re: auxiliary mail client for HTML

2019-11-04 Thread Charles Curley
On Mon, 4 Nov 2019 23:22:58 +
"Russell L. Harris"  wrote:

> Several times a week I receive a HTML email with numerous links.  Mutt
> (or neoMutt, which I am using until I upgrade my Debian installation)
> seems not to be a good solution for such messages.
> 
> What is a decent, simple GUI client which I can point at my maildir
> structure to read such messages and be able to open on the links with
> a click?

You might look at Claws-Mail. It will handle maildir. It will render
HTML emails as plain text, and has plugins if you want to get fancier.

> 
> I do not require SMTP; I plan to use Mutt for any response I send.

Claws-mail does not do outgoing HTML mail; text email only. It might
serve your purpose there as well.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: auxiliary mail client for HTML

2019-11-04 Thread darb
* Russell L. Harris wrote:
> Several times a week I receive a HTML email with numerous links.  Mutt
> (or neoMutt, which I am using until I upgrade my Debian installation)
> seems not to be a good solution for such messages.
> 
> What is a decent, simple GUI client which I can point at my maildir
> structure to read such messages and be able to open on the links with
> a click?
 
The solution I use to view html email that doesn't play nicely with w3m
autoview or urlscan is to pipe the message to firefox with this mailcap
entry:

text/html; firefox %s && sleep 2;

Open the attachments menu (v) select the text/html message and hit
enter. The html message will then open in firefox and any links can be
clicked as required.


signature.asc
Description: PGP signature


Re: can not update/upgrade Debian 10 when using apt-cacher-ng

2019-11-04 Thread john doe
On 10/29/2019 4:10 PM, to...@tuxteam.de wrote:
> On Tue, Oct 29, 2019 at 03:58:04PM +0100, john doe wrote:
>> On 10/29/2019 2:01 PM, to...@tuxteam.de wrote:
>>> On Tue, Oct 29, 2019 at 01:36:35PM +0100, john doe wrote:
 On 10/29/2019 12:50 PM, Charles Curley wrote:
> On Tue, 29 Oct 2019 11:45:02 +0100
> john doe  wrote:
>
>> /etc/apt/sources.list:
>>
>> http://HOSTNAME-APT-CACHER-NG>:3142/debian-security buster/updates
>
> [...]
>
>> Yes, the hostname is not the one I use.
>
> Phew :-)
>
>> For now, method 2 is used when '/etc/apt/sources.list' is created by the
>> Debian installer, so method 1 is not an option.
>
> Not clear why, but... let's assume that.
>
>> Everything else is working but not downloading the upgrade through
>> apg-cacher-ng.
>>
>> Is anyone using a proxy to download the upgrade(s) and what format is to
>> be used in '/etc/apt/sources.list'?
>
> I have used that in the past, but I do prefer the cache specification
> in apt config these days.
>
> What happens when you point your browser at your cache instance?
>
> What do the cache log files say? (Find them typically in
> /var/log/apt-cacher-ng/apt-cacher.err and ...log).
>

I have filed a bug report about this (1).

In a nutshell, the secdeb remap rule is missing the directory spec.
Adding the directory spec ('/debian-security') right before the first
semicolon (';') fixes the issue, so the secdeb line should look like:

Remap-secdeb: security.debian.org /debian-security ; security.debian.org deb.debian.org/debian-security
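For anyone applying the same fix: apt-cacher-ng's remap rules live in its
main configuration file, typically /etc/apt-cacher-ng/acng.conf on Debian,
and the daemon has to be restarted before the corrected rule takes effect.
A sketch of the relevant excerpt (hostnames as in the rule above):

```
# /etc/apt-cacher-ng/acng.conf (excerpt)
# Map security.debian.org requests, including the /debian-security
# directory spec, onto the deb.debian.org mirror path:
Remap-secdeb: security.debian.org /debian-security ; security.debian.org deb.debian.org/debian-security
```

followed by something like 'systemctl restart apt-cacher-ng'.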

Thanks to everyone who has chimed in.


1)  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=944114.

--
John Doe



Re: auxiliary mail client for HTML

2019-11-04 Thread Russell L. Harris

On Tue, Nov 05, 2019 at 05:30:10AM +, Mark Rousell wrote:

> Before I go on, I should say that this is now an area with which I am not
> overly familiar in detail. I know Thunderbird very well but I am not familiar
> in detail with getmail, Dovecot or maildir structures.


No problem; I know getmail and maildir, but my last usage of
Thunderbird was ten years ago.


> I should also say that I only suggested setting up a local IMAP server as a way
> to let Thunderbird access your email. My smiley was because this might be
> overkill solely in order to see emails with HTML content! But it's not actually
> that unreasonable, come to think of it.


For some reason which I do not immediately recall, I chose POP3 over
IMAP the last time I had the option.

As to overkill, I lived for five years or more with the webmail client
of my ISP; compared to that, anything else is a pleasure.


> I just had a quick look at this HOWTO and it seems to focus on using fetchmail
> to store mail in mbox format with Dovecot as the IMAP server. From your
> comments, you're using getmail and maildir so you could not follow the HOWTO
> exactly but you can still apply the same principles to place email in a place
> where Dovecot can find it. I have not done it myself but I understand that you
> should be able to configure Dovecot to look at your getmail's maildir
> structure.


I have heard that getmail is more reliable than fetchmail, and I know
from personal experience that getmail is rock solid.  I may hold the
world record for the number of messages downloaded in a nonstop
marathon lasting several days; this was a few years back.


> I'm going to refer to Dovecot as your local IMAP server below on the assumption
> that you choose Dovecot to do the IMAP job, but other IMAP servers are
> available.


Understood.


> As I understand it, at the moment you are using getmail to collect
> mail from your ISP (presumably using IMAP or POP3)


POP3


> and store it locally in a maildir structure. Mutt reads from the
> maildir structure.


Correct.


> If you were to install Dovecot as an IMAP server alongside of this then (as I
> understand it, as I've not done it myself) Dovecot could also read from the
> same maildir structure.
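As a sketch of what that would look like (the path is an assumption; it
should point at whatever directory getmail actually delivers to), the
relevant Dovecot setting is mail_location:

```
# /etc/dovecot/conf.d/10-mail.conf (excerpt)
# Point Dovecot at the existing maildir that getmail writes and mutt reads,
# rather than letting Dovecot create its own mail store.
mail_location = maildir:~/Maildir
```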



> Mail clients like Thunderbird could then access your local Dovecot IMAP server,
> which in turn would show them the contents of your maildir structure.


I might could live with that arrangement, but only if I do not lose
messages because Dovecot decides they have been read and have aged too
long to keep.


> So the Dovecot IMAP server (and any mail clients like Thunderbird that connect
> to it) could see all your email (whatever is in your maildir structure) but
> email would not be routed through the IMAP server, as such. It's just that the
> IMAP server could access it as needed.


I still am bothered by the possibility of accidentally telling Thunderbird or
another client to delete a message.  It would be nice if Dovecot could
be run in a read-only mode.
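I have not tried it myself, but Dovecot's ACL plugin looks like it could
approximate such a read-only mode; a sketch under that assumption (file
locations per a standard Debian Dovecot install, untested):

```
# /etc/dovecot/conf.d/10-mail.conf (excerpt)
mail_plugins = $mail_plugins acl

# /etc/dovecot/conf.d/90-acl.conf (excerpt)
plugin {
  acl = vfile
}

# A dovecot-acl file in the maildir granting only lookup (l), read (r)
# and seen-flag (s) rights, so clients cannot delete or expunge:
# anyone lrs
```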

Whichever way I go, I thank you for recommending the IMAP approach.



Re: raspberry pi installation with debian installer

2019-11-04 Thread Florian La Roche
Hello,

On Tue, 5 Nov 2019 at 06:20, Gunnar Wolf wrote:
> OK, this is quite exciting news! It's great to see the Raspberries
> being closer to a first-tier architecture in Debian. TBH, I believe
> for almost all RPi users it will be easier to use the installed images
> — But yes, I can perfectly understand many will feel this to be better
> and more official.

I've also used vmdb2 to create install images of Debian stable/unstable
that stay very close to upstream Debian, and I provide similar amd64
images here:

  https://github.com/laroche/arm-devel-infrastructure

It would be great to see as much of this as possible merged into Debian.
best regards,

Florian La Roche



Re: auxiliary mail client for HTML

2019-11-04 Thread Mark Rousell
On 05/11/2019 05:57, Russell L. Harris wrote:
> For some reason which I do not immediately recall, I chose POP3 over
> IMAP the last time I had the option.

That would make sense. POP3 is best for when you want to download
everything for local processing, as you are doing.

IMAP makes more sense where you want to keep mail on the server to be
accessed by local mail clients.

> As to overkill, I lived for five years or more with the webmail client
> of my ISP; compared to that, anything else is a pleasure.

:-)

> I might could live with that arrangement, but only if I do not lose
> messages because Dovecot decides they have been read and have aged too
> long to keep.
> [...]
> I still am bothered by the possibility of accidentally telling
> Thunderbird or
> another client to delete a message.  It would be nice if Dovecot could
> be run in a read-only mode.

As mentioned, I'm not deeply familiar with Dovecot but I'd be surprised
if it could not be configured to protect your mail store from accidental
deletion in this way.

Also, you could potentially configure getmail to write two copies of your
maildir structure: one for active use and the other as a pristine
original record. Or perhaps one for Mutt and the other for Dovecot.
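getmail can do that with a MultiDestination; a sketch of the relevant
getmailrc sections (the section names and paths are placeholders):

```
# ~/.getmail/getmailrc (excerpt)
[destination]
type = MultiDestination
# Each retrieved message is delivered to every destination listed here:
destinations = ("[maildir-active]", "[maildir-pristine]")

[maildir-active]
type = Maildir
path = ~/Mail/active/

[maildir-pristine]
type = Maildir
path = ~/Mail/pristine/
```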

> Whichever way I go, I thank you for recommending the IMAP approach.

Glad to help.

-- 
Mark Rousell