Re: [OT] browsable archive of OFTC debian channels?
Andrew M.A. Cater writes: On Sat, Aug 14, 2021 at 01:05:00PM +0200, Marco Möller wrote: > > Hello! Is there somewhere publicly accessible an archive of the > conversations taking place in the #debian[*] OFTC channels? Would be great, > because I am not constantly online and thus am unable to fully follow > conversations there. [...] https://irclogs.thegrebs.com/debian/ apparently - I asked in #debian on OFTC

I got that with Google, too. Looking at the front page, it seems to cover `#debian` only, though, i.e. none of the other channels like `#debian-offtopic`, `#debian-mentors`, `#debian-cd` etc.

Most people interested in seeing all the conversations seem to stay online all the time, e.g. using a detached screen or tmux session of an IRC client (a sketch of that approach follows below).

HTH Linux-Fan öö [...]
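A minimal sketch of the detached-session approach, assuming the `tmux` and `irssi` packages are installed (irssi is only an example; any terminal IRC client works the same way):

~~~
$ tmux new-session -s irc irssi   # start the IRC client inside a named tmux session
# detach with Ctrl-b d; the client keeps running and stays connected
$ tmux attach -t irc              # re-attach later (also over SSH) to catch up
~~~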
Re: Debian 11 bullseye Gdm3 nvidia 7200go nouveau glitches and more
dimitris.varu writes: Hi i recently install debian 11 stable. Amd64 in hp dv6300 laptop.Gnome is very glitched from login through desktop. Many textures missing icons missing white squares everywhere... Lxde runs ok without problems... Gpu is nvidia 7200go nouveau driver. I know is old hardware.. any help or advice is most welcome! Tnx for your time

Is there a specific reason for needing GNOME on this machine? From https://www.cnet.com/reviews/hp-pavilion-dv6300-preview/ I gather it has a rather old processor (Celeron M 440 to Core 2 Duo T7200) -- which one do you have exactly? Also, it seems there would be at most 2 GiB of RAM.

I'd thus suggest installing something lightweight like LXDE, Fluxbox or IceWM. Using a plain window manager reduces the number of GUI background applications, which may free some essential RAM and CPU resources for your application of interest.

Of course, if you need to stay with a modern DE that requires GPU acceleration, you could always try to install an old proprietary NVidia driver. The Debian package would be `nvidia-legacy-304xx-kernel-dkms`, but I am afraid the last release it was part of was Debian Stretch (Debian 9).

HTH Linux-Fan öö
Re: nvme SSD and poor performance
Pierre Willaime writes: I have a nvme SSD (CAZ-82512-Q11 NVMe LITEON 512GB) on debian stable (bulleye now). For a long time, I suffer poor I/O performances which slow down a lot of tasks (apt upgrade when unpacking for example). I am now trying to fix this issue. Using fstrim seems to restore speed. There are always many GiB which are reduced : [...] but few minutes later, there are already 1.2 Gib to trim again :

# fstrim -v /
/ : 1,2 GiB (1235369984 octets) réduits

Is it a good idea to trim, if yes how (and how often)? Some people use fstrim as a cron job, some other add "discard" option to the /etc/fstab / line. I do not know what is the best if any. I also read triming frequently could reduce the ssd life.

I do `fstrim` once per week via a minimalistic custom script run as a cron job: https://github.com/m7a/lp-ssd-optimization

There is no need for custom scripts anymore, though; nowadays you can enable the timer from `util-linux` without hassle:

# systemctl enable fstrim.timer

This will perform the trim once per week by default.

When the use of SSDs increased, people tried out the `discard` options and found them to have strange performance characteristics, potential negative effects on SSD life and in some cases even data corruption (?). Back then it was recommended to use the periodic `fstrim` instead. I do not know if any of the issues with discard are still there today.

I also noticed many I/O access from jbd2 and kworker such as :

# iotop -bktoqqq -d .5
11:11:16   364 be/3 root  0.00 K/s  7.69 K/s  0.00 % 23.64 % [jbd2/nvme0n1p2-]
11:11:16     8 be/4 root  0.00 K/s  0.00 K/s  0.00 % 25.52 % [kworker/u32:0-flush-259:0]
[...]

I do not know what to do for kworker and if it is a normal behavior. For jdb2, I have read it is filesystem (ext4 here) journal.

I highly recommend finding out what exactly is causing the high number of I/O operations. Usually there is a userspace process (or a RAID resync operation) responsible for all the I/O, which is then processed by the kernel threads you see in iotop. I usually look at `atop -a 4` (package `atop`) for half a minute or so to find out what processes are active on the system (see also the iotop sketch below). It is possible that something is amiss and causing an exceedingly high I/O load leading to the performance degradation you observe. [...]

P-S: If triming it is needed for ssd, why debian do not trim by default?

Detecting reliably that the current system has SSDs which would benefit from trimming AND that the user has not taken their own measures is difficult. I guess this might be the reason for there not being an automatism, but you can enable the systemd timer suggested above with a single command.

HTH Linux-Fan öö
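To narrow the culprit down further, a batch-mode iotop run that only lists processes actually doing I/O can help; a sketch (flags as documented in iotop(8); the sampling interval and count are arbitrary choices):

~~~
# per-process view, accumulated totals in kilobytes, six samples of 10 s each
# iotop -b -o -P -a -k -d 10 -n 6
~~~

The process accumulating the most writes over that minute is usually the one feeding the jbd2/kworker activity.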
Re: nvme SSD and poor performance
Christian Britz writes: On 17.08.21 at 15:30 Linux-Fan wrote: Pierre Willaime writes: P-S: If triming it is needed for ssd, why debian do not trim by default? Detecting reliably that the current system has SSDs which would benefit from trimming AND that the user has not taken their own measures is difficult. I guess this might be the reason for there not being an automatism, but you can enable the systemd timer suggested above with a single command. I am pretty sure that I have never played with fstrim.timer and this is the output of "systemctl status fstrim.timer" on my bullseye system:

● fstrim.timer - Discard unused blocks once a week
     Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2021-08-17 14:10:17 CEST; 3h 5min ago
    Trigger: Mon 2021-08-23 01:01:39 CEST; 5 days left
   Triggers: ● fstrim.service
       Docs: man:fstrim

So it seems this weekly schedule is enabled by default on bullseye.

Nice, thanks for sharing :)

BTW, if my system is not online at that time, will it be triggered on next boot?

Yes, I would think so. A systemd timer can be configured, via `Persistent=true` in its `[Timer]` section, to catch up on a run whose schedule was missed, and on my machine this is set for `fstrim.timer` (check `systemctl cat fstrim.timer`; a sketch of the unit follows below). See also: https://jeetblogs.org/post/scheduling-jobs-cron-anacron-systemd/

Of course, it is also possible to find out experimentally. Check `systemctl list-timers` to find out about all registered timers and when they ran last.

HTH Linux-Fan öö
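For reference, the timer unit shipped by util-linux on bullseye looks approximately like this (reconstructed from memory; run `systemctl cat fstrim.timer` for the authoritative version, the exact fields may differ between releases):

~~~
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
~~~

The `Persistent=true` line is what makes a missed weekly run happen shortly after the next boot.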
Re: nvme SSD and poor performance
Pierre Willaime writes: Thanks all. I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).

You're welcome :)

But I do not think my issue is trim related after all. I have always a lot of I/O activities from jdb2 even just after booting and even when the computer is doing nothing for hours. Here is an extended log of iotop where you can see jdb2 anormal activities: https://pastebin.com/eyGcGdUz

According to that, a lot of firefox-esr and dpkg and some thunderbird processes are active. Is there still a high level of I/O when all Firefox and Thunderbird instances are closed and no upgrade is running? When testing with iotop here, the options `-d 10 -P` seemed to help in getting a steadier and less cluttered view.

Still, filtering your iotop output for Firefox, Thunderbird and dpkg respectively seems to be quite revealing:

| $ grep firefox-esr eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:51  3363 be/4 pierre      0.00 K/s   1811.89 K/s  0.00 % 17.64 % firefox-esr [mozStorage #3]
| 10:39:58  5117 be/4 pierre      0.00 K/s   1112.59 K/s  0.00 %  0.37 % firefox-esr [IndexedDB #14]
| 10:41:55  3363 be/4 pierre      0.00 K/s   6823.06 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:41:55  3305 be/4 pierre   1469.88 K/s      0.00 K/s  0.00 % 60.57 % firefox-esr [QuotaManager IO]
| 10:41:55  3363 be/4 pierre   6869.74 K/s   6684.07 K/s  0.00 % 31.96 % firefox-esr [mozStorage #3]
| 10:41:56  6752 be/4 pierre   2517.19 K/s      0.00 K/s  0.00 % 99.99 % firefox-esr [Indexed~Mnt #13]
| 10:41:56  6755 be/4 pierre  31114.18 K/s      0.00 K/s  0.00 % 99.58 % firefox-esr [Indexed~Mnt #16]
| 10:41:56  3363 be/4 pierre   9153.40 K/s      0.00 K/s  0.00 % 87.06 % firefox-esr [mozStorage #3]
| 10:41:57  6755 be/4 pierre 249206.18 K/s      0.00 K/s  0.00 % 59.01 % firefox-esr [Indexed~Mnt #16]
| 10:41:57  6755 be/4 pierre 251353.11 K/s      0.00 K/s  0.00 % 66.02 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre 273621.58 K/s      0.00 K/s  0.00 % 59.51 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre  51639.70 K/s      0.00 K/s  0.00 % 94.90 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 113869.64 K/s      0.00 K/s  0.00 % 79.03 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 259549.09 K/s      0.00 K/s  0.00 % 56.99 % firefox-esr [Indexed~Mnt #16]
| 10:44:41  3265 be/4 pierre   1196.21 K/s      0.00 K/s  0.00 % 20.89 % firefox-esr
| 10:44:41  3289 be/4 pierre   3813.36 K/s    935.22 K/s  0.00 %  4.59 % firefox-esr [Cache2 I/O]
| 10:44:53  3363 be/4 pierre      0.00 K/s   1176.90 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:49:28  3363 be/4 pierre      0.00 K/s   1403.16 K/s  0.00 %  0.43 % firefox-esr [mozStorage #3]

So there are incredible amounts of data being read by Firefox (gigabytes within a few minutes)? Is this load also reflected in atop's or iotop's summary lines at the beginning of the respective screens?
| $ grep thunderbird eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:43  2846 be/4 pierre  0.00 K/s   1360.19 K/s  0.00 % 15.51 % thunderbird [mozStorage #1]
| 10:39:49  2873 be/4 pierre  0.00 K/s   4753.74 K/s  0.00 %  0.00 % thunderbird [mozStorage #6]
| 10:39:49  2875 be/4 pierre  0.00 K/s  19217.56 K/s  0.00 %  0.00 % thunderbird [mozStorage #7]
| 10:39:50  2883 be/4 pierre  0.00 K/s  18014.56 K/s  0.00 % 29.39 % thunderbird [mozStorage #8]
| 10:39:50  2883 be/4 pierre  0.00 K/s   3305.94 K/s  0.00 % 27.28 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s  61950.19 K/s  0.00 % 63.11 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s  41572.77 K/s  0.00 % 27.19 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s  20961.20 K/s  0.00 % 65.02 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s  43345.16 K/s  0.00 %  0.19 % thunderbird [mozStorage #8]
| 10:42:27  2846 be/4 pierre  0.00 K/s   1189.63 K/s  0.00 %  0.45 % thunderbird [mozStorage #1]
| 10:42:33  2846 be/4 pierre  0.00 K/s   1058.52 K/s  0.00 %  0.31 % thunderbird [mozStorage #1]
| 10:47:27  2846 be/4 pierre  0.00 K/s   2113.53 K/s  0.00 %  0.66 % thunderbird [mozStorage #1]

Thunderbird seems to write a lot here. This would average out at roughly 18 MiB/s of writes and would hence explain why the SSD is loaded continuously (a rough way to compute such an average from the log is sketched below). Again: Does it match the data reported by atop? [I am not experienced in reading iotop output, hence I might interpret the data wrongly.]

By comparison, dpkg looks rather harmless:

| $ grep dpkg eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:25  4506 be/4 root  0.00 K/s  4553.67 K/s  0.00 %  0.26 % dpkg --status-fd 23 --no-triggers --unpack --auto-deconfigure --force-remove-protected --recursive /tmp/apt-dpkg-install-E69bfZ
| 10:38:33  4506 be/4 root  7.73 K/s  4173.77 K/s  0.00 %  1.52 % dpkg --status-fd 23 -
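To turn such a filtered iotop log into a rough average, something along these lines can be used; this is only a sketch and assumes the default `iotop -bkt` column layout in which the write rate is the seventh whitespace-separated field (adjust the field number if your columns differ):

~~~
$ grep thunderbird eyGcGdUz | awk '{ sum += $7; n++ }
    END { if (n) printf "%.1f K/s average over %d samples\n", sum / n, n }'
~~~

Note that this averages only over the sampled lines, not over wall-clock time, so it overstates the sustained rate if the process is idle most of the time.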
Re: how to change debian installation mirror list?
Fred 1 writes: I would like to know where the installation pulls the list of Debian mirrors. hope I can change it and rebuild the netinstall ISO with jigdo maybe ? It would be nice if I could manually add a custom one during install, but I didn't see such option, strictly just pick from the list

I can only answer the last of your questions: If you want to add a custom mirror during the install, go to the top of the menu and choose "enter information manually". The installer will then ask you for the host name, path and proxy information of your custom mirror.

Scroll to near the end of https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m6-d11a3-i386-netinst.xhtml for screenshots of the respective dialogs. They are from an alpha release of the installer, but the dialogs should not differ too much from the final ones.

HTH Linux-Fan öö [...]
Re: Installing old/deprecated packages
riveravaldez writes: I have this `phwmon.py`[1] which I use with fluxbox to have a couple of system monitors at hand. It depends on some python2 packages, so stopped working some time ago. Any specific reason for preferring `phwmon.py` over a tool like `conky`? I've just made it work, installing manually (# apt-get install packages.deb) this packages that I've downloaded from Debian OldStable official archives: python-psutil python-is-python2 (this is in fact in Testing) python-numpy python-pkg-resources python-cairo libffi6 python-gobject-2 python-gtk2 Therefore, my questions: How safe is this? IMHO it's pretty OK because that is quite similar to having upgraded from an old system with the legacy packages installed to a new release where they are no longer part of. Is it better to install them as I did, or adding the corresponding line in sources.list and pull them from there? Is there any difference? There are differences: Whenever you install packages, you may not notice that they are only avaliable in old releases because the output of `apt-cache search` and similar tools will include old packages. Also, running a release with stable+oldstable in sources.list is less common than the other case: stable in sources.list and some oldstable packages leftover from upgrades. In case bugs are fixed in the oldstable package, you will get them automatically if you have them in sources.list. My personal choice would be to install the packages without adding the oldstable repositories as to be reminded that they are obsolete and are likely to stop working in the future. Be aware that libraries like `python-psutil` may not work with newer kernels. Here (on oldstable with a backported kernel 5.10) the script would not run due to excess fields reported by the kernel for disk statistics: | $ ./phwmon.py | Traceback (most recent call last): | File "./phwmon.py", line 341, in | HardwareMonitor() | File "./phwmon.py", line 128, in __init__ | self.initDiskIo() | File "./phwmon.py", line 274, in initDiskIo | v = psutil.disk_io_counters(perdisk=False) | File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 2131, in disk_io_counters | rawdict = _psplatform.disk_io_counters(**kwargs) | File "/usr/lib/python2.7/dist-packages/psutil/_pslinux.py", line 1121, in disk_io_counters | for entry in gen: | File "/usr/lib/python2.7/dist-packages/psutil/_pslinux.py", line 1094, in read_procfs | raise ValueError("not sure how to interpret line %r" % line) | ValueError: not sure how to interpret line ' 259 0 nvme0n1 42428 17299 3905792 8439 49354 7425 3352623 15456 0 48512 26929 43429 11 476835656 3033 0 0\n' See also: https://forums.bunsenlabs.org/viewtopic.php?id=967 [...] [1] https://gitlab.com/o9000/phwmon Btw. it looks as if `python-is-python2` is not needed for this to run? `phwmon.py` states `python2` explicitly. HTH Linux-Fan öö PS: If you are interested in my thoughts on status bars, see here: https://masysma.lima-city.de/32/i3bar.xhtml pgpGp_ovIoHsX.pgp Description: PGP signature
Re: Installing old/deprecated packages
riveravaldez writes: On 9/5/21, Linux-Fan wrote: > riveravaldez writes: > >> I have this `phwmon.py`[1] which I use with fluxbox to have a couple >> of system monitors at hand. It depends on some python2 packages, so >> stopped working some time ago. > > Any specific reason for preferring `phwmon.py` over a tool like `conky`? Hi, Linux-Fan, thanks a lot for your answers. `conky` is great, but you have to see the desktop to see `conky`, and I tend to work with maximized windows. Monitors like `phwmon.py` or the ones that come by default with IceWM for instance are permanently visible in the sys-tray/taskbar (no matter you're using fluxbox, openbox+tint2, etc.). That's the only reason: minimal and visible. That makes sense. In case you still want to try conky, there might be means to make it appear as a dedicated panel that is not overlapped by maximized windows, although I did not test that back when I was using Fluxbox (now on i3). See e.g. https://superuser.com/questions/565784/can-conky-remain-always-visible-alongside-other-windows https://forum.salixos.org/viewtopic.php?t=1166 [...] > There are differences: Whenever you install packages, you may not notice > that they are only avaliable in old releases because the output of > `apt-cache search` and similar tools will include old packages. Also, > running a release with stable+oldstable in sources.list is less common than > the other case: stable in sources.list and some oldstable packages leftover > from upgrades. In case bugs are fixed in the oldstable package, you will > get them automatically if you have them in sources.list. > > My personal choice would be to install the packages without adding the > oldstable repositories as to be reminded that they are obsolete and are > likely to stop working in the future. Thanks again. Very informative and educational. When you say 'as to be reminded that they are obsolete', how/when/where the system will remind me this?, will it be? There is no automatism for this that I am aware of. There are tools like `deborphan` and `aptitude search ~o` that may tell you about them. The release notes recommend proactively removing obsolete packages: https://www.debian.org/releases/bullseye/amd64/release-notes/ch-upgrading.en.html#for-next > Be aware that libraries like `python-psutil` may not work with newer > kernels. Here (on oldstable with a backported kernel 5.10) the script would > > not run due to excess fields reported by the kernel for disk statistics: [...] Yes, indeed. I didn't mentioned it but I had to "fix" that as seen in: https://gitlab.com/o9000/phwmon/-/issues/3#note_374558691 Essentially, convert: `elif flen == 14 or flen == 18:` to `elif flen == 14 or flen == 18 or flen == 20:` In /usr/lib/python2.7/dist-packages/psutil/_pslinux.py Supposedly shouldn't be problematic, but I'm not sure. Any comment on this? It's pretty much how I would go about it for a short-term solution. Any upgrade/reinstallation of the respective python package may revert your change. I would not mind that too much, given that there will not be any "unexpected" upgrades to the package while its repositories are not enabled in sources.list :) [...] > PS: If you are interested in my thoughts on status bars, see here: > https://masysma.lima-city.de/32/i3bar.xhtml Thanks a lot, LF! I'm checking it right now. Very interesting. You're welcome. If anything is unclear/wrong there, feel free to tell me directly via e-mail :) HTH Linux-Fan öö pgp_58R53yXQu.pgp Description: PGP signature
Re: plugged and unplugged monitor
michaelmorgan...@gmail.com writes: I have a linux machine for scientific calculations with GPU installed. It also installed GUI but normally the default start mode is just terminal (multi-user.target). If I start up the machine with a monitor plugged into the GPU (HDMI or Displayport), the monitor works well (showing the terminal). However, if I unplug the HDMI cable and connect it back again, there will be no signal to the monitor any more. The same thing happens if I start up the machine without a monitor. I can ssh to the machine, but there is no signal if I plug in the monitor. What could be the problem?

Does the problem occur if you immediately re-connect the monitor after disconnecting, or only after a certain amount of time? It might be that after a timeout the console "blanks" and does not display anything anymore. You could try to attach a keyboard and press "any key" ("Alt" is one of my favorites for the purpose) to see if that is the problem.

Another route to debug might be to try the connect/disconnect procedure while the GUI is running. If you run X11, you might try to capture the output of `xrandr` for the connected, disconnected and re-connected cases to see if the monitor connection is being recognized by X11. AFAIK it should be possible to run `xrandr` over SSH while nothing is displayed, as long as the GUI is running and you prefix the command with the proper `DISPLAY=:0` variable (see the sketch below).

HTH Linux-Fan öö [...]
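A sketch of both checks; host name and display number are placeholders, the SSH user is assumed to be the one running the X session, and the console-blanking command only applies to the text console it is run on:

~~~
# query the X server's view of the outputs from an SSH session
$ ssh user@gpu-machine 'DISPLAY=:0 xrandr --query'

# on the machine's text console: disable blanking to rule that out
# setterm --blank 0
~~~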
[OT] C++ Book Question (Was: Re: How to improve my question in stackoverflow?)
William Torrez Corea writes: Book.cpp:1:10: fatal error: Set: No existe el fichero o el directorio [closed] I trying compile an example of a book. The program use three classes: Book, Customer and Library.

The question is off-topic for debian-user, how did it get here? Also, how are subject and content related?

When giving the source code, give it in its entirety, i.e. including the missing "customer.h", "library.h" and others. Also, for such long source code, it seems preferable to provide it as an attachment rather than inline.

Book.cpp #include [...] When i compile the example with the following command: g++ -g -Wall Book.cpp book.h -o book The result expected is bad. Book.cpp:1:10: fatal error: Set: No existe el fichero o el directorio #include ^ compilation terminated. [...] The book is C++17 By Example, published by Packt. [...]

The immediate problem is with the case "Set" vs. "set". In your source code above you correctly have `#include <set>`, but the error message suggests that the `Book.cpp` you are trying to compile still has `#include <Set>`, as displayed by the compiler. In fact, this seems to be a known erratum for the book sample codes: https://github.com/PacktPublishing/CPP17-By-Example/issues/1

Clone the repository to get the entire sample source code. I was able to compile the `Chapter04` example by making the changes attached as `patchbook.patch`. Apply it with

patch --strip 1 < patchbook.patch

from inside the repository.

HTH Linux-Fan öö

diff --git a/Chapter04/LibraryPointer/Book.cpp b/Chapter04/LibraryPointer/Book.cpp index ae86fd2..62f60f5 100644 --- a/Chapter04/LibraryPointer/Book.cpp +++ b/Chapter04/LibraryPointer/Book.cpp @@ -1,9 +1,9 @@ -#include -#include -#include -#include -#include -#include +#include +#include +#include +#include +#include +#include using namespace std; #include "Book.h" @@ -84,4 +84,4 @@ ostream& operator<<(ostream& outStream, const Book& book) { } return outStream; -} \ No newline at end of file +}
diff --git a/Chapter04/LibraryPointer/Customer.cpp b/Chapter04/LibraryPointer/Customer.cpp index ac17964..31ffe1d 100644 --- a/Chapter04/LibraryPointer/Customer.cpp +++ b/Chapter04/LibraryPointer/Customer.cpp @@ -1,8 +1,8 @@ -#include -#include -#include -#include -#include +#include +#include +#include +#include +#include using namespace std; #include "Book.h" @@ -87,4 +87,4 @@ ostream& operator<<(ostream& outStream, const Customer& customer){ } return outStream; -} \ No newline at end of file +}
diff --git a/Chapter04/LibraryPointer/Library.cpp b/Chapter04/LibraryPointer/Library.cpp index 10b4ac4..2c6f1a7 100644 --- a/Chapter04/LibraryPointer/Library.cpp +++ b/Chapter04/LibraryPointer/Library.cpp @@ -1,10 +1,10 @@ -#include -#include -#include -#include -#include -#include -#include +#include +#include +#include +#include +#include +#include +#include using namespace std; #include "Book.h" @@ -611,4 +611,4 @@ Library::~Library() { for (const Customer* customerPtr : m_customerPtrList) { delete customerPtr; } -} \ No newline at end of file +}
diff --git a/Chapter04/LibraryPointer/Main.cpp b/Chapter04/LibraryPointer/Main.cpp index 18d1637..586b47e 100644 --- a/Chapter04/LibraryPointer/Main.cpp +++ b/Chapter04/LibraryPointer/Main.cpp @@ -1,15 +1,16 @@ -#include -#include -#include -#include -#include -#include +#include +#include +#include +#include +#include +#include using namespace std; #include "Book.h" #include "Customer.h" #include "Library.h" -void main() { Library(); -} \ No newline at end of file +int main() { Library(); + return 0; +}
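As a side note on the compile command: header files are pulled in via `#include` and should not be passed to g++ themselves. For the multi-file example from the repository, a build command along these lines should work (a sketch; run it from inside `Chapter04/LibraryPointer` after applying the patch):

~~~
$ g++ -std=c++17 -g -Wall Main.cpp Book.cpp Customer.cpp Library.cpp -o library
~~~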
Re: HTML mail [was: How to improve my question in stackoverflow?]
to...@tuxteam.de writes: On Thu, Sep 09, 2021 at 07:45:43PM -0400, Jim Popovitch wrote: [...] > First, most folks on tech mailinglists despise HTML email. The original mail was a passable MIME multipart/alternative with a plain text part. I /think/ that is OK, what do others think? Postel's principle applies, hence: OK :) Perhaps you can teach you mailer to pick the text part for you :-) (I'm just asking, because I've seen this complaint a couple of times for a well-formed multipart message: personally, I'd be OK with it, but I'd like to know how the consensus is). My mail client of choice (cone, not in Debian anymore) seems to prefer displaying the HTML part by default. It does this by converting it to a almost-text-only presentation retaining some things like bold and underline. It gets a little annoying with URLs because it displays link target and label which often leads to the same URL being displayed twice in the terminal. AFAIK I cannot configure it to prefer the text version, but I have not checked that part of the source code to see how difficult it might be to implement such a choice. I can view both variants of the mail content on a case-by-case basis by opening them explicitly. This even works from inside the MUA, nice feature :) . For my usage, it is easiest to have text-only e-mails because those always display nicely by default. Additionally, in this C++ question thread, the source code was given in HTML and text parts of the e-mail and while the `#include` statements were all on separate lines in the HTML, they appear as one long line in the text part. IMHO it causes unnecessary confusion to have two slightly differently displayed parts of the same mail especially for questions with source code :) Btw.: For C++ STL components such as `set` or `string` it is perfectly fine to not append a `.h`. In fact, it would seem not to be standards compliant to append the `.h`, cf. https://stackoverflow.com/questions/15680190 YMMV Linux-Fan öö pgp8MUSLxC_NJ.pgp Description: PGP signature
Re: Dual GPU dual display support in Debian?
Anssi Saari writes: I was wondering, since I didn't really find anything definite via Google but is dual GPU dual display actually a supported configuration in Debian 11?

I would be interested in knowing about that, too.

So basically for a while I ran this kind of setup: Display 1 connected to CPU's integrated GPU (Core i7-4790K, Intel HD Graphics 4600). Display 2 connected to Nvidia RTX3070Ti. The two displays setup as a single wide display, i.e. windows movable/draggable from one display to the other. I have a triple boot setup, Windows 10 worked fine, Arch Linux with KDE was hit or miss, Debian didn't work, no image on one display and xrandr saw only one display. Which one seemed to depend on which GPU was set as primary in the UEFI setup. No xorg.conf but I fiddled with that too. [...] The reason for this setup was that Debian 10 has no drivers for the RTX3070Ti so I just used one display there and since it worked in Arch (at least sometimes) I figured it should just start working in Debian 11 after the upgrade but it didn't.

I believe it might be easier to fix the problem by attaching all displays to the NVidia GPU. Here are some hints about what you might check to make the NVidia GPU work under Debian 11: https://forums.developer.nvidia.com/t/linux-460-driver-kubuntu-20-04-rtx-3070-not-booting/171085

About doing a "dual GPU dual display" setup, my experience is as follows (from Debian 10 oldstable/buster with purely X11 and no Wayland): First, I never got it to work properly. The closest I could get was to run two different window managers on the respective displays, all under the same X server. This allowed the mouse and clipboard to move across the screens, but the windows needed to remain on the GPU they were started on. Additionally, one cannot combine arbitrary window managers this way - I used i3 and IceWM. The trick is to not have them compete for "focus". The `.xsession` looked as follows (a commented sketch follows below):

DISPLAY=:0.1 icewm &
exec i3

IIRC there were also some approaches like starting an X server on top of the two X11 displays (:0.1 etc.) but I cannot seem to find them right now. Back when I tried that setup, it seemed these would be unable to provide graphics acceleration, hence I opted for the more convoluted variant with two window managers and full performance.

HTH Linux-Fan öö
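The commented sketch of such an `~/.xsession`, assuming the X server is configured with two separate screens `:0.0` and `:0.1` (one per GPU); i3 and IceWM are simply the combination that happened to work here:

~~~
#!/bin/sh
# IceWM manages the second X screen (the other GPU's display)
DISPLAY=:0.1 icewm &
# i3 manages the primary screen; exec keeps it as the session's main process
exec i3
~~~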
Re: Dual GPU dual display support in Debian?
Anssi Saari writes: Linux-Fan writes: > Anssi Saari writes: > >> I was wondering, since I didn't really find anything definite via Google >> but is dual GPU dual display actually a supported configuration in >> Debian 11? > > I would be interested in knowing about that, too. I wonder if this is a kernel space problem or user space or both? So does it work in Arch Linux because the Nvidia drivers and Linux kernel are much newer? Or is there some user space component too that matters? AFAIK there are at least these components involved: * Kernel * Proprietary graphics driver (kernel- and userspace parts, NVidia) * Mesa (userspace) I am not sure how much "cooperation" from the X server itself is required i.e. if it might be enough to build a newer Mesa but keep running an old X server. >> So basically for a while I ran this kind of setup: > I believe it might be easier to fix the problem by attaching all > displays to the NVidia GPU. Here are some hints about what you might > check to make the NVidia GPU work under Debian 11: Thanks, but a hint to you and Felix: "for a while I ran..." indicates something happening in the past which has now ended. I have no current issues for basic use. I haven't actually tried to do accelerated video encode with the new video card which I count as advanced use but I need it so rarely it hasn't come up. OK [...] Linux-Fan öö pgpM22WSVfXz6.pgp Description: PGP signature
Re: Bullseye (mostly) not booting on Proliant DL380 G7
Claudio Kuenzler writes: On Wed, Jun 30, 2021 at 9:51 AM Paul Wise wrote: Claudio Kuenzler wrote: > I currently suspect a Kernel bug in 5.10. Thanks to everyone for hints and suggestions! At the end it turned out to be an issue with the hpwdt module. After blacklisting this module, no boot or stability issues with Bullseye were detected anymore. Findings documented in my blog: https://www.claudiokuenzler.com/blog/1125/debian-11-bullseye-boot-freeze-kernel-panic-hp-proliant-dl380 Thanks for sharing and digging up all that information. I found the article worth reading despite not having had this issue - a nicely structured approach to tackle such problems! Linux-Fan öö pgpBYX_NJWs13.pgp Description: PGP signature
Re: write only storage.
Marco Möller writes: On 21.09.21 17:53, Tim Woodall wrote: I would like to have some WORM memory for my backups. At the moment they're copied to an archive machine using a chrooted unprivileged user and then moved via a cron job so that that user cannot delete them (other than during a short window). My though was to use a raspberry-pi4 to provide a USB mass storage device that is modified to not permit deleting. If the pi4 is not accessible via the network then other than bugs in the mass storage API it should be impossible to delete things without physical access to the pi.

What about the overall storage size: Assume an adversary might corrupt your local data and then invoke the backup procedure in an endless loop in an attempt to reach the limit of the "isolated" pi's underlying storage. You might need a way to ensure that the influx of data is somehow rate-limited.

Before I start reinventing the wheel, does anyone know of anything similar to this already in existence?

I know of three schemes trying to deal with the situation:

(a) Have a pull-based or append-only scheme implemented in software. Borg's append-only mode and your current method fall into that category. I am using a variant of that approach, too: Have a backup server pull the data off my local machine at irregular intervals.

(b) Use physically write-once media like CD-R/DVD-R/BD-R. I *very rarely* back up the most important data to DVDs (no BD writer here, and a single one would not provide enough redundancy to rely on in case of need...).

(c) Use a media-rotation scheme with enough media to cover the interval you need to notice the adversary's doings. E.g. you could use seven hard drives, all with redundant copies of your data, and each day choose the next drive to update with the "current data" by a clear schedule, i.e. the "Monday" drive on Mondays, the "Tuesday" drive on Tuesdays etc. If an adversary tampers with your data, you would need to notice within one week so that the last drive in the rotation still contains unmodified data.

Things like chattr don't achieve what I want as root can still override that. I'm looking for something that requires physical access to delete.

My solution is to use a separate, dedicated, not-always-on machine that pulls backups when it's turned on and then shuts itself off, so as to reduce the time frame in which an adversary might try to break into it via SSH. In theory, one could leave out the SSH server on the backup server altogether, but this would complicate the rare occasions where maintenance is needed.

The backup tool borg, or borgbackup (this latter is also the package name in the Debian repository), has an option to create backup archives to which only data can be added but not deleted. If you can get it managed, that only borgbackup has access through the network to the backup system but no other user can access the backup system from the network, then this might be want you want. Borgbackup appears to be quite professionally designed. I have never had bad experience for my usage scenario backing up several home and data directories with it and restoring data from the archives - luckily restoring data just for testing the archives but not for indeed having needed data from a backup. My impression is, that this tool is also in use by the big professionals, those who have to keep up and running a real big business. Well, maybe someone of those borgbackup users with the big business pressure and experience should comment on this and not me.
At least for me and my laboratory measurement data distributed on still less than 10 computers and all together comprising still less than 10 TB data volume, it is the perfect tool. Your question sounds like it could also fit your needs.

It's one tool that could be used for the purpose, yes. Borg runs quite slowly if you have a lot of data (say > 1 TiB). If you can accept that/deal with it, it is a tool worth considering (a sketch of an append-only setup follows below). Some modern/faster alternatives exist (e.g. Bupstash) but they are too new to be widely deployed yet. AFAIK in "business" contexts, tape libraries and rsync-style mirrors are quite widespread.

HTH Linux-Fan öö
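One way to enforce the append-only behaviour mentioned above is to pin the client's SSH key to `borg serve` on the backup host; a sketch, with the repository path and key being placeholders:

~~~
# ~backup/.ssh/authorized_keys on the backup host
command="borg serve --append-only --restrict-to-repository /srv/backup/repo",restrict ssh-ed25519 AAAA... client@laptop
~~~

With this in place, the client can only append to that one repository; pruning old archives then has to happen from the backup host itself.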
Re: Privacy and defamation of character on Debian public forums
rhkra...@gmail.com writes: On Sunday, September 26, 2021 08:45:13 AM Greg Wooledge wrote: > On Sun, Sep 26, 2021 at 07:00:06AM -0400, rhkra...@gmail.com wrote: > > Well, to be fair to Google, the first two or three hits did show FAOD, > > but without explaining what it meant -- those sites that you have to > > actually go to to find the meaning. I skipped over those to find a hit > > that actually included the meaning, and the first (and next) one(s) I > > found were for FOAD, and I didn't notice the difference. > > Fascinating. My first page results from Google were all of this form: > > What are long-chain fatty acid oxidation disorders (LC-FAOD)? > https://www.faodinfocus.com › learn-about-lc-faod > LC-FAOD are rare, genetic metabolic disorders that prevent the body from > breaking down long-chain fatty acids into energy during metabolism. > > This is obviously wrong in this context. > > The entire first page consisted solely of results like this, so I didn't > even bother going to page 2. Hmm, I don't remember the eact google query I tried, I might have done something like [define: FAOD slang] or something similar (maybe "acronym", but maybe more likely "slang"). Trying to find out what the fuss was about, my first query was FAOD urban dictionary which yielded 1. https://www.urbandictionary.com/define.php?term=Faod (not helpful) 2. https://www.urbandictionary.com/define.php?term=foad Like rhkramer, I did not notice that the letters were intermingled. When entering FAOD I get results similar to Greg's. When entering FAOD meaning (as suggested by Google for related searches), I get this among the top results: https://www.acronymfinder.com/Slang/FAOD.html That, finally, explains it. Most of the time, I try to stick to acronyms that are found in "The Jargon File" (package `jargon`) because these seem to have a pretty agreed-upon meaning :) HTH and YMMV Linux-Fan öö pgpjxsIFEVmwU.pgp Description: PGP signature
Re: usb audio interface recommendation
Russell L. Harris writes: Needed: a USB audio interface which "just works" with Debian 9, 10, 11 on i386 and amd64 desktop machines. The newest of my machines is several years years old and has both black and blue USB ports. I am using an SSL 2 here: https://www.solidstatelogic.com/products/ssl2 Tested successfully with Debian 10 amd64 and Debian 11 amd64 each with ALSA + PulseAudio non-professional audio. In case you consider buying it, I might be able to do a basic test with a Debian 11 i386, too. Caveat: I have found the interface to only be recognized properly if I attach it _after_ PulseAudio has already started up. Hence, I have it disconnected by default and upon needing it, first start `pavucontrol` and only afterwards attach the interface. Btw.: I saw you asked about the Motu M2 earlier (https://lists.debian.org/debian-user/2021/09/msg00958.html). Was there any progress in getting it to run properly? A cursory internet search suggests that there were problems wrt. old kernels and PulseAudio. Additionally, some tuning to reduce kernel latency might be needed? See https://panther.kapsi.fi/posts/2020-02-02_motu_m4 for a summary. Back when I searched for audio interfaces, I had also considered the Zoom UAC-2 (https://www.zoom.co.jp/sites/default/files/products/downloads/pdfs/E_UAC-2.pdf). Reviews seemed to indicate acceptable Linux compatibility, but I do not have any first-hand experience with it. HTH Linux-Fan *who uses the SSL 2 for video conferencing* öö [...] pgpKahUviJy2M.pgp Description: PGP signature
Re: New mdadm RAID1 gets renamed from md3 to md127 after each reboot
Reiner Buehl writes: I created a new mdadm RAID 1 as /dev/md3. But after each reboot, it gets activated as md127. How can I fix this - preferably without haveing to delete the whole array again... The array is defined like this in /etc/mdadm: ARRAY /dev/md3 metadata=1.2 level=raid1 num-devices=1 UUID=41e0a87f: 22a2205f:0187c73d:d8ffefea [...]

I have observed this in the past, too, and do not know how to "fix" it. Why is it necessary for the volume to appear under /dev/md3? Might it be possible to use its UUID instead? I.e. check the output of

ls -l /dev/disk/by-uuid

to find out whether your md3/md127 can be accessed by a unique ID. You could then point the entries in /etc/fstab to the UUID rather than the "unstable" device name (see the sketch below).

HTH and YMMV Linux-Fan öö
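A sketch of what that would look like; the UUID and mount point are placeholders, the real filesystem UUID comes from the `ls -l /dev/disk/by-uuid` output:

~~~
# /etc/fstab entry referencing the filesystem UUID instead of /dev/md3
UUID=0b1c2d3e-4f50-6a7b-8c9d-0e1f2a3b4c5d  /srv/data  ext4  defaults  0  2
~~~

Note that this is the filesystem UUID, not the array UUID from the ARRAY line quoted above.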
Re: Disk partitioning phase of installation
Richard Owlett writes: I routinely place /home on its own partition. Its structure resembles:

/home/richard
├── Desktop
├── Documents
├── Downloads
├── Notebooks
└── Pictures

My questions: 1. Can I have /home/richard/Downloads bed on its own partition?

Yes. The only thing to consider is that they are mounted in the correct order, i.e. first /home/richard, then /home/richard/Downloads. Alternatively, you could mount them at independent times by using a mountpoint outside of /home/richard (e.g. /media/richards_downloads) and having `Downloads` as a symbolic link pointing to the mountpoint of choice (`ln -s /media/richards_downloads Downloads`). Both variants are sketched below.

2. How could I have found the answer?

By trying it out :) If you do it wrongly, it yields "mountpoint does not exist" or similar. If you do it correctly, it "just works".

HTH Linux-Fan [...] öö
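A sketch of both variants; the device names are placeholders for the actual partitions:

~~~
# /etc/fstab -- variant 1: nested mount points, listed in mount order
/dev/sda5  /home/richard            ext4  defaults  0  2
/dev/sda6  /home/richard/Downloads  ext4  defaults  0  2

# /etc/fstab -- variant 2: independent mount point, plus (run once):
#   ln -s /media/richards_downloads /home/richard/Downloads
/dev/sda6  /media/richards_downloads  ext4  defaults  0  2
~~~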
Re: AMD OpenCL support
piorunz writes: On 17/10/2021 09:00, didier gaumet wrote: [...] Yes I have that mesa version of OpenCL installed. Unfortunately, this version is too old and not recognized. I need OpenCL 1.2 at least I think. clinfo says, among many other things: Device Version OpenCL 1.1 Mesa 20.3.5 Driver Version 20.3.5 Device OpenCL C Version OpenCL C 1.1 Perhaps your claim of not having OpenCL support is erroneous and what happens actually is you have uncomplete/unsufficent support for your use case: a typical example is Darktable not having OpenCL image support, this requiring more recent OpenCL implementation that the Mesa one. Then you would probably have to either: - revert to use the proprietary amdgpu-pro driver (including an AMD ICD) instead of the free amdgpu one https://www.amd.com/en/support/kb/faq/amdgpu-installation This procedure requires downloading .deb drivers from https://support.amd.com/en-us/download. Only distros supported are Ubuntu 18.04.5 HWE, Ubuntu 20.04.3. They will most likely fail in Debian. [...] Hello, I happened to have some issues wrt. a bug similar to this: https://bugs.freedesktop.org/show_bug.cgi?id=111481 There, the suggested fix is to switch to amdgpu-pro (which seems to remedy the issue but not entirely...) which lead me to try the `.deb` files from AMD. I downloaded `amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz` and it seems to have installed just fine. As a result, I should be running the proprietary driver now and thus have OpenCL running -- I only ever tested it with a demo application, though... Excerpt from clinfo: ~~~ Platform Name: AMD Accelerated Parallel Processing Number of devices: 1 Device Type:CL_DEVICE_TYPE_GPU Device OpenCL C version:OpenCL C 2.0 Driver version: 3261.0 (HSA1.1,LC) Profile:FULL_PROFILE Version:OpenCL 2.0 ~~~ btw. I do not seem to have a `Device Version` string in there? ~~~ # dpkg -l | grep opencl | cut -c -90 ii amdgpu-pro-rocr-opencl21.20-1292797 ii ocl-icd-libopencl1:amd64 2.2.14-2 ii ocl-icd-libopencl1:i386 2.2.14-2 ii ocl-icd-libopencl1-amdgpu-pro:amd64 21.20-1292797 ii ocl-icd-libopencl1-amdgpu-pro-dev:amd64 21.20-1292797 ii ocl-icd-opencl-dev:amd64 2.2.14-2 ii opencl-base 1.2-4.4.0.117 ii opencl-c-headers 3.0~2020.12.18-1 ii opencl-clhpp-headers 3.0~2.0.13-1 ii opencl-headers3.0~2020.12.18-1 ii opencl-intel-cpu 1.2-4.4.0.117 ii opencl-orca-amdgpu-pro-icd:amd64 21.20-1292797 ii opencl-rocr-amdgpu-pro:amd64 21.20-1292797 ii opencl-rocr-amdgpu-pro-dev:amd64 21.20-1292797 ~~~ To summarize: It might be worth trying the Ubuntu-.debs out on Debian. Although its not a "clean" solution by any means, it might "just work"? HTH Linux-Fan öö pgpyhukeY7BBf.pgp Description: PGP signature
Re: AMD OpenCL support
piorunz writes: On 17/10/2021 21:50, Linux-Fan wrote: There, the suggested fix is to switch to amdgpu-pro (which seems to remedy the issue but not entirely...) which lead me to try the `.deb` files from AMD. I downloaded `amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz` and it seems to have installed just fine. Can't install that on my Debian Bullseye. I have clean system with nothing modified or added from outside of Debian. Result: sudo ./amdgpu-install --opencl=legacy,rocr (...) Loading new amdgpu-5.11.19.98-1290604 DKMS files... ^^^ Our driver versions seem to differ. I have 5.11.5.30-1292797 rather than 5.11.19.98-1290604. It has the following SHA-256 sum: ef242adeaa84619cea4a51a2791553a7a7904448dde81159ee2128221efe8e50 amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz Back then, I got it from https://www.amd.com/en/support/professional- graphics/radeon-pro/radeon-pro-w5000-series/radeon-pro-w5500 under "Radeon(TM) Pro Software for Enterprise on Ubuntu 20.04.2" and the download still seems to point to a file with the same SHA-256 sum. It could be worth trying the exact same version that I used? [...] I tried various attempts: sudo ./amdgpu-install --opencl=rocr --headless sudo ./amdgpu-install sudo ./amdgpu-install --opencl=rocr Same result each time, something with compiling amdgpu-dkms. I used ./amdgpu-pro-install without additional arguments. It is a symlink to the same script but the script executes different code if invoked with the `-pro` inserted. Not sure if it will make a difference, though. After successful installation with `./amdgpu-pro-install` I installed additional packages from the repository added by the `amdgpu-pro-install` in order to enable the OpenCL features. As a result, I should be running the proprietary driver now and thus have OpenCL running -- I only ever tested it with a demo application, though... I'd love that, but it fails on my system. What system do you have? How did you do it? [...] Debian 11 Bullseye. Before writing this post, I was still on kernel 5.10.0-8-amd64, but I just upgraded and the DKMS compiled successfully for the new 5.10.0-9-amd64. Differences between our systems seem to be as follows: - Minor version difference in proprietary drivers - I installed by using the symlink with `-pro` in its name I might add that I have installed a bunch of firmware from non-free and am running ZFS on Linux as provided by non-free package `zfs-dkms`. HTH Linux-Fan öö pgpKCdX0RatVL.pgp Description: PGP signature
Re: AMD OpenCL support
piorunz writes: On 17/10/2021 23:31, Linux-Fan wrote: Back then, I got it from https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro- w5000-series/radeon-pro-w5500 under "Radeon(TM) Pro Software for Enterprise on Ubuntu 20.04.2" and the download still seems to point to a file with the same SHA-256 sum. It could be worth trying the exact same version that I used? Thanks for your reply. I have Radeon 6900XT, which is different type of card. Not sure if https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro- w5000-series/radeon-pro-w5500 will work for me? I use a Radeon Pro W5500. AMDs website leads me to https://www.amd.com/de/support/graphics/amd-radeon-6000-series/amd-radeon-6900-series/amd-radeon-rx-6900-xt for your GPU and proposes to download "Radon(TM) Software for Linux Driver for Ubuntu 20.04.3" which seems to be a different TGZ than what I am using (it has 1290604 vs. 1292797). I cannot find the list of compatible GPUs for the particular package I have downloaded, the documentation only tells me about "Stack Variants" quoting from it (amdgpu graphis and compute stack 21.20 from the TGZ with 1292797): | There are two major stack variants available for installation: | | * Pro: recommended for use with Radeon Pro graphics products. | * All-Open: recommended for use with consumer products. Hence it is clear that AMD proposes using the "amdgpu-pro" only with the "Radon Pro" graphics cards. Whether that also means that the driver is incompatible with "consumer products", I do not know. Searching online yields these links: * https://wiki.debian.org/AMDGPUDriverOnStretchAndBuster2 * https://www.amd.com/en/support/kb/release-notes/rn-amdgpu-unified-linux-21-20 The second page indicates that "AMD Radeon™ RX 6900/6800/6700 Series Graphics" are compatible with the "Radeon(TM) Software for Linux(R) 21.20". Now whether that document is the correct one to correspond with my downloaded TGZ I cannot really tell. But if they match, it may as well indicate that it is possible to use that driver with your GPU, too. HTH Linux-Fan öö [...] pgpQdJA4EUpZ6.pgp Description: PGP signature
Re: How to install official AMDGPU linux driver on Debian 11?
Markos writes: Em 17-10-2021 19:47, piorunz escreveu: On 17/10/2021 22:27, Markos wrote: Hi, Please, could someone suggest a tutorial (for a basic user) on how to install the driver for the graphics card for a laptop Lenovo IdeaPad S145 with AMD Ryzen™ 5 3500U and AMD Radeon RX Vega 8 running Debian 11 (Bullseye). I found a more complete tutorial just for Stretch and Buster: https://wiki.debian.org/AMDGPUDriverOnStretchAndBuster2 What are the possible risks of problems using these AMD drivers? [...] No reply so far. So, it seems that no one is interested in this question. :-( Or none managed to do this installation, yet. [...] Before your initial post, there was already some discussion about a very similar case in the following thread: https://lists.debian.org/debian-user/2021/10/msg00700.html Summary: Just following AMDs instructions may lead to compile errors (see https://lists.debian.org/debian-user/2021/10/msg00738.html) whereas it worked for my GPU and downloaded driver: (see https://lists.debian.org/debian-user/2021/10/msg00738.html) I am interested in the questions of yours, but unfortunately cannot provide much of an assistance beyond what I already wrote in the other thread. HTH Linux-Fan öö pgpbWc1g5EV6Y.pgp Description: PGP signature
Re: Leibniz' "best of all possible worlds" ...
Ricardo C. Lopez writes: Use case: At work (a school) they use Windows 10 and IT is kind of fundamentalist about it. So I am thinking of "just" using the RAM and the processor in their machine. I am thinking of: * running Debian Live from an external USB attached DVD player

May work, but it is going to run slowly: while live images today are still ISO files, running them from actual DVDs is slow, and it will need dedicated preparation time before each use to start up such a live image.

* via Qemu, which, of course, I will have to install and for which I may need admin rights,

I am not sure whether QEMU on Windows supports virtualization acceleration yet. If yes, this route might be quite feasible. If no, you could consider using one of the other solutions like Microsoft Hyper-V, VirtualBox or VMWare Player. All of them need admin rights. Also: Where would you store the VM's HDD image?

* attached an external pan and/or microdrive with whatever code I need for my business.

This is what I would probably try first. Especially consider these two variants:

- Live system from an external pen drive (16 GB stick or similar). You might consider adding a "persistence" partition such that work results can be saved (a sketch follows below).

- Installed system from an external hard drive or large pen drive (> 32 GB). This allows for the most "natural" feel of Linux because all settings will be saved persistently by default.

Is such an environment possible? What kinds of technical problems do you foresee with such setup?, probably with the BIOS? Any tips you would share or any other way of doing such thing (I don't like to use Windows, but at work you must use it)?

Problem I foresee: Most likely, your admins have deactivated booting from external devices.

You could also tackle this with a networked approach: Given that your Windows PCs are most likely already properly connected to a common local network, you could make use of that by providing a central "Linux server" - if your admins insist, it could be a Windows host with Linux in VMs. Then, you could access it from any Windows host either with virtualization client software (VMWare vSphere?) or through the SSH and VNC protocols.

HTH and YMMV Linux-Fan öö
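A rough sketch of preparing such a pen drive with a persistence partition, reconstructed from memory of the Debian Live manual; the image name and device are placeholders and /dev/sdX must be double-checked against `lsblk` before writing:

~~~
# write the live image to the stick (destroys its contents)
# cp debian-live-11.0.0-amd64-lxde.iso /dev/sdX && sync
# create an additional partition in the remaining free space (fdisk/parted), then:
# mkfs.ext4 -L persistence /dev/sdX3
# mount /dev/sdX3 /mnt && echo "/ union" > /mnt/persistence.conf && umount /mnt
~~~

Booting the stick with the additional kernel parameter "persistence" should then keep changes across reboots; see the Debian Live manual for the details.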
Re: LXQT desktop environment hangs?
kaye n writes: Hi Friends! LXQT says, It will not get in your way. It will not hang or slow down your system. However, I've experienced the opposite, especially when I'm using Firefox browser and attempt to open several pages in different tabs. I don't think the problem lies with Firefox though because I didn't have that problem on Debian 10 XFCE. I am now running Debian 11 LXQT.

Note that from Debian 10 XFCE to Debian 11 LXQT a lot of things changed. I am pretty sure that it is not LXQT's fault here, especially if the problem only occurs while Firefox is running.

I'd suggest trying to debug _why_ Firefox has become so much slower with the upgrade. It is unlikely that the new Firefox version is simply slower than the old one (although it may be possible). What I'd rather guess is that something else changed in the system which now causes it to run slower. What GPU are you using? How is the RAM load during the slow phases? Can you close individual tabs to stop the slowness, i.e. can you make out that the high load is caused by a specific website or a specific type of website?

It could be possible that during the upgrade a combination of driver+GUI was installed that is no longer compatible with your GPU. This would typically manifest in high CPU load for things like video playing or animations of all kinds, because the system falls back to "software rendering" in case the GPU is not supported properly (a quick check for this is sketched below).

I don't want to use XFCE again just because I want to try another. Could LXDE be better? Is there a way I can install LXDE to my existing Debian 11 running LXQT? Or should I just fresh install Debian 11 with LXDE as the default desktop environment?

Adding LXDE to your existing installation should be as simple as

# apt-get install task-lxde-desktop

Watch out if APT displays any conflicts with your existing packages. During login, you should then be able to select different types of "sessions", including LXDE and LXQT ones. From experiments on slow (old, i386) machines, my experience was that LXDE was indeed a little bit faster than LXQT, but the difference is probably not notable on anything but >13 year old hardware :)

Thank you for your time.

HTH and YMMV Linux-Fan öö

OT: If you want something that _really_ does not get in the way or slow down, consider switching to any lightweight window manager like Fluxbox, IceWM, i3 etc. These are known to be _very_ responsive even on old machines. Of course, this will not solve the slowness caused by the software you use. Web browsers will remain bulky and large as many sites do not work with the lightweight browsers anymore...
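A quick way to check whether the session is stuck on software rendering; `glxinfo` is in the `mesa-utils` package:

~~~
$ glxinfo | grep -E "direct rendering|OpenGL renderer"
~~~

If the renderer string reports something like "llvmpipe" instead of the actual GPU, the system is rendering in software, and that alone can explain sluggish browser behaviour.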
Re: LXQT desktop environment hangs?
kaye n writes: SORRY I SHOULD HAVE SENT IT TO DEBIAN LIST Never mind, I am taking this as an OK to post my answer to the list :) On Sun, Oct 31, 2021 at 9:16 PM kaye n wrote: On Fri, Oct 29, 2021 at 10:35 PM Linux-Fan wrote: [...] I'd suggest to try debugging _why_ Firefox has become so much slower as per the upgrade. It is unlikely that it is the new Firefox version being slower than the old ones (although it may be possible). What I'd rather guess is that something different changed in the system which now causes it to run slower. What GPU are you using? How is the RAM load during the slow phases? Can you close invidual tabs to stop the slowness i.e. can you make out that the high load is caused by a specific website or a specific type of website. It [...] What GPU are you using? Device-1: Intel 82945G/GZ Integrated Graphics driver: i915 v: kernel Display: x11 server: X.Org 1.20.11 driver: loaded: intel unloaded: fbdev,modesetting,vesa resolution: 1366x768~60Hz OpenGL: renderer: Mesa DRI Intel 945G v: 1.4 Mesa 20.3.5 Can you close invidual tabs to stop the slowness Almost always no. Actually, this looks pretty OK to me. I conclude that it is most likely not to be a GPU-related issue, although I do not know what else might be causing the slowness. Others suggested to check the CPU load with `htop`. This might indeed be a good next thing to try? HTH Linux-Fan öö pgpZXeQQQzcJd.pgp Description: PGP signature
Re: Is "Debian desktop environment" identical to "GNOME" upon installation?
Brian writes: On Fri 05 Nov 2021 at 17:02:01 +, Andrew M.A. Cater wrote: > On Fri, Nov 05, 2021 at 04:04:17PM +, Brian wrote: > > On Fri 05 Nov 2021 at 13:43:29 +, Tixy wrote: > > > > > On Fri, 2021-11-05 at 07:47 -0500, Nicholas Geovanis wrote: > > > > On Fri, Nov 5, 2021, 7:21 AM Greg Wooledge wrote: [...] > > > > > With the "Live" installers, the default is different. > > > > > > > > And if I may ask: Why is it different? If there is a reason or two. > > > > > > Guessing here... because a live version already has a desktop > > > environment on the disk, so it make sense to default to installing that > > > one. E.g. if you choose, say, the XFCE live iso, it would default to > > > XFCE not Gnome. Would be a bit perverse otherwise. AFAICT it does not only "default" to the DE contained within the live system but rather does not even show the choice screen because it installs by copying/extracting the live system's data and hence, the DE (and other software choices) are already set. See below. > > I rather thought the Live images contained a copy of d-i but am not > > going to download an ISO to refresh my menory. I will offer > > > > https://live-team.pages.debian.net/live-manual/html/live- > > manual/customizing-installer.en.html > > > > I'd see it as a bit unusual for this copy to differ from the regular d-i. > A few things: [...] > 3. The live CDs are designed so that you download the one with the desktop > you want. The "standard" one installs a minimum Debian with standard > packages and no gui. OK, but the relevance to the OP's issue is obscure. Does it need to taken into account for the issue raised? TL;DR: Live Installers do not present the DE selection screen hence it should not relate to the OP. > 4. Live CD install is not guaranteed to be the same as the traditional > Debian installer. Calamares is very significantly different. Live CD/DVD is > maintained by a different libe CD team and not by the Debian media team. Ah! Calamares. It alters the way tasksel behaves in d-i? Heaven help us! Is that is what is meant when it is claimed by Greg Wooledg: With the "Live" installers, the default is different"? Calamares introduces a new ball game? [...] Let me try to clarify this a little bit from my experience as an "advanced" user :) Calamares is an entirely separate installer that can be invoked from within a running (live) system. It is _one_ way to install Debian from a live system but it is not the only one. It is worth stressing that there is _no_ interaction between Calamares and d-i and that they prsent different screens. Behind the scenes, Calamares invokes an `rsync` to copy the data from within the live system to the target. For a typical session in Calamares, see [1] for an example from Debian Buster. Now d-i is separate in that it does not run from within the live system but has to be invoked _instead_ of the respective live system from the boot menu. It is, however, contained on the same ISO image/DVD together with the live system's data. The d-i variant used on live systems does not ask for the choice of DE because its software selection cannot be customized like in the regular d-i. Instead, it simply copies the data from the live file system to the targed drive (? I am not exactly sure on this one ?). See [2] for an example from Debian Buster and note the absence of the tasksel screen. Now the regular d-i shows the tasksel screen and asks for which DE to install. See [3] for an example from the Debian Bullseye Alpha 3 installer. 
Here are some "real screenshots" :) [1] https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m2-dl1080-i386-lxde-calamares.xhtml [2] https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m2-dl1080-i386-lxde-di.xhtml [3] https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m6-d11a3-i386-netinst.xhtml HTH Linux-Fan öö pgpK1tyhC5AEL.pgp Description: PGP signature
Re: Debian version
Koler, Nethanel writes: Hi, I am Nati. I am trying to find a variable that is configured in the linux-headers that can tell me which Debian version I am on Any reason for not using /etc/os-release instead? IIRC this one is available on RHEL _and_ Debian systems. For example in RedHat After downloading the linux-headers I can go to cd /usr/src/kernels//include/generated/uapi/linux There, there is a file called version.h where they define these variables #define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c)) #define RHEL_MAJOR 8 #define RHEL_MINOR 4 When I tried the same with Debian I got to a dead end Can you please help me find something similar in the linux-headers for Debian? I tried $ grep -RF Debian /usr/src and got a few hits, among those are | .../include/generated/autoconf.h:#define CONFIG_CC_VERSION_TEXT "gcc-10 (Debian 10.2.1-6) 10.2.1 20210110" | .../include/generated/compile.h:#define LINUX_COMPILER "gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2" | .../include/generated/compile.h:#define UTS_VERSION "#1 SMP Debian 5.10.46-5 (2021-09-23)" | .../include/generated/package.h:#define LINUX_PACKAGE_ID " Debian 5.10.46-5" If your goal is to evaluate them programmatically during compile-time of a C project, this might not be ideal though, because all of the values I found seem to be strings. HTH Linux-Fan öö pgpliwZzHZIW3.pgp Description: PGP signature
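For completeness: /etc/os-release is a plain KEY=VALUE file and can simply be sourced from a shell script (a minimal sketch; the exact values depend on the installed release):

~~~
$ . /etc/os-release && echo "$ID $VERSION_ID $VERSION_CODENAME"
debian 11 bullseye
~~~

For compile-time use in a C project one could pass such a value on the compiler command line (e.g. a hypothetical -DMY_DEBIAN_VERSION="$VERSION_ID") instead of relying on the string-valued macros found in the headers.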
Re: Emoji fonts in Debian [WAS:] Re: How to NOT automatically mount a specific partition of an external device?
Nate Bargmann writes: * On 2021 26 Nov 11:36 -0600, Celejar wrote: > On Thu, 25 Nov 2021 10:43:16 + > Jonathan Dowland wrote: > > ... > > > 👱🏻Jonathan Dowland > > ✎ j...@debian.org > > 🔗 https://jmtd.net > > I finally got tired of seeing tofu for some of the glyphs in your sig, > so I looked up their Unicode codepoints: Interestingly, I see the glyphs in Mutt running in Gnome Terminal and in Vim as I edit this in the same Gnome Terminal. My font is one installed locally, Droid Sans Mono Slashed which provides the zero character with a slash. I know that there is keyboard sequence in Gnome Terminal (Ctl-Shift-E then Space) to bring up a menu to select Unicode glyphs. 🐮 - Nate I use the cone e-mail client in rxvt-unicode with the Terminus bitmap font and I see only the icon next to `j...@debian.org`. Apart from that, the first line of the signature has two squares, the third line one and the post by Nate has a single square, too. I can view the glyphs correctly by saving the mail as text file and opening it with mousepad. `aptitude search ~inoto` returns the following here: | idA fonts-noto-color-emoji- color emoji font from Google | i A fonts-noto-core - "No Tofu" font families with large | i A fonts-noto-extra - "No Tofu" font families with large | i A fonts-noto-mono - "No Tofu" monospaced font family wi | i A fonts-noto-ui-core I am pretty fine with _not_ seeing the correct glyphs by default given that I do not want fancy colorful icons in my terminals anyway :) YMMV Linux-Fan öö [...] pgpGNsO7W6ND3.pgp Description: PGP signature
Re: Looking for reccomendations
Juan R.D. Silva writes: The headphone jack failed on my Dell M4800 laptop. I need to find a reliable External USB Sound Card/Audio Adapter with decent stereo audio output, with 3.5mm Stereo Headphone (3 pole plug) and Mono Microphone (nice to have) Jacks. It should be available in North America. Back when I wanted a USB sound card, I bought "Creative Sound Blaster PLAY! 3" [1]. It works out of the box with Debian stable (back then it was Debian 10) and PulseAudio+ALSA. I did not check if any of the advertised advanced audio functionality (higher sampling rate etc.) works under Linux. Note also that I used this with a desktop-style system and thus do not know about its power consumption. I gather it is one of the more expensive ones. From my (limited) experience the audio quality is decent and it has a nice extra feature: The headphone plug supports 3-pole plugs as well as 4-pole plugs. I.e.: If you want to use your smartphone's 4-pole headset, it will work, too. It also has a dedicated microphone jack to use for "regular" PC headsets. AFAICT, it should be available in America. I bought mine from a retail store in Germany, though. [1] https://www.newegg.com/creative-sound-blaster-play-3/p/N82E16829102100 HTH and YMMV Linux-Fan öö [...] pgpD1mIVhKG9H.pgp Description: PGP signature
Re: downsides to replacing xfce4-terminal?
Greg Wooledge writes: On Sat, Jan 08, 2022 at 12:16:44AM +0100, Michael Lange wrote: > In case you have a mouse with a wheel, what's wrong with middle-button > pasting? It is worth mentioning that the common Windows program to access Linux machines over SSH `putty.exe` has the right-click for paste behaviour. I gather it might be hard to adjust muscle memory to the middle-mouse-click if one switches from Putty to something else. It's virtually impossible to press the wheel without accidentally turning it, either forward or backward. Depending on where you're clicking, this can have undesired side effects. This problem is highly hardware-dependent. I know there are some mice where the wheel requires much force to press whereas scrolling happens immediately as soon as you touch it. Given that I need both the middle mouse button and the wheel function, I specifically avoid such mice when buying and try to find a mouse with the opposite behaviour: easy-to-click wheel and hard to unexpectedly trigger scrolling. For my uses, I have found the "MadCatz R.A.T.3" to work well (seven years of light mouse usage passed). It seems to be superseded by the "R.A.T.4+" which has a bunch more features that I probably don't need :) Personally, I'm still using a three-button mouse with no wheel. The middle button pastes, just as the gods of Unix (or Xerox) intended. That's also a fine choice iff one can do without the wheel :) YMMV Linux-Fan öö pgp_2lQcnZGC5.pgp Description: PGP signature
Re: smartd
pe...@easthope.ca writes: From: Andy Smith Date: Sat, 22 Jan 2022 19:07:23 + > ... you use RAID. I knew nothing of RAID. Therefore read here. https://en.wikipedia.org/wiki/RAID Reliability is more valuable to me than speed. RAID 0 won't help. For reliability I need a mirrored 2nd drive in the host; RAID 1 or higher. Google of "site:wiki.debian.org raid" returned ten pages, each quite specialized and jargonified. A few tips to establish mirroring can help. Here, it returns a few results, too. I think the most straight-forward is this one: https://wiki.debian.org/SoftwareRAID For most purposes, I recommend RAID1. If you have four HDDs of identical size, RAID10 might be tempting, too, but I'd still consider and possibly prefer just creating two independent RAID1 arrays. If you want to configure it from the installer, these step-by-step instructions show all the relevant installer screens: https://sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/ Also, keep in mind that establishing the mirroring is not all you need to do. To really profit from the enhanced reliability, you need to play through the recovery scenario, too. I recommend doing this in a VM unless you have some dedicated machine with at least two HDDs to play with. [...] HTH and YMMV Linux-Fan öö pgp1HGhRHorQN.pgp Description: PGP signature
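For orientation, creating and checking a two-disk RAID1 with mdadm looks roughly as follows; the device names are placeholders and `mdadm --create` overwrites the given partitions, so double-check them before running anything:

~~~
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mkfs.ext4 /dev/md0
# cat /proc/mdstat           # shows array state and resync progress
# mdadm --detail /dev/md0    # detailed health information
~~~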
Re: on the verge of shopping for new desktop hardware, recommendations?
Tixy writes: On Mon, 2021-03-08 at 05:36 -0500, The Wanderer wrote: > On 2021-03-07 at 22:53, Felix Miata wrote: > > > Linux-Fan composed on 2021-03-08 03:35 (UTC+0100): > > > > > Wrt. power I usually start from CPU + GPU > > > > I used an online calculator > > https://www.bequiet.com/en/psucalculator > > on this system with > > Applying the power-consumption handwave estimates from Linux-Fan's > larger mail, that would be: > > > i3-7100T (TDP 35W) supports up to 3 displays up to 4096x2304 > > 35 TDP is Thermal Design Power, it doesn't mean the max power. Whilst experimenting with my new desktop, with a 65W TDP CPU, the power of the whole system (measured at the mains supply) went from 16W idle to 170W by executing "while : ; do : ; done" for each core. I'd guess the bulk of that is the CPU, not the memory system. Yes, this is good to keep in mind. For my system, the power consumption went from 100W idle to 264W with your test, i.e. +164W which is close to the processor TDP of 165W. Using a different "benchmark" [1], I achieved 288W i.e. more than the TDP. Of course, the UPS' power display is slow and hence I would not notice the real spikes :) CPU manufacturers do not seem to publish max power figures AFAICT. Hence it seems best to estimate the additional power needed based on experience/tests? It soon drops to 130W as the thermal protection kicks in, but you'd want a PSU to cope with the peaks. And maxing out all cores isn't just a theoretical exercise, transcoding video files or compiling programs will happily do that. The interesting thing with modern CPUs is that even applications that seemingly cause 100% CPU usage (like my benchmark [1]) do not actually stress the CPU the most -- I still do not know all the details about that, though. In part, it seems to be related to some instructions needing more power than others (vector extensions are known to be power-hungry). [1] https://masysma.lima-city.de/32/bruteforce3.xhtml Linux-Fan öö [...] pgpcvMMWz1KY5.pgp Description: PGP signature
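For reference, the per-core busy loop from the quoted test can be spawned for all cores at once from an interactive bash session like this (a quick sketch; the kill line stops the loops again):

~~~
$ for i in $(seq "$(nproc)"); do ( while : ; do : ; done ) & done
$ kill $(jobs -p)    # terminate all busy loops again
~~~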
Re: Use motherboard video-out or GPU’s
Pankaj Jangid writes: I have a confusion. Actually there are two. Please help me understand this. Suppose I have a good motherboard with two GPUs installed in the PCIe slots. Now, should I connect the monitor to the motherboard video-out or should I use GPU’s output? Suppose the OS (Debian GNU/Linux in this case) is fully configured to utilize the GPUs i.e. drivers etc. are set up. Will the graphic system be able to utilize the GPUs irrespective of where I have connected the monitor? Connect the monitors to the GPU/cards you want to use. Otherwise, it is most likely that graphics output will be computed by a processor-integrated graphics unit (if present, otherwise black screen). Here are the two special cases that immediately come to mind: * It is possible to utilize GPUs that are not connected to screens for computation purposes (e.g. OpenCL/NVidia CUDA). Unless you are doing this all the time there is no reason against using them for video output, too :) * In case of mobile devices there are some which can render on a different GPU than the monitor is connected to but for usual desktops this is not the case. HTH Linux-Fan öö [...] pgp9Z8SztHDoq.pgp Description: PGP signature
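To verify which GPU actually renders the desktop, the OpenGL renderer string can be checked (assuming the mesa-utils package is installed):

~~~
$ glxinfo | grep "OpenGL renderer"
~~~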
[OT] Debian Offtopic (Was: Re: Social-media antipathy))
Brian writes: On Thu 18 Mar 2021 at 21:09:58 +0100, deloptes wrote: [...] > paranoid - isolate the machine where you read and write, encrypt and > decrypt your messages! I was thinking of starting a crowdfund project for > such a device, but no time to even draft a business plan. Perhaps you and all the others could use -debian-offtopic to air opinions about life and big-tech? I'll crowdfund the move there :). Just a suggestion from someone who hasn't derived a single benefit from this long-running conversation on privacy. There used to be d-community-offtopic (IIRC), but as far as I remember, it was shut down a few years ago? Back when it still existed, I was subscribed. I have not heard of any new list since (that does not mean that none exists). At least, the following link does no longer work: https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/d-community-offtopic Linux-Fan öö [...] pgpmDYoIi9pCg.pgp Description: PGP signature
Re: how to record sound to mp3 [wav, for those who can]
David Wright writes: On Thu 25 Mar 2021 at 17:40:51 (+0100), Nicolas George wrote: > David Wright (12021-03-25): [...] > > To record, you could type, for example, in another xterm: > > > > $ arecord -d 10 -f cd -v -v -v -D plughw:0,0 /tmp/audiofile.wav > > This command does not record the sound being played. … on your machine. That's why I wrote "If you can't get ALSA to work…". You're a candidate for pulseaudio, I assume. Not sure about that command above (no means to try it just now), but _with_ PulseAudio, I can record the sound that is being played back just fine by means of "monitor" audio devices. E.g. I have the following command to record my screen (`0:v`), the "monitor" device (`1:a`) and a microphone (`2:a`): exec ffmpeg -video_size 1600x1200 -framerate 12 -f x11grab -i :0.0+0,0 -f pulse -ac 2 -i 0 -f pulse -i 1 -c:v libvpx-vp9 -deadline realtime -b:v 2M -c:a libvorbis -map 0:v -map 1:a -map 2:a "recording.webm" adapted from these two sources: -> https://trac.ffmpeg.org/wiki/Capture/Desktop -> https://askubuntu.com/questions/682144/capturing-only-desktop-audio-with-ffmpeg It may of course be true that the hardware _does_ support/accelerate this monitoring capability, but it does not seem to be an entirely uncommon feature? Here, it even works inside virtual machines :) Btw. the existence of monitor devices can be checked in `pavucontrol` where under "Output" it lists two monitor devices here: One for the HDMI output and one for the "Built-in Analog Stereo" Output. AFAICT, this recording facility is getting harder to find on most computers, if you're not prepared to fork out for a sound card. I've been fortunate, in that just as my ancient Pentium III expired, I have acquired a Dell Precision T3500 which has a well endowed (integrated) sound card. I'm still finding my way round it: for example, it also has HDMI playback, but I haven't yet worked out how to exploit it. The machine has one DVI output and two DisplayPorts, so I need to find a DisplayPort/HDMI adapter to see if that would yield anything. [...] As far as I can tell, DisplayPort can transport audio without the need for an HDMI adapter. Here, a Radeon Pro W5500 graphics card is connected to a Dell U2713HM display which has one HDMI, DP, VGA and DVI input each. The W5500 is connected to the DisplayPort and if I play sound to the "HDMI" output, the display outputs that sound through its headphones socket. Similar to your case, there are no HDMI ports on the graphics card. In my case, it is only DisplayPorts. HTH Linux-Fan öö pgpoa49bihnMC.pgp Description: PGP signature
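The device numbers/names to pass after `-f pulse -i` can be listed with pactl (package pulseaudio-utils); monitor devices show up with a `.monitor` suffix:

~~~
$ pactl list short sources
~~~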
Re: Running debian on WSL (windows-system-for-linux)
Dan Hitt writes: Does anybody have any experience running debian on a WSL (windows-system-for- linux) machine? Yes, limited experience with it here :) I need to get a machine for family use, but i would also like to be able to also use it myself. So i would like to be able to ssh in, back up files into it, and do other tasks, maybe even a little programming on it. (A mac can handle all of this sort of thing quite easily, but has a huge price tag.) At least these options come to mind: * Debian Host (e.g. QEMU/KVM) with a Windows Guest System * Microsoft Hyper-V Host with a Debian Guest system * WSL as mentioned * Docker with a Debian container on Windows (Hyper-V in disguise...) My order of preference would be: first item -- first choice, last item -- last resort... TL;DR: Virtualization software supports your points a--d just fine :) [...] In particular, i would like to (a) be able to remotely access the WSL debian just as if it were debian box, including having ssh, rsync, and x windows (b) occasionally do the same sorts of things from its console (c) not have to manually set up and keep alive daemons or special services, (d) as an extra, keep the debian and windows things on separate disks, if possible. Basic commandline like rsync and ssh work fine (from my experience). Getting to run the services automatically can be a little complicated and from my experience _does_ require manual setup. I do not know whether X11 and/or different HDD will work, but I expect `ssh -X` style X11 and VNC to work just fine (not tested). I'm not looking for a multi-boot situation, as i want to be able to access the WSL apparatus while the console is engaged with doing windows operations for somebody else (and i guess the converse as well, although i'm pretty foggy about sshing into windows). No experience on SSHing into Windows here, but Debian's package `remmina` works fine as an RDP client to connect to Windows here. Newly, Windows also has a SSH client that can be used to connect to remote Linux servers (but no `-X` IIRC). Thanks in advance for any advice or pointers. Consider using virtualization explicitly. Regardless of which OS is host and which is guest, it is much more stable a kind of technology. Especially considering running services and graphics, VMs provide all the necessary configuration options, whereas WSL is rather limited in this regard, although it can be done with it, too... HTH Linux-Fan öö [...] pgpceKr_R39OZ.pgp Description: PGP signature
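As a rough sketch of the first option (Debian host, Windows guest) using libvirt: all names, sizes and the ISO path below are placeholders, and the virtinst/libvirt/QEMU packages are assumed to be installed:

~~~
# virt-install --name win-guest --memory 8192 --vcpus 4 \
      --disk size=100 --cdrom /path/to/windows.iso --os-variant win10
~~~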
Re: fail2ban Squawk
Martin McCormick writes: /lib/systemd/system/fail2ban.service:12: PIDFile= references path below legacy directory /var/run/, updating /var/run/fail2ban/fail2ban.pid → /run/fail2ban/fail2ban.pid; please update the unit file accordingly. So I looked in to that file and the actual line they were referring 2 is numbered 15 and points fail2ban.pid to /var/run/fail2ban/fail2ban.pid where it certainly lives with a recent date. What is the problem exactly? I am not sure about the "why" behind all this, but the preference of /run over /var/run is documented in the Filesystem Hierarchy Standard: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s15.html On my Debian 10 system, /var/run is a symlink to /run, allowing for compatibility with programs using that path. HTH Linux-Fan öö pgp4nvTHgIVeb.pgp Description: PGP signature
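The warning is harmless, but one way to silence it without editing the packaged unit file is a systemd drop-in override (a sketch; the override file name is an arbitrary choice):

~~~
# mkdir -p /etc/systemd/system/fail2ban.service.d
# printf '[Service]\nPIDFile=/run/fail2ban/fail2ban.pid\n' \
      > /etc/systemd/system/fail2ban.service.d/override.conf
# systemctl daemon-reload && systemctl restart fail2ban
~~~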
Re: Converting markdown to PDF
Paul M Foster writes: I'm trying to use pandoc to convert markdown files to PDF. But when I try, pandoc dumps out with an error that it needs "pdflatex" or some other similar converter to do its job. No such "pdflatex" package exists in the Buster stable archives. I'm sure something does this, but I don't know what it is. Does anyone else know? pdflatex is provided by package texlive-latex-base. Some hints about which packages might be needed can be seen in pandoc's "Suggests": ~$ aptitude show pandoc Package: pandoc [...] Suggests: texlive-latex-recommended, texlive-xetex, texlive-luatex, pandoc-citeproc, texlive-latex-extra, context, wkhtmltopdf, librsvg2-bin, groff, ghc, nodejs, php, perl, python, ruby, r-base-core, libjs-mathjax, node-katex [...] I personally have the following texlive packages installed: ~$ aptitude search ~itexlive i A texlive - TeX Live: A decent selection of the TeX Li i A texlive-base- TeX Live: Essential programs and files i A texlive-binaries- Binaries for TeX Live i A texlive-extra-utils - TeX Live: TeX auxiliary programs i A texlive-font-utils - TeX Live: Graphics and font utilities i A texlive-fonts-extra - TeX Live: Additional fonts i A texlive-fonts-extra-links - TeX Live: i A texlive-fonts-recommended - TeX Live: Recommended fonts i A texlive-games - TeX Live: Games typesetting i A texlive-generic-extra - TeX Live: transitional dummy package i A texlive-generic-recommended - TeX Live: transitional dummy package i A texlive-lang-german - TeX Live: German i A texlive-lang-greek - TeX Live: Greek i A texlive-latex-base - TeX Live: LaTeX fundamental packages i A texlive-latex-extra - TeX Live: LaTeX additional packages i A texlive-latex-recommended - TeX Live: LaTeX recommended packages i A texlive-pictures- TeX Live: Graphics, pictures, diagrams i A texlive-plain-generic - TeX Live: Plain (La)TeX packages i A texlive-pstricks- TeX Live: PSTricks i A texlive-science - TeX Live: Mathematics, natural sciences, c I am pretty sure that not all of them are needed. It might be sufficient to begin with `texlive-latex-base` and only install others if pandoc keeps complaining. HTH Linux-Fan öö pgpxlIqHwvT_f.pgp Description: PGP signature
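A minimal round trip might look like this (input.md is a placeholder file name; pandoc may still ask for additional texlive packages depending on the document's features):

~~~
# apt install texlive-latex-base texlive-latex-recommended texlive-fonts-recommended
$ pandoc input.md -o output.pdf
~~~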
Re: Converting markdown to PDF
Paul M Foster writes: On Thu, Apr 01, 2021 at 10:46:09PM +0200, Linux-Fan wrote: [...] > pdflatex is provided by package texlive-latex-base. > > Some hints about which packages might be needed can be seen in pandoc's > "Suggests": > > ~$ aptitude show pandoc [snip] Thanks. When I used "-t latex" and tried outputting to PDF after installing some of these dependencies, I did get a PDF. Unfortunately, it looks about as horrible as most LaTeX docs. I'll keep tweaking it. Yes, I do not like the default style either :) There are multiple options to change the appearance without digging too deeply into the generated LaTeX code. E.g. I use a wrapper script for the following command:

pandoc \
    -t latex \
    -f markdown+compact_definition_lists+tex_math_single_backslash+link_attributes \
    -V papersize=a4 \
    -V classoption=DIV10 \
    -V fontsize=12pt \
    -V documentclass=scrartcl \
    -V fontfamily=kpfonts \
    -V babel-otherlangs=greek \
    -V babel-newcommands=\\usepackage{teubner} \
    -V toc-depth=1 \
    -V x-masysma-logo=/usr/share/mdvl-d5man2/tplpdf/logo_v2 \
    -V x-masysma-icon=/usr/share/mdvl-d5man2/tplpdf/masysmaicon \
    --default-image-extension=pdf \
    --template=/usr/share/mdvl-d5man2/tplpdf/masysma_d5man.tex \
    --resource-path="$(dirname "$1")" \
    -o "$(basename "$1").pdf" \
    "$1"

Some of the `-V` options improve (IMHO) the output style. The choice of font already makes a huge difference (but is also a matter of taste!). Additionally, I use my own “template” file to do some advanced things like add logos/icons etc. If you are interested in an example, see https://github.com/m7a/bo-d5man2/tree/master/d5manexportpdf HTH Linux-Fan öö [...] pgpcLbF9G7FPS.pgp Description: PGP signature
Re: ubuntu/snap future
Celejar writes: On Fri, 9 Apr 2021 20:48:01 +0300 Andrei POPESCU wrote: > On Vi, 09 apr 21, 08:02:46, Celejar wrote: > > What about cases where the software simply isn't in Debian at all? > > Recently, I've used IntelliJ IDEA and Android Studio, and I'd like to > > set up an OwnCloud server when I get a chance. These, and many other > > complex / fast-changing applications aren't in Debian, but are > > available in containerized app formats. [...] Have the packages I'm interested in not been in unstable because potential maintainers knew they couldn't make it into stable (and now that there's fasttrack, they'll get into unstable), or is there some other reason they can't be in unstable (and hence can't make it to fasttrack either)? I do not know about the exact packages you mentioned but to me it seems quite plausible that there are modern open source applications which cannot be easily included in Debian (not even in unstable). AFAICT it boils down to the fact that some of those applications' build processes do not meet Debian's high quality standards (formally: Debian Policy). See, for instance: * https://lists.debian.org/debian-user/2020/04/msg01281.html * https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packagers-security-nightmare/ * https://lwn.net/Articles/842319/ * https://xkcd.com/2347/ I do not think that the solution to these issues has been found yet -- they run quite deeply and affect (at least) multiple Linux distributions. HTH Linux-Fan öö [...] pgp7hIfVnNYt5.pgp Description: PGP signature
Re: upgrading firefox-esr from 78.6 to 78.9 results in non-working firefox
Eric S Fraga writes: Hello all, I don't frequently upgrade if my system (mostly Debian testing) is working but my bank told me that my browser was out of date and I needed to upgrade. So, I did 'apt update' and 'apt install firefox-esr' to upgrade from version 78.6 to version 78.9. Start up the new version and not a single site I tried worked. Just blank pages. Reverted to 78.6 and it works just fine (although the bank is not happy with me...). I've checked the Debian bugs list but did not see anything relevant (although there are so many bug reports that it's easy to miss one...). I don't have time to do much more investigating at the moment but thought I'd post this just in case anybody else has had the same behaviour. I'll hopefully investigate more in a few days. Did you try to completely stop all of the previous version's processes? With new Firefox updates I frequently observe that the already running old version "disintegrates" -- I cannot open new pages and get error messages or blank pages instead. This can then be solved by closing all Firefox windows and starting it such that only the new executable is running. HTH Linux-Fan öö [...] pgpUDkqrUrP5f.pgp Description: PGP signature
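To check for (and, if necessary, get rid of) leftover processes of the old version, something like the following can be used:

~~~
$ pgrep -a firefox-esr
$ pkill firefox-esr    # only if the remaining processes do not exit on their own
~~~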
Re: how to use fetchmail with MS Office 365 / davmail?
Michael Grant writes: I saw in the last 6 months a daemon that let you get oauth tokens on linux and then it refereshed the token indefinitely until told to stop. Essentially making the token available on linux so you could use it in another program that requied a password, for example fetchmail or getmail. I've tried to find it but I'm turning up nothing. I'm pretty sure I didn't imagine it! Does anyone recall the name? This could definitely be helpful for fetching mail from an account with oauth setup. I do not know about a daemon, but `oathtool` (package `oathtool`) does a fine job for TOTP based tokens here. Usage: $ oathtool --base32 JBSWY3DPEHPK3PXP 282760 Of course, you will need to get the Base32 secret from somewhere. It can often be accessed from the screens that tell you to scan a QR code with an authenticator app if you press something like "I cannot scan the code" or such. HTH Linux-Fan pgpmTYmvChzw1.pgp Description: PGP signature
Re: how to use fetchmail with MS Office 365 / davmail?
Kushal Kumaran writes: On Thu, Apr 29 2021 at 06:57:02 PM, Linux-Fan wrote: > Michael Grant writes: > >> I saw in the last 6 months a daemon that let you get oauth tokens on >> linux and then it refereshed the token indefinitely until told to >> stop. Essentially making the token available on linux so you could >> use it in another program that requied a password, for example >> fetchmail or getmail. >> >> I've tried to find it but I'm turning up nothing. I'm pretty sure I >> didn't imagine it! Does anyone recall the name? This could >> definitely be helpful for fetching mail from an account with oauth >> setup. > > I do not know about a daemon, but `oathtool` (package `oathtool`) does a > fine job for TOTP based tokens here. Usage: > >$ oathtool --base32 JBSWY3DPEHPK3PXP >282760 > > Of course, you will need to get the Base32 secret from somewhere. It can > often be accessed from the screens that tell you to scan a QR code with an > authenticator app if you press something like "I cannot scan the code" or > such. oauth is not the same as oath You are right. Sorry for the noise :( Linux-Fan öö [...] pgpAbFqcEH9Z2.pgp Description: PGP signature
Re: OT: minimum bs for dd?
Bob Bernstein writes: As noted, is there a minimum bs size for dd? AFAIK it's one byte. HTH Linux-Fan öö pgpQux2oMAtlK.pgp Description: PGP signature
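A quick way to convince oneself (copies ten blocks of one byte each):

~~~
$ dd if=/dev/zero of=/dev/null bs=1 count=10
~~~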
Re: How do I permanently disable unattended downloads of software/security updates?
Greg Wooledge writes: On Wed, Jun 02, 2021 at 07:33:32PM +0300, Reco wrote: >Hi. > > On Wed, Jun 02, 2021 at 06:27:45PM +0200, Stella Ashburne wrote: > > Output of systemctl list-timers [...] > > 6 timers listed. > > Pass --all to see loaded but inactive timers, too. > > The most important parts of "systemctl list-timers" (your problem > considered) are UNIT and ACTIVATES columns, and your result lacks them > for some reason. The designers of systemctl made some odd choices. They drop you into a weird interactive mode by default, and expect you to be willing and able to scroll around to see the fields of this report. Worst of all is that you may not even *know* that you're supposed to do this. [...] to work, you need to redirect or pipe systemctl's output so that it isn't going to a terminal. systemctl list-timers | cat Of course, this is still ugly as sin, because the designers of systemctl don't understand that terminals are 80 characters wide, always and forever. They just dump a bunch of longer-than-140-character lines and let them wrap as they will. Well, at least the information is there, even if it's hard to read. I have added the following aliases to all my systems: alias systemctl='systemctl -l --no-pager' alias journalctl='journalctl --no-pager' But of course, I like that `cat` trick for systems which I do not own. Much easier than remembering that it was `--no-pager` :) [...] Anyway, I suspect that the OP might find some useful information from this command: systemctl list-timers | grep apt As far as I can tell, these ultimately lead to /usr/lib/apt/apt.systemd.daily which in turn claims to honor `APT::Periodic::Enable "1";` from /etc/apt/apt.conf.d. Still it is worth checking the logs from the systemd timers, e.g.: journalctl --no-pager -u apt-daily-upgrade.service journalctl --no-pager -u apt-daily.service It is also possible that there might be systemd user timers? systemctl --user --no-pager -l list-timers Here, the outputs are as follows: ~~~ # journalctl -u apt-daily-upgrade.service -- Logs begin at Wed 2021-06-02 12:24:45 CEST, end at Wed 2021-06-02 20:47:39 CEST. -- Jun 02 12:24:55 masysma-18 systemd[1]: Starting Daily apt upgrade and clean activities... Jun 02 12:24:56 masysma-18 systemd[1]: apt-daily-upgrade.service: Succeeded. Jun 02 12:24:56 masysma-18 systemd[1]: Started Daily apt upgrade and clean activities. # journalctl -u apt-daily.service -- Logs begin at Wed 2021-06-02 12:24:45 CEST, end at Wed 2021-06-02 20:54:02 CEST. -- Jun 02 12:24:55 masysma-18 systemd[1]: Starting Daily apt download activities... Jun 02 12:24:55 masysma-18 systemd[1]: apt-daily.service: Succeeded. Jun 02 12:24:55 masysma-18 systemd[1]: Started Daily apt download activities. $ systemctl --user --no-pager -l list-timers 0 timers listed. Pass --all to see loaded but inactive timers, too. ~~~ I do not believe to have observed the automatic download behaviour the OP sees despite the timers obviously being active and the script running. From the timings (between start and completion of the `apt.systemd.daily`) it seems to not do anything out of the box. I am leaning towards the "DE explanation" -- that the upgrades are not caused by APT's own mechanisms but rather triggered by some DE through opaque means not visible in cron or systemd timers. I am not sure how I would go about identifying the cause there, except for checking the GUI configuration that all related options are turned off? HTH Linux-Fan öö pgpHlpmwe63e_.pgp Description: PGP signature
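If the goal really is to switch the APT-side automatisms off permanently, the following should do it (the file name 99disable-periodic is an arbitrary choice; any DE-specific updater would still have to be disabled separately):

~~~
# systemctl disable --now apt-daily.timer apt-daily-upgrade.timer
# printf 'APT::Periodic::Enable "0";\n' > /etc/apt/apt.conf.d/99disable-periodic
~~~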
Re: How to verify newly burned disc [Was: Fatal error while burning CD]
Michael Lange writes: [...] I discovered then that, unless I missed something, verifying the success of the burning procedure appears to be surprisingly (to me at least) non-trivial. For data-discs I finally found a recipe that seems to work in the archives of debianforum.de : $ cat whatever.iso | md5sum 50ab1d0cba4c1cedb61a6f22f55e75b7 - $ wc -c whatever.iso 8237400064 whatever.iso $ dd if=/dev/sr0 | head -c 8237400064 | md5sum 50ab1d0cba4c1cedb61a6f22f55e75b7 - I usually go for this kind of command: cmp whatever.iso /dev/sr0 If it reports "EOF on whatever.iso" it's fine :) This method does not, however, check the speed or any soft errors that the firmware and driver were able to correct. You could run a `dmesg -w` in parallel to find out about them in a qualitative way (if there are many "buffer IO errors" the burn quality could be bad?). [...] I could not find any way to verify the success after burning audio-CDs though, except of course of carefully listening (the first one burned with the new drive seems to sound ok at first glance, I haven't found the time and leisure yet to listen attentively to 75 min. of weird Japanese jazz-music though :) So, does anyone know about a way to verify the integrity of burned audio-CDs? [...] Can you mount it and view the individual tracks as files? Did you supply the .wav files exactly as they were going to be burnt onto the disc? If both are yes, it might make sense to compare the individual track files against your supplied sources? I have not tried to verify audio discs this way, though. HTH and YMMV Linux-Fan öö pgpPPQlh6zJ8m.pgp Description: PGP signature
Re: [OFFTOPIC] Mass storage prices and form factor (was: ISO to external SSD)
Stefan Monnier writes: Peter Ehlert [2021-08-03 08:27:26] wrote: > On August 3, 2021 8:17:58 AM Stefan Monnier wrote: >>> Second, the price of spinning disks is such that it makes no >>> sense to buy anything smaller than 4TB, which will fit all this, >>> and 6-8 TB are often a reasonable idea even for single users. >> You seem to assume a 3½" form factor which either requires a "large" >> desktop or an external enclosure. > Not really. My HP z620 work station ain't huge. [...] >> Personally I consider this form factor dead every since I bought my >> first 2TB 2½" disk. > If you use Lots of drives (4tb), and are on a limited budget, like me... > The 3.5 form factor is more cost effective. That's why I wrote "personally". AFAIK the proportion of computer users which "use lots of drives" is quite small. Maybe it's higher among Four internal drives here (2x3.5" HDD, 2x2.5" SSD), not sure if that counts as a lot. Debian users, admittedly, but still your original statement above lacks a qualification like "AFAIC" or "IMO" ;-) OK, to add actual info to this message, here's a quick look at today's lowest prices in my "usual" store: 2TB $ 64 3½ HDD 32 $/TB 3TB $ 65 3½" HDD 22 $/TB 4TB $104 3½" HDD 26 $/TB 6TB $139 3½" HDD (external) 23 $/TB 8TB $199 3½" HDD 25 $/TB 10TB $323 3½" HDD (external) 32 $/TB 12TB $339 3½" HDD (external) 28 $/TB I'm surprised at how stable the price per TB is over the 2-12 range. It used to be the case that HDD price's curve was not nearly as linear (which reflected the fact that production costs aren't (weren't?) proportional to the drive's capacity). I suspect that the profit margin varies widely over this spectrum. Thank you for putting the numbers together. It is always nice to see some current statistics about it. From memory, 6-8T used to be the sweet spot. 2TB $ 60 2½" HDD (external) 30 $/TB 3TB $125 2½" HDD (external) 42 $/TB (only "recertified" available) 4TB $114 2½" HDD (external) 29 $/TB 5TB $139 2½" HDD (external) 28 $/TB It's also interesting to see that the price per TB is about the same for 2½" HDD as for 3½" HDD, whereas it used to be significantly higher. [ And also that in the 2½" space, your best bet in $/GB is to buy a drive+enclosure, which implies you don't really know what you're getting other than the capacity of the drive. :-( ] That makes 3½" form factor even more dead than I thought (two 4TB 2½" drives should offer better performance than one 8GB 3½" drive and use less space, not sure about power consumption). [...] I still use and like the 3.5" drives mostly due to the following considerations: * It is easier to get 3.5" at 7200 rpm than 2.5". Faster rotation means faster access and this is usually where HDDs are very slow hence I do not want them to be even slower. * 3.5" drives are available with CMR even at high capacities like 6T oder 8T. With 2.5" a limited and expensive selection of server drives offer 2T or 4T with CMR, whereas AFAIK all consumer-grade 2.5" hard drives above 1T are SMR drives. I am not sure if there are any 2.5" CMR hard drives (server or not) above 4T? This means in terms of performance, it is quite possible for a single 3.5" drive to outperform two 2.5" ones (unless you are only doing sequential reads from two drives in parallel). YMMV Linux-Fan -- öö pgpRk8_IwJMav.pgp Description: PGP signature
Re: On improving mailing list [was: How to Boot Linux ISO Images Directly From Your Hard Drive Debian]
Andy Smith writes: Hello, On Sun, Aug 08, 2021 at 11:35:15AM +0200, to...@tuxteam.de wrote: > any ideas on how to make the situation better? To be honest I don't think that mailing lists are a very good venue for user support and I would these days prefer to direct people to a Stack Overflow-like site. The chief advantages of such sites are that posted problems are narrowed down to contain the required information, and answers are ranked so as to make poor answers (and ultimately, disruptive posters) disappear. Ask Ubuntu. I think, works well. The primary disadvantage of Q&A-style sites from my limited experience with StackOverflow is that they often fail to recognize XY-problems. Debian-user is pretty good at helping people to re-phrase their question to actually arrive at and solve the underlying question. I have seen numerous Q&A-sites where the immediate problem was answered but the solution was convoluted, insecure or in some other way dissatisfying and that mostly because the initial question was not open enough to allow for the technically correct answer to be given. There have been a few attempts to set up such sites for Debian, so [...] The previous attempts have sort of started as an announcement that such a site is available, but not followed up by any level of advertising on Debian's web site. The announcement threads on the mailing lists then got dominated by arguments from the same small group of people loudly and repeatedly arguing how they would never use or support such a thing. That's fine, but without a way to continually advertise a site as a support venue, it will not get used. A classic chicken-egg problem :) Also, I think that switching support over to a different medium i.e. from e- mail to Q&A-style will see a different sort of user participating. Hence, the "community" one would find on the Q&A site is not there yet. This explains why it would not be used much (initially) even if there was a lot of advertisement. [...] So in summary, I don't think any of the things that would be necessary to improve the way this list works are going to be popular with the regular posters, while starting over with a different solution requires consensus and support from the Debian project that has up until now not been there. As far as I can tell, Debian's development communication mostly uses e-mail (for bugs, mailing lists, announcements) and IRC (for real-time communication e.g. release testing). Hence it seems only natural that e-mail and IRC would be the primary means to ask for help, too. The idea behind this is (in theory?) that the developers use the same means of communication as the users. Whether the combination of IRC+e-mail is still “up to date” with practices from younger free software projects can still be debated. I have read articles claiming that participating in Linux kernel development is hard because they are not tracking their development using Github issues whereas most other projects today are easily reachable this way. The same principle could possibly be applied to Debian development, too. Btw. as of today, at least three types of support channels are advertised: https://www.debian.org/support In that order (not sure if it is related to priority?): IRC, Mailing lists, Forums Where does the notion that the mailing list is the primary support channel stem from? ~ ~ ~ There are some unlisted discussion and support channels, too. 
A community where one can vote up and down: https://www.reddit.com/r/debian/ I even tried out Reddit for a few weeks but noticing how much data they collect just by my clicks on up/down and choice of topics to read was quite a revelation. Both mailing lists and IRC are in a way more public in that everything one sends is published for all to read but also more private in that what one does not intend to send (which messages I read and how long I take for it, for instance) stays private. HTH Linux-Fan öö [...] pgpKvreXwIXey.pgp Description: PGP signature
Re: Mini server hardware for home use NAS purposes
Jonathan Dowland writes: On Wed, Feb 02, 2022 at 03:11:57PM +0100, Christian Britz wrote: Do you have any recommendations for me? I have much the requirements and my current solution is documented here: <https://jmtd.net/hardware/phobos/> I am using an Intel NUC with Celeron J3455 with 8 GiB of RAM. It has a fan but runs very quietly. Here, it is only a "backup-server" hence the low-speed CPU is not an issue. It is currently still on oldstable (see sheet below), but that's only because I have not found any time to upgrade it yet. Given that it is just a regular amd64 machine, I do not expect any problems with upgrading. ┌─── System Sheet Script 1.2.7, Copyright (c) 2012-2021 Ma_Sys.ma ─┐ │ linux-fan (id 1000) on rxvt-unicode-256colorDebian GNU/Linux 10 (buster) │ │ Linux 4.19.0-18-amd64 x86_64 │ │ 02.02.2022 22:21:37 masysma-16 │ │ up 65 days, 41 min, 1 user, load avg: 0.02, 0.03, 0.00781/7856 MiB │ │ 4 Intel(R) Celeron(R) CPU J3455 @ 1.50GHz│ ├── Network ───┤ │ Interface Sent/MiB Received/MiBAddress │ │ enp2s017925 18624192.168.1.22/24 │ ├─── File systems ─┤ │ Mountpoint Used/GiBOf/GiB Percentage │ │ / 246 181714% │ ├─── Users ┤ │ Username MEM/MiBTop/MEM CPU Top/CPU Time/min │ │ root 271dockerd 0.6% dockerd972 │ │ backupuser 194 megasync 0.2% megasync 382 │ │ linux-fan 32systemd 0% syssheet 0 │ │ monitorix 21monitorix-httpd 0% monitorix-httpd 2 │ └──┘ The system has been running since its installation in 09/2020 in mostly 24/7 operation (a few weeks of vacation per year) with little to no issues -- I only remember overloading it once with a time series database and having to reboot to restore some order :) HTH Linux-Fan [...] pgpobDn0eFivj.pgp Description: PGP signature
Re: Query
Chuck Zmudzinski writes: On 2/7/2022 4:36 PM, Greg Wooledge wrote: On Mon, Feb 07, 2022 at 04:31:51PM -0500, Chuck Zmudzinski wrote: On 2/7/2022 10:50 AM, William Lee Valentine wrote: I am wondering whether a current Debian distribution can be installed and run on an older Pentium III computer. (I have Debian 11.2 on a DVD.) The computer is Dell Dimension XPS T500: Intel Pentium III processor (Katmai) memory: 756 megabytes, running at 500 megahertz IDE disc drive: 60 gigabytes Debian partition: currently 42 gigabytes Debian 6.0: Squeeze Based on what others are saying, it looks like a typical modern Debian desktop environment such as Gnome or Plasma KDE will not work well with such an old system. I suggest you look for a Distro that is tailored for old hardware. Bah, silly. Just use a traditional window manager instead of a bloated Desktop Environment. Problem solved. Which window manager for an extremely resource-limited system? Debian's One could use one of e.g. the following list: - IceWM - i3 - Fluxbox All of them are packaged for Debian and work on low-resource computers. I have successfully deployed i3 on a system with similar specs to the OP's. Mine is still on Debian 10 and not upgraded to Debian 11 yet, though. wiki page on window managers lists more than 30 possibilities. It's not silly to take a look at a distro based on Debian that is tailored for low resources as a starting point to try and build a Debian 11.2 system that will work OK on a Pentium III with less than 1 GB of memory. Debian provides Of course, it's a valid approach :) so many packages, and such distros like antiX can give one an idea about which packages to use when trying to build a Debian 11.2 system that will work well on an older system with such a small amount of memory and such an old CPU. The other option is to ask here for recommendations. Debian is one of the last large/mainstream distributions to still support the i386 architecture, hence it is not unlikely that some people will be running old hardware here (I do for instance :) ). But the *real* problem will come when they try to run a web browser. That's where the truly massive memory demand is. 756 MB is plenty of RAM for daily use of everything except a web browser. Yes, it will be important to try to find a web browser that is the least bloated possible. Again, looking at the browser choices of distros tailored for old hardware can help build a Debian 11.2 system that will work well on old hardware. Independent of the other distros one will need to make a compromise here because: * Any browser supporting all the modern features (mostly JS and CSS3) will be too slow for such an old machine. * Any other browser will be too limited in features to satisfy a modern user's needs. E.g. try to access Gmail or Youtube over any lightweight browser and see how it goes (I suspect it will not work _at all_!) In any case, it will need to be a carefully crafted selection of Debian 11.2 packages to have a decent experience, and most definitely start with a small netinst installation with only the text console to start, and then build the GUI environment carefully from the ground up. On such an old system one should only install what is needed because any additional background service will reduce the already very limited computational capacity.
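Building the GUI from the ground up could then start from something as small as this (package choice is of course a matter of taste):

~~~
# apt install --no-install-recommends xorg i3 xterm
~~~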
Rather than crafting a set of applications it might be easier to start with the question what the machine is going to be used for and then figure out if this is even possible for the hardware and only afterwards check which applications will fit the purpose _and_ resource constraints. E.g. I regularly run `maxima` as a "calculator" app. On an old machine it takes many seconds to run and to compute even simple expressions. Hence I switched to `sc-im` (a lightweight spreadsheet program) on old machines for such tasks. HTH and YMMV Linux-Fan öö [...] pgppno1xg3h5S.pgp Description: PGP signature
Re: Memory leak
Stefan Monnier writes: > I used to have 8 GB on the system, and it would start to thrash at > about 7+ GB usage. I recently ugrade to 16 GB; memory usage is > currently over 8 GB, and it seems to be slowly but steadily increasing. Presumably you bought 16GB to make use of it, right? So it's only natural for your OS to try and put that memory to use. Any "free memory" is memory that could potentially be used for something more useful (IOW "free" = "wasted" in some sense). It's normal for memory use to increase over time, as your OS finds more things to put into it. That was my first intuition, too. There is even a classic website about this very topic: https://www.linuxatemyram.com/ HOWEVER, given that the OP mentions looking at the RSS sizes I think the classic "all memory used" issue is already ruled-out. The issue seems to be modern webbrowsers which could be considered OSes on their own already hence they also claim more resources whenever it is useful for them. Firefox takes just above 1600 MiB here with only six tabs open for four hours. Yet I am pretty sure it would take less were this a "lower-end" system e.g. fewer CPU cores would cause fewer processes to be spawned and hence the memory efficiency might be better in such cases. HTH and YMMV Linux-Fan öö [...] pgpNyJMgXvFcU.pgp Description: PGP signature
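To see which processes actually hold the memory, sorting by RSS helps (RSS is reported in KiB; shared pages are counted once per process, so the column overstates the total):

~~~
$ ps -eo pid,rss,comm --sort=-rss | head
~~~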
Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)
Brian writes: On Sat 12 Feb 2022 at 21:07:10 +0100, to...@tuxteam.de wrote: [...] > This is Firefox's captive portal [1] detection [2]. > > Cheers > > [1] Had I a say in it, I'd reserve a very special place in Hell >for those. Could the process to replace them on, say, public transport be outlined? [...] It highly depends on your jurisdiction and other regulatory requirements, thus I gather there is no comprehensive answer to this question. Alternatives could be any of the following: * Not using a captive portal at all i.e. having just a free WiFi for everyone near enough to receive the radio signal. * Using WPA Enterprise (RADIUS) to have users log in without any website but directly as part of joining the network. This works for very large networks, too. E.g. the `eduroam` common in some universities can be accessed from any of the participating universities' accounts by just entering their campus e-mail address for login. * RFC8910 - Captive-Portal Identification in DHCP and Router Advertisements (RAs). I had never heard of it before searching for “Alternatives to captive portals wifi” online :) See also: * https://radavis.github.io/captive-portal-is-dead/ * https://old.reddit.com/r/HomeNetworking/comments/lrebw5/alternatives_to_a_captive_portal_for_open_networks/ * https://www.rfc-editor.org/rfc/rfc8910.txt HTH Linux-Fan öö pgpevR0soRT0q.pgp Description: PGP signature
Re: Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)
Brian writes: On Sun 13 Feb 2022 at 16:02:53 +0100, to...@tuxteam.de wrote: > On Sun, Feb 13, 2022 at 02:41:31PM +0100, Linux-Fan wrote: > > Brian writes: [...] > > > Could the process to replace them on, say, public transport be > > > outlined? [...] > > * RFC8910 - Captive-Portal Identification in DHCP and Router > > Advertisements (RAs). I had never heard of it before searching > > for “Alternatives to captive portals wifi” online :) > > * Joining a local initiative providing free connectivity (and, of > course, lobbying your local policy makers that this be legal; > the very idea of providing free stuff tends to be suspect). > > Freifunk [...] is one successful example. Interesting. Captive portals provide free connectivity. What's the problem? [...] I do not use Wifi with captive portals very often so I have only experienced a limited subset of problems, but I can think of at least the following issues: - Security: Intercepting requests to arbitrary pages and replying with some other content is quite similar to a MITM adversary. Hence, users following the recommended “prefer HTTPS” usage will get certificate errors instead. The RFC explains this much better than I could do under section “5. Security Considerations”. Also, I think the OP's problem is caused exactly by this. For captive portals to work in an HTTPS-preferring browser, quirks like those implemented by Firefox are needed i.e. try to detect the Internet connectivity by connecting to the vendor's URL... not good for privacy and only a heuristic. - Browser requirements: Captive portals often require a JS-capable browser to accept their terms etc. This is probably acceptable for Notebooks and “Smartphones”, but any other type of device will often be unable to access a captive-portal-protected Wifi. I have not tested it but I would imagine that it would be tough to join such a network for the purpose of playing with a handheld console (e.g. Nintendo 2DS or such) on a train given that the device's webbrowser is very limited. - Actually, not all captive portals provide ”free” connectivity. At least not in the freedom sense. IIRC in Italy, they request your tax number before allowing you to use the Wifi on the trains? You pay with your data... According to [1] I seem to misremember this: They want your phone number or credit card number instead. It seems that on some lines they have eliminated this need for registration (not sure if that means there is no longer any captive portal at all). It might only be anecdotal but here is another counter-intuitive problem caused by captive portals [2]. [1] https://www.trenitalia.com/it/offerte_e_servizi/portale-frecce/come-accedere-al-portale-frecce.html [2] https://ttboj.wordpress.com/2014/11/27/captive-web-portals-are-considered-harmful/ HTH and YMMV Linux-Fan öö pgpOD2dzeuOMS.pgp Description: PGP signature
Re: Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)
Cindy Sue Causey writes: On 2/13/22, Brian wrote: > On Sun 13 Feb 2022 at 16:02:53 +0100, to...@tuxteam.de wrote: >> > > On Sat 12 Feb 2022 at 21:07:10 +0100, to...@tuxteam.de wrote: [...] >> > > > [1] Had I a say in it, I'd reserve a very special place in Hell >> > > >for those. [...] > Interesting. > > Captive portals provide free connectivity. What's the problem? I almost responded to this thread yesterday to say, "Shudder!" My thought process was that it seems like it might be pretty easy for perps hovering out in a parking lot or maybe a nearby building to create a fake captive portal that resembles what users would be expecting to see from the, yes, FREE Internet provider. That would only be possible if this is working like I'm imagining is being described here. That imagination involves a webpage such as what I once encountered popping up unexpectedly while trying to access WIFI through a local grocery store a few years ago. [...] Yes, it works pretty much as you describe with exactly the problematic aspects (see my other post and the RFC linked before). It is not _that_ bad for security because of two key points: - Captive portals cannot bypass protection by TLS certificates. Users will instead be unable to access the respective pages and either get a certificate error or no useful error message at all. - In case of unencrypted/unprotected traffic, adversaries can manipulate that even _without_ captive portals if they setup their own (malicious) “free” WiFi service. HTH Linux-Fan öö pgpm03BfoCPio.pgp Description: PGP signature
Re: Bulseye - TacacsPlus - Configure ?
Maurizio Caloro writes: I found and installed this package, TacacsPlus, on Bullseye. May I please ask for short advice on how to configure it? root@HPT610:# apt search tacacs libauthen-tacacsplus-perl/stable,now 0.28-1+b1 amd64 [installed] Perl module for authentication using TACACS+ server See https://metacpan.org/pod/Authen::TacacsPlus. The package is a Perl module. I.e. it is useful inside Perl scripts. If you do not want to create or use a perl script with that module, it seems unlikely that you would benefit from the package at all? HTH Linux-Fan öö pgpfodrMpgRO5.pgp Description: PGP signature
Re: how many W a PSU for non-gaming Debian?
Henning Follmann writes: On Fri, Mar 04, 2022 at 06:36:35PM +, Andrew M.A. Cater wrote: > On Fri, Mar 04, 2022 at 06:47:14PM +0100, Emanuel Berg wrote: > > Alexis Grigoriou wrote: > > > > >> I've heard that for gaming you would want a 600~800W PSU [...] > > motherboard, RAM and SSD are at most 232W. > > > > CPU AMD mid end (4 cores) 125 > > fans 80 mm (3K RPM) 9 (3*3W = 9W) > > 120 mm (2K RPM) 12 (2*6W = 12W) > > motherboard high end80 > > RAM ~DDR3 (1.5V) 3 (actually it is a DDR4) > > SSD 2.8 > > > > (+ 125 (* 3 3) (* 2 6) 80 3 2.8) ; 231.8W > > > > The only thing left is the GPU, I take it even in that PSU [...] > If your draw is a max of 230W and you use a 300W power supply, you've > still got to account for inrush current to capacitors as the machine is > switched on. > > A larger PSU in wattage terms may have better capacitors, more capacity to > withstand dips and spikes in mains voltage and may have a better power > factor so be more effective overall. > > the cost differential between 300 and 600W should be relatively small. > > Easier to overspecify: the other thing is that larger PSU wattages may have > quieter / better quality fans. I love almost silent PCs. [...] And to add to that, most recent PSUs are very good in terms of efficiency. They are switched and drag much less power when the computer doesn't demand it. I would also go with a 600 W PSU. [...] Please keep the following points in mind when doing PSU wattage sizing for modern PCs: - Judging a CPU by its thermal design power is no longer feasible due to some CPUs permanently overclocking while the actually available cooling power permits it. On some Intel CPUs this can mean about twice the power than you would have expected. If we were to apply this logic directly to the unspecified (?) AMD CPU from the OP's config, it would mean adding 250W for the CPU rather than the 125W from its TDP. - 80+ certified PSUs are rated in terms of their performance at certain load percentages. If you choose a high-power PSU (e.g. 600W) then even if it has a high efficiency according to 80+ it will not necessarily be more efficient than a less highly rated 300W model. To summarize: For the use case, one might want to add the CPU's TDP "another time", i.e. 231.8W + 125W = 356W. Then choose either the next fitting PSU size (400W) or go slightly larger for extra safety e.g. 450W, 500W or even 550W would all be sensible choices. HTH and YMMV Linux-Fan öö pgp0I0qlAv8Hu.pgp Description: PGP signature
Re: how many W a PSU for non-gaming Debian?
Emanuel Berg writes: Linux-Fan wrote: > Please keep the following points in mind when doing PSU > wattage sizing for modern PCs: > > - Judging a CPU by its thermal design power is no longer > feasible due to some CPUs permanently overclocking while > the actually available cooling power permits it. On some > Intel CPUs this can mean about twice the power than you > would have expected. If we were to apply this logic > directly to the unspecified (?) AMD CPU from the OP's > config Oh, the OP has the AMD4 x86_64 CPU that comes with/in the Asus ROG Strix B450-F Gaming motherboard! By default, a motherboard is just that, a motherboard. Unless you have some specific "bundle" package, there is no CPU included with it. According to https://rog.asus.com/motherboards/rog-strix/rog-strix-b450-f-gaming-model/ the board has "AM4 socket: Ready for AMD Ryzen(TM) processors". And according to https://en.wikipedia.org/wiki/Socket_AM4 this socket can accomodate for a wide variety of different processors. > - 80+ certified PSUs are rated in terms of their performance > at certain load percentages. If you choose a high-power > PSU (e.g. 600W) then even if it has a high efficiency > according to 80+ it will not necessarily be more efficient > than a less highly rated 300W model. Not following? You've snipped the part by Andy Cater: | A larger PSU in wattage terms may have better capacitors, more capacity to | withstand dips and spikes in mains voltage and may have a better power factor | so be more effective overall. ^ | | the cost differential between 300 and 600W should be relatively small. | | Easier to overspecify: the other thing is that larger PSU wattages may have | quieter / better quality fans. I love almost silent PCs. I just wanted to point out that larger PSU can be more efficient, but smaller PSU can also be more efficient. Even when energy efficiency labels are compared (cf. https://en.wikipedia.org/wiki/80_Plus), a better rating (e.g. gold over silver or such) may not always indicate better efficiency. Say, for example, you have two PSUs under consideration: (a) PSU with 450W and 80+ silver rating (b) PSU with 600W and 80+ gold rating. Then at 20% load, 80+ specifies (a) to have efficiency 87% and (b) to have efficiency 90%. In absolute numbers: (a) PSU with 450W will have specified 87% at 90W i.e. draw 90W/0.87 = 103W (b) PSU with 600W will have specified 90% at 120W i.e. draw 120W/0.9 = 133W As 20% is the lowest load specified for the rating (silver, gold etc.) we cannot tell how the respective PSUs operate if less than 20% load is requested. From the rating we only know that (a) will take at most 103W in idle loads and (b) at most 133W, hence the power consumption of (b) could potentially be higher in very-low-load idle scenarios which are not uncommon to be the dominating factor for typical PC worksloads. HTH Linux-Fan öö pgpTY2xmqwiwR.pgp Description: PGP signature
Re: how many W a PSU for non-gaming Debian?
Emanuel Berg writes: Linux-Fan wrote: >> Oh, the OP has the AMD4 x86_64 CPU that comes with/in the >> Asus ROG Strix B450-F Gaming motherboard! > > By default, a motherboard is just that, a motherboard. > Unless you have some specific "bundle" package, there is no > CPU included with it. According to > https://rog.asus.com/motherboards/rog-strix/rog-strix-b450-f-gaming-model/ > the board has "AM4 socket: Ready for AMD Ryzen(TM) > processors". And according to > https://en.wikipedia.org/wiki/Socket_AM4 this socket can > accommodate a wide variety of different processors. $ lscpu | grep "name" Model name: AMD Ryzen 3 3200G with Radeon Vega Graphics Now this is _not_ a 125W CPU but a 65W TDP one [1]. When you calculate that CPU with 125W you are already safe. No need to multiply that by 2, which is what I would have suggested were it a 125W TDP CPU. In fact, by calculating with 125W you almost took twice the TDP already which would have been 2*65W = 130W. [1] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g > (a) PSU with 450W will have specified 87% at 90W i.e. > draw 90W/0.87 = 103W > > (b) PSU with 600W will have specified 90% at 120W i.e. > draw 120W/0.9 = 133W Okay, so with everything and +25% wiggle room, i.e. (ceiling (* 1.25 (+ 19 125 (* 3 3) (* 2 6) 80 3 2.8))) ; 314 W Use that figure, see above :) it is 314 W, and then the worst efficiency is 87%, the digit lands at (ceiling (/ 314 0.87 1.0)) ; 361 361 W. ? You do not need to take efficiency into account this way. Reason is: The efficiency is the factor between the wall power draw and the PSU's output power. To size a PSU for a new computer you _only_ take into account the output power and this is also the figure that the manufacturer advertises (i.e. 300W PSU means 300W output power, not input power draw). The efficiency only comes into play when you want to consider the wall power draw e.g. to find out how much it will cost to run the system 24/7 in terms of electricity. If you take an "overly large" PSU, efficiency will possibly degrade compared to one that is "just right" in size. I hope that clarifies it a little. HTH Linux-Fan öö [...] pgpJCzDitOFKs.pgp Description: PGP signature
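A small worked illustration of that distinction, using made-up but plausible numbers (265 W of component draw and 90% efficiency at that load): the PSU must be rated for at least the 265 W it has to deliver to the components, and the wall outlet then supplies roughly 265 W / 0.9 = 294 W. Only the first number matters for choosing the PSU size; the second one only matters for the electricity bill.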
Re: how many W a PSU for non-gaming Debian?
Emanuel Berg writes: Unfortunately the motherboard was worst-case 166.2 W, not the previous estimate/approximation at 80. The 166.2 (motherboard W really doesn't include the CPU?) It does, see https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg which is from your reference iv and explicitly shows an AMD R5 2600X processor being used. So now it says it is worst-case 277 W, and with +30% wiggle room it is 361 W :( back to passive 400W I guess ...

device  model/category                    max W  note               ref
------------------------------------------------------------------------
CPU     AMD middle end, 4 cores            65    exact              [i]
fans    80 mm (3K RPM)                       9   3*3W = 9W
        120 mm (2K RPM)                     12   2*6W = 12W         [ii]
GPU     geforce-gt-710                      19   exact              [iii]
mb      Asus ROG Strix B450-F Gaming AM4  166.2  exact              [iv]
RAM     ~DDR3 (1.5V)                         3   actually, a DDR4
SSD                                          2.8
------------------------------------------------------------------------

total: (ceiling (+ 19 65 (* 3 3) (* 2 6) 166.2 3 2.8)) ; 277 W
with +30% wiggle room: (ceiling (* 1.30 (+ 19 65 (* 3 3) (* 2 6) 166.2 3 2.8))) ; 361 W

[...] [iv] https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4-motherboard/ [...]

May I suggest computing it differently as follows?

device  model/category                    max W  note               ref
------------------------------------------------------------------------
CPU     AMD middle end, 4 cores            130   2*65W              [i]
fans    80 mm (3K RPM)                       9   3*3W = 9W
        120 mm (2K RPM)                     12   2*6W = 12W         [ii]
GPU     geforce-gt-710                      19   exact              [iii]
mb      Asus ROG Strix B450-F Gaming AM4    80   previous estimate
RAM     ~DDR3 (1.5V)                        10   actually, a DDR4
SSD                                          5
------------------------------------------------------------------------

What did I change:
* CPU power doubled to account for short-time bursts.
* Motherboard back to 80W which should still be a safe estimate.
* RAM upped to 10W and SSD upped to 5W (depending on the actual components, you might want to revert that but computing an SSD with 3W makes your entire calculation dependent on that specific model and if you upgrade that later you'd have to take it into account).

Sum = 130+9+12+19+80+10+5 = 265W. With +30%: 265*1.3 = 344.5W Hence it would be suggested to take at least a 350W PSU. Use a larger one if you ever plan to extend the system. HTH and YMMV Linux-Fan öö pgpl5untslDYm.pgp Description: PGP signature
Re: how many W a PSU for non-gaming Debian?
Emanuel Berg writes: Linux-Fan wrote: > It does, see > https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming- Benchmark-1.jpg > which is from your reference iv and explicitly shows an AMD R5 > 2600X processor being used. I'll subtract 65W from it then ... > * CPU power doubled to account for short-time bursts. Double it, that something one should do? In Intel world, yes :) In AMD world it seems to be slightly better, cf.: https://images.anandtech.com/graphs/graph16220/119126.png The TDP is given in the labels whereas the actual max power consumption observed is in the diagram. It seems that for AMD systems, the most extreme factor observed there is 143.22/105 = 1.364, so you might take that or round up to 1.5 rather than factor 2 for AMD systems. Default TDP 65W AMD Configurable TDP (cTDP) 45-65W <https://www.amd.com/en/products/apu/amd-ryzen-3-3200g> > * RAM upped to 10W and SSD upped to 5W (depending on the > actual components, you might want to revert that but > computing an SSD with 3W makes your entire calculation > dependent on that specific model and if you upgrade that > later you'd have to take it into account). I got these digits from https://www.buildcomputers.net/power-consumption-of-pc-components.html which is one of the first Google hits so I trust them for now ... The figures on that page for CPUs are misleading (they specify TDP range which is not much related to actual power draw anymore, see linked figure above). The remainder of the figures seems sensible. Some GPUs are also known to draw extreme peak loads (though usually that's only the "large" ones). SSD highly depends on the model. No need to argue for one general figure over the other. I think my SSD is specified 14W, but it is large and not the "newest" :) For RAM it seems that my figure is just a little too high and that your 3W are more correct in modern times. Nice to know :) As for upgrading that will be easy in this regard since I'll read how many Watts on the box of whatever I get :) device model/category max W note ref - CPU AMD middle end, 4 cores 65 exact [i] fans 80 mm (3K RPM) 9 3*3W = 9W[ii] 120 mm (2K RPM) 12 2*6W = 12W[ii] GPU geforce-gt-710 19 exact[iii] mb Asus ROG Strix B450-F Gaming AM4 166.2 exact, incl CPU [iv] RAM DDR3 (1.5V) 3 actually, a DDR4 [ii] SSD 2.8 [ii] - total: (ceiling (+ 65 (* 3 3) (* 2 6) 19 (- 166.2 65) 3 2.8)) ; 212 W with +30% wiggle room: (ceiling (* 1.3 (+ 65 (* 3 3) (* 2 6) 19 (- 166.2 65) 3 2.8))) ; 276 W IMHO this is too low a figure for the system being planned. I am pretty sure it _will_ run on a 300W PSU, BUT probably not stable for a long time and under high loads. HTH Linux-Fan [i] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g [ii] https://www.buildcomputers.net/power-consumption-of-pc-components.html [iii] https://www.techpowerup.com/gpu-specs/geforce-gt-710.c1990 [iv] https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4-motherboard/ https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg [...] pgpJwCQrGCL_L.pgp Description: PGP signature
Re: how many W a PSU for non-gaming Debian?
Emanuel Berg writes: Linux-Fan wrote: >>> CPU power doubled to account for short-time bursts. >> >> Double it, that something one should do? > > In Intel world, yes :) In AMD world it seems to be slightly > better, cf.: > https://images.anandtech.com/graphs/graph16220/119126.png > > The TDP is given in the labels whereas the actual max power > consumption observed is in the diagram. It seems that for > AMD systems, the most extreme factor observed there is > 143.22/105 = 1.364 [...] OK, included ... > SSD highly depends on the model. No need to argue for one > general figure over the other. I think my SSD is specified > 14W, but it is large and not the "newest" :) OK, it says the SSD and RAM are Corsair Vengeance LPX · DDR4 · 2*8GB=16GB · 3600Mhz 250GB Kingston KC 2000 (SSD/NVMe/M.2) if one can find exact digits, that's optimal, but what do you search for to find out? I mean in general? The model name and ... ? ... datasheet ... power consumption This does not yield anything ineteresting for the RAM here, but for the SSD we get a useful datasheet this way: [v] https://www.kingston.com/datasheets/SKC2000_us.pdf Which indicates on page 2 that the max. power consumption is 7W under heavy write loads. I get most of my estimates derived from the figures of a PC magazine I regularly read :) Anyway, the computation now lands at 307 W. device model/category max W note - CPU AMD Ryzen 3, 4 cores89 exact plus extra [i] fans 80 mm (3K RPM) 9 3*3W = 9W[ii] 120 mm (2K RPM) 12 2*6W = 12W[ii] GPU geforce-gt-710 19 exact[iii] mb Asus ROG Strix B450-F Gaming AM4 101 exact excl CPU[iv] RAM DDR3 (1.5V) 3 actually a DDR4 [ii] SSD 2.8 [ii] - 3W is for one module per [ii] but further above you write 2x8GB so why not at least compute it at 6W? Also, SSD could go up to 7W per datasheet [v]. For actual PSU sizes this will end up at 350W min. which is OK I guess. total, with +30% wiggle room: (ceiling (* 1.3 (+ (* (/ 143.22 105) 65) (* 3 3) (* 2 6) 19 (- 166.2 65) 3 2.8) )) ; 307 W [i] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g https://images.anandtech.com/graphs/graph16220/119126.png [ii] https://www.buildcomputers.net/power-consumption-of-pc-components.html [iii] https://www.techpowerup.com/gpu-specs/geforce-gt-710.c1990 [iv] https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4- motherboard/ https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F- Gaming-Benchmark-1.jpg [...] HTH and YMMV Linux-Fan öö pgp9ZR9Hn5YpT.pgp Description: PGP signature
Re: which references (books, web pages, faqs, videos, ...) would you recommend to someone learning about the Linux boot process as thoroughly as possible?
Albretch Mueller writes: imagine you had to code a new bootloader now (as an exercise) in hindsight which books would you have picked? I do not know of any books about bootloaders, but having a look at multiple different bootloaders (documentation and possibly source code) should be a good way to start? I think GRUB, SYSLINUX, u-boot will cover for a wide variety of boot scenarios? I am OK with Math and technology of any kind and I am more of a Debian kind of guy. In fact, I am amazed at how Debian Live would pretty much boot any piece of sh!t you would feed to it, but, just to mention one case, knoppix would not. But then knoppix, has such super nice boot-up options as: toram (right as a parameter as you boot, no tinkering with any other thing!), fromhd and bootfrom (you can use to put an iso in a I think that at least in the past it was possible to boot Debian Live systems with `toram` option, too. You should probably just try it out? In case the menu does not offer it, consider tinkering with the respective syslinux/isolinux configuration and adding a menu entry with `toram` set? partition of a pen drive, or even stash it in your own work computer, in order to liberate your DVD player after booting), ..., which DL doesn’t have. I think GRUB2 supports this feature, but am not sure if it will work correctly in all of the cases. For my own tinkering I mostly prefer SYSLINUX. It can boot just about any live linux and you can also add `memdisk` images to add DOS and other small systems. [...] I have been always intrigued about such matters and such differences, between what I see as supposedly being standardized, like a boot process. Compare the boot process between amd64 and armhf to find out that there are quite the differences :) HTH Linux-Fan öö pgpVSDkpKfsYh.pgp Description: PGP signature
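In case it helps, this is roughly what such a menu entry looks like in an isolinux/syslinux configuration for a Debian Live image; the kernel and initrd paths below are only typical examples and depend on the image at hand:

LABEL live-toram
  MENU LABEL Debian GNU/Linux Live (load to RAM)
  KERNEL /live/vmlinuz
  APPEND initrd=/live/initrd.img boot=live components toram

The `toram` keyword at the end of the APPEND line is what makes live-boot copy the medium into RAM before mounting it.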
Re: libvirt tools and keyfiles
Celejar writes: Hi, I'm trying to use virt-manager / virt-viewer to access the console of some qemu / kvm virtual machines on a remote system over ssh. I have public key access to root@remote_system. When I do: virt-manager -c 'qemu+ssh://root@remote_system/system?keyfile=path_to_private_key' the connection to libvirt on the remote system comes up fine, and I can see the various VMs running there, but when I try to access a VM console (via the "Open" button or "Edit / Virtual Machine Details"), I get prompted for the password for "root@remote_system" (which doesn't even work, since password access is disabled in the ssh server configuration). What do you insert for `remote_system`? A hostname or an IP? IIRC I once tried to use an IP address directly (qemu+ssh://u...@192.168.yyy.yyy), and while it would perform the initial connection successfully, subsequent actions would query me for the password of (user@masysma-...) i.e. change from IP-address-based (which was configured to use a key in .ssh/config) to hostname-based (for which the key was not specified in the config). I solved this by adding the hostname to /etc/hosts and configuring SSH and my virt-manager connection to use the hostnames rather than IP addresses. I also remember that I had to add the connection to my GUI user's .ssh/config AND my root user's .ssh/config. In my case, I am not specifying the keyfile as part of the connection, though. HTH Linux-Fan öö [...] pgpagjTs3CcR0.pgp Description: PGP signature
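For reference, this is the kind of ~/.ssh/config entry I mean; host name, address and key path below are made up for illustration only:

Host vmhost
    HostName 192.0.2.10
    User root
    IdentityFile ~/.ssh/id_vmhost

With such an entry present for both the GUI user and root, a URI like qemu+ssh://root@vmhost/system picks up the right key automatically and the keyfile= parameter in the connection URI becomes unnecessary.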
Re: Problem downloading "Installation Guide for 64-bit PC (amd64)"
Richard Owlett writes: On 04/07/2022 10:22 AM, Cindy Sue Causey wrote: On 4/7/22, Richard Owlett wrote: I need a *HTML* copy of "Installation Guide for 64-bit PC (amd64)" for *OFFLINE* use. The HTML links on [https://www.debian.org/releases/stable/installmanual] lead *ONLY* to Page 1. Is the complete document downloadable as a single HTML file? Have you seen the "installation-guide-amd64" package in Debian's repositories? Thank you. No, I hadn't. The machine I'm currently on is running Debian 9.13 [I'm prepping to do an install of 11.3 to another machine]. I found it listed in Synaptic and installed it. It does not appear in any of MATE's menus and I can't reboot until later today. Do you know in which sub-directory I might find it? Try /usr/share/doc/installation-guide-amd64/en/index.html as the entry point. An easy way to find out about a package's files after installation is `dpkg -L <package>`, e.g. in this case: $ dpkg -L installation-guide-amd64 Btw. the guide in the package is then not a single HTML file but multiple files (in case it matters...) HTH Linux-Fan öö [...] pgpREAh7J2Hck.pgp Description: PGP signature
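If the full listing is too long to scan, it can be filtered; a small example, assuming the package is already installed:

$ dpkg -L installation-guide-amd64 | grep 'index\.html$'

This prints only the index.html entry points instead of the whole file list.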
Re: updatedb.mlocate
Greg Wooledge writes: On Sat, Apr 09, 2022 at 09:26:58PM -0600, Charles Curley wrote: > Two of my machines have their database files dated at midnight or one > minute after. > > Possibly because updatedb is run by a systemd timer, not cron. [...]

# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
    exit 0
fi

[...] Wow. That's incredibly annoying! It provides a mechanism that adjusts to the init system at runtime. Maybe there are better ways to do it, but it seems to work OK?

unicorn:~$ less /lib/systemd/system/mlocate.timer
[Unit]
Description=Updates mlocate database every day

[Timer]
OnCalendar=daily
AccuracySec=24h
Persistent=true

[Install]
WantedBy=timers.target

... it doesn't even say when it runs? What silliness is this? Try

# systemctl list-timers
NEXT                          LEFT      LAST                          PASSED   UNIT           ACTIVATES
Mon 2022-04-11 00:00:00 CEST  12h left  Sun 2022-04-10 00:00:01 CEST  11h ago  mlocate.timer  mlocate.service

It shows that at least on my system, it is going to run on 00:00 local time. I can imagine that the idea behind not specifying the actual time in the individual unit is that it allows you to configure the actual time of invocation somewhere else. This way, if you have a lot of machines all online, you can avoid having bursts of activity in the entire network/datacenter just as the clock turns to 00:00. Oh well. It clearly isn't bothering me (I'm usually in bed before midnight, though not always), so I never had to look into it. I'm sure someone felt it was a "good idea" to move things from perfectly normal and well-understood crontab files into this new systemd timer crap that nobody understands, and that I should respect their wisdom, but I don't see the advantages at this time. I think systemd tries to provide an alternative for all the `...tab` files that used to be the standard (/etc/inittab, /etc/crontab, /etc/fstab come to mind). IMHO the notable advantage over the traditional method is that on systemd systems one can query all the status information with a single command: `systemctl status <unit>`. Similarly, the lifecycle of start/stop/enable/disable can all be handled by the single command `systemctl start/stop/enable/disable <unit>`. All stdout/stderr outputs are available via `journalctl -u <unit>`. In theory this could eliminate the need to know about or remember the individual services' log file names. Specifically in the case of `cron`, I think it is an improvement that user-specific timers are now kept in the user's home directory (~/.config/systemd/user directory) rather than a system directory (/var/spool/cron/crontabs). IMHO systemd's interface is not the best design-wise and in terms of its strange defaults, formats and names (paged output by default is one of the things I disable all the time, INI-derivatives are really not my favorite configuration syntax, binary logfiles, semantics of disable/mask can be confusing...). IMHO it does provide a lot of useful features that were missing previously, though. YMMV Linux-Fan öö pgpcH9a30fVSj.pgp Description: PGP signature
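As a concrete illustration of configuring the invocation time somewhere else, a drop-in override works; this is only a sketch and the 03:17 time is an arbitrary example:

# systemctl edit mlocate.timer
[Timer]
OnCalendar=
OnCalendar=*-*-* 03:17:00

The empty OnCalendar= line first clears the value inherited from the packaged unit, the second line then sets the new schedule; systemd reloads the unit when the editor is closed.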
Re: system freeze
state! Sep 18 13:11:41 masysma-18 kernel: [ 2051.441397] amdgpu :67:00.0: amdgpu: Msg issuing pre-check failed and SMU may be not in the right state! Sep 18 13:11:42 masysma-18 kernel: [ 2051.634059] amdgpu :67:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring kiq_2.1.0 test failed (-110) Sep 18 13:11:42 masysma-18 kernel: [ 2051.634111] [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* KGQ disable failed Sep 18 13:11:42 masysma-18 kernel: [ 2051.813223] amdgpu :67:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring kiq_2.1.0 test failed (-110) Sep 18 13:11:42 masysma-18 kernel: [ 2051.813267] [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* KCQ disable failed Sep 18 13:11:43 masysma-18 kernel: [ 2053.354507] amdgpu :67:00.0: amdgpu: Msg issuing pre-check failed and SMU may be not in the right state! Sep 18 13:11:43 masysma-18 kernel: [ 2053.354509] amdgpu :67:00.0: amdgpu: Failed to disable smu features except BACO. Sep 18 13:11:43 masysma-18 kernel: [ 2053.354511] amdgpu :67:00.0: amdgpu: Fail to disable dpm features! Sep 18 13:11:43 masysma-18 kernel: [ 2053.354570] [drm:amdgpu_device_ip_suspend_phase2 [amdgpu]] *ERROR* suspend of IP block failed -62 Sep 18 13:11:43 masysma-18 kernel: [ 2053.390519] [drm] free PSP TMR buffer Sep 18 13:11:43 masysma-18 kernel: [ 2053.423405] amdgpu :67:00.0: amdgpu: BACO reset Sep 18 13:11:45 masysma-18 kernel: [ 2054.968108] amdgpu :67:00.0: amdgpu: Msg issuing pre-check failed and SMU may be not in the right state! Sep 18 13:11:45 masysma-18 kernel: [ 2054.968110] amdgpu :67:00.0: amdgpu: Failed to enter BACO state! Sep 18 13:11:45 masysma-18 kernel: [ 2054.968112] amdgpu :67:00.0: amdgpu: ASIC reset failed with error, -62 for drm dev, :67:00.0 Sep 18 13:11:45 masysma-18 kernel: [ 2054.968154] amdgpu :67:00.0: amdgpu: GPU reset(1) failed Sep 18 13:11:45 masysma-18 kernel: [ 2054.989004] amdgpu :67:00.0: amdgpu: GPU reset end with ret = -62 Sep 18 13:11:55 masysma-18 kernel: [ 2065.186711] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring sdma1 timeout, signaled seq=3181, emitted seq=3181 Sep 18 13:11:55 masysma-18 kernel: [ 2065.186910] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process pid 0 thread pid 0 Sep 18 13:11:55 masysma-18 kernel: [ 2065.186919] amdgpu :67:00.0: amdgpu: GPU reset begin! I waded through <https://gitlab.freedesktop.org/drm/amd/-/issues/892> to gather ideas about how to fix it. Most of the "solutions" seemed to be very hacky though and I did not try them thoroghly. This is what I noted from installing the proprietary driver: https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro-w5000-series/radeon-pro-w5500 ./amdgpu-pro-install rm /etc/X11/xorg.conf Also, I noted I had had a DP connectivity issue that was possibly responsible for the case where after the hang, the monitor would come back with a picture. It was fixed by re-attaching the DP cable... HTH and YMMV Linux-Fan öö pgpw529fpcICT.pgp Description: PGP signature
Re: Can't setup a VM using SLIC data to activate Win10 in guest
Joao Roscoe writes: [...] I'm trying to migrate my VMs from VirtualBox to KVM/QEMU/VirtManager. [...] I then proceeded extracting and providing the SLIC/MSDM data, as described here: https://gist.github.com/Informatic/49bd034d43e054bd1d8d4fec38c305ec However, the VM now fails to start/install (tried to re-create the VM using the same image file, and setting up the SLIC/MSDM info from the beginning), stating that the SLIC file can't be read. [...] What am I missing here? Any clues? Have you checked the hint at the very beginning of the gist "apparmor/security configuration changes may be needed"? I dimly remember that I had to do this when migrating from Debian 10 to Debian 11? There is even a comment in the gist about it linking to <https://egirland.blogspot.com/2018/12/get-rid-of-that-fng-permission-denied_7.html> Also, did you try the `sudo strings /sys/firmware/acpi/tables/MSDM` suggested in the comments already? NB: IIRC, back when I migrated all my Windows VMs from VirtualBox to KVM, I had to re-activate them all... HTH and YMMV Linux-Fan öö pgphq3f8S5oLx.pgp Description: PGP signature
Re: Firefox context menu and tooltip on wrong display?
Roberto C. Sánchez writes: I have recently decommissioned my main desktop workstation and switched to using my laptop for daily work (rather than only when travelling). I acquired a USB-C "docking station" and have connected two external monitors (which were formerly attached to my desktop machine). For some strange reason, when there is a Firefox window in the second or third monitor, context menus (from right-click) and tooltips do not appear on the same screen as the Firefox window, but rather on the primary monitor (the laptop integrated monitor). I tried searching, but did not find anything recent or useful. Has anyone experienced this same issue? I did not experience this at all on my desktop machine with dual monitors (both the laptop where I am experiencing this and the desktop where I did not experience this are running bullseye), so I am curious if anyone has encountered this issue and if so how to resolve it. This sounds familiar to me. I think I have observed this behaviour in the past, albeit never really reproducibly. I thought it might be related to starting the browser on another screen than the one it is later actually used on, but I could not trigger it with some simple tests here. Back when it appeared I mostly did not bother trying to fix it because for me it was only ever temporary in some situations and I could still control the rogue menus with the keyboard and avoid moving the mouse cursor all over the screens. Just a datapoint, no solution though :( HTH Linux-Fan öö [...] pgpbJ03YgamuY.pgp Description: PGP signature
Re: setting path for root after "sudo su" and "sudo" for Debian Bullseye (11)
Greg Wooledge writes: On Sat, May 21, 2022 at 10:04:01AM -0500, Tom Browder wrote: > I am getting nowhere fast. OK, let's start at the beginning. You have raku installed in some directory that is not in a regular PATH. You won't tell us what this directory is, so let's pretend it's /opt/raku/bin/raku. [...] The voices in your head will tell you that you absolutely must use sudo to perform your privilege elevation. Therefore the third solution for you: configure sudo so that it does what you want. Next, of course, the voices in your head will tell you that configuring sudo is not permissible. You have to do this in... gods, I don't know... [...] There is also `sudo -E` to preserve environment and `sudo --preserve-env=PATH` that could be used to probably achieve the behaviour of interest. If this is not permitted by security policy but arbitrary commands to run with sudo are, consider creating a startup script (or passing it directly at the `sudo` commandline) that sets the necessary environment for the raku program. That being said: On my systems I mostly synchronize the bashrc of root and my regular users such that all share the same PATH. I tend to interactively become root and hence ensure that the scripts are run all of the time. This has mostly survived the mentioned change in `su` behaviour and observable variations between `sudo -s` and `su` behaviours are kept at a minimum. HTH and YMMV Linux-Fan öö pgpRZzTp4rgEM.pgp Description: PGP signature
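As an illustration of the wrapper script idea, a minimal sketch; /opt/raku/bin is only the placeholder directory used in the example above and the script name is made up:

#!/bin/sh
# /usr/local/bin/raku-root: run raku with a PATH that includes its install directory
PATH=/opt/raku/bin:$PATH
export PATH
exec raku "$@"

Invoked as `sudo /usr/local/bin/raku-root myscript.raku` it sidesteps the question of which PATH sudo hands to root.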
Recommendations for data layout in case of multiple machines (Was: Re: trying to install bullseye for about 25th time)
Andy Smith writes: Hello, On Thu, Jun 09, 2022 at 11:30:26AM -0400, rhkra...@gmail.com wrote: > For Gene, he could conceivably just rename the RAID setup that he has > mounted > under /home to some new top level mountpoint. (Although he probably has > some > scripts or similar stuff that looks for stuff in /home/ that > would > need to be modified. For anyone with a number of machines, like Gene, there is a lot to be said for having one be in the role of file server. - One place to have a ton of storage, which isn't a usual role for a simple desktop daily driver machine. I have always wondered about this. Back when I bought a new PC I got that suggestion, too (cf. thread at https://lists.debian.org/debian-user/2020/10/msg00037.html). All things considered, I ended up with _one_ workstation to address desktop and storage uses together, for the main reason that whenever I work on my "daily driver", I also need the files. Hence the choice is between powering on and maintaining two machines all of the time vs. having just one machine for the purpose. - One place to back up. - Access from anywhere you want. - Easier to keep your other machines up to date without worrying about all the precious data on your fileserver. Blow them away completely if you like. Be more free to experiment. These points make a lot of sense and that is basically how I am operating, too. One central keeper of data and multiple satellite machines :) I factor out backups to some extent in order to explicitly store copies across multiple machines, some of them "offline" so as to resist modern threats of "data encryption trojans". Also, I have been following rhkramer's suggestion of storing important data outside of /home and I can tell it has served me well. My main consideration for doing this is to separate automatically generated program data (such as .cache, .wine and other typical $HOME inhabitants) from _my_ actual data. I still back up selected parts of $HOME e.g. the ~/.mozilla directory for Firefox settings etc. It's a way of working that's served me well for about 25 years. It's hard to imagine going back to having data spread all over the place. What do you use as your file server? Is it running 24/7 or started as needed? Thanks in advance Linux-Fan öö [...] pgptZmlQKdJhL.pgp Description: PGP signature
Re: SSD Optimization and tweaks - Looking for tips/recomendations
Marcelo Laia writes: Hi, I bought a SSD solid disk and will perform a fresh install on it. Debian testing. I've never used such a disc. I bought a Crucial CT1000MX500SSD1 (1TB 3D NAND Crucial SATA MX500 Internal SSD (with 9.5mm adapter) — 6.35cm (2.5in) and 7mm). I read the recommendations on the https://wiki.debian.org/SSDOptimization page. However, I still have some doubts: 1. Use ext4 or LVM partitioning? You could do both at once, too. See the other users' answers. 2. I read in the Warning section that some discs contain bugs, including Crucial. But I don't know if I need to use or not use "discard" on this disk (CT1000MX500SSD1). If I need to proceed with using "discard", would you please have any tips on how to do it? I didn't understand how to do this. IIRC the best practice was to not use the "discard" mount option and rather run "fstrim" at regular intervals. You could use the `fstrim.timer` systemd unit from package util-linux for that purpose. 3. Should I reserve a swap partition or not? I always had one on hdd disks. I was in doubt, too. If you want to have a swap partition, it is perfectly OK to create one on an SSD. In fact, I have sometimes used SSD swap to my advantage. Today it's mostly a matter of personal preference. 4. Any other recommendations to improve the performance and lifespan of this disk? The wiki page is already pretty comprehensive. On my systems I mostly do the “Reduction of SSD write frequency via RAMDISK” thing. As with all disks, it can help to set up S.M.A.R.T. monitoring. For SSDs, a metric like “lifetime GiB written” or something similar is often included. This can be used to reveal if your system is doing a lot of writes or not by checking the changes of the value over time (e.g. with help from `smartd` from package smartmontools). HTH and YMMV Linux-Fan öö pgphTZnBNkz29.pgp Description: PGP signature
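In practical terms this boils down to two commands; /dev/sda below is just an example device and the exact attribute names differ between SSD vendors:

# systemctl enable --now fstrim.timer
# smartctl -A /dev/sda

The first enables the weekly trim, the second prints the S.M.A.R.T. attribute table; on many SATA SSDs it contains a counter such as Total_LBAs_Written whose growth over time shows how write-heavy the workload actually is.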
Re: VFAT vs. umask.
pe...@easthope.ca writes: David, thanks for the reply. From: David Wright Date: Fri, 29 Jul 2022 00:00:29 -0500 > When you copy files that have varied permissions onto the FAT, you may > get warnings about permissions that can't be honoured. (IIRC, copying > ug=r,o= would not complain, whereas u=r,go= would.) Primary store is an SD card. Rsync is used for backup. Therefore this dilemma. * In Linux, an ext file system avoids those complications. To my knowledge, all SD cards are preformatted with a FAT. Therefore ext requires reformatting. FAT or exFAT depending on size, yes. * Most advice about flash storage is to avoid reformatting. Unfortunately most or all of this advice is written by software people; none, that I recall, from a flash storage manufacturer. The only reason for not choosing to format the SD card with a Linux file system is the lack of compatibility: Other systems (think Windows, Mac, etc.) will not be able to use the SD card anymore because it will appear „wrongly formatted” to them. My own experience is one SD card about a decade old, reformatted to ext2 when new and still working. A second SD purchased recently with factory format unchanged seems very slow in mounting. As if running fsck before every mount. -8~/ Certainly tempted to reformat the new card to ext2. Information always welcome. Knowledge even better. Formatting it to ext2 should work and not cause any issues as long as you remain in a “Linux world”. If the data should still be accessible from other systems, consider packing it into archives and storing the archives on the FAT file system. That is something that I used to do and that has worked reliably for me in the past. It does not allow for incremental updates (like rsync would) by default, though. Modern backup tools use their own archive/data formats and can thus store Linux metadata (permissions, ownerships etc.) on file systems that are incapable of doing this (like FAT, cloud storage etc.). I have written my notes about a few such tools here: https://masysma.lima-city.de/37/backup_tests_borg_bupstash_kopia.xhtml HTH Linux-Fan öö pgpwYqGXMmTxt.pgp Description: PGP signature
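A minimal sketch of the archive approach, assuming the card is mounted at /media/sdcard and the data lives under /data:

$ tar -czf /media/sdcard/data-backup.tar.gz -C /data .

Unix permissions and ownerships are recorded inside the archive even though the FAT file system underneath cannot store them; extracting as root with `tar -xpzf` restores them later.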
Re: VFAT vs. umask.
pe...@easthope.ca writes: From: Linux-Fan Date: Sat, 30 Jul 2022 21:37:37 +0200 > Formatting it to ext2 should work and not cause any issues ... Other authorities claim "factory format" is optimal and wear of flash storage is a concern. A revised "format" can impose worse conditions for wear? Does any manufacturer publish about this? What is hope? What is truth? Reformatting (in the sense of mkfs.ext2) does not by itself cause excessive wear on the flash drive. Now if the cards were optimized to handle FAT and somehow interpret its structures cleverly then this could in theory cause longer life of the drives for FAT compared to ext2. So far we got some rumors about that but nothing definitive. I sincerely doubt that there are any advantages of FAT on SD beyond the compatibility. Because: Nowadays it is quite common to use these cards as general purpose storage. Encrypted storage of Android files comes to mind. Here, the card cannot optimize beyond the block level which is what I guess it does in any case. Here are some resources that you could use for further research:
- https://www.reddit.com/r/raspberry_pi/comments/ex7dvo/quick_reminder_that_sd_cards_with_wearleveling/
- https://electronics.stackexchange.com/questions/27619/is-it-true-that-a-sd-mmc-card-does-wear-levelling-with-its-own-controller
I do not know of any case where an optimization for FAT became visible. Especially, having used SD and µSD cards for Linux root file systems for longer than a year without observing any issues, I would conclude that it is much more related to the quality of the flash than the file system in use. > Modern backup tools use their own archive/data formats ... All that's needed here is a reliable copy on the HDD, of the files on the SD. If the SD fails I mount the part of the HDD, restore to another SD and continue working. Rsync is the most effective tool I've found. More advanced archiving is not needed. rsync on FAT will not preserve Unix permissions and most other metadata. I (personally) would not consider it “reliable” since I care about these metadata. There are three ways around this:
* Don't care - If your workflow does not need any Unix permissions whatsoever then you might as well stick with FAT if none of its other limits are of concern (maximum file size for FAT32 comes to mind).
* Format using a Unix-aware file system like e.g. ext2 and then just copy the files (rsync works).
* Use a tool that keeps the metadata for you (like the suggested advanced backup tools).
If rsync works for you, then by all means stick with it :) HTH Linux-Fan öö [...] pgpMJtSUSta_V.pgp Description: PGP signature
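To make the metadata point concrete, a small sketch of the ext2 variant (source directory and mount point are placeholders):

$ rsync -a --delete /data/ /media/sdcard-ext2/data/

With -a (archive mode) on an ext2-formatted card, permissions, ownerships, symlinks and timestamps survive the copy; against a FAT-formatted card the same command cannot preserve permissions and ownerships because the file system has nowhere to store them.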
Re: OT: Virtualbox - ISO or VDI?
Hans writes: Hi folks, I have a little question, which I could not solve by searching the internet. My OT question: What are the advantages or disadvantages of using a VDI versus ISO file, when using Virtualbox? I am building my own ISO-files (kali or debian-live), and the iso's are bootable in Virtualbox. Which one should I use? Which one and why has more advantages? They are actually entirely different: * ISO is a CD/DVD/BD image, read only * VDI is a virtual HDD image, read-write If you are fine with running live systems all the time, then ISO is the way to go. For _installing_ OSes inside the VMs you will most likely want to use actual virtual HDD images (VDI). The advantage of an installation is of course that you can make persistent changes easily. The advantage of live systems in VMs is that they take significantly less storage compared to installed systems. HTH Linux-Fan [...] pgpzNOkvxaBLe.pgp Description: PGP signature
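In case the installation route is chosen, the virtual disk can also be created on the command line; a small sketch where the file name and the 20480 MiB size are arbitrary (on older VirtualBox releases the subcommand was called `createhd` instead of `createmedium`):

$ VBoxManage createmedium disk --filename debian11.vdi --size 20480 --format VDI

The resulting VDI is then attached to the VM's storage controller, while the ISO only stays attached for as long as the installation runs.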
Re: Use of installer's interactive shells on tty1-tty4
Richard Owlett writes: Where can I find description of using installer's interactive shells on tty1- tty4? I am not sure how much documentation there is on it, but how about this (section 6.3.9.2): https://www.debian.org/releases/stable/amd64/ch06s03.en.html#di-miscellaneous HTH Linux-Fan [...]
Re: OOM-killer not being involked under memory pressure
Pariksheet Nanda writes: > I just checked my other server which has ZFS on root without encryption, and see that I did not enable swap at all on that machine. So I'll disable swap, thrash the RAM with`stress`, and then hopefully the OOM-killer works like it does on that machine. Yay! Indeed disabling swap allowed the OOM-killer to work. Pariksheet Hello, glad you could solve the problem and thanks for sharing the solution. I am planning on using ZFS, too, so would you mind a follow-up question about your setup: Is your swap on a ZFS volume? I have heard there are bugs with Swap-on-ZFS which may cause lockups similar to what you describe: https://github.com/openzfs/zfs/issues/342 https://github.com/openzfs/zfs/issues/7734 Thanks in advance Linux-Fan [...] pgpFrAe4hIosl.pgp Description: PGP signature
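For anyone reading along who wants to check their own setup, two commands are enough (shown here only as a sketch):

$ swapon --show
$ zfs list -t volume

If the swap device reported by the first command corresponds (via /dev/zvol/...) to one of the volumes listed by the second, then swap lives on ZFS and the issues linked above may apply.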
Re: reprepro using a gpg certificate
Andreas Ronnquist writes: On Mon, 28 Sep 2020 15:01:25 +0200, Philipp Ewald wrote: >afaik: > >you dont need a password on a gpg-key so if its not required you can >remove the password and script That is right of course - but how is this security-wise? I guess in my case it doesn't matter much though. Whether you store the password on the same computer as the keyfile or just a keyfile without password should not matter that much? What kind of adversary are you trying to protect against? I run a reprepro here with an "unprotected" keyfile and it works quite nicely. In case you are interested in how it is implemented here, see https://masysma.lima-city.de/32/masysmaci_pkgsync.xhtml Follow the links to Github or further documentation as interested :) HTH Linux-Fan [...] pgpXpvfgbYx4w.pgp Description: PGP signature
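For completeness, the place where reprepro picks up the key is the SignWith field in conf/distributions; a minimal sketch with placeholder values:

Origin: Example Repository
Codename: bullseye
Architectures: amd64 source
Components: main
SignWith: 0123456789ABCDEF

With a key that has no passphrase (or whose passphrase is cached by gpg-agent), `reprepro export` and the various include commands then sign the Release files unattended.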
General-Purpose Server for Debian Stable
Hello fellow list users, I am constantly needing more computation power, RAM and HDD storage such that I have finally decided to buy a server for my next "workstation". The reasoning is that my experience with "real" servers is that they are most reliable, very helpful in indicating errors (dedicated LEDs next to the PCIe slots for instance) and modern servers' noise seems to be acceptable for my working environment (?) I currently use a Fujitsu RX 1330 M1 (1U server, very silent) and it clearly is not "enough" in terms of RAM and HDD capacity. A little more graphics processing power than a low-profile GPU would be nice, too :) Rack-Mountability is a must, although I am open to putting another tower in there, sideways, should that be advantageous. In terms of "performance" specifications, I am thinking of the following:
* 1x16-core CPU (e.g. AMD EPYC 7302)
* 64 GiB RAM (e.g. 2x32 GiB or 4x16 GiB) I plan to extend this to 128 GiB as soon as the need arises. As I am exceeding the 4T mark, I am increasingly considering the use of ZFS. Currently, I have the maximum of 32 GiB installed in the RX 1330 M1 and while it is often enough, there are times where I am using 40 GiB SSD swap to overcome the limits.
* 2x2T HDD for slow storage (local Debian Mirror, working data), 2x4T SSD for fast storage (VMs, OS) I will do software-RAID1 (ZFS or mdadm is still undecided). If possible, I would like to use the power of the modern NVMe PCIe U.2 (U.3?) SSDs, because they really seem to be much faster and that may speed-up the parallel use of VMs and be more future-proof.
* 1-2x 10G N-BaseT Ethernet for connecting to other machines to share virtual machine storage (I am doing this already and it works...)
* a 150W GPU if possible (75W full-sized card would be OK, too).
Typical workloads: data compression (Debian live build, xz), virtual machines (software installation, updates) Rarely: GPGPU (e.g. nVidia CUDA, but some experimentation with OpenCL, too) single-core load coupled with very high RAM use (cbmc) Some time ago, there was this thread https://lists.debian.org/debian-user/2020/06/msg01117.html It already gave me some ideas... I am considering one of the following models which are AMD EPYC based (I think AMDs provide good performance for my types of use).
* HPE DL385 G10 Plus
* Dell PowerEdge R7515
I have an old HP DL380 G4 in the rack and while it is incredibly loud, it is also very reliable. Of course, it is rarely online due to its excessive loudness and power draw, but I derive that HPE is going to be reliable? Before the Fujitsu, I used a HP Z400 workstation and before that a HP Compaq d530 CMT and all of these still "function", despite being too slow for today's loads. I am also taking into consideration these, although they are Intel-based and I find it a lot harder to obtain information on prices, compatibility etc. for these manufacturers:
* Fujitsu PRIMERGY RX2540 M5
* Oracle X8-2L (seems to be too loud for my taste, especially compared to the others?)
I have already learned from my local vendor that HPE does not support the use of non-HPE HDDs in the server which means I would need to buy all my drives directly from HPE (of course this will be very expensive). Additionally, none of the server manufacturers list Debian compatibility, thus my questions are as follows:
* Does anybody run Debian stable (10) on any of these servers? Does it work well?
* Is there any experience with "unsupported" HDD configurations i.e. disks not bought from the server manufacturer?
I would think that during the warranty period (3y) I best stay with the manufacturer-provided HDDs but after that, it would be nice to be able to add some more "cheap" storage... * Of course, if there are any other comments, I am happy to hear them, too. I am looking into all options although a fully self-built system is probably too much. I once tried to (only) get a decent PC case and failed at it... I can only imagine it being worse for rackmount PC cases and creating a complete system composed of individual parts? Thanks in advance Linux-Fan pgp1pA_ASNOFF.pgp Description: PGP signature
Re: General-Purpose Server for Debian Stable
Dan Ritter writes: Linux-Fan wrote: [...] > * HPE DL385 G10 Plus > * Dell PowerEdge R7515 You should also look at machines made by SuperMicro and resold via a number of VARs. My company is currently using Silicon Mechanics and is reasonably happy with them. We have a few HPs as well. > I have already learned from my local vendor that HPE does not support the > use of non-HPE HDDs in the server which means I would need to buy all my > drives directly from HPE (of course this will be very expensive). It's ridiculously expensive. Also, note that HPE won't ship empty drive sleds for free. You can buy them aftermarket for about $15-30 each, depending. > Additionally, none of the server manufacturers list Debian compatibility, > thus my questions are as follows: > > * Does anybody run Debian stable (10) on any of these servers? > Does it work well? Yes, and yes. Avoid buying HP's RAID cards. > * Is there any experience with "unsupported" HDD configurations i.e. > disks not bought from the server manufacturer? They all work. They tend to be more finicky about RAM. Buy SuperMicro, from a VAR not too far away from you who can supply a next-day parts warranty for cheap. Thank you very much for these hints. I will surely add Supermicro to the consideration. The hints wrt. RAM and RAID cards are appreciated! OT: Message signature is still invalid, but I could track it down to some weird changes in space characters between what I send to and what I receive from the list. I have no idea how to solve it, though... Linux-Fan
Re: General-Purpose Server for Debian Stable
David Christensen writes: On 2020-10-01 14:37, Linux-Fan wrote: [...] Typical workloads: data compression (Debian live build, xz), virtual machines (software installation, updates) Rarely: GPGPU (e.g. nVidia CUDA, but some experimentation with OpenCL, too) single-core load coupled with very high RAM use (cbmc) [...] I suggest identifying your workloads, how much CPU, memory, disk I/O, etc., each requires, and then dividing them across your several computers. Division across multiple machines... I am already doing this for data that exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 4x1T outsourced to the other machine). I currently do this for data I need rather rarely such that I can run the common tasks on a single machine. Doing this for all (or large amounts of data) will require running at least two machines at the same time which may increase the idle power draw and possibilities for failure? Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have similar prices and power consumption, but the former will run sequential tasks twice as fast and the latter will run concurrent tasks twice as fast. Is this still true today? AFAIK all modern CPUs "boost" their frequency if they are lightly loaded. Also, the larger CPUs tend to come with more cache which may speed up single-core applications, too. I would think that you should convert one of your existing machines into a file server. Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's can work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should be impressive. If you choose ZFS, it will need memory. The rule of thumb is 5 GB of memory per 1 TB of storage. So, pick a machine that has at least 20 GB of memory. 4x4T is surely nice and future-proof but currently above budget :) I saw that the Supermicro AS-2113S-WTRT can do 6xU.2 drives. In case I chose Supermicro this would allow upgrading to such a 4x4T configuration. As for the workstation, it is difficult to find a vendor that supports Debian. But, there are vendors that support Ubuntu; which is based upon Debian. So, you can run Ubuntu and you might be able to run Debian: https://html.duckduckgo.com/html?q=ubuntu%20workstation My experience with HP and Fujitsu Workstations is that they run well with Debian. I am still thinking that buying two systems will be more expensive and more power draw. Using one of the existent systems will slow some things down to their speed -- the current "fastest" system here has a Xeon E3-1231 v3 and while it has 3.4GHz it is surely slower (even singlethreaded) than current 16-core server CPUs... Thinking of it, a possible distribution accross multiple machines may be * (Existent) Storage server (1U, existent Fujitsu RX 1330 M1) [It does not do NVMe SSDs, though -- alternatively put the disks in the VM server?] * (New) VM server (2U, lots of RAM) * (New) Workstation (4U, GPU) For interactive use and experimentation with VMs I would need to power-on all three systems. For non-VM use, it would have to be two... it is an interesting solution that stays within what the systems were designed to do but I think it is currently too much for my uses. Still, thanks for the suggestion. OT: The hints about the details of e-mail encoding and signing are appreciated. Some other notes are here: https://sourceforge.net/p/courier/mailman/courier-cone/?viewmonth=202010 Linux-Fan Non-ASCII chars follow...: ö § ö ─ *E-Mail signed for experimentation* pgp_NWXyAEf2T.pgp Description: PGP signature
Re: General-Purpose Server for Debian Stable
Linux-Fan writes: Hello fellow list users, I am constantly needing more computation power, RAM and HDD storage such that I have finally decided to buy a server for my next "workstation". The [...] * Of course, if there are any other comments, I am happy to hear them, too. I am looking into all options although a fully self-built system is probably too much. I once tried to (only) get a decent PC case and failed at it... I can only imagine it being worse for rackmount PC cases and creating a complete system composed of individual parts? Hello everyone, I just wanted to thank everyone for the great replies they sent! I now know some additional things to consider and even got some progress on the unrelated e-mail signatures problem. Still unsure where I will end up with this, but if interested, I could post the actual results from my journey once I got the hardware. It will take some time, for sure, but probably happen before next year :) Thanks again Linux-Fan -- ── ö§ö ── 8 bit for signature ── ö§ö ── pgpCLjfKBBeGi.pgp Description: PGP signature
Re: General-Purpose Server for Debian Stable
David Christensen writes: On 2020-10-02 04:18, Linux-Fan wrote: David Christensen writes: On 2020-10-01 14:37, Linux-Fan wrote: >2x4T SSD for fast storage (VMs, OS) I suggest identifying your workloads, how much CPU, memory, disk I/O, etc., each requires, and then dividing them across your several computers. Division across multiple machines... I am already doing this for data that exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 4x1T outsourced to the other machine). Are the SSD's 2 TB or 4 TB? I currently have: * 1x Samsung SSD 850 EVO 2TB * 1x Crucial_CT2050MX300SSD1 together in an mdadm RAID 1. For the new server, I will need more storage, so I envied getting two NVME U.2 SSDs for 2x4T -- mainly motivated by the fact that I would take the opportunity to upgrade performance and that they are not actually that expensive anymore: https://www.conrad.de/de/p/intel-dc-p4510-4-tb-interne-u-2-pcie-nvme-ssd-6-35-cm-2-5-zoll-u-2-nvme-pcie-3-1-x4-ssdpe2kx040t801-1834315.html Of course, given the fact that server manufacturers have entirely different views on prices (factor 7 in the Dell Webshop for instance :) ), I might need to change plans a little... I currently do this for data I need rather rarely such that I can run the common tasks on a single machine. Doing this for all (or large amounts of data) will require running at least two machines at the same time which may increase the idle power draw and possibilities for failure? More devices are going to use more power and have a higher probability of failure than a single device of the same size and type, but it's hard to predict for devices of different sizes and/or types. I use HDD's for file server data and backups, and I use SSD's for system disks, caches, and/or fast local working storage. I expect drives will break, so I have invested in redundancy and disaster planning/ preparedness. Yes. It is close to the same here with the additional SSD usage for VMs and containers. Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have similar prices and power consumption, but the former will run sequential tasks twice as fast and the latter will run concurrent tasks twice as fast. Is this still true today? AFAIK all modern CPUs "boost" their frequency if they are lightly loaded. Also, the larger CPUs tend to come with more cache which may speed up single-core applications, too. Yes, frequency scaling blurs the line. But, the principle remains. I am not familiar with AMD products, but Intel does offer Xeon processors with fewer cores and higher frequencies specifically for workstations: https://www.intel.com/content/www/us/en/products/docs/processors/xeon/ultimate- workstation-performance.html AMD does it too, but their variants are more targeted at saving license costs by reducing the number of cores. As I am mostly using free software, I can stick to the regular CPUs. If I go for a workstation, I will end up with Intel anyways, because Dell, HP and Fujitsu seem to agree that Intels are the only true workstation CPUs. I would think that you should convert one of your existing machines into a file server. Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's can work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should be impressive. If you choose ZFS, it will need memory. The rule of thumb is 5 GB of memory per 1 TB of storage. So, pick a machine that has at least 20 GB of memory. 4x4T is surely nice and future-proof but currently above budget :) Yes, $2,000+ for 4 @ 4 TB SATA III SSD's is a lot of money. 
But, U.2 PCIe/NVMe 4X drives are even more money. Noted. Actually, 4x4T SATA is affordable, as is 2x4T U.2 if not bought from the server vendor [prices from HPE are still pending, but I am scared by browsing for them on the Internet already...] :) [...] down to their speed -- the current "fastest" system here has a Xeon E3-1231 v3 and while it has 3.4GHz it is surely slower (even singlethreaded) than current 16-core server CPUs... That would make a good file server; even better with 10 Gbps networking. 10GE is in place already, but there are other hardware limitations (see next). Thinking of it, a possible distribution accross multiple machines may be * (Existent) Storage server (1U, existent Fujitsu RX 1330 M1) [It does not do NVMe SSDs, though -- alternatively put the disks in the VM server?] * (New) VM server (2U, lots of RAM) * (New) Workstation (4U, GPU) For interactive use and experimentation with VMs I would need to power-on all three systems. For non-VM use, it would have to be two... it is an interesting solution that stays within what the systems were designed to do but I think it is currently too much for my uses. The Fujitsu might do PCIe/NVMe 4X M.2 or U.2 SSD's with the ri
Re: SSD and HDD
mick crane writes: Bearing in mind I rarely do installs and when I do usually let the installer do its thing. Got a PC that has SSD and a HDD. I see that you are supposed to avoid writes to SSD for longevity. Is it a matter of putting entries in fstab for /swap /var /home to suitably formatted partitions on HDD ? You can add to /etc/fstab: tmpfs /tmptmpfs defaults,size=6G,nr_inodes=1M 0 0 Change 6G to a size that suits you, 1/2 RAM could a good choice. This puts /tmp on a tmpfs such that writes to it no longer go to SSD. Another common option is using the `relatime` mount option for SSDs, but I do not configure this explicitly on my systems. Or is there more to it ? Depends on what exactly you want. In terms of "avoiding writes" all current SSDs do with typical OS workloads just fine by default and there is no /need/ to configure anything. I personally like that periodic `trim` invocations be made and thus use a cronjob for it: https://masysma.lima-city.de/32/ssd-optimization.xhtml I am not sure whether this is the recommended approach to it as of today, though and: It does not reduce writes but tells the SSD controller which blocks are unused by the OS such that it can delete them etc. :) For further optimization, if using virtual Windows machines, configure them correctly because otherwise Windows may start "defragmentation" -- a lot of unnecessary writes for any SSD. HTH Linux-Fan öö pgptrGTdxKPrd.pgp Description: PGP signature
Re: Replacement Email Client
Greg Wooledge writes: On Mon, Oct 26, 2020 at 01:49:05PM +0000, Curt wrote: > On 2020-10-26, Greg Wooledge wrote: > > On Mon, Oct 26, 2020 at 12:38:36PM +0100, Michael wrote: > >> he is talking about filling in forms, etc. that are part of the html email. > >> guys, ever heard of the ... html tags? that's what he means. > > > > But what would the form's Submit action be? > > > > > > > HTML Forms > > > First name: > Last name: > > > > > If you click the "Submit" button, the form-data will be sent to a page > called "/action_page.php". > > > Yes, this is exactly my point. If you've received this form from a WEB SERVER, then /action_page.php refers to a script on that same web server. Or the equivalent of a script. But if you're just reading this form in a FILE on your LOCAL MACHINE, which is what email is, then what is /action_page.php supposed to do? In an e-mail form one would expect the `action` to point to an absolute URL. Consider this example (derived from above):

<html><head><title>Absolute URL form test</title></head><body>
<form action="https://www.google.com/search" method="GET">
Search Query: <input type="text" name="q">
<input type="submit">
</form>
</body></html>

Works independently of where the file is stored and could thus also run inside an e-mail client. HTH Linux-Fan öö pgpeCNz8p4K3t.pgp Description: PGP signature
Re: The .xsession-errors problem
Teemu Likonen writes: It seems that ~/.xsession-errors file can still grow to infinity in size. Sometimes it grows really fast. This is nothing new: we have all seen it and talked about it. What do you do to maintain this file? Until now, I had not seen it as a problem. But it is quite large here, too: ~$ du -sh .xsession-errors 16M .xsession-errors - Do you just delete it when you happen to notice it's too big? I think that would be my approach, given that it seems to grow slowly here. First entry is from 30 Sep 2018 :) - Do you configure some rotating system, perhaps with logrotate(8)? (Why doesn't Debian have this automatically?) I think the policy against having an automatism is to avoid changing the home directory contents by anything else than explicitly user-invoked applications. - Do you add it to your backup system's ignore list so that a potentially big file doesn't fill your backups? I do not include /home in my backups at all (except for some specific subtrees) because there are a lot of applications writing unneeded files there. Consider the various recently-used-lists, cache files, thumbnails, GUI settings, trash folders whatever. Nothing to backup for me there. I opt to keep my data on an entirely different directory structure as to avoid applications dumping their files next to my personal data. [...] - Why is it normal that in Debian (and GNU/Linux) you need to manually delete a hidden file to keep it from filling your hard disks? There seem to be other areas that are constantly growing, too. APT downloaded package files come to mind. For /home-structures it is usually up to the user to fix it whereas the system-wide "growths" are better handled by the admin... [...] Just my thoughts, YMMV Linux-Fan öö pgpLpRPvm4i5V.pgp Description: PGP signature
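If one does want automatic rotation, a logrotate snippet can do it; this is only a sketch (e.g. placed in /etc/logrotate.d/xsession-errors, with the glob adjusted to the actual home directories):

/home/*/.xsession-errors {
    weekly
    rotate 2
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate matters here because the running X session keeps the file open; without it, the session would keep writing to the already-rotated file.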
Re: Sid random crash with no clue in log files
Sébastien Kalt writes: Hello, I'm having random crashes on my ASUS PN50 mini PC : sometimes it's while [...] When it crashes, the system seems to be not responding, I can move the mouse, but it doesn't click, and then the screen becomes blank. I need to shut down the mini PC by pushing the power button for a long time. [...]

Hello, common things to test in case of random crashes are hardware-related issues like cooling (CPU temperatures?), power supply and RAM (memtest86+).

Another thing to debug: While the system is in the described crash state, you can still move the mouse -- this means some software on the system remains running despite not being accessible through the GUI. An interesting step could be to attempt a login through SSH (install openssh-server if it is not already there to enable SSH access). From that SSH login you might be able to call a tool like `htop` to check the state of the system and stop any rogue processes etc.

I used to have similar types of crashes (the screen never went black, but the mouse cursor would still move while any further inputs like clicks and key presses were ignored) on a system with VirtualBox and the NVidia proprietary GPU driver (must have been Debian 7 or 8 IIRC). If either VirtualBox or the NVidia graphics driver was uninstalled, it would never crash that way... In any case, logging in through SSH, I could see that the Xorg process was at 100% CPU. Killing it caused the screen to go black. Attempting to restart it would fail in some way, but afterwards a login from the local system would again be possible and the hard reset could be avoided...

HTH Linux-Fan öö

pgpIWNhCGmk3b.pgp Description: PGP signature
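A rough outline of that SSH debugging path (host name and PID are placeholders, of course):

# beforehand, as root on the affected machine:
apt install openssh-server
# once the freeze occurs, from another machine on the same network:
ssh user@pn50
htop             # look for a process stuck at 100% CPU, e.g. Xorg
kill -9 PID      # last resort; with Xorg this typically blanks the screen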
Re: Special Key Assignment?
The Wanderer writes: On 2020-10-31 at 15:42, Thomas George wrote: > I want to assign a unicode character to an unused key on my keyboard. > For example F6 currently is assigned ~, name of my home directory. I > would like to change it to ♠. [...] # xmodmap -e "keycode 71 = U2660" [...] (I haven't tried remapping this myself, as I don't know offhand how to reverse it and I don't have a discardable test environment on hand, so I can't testify as to whether these work for me.) [...]

I just tried it in a VM and got the results The Wanderer suggested:

The following makes pressing [F5] input ♠:
xmodmap -e "keycode 71 = U2660"

The following makes pressing [F6] input ♠:
xmodmap -e "keycode 72 = U2660"

HTH Linux-Fan

pgp5jBE5XL1on.pgp Description: PGP signature
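Regarding reversing it: the following should work without restarting the X session, although I only offer it as a sketch (keycodes as above):

xmodmap -pke > ~/xmodmap.backup     # save the complete current keymap beforehand
xmodmap -e "keycode 72 = F6"        # reset just the one keycode back to F6
xmodmap ~/xmodmap.backup            # or restore everything from the saved keymap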
Re: Building my own packages
Victor Sudakov writes: songbird wrote: [...] > > Where can I learn to do a similar thing for Debian? I'd like to have my > > own package repository which: > > > > 1. Keeps my local patches and configure/build options. > > 2. Gets updated and recompiled when the main Debian repository gets > > updated. > > 3. Can have a higher preference for my Debian systems than the default > > Debian repositories. > > > > I know this can be done because I use some vendor repositories (zabbix, > > consul etc) but I need the tools and knowledge. > > > > What would you advise me to read? > > there is a ton of information under: > > https://www.debian.org/devel/

The problem is I don't need a ton of information :-) I need to hear from someone who has already done that for themselves: "I use such and such tools, and publish my repo this way..." [...] > in the place and set up a watch on the repository to > see when changes happen. I have no doubt there are many such tricky things, that's why I'm looking for a tutorial. [...]

Hello, I am among those who have done something similar, although with a slightly different focus:

* For me, it is mostly not about changing options of existing packages but rather about adding some packages that are not there yet (at all)
* Additionally, I store a lot of configuration
* I only upgrade from upstream on rare occasions, thus I have not automated that part thoroughly. The commits from "Oct 27, 2020" here show how I did an upgrade: https://github.com/m7a/lp-cone/commits/master

For my use case, I combine the following tools:

* debuild (Debian package building)
* reprepro (Custom repository)
* ant (generic build tool)
* Docker¹ (chroot-like environment)
* git (for storing metadata and my own source code)
* Perl (to tie it all together)

My idea is to have one git repository per package and inside that, the package's metadata is "wholly" described by a single `build.xml` file for `ant`. Working on a package is mostly done by modifying the files in the git repository and then triggering automatic builds by committing the current state of work (no need to upload anything to a server, the system uses the local file system).

I am really unsure whether my approach is anywhere near what you need, but I have automated almost all of the actual packaging stuff including the creation of a repository, the invocation of Debian build tools, the synchronization with the repository, the creation of a clean build environment... so maybe it can serve as a "formal tutorial", i.e. some code to look at that does all the necessary steps. Beware that it is a simplified version of the story though -- the packages created by my system are not suited for inclusion in Debian as-is (still pondering how to achieve that one day...). Here is the documentation of all components:

* https://masysma.lima-city.de/32/masysmaci_main.xhtml
* https://masysma.lima-city.de/32/masysmaci_build.xhtml
* https://masysma.lima-city.de/32/masysmaci_pkgsync.xhtml
* https://masysma.lima-city.de/11/maartifact.xhtml

This is my second approach to automating the packaging. Before that, I used a script called `mdpc` (1.0) which I posted to this mailing list at the time it was written. Afterwards, it did not really change anymore... and it still “mostly” works today -- for instance, it is highly dependent on the host system and some of its functions have not been used for at least a year...: https://lists.debian.org/debian-user/2013/08/msg00042.html

¹) Debian package `docker.io`.
Note: Docker is a good substitute for most chroot uses (and many VM uses, too), but it gives up much of their lightweight nature, and running Docker as a regular user is still experimental (I did not try it yet...). Additionally, it is one of the more complicated tools out there...

HTH Linux-Fan öö

pgpyl6DfIoZ2T.pgp Description: PGP signature
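Stripped of all my automation, the bare Debian tooling from the list above boils down to something like the following; package name, codename and repository directory are placeholders, and reprepro's conf/distributions file is assumed to exist already:

cd mypackage-1.0     # source tree with a prepared debian/ directory
debuild -us -uc      # build an unsigned .deb one directory level up
reprepro -b /srv/localrepo includedeb bullseye ../mypackage_1.0-1_amd64.deb
# client systems then get an APT line along the lines of:
# deb [trusted=yes] file:///srv/localrepo bullseye main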
Re: An old box running Debian 8
Miroslav Skoric writes: I have an old comp (CPU Pentium II Celeron 400 MHz, 224 MB RAM) running a ham radio server in Debian 8. It works well in CLI, but very slow after starting GUI. I wonder whether it would be worth trying (if possible at all) to upgrade it to Debian 9. Any experience with such old boxes? Misko YT7MPB

Pentium II is old indeed. Whenever using old processors, it is important to test whether the new kernel will still support them. As long as you stay on the CLI I do not expect a major performance degradation from the upgrade. I am running Debian 10 on an old laptop (Acer TravelMate 210) with 128 MiB of RAM and a 700 MHz Intel Celeron (?) and it is slow even in command-line use with "heavy" applications like vim or maxima. Other tools, like sc-im, run quickly enough, though...

HTH Linux-Fan öö

pgpdtMjiSEd2L.pgp Description: PGP signature
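A quick check before attempting such an upgrade is to compare the CPU flags against what the newer i386 kernels expect -- the 686 kernel flavour needs a 686-class CPU (cmov), the 686-pae flavour additionally needs PAE. Roughly:

grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(cmov|pae)$'

If both show up, the stock kernels of the newer release should at least boot; take this as a sketch, not a guarantee.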
Re: An old box running Debian 8
Miroslav Skoric writes: On 11/11/20 7:09 PM, Linux-Fan wrote: Pentium II is old indeed. Whenever using old processors, it is important to test if the new kernel will still support them. So maybe I shall try some newer kernel only? If you have an easy means to do that: Yes, I would highly recommend doing that. YMMV Linux-Fan pgpeSWud6ifYf.pgp Description: PGP signature
Re: An old box running Debian 8
Felix Miata writes: Miroslav Skoric composed on 2020-11-12 23:01 (UTC+0100): > At this stage (Debian 8) I do that in MATE + Thunderbird. It's slow but > works. What is not known is whether that would work in Debian 9. Possibly you could boot live media 9 to find out, or if you have enough disk space available, install a minimal 9 alongside 8.

The problem with a live system on limited hardware is often the amount of RAM. I am impressed to read that Debian 8 + Mate + Thunderbird works anywhere near acceptably on that hardware. I am curious about a `free -h` output from that Pentium II-style Celeron machine :)

Just for comparison, on my old Acer Travelmate laptop (Debian 10), for GUI tasks, I run the i3 window manager. It performs acceptably. Starting urxvt also works. But whenever I run a "larger" GUI application like the zathura PDF viewer (which is pretty minimalistic compared to a full-blown Thunderbird), it gets very slow and uses swap. While I have occasionally run it this way, I would not consider it really "working" because basic features like scrolling become "unusably" slow.

On the other hand, if you are using Mate now and can consider switching to something lighter (IceWM and FVWM were good suggestions I saw in the thread), chances are it will continue to run slowly "as it used to" -- the RAM and CPU saved from the DE may just be enough to compensate for the higher resource usage of newer software...

PS/OT: My previous message missed the important signature characters: öö. Thus its PGP-signature is unfortunately incorrect :(

HTH Linux-Fan öö

pgpUUS2DJ31ro.pgp Description: PGP signature
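As an aside, to tell "merely slow" apart from "actively swapping" on such machines, the following is usually enough (both tools come from procps, which is installed by default):

free -h
vmstat 5     # sustained non-zero values in the si/so columns indicate active swapping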
Re: GPU for a new PC
George Shuklin writes: I'm diving into 'new PC struggles', and the thing to think hard about is the GPU. What GPU is good for Linux? I've tired of Nvidia blobbing (even if it's packaged really well now), so the next (and last) thing is AMD. How well is it working with open source drivers? Just enough for desktop, or good for normal 3D? If you recently upgraded/bought a GPU, what were your issues, if any?

My experience with nVidia is that it works well with the proprietary graphics drivers. It is the way of least hassle towards a fully-functional GPU on Linux if one needs more than an IGP's performance.

I also have two AMD GPUs (Radeon RX570 and Radeon Pro W5500) with the following experience: The RX570 works out of the box for 3D acceleration with the open source driver + non-free firmware blobs. The W5500 does not work out of the box. A newer kernel and firmware are needed -- I am using 5.8.0-0.bpo.2-amd64. Even then, on Debian 10, it does not do any 3D acceleration or video decoding yet. It does support at least three screens in that configuration already (no fourth one to test here :) ) and 2D performance is decent enough for working (scrolling works etc.)

I can get the 3D capabilities of the W5500 to run using the open source driver + firmware blobs with a newer Mesa version. I successfully tested version 20.2 (from Debian sid) for that purpose and it performs well. There does not seem to be a backported Mesa for Debian stable (and it does not seem exactly trivial to do one myself, although I have not tried yet).

Some time ago, I tried to get the compute acceleration (i.e. OpenCL) of the AMD Radeon RX570 to run. I got it to work in the end with a strange mix of the proprietary driver and the free implementation installed at the same time. I remember it was a "nightmare". It did run stably, though. I have not tested OpenCL on the W5500 yet.

Summary: Older AMD cards work with Debian stable, newer ones only with newer Debian (tested with Debian sid).

HTH Linux-Fan öö

pgpPU9y1y4Xkj.pgp Description: PGP signature
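To check which kernel driver and which Mesa version are actually in use, something like the following is usually sufficient (glxinfo comes from package mesa-utils):

lspci -k | grep -A 3 VGA                        # shows the "Kernel driver in use", e.g. amdgpu
glxinfo | grep -E "OpenGL (renderer|version)"   # shows the renderer and the Mesa version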
Re: Dual Win10/Linux on HDD+SDD installation & RTL8821CE
Kanito 73 writes: Hello Finally I bought the laptop with Ryzen 5, it arrived yesterday. At first I backed up (clonezilla) the whole brand new system (Windows 10) before running it for the first time, to have a virgin copy of the original system. Today I will erase the disks to create partitions and install both Windows 10 and Linux, but I'm not sure about how to organize the space. The laptop comes with a 1 TB HDD and a 128 GB SSD. Windows 10 is installed on the 128 GB SSD and the whole [...] (Can Linux run on "sdb" (Windows on "sda")?)

Yes, it can. [...]

My suggestions are as follows:

* Install the primary OS (the one you use more) on the SSD. It will improve application startup times and update processes, both of which one needs to perform rather often independently of system usage.
* If your primary OS is Linux, consider using the 128 GiB SSD for Linux only and put a decent swap partition on it to aid with data-intensive applications (how much RAM does your laptop have?)
* Instead of installing the secondary (i.e. less-often-used) OS on the 1 TB HDD, consider running it as a virtual machine under the primary OS. For Linux, you can use virt-manager + KVM, for Windows (if you have Windows 10 Pro), you can use Microsoft Hyper-V. Advantages of this approach:
  + No need to restart the computer to access the secondary OS. Especially: No need to restart the computer to install security updates for Linux. If you use Linux host systems: No need to restart the computer except for applying kernel upgrades.
  + No need to worry about Windows' rapid startup feature that does not shut down the computer but rather goes into some special suspend-to-disk mode. It can cause file system corruption if data is accessed by another OS while Windows is in that state. Modern Linux will most likely warn you before it happens, though :)
  + You can share data through a networked file system (SMB, even if it is only between host and VM) and store it in the host OS' native file system. Rationale: I'd advise against using NTFS productively for Linux data (although I have not tried it extensively).
  + You can start by putting both OSes on the SSD and, once space fills up, move the virtual HDD to the 1 TB HDD. Depending on the usage pattern, a 128 GB SSD may be enough for both [I know that I'd exceed it pretty quickly, but I use larger SSDs for that reason...]
* If Windows is your primary OS, Microsoft has made some interesting progress in supporting the use of Linux applications. Check out the "Windows Subsystem for Linux 2" (WSL2) and Linux Docker containers on Windows (Docker on Windows boils down to a hidden VM IIRC).

HTH Linux-Fan öö

pgpz9pI4j3KeZ.pgp Description: PGP signature
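In case the virtual machine route sounds interesting: on a Debian 10 host the basic setup is just a package installation plus group membership, roughly as follows (user name is a placeholder):

# as root on the Linux host:
apt install virt-manager qemu-system-x86 libvirt-daemon-system
adduser yourusername libvirt    # log out and back in afterwards
# then create and manage the Windows VM from the virt-manager GUI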
Re: NTFS partitions can't be mounted
Kanito 73 writes: Hello All the previous issues I published are now solved. Relative to the RTL8821CE, I searched for a module rtl8821ce.ko but the generated module was just 8821ce.ko, so when I loaded the only RTL* module (rtl8821ae.ko) the right 8821ce.ko was already loaded and I thought it was the rtl8821ae activating my wifi. [SOLVED] Now I have another BIG problem. I installed both Windows 10 (version OCTOBER 2020) and Debian 10.0.6 in dual boot and left a large NTFS partition for data on the primary disk (HDD) and the whole secondary disk (SSD) also as a single NTFS partition.

Which parts went onto the SSD and which onto the HDD in the end? Which of the two systems do you intend to use more often? Which of the two systems will run computation-intensive (CPU, RAM, GPU) applications?

Well, I installed Windows, then installed Linux and tested the NTFS partitions from Linux (Debian), mounting and copying some files successfully. [...] So I think that Windows 10 locks the partitions or something weird is going on. If not solved I will clear the entire disks and install only Debian and run Windoze on VirtualBox, which I don't want to since there are some games that I want to play on native Windows, but if it does not work I will have to remove it from the computer. CHKDSK.EXE reports no errors when I run it on such partitions. [...] Damn Windows, it is making me cry blood but I need it for some games and programs... [...]

See https://lists.debian.org/debian-user/2020/11/msg00574.html The Wanderer's post contains a more elaborate explanation of the immediate issue you are most likely facing: Windows going into suspend-to-disk rather than an actual shutdown. A possibility to bypass the fastboot/rapid startup technology is to use suitable arguments to the Windows `shutdown` command. It used to be possible to bypass it by doing a right-click on the Windows logo in the lower left and then choosing "Shutdown" from that menu, but I am not sure if this still works.

I'd still very much recommend running one of the two systems inside a virtual machine rather than physically, even if the immediate issue with the rapid startup can be solved. What about running virtual Linux under a Windows host? This would retain the gaming performance while at the same time solving the issues wrt. the file system. Of course, if you want to use your full RAM for Linux, this approach will not work. From my experience it is already difficult to run an 8 GiB Linux VM on a 16 GiB Windows machine (tested with Windows 10, Debian 10 and Microsoft Hyper-V).

HTH Linux-fan öö

pgpsdHpHL6qE9.pgp Description: PGP signature
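For completeness, the Windows-side commands would roughly be the following, run from an administrator command prompt -- this is from memory, so treat it as a sketch rather than authoritative advice:

:: full shutdown that bypasses the fast-startup hibernation file:
shutdown /s /t 0
:: disable hibernation and with it the fast startup feature entirely:
powercfg /h off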
Re: NTFS partitions can't be mounted [SOLVED]
Kanito 73 writes: [...] > Linux-fan: > Which parts went onto the SSD and which onto the HDD in the end? > Which of the two systems do you intend to use more often? > Which of the two systems will run computation-intensive (CPU, RAM, GPU) > applications?

HDD:
sda1-sda4 Windows 10 (100 GB+)
sda5-sda7 Debian 10 (100 GB+)
sda8 Data partition (700 GB NTFS)
SSD:
sdb1 Data partition (128 GB NTFS)

Thanks for sharing!

My primary OS is Linux, I rarely use Windows but I require a native installation to run some programs with direct access to the hardware... Most programs can be run in a VirtualBox machine but some games will not work and a few programs would run better directly on the computer. The most hungry programs (processor and memory) are in Linux, basically KDEnlive and OBS, except for a few games and programs in Windows (not commonly used but they are available when I need them). Thanks all of you (and the other people who helped me along the installation and configuration) for your help and comments. My Windows 10 / Debian 10 box is up and running. Once everything was working and tuned up I made a Clonezilla backup of all partitions, so when one of the systems gets saturated or damaged I just need to restore the corresponding boot partition or the whole system partitions with the system already installed and configured; since my work and files are stored in the external data partitions I do not have to make or restore a backup of them... [...]

OK. I still wonder whether NTFS is the best file system to use under the condition that it is mainly Linux with some Windows use? What do you do about the mangled Unix permissions on NTFS drives? Usually, in such a scenario of mostly Linux, I'd recommend using two data partitions, one "main" data partition (e.g. 500 GiB) with ext4 and one "exchange" data partition (e.g. 200 GiB NTFS) for data explicitly shared with Windows. This way, most Windows malware will not access or damage all of the data but just the parts shared with Windows. And the Linux applications will not incur the problems arising from wrong permissions.

In case you are interested in some experimentation, I suggest trying another virtualization software on Linux (e.g. you used VirtualBox, so how about virt-manager + KVM?) Maybe another virtualization software can make your hardware accessible to guest (Windows) systems in a way that it performs well. USB redirection works quite well for me on virt-manager + KVM on Debian 10. PCIe redirection not so much (maybe I tried too advanced things with too old hardware :) ). There does not seem to be a convincing solution for gaming on Linux, though. A few years ago, the only virtual machine solution that could provide some (reduced, but OK for me) 3D gaming performance for Windows VMs was VMware. I personally like `playonlinux` (it avoids the virtualization and "real" Windows altogether), but not all games will run with it.

HTH Linux-Fan öö

pgphNRyuiUKJG.pgp Description: PGP signature
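To make the suggested split concrete, the /etc/fstab entries could look roughly like this; device names and mount points are placeholders, the uid/gid must match your user, and ntfs-3g comes from the Debian package of the same name:

/dev/sdXN  /data      ext4     defaults                                    0  2
/dev/sdYM  /exchange  ntfs-3g  uid=1000,gid=1000,umask=022,windows_names   0  0

The windows_names option merely rejects file names that Windows could not read back, which is useful on a partition meant for exchanging data.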
Re: Sharing files LINUX-LINUX / LINUX-WINDOWS / WINDOWS-WINDOWS
Kenneth Parker writes: On Tue, Dec 1, 2020, 9:10 AM Anssi Saari <a...@sci.fi> wrote: Kanito 73 <kanit...@hotmail.com> writes: > At first I thought to use both SAMBA for LINUX-WINDOWS and maybe NFS for LINUX-LINUX but I used NFS a long time > ago and it was slow as a turtle. Is there another networking service available that runs faster only for > LINUX-LINUX or is it better to use SAMBA for everything? Personally I don't bother with Samba for file sharing on my home network since Microsoft's seen the light and Windows 10 (Pro) includes NFS support. And no, NFS is not slow as a turtle. [...] Can you provide a Microsoft (or related) link to the Windows NFS Support? (On this List. You do not need to cc me).

The straightforward explanation: https://graspingtech.com/mount-nfs-share-windows-10/

The official documentation: https://docs.microsoft.com/en-us/windows-server/storage/nfs/nfs-overview

I have not used NFS support on Windows yet :)

HTH Linux-Fan öö

pgpp1yvxfvejb.pgp Description: PGP signature
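From the linked documentation, the Windows side boils down to enabling the "Client for NFS" feature and then mounting the export -- untested by me (see above), so only a sketch with placeholder names:

:: on Windows, after enabling "Client for NFS":
mount -o anon \\fileserver\srv\data Z:

On the Linux server, the corresponding /etc/exports line could be e.g.:

/srv/data 192.168.1.0/24(ro,all_squash)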
Re: Release status of i386 for Bullseye and long term support for 3 years?
Andrew M.A. Cater writes: On Tue, Dec 15, 2020 at 06:42:41AM -0700, Charles Curley wrote: > On Tue, 15 Dec 2020 13:42:37 +0200 > Andrei POPESCU wrote: > > > That is, if you and other list subscribers care about continued i386 > > support you should probably look into contributing. > > And how does one do that? [...] If you have "real" 686 32 bit hardware that you can press into service that isn't being used: pick up a Debian i386 disk and try reinstalling Debian. If you have "real" 686 32 bit hardware - get a copy of a Debian live CD and boot it - you may face problems if there isn't a lot of memory. [...]

Which version of Debian should be tested in such cases -- testing or stable? Any specific image files that I should use? If not, I'd head for the most recent official netinst image :) I have a few i386 systems (which will not boot amd64) here and could do a few installs and live CD tests. Is there anything of interest to report apart from obvious "problems" and "successes"? Does it make sense to test different "boot paths"? I normally burn actual CDs for the installation because that's what all (?) of my i386 systems support booting from, but some of them may do USB-pendrive based installations if triggered from an existing GRUB 2 prompt, too.

I have got seven i386 (non-amd64-capable) machines in total, all of which are known to work with "Debian 10 stable", although not freshly installed but upgraded from earlier releases. At least two of them can be easily used for testing and one of the others has very low RAM (it will not run live systems, that is...).

HTH Linux-Fan öö

pgpbuJtV1XdwQ.pgp Description: PGP signature
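For the record, preparing the installation media is nothing special; image and device names are placeholders and the second command overwrites the pendrive's contents completely:

wodim -v dev=/dev/sr0 debian-netinst-i386.iso     # burn to CD-R (package wodim)
cp debian-netinst-i386.iso /dev/sdX && sync       # or write the hybrid image to a USB pendrive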
Re: Release status of i386 for Bullseye and long term support for 3 years?
Andrew M.A. Cater writes: On Mon, Dec 21, 2020 at 07:06:45AM -0800, Rick Thomas wrote: > > On Mon, Dec 21, 2020, at 3:48 AM, Andrew M.A. Cater wrote: [...] > > If you have "real" 686 32 bit hardware that you can press into service > > that isn't being used: pick up a Debian i386 disk and try reinstalling Debian. > > > > If you have "real" 686 32 bit hardware - get a copy of a Debian live CD > > and boot it - you may face problems if there isn't a lot of memory. [...] > I'll file an installation report soon. After that, I guess I'll try > installing Bullseye and file a report on it. > > Does anybody know if there's an i386 Live DVD for Bullseye?

Well, that's a good start :) The test suite we used to test for stable release CDs is here: https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r7?highlight=%28testing%29%7C%28cd%29%7C%2810.7%29 - as you can see, we try and exercise as many paths through the standard Debian installer as we can. Just knowing that there are people out there who could help us test some more installs on real hardware helps. You can always do these on KVM / virtual machines of some sort on AMD64 hardware but it doesn't exercise real hardware. The text-speech installs for visually impaired folk are always tested on real hardware but more tests are always welcome - as would be any comment on anything that's unreasonable on low memory. Debian live CDs are not put out by the image creation team but by the debian-live team: I suspect they'll be there before the Bullseye main release. For testing live CDs - download one or more and try them? [...]

I tried to do a few tests for live CDs and the contained installers. Additionally, I tried the new Bullseye-Alpha3-Installer. I can report that, except for the "Calamares" installer which did not want to run on a 512 MiB system, all of the tests I did worked. As far as I can tell, the corresponding Wiki lines that would get a "PASSED" from me are these:

1105a debian-live-10.7.0-i386-lxde.iso BIOS Install from DI (Simple disk: single fs EXT4 Network enabled)
1105a debian-live-10.7.0-i386-lxde.iso BIOS Start Live Image
1106a debian-live-10.7.0-i386-standard.iso BIOS Install from DI (Simple disk: single fs EXT4 Network enabled)

The more detailed report is currently at [1]. Is it OK for me to just fill in the three lines above in the Wiki? It looks like the page is mostly used by experienced Debian contributors (and I do not even have a Debian Wiki account yet...)? Does a similar page exist for the Bullseye installer? Or am I on the wrong track and installation reports should go to an entirely different location?

It is to be noted that live GUI systems are _extremely_ slow on my i386 machines. Even LXDE, which I thought uses the lowest resources of the available Debian live systems, cannot help speeding up heavy applications like Firefox -- it takes literally minutes to open Firefox and process the URL given as a parameter. Xterm runs well, though :)

[1] https://masysma.lima-city.de/37/debian_i386_installation_reports.xhtml

HTH Linux-Fan öö

pgpUaSuMTMw0x.pgp Description: PGP signature
Re: No GRUB with brand-new GPU
Georgi Naplatanov writes: On 12/27/20 12:19 AM, The Wanderer wrote: > I have for some years been running Debian with an older model of AMD GPU > (Radeon HD 6870) for graphics. > > I recently purchased a relatively recent model of GPU (Radeon RX 5700 > XT), and today swapped it in and attempted to boot with it. [...] > With the new GPU in place, I get video output during POST and in the > BIOS (yes, this machine is old enough that it doesn't have a UEFI) > without problems. That demonstrates that the GPU isn't dead on arrival, > and that signal is getting through to the monitor on a basic level.

Curious, what machine does not do UEFI yet but still benefits from large GPUs such as the RX 5700? [...]

> Any suggestions for what to try? [...]

I'm not a hardware expert but I found the following on the Internet: - the interface for this card is PCI-Express 4.0 and I guess that your old computer doesn't support that

PCIe 4.0 is backwards-compatible (down to at least PCIe 3.0, possibly further).

- BIOS Support - Dual UEFI - I'm not sure what this means, but is it possible that this card is not supported by non-UEFI systems?

(Unsure on this) [...]

I have a Radeon Pro W5500 which is also too new to be supported on Debian stable out of the box. What I did to get it running for basic 2D graphics (enough for me most of the time) was to install the following packages from debian-backports:

* linux-image-5.8.0-0.bpo.2-amd64
* firmware-amd-graphics

Additionally, I tried to enable amdgpu in xorg.conf, although I am not sure whether that is actually needed:

$ cat /etc/X11/xorg.conf
Section "Device"
        Identifier "AMD"
        Driver "amdgpu"
EndSection

This system is using UEFI to boot and I do not recall having the same issue (black screen) even before these changes, though.

Here are some other suggestions: As you mentioned GRUB not loading correctly, could there perhaps be an explicit graphics resolution configuration entry? I do not have any special configuration enabled there:

$ grep -E '(GFXMODE|TERMINAL)' /etc/default/grub
#GRUB_TERMINAL=console
#GRUB_GFXMODE=640x480

Apart from a live system, it could already be helpful to boot a Debian installer to see if it can load Linux when GRUB is not involved. If that works (i.e. presents the language selection screen), checking with a live system seems to be a reasonable next step.

If you need the 3D acceleration performance (why buy an RX 5700 if not?), I'd suggest setting up a Debian testing environment for that. I managed to get some (fast enough for me) 3D acceleration working by creating a Debian sid chroot (testing did not have some dependencies I needed) and I can use it by logging in through TTY2 -> chroot -> startx. BUT: Please note that this setup is brittle -- dual-boot or testing-only configurations can be expected to run _more stably_ in this scenario :)

HTH Linux-Fan öö

pgpUgcLw5pUVE.pgp Description: PGP signature
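If GRUB's graphical output is under suspicion, forcing it into text mode is a low-risk test; reverting is just commenting the line out again:

# in /etc/default/grub:
GRUB_TERMINAL=console
# then regenerate the configuration and reboot:
update-grub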