Re: Illegal Instruction Using sudo in Bookworm on i686

2024-06-08 Thread Sven Mueller
https://www.compmall.de/VDX3-EITX-75S-505669 is in stock. I found a variety of other shops selling similar boards, some having them in stock, some not.

On 08.06.2024 13:29, rhys wrote:
Yes, this is a known issue.  This is because Bookworm only supports 32-bit CPUs that are fully Intel compatible.  You will find that there are other binaries such as ffmpeg that fail with the same problem.  (This is from memory.  I have a similar system that is "almost" Intel compatible, but cannot run Bookworm due to these issues.)

The fact that these processors are still sold today is interesting, though.  A big part of the argument for limiting 32-bit support has to do with the assumption that they are all "very old" systems.  So a NEW 32-bit processor might change that discussion.

This is perhaps a better link for that, though, since it shows a product based on that CPU, rather than just a discussion about it:

https://icop.com.tw/product/VDX3-PITX#!

(Note:  The site uses a self-signed certificate.  Bleh.)

It doesn't list a price or availability, though, which suggests that it may NOT actually be sold any more.  

Do you have an example of a site where these are available for purchase as new (not used or refurbished)?

--J

> On Jun 8, 2024, at 02:25, Laszlo Merenyi  wrote:
> 
> Message-id: 
> In-reply-to: <529d4728-d26f-43ff-a957-54b29652d...@neoquasar.org>
> References: <529d4728-d26f-43ff-a957-54b29652d...@neoquasar.org>
> 
> Hello,
> I encountered a similar sudo issue with Bookworm installed on a Vortex86DX3 CPU based embedded computer. 
> Vortex86 series chips are lesser-known x86 CPUs that are still manufactured and available on the market today. Their type detection in the Linux kernel was implemented in 2021. They are 32-bit only and they are stated not to be fully i686-compatible CPUs.
> See for example: https://www.icop.com.tw/news/858#! 
> 
> I was able to get the sudo (and visudo) executables working on this CPU by recompiling the sudo-1.9.15p5 source code package on the target with the "-fcf-protection" hardening option manually removed.
> 
> I have not yet met any other program in Bookworm's i386 release having a similar "illegal instruction" issue. So, by using a recompiled sudo, Bookworm seems to work on the Vortex86DX3.
> 
> Regards,
> Laszlo
> 
> 
> 
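
For anyone who wants to try Laszlo's workaround, here is a rough sketch of
the rebuild he describes (the version wildcard and the exact way the flag
gets injected are my assumptions; if it comes in via dpkg-buildflags,
DEB_CFLAGS_MAINT_STRIP is the usual knob, otherwise the flag has to be
dropped from sudo's own configure/rules):

  # on the Bookworm i386 target, with deb-src lines enabled and the
  # build dependencies of sudo installed
  apt-get source sudo
  cd sudo-1.9.*
  # drop the CET instrumentation from the hardening flags, e.g. by adding
  # this line to debian/rules:
  #   export DEB_CFLAGS_MAINT_STRIP = -fcf-protection
  dpkg-buildpackage -us -uc -b
  dpkg -i ../sudo_*.deb    # install the rebuilt package as root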




Re: Security updates for sarge?

2004-10-23 Thread Sven Mueller
Ingo Juergensmann [u] wrote on 22/10/2004 18:35:
On Fri, Oct 22, 2004 at 06:13:46PM +0200, Martin Schulze wrote:
Because they have set up and maintain the buildd network.
Yes, nice, well done, thank them for their initial work, but it seems as if
it's up for others now to take over that job, because they obviously failing
continuously doing it now.  
I must admit I thought something similar:
Why the hell are there only two people who know how to do it, when two 
people don't seem to be enough? It might be better if they postponed 
further work on the buildd network and used that time to introduce 
others to the job. In the end, this might very well speed up the whole 
process. At the very least, it adds some redundancy (what happens if one of 
them gets ill while the other is on a prolonged journey?).
Two people who can do the job are certainly not nearly enough for such 
important jobs in a project as big as Debian. I would think it should be 
at least 5-6 people.

Similar things could be said about ftpmasters. New packages are supposed 
to be added to unstable within at most one week, but I'm waiting for ten 
days now. (Yeah, I know, still not _that_ long.) I'm not complaining, 
just wondering.

Heck, if I were a DD, I would be glad to help wherever needed. The most 
pressing bits seem to be (from my POV):
1) buildd network (especially because of sarge/security)
2) ftpmaster (seems to be overwhelmed in work for months now)
3) new-maintainer process (though it seems to have sped up considerably
   during the last year)
4) security team (though I'm not sure how bad the situation is)

So, if my help is wanted with one of the first three of those, I will 
gladly file a NM application immediately.

cu,
sven



Re: Security updates for sarge?

2004-10-23 Thread Sven Mueller
Manoj Srivastava [u] wrote on 23/10/2004 21:43:
I must admit I thought something similar: Why the hell are there
only two people who know how to do it, when two people doesn't seem
to be enough?
Are you volunteering to go out and better educate yourself to
 take on this work?
You know perfectly well that there _are_ people out there who know how 
to do it.
Also: I offered my help if it is wanted (see below), but I see no point 
in learning what's needed to work as a buildd or ftp admin for Debian 
while I know perfectly well that helping hands in these areas are 
regularly declined by those in charge.

  It might be better if they postponed further work on
the buildd network and used that time to introduce others to the
job. In the end, this might very well speed up the whole process. At
least, it gets some more redundancy (what happens if one of them
gets ill while the other is on a prolonged journey?).  Two people
who can do the job certainly isn't nearly enough for such important
jobs in a project as big as Debian. I would think it should be at
least 5-6 people.
	Again, are you volunteering to go out and learn how to do it?
If my help is indeed wanted: Yes.
Under the current circumstances (with no definite acknowledgement that 
my help will be accepted): no.
Also you are in no way responsive to my main point: Why are there only 
two people doing the job when quite a few more people have already 
offered to help (and are indeed qualified to do the job)?

 Or is this yet another time wasting rant?
You mean like your post?
Heck, if I were a DD, I would be glad to help wherever needed. The
Ah. Just a spectator, booing and hissing at the people who
 have stood up to be counted.
And who decline help every time the subject of their workload comes up?
Also: No, not just a spectator. I have been advocating and deploying 
Debian for quite a while. I have also helped new users of Debian quite a lot. 
And my first Debian package was uploaded almost two weeks ago and 
is still waiting in the NEW queue.

So, if my help is wanted with one of the first three of those, I
will gladly file a NM application immediately.
	Please do.
Fine. Where do you want me to help? When I know where my help is wanted 
and accepted, I will gladly file the application. Until then I currently 
see no point in doing so (putting more load on the DAM without having 
actual work for me to do).

>  We need more workers, and less lawyers.
Exactly my point. Problem is that the current workers are doing 
everything to keep others from being able to do their work.

cu,
sven



Re: Bug#277582: ITP: kwin-baghira -- A MacOSX-like theme for Apple junkies ;)

2004-10-26 Thread Sven Mueller
José Luis Tallón [u] wrote on 25/10/2004 03:53:
 [I hope my post won't be doubled, my MUA just crashed exactly when I 
hit CTRL-enter to send the mail]

Wouter Verhelst wrote:
>
On Fri, Oct 22, 2004 at 04:31:52AM +0200, Adeodato Simó wrote:
I would suggest a name like kde-$FOO-style to be used (e.g.,
kde-baghira-style) for packages that provide a widget style for
QT/KDE, and include kwin decoration (if they exist) in the same
package. (*)
For the sake of consistency, I would suggest kde-theme-$FOO. This is
what enlightenment, jsboard, opie, and even previous incarnations of KDE
itself use (kdeartwork-theme-*).
This is NOT a theme... the source package is *of course* called 
Baghira... i assumed kwin-$FOO for a kwin complement made sense. If it 
does not, i can rename it to simply 'baghira' and be done with it... 
additional suggestions?
If it's not a theme but a style, why not name it like this?
kde-style-$FOO (in your case: kde-style-baghira)
That would make all kde styles appear next to each other in 
dselect/aptitude and be consistent with the naming scheme of kde themes.

By the way, it is already packaged and works pretty well ( i have been 
using my own packages of baghira for the last 3 months)... it has not 
been uploaded already because my sponsor hasn't got the time to do it yet.
Those sponsors are always packed with work ;-)
cu,
sven



Re: any comments on diagram?

2004-11-09 Thread Sven Mueller
Kevin Mark [u] wrote on 09/11/2004 18:29:
i made a diagram in xfig of what I think is the debian development
model. Could folks give me a few comments on what's not correct.
http://kmark.home.pipeline.com/debian.png
AFAICT from my limited debian expertise ;-), there is at least one 
mistake in that diagram:

backports are ports of packages from unstable and/or testing to stable,
   not ports of 3rd party software to stable.
Also, backports are _not_ part of Debian/stable.
volatile (not volitile) has been discussed for some time now, but to the 
best of my knowledge, it doesn't currently exist.

If you list "security source" as a source for sources (SCNR), the arrow 
from "debian sources" to stable/security is wrong.  Security fixes may 
be merged into the debian sources tree, but you will most likely never 
see any direct introduction of a package from the generic upload area 
(which you seem to call "debian source") to stable/security.

If I'm not completely mistaken myself, a "normal" DD can upload to:
unstable
testing-proposed-updates
stable-proposed-updates
From unstable, a package goes into testing when it has no RC bugs filed 
against it and is at least 10 days old.

From testing-proposed-updates (which is only useful when testing is 
frozen during preparation of a new stable release), it goes into testing 
if the release-manager is convinced that it does more good than harm to 
include it.

From testing, a package goes into stable when testing was frozen, the 
package has no RC bugs filed against it and a new stable release is 
published.

From stable-proposed-updates, a package goes into stable when the 
release manager is convinced this update is needed and its negative 
side-effects (if they exist) are unavoidable and less important than the 
positive effect of doing that update.

security updates are done by the security team. they can (and usually 
do) upload directly into the security archives (stable/security and 
sometimes testing/security).

volatile is still non-existent and therefore has no policy in place. But 
I would assume that it should be handled similarly to 
stable-proposed-updates, but with a different release manager (team) and 
a very slightly less strict policy regarding acceptable negative side 
effects.

backports are completely outside Debian for now. There has been some 
talk about it during the volatile discussion, indicating that there 
should be a backports archive within Debian: a sort of componentized 
archive allowing users to install selected applications from testing on 
a stable platform. If this is implemented, the packages would not come 
from a 3rd party source, but from the testing archive (with fixes to 
allow them to run on stable).

regards,
sven
--
"Pinguine können ja bekanntlich nicht fliegen und stürzen deshalb auch 
nicht ab."
RTL-Nachtjournal über Linux, 29.10.2004




Re: NoX idea

2005-03-05 Thread Sven Mueller
Henning Makholm wrote on 05/03/2005 12:17:
Scripsit Benjamin Mesing <[EMAIL PROTECTED]>
Just wondering if Debian should switch to LSB recommendation

LSB recommends:
0   halt
1   single user mode
2   multiuser with no network services exported 
3   normal/full multiuser   
4   reserved for local use, default is normal/full multiuser
5   multiuser with xdm or equivalent
6   reboot
That seems awfully restrictive, only giving the local admin a single
runlevel to customize to his own needs.
Well, for one, SysV-Init isn't restricted to the 7 runlevels listed 
above. And also, I can't imagine which other runlevels a local admin 
might want. And finally: There is nothing which could keep the local 
admin from modifying the runlevels. His system would just not comply 
with LSB anymore.
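
To make that last point concrete, a local admin on a sysvinit system can
claim runlevel 4 (the one LSB reserves for local use) for a custom service
set; a minimal sketch, with the service name purely as an example:

  # have apache2 run only in runlevel 4, stopping in the usual
  # halt/reboot levels (old update-rc.d syntax)
  update-rc.d apache2 start 20 4 . stop 80 0 1 6 .
  # and boot into runlevel 4 by default via /etc/inittab:
  #   id:4:initdefault: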

cu,
sven


Re: For those who care about lesbians

2006-01-17 Thread Sven Mueller
Andrew Suffield wrote on 15/01/2006 05:20:

[I know the below quote has been directly linked to the 2005/08 incident
of which I know no details - not being a DD yet myself - but I assume
you would hold the same opinion with respect to your recent d-d-a post]

> I fail to see how expressing a simple opinion like that, which is not
> even an uncommon one, *on a private mailing list*, could possibly be
> 'detrimental to the project'. That is pure slander.

Well, your post to d-d-a is potentially somewhat detrimental to the
project. Had you simply made a post like the following, probably nobody
would have cared. I wouldn't even have cared if you had made your d-d-a
post to d-d instead. Now here is what would probably have been OK:

- cut here -
Subject: To all who care about the quality of d-d-a

This list is about the Debian project and important news to and from
its developers. Please refrain from posting off-topic stuff on this
list. Needing to make it a moderated list would be a shame.
- cut here -

Irony and sarcasm on big public mailinglists is always a problem,
especially if there are some corporate staff people also reading the
list and - at least partially - judging the whole project by the
contents of that list.

Apart from your post's intention (which I wholeheartedly agree with), I
can't agree with the form you chose. However, even though I agree with
your intention (of keeping d-d-a as close to its topic as possible), I
don't really see anything too bad about Raphael Hertzog's post. Sure, he
talks about Ubuntu a lot, but his whole point is how Debian and Ubuntu
could cooperate more closely, giving Debian Developers information about
where they can find things on Ubuntu's "side". Hell, his post asking both
sides for cooperation is as on-topic on d-d-a as it would be on whatever
the Ubuntu equivalent is, IMHO.

regards,
Sven




Re: Upcoming removal of orphaned packages

2005-06-16 Thread Sven Mueller
Martin Michlmayr wrote on 16/06/2005 19:18:
> findimagedupes -- Finds visually similar or duplicate images [#218699]
>   * Orphaned 590 days ago
>   * Package orphaned > 360 days ago.

Though I probably can't adopt it (due to lack of time), it would be a
pity to lose this since there is no comparable commandline tool
available and it works quite well.
If all else fails, I might re-think adopting it.

> rio500 -- command-line utilities for the Diamond Rio500 digital audio player 
> [#225259]
>   * Orphaned 534 days ago
>   * Package orphaned > 360 days ago.

I don't know anything about the state of this package, but I know that the
rio500 is still used by a lot of people.

cu,
sven





Re: Upcoming removal of orphaned packages

2005-06-16 Thread Sven Mueller
Laszlo Boszormenyi wrote on 16/06/2005 23:13:
> On Thu, 2005-06-16 at 22:13 +0200, Andreas Tille wrote:
> 
>>Perhaps this might be true for the initial Perl implementation, but:
>>
>>"[2001/03/03 10:05] Markus Schoder has contributed finddupes.cpp, GPL'ed 
>>source code for a C++ based version of my horribly slow compare routine. In 
>>his testing on a directory of 35,000 images, it was about 300 times faster 
>>than findimagedupes' perl implementation."
> 
>  Yup, but that's even more old, more than four years old. Will download
> it and check if its still compilable even.

It's only compilable in its current state with g++-2.95 (regarding
compilers in Debian stable). There is a single error when compiling with
g++-3.4 which I am unable to fix (as I don't know the STL at all).

Apart from that, it works quite fine.

>>>and, I would say, both make it obsolete.
>>
>>If both would have a command line interface.
> 
>  I do not think they have.

Exactly.

>>  I do not know both but
>>the page you quoted mentioned that findimagedupes is the only command line
>>tool.
> 
>  Yes, and this is sad. What I need is a command line tool as well. I can
> not have any GUI where I would like to use it.

Same for me.

>>I would really love a command line alternative.  If you tell me any
>>I will be quiet immediately.
> 
>  I do not know any. But if any of you find an alternative, then please
> tell me as well.

/aol

>>  But I would love to have a test first.
>>Please give me two weeks.
> 
>  Thanks, the time is on your side as I also would like to have a command
> line based tool.

Yes, plaaasse.

cu,
sven





Re: Upcoming removal of orphaned packages

2005-06-17 Thread Sven Mueller
Rich Walker wrote on 16/06/2005 23:23:
> Sven Mueller <[EMAIL PROTECTED]> writes:
> 
>>Martin Michlmayr wrote on 16/06/2005 19:18:
>>
>>>findimagedupes -- Finds visually similar or duplicate images [#218699]
>>>  * Orphaned 590 days ago
>>>  * Package orphaned > 360 days ago.
>>
>>Though I probably can't adopt it (due to lack of time), it would be a
>>pity to lose this since there is no comparable commandline tool
> 
> fdupes?
> 
> Doesn't do partial matching, but is otherwise excellent.

That's (almost?) the point:
findimagedupes (or the c++-implementation finddupes.cpp) is intended for
the location of duplicate images, with a given fuzziness (not too far
apart in size, very similar content).

There are a lot of tools which find exact duplicates of files, but tools
that match similar images are really rare.
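
A quick illustration of the difference (the threshold option is from
memory, so check findimagedupes(1) before relying on it):

  # exact duplicates only: byte-identical files
  fdupes -r ~/photos
  # near-duplicates as well: resized or recompressed copies of the same
  # picture get grouped together
  findimagedupes -t 90% ~/photos/*.jpg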

cu,
sven





Re: Linux / Debian / Ubuntu

2005-07-05 Thread Sven Mueller
Roger Lynn wrote on 03/06/2005 00:29:
> On Tue, 31 May 2005 21:37:28 -0700, Stephen Birch <[EMAIL PROTECTED]> wrote:
> 
>>Darren Salt([EMAIL PROTECTED])@2005-05-31 21:49:
>>
>>>For those who've missed the first three broadcasts today, there's one more at
>>>01:05 GMT; also see <http://news.bbc.co.uk/2/hi/technology/1478157.stm>.
>>
>>Why on earth does the BBC force its listeners to all hit its servers
>>at the same time.  Doesn't make sense at all, why not ogg the program
>>up and put it on its servers so the audience can listen when they
>>want.
> 
> 
> Huh? You can listen to the programme any time you like. (Admittedly you're
> restricted to RealPlayer or Windows Media Player, but at least there are
> cross-platform players available for RealPlayer.)

By now, there is also a "download as MP3" link available. And it seems
to work fine.

cu,
sven




Re: Debian Installer etch beta 3 released

2006-08-13 Thread Sven Mueller
Frans Pop wrote:
> The Debian Installer team is proud to announce the third beta release
> of the installer for Debian GNU/Linux Etch.

For what it's worth, I created a .torrent file for the first i386 DVD of
the set. If anyone is interested, I'm currently seeding with two
machines at a max total of 120kB/s but might increase or decrease that
as needed. I intend to keep the seeds up for at least 1 week or until
there are at least 3 other seeds.

My reason for this: I expect that I'm not the only one who doesn't like
jigdo much and hope that torrents would give the installer betas a wider
audience (even if not much wider).

The d-i beta 3 i386 torrent is available at:

http://mail.incase.de/debian-incase/debian-testing-i386-binary-1.iso.torrent
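
In case anyone wants to build a torrent for one of the other images, it
is roughly this (mktorrent is just one of several tools that can do it,
and the announce URL below is a placeholder, not a real tracker):

  mktorrent -a http://tracker.example.org:6969/announce \
            -o debian-testing-i386-binary-1.iso.torrent \
            debian-testing-i386-binary-1.iso
  # then seed the .iso with any BitTorrent client pointed at the .torrent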

cu,
sven





Re: Debian Installer etch beta 3 released

2006-08-13 Thread Sven Mueller
Joey Hess wrote:
> Sven Mueller wrote:
>> My reason for this: I expect that I'm not the only one who doesn't like
>> jigdo much and hope that torrents would give the installer betas a wider
>> audience (even if not much wider).
> 
> Official torrents are linked to from
> http://www.debian.org/devel/debian-installer/

True, but none are available for the DVD images.

cu,
sven






Bug#395816: ITP: fim -- Free Image Manipulator

2006-10-27 Thread Sven Mueller
Package: wnpp
Severity: wishlist
Owner: Sven Mueller <[EMAIL PROTECTED]>

* Package name: fim
  Version : 0.2.2
  Upstream Author : Kacper Bielecki <[EMAIL PROTECTED]>
* URL : http://www.nongnu.org/fim/index.html
* License : GPLv2 or higher
  Description : tool t

The Free Image Manipulator is a graphical tool to do various things to a
set of pictures. 

Features
- You can resize many images (you only set their maximum size and images are 
scaled automatically so that the aspect ratio is not changed)

- You can add text (you choose font, size, color of background and foreground, 
position, spacing, opacity of background and foreground)

- Even if the images had different sizes, after resizing, added text 
will look the same on every image (all chosen options are relative)

- You are able to save or load images in any of the formats jpeg, png, gif 
(every image in the set can be in a different format, it doesn't matter)

- You can paste several images onto all loaded images, preserving their 
opacity or even changing it!

-- System Information:
Debian Release: 3.1
  APT prefers stable
  APT policy: (990, 'stable'), (90, 'testing'), (50, 'unstable'), (40, 
'experimental')
Architecture: i386 (i686)
Shell:  /bin/sh linked to /bin/bash
Kernel: Linux 2.6.11.12-incase
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)





Re: Bug#364652: ITP: squid3 -- Internet Object Cache (WWW proxy cache) version 3

2006-04-25 Thread Sven Mueller
Luigi Gangitano wrote on 25/04/2006 01:32:
> So I'm packaging Squid-3.0 from new sources (using CDBS for the first  
> time, great!). The resulting packages will be named 'squid3, squid3- 
> common, squid3-client, squid3-cgi' and will conflict with the  
> existing squid packages. 

Why do you conflict? Would squid3 require such big changes to make it
installable side-by-side with the squid 2.5 packages? Wouldn't it make
sense to allow as many people as possible to test it with their
production squid (2.5) still available?
Note that I do realize that you would need to revert any such changes
once squid3 becomes stable (and a possible upgrade from 2.5). I'm just
wondering whether the changes were so big that this is infeasible.
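
To illustrate what "side-by-side" would mean in practice: the squid3 test
instance would mainly have to be pointed at its own port, cache directory
and logs, roughly like this in its squid.conf (port and paths are only
illustrative):

  http_port 3129
  cache_dir ufs /var/spool/squid3 1000 16 256
  access_log /var/log/squid3/access.log
  pid_filename /var/run/squid3.pid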

Regards,
Sven




Re: Bug#364652: ITP: squid3 -- Internet Object Cache (WWW proxy cache) version 3

2006-04-25 Thread Sven Mueller
Luigi Gangitano wrote on 25/04/2006 14:19:
> 
> Il giorno 25/apr/06, alle ore 13:57, Sven Mueller ha scritto:
>>>Luigi Gangitano wrote on 25/04/2006 01:32:
>>>
>>>>So I'm packaging Squid-3.0 from new sources (using CDBS for the first
>>>>time, great!). The resulting packages will be named 'squid3, squid3-
>>>>common, squid3-client, squid3-cgi' and will conflict with the
>>>>existing squid packages.
>>>
>>>Why do you conflict? Would squid3 require such big changes to make it
>>>installable side-by-side with the squid 2.5 packages? Wouldn't it make
>>>sense to allow as many people as possible to test it with their
>>>production squid (2.5) still available?
> 
> Whould you really use your production machine to test some  
> experimental software?

Some people don't have much of a choice there (budget constraints,
floor/rack space etc.). And there isn't much to be said against such a
practice if the "experimental" software doesn't conflict (in some way,
not necessarily in the Debian "Conflicts:" sense) with the production
software. It actually is quite common to do that, even though it is not
what the admins in question would like best.

>>>Note that I do realize that you would need to revert any such changes
>>>once squid3 becomes stable (and a possible upgrade from 2.5). I'm just
>>>wondering whether the changes were so big that this is infeasible.
> 
> Changes are not that big. But since squid 2.5 would still be  
> available and reverting to it is a simple 'uninstall squid3 and  
> reinstall squid' operation that doesn't impact configuration files  
> and the caches, I don't see why it's needed to keep them separated.

Well, I gave one reason: People might want to test it alongside a
production squid still running. I see a small number of problems with
that though. Especially if people choose to run squid3 (the version I
suggested which is installable alongside squid 2.5) as their production
proxy, they would need to change configuration to switch to the squid
package once squid3 is stable and gets renamed. However, I would still
consider it favorable to make squid3 installable with squid (2.5) still
running. The possible benefits of a larger number of installations are
bigger (IMHO) than the drawbacks.

cu,
sven




Re: loosing dependencies: Depends: on logrotate

2008-01-22 Thread Sven Mueller
Ivan Shmakov schrieb:
>   Since I've already started this thread, I'm going to ask for
>   opinions on the one more issue with the current (Etch, at least)
>   dependencies in Debian to bother me.
> 
>   Is `logrotate' really necessary for those 46 packages or so in
>   Etch to include it in their `Depends:'?
> 
>   Debian Policy reads:
> 
> --cut: www.debian.org/doc/debian-policy/ch-relationships.html--
> The Depends field should be used if the depended-on package is
> required for the depending package to provide a significant amount
> of functionality.
> 
> The Depends field should also be used if the postinst, prerm or
> postrm scripts require the package to be present in order to
> run.  Note, however, that the postrm cannot rely on any
> non-essential packages to be present during the purge phase.
> --cut: www.debian.org/doc/debian-policy/ch-relationships.html--
> 
>   My opinion is that since `logrotate' is not required neither for
>   the maintainer scripts in order to run, nor ``for the depending
>   package to provide a significant amount of functionality'', this
>   dependency should be either relaxed (to `Recommends:' or
>   `Suggests:') or discarded completely.

Exactly. If any of the old, rather inflexible syslog implementations
depended on logrotate, I would say that would be perfectly fine. But for
applications (even if they write their logs themselves like apache or
samba usually do), I would only expect a simple Recommends.
On my servers, I'm forced to have logrotate installed due to
applications like samba, even though I immediately disable logrotate
after installation and use my own rotation scripts (for those
applications not using syslog - syslog-ng in this case) instead.

I do see the argument that a maintainer should make as sure as possible
that a user doesn't run into problems unless he overrides the package's
defaults. But in this sense, a Recommends is a package default, and the
user should be expected to know what he is doing if he doesn't install
Recommends.

Given the example of Samba, logrotate isn't _needed_ to provide any
amount of functionality of the software packaged and more specifically
not needed to provide a _significant_ amount of functionality. Following
this argument, one could even suggest that listing logrotate as a
Depends is a policy violation (and as such release critical). The
exception made for maintainer scripts doesn't fit here either, since the
maintainer scripts don't use logrotate.

I didn't check, but I would guess that the same is true for many of the
other packages in Sid that depend on logrotate.

So IMHO, most of those packages really should only Suggest logrotate.
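
In debian/control terms the change being argued for is tiny; for an
affected package it would just be (other fields shortened):

  # before:
  Depends: ${shlibs:Depends}, logrotate
  # after:
  Depends: ${shlibs:Depends}
  Suggests: logrotate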

cu,
Sven





Re: How to cope with patches sanely

2008-02-05 Thread Sven Mueller

Simon Huggins wrote on 29/01/2008 02:51:

[wig&pen]

The meta data can easily be in these version control systems that
everyone on these threads seems to love so much.

If you want to keep more patches than you expose through wig&pen then
just don't publish them in the dsc.


That won't work well for one of the packages I (co-)maintain (and I 
assume for some other packages, too): I want a patch to be included in 
the Debian source, since it implements an often requested feature, but 
it can break other things. In other words: It conflicts with another 
patch (or maybe even two), but I want somebody who does apt-get source 
on the package to be able to do a (trivial) modification to the unpacked 
source and rebuild the package to have that feature (though knowing that 
this breaks something else).
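
For what it's worth, that workflow can already be expressed with a quilt
series file, since quilt skips lines starting with '#' (the patch names
below are made up):

  $ cat debian/patches/series
  fix-build.patch
  fix-other-thing.patch
  # optional-feature.patch   (conflicts with fix-other-thing.patch;
  #                           uncomment and rebuild to get the feature)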



I think all Debian really needs is tools to generate a wig&pen source
package and the appropriate tools to then deal with it e.g. dak and
sbuild updates.


But still, wig&pen looks nice to me. I could include the necessary patch 
files (and short docs on how to get alternative build working) in the 
debian.tar.gz instead of the .patches.tar.gz IIUIC and reach the same 
goal I mentioned above. If wig&pen supported a series file (quilt) or 
00list file (dpatch) in the .patches.tar.gz archive, I wouldn't need to 
work around anything.
I don't mind getting the patches applied during unpack of the source 
package as long as the tool that is able to generate the source package 
from that unpacked source has a way to find out that someone changed the 
source after all patches were applied and handling that in a sane way 
(e.g. creating and enlisting an addition patch in .patches.tar.gz).


On another note, I have a slight problem with the (other) proposal of 
using a (git/$DVCS) repository as the form of source package 
distribution. Mainly because I think this will usually add bloat. While 
developing/packaging, I often see intermediate commits which need to be 
heavily modified or even reverted to finally build the patch I wanted to 
get working. Though these commits are fine in the Repository, I wouldn't 
want to see all the intermediate steps in the source package.


cu,
Sven





Re: dpatch -> quilt

2008-02-07 Thread Sven Mueller
gregor herrmann schrieb:
> On Mon, 28 Jan 2008 09:09:28 +0100, Andreas Tille wrote:
> 
>> Is there any easy conversion from dpatch to quilt for a given package
>> that is using dpatch?
> 
> The following two links might give an idea:
> * manual conversion:
>   
> http://blog.orebokech.com/2007/08/converting-debian-packages-from-dpatch.html
> * script (but svn-centric):
>   http://svn.debian.org/wsvn/pkg-perl/scripts/dpatch2quilt?op=file&rev=0&sc=0 

The script should (IMHO) make sure QUILT_PATCHES is set correctly.
(Cc'ing dmn because of this)
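
For reference, the setting in question: Debian packages keep their
patches under debian/patches, while quilt's built-in default is a
directory simply called "patches", so a conversion script (or the
maintainer) needs something like

  export QUILT_PATCHES=debian/patches
  quilt push -a    # apply the whole series

before quilt will find anything.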

However, I wonder whether quilt has any way (through a known wrapper
perhaps) to support the thing I like (though I'm not too attached to
that feature) with dpatch:
Being able to keep only debian/* in the repository along with the
orig.tar.gz

As said, just wondering, I'm not too attached to that feature.

cu,
Sven





Re: Relation between Suggests and Enhances

2009-08-21 Thread Sven Mueller
Gunnar Wolf schrieb:
> James Westby dijo [Wed, Aug 19, 2009 at 01:44:49AM +0100]:
>> I see it as an almost bi-directional relationship, but one that allows
>> you to add it to the package that "cares" about it more.
> 
> I agree with your analysis. But, if this is the case, the treatment
> should be symmetrical as well: If a user has his apt{-get,itude}
> configured to auto-install Suggests:, it should also auto-install
> reverse-Enhances:, right?

I don't really think so.

For example there is no doubt that any plugin enhances
icedove/thunderbird, while icedove/thunderbird itself might only suggest
a small subset of plugins (though a larger one than what is in
Recommends). Something along the lines of: We recommend anything useful
to 95% of our users, suggest what at least 50% would like, but the
"enhances" can be given in any package that might enhance the referred
program in any way.
Coming back to icedove, an example would be the calendar plugin (which
is not in Debian as far as I know). Sure it enhances icedove, but I
doubt that a large portion of icedove users would install it, since it
doesn't have much to do with icedove's primary use. So I wouldn't even
add a Suggests for it to the icedove package. Setting the Enhances field
on the plugin package itself would be OK, but IMHO that should not cause
it to be installed even if Suggests: are installed automatically.
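
In control-file terms, the asymmetry would look like this (the plugin
package name is hypothetical, since the calendar plugin is not actually
in Debian):

  Package: icedove-calendar
  Enhances: icedove
  # ...while icedove's own control file mentions the plugin in neither
  # Recommends nor Suggests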

Regards,
Sven





Re: dash pulled on stable when APT::Default-Release is used

2009-08-25 Thread Sven Mueller
Philipp Kern schrieb:
> On 2009-07-29, Michael S. Gilbert  wrote:
>> it is a bug in the sense that stable's behavior is being unduly
>> influenced by unstable's "essential packages" list.  i would suggest
>> submitting a report to the bts so the problem can be tracked and
>> eliminated in future releases.
> 
> That's somewhat by definition, sorry.  If you have unstable packages
> activated they may be relying on essential packages from unstable to
> work.  So they have to be installed.  No bug there.

I know I'm replying a bit late here, but I think it is a (perhaps only
wishlist) bug. In my opinion, behaviour should be:
At any time, use the essentials list relevant to the packages you install.
In other words, apt should keep track of the newest release it touched
while installing.
For example, on one of my systems, where I often backport stuff for
internal use, I have both testing and stable listed in my sources.list,
but I have never installed any package from either of these "releases".
So I see no reason why apt should pull in new "essential" packages from
the testing or unstable list.
On the other hand, once I install a package from squeeze, I would
expect apt to also install all new essential packages listed for squeeze.
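
For context, the kind of setup described is roughly this (mirror URL and
release names are only illustrative):

  # /etc/apt/apt.conf.d/99default-release
  APT::Default-Release "stable";

  # /etc/apt/sources.list: stable as the base, newer suites listed only
  # for occasional backporting
  deb http://ftp.debian.org/debian stable main
  deb http://ftp.debian.org/debian testing main
  deb http://ftp.debian.org/debian unstable main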

Regards,
Sven





Re: Policy about debian/compat

2007-07-09 Thread Sven Mueller
Steinar H. Gunderson wrote on 09/07/2007 16:57:
> On Mon, Jul 09, 2007 at 04:48:05PM +0200, Bruno Costacurta wrote:
>> Hello,
>> what is the policy about file debian/compat ?
> 
> It's a part of the interface to debhelper, and as such is documented in the
> debhelper documentation.

True.

>> I was not able to find any policy about it in 'Debian Policy Manual' on 
>> debian.org. It seems it should contain package version mentionned in 
>> field 'Build-depends:' in debian/control for building from source package ?
> 
> Debian policy does not describe or discuss debhelper; you can build packages
> just fine without it. (It isn't very common, though, but some prefer it.)

Also true, but as I frequently need to backport packages to (old)stable,
 I would really like people only to increase the debhelper compat level
if they really _use_ new features in the new level.

While backporting packages to sarge during the etch release cycle, I
frequently found (in fact, I can't remember a package where it wasn't
the case) that all I needed to do to make a package compile cleanly on
sarge was to decrease the compat level back to 4. So specifying 5 there was utterly
useless in the first place. Just because debhelper supports a new compat
level doesn't necessarily mean you should depend on that. Unless, of
course, you actually rely on some effect of that new compat level during
your build.
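
Concretely, if nothing in the build relies on level-5 behaviour, the
backport-friendly choice is simply:

  $ cat debian/compat
  4
  # with a matching build dependency in debian/control:
  #   Build-Depends: debhelper (>= 4)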

regards,
Sven





Re: proposed release goal: DEBIAN/md5sums for all packages

2007-08-17 Thread Sven Mueller
Kurt Roeckx schrieb:
> On Fri, Aug 17, 2007 at 11:25:38AM -0700, Russ Allbery wrote:
 Some packages (aspell and ispell packages in particular) ship files
 that they then modify in maintainer scripts and intentionally exclude
 them from the md5sums file for that reason. 
> 
> The hash file, which is architecture dependend, is created on install.
> This is the only file in the package that is architecture dependend.

If it is created on install, why is it in the package's file list in the
first place? Other packages also generate (supposedly
architecture-dependent) files during postinst, without shipping a
placeholder in the .deb - so what is the reason why [ia]spell does that?

Uhm, also: I couldn't find any such example in the [ia]spell packages
themselves, nor in wamerican, myspell-de-de or ispell-de-de, so perhaps
(some of) those packages used to do that sort of stuff, but refrain from
doing so now?

Regards,
Sven





Re: proposed release goal: DEBIAN/md5sums for all packages

2007-08-17 Thread Sven Mueller
Russ Allbery schrieb:
> Sven Mueller <[EMAIL PROTECTED]> writes:
> 
>> If it is created on install, why is it in the packages filelist in the
>> first place? Other packages also generate (supposedly architecture
>> dependend) files during postinst, without shipping a placeholder in the
>> .deb - so what is the reason why [ia]spell does that?
>>  
>> Uhm, also: I couldn't find any such example in the [ia]spell packages
>> themselves nor in wamerican, myspell-de-de, ispell-de-de so perhaps
>> (some of) those packages used to do that sort of stuff, but refrain from
>> doing so now?
> 
> All I know about this topic is at:
> 
> http://lists.debian.org/debian-mentors/2006/10/msg00075.html
> http://bugs.debian.org/324111
> http://bugs.debian.org/346410
> http://bugs.debian.org/374949
> http://bugs.debian.org/401070
> 
> I'm happy to remove the exception again if this has changed.
> 

In all those mails, the only justification for shipping these files in
the package - though they are changed (rebuilt?) during postinst - is
the following sentence from Brian Nelson (pyro):

> Also, not
> including these files in the .deb packages significantly complicates the
> packaging.  I really don't want to change to manangement of the files in
> maintainer scripts without a very good reason to do so.

He doesn't give any information _why_ this complicates packaging that
much, while his decision imposes additional work and complexity on
others (be it the exception in lintian and probably linda or the
difference between "dpkg -L" and the contents of the md5sums file, which
makes integrity checking a bit harder).
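
To make the integrity-checking point concrete: the md5sums file lists
paths relative to /, so the usual check looks like the first command
below, and anything that shows up in "dpkg -L" but not in md5sums simply
never gets verified (aspell-en is just the example discussed here):

  cd / && md5sum -c /var/lib/dpkg/info/aspell-en.md5sums
  dpkg -L aspell-en    # full file list, including the postinst-built
                       # hash files that md5sums does not cover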

IMHO, packages (.deb) should only include files which are either listed
in conffiles or in md5sums.

The hash files in aspell/ispell/wordlist packages (for example*:
aspell-en, idutch) are neither conffiles nor in md5sums. They are said
to be arch-dependent and, if I understand the aspell-en debian/rules
correctly, they are shipped as empty files. I don't see why they
couldn't just be created empty by the postinst before building the hash
tables. I especially don't see how that complicates packaging.

cu,
Sven

* Thanks to Kurt Roeckx for the examples

PS: I just verified that the files in question are indeed zero-length
files at least in aspell-en





Re: About new source formats for packages without patches

2010-03-29 Thread Sven Mueller
Wouter Verhelst schrieb:
> I might want to have a file with "1.0 (non-native)" to have dpkg error
> out when I accidentally don't have a .orig.tar.gz file somewhere, for
> instance. As long as the absense of that file does not make things
> suddenly break, I don't think there's anything wrong with that.

I wholeheartedly agree here.

> Of course, this all conveniently ignores the fact that the above
> explicit non-native option isn't actually supported, which is
> unfortunate...

Didn't check for this: Is a bug open to request such a feature to
explicitly say "1.0 (native)" or "1.0 (non-native)"? I would also find
this option really useful.

> [...]
>> I did say until dpkg is fixed. I think the fix in dpkg needs to be that
>> the lack of debian/source/format uniquely identifies source format 1.0
> 
> Unfortunately, "source format 1.0" actually encompasses *two* formats:
> native packages and non-native packages. I'm sure you've also
> incorrectly gotten native source packages on occasion when what you
> wanted was a non-native package.

Oh yes, unfortunately, I even once accidentally uploaded such a package
while doing a sponsored upload (did a rebuild but somehow managed to not
have the orig.tar.gz at the right place). Oh the shame ;-)

Regards,
Sven





Re: About new source formats for packages without patches

2010-03-30 Thread Sven Mueller
Ben Finney schrieb:
> Raphael Hertzog  writes:

>> There's a default value currently and it's "1.0", and I want to remove
>> the existence of a default value in the long term because it does not
>> make sense to have a default value corresponding to a source format
>> that is no longer recommended.
> 
> That's the part I don't see a reason for. Any future formats will be
> unambiguously distinguishable. Those format-undeclared source packages
> can't be eradicated from the earth entirely. So why not simply declare
> that they are source format 1.0, as is without changes, and will always
> be recognised as such even *after* that format is utterly deprecated?

What you describe is actually really a "default to 1.0" behaviour.

And though I dislike the way lintian warned about a missing
debian/source/format file, I understand quite well why the dpkg
maintainer would like to remove that default: 1.0 is ambiguous in many
ways (for example, it silently changes to the native package format if
the orig.tar.gz is missing).

I'm not going to convert my few packages to a 3.0 format any time soon
if it doesn't prove to be beneficial for me, but I will add an explicit
"1.0" format specification to those packages I upload in the meantime.

My main reason for not yet switching is that hg-buildpackage and
svn-buildpackage don't completely support the 3.0 format yet as far as I
can tell.

Anyhow, I would really welcome if dpkg-source would support some
additional values in debian/source/format:

1.0 (native)
1.0 (non-native)
default (native)
default (non-native)

Which would allow me to explicitly follow the current recommendation of
the dpkg maintainers (last two) or explicitly state that my package is
format 1.0 of either flavour.

Regards,
Sven





Re: About new source formats for packages without patches

2010-03-30 Thread Sven Mueller
Julien BLACHE schrieb:
> Raphael Hertzog  wrote:
> 
>> I expect this to be of particular interest when we'll have VCS-powered
>> source formats (say "3.0 (git2quilt)") that generate source packages that
>> are plain "3.0 (quilt)" based on the git repository information.
> 
> This is becoming crazy, really.

I agree. dpkg-dev should not be depending on any VCS and it should not
promote any particular VCS either. I know that git is the new black (oh,
wait, that was something else), but I personally don't like it. And I
especially dislike how so many git lovers are trying to push it onto
others, while there are perfectly good reasons (not applicable to all
teams or projects of course) not to use git but some other VCS (be it
distributed or not).

Regards,
Sven





Re: why are there /bin and /usr/bin...

2010-08-17 Thread Sven Mueller
Am 10.08.2010 17:54, schrieb Russ Allbery:
> Simon McVittie  writes:
> 
>> It might be worth having a Lintian check for the situation you describe,
>> since missing libraries will generally prevent the executable from
>> starting up at all, whereas missing bits of /usr/share or /var might not
>> be so important.
> 
> Unfortunately, there isn't any way to check this in Lintian since Lintian
> has no idea whether a given library will be found in /lib or in /usr/lib.
> It's one of those things that needs to be done in a cross-archive check
> that has access to data about multiple packages at once.

I might be wrong here, but if lintian finds the library a package
depends on installed on the system where lintian is run, it could at
least catch some of these errors. Alternatively, if apt-file is
installed, or a Contents file is available, it could use the
information from there.
Still, this would require implementation by someone knowing lintian
well. Also, I admit that a cross-archive check would be more efficient.
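
The apt-file variant would boil down to something like this (the library
name is just an example):

  apt-file update
  apt-file search libncurses.so.5
  # the paths in the output show whether the packages shipping the
  # library put it under /lib or /usr/lib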

regards,
Sven





Re: possible mass bug filing: spamassassin 3

2004-10-06 Thread Sven Mueller
Steinar H. Gunderson [u] wrote on 06/10/2004 18:29:
> On Wed, Oct 06, 2004 at 04:33:37PM +0200, martin f krafft wrote:
>> I say: remove all the others from Sarge unless or until they comply
>> with SA 3.
>
> OK, so you want to remove exim4 from sarge, thus breaking installation
> altogether?

??? Since when does exim4 use SA by default? AFAIK, you have to
specifically configure it to use it. Right? If so, there should be no
reason to remove it or for it to conflict with SA3.

> Eh. spamassassin has had a long-standing, well-known API, and suddenly
> changes it. It is _SA_ which broke this, not the other applications.

If a program is a front-end for SA and only works if SA is installed,
then it should keep up with any changes SA is doing to its API. SA3 had
been in beta testing for a long time AFAIK, so it should have been quite
easy for those maintainers to fix their programs. And SA3 API isn't _that_
much different from SA2.6 API for the most used interfaces. I had to
change as few as 3 lines in SpamPD 2.12 to make it work with SA3,
though upstream did a cleaner job by supporting SA <2.7, SA <3.0 and SA
3.0 in one script as of SpamPD 2.2(0).
cu,
sven



Re: possible mass bug filing: spamassassin 3

2004-10-06 Thread Sven Mueller
Steinar H. Gunderson [u] wrote on 06/10/2004 22:37:
> On Wed, Oct 06, 2004 at 09:10:39PM +0200, Sven Mueller wrote:
>> ??? Since when does exim4 use SA by default? AFAIK, you have to
>> specifically configure it to use it. Right? If so, there should be no
>> reason to remove it or for it to conflict with SA3.
>
> Well, if I dist-upgrade my mail server so a new spamassassin comes in, my
> mail setup breaks. Now, that is clearly broken, and an RC bug on some
> package.

If it were happening in stable, I am 100% with you.
If it is with regard to testing, well, I'm between 30% and 70% with
you. More like 60% currently, with the release pending any month now ;-)
With regard to unstable, I'm more like 10% with you on this matter.
If it is possible for a maintainer to avoid such breakages, he should
100% do that. But with spamassassin, I don't really see how the
maintainer could do that. Many frontends work with the new spamassassin
with no change at all, and those would certainly like to benefit from
SA3 without needing to change their packages. Many other frontends
break. But how should the SA maintainer know which frontends break and
which don't? He would only want to Conflict with those that break,
leaving the others alone.

>> If a program is a front-end for SA and only works if SA is installed,
>> then it should keep up with any changes SA is doing to its API.
>
> Wrong. SA should not change its API in an incompatible fashion without
> also bumping its soname (ie. the package name in this case), like any
> other library.

Well, perl modules don't have an SO name. And actually, the "library"
part of SA isn't intended to be the most often used one. More
specifically, the spamassassin developers have always stated that though
they try to keep the API, it is not guaranteed to be compatible from one
revision (sic!) to another. The only parts that were guaranteed to
maintain their interfaces for at least one major version (obsoleting but
still supporting in 3.x what was supported in 2.x) were the "executables"
spamassassin, sa-learn, spamc, spamd.

>> And SA3 API isn't _that_ much different from SA2.6 API for the most used
>> interfaces.
>
> In that case, it should provide a backwards-compatible interface.

Well, they could probably do that for many parts (the most often used
ones, actually), but I doubt that it is possible for all parts.
cu,
sven



RFD: Draft for a volatile.d.o policy (was: Re: Updating scanners and filters in Debian stable (3.1) )

2004-10-09 Thread Sven Mueller
Thomas Bushnell BSG [u] wrote on 08/10/2004 18:18:
> Will Lowe <[EMAIL PROTECTED]> writes:
>> My argument is just that even if you backport the important features
>> of a new release into an old codebase, it's hard to make any valuable
>> claims about the resulting product if the "backport" changes more than
>> a few lines of code.
>
> This is true if you don't know what you just did.  If you know what
> you did, you may well be able to make a claim like "no new command
> line features are added".
Doing a backport of some upstream change is usually a pretty difficult 
task (except for smaller security fixes). It's pretty easy to claim "no 
new command line feature added", but it is pretty difficult to claim "no 
new bugs added" or "all necessary security fixes added".

I agree that a pure security fix (like s.d.o security updates) should 
not introduce new code functionality, but only fix the security bug. At 
times, this might be a hard task, but usually it is an easy to medium 
level (though usually also very time consuming) task.

However, what we are talking about here are packages that become less 
and less useful over time, these are namely:
1) anti-spam software
2) security scanners (snort and the like)
3) virus scanners (clamav etc.)
And since they are pretty closely bound to the above:
4) mail-scanners (amavisd etc.)
These are packages that become less useful over time, not because 
upstream releases new versions with new features, but because the old 
features aren't enough to fulfill the original tasks anymore.

Therefore, and because you asked for a policy for v.d.o before (more or 
less directly), here is a new try for such a policy (sorry for sending 
the original one directly to you, Thomas, instead of the list):

==
Draft for a volatile.debian.org packaging and update policy.
Target:
volatile.debian.org (or short: v.d.o) is intended to be a repository for 
packages which degrade over time with respect to their usefulness. These 
include, but might not be limited to:
1) Virus scanners (clamav etc.)
2) Intrusion detection systems and security scanners (snort, nessus,
   etc.)
3) Anti-Spam software (spamassassin etc.)
4) Tools which are so closely related to a software in 1-3 that they
   might need to be updated when a new version of the related software
   is introduced to v.d.o

Policy for v.d.o
- Packages in v.d.o are generally limited to the kind of software listed
  above.
- v.d.o is divided into two parts:
  - volatile.debian.org
  - volatile-testing.debian.org (short v-t.d.o).
- A new (version of a) package always enters v-t.d.o first, direct
  uploads to v.d.o are strictly forbidden.
- Before a package enters v.d.o, it must prove its stability by living
  in v-t.d.o for 2 weeks first without a qualified bug report against
  it. A qualified bug report is hereby defined as a report for a new bug
  in the version uploaded to v-t.d.o which doesn't exist in the version
  in v.d.o. If there is no corresponding version in v.d.o, it must live
  in v-t.d.o for 6 weeks without any critical bug being reported against
  it. No package may enter v.d.o with RC bugs open.
- A new version uploaded to v.d.o should restrict itself to new code
  which is needed to keep fulfilling the original task of the package
  when it first entered v.d.o.
- A new upstream version which doesn't adhere to the previous rule may
  enter v.d.o under a new name. Alternatively, it may enter v.d.o if
  it is certain that it doesn't break compatibility with any package
  using it either in the main stable release of Debian or v.d.o
- If a new upstream version breaks compatibility with existing packages,
  it may only enter v.d.o if it lived in v-t.d.o for at least 6 weeks
  and all packages where compatibility has been broken are
  simultaneously entering v.d.o (or entered it before).
- If a new version of a Software (A) introduced to v.d.o requires new
  versions of software (X) which are not yet part of v.d.o, these new
  versions might only be introduced to v.d.o if that new version of X
  does not break compatibility with the current Debian stable version
  and is introduced to v.d.o simultaneously with the new version of A.
- Only DFSG-free software may enter v-t.d.o or v.d.o!
- Software in section 4 of the "targets" paragraph may only be updated
  in v.d.o if this is needed to either keep support working for a newer
  version of a software in v.d.o or if this introduces support for a
  new software in v.d.o (for example if bogofilter support is added
  to amavisd). Other new features are no reason to update such a
  package.
==
As said before, this is a very basic and preliminary version of a 
v.d.o policy, but it should make the most important things quite clear.

I know this policy is not really to the taste of Thomas Bushnell, 
especially because new features _might_ be introduced. But with the 
pretty

Re: RFD: Draft for a volatile.d.o policy

2004-10-09 Thread Sven Mueller
Thomas Bushnell BSG [u] wrote on 09/10/2004 19:12:
Sven Mueller <[EMAIL PROTECTED]> writes:
Doing a backport of some upstream change is usually a pretty difficult
task (except for smaller security fixes). It's pretty easy to claim
"no new command line feature added", but it is pretty difficult to
claim "no new bugs added" or "all necessary security fixes added".
It's in fact so difficult, that this is exactly why we don't just
allow arbitrary changes to stable things, and relabeling them
"volatile" and "optional" doesn't actually change the matter.
Right. We don't want arbitrary changes - as you call it - allowed into 
the main stable release. What is proposed here however is a way to allow 
users to opt-in on changes like that.

We might need a method for allowing really important upgrades in to
stable, which preserve stability, and we have that now for regular
stable proposed updates, for security, and we could add it for virus
scanners and the like.  But in all those cases, we need the same
concern for stability.
Well, that would be nice to have. However, these updates would most 
likely still be far too infrequent. And there are quite a few people 
around, which are ready to accept a (very) low possibility of added 
instability for having optimal performance with this kind of software.
What most people in this thread seem to see in volatile.d.o is an opt-in 
to get certain packages ported to stable in a cutting-edge like fashion. 
Not really bleeding-edge, but newest upstream release which is stable 
enough for production.
I certainly wouldn't like to see - say - SpamAssassin 4.0 in volatile on 
the very same day it was released upstream. But I would like to see it 
in there as soon as it has proven to be stable enough for püroduction 
and it has also proven to not interfere with the stability of the rest 
of the system.

Saying "it's really hard" is not a good excuse!  People are doing it
for those other packages all the time.
You are comparing apples and oranges here. Backporting a security fix 
for some software usually only affects a few lines of code. Backporting 
an updated scanning engine for a virus/spam/network scanner is something 
completely different. We might want to set some sort of limit as to 
which kind/size of change has to be backported and which kind/size of 
change can warrant an update to the current stable upstream release.

These are packages that become less useful over time, not because
upstream releases new versions with new features, but because the old
features aren't enough to fulfill the original tasks anymore.
Right, and I'm happy to see that done, provided that only the new
features are allowed which actually keep the particular utility in
place.
I'm sorry, but for the idea of volatile.d.o that most people in this thread 
have expressed (i.e. IIRC only you seem to object to that idea), this 
limitation is not intended.
However, I do see the value of that kind of limitation, and I would like to 
see the kind of policy you seem to have in mind used for updates 
to the regular stable release.

I know this policy is not really to the taste of Thomas Bushnell,
especially because new features _might_ be introduced. 
Heh, but compromise is always possible, and I'm interested in hearing
what other people say about this proposed policy before I comment
further on its details.
You're welcome.
cu,
sven



Re: possible mass bug filing: spamassassin 3

2004-10-09 Thread Sven Mueller
Tollef Fog Heen [u] wrote on 07/10/2004 09:52:
* Duncan Findlay 
| Umm... I'd like to see that

7122 root  15   0  660m 332m 4692 D  0.0 43.8   8:18.64 spamd
7123 nobody15   0  287m 257m 4692 D  0.0 34.0   0:17.01 spamd
| Furthermore, you should use the -m option to limit the number of
| children to something sane, like 5 or so.
This is per child, as I wrote.
Do you have any additional rulesets like those SARE rules installed? If 
so, how many/what total size?

SA3 really explodes in size when more rules are given.
Without additional rulesets, i.e. just with the rules SA 3.0 ships with, 
I have about 20M of memory usage by spampd (spamassassin based SMTP/LMTP 
proxy).
With an additional 460k of rules (some SARE, some others), this shoots 
up to 40M.
With 5M of additional rules, it consumes like 170M of memory.

So I would guess that you use additional rulesets totalling about 10-15M 
in size. Am I right?

If so, try removing the biggest rulesets installed, probably something 
like SARE BigEvil or the like.

cu,
sven



Re: possible mass bug filing: spamassassin 3

2004-10-09 Thread Sven Mueller
Tollef Fog Heen [u] wrote on 07/10/2004 10:00:
* Sven Mueller 

| Well, perl modules don't have an SO name.
: [EMAIL PROTECTED] /usr/lib > apt-cache show libvideo-capture-v4l-perl | grep ^Depends
Depends: perlapi-5.8.3, perl (>= 5.8.3-2), libc6 (>= 2.3.2.ds1-4)
Seems like perl provides an API that the module depends on, no?
perlapi seems to be some sort of a pseudo package. But anyway: What does 
a package version have to do with SO names?

| And actually, the "library" part of SA isn't intended to be the most
| often used one.
If it is provided, it must work. 
So if someone provides a huge program which consists of various 
specially crafted dynamic libraries (whose APIs are documented but not 
really intended for use outside of this specific program), these 
libraries may not change in a way which changes those APIs?
Sure seems strange to me.
The only official interfaces SpamAssassin ever provided (to the best of 
my knowledge) are:
1) calling spamassassin directly (as a commandline tool)
2) calling the spamc client (again, as a commandline tool)
3) accessing spamd over its defined interface (which is used by spamc)

> If it is changed in an incompatible fashion, it must bump soname.
> Or make SA into a library proper, with libmail-spamassassin-perl
> being the module part and spamassassin depending on that.
Well, in that case, libmail-spamassassin-perl would be the size of the 
current spamassassin package, and the new spamassassin package (which 
depends on the libmail-spamassassin-perl package) is about 2k in size, 
description and packaging overhead included. Sorry, that doesn't make 
much sense.

> You'd still have to bump soname, but only for the library part.
Go learn perl, then come back. Perl modules might have version numbers, 
but they certainly don't have SO names. BTW: Give a decent definition 
of what you refer to as soname, and I might apologize and say that you 
are right. But for now we either have different ideas of what a soname 
is or you don't have much knowledge about perl (heck, I don't know perl 
well, but I know enough to be certain that perl modules don't have 
anything I would call a soname).

This is _not_ hard to get right, and there is really no excuse not to get
it right.
The only way to get it right (in your sense of the word) would be to 
rename the Perl Mail::SpamAssassin module (along with its sub-modules) 
to Mail::SpamAssassin3. However, this would make some programs break 
which are otherwise able to cope with the v3 Mail::SpamAssassin quite well.

spampd for example has a total of 10 lines which differentiate between 
versions v being < 2.7, 2.7 <= v < 3.0 and v >= 3.0 _and_ do what's 
needed to work with any of the three possible categories of 
SpamAssassin versions. If SpamAssassin v3 were renamed to 
Mail::SpamAssassin3, the changes would be more like 120 lines.

And given the fact that the SA3 API has been published more than 7 months 
ago (more like 8: 2004-02-18 was the last date on which the API was 
changed in an incompatible way), each tool had more than enough time to 
adjust to this. Note: The outside API (i.e. the API to _use_ 
SpamAssassin as opposed to the inside API used to enhance SpamAssassin 
by plugins) only had pretty minor changes.

Regards,
Sven



Re: possible mass bug filing: spamassassin 3

2004-10-09 Thread Sven Mueller
Sven Mueller [u] wrote on 10/10/2004 04:46:
Go learn perl, then come back.
Sheesh, I shouldn't write mail after prolonged discussions with my boss...
I apologize for the rudeness of that comment. Even though it was meant 
somewhat jokingly, I realize it probably was too harsh. Sorry.

cu,
sven



Re: about volatile.d.o/n

2004-10-09 Thread Sven Mueller
Jesus Climent [u] wrote on 09/10/2004 02:28:
On Fri, Oct 08, 2004 at 03:51:29PM -0400, Daniel Burrows wrote:
 I generally have to resort to backports or unstable when installing Debian 
on recent hardware, because we don't update hardware drivers in stable.  
Would the kernel and X be candidates for volatile?
I dont see any reason why not, if they can be marked as NotAutomatic.
I certainly see reason not to include X in volatile. X pulls in xlibs, 
and xlibs is depended on by a _huge_ number of programs if I'm not 
mistaken. It is nearly impossible to tell whether an X upgrade might break 
things or not.

As for the kernel: If I were a volatile RM-equivalent, I might consider 
getting the kernel into v.d.o. Even though it is a huge beast, it 
usually either breaks completely on some specific system or it doesn't 
break anything. But I'm not sure about this.

regards,
Sven



Re: possible mass bug filing: spamassassin 3

2004-10-10 Thread Sven Mueller
Tollef Fog Heen [u] wrote on 10/10/2004 13:01:
* Sven Mueller 

From the front page of spamassassin.org:
: Flexible: SpamAssassin encapsulates its logic in a well-designed,
: abstract API so it can be integrated anywhere in the email
: stream. The Mail::SpamAssassin classes can be used on a wide variety
: of email systems including procmail, sendmail, Postfix, qmail, and
: many others.
You are right. This has changed since I last checked (two years ago or 
so, not sure when). I should have re-checked this before posting.

| > [splitting SA into libmail-spamassassin-perl and spamassassin]
| 
| Well, in that case, libmail-spamassassin-perl would be the size of the
| current spamassassin package, and the new spamassassin package (which
| depends on the libmail-spamassassin-perl package) is about 2k in size,
| description and packaging overhead included. Sorry, that doesn't make
| much sense.

: [EMAIL PROTECTED] ~ > for f in $(dpkg -L spamassassin | grep -v perl \
    | grep -v man3); do [ -f $f ] && echo $f; done | xargs du -shc | tail -1
1,1M    totalt
SA currently ships nearly 600k of rules.
I don't understand what you are trying to say. If you are trying to 
say that libmail-spamassassin-perl wouldn't include the rules, but 
spamassassin would, I would like to propose splitting into 3 packages: 
libmail-spamassassin-perl, libmail-spamassassin-perl-rules and 
spamassassin, so that programs relying on libmail-spamassassin-perl can 
also depend upon the rules package without depending on the spamassassin 
package. Also note that we actually already have 2 packages: 
spamassassin and spamc.
But what I meant to say is that it doesn't make much sense to split the 
spamassasin package into several packages: neither the perl modules nor 
spamassassin itself would be useful without the rules. So you would need 
to include the rules in the modules-package (or a third package, upon 
which the lib package would probably depend). After you did that, the 
spamassassin executable doesn't add much to that package (29k to be 
precise).
As a side note: SA3 ships with 452k of rules if you count 
/etc/spamassassin/local.cf and 448k if not:
mail2:/tmp# dpkg -L spamassassin | grep -E 'etc|usr/share/spamassassin' | grep .cf | xargs du -hc | tail -1
452K    total
mail2:/tmp# dpkg -L spamassassin | grep -E 'usr/share/spamassassin' | grep .cf | xargs du -hc | tail -1
448K    total

You didn't exclude all man pages and you didn't exclude start scripts 
and configuration ;-)

soname is here used a bit loosely meaning «ABI/API version»; this is
technically not correct (as you point out), but it's shorter than
writing «ABI/API version» all over the place.
OK.
(And, given that perl modules can be normal shared objects, they
certainly _can_ have sonames proper, but I agree that's not the norm.)
In a way, yes. But this is only true for binary modules.
They can try to import Mail::SpamAssassin3 first, if that fails, try
Mail::SpamAssassin.  A nice thing with this is you actually know what
API you use.
Yes.
| spampd for example has a total of 10 lines which differentiate between
| versions v being < 2.7, 2.7 <= v < 3.0 and v >= 3.0 _and_ do what's
| needed to work with either of the three possible categories of
| SpamAssassin versions. If SpamAssasin v3 would be renamed to
| Mail::SpamAssassin3, the changes would be more like 120 lines.
BEGIN {
  # Try the renamed module first; fall back to the old name if it's absent.
  eval {
    require Mail::SpamAssassin3;
    import Mail::SpamAssassin3 qw(foo bar baz);
  };                       # note: eval BLOCK needs the trailing semicolon
  if ($@) {
    require Mail::SpamAssassin;
    import Mail::SpamAssassin qw(foo bar baz);
  }
}
Doesn't look like 120 lines to me.
The problem is that SA doesn't work well with that sort of namespace 
mangling. At least, most programs which I looked at that use the SA 
modules use them in an object-oriented way (if you can call it that). So 
they have a multitude of lines referring to Mail::SpamAssassin.
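
To illustrate, a minimal sketch along the lines of the Mail::SpamAssassin 
3.0 synopsis ($raw_message standing in for the mail text, which is assumed 
to be available); the class name shows up at every call site, so a rename 
would have to be chased through all of them rather than through a single 
require line:

use Mail::SpamAssassin;

# typical OO usage: the class name appears at the construction site itself
my $sa     = Mail::SpamAssassin->new();
my $mail   = $sa->parse($raw_message);   # parse the raw mail text
my $status = $sa->check($mail);
print "spam\n" if $status->is_spam();
$status->finish();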

| [SA3 API published half a year ago] 
This is orthogonal to the discussion -- how much and when the API
changed doesn't mean it shouldn't be done right.
Well, yes.
This is Debian.  We don't break stuff arbitrarily. 
I.e. "We try not to break stuff arbitrarily." ;-)
The problem with SA3 is that renaming Mail::SpamAssassin to 
Mail::SpamAssassin3 makes it difficult for many programs to adjust, 
especially because it introduces a new module name which isn't used on 
any other platform, making it a Debian-only change to those programs. 
Not renaming it breaks some programs, which had months to adjust to the 
new API (upstream), with that adjustment being a pretty small change. 
Also, the adjustment needs to be made in upstream versions anyway.
I certainly don't want to see SA3 enter the testing/stable archive as 
"spamassassin"/Mail::SpamAssassin before each and every program which 
uses it can cope with the change or 

Re: Bug#275897: ITP: tiff2png -- TIFF to PNG converter with alpha channel (transparency) support

2004-10-10 Thread Sven Mueller
Steinar H. Gunderson [u] wrote on 10/10/2004 23:41:
On Sun, Oct 10, 2004 at 04:27:21PM -0500, John Goerzen wrote:
This package can convert a TIFF image directly to a PNG image without
the need of any intermediary format.  Unlike the netpbm package,
this program can preserve transparency information during the
conversion.

What's the gain of this over "convert file.tiff file.png"?
I had the same question in mind ;-)
One thing I can see: It's smaller and therefore probably less error 
prone. However, a note on the upstream homepage made me think even more:

>>> This can lead to the inadvertent loss of source files, at least on Unix
>>> systems (where the shell would expand ``tiff2png *.tiff'' into a list
>>> of actual filenames, the second of which would be treated by tiff2png
>>> as the output filename).
[Note that the issue of overwriting files is _not_ the problem with 
upstream 0.91]

What makes me think is the fact that if given numerous filenames on the 
command line, the program takes the second one as the output filename 
(and probably ignores the rest). IMHO (IANADD), this should be fixed 
before this program enters the archive. There are two options for how to 
fix it _if_ it is accepted into the archive:
1) Bail out if more than two filenames are given
2) (which I would prefer) Take all the filenames, loop over them and
   use inputfilename (minus /\.tif+//) plus .png as output filenames
   (see the sketch below)
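
A rough sketch of option 2 as a shell wrapper, assuming the current 
"tiff2png input output" calling convention described above:

for f in "$@"; do
    tiff2png "$f" "${f%.tif*}.png"   # e.g. photo.tiff -> photo.png
done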

However, if I have the software on a computer to create tiffs with 
transparency, I see no reason to use tiff2png in favor of 
ImageMagick/convert, especially as the latter has been around for years 
and is pretty well tested.

regards,
sven



Re: RFD: Draft for a volatile.d.o policy

2004-10-11 Thread Sven Mueller
Frank Küster [u] wrote on 10/10/2004 19:17:
>> Sven Mueller <[EMAIL PROTECTED]> wrote:
>>>> ==
>>>>
>>>> Draft for a volatile.debian.org packaging and update policy.
>> [...]
>>>> Policy for v.d.o
>> [...]
>>>> - A new version uploaded to v.d.o should restrict itself to new code
>>>>   which is needed to keep fulfilling the original task of the package
>>>>   when it first entered v.d.o.
>>
>> Why not: "of the package when the last stable distribution was
>> released"?
Quite simple:
Say a new open source network security scanner enters the world and it
works well when compiled against Debian stable; we might want to add it
to v.d.o even though it wasn't available when the last stable
distribution was released.
Or a new version of clamav is released which sadly breaks
compatibility, so we rename it to clamav2 and it can still be released
through v.d.o, similarly to exim4 entering Debian alongside exim a while
ago.
>> Besides that, it sounds quite well.
Thanks.
BTW: I am prepared to help volatile.d.o to spring to life as much as
possible. This includes helping to keep orphaned packages up-to-date in
v.d.o if the need arises for some reason. This also might include
working on a sort of security team for v.d.o (I think both jobs should
actually be combined in v.d.o). IANADD though, but if needed, I will
apply to become one.
cu,
sven

PS: Sorry for also sending this reply in private, didn't mean to do that.



Re: about volatile.d.o/n

2004-10-11 Thread Sven Mueller
Henning Makholm [u] wrote on 11/10/2004 19:48:
>> Scripsit Andreas Barth <[EMAIL PROTECTED]>
>>> I could however see the possibility to add a new package "mozilla1.7",
>>> that users can optionally install. However, I also won't like it.
>>
>> Me neither. For example, if I was already using somebody else's
>> backport of mozilla1.7, I wouldn't like it if volatile.d.o hijacked
>> that package and attempted to update it with maintainer scripts that
>> know nothing about the backport I'm using.
Either you are using a backport, which implies that the version you are
using is actually somewhere in the debian archive (probably testing or
unstable) or you are using an unofficial package in which case Debian
can't help you.
It is impossible to tell which unofficial packages are available.
www.apt-get.org does quite a good job at listing most unofficial
repositories, but I don't think that volatile.d.o should actually check
each of them for possible clashes with software entering v.d.o.
If you install something unofficial on your system and it breaks because
of some conflicting version/package in the official archive (be it main,
non-us, security or the proposed volatile), this is _your_ problem or
that of the provider of that unofficial package, not Debian's.
If you are using a backport from backports.org, there won't be a
problem, but if there was one, it would still not be up to Debian, but
to the backporter.
regards,
Sven
PS: Sorry, didn't mean to send this reply in private first.



Re: about volatile.d.o/n

2004-10-11 Thread Sven Mueller
Henning Makholm [u] wrote on 11/10/2004 20:22:
[volatile.debian.org]
Security fixes should be handled by security.d.o.
Perhaps yes, perhaps no. At least it should follow two rules:
1) If not handled by security.d.o, it should at least be handled
   in close cooperation with security.d.o
2) It has to have a separate security archive so that a security
   fix for a package in v.d.o does not cause a package from the
   main stable release to be updated.
What I mean by the second rule is that if I have to put
deb http://volatile.debian.org stable main contrib
into sources.list, I would also need to put
deb http://security.debian.org volatile/stable main contrib
there to fetch the security fixes. I don't want the "normal" stable 
security deb line to cause security updates to v.d.o to be fetched.

cu,
sven



Re: about volatile.d.o/n

2004-10-12 Thread Sven Mueller
Henning Makholm [u] wrote on 12/10/2004 16:05:
For instance, suppose there are Packages A and B in volatile.
(A) has an interface (1) that is only used by (B) in the whole of debian.
"In the whole of Debian" is not the only concern here; I would say it
is not even relevant. Admins of un*x systems are *supposed* to write
a truckload of local scripts to adapt their system to their wishes.
That's the way it works. And our commitment in calling stable "stable"
is that those local scripts will keep working across security updates,
unless breaking them is necessary to close a hole. Similarly, if
volatile is to be any value added over pinning unstable or testing,
it must respect the same commitment.
If you put it that way, I have to agree with you. However, I would make 
one restriction:
- packages in volatile have to keep their commandline (both input and
  output) interfaces compatible, but may change C/Python/Perl
  interfaces if no unresolved incompatibility in the whole of Debian is
  created that way. However, as far as possible, they _should_ also keep
  those compatible.
An exception can be made (like you said) if a change in interface is 
needed to close a security hole. Another reason for exception is an 
addition to the interface (as little interfering as possible) to allow 
scanning for and reporting of new security holes or viruses (for 
security/virus scanners).

If upgrading will break existing interfaces, then "when" means "after
the current testing freezes and is released as stable".
Like I said above, I generally agree. But I would only expect scripting 
interfaces in the meaning of commandline interfaces to remain 
compatible. If a sysadmin writes scripts which depend on an actual API, 
I think it is reasonable for him(/her) to track changes in the APIs 
(s)he is using and/or to mark the packages containing those APIs as hold.

You've said mozilla belongs in backports, which I'll take to mean:
mozilla does not belong in volatile.
I'm saying that *new upstream versions* of mozilla with hundreds of
little interface changes, new preferences file formats, et cetera,
does not belong in volatile.
OK, I would like to see it implemented approx. like this (names subject 
to change):
- volatile.d.o: security and virus scanners, anti-spam software and
similarly fast moving software needed mostly on servers
- backports.d.o: New (versions of) user interface software (new KDE, new
 Mozilla and the like)
Both might need security team support, which I would think is difficult 
for the latter.

cu,
sven



Re: about volatile.d.o/n

2004-10-12 Thread Sven Mueller
paddy [u] wrote on 12/10/2004 18:14:
If you put it that way, I have to agree with you. However, I would make 
one restriction:
- packages in volatile have to keep their commandline (both input and
 output) interfaces compatible, 
would that be 'have to' as in 'MUST'?
Yes.
define compatible.
Not really a good definition, but it explains what I have in mind:
If package A is updated, for it to still keep compatibility, the 
following needs to be fulfilled:

Any pre-existing piece of software (let's call it X) which interfaces 
with A must stay fully functional. New features may be added to A and 
might not be available via the original interface, but any feature 
previously available must still work in the same way (less bugs being 
fixed). This also means that spelling mistakes in the C locale must be 
preserved (they may get fixed in other locales though, including en_US) 
as well as any (possibly even weird) output formatting.

That last sentence also implies that any script using the commandline 
interface of A must reset LC_ALL and LANG to "C" or unset them. Otherwise 
the output format and wording might change from one revision to the 
other. This is good practice anyway, since you couldn't rely on a 
specific output formatting or wording without specifically setting a 
well-known locale.
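
A trivial illustration ("scannerctl" is a made-up program name here): pin 
the locale in the local script so wording and formatting stay stable 
across package updates:

LC_ALL=C LANG=C scannerctl --status | grep -c '^WARNING:'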

 but may change C/Python/Perl
 interfaces if no unresolved incompatibility in the whole of Debian is
 created that way. 
Yeah I've never liked those C/Python/Perl people either, 
not enough soldering irons! ;)
 It wasn't meant that way. But I think that locally written 
scripts shouldn't directly use the Perl module API (as in 
Mail::SpamAssassin for example) but use the commandline interface instead.

 However, as far as possible, 
what is the basis for the trade-off.
You mean where I would draw the line for that? I would want to decide 
that case-by-case. If the change to the API is minimal and the work for 
avoiding it is tremendous (or other reasons make it important to 
incorporate that change), it seems wise to allow it in.

they _should_ also keep those compatible.
so that's just a bug then?
??? You mean the incompatibility? Suppose so.
Another reason for exception is an 
addition to the interface (as little interfering as possible) to allow 
scanning for and reporting of new security holes or viruses (for 
security/virus scanners).
This is part of the definition of compatible?
In a way, yes. see explanation above. The addition should interfere with 
existing interfaces as little as possible.

If one wishes to make guarantees about APIs then it might be good to
identify the APIs.  It is surprising how much people's opinions can vary 
in the edge-cases, and too much hand-waving is bad for the circulation.
(okay, so I made the last part up).
;-)
API is an application programming interface. So it is not interfering 
with scripting interfaces (aka commandline interfaces) ;-)
Actually I think that any machine readable input/output of a program (at 
least when called with the C locale selected) is regarded by many people 
as an API. I don't think so, but I think it should be handled in a 
similar way, but with rules a (very) little less strict.

Sven, you're clearly having a good crack at that here.
Sounds like a compliment to me. thanks. :-)
OK, I would like to see it implemented approx. like this (names subject 
to change):
- volatile.d.o: security and virus scanners, anti-spam software and
   similarly fast moving software needed mostly on servers
- backports.d.o: New (versions of) user interface software (new KDE, new
Mozilla and the like)
Both might need security team support, which I would think is difficult 
for the later.
Depending on how big it is I imagine.  Certainly, when staying close to
upstream versions, one hopes to have a relatively easy time applying 
security fixes (or even just upgrading to the next version).
Yes, staying close to upstream makes it easier to backport their 
security fixes for certain. But depending on the release practice of 
upstream, it might not be advisable to always simply update to their 
newest version.

I'm inclined to think that packages like mozilla belong in stable or
backports, because I can't see why they _have_ to be volatile.  I don't
think being immature software would make a good criterion for entry.
Right.



Re: about volatile.d.o/n

2004-10-12 Thread Sven Mueller
Henning Makholm [u] wrote on 12/10/2004 15:46:
Scripsit Sven Mueller <[EMAIL PROTECTED]>
Henning Makholm [u] wrote on 11/10/2004 20:22:

[volatile.debian.org]

Security fixes should be handled by security.d.o.

Perhaps yes, perhaps no.
Security fixes *to* packages already in volatile is a grey area, yes.
No argument there
I thought I was talking about security fixes to stable packages
going in volatile instead of security.d.o.
Certainly.
volatile should _not_ be just another security archive.
Security fixes to packages in stable should go to security.d.o as they 
used to.
Volatile might get updated due to the same security problem, but if so, 
then that would most likely be by an upgrade to a fixed upstream version 
instead of a backport of that specific security fix - _if_ the new 
upstream version is stable and doesn't break compatibility (in a way 
contradicted by volatile policy).

cu,
sven



Re: Maintenance of User-Mode Linux packages

2004-10-19 Thread Sven Mueller
Matt Zimmerman [u] wrote on 19/10/2004 03:51:
Is anyone (other than martin f krafft) interested in co-maintaining some or
all of the UML-oriented packages in Debian?  This includes the following
source packages which I currently maintain:
- user-mode-linux
- kernel-patch-uml
- uml-utilities
Things are a bit chaotic upstream at the moment, and due to real life
concerns I have fallen behind on their maintenance.
Manoj has recently added UML support to kernel-package, and user-mode-linux
should be reworked to use that, rather than its current special-case code.
I would be interested in helping. Due to my own real-life restrictions (I 
don't have huge amounts of time to spend) I would probably start by 
helping with uml-utilities. IANADD yet, though.

regards,
sven



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Sven Mueller
On 07.05.2024 22:56, Richard Lewis wrote:
> Luca Boccassi writes:
>> what would break where, and how to fix it?
>
> Another one for you to investigate: I believe apt source and 'apt-get
> source' download and extract things into /tmp; as in the mmdebootstrap
> example mentioned by someone else, this will create "old" files that
> could immediately be flagged for deletion, causing surprises.

`apt download` and `apt source` download to your current working
directory. Same for apt-get.

> (People restoring from backups might also find this an issue)

I would not expect people to restore to /tmp and expect that restore to
work across a reboot. And to be honest, I find the expectation to have
any guarantee on files in /tmp or /var/tmp across a reboot quite
surprising. The directories are named "tmp" because they are meant as
temporary storage, which implies automatic deletion at some point IMHO.

Now, bad choices by various tools have been mentioned, so a cleaner for
these directories that runs outside a reboot has to be careful anyhow.
But during a reboot? I don't think that should be too much of a problem.

Cheers,
Sven
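
For reference, the aging rules being discussed are expressed via 
tmpfiles.d; a minimal sketch in the style of systemd's tmp.conf (the ages 
shown are illustrative, not necessarily what Debian would ship):

# clean up files untouched for the given age; "q" also creates the
# directory (or subvolume) if it is missing
q /tmp      1777 root root 10d
q /var/tmp  1777 root root 30d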

Re: finally end single-person maintainership

2024-05-23 Thread Sven Mueller
On 23.05.2024 20:16, Bernd Zeimetz wrote:
> On Thu, 2024-05-23 at 11:01 +0900, Simon Richter wrote:
>> Yes, but unironically: experimental is a side branch, unstable is a
>> MR, and testing is the main branch.
>>
>> It is entirely valid to be dissatisfied with the turnaround time of
>> the existing CI, but what we're seeing here is the creation of a
>> parallel structure with as-of-yet unclear scope.
>
> You are wasting a lot of hardware resources we don't have by abusing
> experimental as a CI. Don't do that.

I have a strong feeling that it is easier for Debian to set up more
hardware (I'm thinking of the "how should we spend the money we have"
mails) than it is to find more maintainers. Using the resources we have
if that saves time on the developer side seems reasonable to me. But
IMHO, it would be better to use Salsa CI for that, not experimental.



Re: Welcome to your new installation of Debian GNU/Linux bookworm/sid

2022-10-09 Thread Sven Mueller
On 09.10.2022 12:20, Luca Boccassi wrote:
> On Sun, 9 Oct 2022, 10:23 Johannes Schauer Marin Rodrigues wrote:
>> I do not understand enough about systemd to be able to say whether an
>> empty value or "uninitialized" is the correct default value for tools
>> like debootstrap or mmdebstrap to set. If nobody else chimes in, I'll
>> change mmdebstrap to write the empty string as suggested by Bastian.
>
> Thanks! An empty machine-id is the right default, we don't support
> first-boot semantics in Debian for now (users that want to try it can
> opt in and change it).

Two main questions:

1) How can a user meaningfully change this? The only time this is
relevant is during the initial boot after installation.

Secondly, I know we ran into trouble with an empty (but existing)
machine-id file, though I'd have to search for more info when I'm back
at work. I seem to recall some issues with actually creating the machine
ID when the system booted for the first time, but I'm far from sure. It
might have been the semantics: as soon as /etc becomes writeable,
systemd tries to commit the generated ID to the file and assumes that
this will persist from then on. This is different from the first-boot
semantics timing, where it is only written once the
first-boot-complete.target is reached.

And (2) what exactly are the unsupported first-boot semantics you talk
about? Simply starting the firstboot service (which is basically a no-op
unless you specifically hook into it, from what I understood)?

Side questions:

Who is "we"? The maintainers of specific packages? Debian as a whole? If
the latter: it seems I missed any discussion of this.

What is the downside of enabling the semantics of ConditionFirstBoot? As
mentioned above, I've seen evidence that not doing so might be
problematic. Since we modified our installation system to enable it, we
haven't seen any issues, neither on physical nor on virtual systems.

If you are doing bootstrapping for a chroot, you are unlikely to
actually start systemd there (well, for the use cases I know). If you
are bootstrapping for a VM or physical system, you likely want the
ability to do some stuff during first boot, and you can easily skip
doing anything, since you would need to actively hook into the
first-boot semantics to use them (ConditionFirstBoot).

Kind regards,
Sven
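
For reference, opting in to first-boot semantics on the unit side looks 
roughly like this (unit name and script path are made up):

# /etc/systemd/system/local-firstboot.service
[Unit]
Description=One-time local setup on first boot
ConditionFirstBoot=yes
Wants=first-boot-complete.target
Before=first-boot-complete.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/local-firstboot-setup

[Install]
WantedBy=multi-user.target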

Re: DEB_BUILD_OPTIONS=nowerror

2023-02-28 Thread Sven Mueller

On 2023-02-28 01:39, Steve Langasek wrote:
[some precursor]

> I can see that for bootstrapping a new architecture, it will sometimes
> be useful to use a toolchain newer than the one that is currently
> default in Debian, and as a result it is useful to also be able to
> bypass new stricter -Werror behavior from gcc upstream that breaks
> compatibility with existing code.

I agree, this is one use case.

> It isn't clear to me from the discussion history that this is the
> actual use case at issue.  But supposing it is, that's one use case;
> and it's a use case that can be addressed without having to make any
> per-package changes and without having to make any changes to
> dpkg-buildflags interfaces by exporting
>
>   DEB_CFLAGS_APPEND=-Wno-error
>   DEB_CXXFLAGS_APPEND=-Wno-error
>
> as part of the bootstrap build environment, for all packages.

Now, if all packages would just use the flags from dpkg-buildflags as
is, or as the final part of CFLAGS/CXXFLAGS, that would be nice.
However, IME, that is not always the case, and some maintainers append
-Werror in particular.

> Of course, dpkg-buildflags also exports flags for other languages than
> C and C++ (and should do), so if you want to have full package
> coverage you would want your set of _APPEND variables to match the set
> of per-language flags that dpkg-buildflags already handles.  Having to
> export 7 variables instead of 1 is annoying.  But it also doesn't
> require reaching consensus on a new interface in dpkg.  And I remain
> unconvinced that the particular proposed interface is the right way
> around for Debian at large.

Following this discussion, I fear we might not reach consensus. But my
ideal (strong) suggestion to package maintainers would be:

1) Don't use -Werror (or the equivalent for your package's language)
   during normal builds (i.e. on the buildds)

2) Do use -Werror via some mechanism (DEB_CFLAGS_APPEND?) during CI
   builds

3) Actually utilize CI builds to detect any breakages early.

(1) helps in multiple cases, all of which are rebuilds of some sort.

Security: Minimize the patch to the package if the compiler was updated
since your package was last built.

Derived distros: I see this quite regularly at Google - packages not
rebuilt in Debian for months now fail to build with newer gcc. While we
use testing as our base (so usually not *too* far off from unstable), we
do rebuild packages in testing more often than Debian would. Not
strictly necessary in most cases, but most of the time, if package bar
build-depends on libfoo-dev and that gets an update, bar will usually be
rebuilt (not always, but whatever).

Porting (as mentioned by Steve and others): Porting to other
architectures often requires a different (newer) compiler version, which
might lead to failures with -Werror.

Cheers,
Sven



Re: proposal: Hybrid network stack for Trixie

2024-09-27 Thread Sven Mueller
On 27.09.2024 08:31, Christian Kastner wrote:
> On 2024-09-23 13:09, Lukas Märdian wrote:
>>> So on desktop installations including NetworkManager, netplan will be
>>> configured to do nothing? Why install netplan at all on desktop
>>> systems then?
>>
>> Because it allows to add configuration in a way that is common with
>> server, cloud and other instances of Debian. It's not about enforcing
>> this, or breaking people's use-cases. But about working towards
>> unified network configuration.
>
> That doesn't answer the question though. When it does nothing by
> default on those systems, why install it?
>
> On those systems, if people want to explicitly configure and make use
> of netplan, why wouldn't they just `apt-get install netplan.io`?

I also don't buy the argument about finding solutions to network issues
on the Internet. Most desktop and laptop users will likely never
configure their networks via netplan unless that is what the default GUI
network configuration dialogues use. Even if someone were to actively
document how to solve certain network issues using netplan, it would
lead to confusion. Let's say I used the default dialogues all the time
and then applied a WiFi fix via netplan while I was at some conference.
Now I come back home and suddenly my laptop can't connect to my home
WiFi anymore because netplan overrode the interface config in
NetworkManager.

I'm really opposed to using netplan as a "common interface" by default
until it has better integration.

Cheers,
Sven
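
For reference, the "do nothing and delegate" stub that such desktop 
systems typically carry looks roughly like this (file name illustrative):

# /etc/netplan/01-network-manager-all.yaml
# hand all interfaces to NetworkManager; netplan itself configures nothing
network:
  version: 2
  renderer: NetworkManager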

Re: A 2025 NewYear present: make dpkg --force-unsafe-io the default?

2024-12-31 Thread Sven Mueller
On 31.12.2024 15:08, Michael Stone wrote:
> On Mon, Dec 30, 2024 at 08:38:17PM +, Nikolaus Rath wrote:
>> If a system crashed while dpkg was installing a package, then my
>> assumption has always been that it's possible that at least this
>> package is corrupted.
>>
>> You seem to be saying that dpkg needs to make sure that the package is
>> installed correctly even when this happens. Is that right?
>
> dpkg tries really hard to make sure the system is *still functional*
> when this happens. If you skip the syncs you may end up with a system
> where (for example) libc6 is effectively gone and the system won't boot
> back up.

If the intention is to make as sure as possible that a system stays in a
bootable state, wouldn't it make sense to either mark packages as
boot-essential and only give those the sync-heavy treatment (or
alternatively allow packages to be marked as not needed for booting and
skip most of the syncs on them)?

It feels wrong to me to justify such a heavy performance penalty this
way if only a relatively small subset of the dpkg status files and core
packages is actually needed to have a bootable system. Apart from the
problem that the syncs are often completely useless in virtual machines.

Kind regards,
Sven
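
For reference, administrators who already want this trade-off can opt in 
today with a one-line dpkg configuration snippet (file name illustrative):

# /etc/dpkg/dpkg.cfg.d/unsafe-io
force-unsafe-io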