Re: piece of mind

2014-10-20 Thread Christoph Biedl
Matthias Urlichs wrote...

> We don't do a GR among our users. We do that among Debian
> members/maintainers/developers/take-your-pick.
> 
> Of those, most …
> * are perfectly happy with the TC's decision
> * can live with it
> * are unhappy, but think that to continue discussing this is way worse
>   than biting the bullet and getting on with actual work
>   * you do know that we plan to release Jessie sometime this decade,
> right?
> * are disillusioned about it all and decided to stand aside

* sit and watch things happen that they don't agree with at all, things
that were not at all difficult to foresee one or two years ago. With
feelings that switch between amusement, bewilderment, and sheer
horror. And sometimes confusion whether their position is perhaps that
of grumpy old people who object to _any_ change just because things
are considered good enough the way they are right now. Or rather that
of those who have seen a lot of changes in the past and learned the
hard way that they usually cause a lot of anger and surprisingly
little improvement, so in general they are better avoided altogether.
Staying with the present quirky situation might be uncomfortable, but
at least it's familiar.

Since, in short,

- never before in Debian has there been a change this intrusive, and
  also this controversial
- systemd slowly changed from "default" to "de-facto only" init system
- systemd grew from the sysv-provided feature set into something that
  absorbs more and more components of a Linux system, which is
  everything but Unix philosophy.
- this is a fast-moving target, and no one has an idea what it will
  evolve into. So people who advocated systemd back in 2013 opted for
  an idea, not for the systemd that will enter jessie.
- *many* other components require adjustment, and it just happens
- upstream shows little respect for people who object to systemd
- double vendor lock-in, both upstream and in Debian. Read: Running
  not entirely pure Debian installations, I have at least three
  serious issues that prevent systemd-based systems from booting or
  being usable¹.

* consider skipping jessie altogether in the hope that jessie+1 will
provide alternatives or at least ship a systemd whose development
speed has slowed down significantly.

* or prepare moving away from Debian.

> Judging by the last couple of months, the rest appears to number <6 people.

In my personal environment, nine out of ten people oppose it, most of
them unsure, some seriously concerned - especially after they gave it
a try. Those who are in favour are so in a strong fashion that
bystanders could easily take as "fanboy". Surprisingly enough, there's
no "I can take it or leave it".

Christoph

¹ All the requirements found in
  
  are met, so this list is incomplete.





Re: jessie release goals

2013-05-15 Thread Christoph Biedl
Christoph Anton Mitterer wrote...

> 2) No more packages that bypass the package management system and secure
> apt:
> a) There are still several (typically non-free) packages which download
> stuff from the web, install or at least un-tar it somwhere without
> checking any integrity information that would be hardcoded in that
> package.
> 
> b) Another problem are IMHO plugins like Firefox extensions, kinda
> bypassing APT. I think at least those that are installed via a package,
> shouldn't be upgradable/overwritable anymore with online versions.

I'd like to extend that topic to the question of under which
circumstances a package is allowed to "phone home", i.e. to contact a
service provided by upstream without the consent of the user. For the
record, I wouldn't mind much if the rule was "never".

Still, an answer might not be as easy as it seems. A few situations:

* Automatic update checks don't make sense; mostly they just confuse users.

* As an example, nagios3 upstream embedded several requests to the
  nagios homepage on the start page of any local installation. That
  I consider both annoying and a privacy breach, so I patched that
  away locally. But perhaps such behaviour should be banned entirely.

* On the other hand, there are packages that do need frequent updates,
  virus scanners to start with, also ad blockers. Not sure whether
  these should be granted an exception. If not, somebody would have to
  take on the task of providing these updates in an APT way.

Just sharing a few thoughts on that ...

Christoph





Re: jessie release goals

2013-05-15 Thread Christoph Biedl
Another thing: Hardening has already been a release goal, but there
are still packages around without it.

After having seen the protection catch a programming bug, I think
more importance should be put on that, either by considering all
packages RC-buggy that should be built with hardening wrappers but are
not - or at least packages providing code that, in some sort of order:

* has the setuid bit set,
* usually/regularly runs as root,
* is a daemon.

Also, debhelper 9 has eased the usage of hardening flags a lot, so a
major excuse not to add them is now void.
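
For illustration only (not part of the original mail): with debhelper
compat 9 the flags from dpkg-buildflags are passed to the build
automatically, and enabling the full hardening set boils down to one
extra line in debian/rules, roughly like this:

    #!/usr/bin/make -f
    # Ask dpkg-buildflags for the complete hardening set
    # (bindnow, PIE, etc. on top of the defaults).
    export DEB_BUILD_MAINT_OPTIONS = hardening=+all

    %:
    	dh $@

(The dh line has to be indented with a tab, as usual for make recipes.)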

Christoph





Re: bugs.debian.org: something's wrong...

2013-03-18 Thread Christoph Biedl
Paul Gevers wrote...

> Is it just me or am I the only one getting bug reports from bugs that
> don't seem to exist on bugs.debian.org.

Now it's apparently back online, but web tracking has been added, as
seen in #703298. That's disgusting.

Christoph





Re: Bits from the Security Team

2014-03-08 Thread Christoph Biedl
Moritz Muehlenhoff wrote...

> Security archive
> - 
> 
> * In order to avoid bottlenecks and to open up the security process
>   further we're planning to allow maintainers which are not part of
>   the security team to release security updates on their own. This
>   applies to packages which have frequent vulnerabilities and where
>   the maintainers are involved in the update process anyway.

The current model at least theoretically allows someone (read: the
security team) to review the patch provided by the maintainer. I like
that four-eyes principle and wouldn't want to give it away.

But perhaps your plan is rather about moving the task of the actual
upload to the maintainer *after* some discussion? Or will you accept
being surprised by an unannounced security upload? (This is none of
my business, I'm just curious.)

> Others
> - --

> * In some cases the scope of security support needs to be limited (e.g
>   webkit-based browsers in Wheezy) and sometimes packages need to
>   end-of-lifed before the security support time frame ends. Currently
>   this information needs to be retrieved from the release notes or
>   announcement mails. We'd like to see a more technical solution which
>   displays the unsupported packages for the installed packages on a
>   specific system. If anyone wants to work on such a script, please
>   contact t...@security.debian.org and we can hash out the details.

That's much-needed, especially with an upcoming LTS. Expect mail.

> LTS
> - ---
> 
> * At the moment it seems likely that an extended security support
>   timespan for squeeze is possible. The plan is to go ahead, sort out
>   the details as as it happens, and see how this works out and whether
>   it is going to be continued with wheezy.

At least worth a try. I was wondering whether popcon gathers data that
would tell how many people will actually use LTS (I think it does).

>   The rough draft is that updates will be delivered via a separate
>   suite (e.g. squeeze-lts), where everyone in the Debian keyring can
>   upload in order to minimise bottlenecks and allow contributions by
>   all interested parties. Some packages will be exempted upfront due
>   to their volatile nature (e.g. some web applications) and others
>   might be expected to see important changes. The LTS suite will be
>   limited to amd64 and i386. The exact procedures will be sorted out
>   soon and announced in a separate mail.

Be prepared to answer some questions, like:

Are maintainers expected to support "leap-frog" upgrades, i.e. from
squeeze-lts to jessie? If not, users will try this anyway at the EOL
of squeeze-lts (in two years or so) - brace for nasty bug reports. If
yes, some maintainers might already have dropped the squeeze-to-wheezy
upgrade scripts from their packages, thus possibly causing breakage.
At least I did. No evil intentions, that was before the LTS discussion
came up.

Christoph





Re: Bits from the Security Team

2014-03-09 Thread Christoph Biedl
Matthias Urlichs wrote...

> IMHO the decision to designate release N to be a LTS release has too be
> made at release time of N+1 _at_the_latest_, so maintainers know that they
> may not remove their "old" upgrade script snippets.

Agreed, but given the long intervals between releases: Waiting until
wheezy enters that state means at least two years, and I'd expect the
enthusiasm I've seen for LTS to cool down a lot in that time.

So let's see squeeze-lts as an experiment, and I read it is declared
as such: Limited set of architectures, no promises this will last.
Then all people involved can learn about any pitfalls. Once the
jessie release gets closer, enough experience should have been gathered
to tell whether LTS is feasible. If yes, package maintainers should be
encouraged to support leap-frog upgrades.

The right time for that decision would be the freeze, in my opinion.

Christoph





Re: Bits from the Security Team

2014-03-10 Thread Christoph Biedl
Didier 'OdyX' Raboud wrote...

> I, for one, have been routinely dropping transitional binary packages 
> that were in the latest stable; they were needed to migrate from (the 
> releases which are now) oldstable to stable but are only archive noise 
> now. Delaying that cleanup for an additional stable release cycle really 
> feels like unnecessary delay, during which we pretend to maintain code 
> that hardly anyone tests.

The missing tests are indeed a problem. For migration code in
(usually) postinst or transitional packages I don't see the big issue,
besides a maintainer's notion of "I'd like to get rid of that old
stuff".


Face the fact: Users *will* skip-upgrade from squeeze-lts to (by then
stable) jessie, and you cannot bar them from doing this. If they break
their system, any "That was not supported, told you so" helps nobody,
neither them nor Debian's reputation. Better to support such upgrades
from the beginning.

So I was wondering whether there is a way for packages that do not
support skip upgrades to enforce an upgrade path via the intermediate
distribution. The first idea was a new Pre-Conflicts: package
relationship, then I wondered whether it shouldn't simply be possible
to write (for a package in jessie):

Package: foo
Conflicts: foo (<< $[wheezy-version]~)

But Policy 7.4 has the answer, it's "no".

Assuming we could somehow turn off that exception, a skip upgrade is
impossible for that package, but it's blocked /before/ things break.
The solution for the user would then be to add the skipped release to
sources.list temporarily, and go on. The search engines will soon
learn that trick from the first bug reports, and users can find it
using the specific error message they see.

Benefit: Only the packages that do not support skip upgrades will need
that step, also reducing bandwidth usage and walltime needed for the
skip upgrade.

> The problem is that there is no policy in place to make us support 
> oldstable-to-testing upgrades. If there's interest, that'd need to be 
> decided with a more firm policy than "encourage maintainers".

Would you have preferred to read something like "putting the burden
onto the maintainers"?

Christoph





Re: Bits from the Security Team

2014-03-17 Thread Christoph Biedl
Didier 'OdyX' Raboud wrote...

> I was trying to say that there is no policy currently in place to ensure 
> that skip-upgrades actually work,

Agreed. If LTS is going to be a permanent thing, this has to change.
For any squeeze-lts to jessie upgrades, the ride might become a bit
bumpy, although I don't suspect the number of affected packages is
*that* big. But no doubt it's above zero.

Preventing a skip upgrade for a certain package using technical means
doesn't look easy. The only solution available now that I can think of
is a "come from" version number check in the preinst. That's ugly.

So again, let's see squeeze-lts as an experiment. But time is running
out if any findings are to result in updates to policy etc. before the
jessie freeze.

> and at least one maintainer has 
> already started to cleanup pre-wheezy stuff from his packages [0]. 
> [0] I'd be surprised to be the only one, who knows.

Just in this thread, I've counted two :)

Christoph





Re: Bits from the Security Team

2014-03-18 Thread Christoph Biedl
Moritz Muehlenhoff wrote...

> With the current level of commitment an LTS is unlikely.

Um, not good. Awaiting your announced separate message - then it's
time for those who promised commitment back last August to prove they
are still interested.

Christoph





LTS (was: Bits from the Security Team)

2014-03-19 Thread Christoph Biedl
Moritz Muehlenhoff wrote...

> LTS
> - ---
> 
> * Anyone interested in contributing, please get in touch with
>   t...@security.debian.org. We'll setup an initial coordination list
>   with all interested parties. All policies / exact work will be
>   sorted out there.

Oops, I have to admit it took Raphael Geissert's blog article for me
to realize that *this* was the call for commitment.

So please add me to that coordination list. At least I'd like to see
how much work LTS will be.

Christoph




Re: Conflicting package names

2014-03-24 Thread Christoph Biedl
Pablo Lorenzzoni wrote...

> How should I proceed?

I suggest not spending any time on this. Mostly since the old
conquest's upstream[0] isn't dead, contrary to what is stated in
#591487 - this might just have changed in the meantime. But if anyone
ever brings the old "conquest" back into Debian while you have taken
the name in the meantime, expect a lot of disturbance.


In general, and assuming upstream was dead, buried and forgotten: If
you want to re-use the source name, you might confuse the PTS and the
security tracker, and this should better be discussed with all parties
involved beforehand. I think they can handle this after such a long
time, but since the source name is barely visible to the end user, why
start the fuss?

If you want to re-use any binary name, keep in mind some users might
still have the old package installed, and that causes a few problems:
Co-existence is not possible, but according to Murphy some user will
wish to have both - bad. If your version number is lower and users
actually wish to install your package and kick out the old one (not
very likely, I admit), they will - from a technical point of view -
have to downgrade, or remove the old package first - not good.

If I really wanted to re-use a binary name, although I cannot see why
I should, I'd make sure my version is higher, using an epoch if
required, and add a come-from version check to the preinst, alerting
where applicable that something potentially unwanted is about to
happen, and offering to abort the upgrade.


This has been done, I don't know to what extent, a few years ago when
the "git" package's meaning changed to the version control system from
"GNU Interactive Tools". But honestly, git is in a completely
different league. So, for your package, it's not worth the effort.

Additionally, the long name might be a bit unwieldy but also prevents
users from mistaking it for the other "conquest" package.


In the long run, this will create problems in the package name space.
I have some ideas how to deal with that issue, mostly by adding a new
package interdependency. But that will not happen any time soon.


Christoph

[0] http://www.radscan.com/conquest.html





Re: systemd - some more considerations

2014-04-04 Thread Christoph Biedl
Chow Loong Jin wrote...

> Some references would be helpful. I can't seem to find anything on this 
> through
> some cursory googling.

Perl scripts, when installed by ExtUtils::MakeMaker or similar, do
have

| eval 'exec /usr/bin/perl  -S $0 ${1+"$@"}'
| if 0; # not running under some shell

in their very first lines. Yes, it works. However, I wasn't aware
there was still a need for that.

Christoph





Re: gnutls28 transition

2014-05-05 Thread Christoph Biedl
Dimitri John Ledkov wrote...

> Should we start transition to gnutls28 by default, for all packages
> that are compatible?

Given the fact that libgnutls26 has issues like #708174 and cannot
handle SHA-512-signed certificates as issued by CAcert¹: Yes, please
let's get rid of that old stuff wherever possible and as soon as
possible.

Christoph

¹ http://danielpocock.com/double-whammy-for-cacert.org-users came a
  few hours too late, I had to re-compile openldap using openssl.





Re: systemd-fsck?

2014-05-09 Thread Christoph Biedl
Steve Langasek wrote...

> I don't think systemd integration is in a state today that this is ready to
> become the default.

As long as packages like network-manager depend on systemd and that
dependency causes the init system to be changed accordingly, you can
expect the vast majority of desktops to switch, making systemd not
only the default but also leaving no feasible alternative.

Using something other than systemd will not be impossible, but it will
introduce so many shortcomings that few people will take on the
burden.

Whether you like it or not.

Christoph





Re: Upgrade troubles with Perl

2014-05-13 Thread Christoph Biedl
gregor herrmann wrote...

> Which kind of problems did you see with new Perl versions (I could
> imagine incompatible old third-party software), and is there
> something the Debian perl maintainers and/or the Debian Perl Group
> can do to improve the situation?

At first I was about to say those who never went past 5.6 in their
coding style are perfectly safe. But then I remembered not even that
is (probably) true: At some time in the past, I don't even know when,
the semantics of \d changed to something different from [0-9] - which
kills performance and might be a promising vector to break into code.
Although I have to admit I haven't seen that in the wild.
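
To illustrate the change (my own example, not from the original mail;
the /a modifier requires Perl 5.14 or later):

    #!/usr/bin/perl
    # \d nowadays matches any Unicode decimal digit, not just 0-9.
    use strict;
    use warnings;

    my $digits = "\x{0664}\x{0662}";   # ARABIC-INDIC digits for "42"
    print "\\d matches non-ASCII digits\n" if $digits =~ /^\d+\z/;
    # The /a modifier restricts \d to plain [0-9] again:
    print "... but not under /a\n" unless $digits =~ /^\d+\z/a;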


Nevertheless, two things where I got bitten:

In 5.18, upstream decided to discourage usage of smart matches and
given..when after these have existed since 5.10 (or: more than six
years) by marking them as experimental, and did this in a very harsh
way. This will drive me away from using Perl as my preferred
programming language.

While no doubt these constructs have major design flaws, upstream
chose the approach of maximum annoyance by creating a warning message
for each and every usage, instead of first deciding where to change
the semantics and then creating warnings only for the places that
might break in the future.

Without going further into details: there are ways to work around it,
but none of them is really charming. Especially since my code must run
without modification on every distribution since squeeze. So in a way
I hope jessie will not ship 5.20 if that is released by then, since I
can expect I'll have to work through virtually every piece of Perl
I've written a *second* time. If that wasn't obvious: I had code
breakage due to that change, not only cosmetic issues.


Second, there's a regression in the handling of in-memory file
handles. It broke my code when giving it a first try on jessie. And I
am still *very* upset that #747363 is considered anything below RC.
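
(For readers unfamiliar with the feature: an in-memory file handle is
a handle opened on a scalar instead of a file. A minimal sketch of my
own, unrelated to the actual regression:)

    # Open a read-write handle that is backed by a scalar in memory.
    my $buffer = '';
    open(my $fh, '+>', \$buffer) or die "open: $!";
    print $fh "hello\n";
    seek($fh, 0, 0);
    my $line = <$fh>;    # reads back "hello\n" from $buffer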

Christoph





Re: Upgrade troubles with Perl

2014-05-14 Thread Christoph Biedl
Paul Wise wrote...

> On Wed, May 14, 2014 at 6:33 AM, Christoph Biedl wrote:
> 
> > In 5.18, upstream decided to discourage usage of smart matches and
> > given..when after these have existed since 5.10 (or: more than six
> > years) by marking them as experimental, and did this in a very harsh
> > way. This will drive me away from using Perl as my preferred
> > programming language.
> 
> I'm using this comment and one-liner to workaround this issue:
> 
> # Silence warnings about smartmatch being experimental
> # The smartmatch we use works under recent Perl versions
> no if $] >= 5.017011, warnings => 'experimental::smartmatch';

Yes, but it's one of those "none of them is really charming". 

I did this, but it's BAD. It required changing each affected file. It
hides messages that might be important in a future release. It will
require a second walk through each file.

Upstream's message so far is: "Don't use new features, even if they've
been out for years." I'm not keen on products whose developers show
such an attitude towards their users *hint*

Christoph





Re: ppp plugins and dependencies

2015-06-14 Thread Christoph Biedl
Chris Boot wrote...

> The main problem that I see is that there isn't a built-in mechanism for
> tracking such a situation, as far as I can tell. There aren't any shared
> libraries involved, so I don't have the benefit of sonames, symbols
> files or symbol versioning.

(...)

Disclaimer: I might not have a full overview of the implications, so
just take this as a small bit of input from the maintainer of one of
the packages involved (pptpd).

In general I prefer solutions that require a lot of work but only
once over those that need some small attention every now and then; and
also those where work needs to be done in just one place instead of
many. Not just because I'm lazy, but mostly since this should be less
error-prone.

Therefore, the debhelper/dh_ppp_plugin looks like a good idea unless
there is an even more generic approach as suggested by James McCoy.
AFAICT this could also create the lower and upper bound versioned
dependencies on ppp.

However, I'll follow any changes your decision will imply. Choose wisely.

my 2¢

Christoph




Re: Raising the severity of reproduciblity issues to "important"

2015-08-24 Thread Christoph Biedl
Santiago Vila wrote...

> Making a great percentage of packages in the archive to be "suddenly"
> buggy is unacceptable.

Nobody would consider making failing r12y "serious" in the current
state, where 13 to 17 percent of the packages fail, depending on how
you read the numbers.

> We all want Debian to build reproducibly, but goals are achieved by
> submitting bugs, changing packages and making uploads, not by rising
> severities.

Yes, and this work is currently being done. Or rather, has been done
to a surprisingly huge extent. A *lot* of patches have been prepared
and submitted to the BTS, just waiting for the maintainers to pick
them up.

The question is: how many packages need attention beyond that, i.e.
fail for reasons that still need investigation? At the moment the
number is somewhere between 200 and (wild guess) 1000. If it's less
than 50 in a year - and this seems realistic - why not finish the job?
Setting these to "important" then seems acceptable. Making this a
release goal could still be left for stretch+1.

Christoph



Bug#912745: ITP: libregexp-wildcards-perl -- converts wildcard expressions to Perl regular expressions

2018-11-03 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 

* Package name: libregexp-wildcards-perl
  Version : 1.05
  Upstream Author : Vincent Pit 
* URL : https://metacpan.org/pod/Regexp::Wildcards
* License : Artistic or GPL-1+
  Programming Lang: Perl
  Description : converts wildcard expressions to Perl regular expressions

(as created by dh-make-perl)

 In many situations, users may want to specify patterns to match but don't
 need the full power of regexps. Wildcards make one of those sets of
 simplified rules. Regexp::Wildcards converts wildcard expressions to Perl
 regular expressions, so that you can use them for matching.
 .
 It handles the * and ? jokers, as well as Unix bracketed alternatives {,},
 but also % and _ SQL wildcards. If required, it can also keep original (...)
 groups or ^ and $ anchors. Backspace (\) is used as an escape character.
 .
 Typesets that mimic the behaviour of Windows and Unix shells are also
 provided.





Re: Bug#912745: ITP: libregexp-wildcards-perl -- converts wildcard expressions to Perl regular expressions

2018-11-03 Thread Christoph Biedl
Jakub Wilk wrote...

> * Christoph Biedl , 2018-11-03, 12:41:

Thanks for proof-reading. Both issues you've reported are upstream
ones; of course I'll bring them up there.

> > It handles the * and ? jokers,
>
> s/joker/wildcard/ ?

Not sure here. It seems that in English "joker" is an unusual term for
a wildcard, unlike in other languages. Perhaps some native English
speaker could comment?

> > Backspace (\) is used as an escape character.
>
> I like the idea of backspace as escape character, but you probably meant
> "backslash" here. :-)

Nice idea indeed ...

Christoph




file(1) now with seccomp support enabled

2019-07-19 Thread Christoph Biedl
tl;dr: The file program in unstable is now built with seccomp support
enabled, expect breakage in some rather uncommon use cases.

Hello,

Upstream of the file package added seccomp support a while ago, and
probably everyone with even a small concern about security will agree
the file program, often being used on dubious or even doubtlessly
malicious input, should use seccomp to make the attack surface
smaller. However, I refrained from enabling this feature back then,
just weeks before the buster freeze - in retrospect indeed the right
decision. Now, this early moment in the bullseye development cycle is
a good time, so there's version 1:5.37-2, accepted into unstable a few
moments ago.

This however comes at a price: Some features are no longer available.
For example, inspecting the content of compressed files (disabled by
default, command-line parameters -z and -Z) is now supported for a few
compression formats only: gzip (and friends, see libz), bzip2, lzma,
xz. Decompressing other formats requires the invocation of external
programs, which will lead to a program abort (SIGSYS).

Also, when running in LD_PRELOAD environments, that extra library may
use blacklisted syscalls. One example is fakeroot, which caused
breakage in debhelper (#931985, already fixed). In both cases you
should then see a message in the kernel log.

There is a workaround for such situations: disabling seccomp with the
command-line parameter --no-sandbox.

But I have no idea about the impact this will cause. Checking all
packages that (install-)depend on file for usage of these parameters
turned out to be a fairly tough job. I've probably killed
codesearch.d.n a few times, the term "file" is just very generic :)
Some 53 binary packages have a dependency on the file package, two of
them (cloud-utils, cracklib2) are very likely affected and will
receive an extra bug report.

Overall, I'm just asking you to keep an eye out for possible breakage,
and to also check the kernel log. If you encounter any and can imagine
a better solution than simply disabling seccomp in that case, let me
know via the BTS.


Finally, a clarification: Applications that link libmagic instead of
calling the file executable are not affected by any of this. But the
respective program authors might consider enabling seccomp on their
own, for the above reason.

Cheers,
Christoph




Re: file(1) now with seccomp support enabled

2019-07-19 Thread Christoph Biedl
Paul Gevers wrote...

> Hi Christoph,
> 
> On 19-07-2019 17:18, Christoph Biedl wrote:
> > tl;dr: The file program in unstable is now built with seccomp support
> > enabled, expect breakage in some rather uncommon use cases.
> 
> This probably warrants an entry in the bullseye release-notes. Should we
> already forward your original mail to the BTS for that, or do you care
> to provide better text once you learn better what breaks (and what not)?

For the time being I'd suggest just taking a note somewhere and
revisiting the situation a year from now. Too many more changes might
come, hopefully for the better. So as of now, writing elaborate texts
about the situation might be wasted time.

Christoph




seccomp woes (was: file(1) now with seccomp support enabled)

2019-07-19 Thread Christoph Biedl
Russ Allbery wrote...

> Christoph Biedl  writes:
>
> > tl;dr: The file program in unstable is now built with seccomp support
> > enabled, expect breakage in some rather uncommon use cases.
>
> Thank you very much for doing this!  Here's hoping this sets a trend.  It
> will provide so much defense in depth against malicious files.

Thanks for the positive feedback. While I agree seccomp is something
nice to have, I'd like to share two very different thoughts that arose
while doing this.


The first one is Debian-specific: Declaring build dependencies on
libraries that are not available on all architectures, like seccomp.

This is not at all specific to seccomp, but perhaps it's one of the
places where this problem is seen relatively often. So read "seccomp"
as "$seccomp", describing any library that does not exist on all
architectures.

The build system of the file package uses autoconf to check for the
presence of the seccomp library and will just disable that feature if
support is missing. But unconditionally adding "libseccomp-dev" will
break the build on e.g. alpha due to an unsatisfiable build dependency
- I don't wish to simply ignore that. So I have to make sure the lack
of seccomp on these architectures does not break the build.

Solutions I've seen (use codesearch to find examples):

* People don't care
* People add a hard-coded list of archs into the dependency clause
  like "libseccomp-dev [amd64 ...]"

The first I consider plain ignorant.

The second puts work on each package maintainer who uses libseccomp in
the build dependencies: The list of supported archs may change, and
having to maintain it in many places is unreliable and also stupid
work. Still it does the job - I am just looking for a better way.

Solutions I can think of:

* Centralize the list of supported archs in the seccomp packages. By
  either creating an empty libseccomp-dev for the archs where seccomp
  is not supported, or by creating a "libseccomp-dev-dummy" for these.
  In the latter case package maintainers would have to do a one-time
  change of the build dependency into "libseccomp-dev |
  libseccomp-dev-dummy" and can focus on other issues then.

* Add an always-satisfyable alternative clause, like
  "Build-Depends: libseccomp-dev | base-files".

  Yuck.

* Introduce a statement for relaxed build dependencies. In other
  words, a new "Build-Depends-Try:" or "Build-Recommends:" field that
  the build tools try to satisfy, but where failure to do so emits a
  warning at most.

Honestly, the last one has a lot of charm since it means a one-time
effort only. That effort however is huge and includes convincing
several people to implement it.
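
For illustration, the two variants mentioned above would look roughly
like this in debian/control (the architecture list and the -dummy
package name are made up, this is only a sketch):

    # hard-coded architecture list, to be kept up to date by hand
    Build-Depends: libseccomp-dev [amd64 arm64 armel armhf i386]

    # alternative with a (hypothetical) centrally provided dummy package
    Build-Depends: libseccomp-dev | libseccomp-dev-dummy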



The second thought is questioning whether seccomp is feasible on a big
scale. The domain of a seccomp filter set is the whole application,
and the more libraries get linked in during development, the more
syscalls have to be whitelisted, defeating the idea of seccomp. So in
consequence you'll either create a lot of small programs or several
threads with different sets of whitelisted syscalls. Either way, a lot
of IPC is needed, which is time-consuming and error-prone - in
implementation, execution, and debugging.

Christoph




Re: file(1) now with seccomp support enabled

2019-07-26 Thread Christoph Biedl
Christoph Biedl wrote...

> tl;dr: The file program in unstable is now built with seccomp support
> enabled, expect breakage in some rather uncommon use cases.

Several issues popped up in the last few days as a result of that
change, and in spite of some band-aiding, the current implementation
of seccomp in the file program creates way more trouble than I am
willing to ignore. So, sadly, I've reverted seccomp support for the
time being to avoid further disruption of the bullseye development.

However, Helmut Grohne has suggested confining only the part of the
code that is most likely susceptible to vulnerabilities (details in
#932762), and I agree this is possibly the better way to go. This
requires coordination with upstream and will take a bit of time.

Christoph




Re: file(1) now with seccomp support enabled

2019-07-26 Thread Christoph Biedl
Vincas Dargis wrote...

> On 2019-07-26 18:59, Christoph Biedl wrote:
> > > tl;dr: The file program in unstable is now built with seccomp support
> > > enabled, expect breakage in some rather uncommon use cases.
>
> Interesting, what are these uncommon use cases? Maybe we could confine it
> with AppArmor instead, since we have it enabled by default?

LD_PRELOAD ruins your day. From the kernel's point of view there is no
difference between a syscall coming from the actual application and
one coming from the code hooked into it. And while the syscalls done
by the former (i.e. file) are more or less known, the latter requires
examination of each and every implementation and whitelisting
everything. fakeroot-tcp, for example, wishes to open sockets,
something I certainly would not want to whitelist.

TTBOMK apparmor would not provide a sane solution for that problem.
There still might be another use case: The file program should[citation
needed] not write to any file. Reading however must be possible for
every item in the entire file system.

Christoph




Re: file(1) now with seccomp support enabled

2019-07-27 Thread Christoph Biedl
Philipp Kern wrote...

> That being said: It feels like if you face this situation, you could also
> fork off a binary with a clean environment (i.e. without LD_PRELOAD) and
> minimal dependencies and only protect that with seccomp. Of course you lose
> the integration point of LD_PRELOAD that others might want to use if you do
> that, in which case I guess one could offer a flag to skip that fork.

... and I'm back at a point where I've been before: The default has to
be the secure way, else it's not worth the time. So if applications
really would break otherwise and have a good reason to trade away the
security, they'll either have to provide that flag (patching,
patching), or file(1) would have to detect that situation (hacky,
fragile).

> In terms of prior art SSH also forks off an unprivileged worker to handle
> network authentication in preauth and only seccomps that one rather than its
> main process. But it's also not doing the environment cleanup AFAICS.

Yeah, but as I already wrote here, this requires some sort of IPC, and
a lot of joy comes with that.

> Kind regards and thanks for making all of us more secure! :)

Trying my best but my hopes are getting low.

Christoph




Re: file(1) now with seccomp support enabled

2019-07-28 Thread Christoph Biedl
Philipp Kern wrote...

> On 2019-07-27 10:01, Vincent Bernat wrote:
> > I am upstream for a project using seccomp since a long time and I have
> > never been comfortable to enable it in Debian for this reason. However,
> > they enable it in Gentoo and I get the occasional patches to update the
> > whitelist (I am not doing anything fancy).
>
> But technically it should be possible to test this in an autopkgtest, no? I
> don't think perfect has to be the enemy of good here, as long as we can
> detect breakage and remediate it afterwards?

Ayup, already working on this, for precisely that reason. There's a
related question I haven't worded yet - stay tuned.

Christoph




Re: 64-bit time_t transition for 32-bit archs: a proposal

2023-05-18 Thread Christoph Biedl
Steve Langasek wrote...

> I don't have any inkling how widespread this problem will be nor do I see
> any path towards automatically detecting such issues (a codesearch on time_t
> would return far too many false-positives to be useful).

While I doubt there is a perfect way to auto-detect this, I see some
things that could be tried in order to make the haystack smaller.

My first idea was to check for "using 32-bit time_t in a packed
structure", something a C compiler could warn about - assuming
on-disk/wire-format data is handled in C structs; omitting the
"packed" attribute would have created trouble years ago already.

That would not help though if programmers just used a 32-bit integer
to store time information - and I guess they did. Idea: Re-use some
code analyzer to look for 32-bit integers that have "time" in the
name. To support this, a rough codesearch for "[iu]32..*time[^o]"
instantly gave me some moments of "Oh, not looking good", although
also several false positives.


And, assuming there is sane testing available (autopkgtest would be
really handy), run it with a mocked system time.

Set the time to some point pre-2038, hook the write operations, and
inspect all written data for four-octet sequences that resemble the
time (with some offset) but lack the adjacent four octets of value
zero that would make it a 64-bit time. Repeat for various times and
for both endiannesses to improve precision.

And of course, set the time to post-2038 and see what happens. Brace
for impact.
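
A minimal sketch of that inspection step (my own illustration, not a
finished tool - it scans a captured output file for 32-bit values
close to a given mock time, in both endiannesses):

    #!/usr/bin/perl
    # Look for suspicious 32-bit timestamps in captured output data.
    use strict;
    use warnings;

    my ($file, $mocktime) = @ARGV;     # e.g. capture.bin 2000000000
    my $window = 3600;                 # accept +/- one hour of offset

    open(my $fh, '<:raw', $file) or die "open $file: $!";
    my $data = do { local $/; <$fh> }; # slurp the whole file

    for my $offset (0 .. length($data) - 4) {
        for my $fmt ('V', 'N') {       # little endian, big endian
            my $value = unpack($fmt, substr($data, $offset, 4));
            next if abs($value - $mocktime) > $window;
            # A fuller check would also rule out an adjacent all-zero
            # four-octet group, which would indicate a 64-bit value.
            printf "possible 32-bit timestamp at offset %d (%s)\n",
                $offset, $fmt eq 'V' ? 'LE' : 'BE';
        }
    }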

Christoph




Re: lpr/lpd

2023-09-17 Thread Christoph Biedl
Thorsten Alteholz wrote...

> Maybe this is a good opportunity to get rid of some old legacy stuff. Is
> there anybody or do you know anybody who is using the old BSD lpr/lpd
> stuff?

Well, not me. But what puzzles me are the popcon numbers:
lpr has 755, lprng 233.

Assuming most of these installations were not done deliberately but
are rather by-catch, or: caused by some package that eventually draws
them in, via a dependency that lists "lpr" (or "lprng") in the first
place instead of "cups-bsd | lpr". For lpr, that might be xpaint. For
lprng, I have no idea. And there's little chance of knowing.

Christoph




Re: [idea]: Switch default compression from "xz" to "zstd" for .deb packages

2023-09-17 Thread Christoph Biedl
Stephan Verbücheln wrote...

> If you want to open that debate (again?), one should probably switch to
> lzip. It uses the same LZMA compression like xz, but has a way more
> sane file format.

Besides the fact that dpkg already has zstd support while lzip support
is missing, so that would be a much bigger change: In case you've
missed it, lzip is not a minefield in Debian, it's completely burnt
ground. It's better never to mention it.

Christoph





Re: Bug#1052421: ITP: control -- Python Control Systems Library

2023-09-21 Thread Christoph Biedl
Kurva Prashanth wrote...

> * Package name: control
>   Version : 0.9.4
>   Upstream Author :  >
> * URL : http://python-control.org/

While I cannot judge whether this package is a sensible addition to
Debian, I strongly ask you to reconsider the package name, as
"control" can apply to many different areas and is therefore not
helpful when trying to figure out whether the package is useful in a
particular situation. Also, as there's a debian/control file in each
source package, this will create some confusion and possibly even lead
to users asking you for help with their packaging.

Just from the above website, perhaps something like
python-feedback-control-systems or a bit shorter variant would be more
appropriate. I might be wrong.

Christoph




Re: lpr/lpd

2023-09-22 Thread Christoph Biedl
Russ Allbery wrote...

> Since I wrote my original message, I noticed that rlpr is orphaned.

If only rlpr were the only one :-|

When looking into the reverse dependencies of lpr/lprng at the
beginning of this thread, I found several orphaned packages, some
already for more than ten years. To name a few:

* lprng
* apsfilter(D)
* e2ps(R)
* ifhp(R)
* magicfilter(R)
* tk-brief(R)
* trueprint(R)
* xhtml2ps(S)

Plus those NMU-only-maintained packages where the maintainer is probably
MIA.

That's not surprising: lpr is an old technology, it may be simple but it
has quirks. People moved on, and if they cared a little, they let go.

(...)

> If anyone else who still prints
> regularly prefers the simple command-line interface, you may want to
> consider adopting it, although it looks like you're likely to have to
> adopt upstream as well since it seems to have disappeared.

Dead upstream applies as well to most of the packages listed above. And
that brings me to another, bigger question:

Do we provide our users with a good service if we keep such zombies
alive for such a long time?

What I'm trying to say: All the maintenance that happens to such
packages is on an emergency basis. If some changes (policy, debhelper,
stricter gcc checks, etc.) trigger RC bugs, someone might do the
necessary adjustments, also because it's often low-hanging fruit. But
regular care does not happen, and after a few years it shows.

Plus, most of that code is in C, and I take the liberty of stating
that they all have security issues. They are just unknown, because no
one has bothered to take a closer look. But these bugs exist simply
because twenty or more years ago, secure programming wasn't that
common.


When kicking isdnutils out of Debian (since the Linux kernel had
dropped the support) I coined the phrase that Debian is not a museum.
The lpr/lprng area feels like one.

On the other hand, becoming museum-like is a natural result of how we
do things in Debian: Packages are kept alive as long as one single
person cares, while their interest might rather be in eliminating RC
bugs than in the actual functionality. And proposing a cleanup like
the one in this thread reliably triggers negative reactions from a few
people who want to keep things.

Without an external limitation (mostly specific hardware) it's hard to
draw a line for when it's obviously time to remove near-dead packages,
at least from the stable releases. I don't have a good idea what to do
here either. I doubt simple rules will really work out, rules like the
one I had in mind: "Packages are removed from testing once they have
been orphaned / last maintainer-uploaded more than five years ago."
Please don't promote that, it's obviously flawed. But I'm left with a
bad feeling about how things currently are.

Chri- "Might do an abandonware BoF at next DebConf" stoph




Re: Reaction to potential PGP schism

2023-12-21 Thread Christoph Biedl
Daniel Kahn Gillmor wrote...

(...)

Thanks for your exhaustive description. I'd just like to pick up on
one point:

> In practice, i think it makes the most sense to engage with
> well-documented, community-reviewed, interoperably-tested standards, and
> the implementations that try to follow them.  From my vantage point,
> that looks like the OpenPGP projects that have continued to actively
> engage in the IETF process, and have put in work to improve their
> interoperability on the most sophisticated suite of OpenPGP tests that
> we have (https://tests.sequoia-pgp.org/, maintained by the Sequoia
> project for the community's benefit).  Projects that work in that way
> are also likely to benefit from smoother upgrades to upcoming work in
> the IETF like post-quantum cryptographic schemes:
>
> https://datatracker.ietf.org/doc/draft-wussler-openpgp-pqc/

There was a presentation at the recent MiniDebconf in Cambridge about
post-quantum cryptography, including the consequences for Debian (that
was by Andy Simpkins):

https://wiki.debian.org/DebianEvents/gb/2023/MiniDebConfCambridge/Simpkins

The key point AIUI is that Debian must take precautions *very* *soon*,
as there's a realistic chance QC will - within the lifetime of trixie
- evolve to a point where it seriously weakens cryptographic security
as we know it. In other words, Debian must prepare for PQC within the
trixie development cycle, i.e. within 2024.

Therefore, my answer to "How can Debian deal with this [schism]?" is
basically: Debian needs to change things in that area anyway, so let's
first find an implementation that provides what we need and is sanely
implemented. If that means turning away from GnuPG, so be it. The
transition will be painful anyway.

Christoph




Bug#1063829: ITP: tftp-proxy -- proxy to redirect TFTP requests to HTTP

2024-02-12 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 
X-Debbugs-Cc: debian-devel@lists.debian.org, debian.a...@manchmal.in-ulm.de

* Package name: tftp-proxy
  Version : 1.0.0
  Upstream Contact: Arnoud Vermeer 
* URL : https://github.com/openfibernet/tftp-proxy
* License : Apache-2.0
  Programming Lang: Go
  Description : A TFTP server that proxies requests to an HTTP
backend if a file is not found.

 This program is basically a minimalistic TFTP server. As an extra,
 however, it will forward requests that cannot be served to a
 configurable HTTP backend.
 .
 This is useful in a network where the actual TFTP server is
 relatively far away: Due to the simple design of TFTP, as little as
 40 ms of latency already results in very poor performance; tftp-proxy
 can shortcut that to network speed.
 .
 Additionally, the requests may be directed to a caching HTTP server.





Re: archive.debian.org mirrors

2024-04-30 Thread Christoph Biedl
Johannes Schauer Marin Rodrigues wrote...

> speaking of mirroring problematic debian.org services [1] by adding more 
> copies
> of terabytes of data [2]: is there an update of the situation regarding
> snapshot.d.o? I do not see any activity in bugs like #1050815 and #1029744. 
> And
> bug #1031628 was just closed as wont-fix.

About debian-ports, see the notes of the last meeting:
https://lists.debian.org/msgid-search/87msq2ns8c@nordberg.se
So the imports will be resumed after some hardware upgrades, sometime
in the second half of 2024.

As mentioned in #1060922, I am collecting the debian-ports mirror four
times a day, and this ought to be merged into the regular snapshots
after that date. So the end of July 2023 until the end of January 2024
is likely lost, unfortunately; everything after that should become
visible some day.

Christoph






dpkg-source reproducibility

2020-12-06 Thread Christoph Biedl
Hello,

over all the years I had assumed the -x and -b operations of
dpkg-source are inverses of each other. In other words, I expected to
be able to rely on the following:

Running "dpkg-source -x" on a .dsc, and then "dpkg-source -b" on
the unpacked tree, re-creates the initial .dsc file.

A bitwise identical result would certainly be nice to have, but I
consider it sufficient if the resulting .dsc, unpacked again, results
in a file tree with an identical file list and content¹.


Now I came across a (native) package that fails that rule - after
running "dpkg-source -b", the top-level .gitignore file was missing.
While I could work around this, it felt wrong. Therefore I'd like to
understand whether the above assumption is simply not the right one,
whether the source package has been created in an interesting way
(alternative implementations of "dpkg-source -b"?), whether this is
acceptable, and how to sanely deal with it.

And I wouldn't care if it hadn't been an impediment when preparing an
NMU for that package, since debdiff showed the top-level .gitignore
was removed - something that certainly should not happen in an NMU.

Christoph

¹ File permissions are another story.




Bug#977536: tang: provide the nagios plugin as Debian Package

2020-12-16 Thread Christoph Biedl
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: nagios-tang
  Version : 7
  Upstream Author : Nathaniel McCallum 
* URL : https://github.com/latchset/nagios-tang/
* License : GPL-3+
  Programming Lang: C
  Description : A Nagios plugin to check the health of the Tang server


-Description: monitoring plugin to check the tang server
- This package provides a plugin to monitor a tang server, a service for
- binding cryptographic keys to network presence.


The nagios check for a tang service was part of the tang package in
earlier times¹, but later upstream moved it into a separate package.
At the request of a user, I wish to bring it back to Debian.

Cheers,

Christoph

¹ http://snapshot.debian.org/binary/tang-nagios/




Re: move to merged-usr-only?

2020-12-29 Thread Christoph Biedl
Andreas Metzler wrote...

> I am all for declaring a cutoff date (and release) which makes usrmerge
> mandatory and the only supported setup. (Or alternatively make usrmerge
> an unsupported setup.) The current state where we are trying to support
> both and end up dealing with fallout and cannot make real progress to
> the clean state (bash being shipped as /usr/bin/bash and therefore dpkg
> knowing about it) is a waste of developer time.

Indeed. I just lost two hours realizing that the build failures of one
of my packages resulted from upstream having a post-usrmerge layout in
mind while the buildds are apparently pre-usrmerge - and with my local
build chroots already migrated, I hadn't noticed beforehand. Now I can
only hope the resulting binaries will run everywhere. The sooner we
get out of this mess, the better.

Christoph




Possible breakage due to new http-parser library in unstable and testing, later in stable

2020-12-29 Thread Christoph Biedl
Hello,

the http-parser library was updated from 2.9.2 to 2.9.4 in unstable
and testing; the only upstream change worth mentioning is a protection
against "request smuggling", implementing a rather restrictive reading
of RFC 7230. The issue is also known as CVE-2019-15605.

As a result, applications using that library may experience errors in
situations that worked in the past. The reverse dependencies in Debian
passed a rebuild, with ruby-http-parser.rb as the exception (already
fixed via an NMU). Beyond that, there was no way of testing, hence
this heads-up.

After some settling I plan to address the issue in Debian 10
(stable/"buster") as well, foreseeably with the same effects. If you
think this will break things in an unacceptable way, let me know.

As an aside, http-parser upstream is dead. Debian 11 ("bullseye") will
still ship the package, but I'll try to have it removed before 12. If
anyone wishes to package the designated successor "llhttp", that would
make quite a few people happy. The RFP is #977716.

Christoph




Re: Bug#981113: ITP: root -- open-source data analysis framework

2021-01-26 Thread Christoph Biedl
Julien Cristau wrote...

> On Tue, Jan 26, 2021 at 04:34:14PM +0100, Stephan Lachnit wrote:
> > * Package name: root
> [...]
> > 
> > I want to maintain ROOT in the science team. ROOT was already in Debian as
> > `root-system` [1], but hasn't been updated since 2015.
> > I will probably go with a more easy maintainable route like I did with 
> > Geant4
> > for the start and do package splitting later.
> > 
> > [1] https://tracker.debian.org/pkg/root-system
> > 
> Please re-use the old name.  "root" is a terrible choice of package name.

At the very least. Even "root-system" is not very distinct; I'd rather
choose something like "root-analysis-framework", assuming that name is
a good description of what the package does.

Christoph




Questioning debian/upstream/signing-key.asc

2021-03-26 Thread Christoph Biedl
Hello,

a few days ago, I ran uscan on a package where I knew there was a new
upstream version - just to encounter a validation error since the keys
in debian/upstream/signing-key.asc had expired.

After that, things escalated a little, and eventually I wrote a script
that downloads d/u/s-k for each source package and examines the
expiration status. And I ended up with:


                                       main   contrib

  Total number of source packages    32761       161

  Source packages that ship
  d/u/s-k                             2157         8

  Of those:

    Failed to parse the file             3         0

    Some keys have expired             306         1

    All keys have expired              469         2


About another 40 distinct keys will expire within the next three months.

So for more than 20 percent of the packages with d/u/s-k, it's
impossible to verify a new upstream tarball without extra work. Ouch.
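
The per-package check itself is straightforward. A minimal sketch of
my own (assuming the key file has already been fetched, and a gpg
recent enough to know --show-keys):

    #!/usr/bin/perl
    # Report expired keys in a debian/upstream/signing-key.asc
    use strict;
    use warnings;

    my $keyfile = shift // 'debian/upstream/signing-key.asc';
    open(my $gpg, '-|', 'gpg', '--show-keys', '--with-colons', $keyfile)
        or die "cannot run gpg: $!";

    while (<$gpg>) {
        my @f = split /:/;
        next unless $f[0] eq 'pub';
        # field 7 of a pub record holds the expiration date (epoch),
        # it is empty for keys that do not expire
        if (defined $f[6] && $f[6] =~ /^\d+$/ && $f[6] < time) {
            print "key $f[4] expired on " . scalar gmtime($f[6]) . "\n";
        }
    }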

Of course I understand there are various reasons why this happens, and
several are not the maintainer's fault. But at least in some cases
it's obvious the maintainers didn't care: when there has been an
upload of a new upstream version that was released after the
expiration. This has happened; hopefully they verified the tarball by
other means.

So, where to go from here?

The obvious thing to do now would be a mass bug filing, and I might
do that.

However, I am uncertain whether it is really worth the effort to
maintain d/u/s-k or, more precisely, to ping maintainers to do so.
Personally, I really like it when uscan also validates signatures. But
it seems that enthusiasm isn't quite shared among all contributors.

Christoph

PS: To those who want to argue lintian should check for such expired
keys: I couldn't agree more. Please read the discussion in #985793
first.




Re: Questioning debian/upstream/signing-key.asc

2021-03-26 Thread Christoph Biedl
Christoph Biedl wrote...

> PS: Those who want to argue lintian should for check for such expired
> key, I couldn't agree more. Please read the discussion in #985793 first.

Sorry, that should have been #964971.





Re: Questioning debian/upstream/signing-key.asc

2021-03-26 Thread Christoph Biedl
Russ Allbery wrote...

> I think there's a bit of subtlety here in that if upstream uses a key with
> an expiration that they periodically extend (to provide a time-based
> cut-off if they lose control of the key for whatever reason, for
> instance), and that package is rarely updated because it's stable, it's
> quite likely that the key will have expired but I'm not sure that's a
> problem.

That's indeed a design weakness of d/u/s-k. But the way upstream handles
signing key expiration and renewal is naturally completely out of
Debian's control.

There is actually another issue besides expiration: Upstream might
have more than just one signing key, and that list changes.

In an ideal world, upstream would, at the time of a release, make sure
the signing key will stay valid beyond the time of the prospective next
one (already partially defeating the idea of a time-based cut-off).
Additionally, upstream will have to take care that the next release will
be verifiable using the current keys, i.e. not use a new key to sign the
next release. Sounds optimistic, eh? Then the Debian maintainer has to
update d/u/s-k accordingly - which seldom happens, hence this thread,
and by the way I've failed on that as well. Then, and only then, can
uscan verify a new upstream release.

So while this works in general, it has some potential for failure, and
this happens. My suggested MBF however would just address maintainers'
negligence; I'm not sure whether it's sufficient and hence worth it.

> I'm not all that familiar with the intended semantics of OpenPGP key
> expirations, but intuitively I think a signature made before the
> expiration should be considered valid, even if the key has now expired and
> thus shouldn't be used to make new signatures.

Agreed for the case at hand, where content and signature have been
published at some time in the past, so falsifying ex post is quite
difficult.

The idea of d/u/s-k however is to validate a future release, assuming
the signing key stays the same and does not need a renewal otherwise.

Repeating myself: I like the idea that uscan can verify a new upstream
release and would really like to keep that. But the longer I look at it,
the more I think it would be better to find a solution that overcomes
the deficiencies of the current concept. I have some ideas but I doubt
they could work out.

> I'm curious how your numbers would change if you only counted as expired
> keys that were expired at the time that the upstream tarball signature was
> made.

Upstream shouldn't be able to create such signatures at all.

A quick check however revealed it's very common that the key(s) in
d/u/s-k had expired by the time the last upload to Debian was made. In
an ideal world, upstream would already have refreshed the key, and the
Debian maintainer updated d/u/s-k accordingly - which is the reason why
I'd like to see a lintian check for expired keys.

Christoph




Re: ARM architectures

2021-06-05 Thread Christoph Biedl
Marc Haber wrote...

> I'd still consider the Raspberry Pi. It's unfortunate that the binary
> non-free blob is already needed to boot the box even if one doesn't
> need/use the GPU after booting, but it is reasonably common that
> people care about their software on the platform, and it's also
> affordable and has versions with enough RAM available.

Another point here: Since so many people use it, there's a good chance
even seldom-occurring hardware flaws will be found and eventually worked
around. An effect we saw some two decades ago with the cheap-but-
horrible rtl8139 network adapters.

For me, the biggest downside of the RPi4 is the need for an extra power
plug as they take up to three amps - while for example a BananaPi can be
powered using some unused USB (<= 3.0) port.

> Many of the options pointed out by Siji Sunny are a decade old and
> therfore do not fill your with for a "modern" platform. I am currently
> in the process of fading out the Banana Pis because the platform has
> never really taken off and a dual core 32 bit CPU with 1 GB RAM is
> running out of fun these days.

Depends on the planned use and the budget. I am slowly fading out the
DockStars, which are really ten years old now - but mostly due to the
memory limit (128 Mbyte) and the architecture (armel), for which I expect
Debian will end support in a not-too-distant future. Otherwise they can
route 16 MBit/s without any problem.

For CPU-hungry tasks (package building) I've switched from a CubieTruck
to an RPi4 a few months ago and the performance boost was mindblowing.
(Aside, has anybody noticed aptitude from jessie and buster in an armel
chroot dies with SIGILL sometimes when running on the arm64 CPU of an
RPI4?)

So if you (OP) basically want to build a console server - which is
how I read your question - almost any board will do. With the year
2038 in mind (no kidding, this will arrive faster than you think), I'd
advise against armel and the pseudo-armhf used in the first two (or
three) RPi generations. For arm64 we can expect to have support after
that date, for armhf there's at least hope.
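
If you want to put a date on that deadline, for a signed 32-bit time_t:

  date -u -d @2147483647
  # Tue Jan 19 03:14:07 UTC 2038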

Christoph




Re: ARM architectures

2021-06-06 Thread Christoph Biedl
Marc Haber wrote...

> On Sat, 5 Jun 2021 10:50:20 +0200, Christoph Biedl
>  wrote:

> >For me, the biggest downside of the RPi4 is the need for an extra power
> >plug as they take up to three amps - while for example a BananaPi can be
> >powered using some unused USB (<= 3.0) port.
>
> Mine run via PoE very nicely. The hat kind of destroys the form factor
> though and it runs warm.

Good to know, but not always an option in my situations.

> >For CPU-hungry tasks (package building) I've switched from a CubieTruck
> >to an RPi4 a few months ago and the performance boost was mindblowing.
>
> My Raspi Kernels are being cross-built. Wasn't it you who taught me
> to? I haven't advanced to package cross-building yet.

Possibly, and I still cross-build the kernels today, including doing nasty
things like

dh_gencontrol -DArchitecture=armel (...)

that might make some people kick me off the Debian boat. Luckily, none
of this is public.

For any other package (personal, private backports, modified packages)
I switched to native building a long time ago. At least back then, using
the qemu-user-static binaries caused a lot of trouble, especially if
threading was involved. Also the emulation overhead resulted in build
times close to doing it on (slower) native hardware. I guess the
situation has improved in the past years, at least mixing endiannesses
should no longer trigger instant segfaults from ld (never checked), but
I've lost trust in the entire idea.

Christoph




Bug#1003222: RFH: gkrellm-radio -- FM radio tuner for GKrellM

2022-01-06 Thread Christoph Biedl
Package: wnpp
Severity: normal
X-Debbugs-Cc: debian-devel@lists.debian.org
Control: affects -1 src:gkrellm-radio

[ Submitter of #440352 in Bcc: ]

Greetings,

in order to keep the package gkrellm-radio in good shape, I've taken
over maintainership. However, I don't have a device such as an FM
receiver (yet), so I cannot test whether things work as intended.

Therefore, if someone can give this package a spin, I'd be glad. If they
are willing to do this every time I do a new upload, even better. About
the latter, I don't expect there will be more than one per year, so it
shouldn't be a big burden.

Christoph




Re: Bug#1014908: ITP: gender-guesser -- Guess the gender from first name

2022-07-16 Thread Christoph Biedl
Steve McIntyre wrote...

> IMHO there are 2 points to an ITP:
>
>  * to save effort in case two people might be working on the same
>package

And having such a lock is a good thing.

>  * to invite discussion on debian-devel / elsewhere

Which might include reactions like:

* There is already a package that does the same thing. Do we really need
  a duplicate?
* It's already packaged, possibly under an obscure name or within
  another package.
* There are issues with the license, the package description and
  similar things.

> If people post an ITP and upload iummediately, then I don't think that
> helps on either count.

Indeed. And if it's about "Getting through NEW takes so much time", then
giving it a few more days instead of increasing the risk of a REJECT is
the better trade-off anyway.

> How do others feel?

To me, uploading immediately after the ITP signals "I am 100% sure this
package and my packaging will certainly not create objections of any
kind". Which I find somewhere between highly optimistic and plain
arrogant.

So, in my opinion, as a rule of thumb, leave a gap of three days
between these two actions.

Christoph




schroot: Stricter rule for chroot names

2022-08-12 Thread Christoph Biedl
Hello debian-devel,

a new version of schroot (1.6.12-2) will hit unstable in a few hours.
As the only but major change, the rule about allowed characters in a
chroot (or session) name got a lot stricter to avoid some harm that can
be done. If you're using names with various punctuation or 8bit
characters, the upgrade will croak in preinst. This hard break is the
lesser evil since these chroots and sessions will be invisible to the
new schroot binary.

Consider putting schroot on hold if you need some time to resolve this.


In the future, names must match the following regular expression:

^[a-zA-Z0-9][a-zA-Z0-9_.@%+-]*$

Or: Letters and digits are always okay, and in all but the first place,
the characters dot ('.'), dash ('-'), underscore ('_'), at ('@'),
percent ('%'), or plus sign ('+') are allowed, too.

Therefore, the usual convention of grouping words with dashes like for
example "buster-backports-source" will just continue to work.


Sorry for any inconvenience caused by this - given the circumstances it
seems any other approach would either require a lot of work to implement
or leave some holes open that would require another urgent upload.

Special thanks to Julian Gilbey  who reported the
underlying issue and helped discuss a fix.

Christoph




Re: schroot: Stricter rule for chroot names

2022-08-12 Thread Christoph Biedl
Christoph Biedl wrote...

> In the future, names must match the following regular expression:
> 
> ^[a-zA-Z0-9][a-zA-Z0-9_.@%+-]*$

Some last-minute inspection revealed [@%+] can *not* be considered safe,
so they'll have to be disallowed as well, at least for the time being.

In other words, if

schroot --list --all | LC_ALL=C grep -vE '^[a-z]+:[a-zA-Z0-9][a-zA-Z0-9_.-]*$'

lists anything, you'll be in trouble.

Christoph




Debian Bug Squashing Party (BSP) in Karlsruhe, Germany 14th-16th October 2022 (Update, all green)

2022-09-23 Thread Christoph Biedl
Hello,

some updates about the Debian Bug Squashing Party in a few weeks:

First, and most important, the event is still on. Please
confirm your attendance in the Wiki page[1], or ping me in IRC. That
page has also all the information about precise times, getting
there, and the like (no changes in that regard).

Our host and sponsor Unicom GmbH provides a second meeting room, so
there are a few more places available. That room could also be used
for an ad-hoc sprint.

About the pandemic situation: While (as of today) you're not required
to do so, you're asked to take some precautions. This includes wearing
a mask indoors, doing a test, and most important: stay home in case of
symptoms. And, as stated earlier, if the legal situation forbids or
common sense advises against doing such an event, it will not happen.

See you soon,

Christoph

[1] https://wiki.debian.org/BSP/2022/10/de/Karlsruhe

PS: If you want to contact me in private, Deutsch geht auch :)




Configuration files, local changes, and "managed section" markers

2023-02-14 Thread Christoph Biedl
Hello,

these days, I found a package in Debian (four-digit popcon count) that
in an upgrade happily removed some changes I had made to a
configuration file of that package, in /etc/.

My immediate reaction was to consider this a gross violation of the
Debian Policy (10.7.3 "Behaviour"). Upon closer inspection however I
found there are markers in that file that define a "managed section", an
area where the upgrade process wants to do things - local modifications
ought to be placed outside of that and will not be harmed then. FWIW,
this functionality was implemented upstream.

So I'm a bit undecided here how to proceed:

Either we understand the policy literally - then the maintainer will see
an RC bug that will require some work to fix.

Or we adopt a pragmatic approach since an administrator can still modify
that file without losing these changes, although not in every place. It
ought to be possible to revert any of the lines in the managed section
if they are undesired. The administrator will however have to respect
the markers. To be honest, I failed to see them, which alone
might be a reason to prefer the first, strict approach.
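
For readers who haven't met the pattern: such markers typically look
roughly like the following (wording invented here for illustration; the
package in question uses its own phrasing):

  # BEGIN managed section - written by the package, do not edit
  option_a=1
  option_b=2
  # END managed section
  # local additions below this line survive upgrades
  option_c=3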

Thoughts?





Re: Configuration files, local changes, and "managed section" markers

2023-02-15 Thread Christoph Biedl
Marc Haber wrote...

> The "split it" approach is something that comes naturally to someone
> who has been heavily socialized in the Debian Universe because we
> handle conffiles on a file level. It feels unnatural and clumsy for
> someone who is not familiar with the deep historic reasons for us to
> do it like that.

Yeah, "socialized in the Debian Universe" applies to me - no doubt a
split (or "conf.d") is the right thing to do in a package-based
distribution, for packages that provide a service for other package.
Perhaps logrotate is a good example for that.

As you mentioned, there are limits - when the configuration file format
is complex and cannot be easily combined from various sources. And a
few other situations as well.


Anyway, my initial question was of a different kind: Is the situation
I've found worth a grave bug report? I just don't want to put
pressure on the maintainer if there is rough consensus that such
"managed section" markers are good enough.

Fixing this then is none of my business. Still, using ucf instead
of just writing the modified file should be enough.

Christoph




Introducing Declarative Debian

2023-04-01 Thread Christoph Biedl
Heya,

perhaps this happens more often than one might think: Huge changes
come in silence and small steps, and it takes a while to realize
something big is happening. And it seems we're just witnessing another
story of that kind.

It's about the naming of Debian packages, more precisely: about making a
statement in that very name. Perhaps not the first of that kind, but
certainly one that got broader attention was "python-is-python3". Since
then, there have been more, for example "borgbackup-is-borgbackup2" and
"usr-is-merged". It should however be noted there are several packages
named "node-is-*" - that's just a coincidence, caused by the function
these packages provide.

Still it's about time to make this a formal concept, something I call
"Declarative Debian". Instead of dealing with all the difficulties about
packaging, licenses, problematic upstreams, endless discussions about
the right way to do things - let's just declare what we want to achieve,
and we're done.

Examples that come to mind include names like

* everything-is-fine
* systemd-is-the-salvation
* freedom-is-slavery
* sid-is-stable

But perhaps these goals are a bit too big for the moment. So at first
I'd like to gather more input on this and would appreciate suggestions
where to head for next. In the quest for final truth.

Christoph





Re: IPv6-only buildds and AI_ADDRCONFIG

2020-07-25 Thread Christoph Biedl
Niko Tyni wrote...

> somewhat recently new official buildds were added with IPv6-only
> connectivity. I'm not aware of an announcement, but I noticed this
> with #962019, where src:perl suddenly failed to build due to test suite
> failures.

Thanks for catching this - I had seen one of my packages build-failing
on some archs due to this, but had no idea what was going on.

Quite frankly, I'm somewhat uneasy about introducing such a major change
in the buildd infrastructure without some testing and an MBF in advance -
which is what you did then.

>  # unshare -n
>  # ip li set lo up
>  # ip dev add dummy0 type dummy

FWIW, "ip *link* add dummy0 type dummy" was needed with my version of
iproute.

>  # ip li set dummy0 up
>
> FWIW, while the amount of breakage does not seem massive, I think the
> numbers indicate that it might be good at least not to build the stable
> suites on these IPv6-only buildds. I'm not sure what the plan is there.

Also I am somewhat concerned what will happen when the use of the
AI_ADDRCONFIG hint flag is removed. But I will have to find out.

Christoph




Re: UID and GID generation

2016-08-14 Thread Christoph Biedl
Martin Bammer wrote...

> I've got an issue with the generation of UIDs and GIDs when new
> users are added. By default UIDs and GIDs for users and user groups
> are values starting from 1000 (on Red Hat from 500). When a user is
> added the next free value is chosen.

Yes, also NFS has a problem here unless you use some additional ID
mapping.

Similarly, system user IDs: If you want to migrate to a new installation
but there are a lot of files that should be preserved, think
/var/lib/munin/.

For all such situations a workaround exists. Still I've been wondering
for years why apparently nobody else considers this a problem. So I
patched adduser to determine the user (also: group) ID from a static
"account name"<->"ID" mapping. It went into the BTS some eight years
ago, and I still use an updated version today. Migration of existing
installations was painful but worth it, YMMV.
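
For the record, the idea is no more than this (a sketch, not my actual
patch; the map file name and format are made up here):

  # /etc/adduser/static-ids - one "name id" pair per line, e.g.
  #   munin 115
  #   alice 1050
  name="$1"
  uid=$(awk -v n="$name" '$1 == n { print $2 }' /etc/adduser/static-ids)
  [ -n "$uid" ] && adduser --disabled-password --gecos '' --uid "$uid" "$name"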

> So my suggestion would be to change the default behavior of UID and
> GID generation to hash value calculation. Has values are computed by
> the user and group names as 32bit values on Debian (31bit on Red
> Hat). The minimum and maximum values should be configurable.

Given Murphy and the birthday paradox, this will bite you much sooner
than you'd expect.

> IMHO the current implementation is a design bug which must be fixed.

I wouldn't use the b word here. The implementation is simple but
introduces problems once you have more than one machine.

Christoph




Re: UID and GID generation

2016-08-14 Thread Christoph Biedl
Christoph Biedl wrote...

> So I patched adduser to determine the user (also: group) ID from a
> static "account name"<->"ID" mapping. It went into the BTS some eight
> years ago,

FTR: #243929




Re: Network access during build

2016-09-06 Thread Christoph Biedl
Vincent Bernat wrote...

> One of the package that I maintain (python-asyncssh) makes a DNS request
> during build and expects it to fail. Since Policy 4.9 forbids network
> access (in a rather confusing wording "may not"), I got this serious
> bug:
>  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830568

This has been my constant fear since the first day I learned about this
policy. While I consider the change the right thing, I'm somewhat
concerned the wording leads to requirements that neither were intended
nor are necessary to reach the goal that I consider the idea behind
it: The behaviour of any network activity must not affect the result
of the build. Where behaviour includes unavailability, and completely
unexpected behaviour like providing bogus data for any kind of
request. The easiest way to enforce this is to disallow network
traffic altogether.

Now the funny question: Does traffic on the loopback interface count
as network access? A daemon started during build to run tests is
certainly okay. What about traffic to other daemons, most prominently
named? Running "hostname --fqdn" unless this is handled by /etc/hosts
already? Also, I remember a certain package (name withheld) did a
*lot* of DNS traffic in the test suite, and so far nobody has shown
concerns about this.
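
To make the distinction concrete: a "no outside network, loopback only"
build can be tried roughly like this (just a sketch using an
unprivileged user plus network namespace - not how the buildds do it):

  unshare -rn sh -c 'ip link set lo up; exec dpkg-buildpackage -us -uc'

Inside that namespace any DNS query to an outside resolver fails, while
daemons bound to 127.0.0.1 keep working.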

> However, I have a hard time to find this useful for anyone. To sum up:
> 
>  - patching the test suite requires maintaining the patch forever

This is one of the maintainer's chores. Many packages have patches
that will never go upstream and hence need a refresh at every new
release. Disabling a test should be a one-line patch that applies to
new versions easily. If upstream constantly reorganizes the sources,
it'll become somewhat annoying but still feasible.

>  - both pbuilder and sbuild are using an isolated network namespace

Network isolation doesn't look like the right answer to the problem.
Also, I'd be glad if any manual dpkg-buildpackage invocation would
result in the same build behaviour.

>  - package builds reproducibly with or without network access

If I understood correctly the build will fail if the resolver does not
give a negative reply. Then your assumption isn't true.

> I have the impression that enforcing every word of the policy in the
> hard sense can bring endless serious bugs. This particular occurrence
> affected about 70 packages. I appear as a bad maintainer because I don't
> feel this is an important bug.

Again, I consider the change the right thing, mostly for the reason of
reproducibility. The wording could use some clarification though.

Christoph




Re: Porter roll call for Debian Stretch

2016-09-25 Thread Christoph Biedl
John Paul Adrian Glaubitz wrote...

> On 09/20/2016 11:16 PM, Niels Thykier wrote:
> >- powerpc: No porter (RM blocker)
> 
> I'd be happy to pick up powerpc to keep it for Stretch. I'm already
> maintaining powerpcspe which is very similar to powerpc.

For somewhat personal reasons I'm interested in keeping powerpc in
stretch as well. I certainly cannot take the entire role as a porter,
especially since I don't know what amount of work this implies. But I
am willing to help.

There are two powerpc boxes in my collection, used regularly. One runs
on stable, the other on testing. I haven't done d-i tests but
certainly could do so.

Christoph





Re: More 5 november in the release schedule

2016-11-07 Thread Christoph Biedl
Ian Jackson wrote...

> There's still big spikes in work for our core teams around deadlines,
> so it's still best if people sort their stuff out earlier, but the new
> arrangements are a big improvement IMO.

ACK, and also looking at the way removals were handled in the past
months (like long grace periods and lots of warnings), I am very
confident the release team will handle stretch in a sane and also
sensible way.


If I understood some remarks in IRC correctly: Filing an RC bug after
the hard freeze may lead to immediate and thus irrevocable removal from
stretch[citation needed]. If this were true, a malicious attacker could
abuse this to kick out arbitrary packages through exaggerated bug
severities. I fail to believe this would really work.

Christoph




Re: More 5 november in the release schedule [and 1 more messages]

2016-11-09 Thread Christoph Biedl
Ian Jackson wrote...

> I think what is really worrying people is the fear that they might
> miss something, for good reasons, and then find that their work that
> they care about is thrown out of stretch.
>
> It is difficult to address this fear with logical arguments intended
> to demonstrate that "it won't happen to a responsible maintainer",
> because it is so easy to think of scenarios where, at the very least,
> it's hard to be sure that the right things would happen.

For me it's a bit different. If John S.(lacker) Maintainer ignored the
messages about the debhelper compat 4 removal for ten months and about
the openssl 1.1 transition for seven, and in January suddenly finds
his packages got kicked out and cannot return for stretch - he had it
coming.

If however Jane R.(esponsible) Maintainer did everything right but did
not realize somebody else's non-action affects her packages as well,
through a build dependency or whatever ... until the "Your package was
removed from testing" e-mail arrives: That's quite a nuisance.

So if I, in Jane's position, could be certain I'll learn about a
pending removal that affects my packages early enough that I can avoid
it (by kicking the maintainer or NMUing), my concerns would be
negligible. A grace period of just a few days would be sufficient. This
mechanism is implemented for installation dependencies, but after
reading this thread I'm not sure it exists for other scenarios as well.

> On the other hand, it would be really easy for the Release Team to
> address this fear.  All they have to say is that if there is a really
> good excuse (maintainer seriously ill; build-dependency broken and
> maintainer not notified; or whatever), they will be willing to
> consider exceptions.

I guess the Release Team plays tough in the first place so people do
their job *now* instead of asking for exceptions later. I'd call that
wise tactics. An exception still might happen if there's a really, really
good reason. But creating false hope sends the wrong signal.

Finally, there's a thing called "trust": I trust the Release Team does
this solely in order to keep the freeze time as short as possible,
everybody hates that time anyway. This trust was created by the very
people behind it, and the way they acted in the past months.

Christoph




Building architecture:all packages

2016-11-10 Thread Christoph Biedl
Hello,

it is the nature of an arch:all binary package that it can be installed
on any architecture, regardless of which architecture it has been built
on. Given this, I deduced I'm at liberty to choose on which architecture
I want to rebuild such a package, but I saw disagreement. So I'm asking
for clarification:

Given a simple source package "src:foo" that produces one arch:all
binary package "foo". The build needs the help of package "bar" which
happens to be arch:any, so technically speaking the build process isn't
identical while the result should be. (If you need an example, the
thousands of pure Perl library packages are of that kind.)

Now "bar" behaves different on different architectures, very likely
due to a bug, eventually resulting in a build failure on some while it
passes on other. More precisely, "bar" hasn't changed in years, the
problems began with a recent new upstream version of "foo". The
maintainer did the upload using a passing arch and probably never even
noticed.

Basically, there are two ways to judge this situation:

a) While "bar"'s behaviour certainly is questionable, this does not
   affect "foo" as long as it builds on at least one architecture (or a
   subset of the mostly used ones).

b) This is a serious issue as John D. Rebuilder should be free to choose
   on which architecture to build "src:foo".

Personally, I tend to b) since

* there is no sane way for the maintainer to tell the world which
  architecture should be used to rebuild this package. The .buildinfo
  file will solve this, still
* it is certainly rather unfriendly to expect John to have a box for
  that particular architecture just to be able to do the rebuilding.

On the other hand, option b) implies "src:foo" must build on *all*
architectures, and obviously such rebuild tests do not happen: Else the
build error I stumbled upon would have already been reported, and
hopefully fixed as well. In other words: This is a real story, I'm not
making it up. Also, the failing architecture isn't even an exotic one.

In my opinion "src:foo" should see a "serious" bug report FTFBS,
although it's just a victim of "bar". What about "bar"?

Other suggestions?

Christoph




Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-10 Thread Christoph Biedl
Henrique de Moraes Holschuh wrote...

> There are some relevant issues, here.
> 
> 1. It does protect against passive snooping *from non-skilled
> attackers*.

Well, yes and no. The tools keep getting better, so thinking a few years
into the future, sophisticated programs for that purpose might be
available to everyone. Remember there was a time before
wireshark/ethereal, and how much work pcap analysis was back then.

> 2. It is unknown how much it can protect against passive snooping from
> skilled attackers capable of passive TCP metadata slooping and basic
> traffic analysis *FOR* something like the Debian archive and APT doing
> an update run against the Debian archive

The logical answer is pretty obvious: Not at all. It's a question of the
effort required, and my gut feeling tells me it's not very much.

> Do not dismiss (2). TLS is not really designed to be able to fully
> protect object retrieval from a *fully known* *static* object store
> against traffic metadata analysis.   And an apt update run would be even
> worse to protect, as the attacker can [after a small time window from
> the mirror pulse] fully profile the more probable object combinations
> that would be retrieved depending on what version of Debian the user
> has.

Things are worse: There's a small set of clients, and their request
behaviour is quite deterministic. Another snooping aid is usage of
pdiff.

All in all, I would not be surprised if, just given the frame metadata
(direction, high-resolution timestamp, payload size), it were possible to
restore the actual data transmitted with high accuracy. Even a dget/apt-get
source should have a pretty unique pattern; and I feel tempted to create
a proof of concept for all this (I can resist, though). The apt programs
could obfuscate their request behaviour, the TLS layer could add random
padding of data and time, but I doubt this would help much.

Another "wasn't surprised", applicances might already have that. If not,
the vendors could implement this easily.

> Now, hopefully I got all of that wrong and someone will set me straight.
>  It would make me sleep better at night...

Sorry Dorothy.

Christoph




Re: Building architecture:all packages

2016-11-11 Thread Christoph Biedl
Nikolaus Rath wrote...

> Just fix it in "bar", and don't bother worrying about the right
> severity?

If it were that simple ... The maintainers of "bar" haven't reacted to
the bug report for a few months although it contains a clear statement
that the current state breaks the build of other packages. And in general
I include patches in my bug reports, but the "bar" package is rather
big, some 400 kLOC. I have a vague idea of what precisely is supposed
to happen. Also I already spent some time trying to find the change. The
last hours I've been playing with gcov to find differing code paths taken,
so far to no avail.

Still, I might get this sorted out. However, this incident raised a
second, more generic question: Is this an acceptable state? If John
wants to rebuild packages like "foo", he cannot simply use his
preferred architecture. The build may fail; then he has to start
trying other ones until he succeeds. I don't consider this a satisfying
situation. On the other hand I can understand that Debian maintainers
weren't very happy to learn a bug in the toolchain of a doorstopper
architecture causes FTBFS there for one of their packages, which might
even result in a removal from testing.

So I was looking for whether there is a consensus about this. If not, I
wouldn't mind if this ends in a guideline (it seems a bit too detailed
for Debian policy) about packages that solely build arch:all
binary packages. Like: They must be buildable at least on certain
architectures - I called them "mostly used architectures" in my first
mail. I would strongly suggest that list should not solely consist of
amd64.

Christoph




Re: Building architecture:all packages

2016-11-11 Thread Christoph Biedl
Paul Wise wrote...

> On Fri, Nov 11, 2016 at 7:32 AM, Christoph Biedl wrote:
> 
> > Other suggestions?
> 
> Include information about which packages/issues you are talking about.

How would this affect your assessment of the situation?

In my feeling, revealing the packages' names would give the story a
flavour of blaming. That's not my intention.

Christoph




Re: Building architecture:all packages

2016-11-12 Thread Christoph Biedl
Colin Watson wrote...

> On Thu, Nov 10, 2016 at 08:52:15PM -0800, Nikolaus Rath wrote:

> > That's a good theoretical argument. But in practice, I think the subset
> > of architectures for which bar works correctly will always include
> > amd64, and John D. Rebuilder will have access to such a box for sure.
> 
> We know this not to have been the case in the past.
> https://bugs.launchpad.net/launchpad/+bug/217427 mentions the cases of
> palo (hppa), openhackware (powerpc), and openbios-sparc (sparc).
> (People often suggest cross-compiling for this, and that can certainly
> be a good solution in some cases, but please bear in mind that in the
> general case that still only reduces the problem to "can only build on
> architectures where somebody's uploaded the necessary cross tools".)

That's a slightly different scenario since I'm only talking about
rebuilding arch:all packages, so there's no need for cross tools.

> There is currently one package in the Debian archive (pixfrogger) that
> declares "Build-Indep-Architecture: i386" in its .dsc because, even
> though it builds an architecture-independent binary package, building it
> requires a package that's only available on 32-bit architectures.

*That* is really helpful as it provides a generic solution for my
problem: The maintainer can provide an architecture hint for any
rebuilder. Is there a more formal specification around? I understand
from the comment in the diff[0] that the header may carry a list of
architectures, not just a single one. That's the right thing.

[0] 
http://bazaar.launchpad.net/~launchpad-pqm/launchpad/stable/revision/17338/lib/lp/soyuz/tests/test_build_set.py

(aside, codesearch revealed there is a second source package: Also
edk2 uses "XS-Build-Indep-Architecture". For amd64, though.)
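
For illustration, such a hint in debian/control could look like the
following (hypothetical source package; field name as used by Launchpad
and edk2, the XS- prefix makes dpkg copy the field into the .dsc):

  Source: foo
  XS-Build-Indep-Architecture: amd64

Whether a list of architectures is accepted there is exactly the part
that still needs the formalization mentioned above.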

> As I allude to in
> https://lists.debian.org/debian-devel/2016/11/msg00457.html, I think the
> best answer is for Debian's buildd infrastructure to follow through on
> implementing Build-Indep-Architecture.

Seems reasonable. My intention is rather to make life easier for
folks downstream, for things like rebuilds and backports.

As there's a workaround available now, I feel less reluctant to reveal
the real packages: "foo" is 3270font, "bar" is fontforge |
fontforge-nox. The bug report for the latter is #831425.

Christoph




Re: More 5 november in the release schedule [and 1 more messages]

2016-11-13 Thread Christoph Biedl
Marc Haber wrote...

> This is exactly the problem I have with the current policy: I fail to
> see why this measure will shorten the freeze.

I don't. But I'd say we'll just watch what's going to happen and resume
this discussion once stretch is released.

Chri- "somewhen December 2017" stoph




Unsatisfiable build-dependency in testing

2016-11-16 Thread Christoph Biedl
Hello,

two days ago, syslog-ng 3.8.1-5 migrated to testing. However, as this
package build-depends on libssl1.0-dev which is available in unstable
only at the moment, it cannot be rebuilt in testing.

Please note I'm only interested in understanding this from a general
point of view, not in the particular situation. That one will be
resolved in due course, I'm certain.


However: In my opinion, such a scenario is a "this should not
happen". And while packages do not migrate with "dependencies
unsatisfiable", apparently this does not apply to build
dependencies, at least in this case.

So, does anybody else consider this a problem? Did something go wrong
on britney (or whoever controls which packages are allowed to
migrate)? Or ...?

Christoph




Re: Unsatisfiable build-dependency in testing

2016-11-16 Thread Christoph Biedl
Adam D. Barratt wrote...

> britney has never considered build-dependencies (that's #145257). This
> is one of several reasons for periodic rebuilds of testing.

Oy. While this is rather unfortunate, I bet there's a reason why
this never got fixed in all the years.

Thanks for sharing the wisdom of the aged.

Christoph




Re: MIA maintainers and RC-buggy packages

2016-12-04 Thread Christoph Biedl
Emilio Pozuelo Monfort wrote...

> I would suggest to come up with some algorithm to determine if a package is
> effectively unmaintained, and implement it in an automatic way that gives
> maintainers prior notice and a chance to react, like we do with auto-removals.
> Then if nothing happens in a reasonable time frame, the package gets orphaned.

This seems very reasonable, go for it. Such an approach would also
ease QA uploads for packages and should overall improve package
quality.

It is quite disturbing to see how many packages dropped out of stretch
due to the three big migrations (gcc-6, openssl 1.1, debhelper), even
if fixing them would take less than an hour - in spite of all the work
the people behind these migrations did, including long grace periods.
So we have a lot of de-facto orphaned packages and it's about time to
make such a status official sooner.

FWIW, I am considering salvaging several packages that are at debhelper
compat level 4 or earlier. But I always wondered whether this wasn't a
good time to go beyond the bare necessities and do all the good things
in packaging that were introduced in the past ten years, like dh7-style
debian/rules, the 3.0 (quilt) source format, machine-readable
debian/copyright etc. - although this is rather QA work than an NMU.
But assuming the package is technically unmaintained, why not do work
that has to be done anyway and will help a future maintainer?

> I think we should also have an auto-removals-from-sid[1]
(...)
> [1] With a *very* conservative criteria. We don't want to remove a package 
> from
> the archive after 30 days because of 1 RC bug.

Not so sure about this.

There still might be folks around that find that particular package
useful for their work. In an ideal world they'd let us know. In real
life I've experienced they are not sure about the procedures and are
always happy to meet a Debian guy so they can talk about their
concerns. My usual answer "File a bug, it's just an e-mail, you won't
get eaten for small mistakes in form" sometimes worked; often it was
better I took care of that myself. So we (as in Debian) suffer from
the usual problem of not getting enough feedback, having to guess.
Therefore, in case of doubt, rather keep a package.

Also, resuming work on an existing though pretty broken package is a
lower bar than doing the re-entry procedure.

To add a few criteria, I'd remove a package from sid only if it

* is plain broken
  This means the code, not the packaging.
* (no longer) serves a reasonable purpose
  If a package is specific for a task or feature that is no longer
  supported (think set6x86 which was for CPUs that are now pretty
  old), there is no reason to keep it.
* no longer exists in older supported distributions
  Since users still might not learn their package is no longer part of
  the now-stable release until they upgrade.
* has been orphaned for a longer time, say: a year
  So again users of that package had a grace period to ask for work on
  that package.

Personally, I'd also appreciate notifying debian-devel about an
upcoming removal from sid, somewhat similar to ITPs, so there's a last
chance for any interested party to pick up the package.

And there's still the shortcut of filing an "RM:" bug.

Christoph




Re: MIA maintainers and RC-buggy packages

2016-12-04 Thread Christoph Biedl
Michael Meskes wrote...

> Sure, but then it would be nice if you'd tried finding out if nobody
> cares. As usual the real world is neither white not black, but some
> shade of gray. The problem with the package is that the new version
> does not build on my system but I lack the time to debug, could be
> minor but still.

Did you document your attempts so people willing to help have a point
to start from?

> If anyone wants to help, the package is tora.

At least the current serious bug #811663 was rather easy to handle.
You should have got mail.

Christoph




Re: Building architecture:all packages

2016-12-04 Thread Christoph Biedl
Adam Borowski wrote...

> I see two problems in that code:
> * it's Launchpad-specific
> * it supports only a single build-indep architecture rather than a list

Um, yes, perhaps, no. The important thing to me is there's already
something around that solves a problem I have. Of course this header
should see a formalization, and certainly this should be a list. I've
filed #846970.

> With my 3270font maintainer hat on: I wouldn't even know of this if not for
> the reproducible builds project, despite doing lots of rebuilds on various
> architectures (albeit not really for this package).  I guess there's a lot
> of other arch-dependant failures for arch:all packages.

Err, hi. I admit I deliberately did not notify you beforehand - mostly
to keep you out of some hassle you are not responsible for at all. Since
however even the recent fontforge upgrade did not change the situation,
I might ask you to include that header once the above process has come
to a result.

Regards,
Christoph




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-08 Thread Christoph Biedl
Roger Shimizu wrote...

> I'm ARM porter on armel/marvell (orion5x/kirkwood).
> Stretch will be frozen and released soon, which makes me bit depressed, 
> because it means armel will be dropped out of unstable/testing as the 
> conclusion of Cape Town BoF.

Same here. My Dockstars (orion5x/kirkwood) still work like a charm and
it gives a bad feeling having to trash them some day just because
there's no support any more.

On the other hand, they face another problem I guess is typical for
that generation, simply due to age: memory. For quite some time I
wanted to start a thread here about that, but there are too many other
things I have to take care of. Now however that the discussion has
started ... I would have written something like the following:


As far as I know, there is no such thing as "minimal hardware
requirement" for Debian besides some lines in the installer pages. For
RAM, the minimum value is 128 megabytes, and I think it's about time
to raise that. Yes, you can run Debian on that but it's not fun:

Locale generation needs a lot of RAM. You can work around it by
installing locales-all, which however takes a long time to install on
slow flash drives. Or disable locales entirely. Err.

Side effects of over-eagerly used xz compression. This is a relic of
a time when xz was pretty new and maintainers added the -9 compression
option, probably completely unaware this makes no sense for small
packages, unlike gzip or bzip2. Also, unpacking then unconditionally
allocates some 65 Mbyte of memory, easily driving a 128 Mbyte box
to the limits. One of the worst is the traceroute package that
actually carries some 110 kbyte; however I was unable to install it
due to the above problem.
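
A rough way to see the cost for a given package (sketch; the file name
is a placeholder, and this assumes an xz-compressed data member):

  ar p some-package_1.0-1_armel.deb data.tar.xz > /tmp/data.tar.xz
  xz --list -vv /tmp/data.tar.xz | grep -i 'memory needed'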

The amount of data apt deals with reflects more or less the size of
the Packages indexes. As they get bigger each release, this will
become a problem. Already now apt gets OOM-killed every now and then.


About xz, I considered doing an MBF (severity normal at most) asking
to end this; it's about 45 packages.

About apt I have an idea to provide "reduced" Packages indexes. They
would be smaller, so the memory usage would no longer be a problem, and
apt should become somewhat faster as well. But since it's virtually
impossible to create a complete subset, I'd declare incompleteness a
feature: If somebody manages to hit the limits by requesting
installation of packages that are merely referenced but not included -
tough luck, use the full index, please. But somebody would have to
promote and maintain that feature.
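
The filtering itself would be the easy part - something along these
lines (sketch only; grep-dctrl is in dctrl-tools, and the real work
would be in the mirror tooling and Release file handling around it):

  grep-dctrl -F Priority -e '^(required|important|standard)$' \
      Packages > Packages.reduced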


Another way to deal with this however would be to raise the minimum
memory requirements - to 256 MB, or perhaps even 512 MB. It feels wrong
since it works around a problem instead of solving it. But since all
components become greedier over time, this will have to be done
sooner or later anyway. And it will render armel de-facto obsolete.

Still I wouldn't mind armel in buster, perhaps restricted to armv5.
But I understand the odds aren't quite in favour.

Christoph




Re: future of Debian amavisd-milter package

2016-12-08 Thread Christoph Biedl
har...@a-little-linux-box.at wrote...

> While it would theoretically also be possible to file a RFA or O bug I
> currently do not consider this a good course of action: The number of QA
> packages is already very high and while adopting amavisd-milter would be
> not so much of a problem to maintain it build-depends on libmilter-dev
> which is part of the already orphaned sendmail package.

On the other hand: Being orphaned will keep the package in Debian, and
in case of emergency someone still will take care of it. And if anybody
wishes to take over later, it just needs an ITA.

> As I'm not subscribed to debian-devel please keep me in CC - without
> somebody stepping up I plan to remove the package in the near future to
> keep it from being released in Stretch.

Although I'm not a user of that package, I consider removing it from
stretch the wrong thing to do.

Christoph




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-10 Thread Christoph Biedl
Paul Wise wrote...

> On Fri, Dec 9, 2016 at 8:53 AM, Ben Hutchings wrote:
> 
> > Also, dedicated tiny flash partitions for the kernel and initrd.  I
> > wouldn't be surprised to be find that by the time we want to release
> > buster we can't build a useful kernel that fits into the 2 MB partition
> > that most of these devices seem to have.
> 
> Is it possible to put a bootloader like u-boot in the flash partitions
> and have it load the Linux kernel and initrd from elsewhere?

That's how I've been running my Dockstars through all the years. As
far as I know this worked with the Debian kernels as well (I use my
own kernels for reasons).

Christoph




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-10 Thread Christoph Biedl
W. Martin Borgert wrote...

> Quoting Ben Hutchings :
> >Also, dedicated tiny flash partitions for the kernel and initrd.  I
> >wouldn't be surprised to be find that by the time we want to release
> >buster we can't build a useful kernel that fits into the 2 MB partition
> >that most of these devices seem to have.
> 
> Non-HF devices can be very different, look at e.g. this ARM926EJ-S one:
> http://www.taskit.de/stamp9g20.html

Maximum RAM is 128 Mbytes. I wouldn't buy this to run Debian on.

> >As it is, stretch will be supported until 2020, maybe 2022 on armel.
> >Is it really worthwhile to add another 2 years to that?
> 
> This depends on the effort, of course. But for environmental reasons
> I'ld say the longer the better.

Certainly, with some limits though. At some point new hardware is so
much more energy efficient that the initial cost pays off over the
intended time of usage. Want my old P4 server?

Christoph




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-13 Thread Christoph Biedl
W. Martin Borgert wrote...

> The forementioned hardware needs < 0.5 W, the manufacturer even
> claims 0.18 W. AFAIK, most newer ARM boards that are capable to
> run Debian need more energy or am I wrong?

So let me play the devil's advocate another time: My Dockstar runs
24/7 and allegedly consumes 5 watts. Replacing it with a board that
takes a tenth of that, the yearly electricity bill will be some ten
euros less. Depending on the price of the replacement, that might be
worth a thought.

It certainly is if you're still running a WRT54G at some 15 watts
where a TP-Link 741 costs less than 20 euros and takes some two or
three watts.

> (Furthermore, any replacement of hardware has many environmental
> effects apart of energy consumption: Use of rare materials,
> production side effects, transport, waste problems, etc.)

The controlling department does not care beyond the bills. And there
might be transition costs as well (testing new hardware, deployment etc.).

Nevertheless, there is a point where supporting old hardware makes
little to no sense. Defining that point is hard and involves personal
preferences as well. Given the arguments in this thread I'm less sure
armel is already at that point, but that point will surely come.

On the other hand it hurts to see Debian will no longer support
hardware that is still being sold. For me, it's a deja-vu: ARMv4
boards (v4 as in: No thumb) were sold until at least 2009 in some NAS
boxes. When I started playing with it, arm(OABI) left the building.

Christoph




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-20 Thread Christoph Biedl
Lennart Sorensen wrote...

> I actually highly doubt there are that many armv7 boxes running armel.
> armhf was a nice performance improvement and worth the hassle to reinstall
> if you had such a box in the first place.  I think most armel systems
> are probably armv5, often the marvell chips.  Not sure if anyone is
> running it on Raspberry pi (Original, not 2 or 3) systems (...)

That would be me. If somebody has instructions on how to build (or:
where to get) a current u-boot that boots a vanilla kernel, resulting in
a system that does *not* see 8k IRQ/sec, I'll happily take a hint.

At the moment, I run the 4.1 series based on the huge Raspbian patch,
which is quite painful. Forward-porting to even 4.4 failed.

Lesson learned: Never buy hardware that's not supported mainline or at
least will be in the foreseeable future.

Christoph

PS: I think it's about time to restrict this to debian-arm, Reply-To:
set.




Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)

2016-12-20 Thread Christoph Biedl
[ limiting to devel- ]

Wouter Verhelst wrote...

> I think a proper procedure should involve a script that:

[ sane criteria ]

> We currently don't have anything remotely like the above, and I think we
> should.

Yes, but I doubt it would be used a lot. There's a widespread culture
of re-installing instead of upgrading; also, centralized management of
system configuration and "cloud storage" for users' data make that
even easier.

Even I used architecture migration for a clean-up: New installation in
parallel, then sync the interesting stuff (this requires some additional
hacks to make it more or less carefree). And I always found a lot of
cruft that supports the notion this was the right choice.

Doing this by hand is of course neither fast nor simple. The migration
script you requested could change that, however it's a delicate job,
full of pitfalls, a disaster if anything goes wrong, so nobody would want
to take on the burden of maintaining it. I bet there still are a lot of
hand-crafted i386 systems out there that never made it to amd64 because
the admins don't dare to. I bet as well these systems are so filled with
quirks any automated migration will be unable to handle them.

So, whoever is in charge of such a script can expect a lot of hard work
while constantly receiving flames from people for whom the procedure
failed. Doesn't quite look tempting.

Christoph




Bug#849332: ITP: auto-resize-image -- Resizer for inline and attachment images (thunderbird)

2016-12-25 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 

* Package name: auto-resize-image
  Version : 0.14.3-tb
  Upstream Author : TrVTrV 
<https://addons.mozilla.org/en-US/thunderbird/user/trvtrv/>
* URL : 
https://addons.mozilla.org/en-US/thunderbird/addon/auto-resize-image/
* License : GPL2+
  Programming Lang: XUL
  Description : Resizer for inline and attachment images (thunderbird)


Tentative long description:

 Auto Resize Image is a Thunderbird add-on which allows one to resize
 inline and attached images while composing email messages.
 .
 The main purpose is to:
  - reduce size of image files
  - give the image convenient dimensions for it to be visible directly
    in the email client without using scrollbars.

Personally, I'm a bit surprised this very useful tool does not exist in
Debian yet - or did I just not see it?

Additionally, my main interest is to have this plugin in Debian - while
I'm not particularly keen on maintaining it. So if anybody else wishes
to do this, perhaps under the pkg-mozext umbrella, just let me know.

Christoph





Re: Migration despite an RC bug?

2016-12-29 Thread Christoph Biedl
Emilio Pozuelo Monfort wrote...

> Unforunately, the BTS exported a broken/incomplete RC bug list, and britney 
> used
> that and didn't see that some packages had an RC bug, so it allowed them to 
> migrate.

Ouch, that's quite a nightmare. While I'm curious to learn how this
happened and what is done to prevent this from happening again -
please rather focus on restoring the correct state. Installations
running testing already might have got packages they shouldn't see.

Christoph




Re: compression support in kmod

2016-12-29 Thread Christoph Biedl
Eduard Bloch wrote...

> I volunteer as test subject for that experiment. I would appreciate even
> small steps, considering the current laptop in front of me with average
> magnetic HDD. Over a minute boot time, which is insane and IMHO mostly
> caused by the storm of IO operations required nowadays.

At the risk of dampening your expectations - I would be surprised if
there was a noticeable improvement. Perhaps on broken (i.e. slow)
bootloaders, where a smaller initrd gets loaded faster. Otherwise I'd
expect the reduced I/O doesn't count compared to everything else that
happens.

> And seriously, today a kernel with a couple of extra modules takes about
> 200MB. I remember times when you could install a whole desktop
> installation in that space.

200MB? Luxury!

Christoph




Bug#849842: ITP: ykush-control -- control application for Yepkit YKUSH Switchable USB Hub board

2016-12-31 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 

* Package name: ykush-control
  Version : 1.0.0
  Upstream Author : 2015-2016 Yepkit Lda
* URL : https://github.com/Yepkit/ykush
* License : MIT
  Programming Lang: C++
  Description : control application for Yepkit YKUSH Switchable USB Hub 
board

 The Yepkit USB Switchable Hub (YKUSH) boards allow the user to
 selectively switch ON and OFF each of the USB devices connected to the
 hub downstream ports. This package provides the ykushcmd program to
 control the switches of all connected boards.





Re: Feedback on 3.0 source format problems

2017-01-01 Thread Christoph Biedl
Guillem Jover wrote...

> On Sun, 2017-01-01 at 10:47:59 -0800, Nikolaus Rath wrote:

> > TBH this feels like you're sniping at Raphael here, which I think is
> > pretty sad and inappropriate.

Well, bringing up more old stories: even if 'The secret plan behind
the "3.0 (quilt)" Debian source' was meant to be a joke, it wasn't
quite helpful to convince the people who were not sure about the
concept. So Guillem's wording might be inappropriate but Raphael will
have to live with that.

> But then, I know that part of the resistance to the new formats was
> in part due to that very aggressive advertising campaign, that for
> whatever reason annoyed some quarters of the project. So, when trying
> in a way to start a dialogue with the people that got annoyed at the
> time, I think it would be a bad idea to cloud that with a page like
> that.

This is a good plan, go for it. Overall (and starting advocacy), I'm
not a huge fan of 3.0 (quilt) but - especially when used with DEP-3
headers - it is the best thing Debian has, way better than 1.0 and all
the other hand-crafted patch handling I happen to see in some
packages. Also, this format has been around for several years now, it
is understood, well-tested and widely accepted.

So I might suggest "Deprecate source format 1.0" as a buster release goal.

Christoph




Re: Feedback on 3.0 source format problems

2017-01-02 Thread Christoph Biedl
Vincent Bernat wrote...

> For me, this is a great improvement over the previous format with
> several different patching systems (quilt, dpatch, nothing,
> custom). Now, most packages are using quilt, one less thing to
> understand.

That's for sure, and I doubt there are many people who consider 3.0
(quilt) a regression compared to other methods. Now, while the thread
drifted a bit towards "what's not so good about 3.0 (quilt)", the
initial intention was somewhat different: are there technical reasons
why a certain package cannot be converted to source format 3.0 (not
necessarily quilt)? Personal preference or lack of acquaintance with
the workflow is not an excuse.

For me, besides some minor annoyances, there is just one issue with
3.0 (quilt), and I've already forgotten the details: the gist was that
a patch modified some auto* file (configure.ac, Makefile.am), and in
certain situations the patch was not yet applied, or was already
unapplied, where it should have been. I probably worked around it by
adding some extra statements in debian/rules.

Christoph




Re: compression support in kmod

2017-01-02 Thread Christoph Biedl
Christian Seiler wrote...

> tl;dr: I don't think compression modules will increase boot times
> on HDDs in any significant manner, but it may be a good idea to
> support that just to reduce the amount of space required on disk.

Well, I still remember being shocked to learn that in the MBR
partition layout, 62 sectors were simply left unused. All that wasted
space!

Reality check, please. Disk space has become incredibly cheap. Unless
you are on some special and expensive storage, the cost of the
additional space the uncompressed modules take is somewhere below 20
(Euro)cents, even on an SSD. I won't stop anyone from working on this
project, but I think it's just not worth the effort.
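
For illustration, a back-of-the-envelope calculation - the
per-gigabyte prices are rough assumptions for consumer hardware of
that time, not measured figures:

    # Rough cost of ~200 MB of uncompressed kernel modules (Python).
    # Assumed prices: ~0.03 EUR/GB for a magnetic HDD, ~0.30 EUR/GB for an SSD.
    module_size_gb = 0.2
    for medium, eur_per_gb in (("HDD", 0.03), ("SSD", 0.30)):
        cost_cents = module_size_gb * eur_per_gb * 100
        print(f"{medium}: about {cost_cents:.0f} (Euro)cents")  # HDD ~1, SSD ~6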

Christoph




Call for testers: logrotate 3.11.0-0.1~exp1

2017-01-03 Thread Christoph Biedl
Hi there,

as the stretch freeze approaches, I'm getting concerned about the
status of logrotate, most notably #734688. The maintainer (CC'ed)
hasn't shown any sign of activity for a while, and there has been no
response to a private message either (admittedly, it's been just a few
days).

Since the fix includes switching to a new upstream version, I refrained
from doing a simple NMU. Instead I've uploaded a new version to
experimental (as 3.11.0-0.1~exp1) and would appreciate tests and
feedback, in the hope that major breakage gets detected early.

My plan is to upload to unstable+2 during the weekend.

From the changelog (more extensive than I'd usually do, to ease review):

 logrotate (3.11.0-0.1~exp1) experimental; urgency=medium
 .
   * Non-maintainer upload to experimental
   * New upstream version 3.11.0  Closes: #734688
   * Refresh patch queue
 - Now upstream:
   + datehack.patch
   + mktime-718332.patch
   + man-su-explanation-729315.patch
 - deb-config-h.patch: New way to enforce status file location
   * Update watch file. Closes: #844578
   * Update Homepage: information
   * Allow failure in the clean target
   * Fix broken test suite runner

Regards,

Christoph




Re: Bug#734688: Call for testers: logrotate 3.11.0-0.1~exp1

2017-01-04 Thread Christoph Biedl
Matthias Klose wrote...

> fyi, I NMUed logrotate yesterday to fix #849743, currently in delayed.  Please
> add this fix to your upload.

Thanks for the heads-up; the fix is included in ~exp2, which I had to
do for other reasons as well.

Christoph




Re: Packaging suggestion for the Universal Media Server program.

2017-01-21 Thread Christoph Biedl
Aldrin P. S. Castro wrote...

> I would very much like the Universal Media Server to be in the Debian
> repositories.
> He is a very good dlna server. It is done in Java.

Unfortunately, by mailing debian-devel your suggestion will very likely
not reach the people who might be willing to do the job.

Debian's procedure for finding someone to bring software into the
archive is called a "Request for Packaging" (RFP). The document at
https://wiki.debian.org/RFP describes the required steps and contains
links to the details. Short version: after some checks you'll have to
send an e-mail in a certain format, but the reportbug program will help
you with that.

Christoph




Call for tests: New python-magic Python bindings for libmagic

2018-01-21 Thread Christoph Biedl
# TL;DR

* The python-magic Python library for file type detection will switch
  to a different implementation soon.

* Code that relies on the old implementation should not be affected;
  anything else is a bug.

* Such code might, however, need adjustment some day in the distant
  future, though not before the buster release.

* This is your chance to make this change as smooth as possible.


Hello,

for many years, there have been two Python bindings for the libmagic
file type detection library, both using the name "python-magic", but
with different and incompatible APIs. At the moment, Debian ships the
implementation bundled with src:file [file], maintained by Christos
Zoulas. However, there are several packages where upstream decided to
use the [pypi] implementation by Adam Hupp; the Debian maintainers then
included a code copy.

Anyway, this awkward situation will come to an end: kudos to Adam, who
implemented a [file] compatibility layer in [pypi]. There is already a
python-magic package in experimental that provides both APIs, and the
resulting binary packages are to replace the ones created by src:file.

Initial checks have shown no regressions so far, but before doing the
switch by uploading to unstable I'd like to have broader coverage,
hence this

  Call for tests

of all the packages that depend on python-magic and/or python3-magic,
also of other applications that use the [file] implementation. The
output of dd-list on the rdeps is attached below.


# How to test

Install python-magic and/or python3-magic from experimental and re-run
your applications. Is there code breakage? Or a file type detection
change?

Maintainers for packages that use a code copy of [pypi] might give it
a try as well, although nothing should go wrong then.
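
For a quick sanity check, something along these lines should work with
both implementations - a minimal sketch, assuming the classic entry
points of the src:file binding (magic.open/load/file) and the
convenience functions of the [pypi] implementation (magic.from_file,
magic.from_buffer); /bin/ls is just an arbitrary test file:

    import magic

    # Old-style API as shipped with src:file, kept working by the
    # compatibility layer:
    m = magic.open(magic.MAGIC_NONE)
    m.load()
    print(m.file("/bin/ls"))      # e.g. "ELF 64-bit LSB ..." on amd64
    m.close()

    # New-style API of the [pypi] implementation:
    print(magic.from_file("/bin/ls"))
    with open("/bin/ls", "rb") as fh:
        print(magic.from_buffer(fh.read(2048)))

If the detection output differs between the [file] and the [pypi]
package for the same input, that's exactly the kind of regression I'm
interested in.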


# Reporting bugs

The usual recommendations about filing bug reports apply.

First, double-check whether your observation really was introduced by
the python-magic change, i.e. downgrade to the [file] version, and also
check somewhere else whether the problem persists.

If it's obviously upstream, you'll do me a favor if you send the
reports to the upstream bug tracker[1], mention it's about the
"libmagic-compat" feature, and just leave a pointer in Debian's BTS.
Else or in case of doubt, report to the BTS and I'll do the triaging
and forwarding.


# Outlook

Two weeks from now, the [pypi] implementation of python-magic is to
hit unstable, and later testing, according to the usual migration
rules.

After that, packages that ship a [pypi] code copy will see a wishlist
bug asking to drop it, as it's no longer needed (some three packages,
therefore no MBF).

For the buster release (sometime in 2019), I'll go to great lengths to
make sure python-magic ships the compatibility layer. In other words,
there is no need to change implementations based on [file] for the time
being.

Beyond buster: depending on upstream development, the [file] API might
go away some day. As mentioned above, in Debian this will not happen
before the buster release. The [pypi] implementation will emit
deprecation warnings beforehand (the code is already there but
disabled). Otherwise it's too early for detailed plans.

Cheers,

Christoph

[file] https://www.darwinsys.com/file/
   Current version in Debian sid: 1:5.32-1
[pypi] https://github.com/ahupp/python-magic/
   Current version in Debian experimental: 2:0.4.15-1~exp2
[1] https://github.com/ahupp/python-magic/issues



Andrea Capriotti 
   autoradio

Arturo Borrero Gonzalez 
   rpmlint (U)

Chris Lamb 
   diffoscope (U)

David Paleino 
   syslog-summary

Debian Astronomy Team 
   ginga

Debian LAVA team 
   lava-dispatcher

Debian Tryton Maintainers 
   relatorio

Devscripts Devel Team 
   devscripts

Gaetano Guerriero 
   eyed3

Gianfranco Costamagna 
   s3cmd (U)

Holger Levsen 
   diffoscope (U)

Hugo Lefeuvre 
   alot (U)

Jordan Justen 
   alot (U)

Kouhei Maeda 
   swiftsc

Mathias Behrle 
   relatorio (U)

Matt Domsch 
   s3cmd

Mattia Rizzolo 
   devscripts (U)
   diffoscope (U)

Neil Williams 
   lava-dispatcher (U)

Ole Streicher 
   ginga (U)

Paul Wise 
   check-all-the-things

Paulo Roberto Alves de Oliveira (aka kretcheu) 
   rows

Python Applications Packaging Team 
   alot

Reiner Herrmann 
   diffoscope (U)

Reproducible builds folks 
   diffoscope

Ritesh Raj Sarraf 
   apt-offline

RPM packaging team 
   rpmlint

Senthil Kumaran S (stylesen) 
   lava-dispatcher (U)

Simon Chopin 
   alot (U)

Ximin Luo 
   diffoscope (U)




Re: Removing packages perhaps too aggressively?

2018-01-31 Thread Christoph Biedl
Andrej Shadura wrote...

> It has happened to me in the recent years quite a few times that a
> package which I was using has a RoQA bug filed against it, and the
> package's got removed at a very short notice.

+1

You meant "RM"? Let me extend this to package removals in general,
since I'm not too happy with the current situation either.

While most RMs are certainly the right thing to do, every now and then
I am surprised by a removal - only to find it happened months ago.

Then, reading the related RM bug, I see vague justifications like
"alternatives exist" a bit too often. This is not helpful; I'd at least
expect a list of packages that actually are alternatives.

Or take the xmem removal (#733668): it indeed failed to build due to a
libprocps transition, but the fix was trivial. Also, "shows nothing"
was a misunderstanding: xmem really does show nothing for the first ten
seconds.[1]

Points like these can be brought up immediately after the RM is filed,
but not a year later.

> Should we maybe give it *a bit* more visibility? Let RoQA bugs hang
> around for *at least* a month, maybe post notification emails every
> fortnight so that they can be noticed? Encourage newcomers to pick them
> up? Prodding DDs who's last reported bugs against the package to maybe
> pick it up?

First, let's exclude auto-cruft and similar removals, since I doubt
there is much dispute about these.

More suggestions for the, say, manual removals (mostly ROM, ROP, RoQA):
to make sure they get attention, send them to d-d as is done for ITPs,
and send them to @packages.qa.debian.org so all subscribers of the
package learn it's important to take action.

And also give it some time, I'd suggest some two to four weeks.

Christoph

[1] Don't get me wrong, resurrecting xmem is not a good idea today;
the program shows garbage, probably due to kernel changes.




Re: Removing packages perhaps too aggressively?

2018-01-31 Thread Christoph Biedl
Jeremy Bicha wrote...

> On Wed, Jan 31, 2018 at 3:03 PM, Christoph Biedl
>  wrote:
> > Or for example the xmem removal (#733668):
> 
> 4 years ago.

So?

> > And also give it some time, I'd suggest some two to four weeks.
> 
> I don't think we need to add an artificial delay for package removals
> that are approved by the package maintainer.

Certainly not. Last year there was a library that no longer had any
rdeps, so the maintainer decided to RM it. Too bad someone out there
was developing software based on Debian. At least I learned about it
rather soon, thanks to a CI setup that started to report build failures
due to unresolvable build dependencies.

Christoph




Re: proposal: ITR (was Re: Removing packages perhaps too aggressively?)

2018-01-31 Thread Christoph Biedl
Adam Borowski wrote...

> Thus, I'd like to propose a new kind of wnpp bug: "Intent To Remove".

Sounds like a very good idea. For my part, I could automatically parse
these and check them against the list of packages that are installed on
my systems or used to build packages outside the archive (thanks to
.buildinfo files).
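
A minimal sketch of what such a check could look like - the file name
itr-packages.txt and its format (one package name per line) are made
up for illustration; the installed-package list comes from dpkg-query:

    #!/usr/bin/python3
    # Compare a (hypothetical) list of ITR'ed package names against the
    # packages installed on this system.
    import subprocess

    def installed_packages():
        # dpkg-query prints one "name status" pair per known package;
        # keep only those whose status is "installed".
        out = subprocess.check_output(
            ["dpkg-query", "-W", "-f", "${Package} ${db:Status-Status}\n"],
            universal_newlines=True)
        return {line.split()[0] for line in out.splitlines()
                if line.endswith(" installed")}

    def itr_packages(path="itr-packages.txt"):
        with open(path) as fh:
            return {line.strip() for line in fh if line.strip()}

    for pkg in sorted(itr_packages() & installed_packages()):
        print("installed package scheduled for removal:", pkg)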

> After filing the ITR, if no one objects in a period time, the bug would be
> retitled to Ro{M,QA} and shoved towards those guys wearing hats with "FTP"
> written on them.  Such a period could be:
> * (if we decide to CC ITRs to d-devel): short: a week?
> * otherwise: long: 6 months?

The short period, but not *that* short. I'd expect any reaction to
come pretty soon, but we should allow for people being offline for a
week. In situations where removal is obviously the right thing to do,
waiting months is mostly horror.

> We could have an offshot of wnpp-alert notify you if a package you have
> installed has been ITRed.  Perhaps even this could be installed by default,
> so users in stable of obscure packages have a chance to act.

Certainly, packages to be removed from (old)stable in a point release
should go through that procedure as well.

> However, ITRs wouldn't be mandatory: the majority of packages can be removed
> outright; you'd file an ITR only if you believe there's some controversy.

For this I'd prefer to have a guideline, so this isn't entirely left
to the submitter's discretion. It boils down to "do no harm": removing
cruft like NVIU can certainly be done straight away, while
ROM/ROP/RoQA should get some audience and time.

> One issue: on a small screen, crap font and no glasses, "ITR" looks similar
> to "ITP", an alternate acronym could be better.

Removal Intent for a Package? (jk)

Christoph




Re: proposal: ITR (was Re: Removing packages perhaps too aggressively?)

2018-02-01 Thread Christoph Biedl
Thomas Goirand wrote...

> We already have RFA, where maintainers are asking for adoption. I fail
> to see how a different type of bug will trigger a quicker adoption. An
> ITR is going to (unfortunately) achieve the exact same thing as an RFA,
> which in most cases is ... no much.

I disagree. The messages are ...

RFA: If somebody wishes to take over, please get in touch.
O: If you want to take over, it's yours.
ITR: Somebody take over, otherwise the package will be gone soon.

> See this one (of mine) as an example:
> https://bugs.debian.org/880416
>
> it's just bit-rotting. I've told a few people vaguely interested in the
> package that I will RoM it soon. No action so far. I'm quite sure the
> only path is to actually remove the package. Someone may then pick it up
> because of the removal, but IMO that process can only be speed up by
> actually removing the package faster, not slower. Adding an ITR wont help.

Changing this to ITR would tell "This is your last chance".

Assuming the ITR gets a broader audience than the RM, like d-d and the
package's QA address: it's a sign of high urgency, and anybody who is
even remotely interested should stand up *now*. RFA/O bugs, in
contrast, mostly show up in the weekly WNPP report, and while I do read
it, packages of interest to me usually trigger a feeling of: I could
take some of those, but looking at my time budget, I should rather not,
and hopefully somebody else will jump in.

Actually removing the package in the silent way it happens right now
carries a high risk that the next release will ship without it, as
users of stable will not notice until the next dist-upgrade.

So for me the anger is mostly about the silence and the (sometimes)
haste of an RM. I would be glad if RMs had to follow a certain
procedure, which boils down to notifying more places and giving a grace
period of, say, two weeks - which is what, in my understanding, an ITR
would do. If you just don't want to introduce a new name for this
augmented RM, be my guest.

Christoph




Re: Call for tests: New python-magic Python bindings for libmagic

2018-02-04 Thread Christoph Biedl
Paul Wise wrote...

> It might be a good idea to do these:
> 
> Try to rebuild any packages that build-dep on python{,3}-magic and
> compare the resulting binary packages with diffoscope.

Thanks for this suggestion. Turns out one package did indeed fail to
build; the fix was trivial. (alot, #889293)

> Try to run the autopkgtests for packages that dep/test-dep on 
> python{,3}-magic.

Certainly, but I haven't got a round tuit yet to create a working
autopkgtest environment.

As announced, the new version of python-magic was uploaded to unstable a
few minutes ago.

Also thanks for your test and feedback.

Christoph




Bug#894185: ITP: libdigest-ssdeep-perl -- Pure Perl ssdeep (CTPH) fuzzy hashing

2018-03-26 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 

* Package name: libdigest-ssdeep-perl
  Version : 0.9.3
  Upstream Author : Reinoso Guzman 
* URL : https://metacpan.org/pod/Digest::ssdeep
* License : Artistic or GPL-1+
  Programming Lang: Perl
  Description : Pure Perl ssdeep (CTPH) fuzzy hashing

Digest::ssdeep provides a simple implementation of ssdeep fuzzy
hashing, also known as Context Triggered Piecewise Hashing (CTPH).

This is to be maintained under the pkg-perl team's umbrella.

Christoph




Bug#894186: ITP: libtext-wagnerfischer-perl -- implementation of the Wagner-Fischer edit distance

2018-03-26 Thread Christoph Biedl
Package: wnpp
Severity: wishlist
Owner: Christoph Biedl 

* Package name: libtext-wagnerfischer-perl
  Version : 0.04
  Upstream Author : Dree Mistrut 
* URL : https://metacpan.org/release/Text-WagnerFischer
* License : Artistic or GPL-1+
  Programming Lang: Perl
  Description : implementation of the Wagner-Fischer edit distance

Text::WagnerFischer implements the Wagner-Fischer dynamic programming
technique, used here to calculate the edit distance of two strings. The
edit distance is a measure of the degree of proximity between two
strings, based on "edits": the substitutions, deletions or insertions
needed to transform one string into the other (and vice versa).
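
For illustration only, a minimal Python sketch of the underlying
algorithm (not the Perl module's API), using unit cost for every edit
operation:

    # Wagner-Fischer edit distance, computed row by row so only two
    # rows of the dynamic programming table are kept in memory.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            cur = [i]
            for j, cb in enumerate(b, start=1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    assert edit_distance("kitten", "sitting") == 3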

This package is a dependency of libdigest-ssdeep-perl (ITP is #894185)

This is to be maintained under the pkg-perl team's umbrella.

Christoph





Re: MBF proposal: python modules that fail to import

2018-04-15 Thread Christoph Biedl
Helmut Grohne wrote...

> Note that autopkgtest-pkg-python is only applicable when the module name
> matches the package name. That's true for the majority of packages, but
> not for all (e.g. capitalization). Nevertheless, a lot of packages are
> missing the flag. Since I have the data at hand, I figured it would be
> easy to generate a dd-list of packages named after their module that
> lack the tag. You find that list attached.


> Christoph Biedl 
>file

The src:file package doesn't ship python{,3}-magic any longer; the
change was two months ago. Would you mind checking how file ended up on
this list?

Christoph, otherwise happy to support QA efforts




Re: Bits from the release team: full steam ahead towards buster

2018-04-17 Thread Christoph Biedl
Emilio Pozuelo Monfort wrote...

> We are about halfway through the buster development cycle, and a release
> update was overdue.

Thanks for all the updates, let's make this an exciting ride.


But briefly bleating by boldly bringing balking bits ...

> Future codenames
> 

So we'll see three consecutive releases starting with the letter "b":
buster, bullseye, bookworm - quite funny, and I reckon it's no
coincidence. However, it's a bad idea.

There are people who don't follow every single action in Debian, plain
stable users for example. For them it's helpful to be able to tell the
releases apart easily, as they might not have the precise names and
their order in mind. The first letter is a fairly simple aid for this.

Also, people who do any kind of work related to releases would likely
use the names as identifiers - directory and screen session names in my
case. Different starting letters speed up tab completion and similar
matching procedures. Just another letter to type (or even two for
buster vs. bullseye) might sound like nothing, but in the long run it
shows. I might switch to the release numbers then, which gives a sad
feeling of losing some color in the work.


Probably it's too late to revert the decision. But for future codenames
I'm asking you to choose names in a way that takes these aspects into
account as well - and I regret I never sent this mail right after the
buster and bullseye name announcement, as I had wanted to.

So, please use names that are really easy to tell apart. Having unique
initial letters among all supported releases is an essential part of
that. Taking the symlinks into account as well would be worth
considering, although this would block any name starting with "e",
"o", "s", "t", or "u".

Also, choosing the names in sorted order (modulo wraparound) would
create a list in the historical order of the releases, making it easier
to reason about them. That's what Ubuntu does; using consecutive
letters is nice but not necessary in my opinion. And yes, sid is a
problem then. It would hit us around the year 2055. I expect to be
stable by then. Stable six feet under.

Christoph




Re: Completed: lists.alioth.debian.org migration

2018-04-17 Thread Christoph Biedl
Mathias Behrle wrote...

> Big thanks to all involved also from my side, it is great to have the mailing
> lists seamlessly running!

Seconded.


A few questions, though (asking for a friend, of course). This might
have been mentioned before, but if so I missed it.

What is the long-term plan for this service? Will it run indefinitely,
or are users kindly asked to move away from these addresses when
convenient? In the latter case, lintian should emit corresponding
hints, since even then the transition might take several years.

Also, @lists.alioth.debian.org addresses that were *not* migrated now
result in bounces, as expected. Are there already plans for an MBF at
severity RC against all packages with a now-failing maintainer address?
This might become rather messy; I've counted some 1450 packages.

Christoph




Re: Completed: lists.alioth.debian.org migration

2018-04-17 Thread Christoph Biedl
Holger Levsen wrote...

> please file bugs, so that autoremovals can kick in. Thanks.

Sheesh, it's not about removing packages but keeping them - by making
sure they are in good shape, which, among many other things, means
there is a working, well-defined maintainer contact address.


