Bug#280566: ITP: clit -- Decompiler for Microsoft's .lit ebook format

2004-11-10 Thread Russell Stuart
Package: wnpp
Severity: wishlist

* Package name: clit
  Version : 1.8
  Upstream Author : [EMAIL PROTECTED]
* URL : http://www.lubemobile.com.au/ras/debian/sarge/clit/
* License : GPL
  Description : Decompiler for Microsoft's .lit ebook format

Convert Microsoft's .lit ebook format back into the raw HTML it
was created from.

Note: this needs libtommath, which is also ITP'ed.

-- System Information:
Debian Release: 3.1
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.6.8-7-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU




Bug#280568: ITP: libtommath -- Number theoretic multiple-precision integer library

2004-11-10 Thread Russell Stuart
Package: wnpp
Severity: wishlist

* Package name: libtommath
  Version : 0.32
  Upstream Author : Tom St Denis <[EMAIL PROTECTED]>
* URL : http://www.lubemobile.com.au/ras/debian/sarge/libtommath/
* License : Public Domain
  Description : Number theoretic multiple-precision integer library

LibTomMath provides highly optimized and portable routines for
the vast majority of integer-based number theoretic applications
(including public key cryptography). LibTomMath is not a
cryptographic toolkit itself, but it can be used to write one.

Note: This is required by clit, which has also been ITP'ed.

-- System Information:
Debian Release: 3.1
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.6.8-7-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU




Bug#344359: ITP: flowscan-cuflow -- Flowscan module combining CampusIO and SubNetIO

2005-12-21 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart <[EMAIL PROTECTED]>

* Package name: flowscan-cuflow
  Version : 1.5
  Upstream Author : Johan Andersen <[EMAIL PROTECTED]>, Matt Selsky <[EMAIL PROTECTED]>
* URL : http://www.columbia.edu/acis/networks/advanced/CUFlow
* License : GPL
  Description : Flowscan module combining CampusIO and SubNetIO

The packaging has been done.  The packages can be found
here:
  http://www.stuart.id.au/russell/files/debian/sarge/flowscan-cuflow

Package: flowscan-cuflow
Architecture: any
Depends: flowscan
Recommends: flowscan-cugrapher
Description: Flowscan module combining CampusIO and SubNetIO
 CUFlow is a FlowScan module designed to combine the features
 of CampusIO and SubNetIO and to process data more quickly.  CUFlow
 allows you to differentiate traffic by protocol, service, TOS,
 router, and network, and then generate TopN reports over 5-minute
 periods and over an extended period of time.

Package: flowscan-cugrapher
Architecture: any
Depends: flowscan-cuflow
Description: A CGI interface for flowscan-cuflow
 CUGrapher is a Web CGI program which generates images on the fly
 based on user input with data supplied by CUFlow.


-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.11-7-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU (charmap=ISO-8859-1)







Bug#354815: ITP: cups-emailpdf -- PDF printer that emails the result back to the user

2006-03-01 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart <[EMAIL PROTECTED]>

* Package name: cups-emailpdf
  Version : 1.0
  Upstream Author : Russell Stuart <[EMAIL PROTECTED]>
* URL : http://www.stuart.id.au/russell/files/debian/sarge/cups-emailpdf/
* License : GPL
  Description : CUPS PDF backend that emails the result back to the user

This is a PDF Writer backend for CUPS.  The "printed"
PDF file is emailed back to the user.

The packaging has already been done, and the resulting package
can be viewed at the above URL.  Documentation is also
available at the URL.

-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.8-16sarge1-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU (charmap=ISO-8859-1)






Re: ITP: btsco -- ALSA drivers and daemons for using bluetooth audio devices

2006-09-17 Thread Russell Stuart
On Sun, 2006-09-17 at 09:46 +0200, martin f krafft wrote:
> also sprach Nobuhiro Iwamatsu <[EMAIL PROTECTED]> [2006.09.17.0559 +0200]:
> > * Package name: btsco
> 
> Please coordinate with Kel and Russell (on CC), who have been
> working on this package for a while. One of the reasons that we have
> not released it yet is because we want to make btsco a daemon that
> can automatically associate with the device. Right now, btsco
> requires manual (and root) interaction on every use.

The hold up has been me.  I will spare you the details, but
I needed 2.6.17 together with some patches - IMQ in
particular.  A beta of the IMQ patch was released last
week, and with that I spent the remainder of the week
getting 2.6.17 going.  Within the next couple of weeks I should
have a new version of btsco released.







Bug#329770: ITP: kern2deb -- Convert RedHat kernel-VER.src.rpm to a Debian package

2005-09-23 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart <[EMAIL PROTECTED]>

* Package name: kern2deb
  Version : 1.1
  Upstream Author : Russell Stuart <[EMAIL PROTECTED]>
* URL : http://www.stuart.id.au/russell/files/debian/sarge/kern2deb/
* License : GPL
  Description : Convert RedHat kernel-VER.src.rpm to a Debian package

Convert a Red Hat kernel source rpm (eg kernel-2.4.21-20.EL.src.rpm)
into Debian kernel source and binary packages (eg
kernel-source-2.4.21.redhat_2.4.21.redhat.20.el.deb and friends).
If Debian has released the same kernel version, a kernel-patch-redhat
can also be generated.  The resulting packages can be installed,
compiled, and patched using the standard Debian tools, just like any
other Debian kernel package.

This package has already been created.  I am after a sponsor.
Examples of its output can be found here:

  http://www.stuart.id.au/russell/files/debian/

-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.11-7-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU (charmap=ISO-8859-1)






Bug#465809: ITP: hpt -- Creates a TCP tunnel through http and https proxies

2008-02-14 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart <[EMAIL PROTECTED]>

* Package name: hpt
  Version : 1.1
  Upstream Author : Russell Stuart <[EMAIL PROTECTED]>
* URL : http://www.stuart.id.au/russell/files/http-proxy-tunnel/
* License : EPL
  Description : Creates a TCP tunnel through http and https proxies

This package installs http-proxy-tunnel.  Http-proxy-tunnel creates
TCP tunnels through a series of http and https proxies.  It differs
from other tunnelling programs such as corkscrew in that with the
right additional magic (described in the README) you can create an
ssh tunnel using the same TCP port web pages are served from.

The Debian package has already been done.  It can be found here:
  http://www.stuart.id.au/russell/files/debian/etch/hpt/
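
For comparison, the usual corkscrew-style setup is a ProxyCommand in
~/.ssh/config; a rough sketch (the proxy port and both host names are
only placeholders):

  Host home-via-proxy
      HostName home.example.com
      Port 443
      ProxyCommand corkscrew proxy.example.com 3128 %h %p

http-proxy-tunnel's extra trick is the "additional magic" mentioned
above: having the far end answer both web and ssh traffic on that one
port.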

-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.17-8.1-lube-686-smp
Locale: LANG=en_AU, LC_CTYPE=en_AU (charmap=ISO-8859-1)






Password Protecting GPG Keys

2014-06-13 Thread Russell Stuart
There was a thread on d-private in early March about the benefits and
downsides of requiring every DD and aspiring DD to sign their
messages.  One of the reasons raised for not doing it is that some felt
uncomfortable carrying around their GPG keys when travelling.

My initial reaction was "that's being overly cautious", particularly
given that signing every message doesn't mean you have to carry around
your master key.  However, it did make me wonder just how safe a GPG key
(or indeed any file) is, if it is protected by a password and nothing
else.

To put the problem in precise terms: let's say I had a bitcoin wallet
with keys controlling $10M worth of coins.  (A figure is needed because
it determines the upper bound on the amount of effort a rational
attacker would devote to the problem.)  Is it possible to put an
encrypted version of that file in a public place on the web, so that
everybody knew what it was worth, and have it protected by a password I
could be confident of remembering in a couple of years' time, and be
sure it is safe?  "Sure it is safe" here means it is going to cost the
attacker more than it's worth.

It turned out to be a far more interesting question than I first
supposed.  To cut to the chase, I think the answer is yes - it is
possible.  But not with the tools shipped by Debian today [0] [1].

I have put up a web page explaining what you have to do to protect such a
file, together with a tool that makes it possible (but possibly not
convenient enough for everyday use).  You can find it here:

  http://pbkdf2.sourceforge.net (don't be put off by the name.)

Maybe it will convince some of you it is possible to carry your GPG key
around with you - or at the very least store it on the web somewhere so
you can get to it when you need it.



[0] Debian does include scrypt, which is what I based my program on.
Unfortunately it lets you set the maximum strength of the
expanded key, but not a minimum.   That's not quite what I was
after.

[1] And since it isn't possible with the tools available on Debian,
that means my gut "he's being overly cautious" reaction was wrong.





Re: Password Protecting GPG Keys

2014-06-16 Thread Russell Stuart
On Mon, 2014-06-16 at 12:01 +, Thorsten Glaser wrote:
> You completely miss http://xkcd.com/538/ and the fact that some
> legislations may require you, with jail penalty, to hand over
> any encryption keys, passwords, etc. you have with you when
> inside their territory.

Quoting the man page:

"Following these instructions ensures your password is not the
weakest link in the chain. In reality this won't stop an attacker,
they will just move their attention to the next weakest link. Avoid
malware, dementia, rubber hoses, and the UK."

That aside, the rubber hose is mostly an orthogonal problem.  What you
are really trying to protect against isn't someone just stealing your
keys.  By itself it isn't sufficient to do real damage.  The attacker
needs something more: they have to steal your keys without you knowing.

This was demonstrated when a DD had to forfeit his laptop recently.  The
project found out almost immediately, and the loophole was closed before
it could be exploited.  The situation is similar if you have your credit
card details stolen, or your banking credentials leaked, or you lose your
bitcoin wallet.  Once you find out about it the risk period ends.  Ergo,
if you know about it immediately there is almost no risk, period.

So the rubber hose comic is funny, but it is also misleading.  After all,
if someone has hit you about the head with a rubber hose the odds are high
you will know he's done it.  But if someone gets hold of your encrypted
secret key and brute forces the password, you won't.  And if you don't,
you are looking at the possibility of someone installing back doors into
Debian for years.  Yes, it is a black swan event - but they only need
one.

Unfortunately getting hold of the encrypted key is made easier
because we have to back the damned things up.  If we do it properly, we
have created multiple copies, distributed them across separate
geographic localities, probably in different countries.

The only thing protecting those backups is the password.

If you have read this far, I hope you now understand what led me to
think about the following: let's assume the worst case
scenario - those backups are public.  Is it possible to securely protect
them using a human-memorable password?




Re: HTTPS everywhere!

2014-06-17 Thread Russell Stuart
On Wed, 2014-06-18 at 04:54 +0200, Christoph Anton Mitterer wrote:
> Well https with X.509 has inherent problems which we won't be able to
> solve...

Precisely.  It has a horrible design bug.

Given the nature of the net, where we want to deal securely with some
entity never dealt with or heard of before, like www.shop.com, we
are forced to rely on a third party to assure us the DNS name
www.shop.com really is owned by "shop.com".  This is what the X.509
PKI does.  I am not aware of anything that could do it better.

So you need X.509 PKI (even with all its flaws) during that first
contact.  But after you've sent them money or downloaded their software
you have formed a trust relationship with whoever controls that cert far
stronger than the assurances X.509 provides.  That is true in the
positive sense if you receive your goods after paying, or the software
you downloaded works well, or in the negative sense if the reverse
happens.  Regardless, next time you deal with the entity that controls
the www.shop.com cert, you now know far more about them than the X.509
PKI does.

The bug is the current system forces you to rely on X.509 for all
future contacts, even though you have a much better source of trust.
During that initial contact the protocol could have arranged for you to
download a cert signed by the owners of shop.com themselves, so you
could rely on it in the future instead of X.509.  Suddenly all X.509
issues, like MITM attacks, disappear.
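
As a rough sketch of the idea, the first contact could record the site's
certificate fingerprint, and later contacts could compare against it
instead of consulting the X.509 PKI again (www.shop.com being the
hypothetical store from above):

  # first contact: save the fingerprint alongside your bookmark
  openssl s_client -connect www.shop.com:443 -servername www.shop.com \
      </dev/null 2>/dev/null |
      openssl x509 -noout -fingerprint -sha256 > shop.com.fpr

  # later contacts: fetch the fingerprint the same way and compare it
  # against shop.com.fpr before trusting the connection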

Unfortunately that's just the start.  It's possible to imagine much
stronger protocols.  For example, that initial contact creates a
"bookmark" your browser stores so you can access the site again.  The
bookmark embeds the cert from shop.com.  The advantage of this bookmark
is it provides mutual authentication, so not only do you know the site
is still owned by the same people - the site knows it's you contacting
them.  This means when you use the bookmark the site can reduce its
security demands - as in there is no need for you to remember super
strong passwords.  It also means the site can pro-actively train you to
behave in a safe manner - as in make life easier when you use the
bookmark to contact them.  So if you click on a phishing link it
suddenly becomes obvious you are not dealing with the real "shop.com".

But we are apparently welded to the current stuff, and as you say Debian is
not in any position to change that.  I had hoped they would address it
in HTTP/2.  It's the ideal time.  They are breaking forwards compatibility
in all sorts of ways.  But it doesn't seem to have entered their minds.




Re: HTTPS everywhere!

2014-06-21 Thread Russell Stuart
On Fri, 2014-06-20 at 22:58 +0200, Christoph Anton Mitterer wrote:
> > But after you've sent them money or downloaded their software
> > you have formed a trust relationship with whoever controls that cert far
> > stronger than the assurances X.509 provides.  That is true in the
> > positive sense if you receive your goods after paying, or the software
> > you downloaded works well, or in the negative sense if the reverse
> > happens.  Regardless, next time you deal with the entity that controls
> > the www.shop.com cert, you now know far more about them than the X.509
> > PKI does.
> I don't quite understand what you mean here.

Sorry for not being clear.  I was comparing the relative trustworthiness
of two certificates.

One is a CA - well, an unknown CA among the multitudes in your browser.
The other is a web store.  The store is utterly unknown to you - apart
from the fact that you sent them some money on the promise they would
deliver goods advertised on their web site, and they delivered on that
promise.  However, during that transaction they gave you a certificate,
on the assumption that they control the private key to that certificate.

The situation now is you wish to purchase from this person again.  As
before you must interact with them over the web.  When you go to
www.shop.com, the only assurance you have that you are dealing with the
same mob is the certificate you choose.

I was making the (I hope by now obvious) point that you would trust the
shop's certificate more.

> > The bug is the current system forces you to reply on X.509 for all
> > future contacts, even though you have much better source of trust.
> > During that initial contact the protocol could have arranged for you to
> > download a cert signed by the owners of shop.com themselves, so you
> > could reply on it in the future instead of X.509.  Suddenly all X.509
> > issues, like MITM attacks, disappear.
> Well more or less... this *is* the case ... or at least it can be done
> when you use something like Certificate Patrol.
> Then you verify whether it's still the same cert that you communicate
> with (and only the shop owner should have the key).
> 
> But reality is: It doesn't really help you at all since:
> - the attacker could have MitM you in the first place and even when you
> - you loose the whole framework that allows key/cert changes
> (renewals/revocations), etc.

Does it take only one counter-example to disprove this?  If so, the
DigiNotar attack is it.  Quoting Wikipedia [0]:

  "300,000 Iranian Gmail users as the main target of the hack (targeted
  subsequently using man-in-the-middle attacks), and suspected that
  Iranian government was behind the hack"

In all likelihood, people died because of this.

But consider: these people were existing Gmail users.  Under my scheme,
they would have ceased needing to use the X.509 PKI infrastructure long
ago, long before the leaders of Iran realised they needed to compromise
the X.509 PKI infrastructure to suppress their dissent.




Re: HTTPS everywhere!

2014-06-21 Thread Russell Stuart
On Sat, 2014-06-21 at 17:58 +0200, Christoph Anton Mitterer wrote:
> Take Turktrust as an example... IIRC the case correctly, they
> "accidentally" (whoever believes that) issued a cert which was a
> intermediate CA and which was used to issue forged Google certs.
> After days and only after long discussion they only blocked these
> certs... and Turktrust itself is still in (see
> https://bugzilla.mozilla.org/show_bug.cgi?id=82 or some similar
> reports from others) even though they proved that they're either not
> competent enough to run a CA or they're evil.
> And such CAs (even though they're not big enough not to fail) are not
> removed, which proves: the reason to be in the Mozilla bundle (i.e.
> considered to be trustworthy) <-> money
> 
> Same example CNNIC... governmental controlled CA from a dictatorial
> communist country which is known for heavy espionage against their own
> and foreign citizens => absolutely untrustworthy per se
> 
> Any US based CA: national security letters + gag orders => absolutely
> untrustworthy per se

The problem isn't that government security agencies can in all
likelihood MITM any connection they wish.  I'm sure that's true, but I'm
equally sure they don't do it that often for fear of being caught.  It's
actually far worse than that.  The problem is where I live every school,
most government organisations, and many private organisations routinely
MITM https connections passing through their infrastructure.

Given that is so, I am struggling to understand what you hope to achieve
by setting up yet another CA.  You are operating over the same
infrastructure, with all its problems.

There is one easy way to tighten things up.  Currently, if a Debian user
wants a netinst the best option we offer him is to use
https://www.debian.org/CD/netinst/ and rely on the X.509 PKI to ensure
he is getting the real McCoy.  That makes the download step the weakest
link in the chain, because if I can swap that netinst for one that
includes my keys in the keyring package, I own him.  And given the state
of X.509 PKI, swapping it is relatively easy.

For existing Debian users, closing that loophole is easy: make netinst a
package, that can be downloaded and installed using apt.




Re: HTTPS everywhere!

2014-06-21 Thread Russell Stuart
On Sun, 2014-06-22 at 03:34 +0200, Christoph Anton Mitterer wrote:
> Well as it should be clear to everyone by now... with a own CA and with
> specifically checking for certs issued by *only that* CA you can fully
> secure things like apt-listbugs.

Sure, but you are no longer discussing a PKI system here.  If you are
going to abandon X.509 PKI, why not just use OpenPGP and have
apt-listbugs ensure whatever is downloaded is signed by our keyring?  It
has the major advantage that it works across mirrors.

But I guess it depends on what you want to secure.  Perhaps where we
depart is that I don't see huge value in securing the web site.

> Actually one could even go a step further,... IIRC, some domain/CA
> combinations are hardcoded in browsers like Chrome/Firefox... if that
> infrastructure is already in place, we could probably easily add a patch
> so that our debian.org/net are only accepted with certs from the "Debian
> CA".

So you want to introduce pinning.  Some browsers do that already.  For
example, Chrome pins Google's certs.  Probably would not hurt. It's just
a question of whether securing the web site is really worth the effort.

> Don't understand what you talk about... AFAICS you can't download any
> netinst images via https at all.

Hmm.  You are right.  The situation is worse than I thought.

> And the same is true when you verify via OpenPGP.

Not really.  Yes, the distribution problem is the same when you verify
via OpenPGP.  The difference is that for OpenPGP, Debian already has a
solution in place.






Re: HTTPS everywhere!

2014-06-22 Thread Russell Stuart
On Sun, 2014-06-22 at 15:49 +0200, Christoph Anton Mitterer wrote:
> On Sun, 2014-06-22 at 14:21 +1000, Russell Stuart wrote: 
> > Sure, but you are no longer discussing a PKI system here.  If you are
> > going to abandon X.509 PKI
> Well first of all,... PKI is just "public key infrastructure" and not
> necessarily means X.509.

Correct.  That's why I referred to it as X.509 PKI and not just X.509.

> Well first, AFAIK, there are no mirrors for the BTS... and then
> securing something like BTS with OpenPGP is quite difficult.

There is a straightforward solution to handling BTS messages.  You just
DKIM-sign them with an appropriate key when they are received.

> Given that these services are used more and more for development, I
> think it's more and more important to secure them as far as possible.

90% of what you want could be achieved with a working version of
Certificate Patrol.  Ship it as a standard part of iceweasel,
pre-configured with a few certs and enabled by default.

The nice thing about getting Certificate Patrol working is it helps
everyone - not just Debian.




Bug#752398: ITP: python-spyne -- Python RPC library for HttpRpc, SOAP, Json and more

2014-06-23 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart 

* Package name: python-spyne
  Version : 2.10.10
  Upstream Author : Burak Arslan 
* URL : http://spyne.io/
* License : LGPL
  Programming Lang: Python
  Description : Python RPC library for HttpRpc, SOAP, Json and more

Spyne is a Python library for implementing RPC servers over HTTP and
ZeroMQ using a multitude of serialisation formats, such as HttpRpc,
SOAP, Json, XML.  The signatures of the published functions and the
data types passed as parameters and returned as results are obtained
by introspecting the python code.

Originally called soaplib, it was later renamed to rpclib, and then
to spyne.





Bug#752399: ITP: python-fdb -- Python DB-API driver for Firebird

2014-06-23 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart 

* Package name: python-fdb
  Version : 1.4
  Upstream Author : Pavel Cisar 
* URL : https://pypi.python.org/pypi/fdb/
* License : BSD
  Programming Lang: C, Python
  Description : Python DB-API driver for Firebird

FDB is a Python library package that implements Python Database API
2.0-compliant support for the open source relational database Firebird®.
In addition to the minimal feature set of the standard Python DB API,
FDB also exposes nearly the entire native client API of the database
engine.





Re: HTTPS everywhere!

2014-06-23 Thread Russell Stuart
On Mon, 2014-06-23 at 17:26 +0200, Christoph Anton Mitterer wrote:
> Maybe my understanding of DKIM is too little... but I thought it would
> be only some technique to verify the authenticity of sender addresses?

DKIM, OpenPGP, X.509 - they are all the same thing with different names.
They all compute a hash of a lump of data, and encrypt it using an
asymmetric cypher.  Given that, it just boils down to what data they
encrypt and what crypto they use.  For DKIM the signer gets to select
what he signs - it could be the entire email, and the crypto is
rsa-sha256.  Key size isn't constrained, but is generally 2048 bits.

The only reason I mentioned DKIM is the software is already written.  It
would be a few hundred lines of code, at most, to sign every incoming
email.
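
As a sketch of how little is needed, the existing opendkim-tools package
already ships a key generator; the selector and domain below are purely
illustrative:

  # creates bts.private (the signing key) and bts.txt (the DNS TXT
  # record to publish)
  opendkim-genkey -b 2048 -s bts -d bugs.debian.org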

> And as I've said... just signing somehow all the single mails that
> arrive at the BTS, which could be verified by clients when they read it
> is not enough.
> That would allow an attacker to easily filter out single messages...
> somehow you need to secure the series of all messages,... and also
> things like negative replies (e.e. "there is no bug for package xyz).
> And since many people interact with the BTS via web (well at least for
> reading) you're anyway obliged to support some https solution.

The usual solution to that is what Debian uses for its package archive:
sign the index of messages apt-listbugs downloads in order to
find the emails to display.
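
A minimal sketch of that, assuming an index file and a key in the Debian
keyring:

  gpg --armor --detach-sign --output index.asc index   # server side, once
  gpg --verify index.asc index                         # apt-listbugs side

An attacker can then still drop individual messages from a mirror, but
not without breaking the signature on the index that lists them.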

> Even if you have some pinning technique like CP in place,... than a
> non-Debian rogue CA can simply attack you on your first visit of
> https://whatever.debian.org/ and your CP is useless.

You've lost me.  Whether Debian is a CA or not is irrelevant for the
initial download of software over the net.  It will be done, by
definition, using non-Debian software, which will be using the X.509 PKI
in the normal way.  The normal way here implies they will trust every CA
bundled into the downloaded software.  Including a Debian CA in that
bundle doesn't help Debian's security in the slightest.

Pinning is just another word for "I don't need to use the X.509 PKI,
because I obtained the Debian certificate via a side channel".  By
definition that means whether Debian is a CA or not is irrelevant -
because being a CA means "I am one of the privileged few whose
certificates are distributed by the X.509 PKI".  So yes, I agree pinning
is useless when you first download the software.  But nothing you have
suggested so far is any better in that case.  As far as I know, nothing
can be any better.

> Again... using CP alone won't make things secure - unless you really
> hard code all the single Debian host certs in all programs that use
> TLS/SSL (or at least with Debian services).

For me "all programs" boils down to 1 - Firefox.  Some others might use
Chromium, which given Chrome supports Google's pinning its own certs
probably could be hacked easily enough to support Debian doing the same
thing.

> It's much easier to run our own Debian CA plus:
> - for most non-browser programs that allow to specify which CAs are
> trusted, only add that Debian CA
> - for browsers: hard code that Debian CA as the only one for debian.org|
> net

This looks like pinning under another name to me.  And quoting you
above, in this very same email, you say pinning is too hard because you
have to "hard code all the single Debian host certs in all programs that
use TLS/SSL (or at least with Debian services)".  And yet now you say we
have to do this anyway!

Your insistence that Debian become a CA is now truly mystifying.  There
is only one reason to become a CA, and that is so you can have your
certificates distributed by the X.509 PKI.  Yet you profoundly distrust
the X.509 PKI, so much so (and imo with good reason) that you insist we
don't use X.509 PKI at all when interacting with Debian, preferring to
use a pinned Debian cert instead.  So why bother becoming a CA in the
first place?




Re: HTTPS everywhere!

2014-06-23 Thread Russell Stuart
On Tue, 2014-06-24 at 08:29 +0200, Matthias Urlichs wrote:
> The difference is that while pinning a bunch of certificates is indeed a
> lot of on-going work, pinning the CA cert used to sign these is not (set up
> the CA and install it into our software once, sign server certificates with
> that forevermore).

If that is a huge problem you just pin the CA's cert.  The assertion you
are making is: all .debian.net/.debian.org certs must be signed by this
root.  To compromise Debian the attacker must compromise a CA Debian
chooses, not a CA of their choice.

It's not a new idea - Certificate Patrol already does it.




Re: Alioth tracker

2014-06-24 Thread Russell Stuart
On Tue, 2014-06-24 at 18:05 +0200, Raphael Hertzog wrote:
> On Tue, 24 Jun 2014, Ole Streicher wrote:
> > I thought that usually such requests should be done through a request
> > for help? (Is that valid for "pseudo packages" like a hypothetical
> > alioth one?) If the alioth admin team would openly request for help
> > (maybe on the Debian-News?), they may gain more attention for that.
> 
> Most infrastructure teams are always looking for help, whether they send
> explicit calls for help or not. And even more so when they are not working
> properly.

Sorry to be blunt Raphael, but your response looks like a series of
platitudes designed to settle down the politics.  Alioth's maintenance
is in serious trouble.  It is unusable to many projects on it, and it is
clear from its own bug reporting system this has been the case for
years.

Deciding to accept an offer of help requires a lot of work.  You have to
find a graduated set of tasks that allows you to gain confidence in the
person's abilities, and closely monitor how well they perform them.  It
looks to me like the effort required is beyond the Alioth team.

I say that because I had a similar experience to Ole, which I reported
here [0], on IRC and on Alioth's bug tracker [1].  They all included
offers to help.   AFAICT the bug reports have never been looked at.

The only response I got here was from Paul Wise [2] who said my negative
impressions of sourceforge were outdated.  (Well done Ole - you managed
to elicit a better response than me.)

In the end it doesn't matter, I guess.  Unlike the other core Debian
facilities there are lots of well-known alternatives out there, so Alioth
isn't really needed.  I tried 3 of them (4 if you count Alioth), moving
on from each after trying to put a project on them and finding them
deficient.  Thanks to Paul's advice I tried SourceForge in the end, and
so far it has worked out well [3].  Thanks Paul - that response really
helped.

Ole - I suggest you do the same.



[0] https://lists.debian.org/debian-devel/2014/05/msg00463.html
[1]
https://alioth.debian.org/tracker/index.php?func=detail&aid=314680&group_id=1&atid=21
[2] https://lists.debian.org/debian-devel/2014/05/msg00464.html
[3] http://sourceforge.net/u/rstuart/profile/






Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-09-28 Thread Russell Stuart
On Sun, 2014-09-28 at 09:33 +0100, Colin Watson wrote:
> On Sat, Sep 27, 2014 at 09:11:44PM -0500, Troy Benjegerdes wrote:
> > Does update-menus really need bash? Why?
> 
> pipefail is actually a fairly useful bashism.

I've attempted to port the many shell scripts I've written over the
years to dash.  The three irritants are:

  - pipefail,
  - local variables, 
  - array variables.

If dash had those features conversion could almost be mechanical.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-09-28 Thread Russell Stuart
On Sun, 2014-09-28 at 16:47 +0200, Guillem Jover wrote:
> > I've attempted to port the many shell scripts I've written over the
> > years to dash.  The three irritants are:
> > 
> >   - pipefail,
> 
> .

That's one of those "scratch my eyes out" solutions.  A more readable
solution is just to save the exit status of each command in a temporary
file.  Given how infrequently the problem arises, it isn't a major
issue.
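
Something along these lines, as a sketch ("false" and "sort" just stand
in for the real producer and consumer):

  #!/bin/sh
  # Record the left-hand side's exit status in a file, since the shell
  # only reports the status of the last command in the pipe.
  status=$(mktemp) || exit 1
  trap 'rm -f "$status"' EXIT

  { false; echo "producer=$?" > "$status"; } | sort > /dev/null
  pipe_status=$?          # exit status of the last command (sort)
  . "$status"             # pulls in $producer

  if [ "$producer" -ne 0 ] || [ "$pipe_status" -ne 0 ]; then
      echo "pipeline failed" >&2
      exit 1
  fi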

> >   - local variables,
> 
> dash does have local variables support.

So it does!  You now have me wondering why I thought it didn't.

> >   - array variables.

No workaround for this one?  Pity.  This is what usually prevents
conversion.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-09-29 Thread Russell Stuart
On Mon, 2014-09-29 at 08:03 +0200, Matthias Urlichs wrote:
> Russell Stuart:
> > 
> > > >   - array variables.
> > 
> > No workaround for this one?  Pity.  This is what usually prevents
> > conversion.
> 
> Well, you could use $ary_len to remember the length of the array,
>   "$(eval "echo \"\$ary_$pos\"")"
> for retrieving values, and
>   val="some random value which probably requires quoting when eval'd"
>   eval "ary_$pos=\"\$val\""
> for assigning to individual members.
> 
> Package that in a couple of helper functions and it looks almost sane. :-/

For some versions of sane I guess.

The major reason for having an array is to be able to go "${array[@]}"
somewhere, and have the quoting automagically work.

Like all successors of the original /bin/sh, dash does have to support
arrays for its argument processing: supporting "$*", "$@", "$#" and
shift, off the top of my head.  You can bend it to your own purposes to
some extent using "set -- val1 val2 ...".

I suspect some think adding arrays is a big change, introducing new
concepts to dash.  But it isn't really.  All it really does is allow you
to have named argument lists in addition to the built-in one.  And most
uses I have found for them are in that vein as well - building up
argument lists for commands, without having to descend into eval/quoting
hell.
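
A small sketch of bending it that way - build the list with set --, then
expand it with "$@" so the quoting of each element survives intact:

  #!/bin/sh
  set --                                   # start with an empty list
  set -- "$@" --output "/tmp/out file"     # spaces stay inside one argument
  set -- "$@" --verbose
  printf '<%s>\n' "$@"                     # stand-in for the real command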




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-09-30 Thread Russell Stuart
On Tue, 2014-09-30 at 13:08 +0200, Thorsten Glaser wrote:
> You really really should be looking at replacing any
> ash variant with mksh. It’s not that much bigger (at
> least if you add -DMKSH_SMALL to CPPFLAGS and build
> with klibc or dietlibc or so), but much saner.

I am not a fan of any particular variant of *sh.  They are all horrible
computer languages.  Nothing over a couple of lines should be written in
them, as they are idiosyncratic and error prone, and they make basic
software engineering processes (like unit tests) difficult.

The only reason I ported things to dash is /bin/sh is now linked to it,
which in my view makes it the standard shell.  Every script starting with
#!/bin/sh must work with it.  If I can't get it working because of a
missing feature like arrays then I have to change it to #!/bin/bash or
something, and add an explicit dependency.

It's the additional dependency I find irksome.  If I am going to add
one, whether it is to bash, mksh or any other variant doesn't really
matter - they are all equally bad as programming languages.  If I wasn't
so lazy, I'd move them to a real computer language.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-02 Thread Russell Stuart
On Thu, 2014-10-02 at 11:48 +0200, Thorsten Glaser wrote:
> This is wrong. Every script starting with #!/bin/sh must work with a
> POSIX shell that supports “local” and “echo -n” (Policy §10.4).

Solid, working software is hard enough to produce.  A policy requiring
something you can't test for makes it near impossible.

IMO, if Debian has decided that in the default case /bin/sh ==> dash,
then the policy should say "#!/bin/sh" scripts must work with dash.  It
then becomes trivial for developers to test their code conforms with
policy.  If we allow /bin/sh to be linked to other shells, policy should
say those shells must implement all the features /bin/dash implements, so
that any script that works with dash must also work with them.

As it stands, the one thing you can guarantee we will get from our
policy saying "#!/bin/sh" scripts work with a shell that does not exist
and can't be tested against is scripts that have never been tested
against that policy.

If Debian really wants to implement the policy as described, then it
should do the work required to produce robust software that conforms
with it.  In this case that would mean producing a shell that behaves as
described, which we make /bin/sh by default.  Perhaps a flag to dash
stripping all of the features not described in SUSv3 would
suffice.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-02 Thread Russell Stuart
On Fri, 2014-10-03 at 00:04 +, brian m. carlson wrote:
> The shell you're describing is posh.  It implements exactly those
> features, and nothing more.

You've got me to look at posh.  Thanks for that.

So we do have a shell that developers can use to test their scripts
match Debian policy.

> Unfortunately, some developers have outright refused to make their
> software using /bin/sh work with posh, even when provided with a patch
> (e.g. #309415), to the point that last time I tried to use posh
> as /bin/sh, the system wouldn't boot.

If I understand what you are saying correctly, this means policy 10.4 is
mostly ignored.  Worse, if you try to configure a system that conforms
closely to that policy it doesn't boot.  This is the sort of thing
that makes open source programmers look like rank amateurs.

If we want Debian policy to reflect reality and make it easy for
developers to test their scripts conform to policy, then it should say
"#!/bin/sh" scripts must work with dash.

As an aside, I'm not sure why the preference for posh over dash, given:

$ size $(which posh)
   text    data     bss     dec     hex filename
 123260    4868    2920  131048   1ffe8 /bin/posh
$ size $(which dash)
   text    data     bss     dec     hex filename
 108376    5192   11240  124808   1e788 /bin/dash





Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-02 Thread Russell Stuart
On Thu, 2014-10-02 at 22:50 -0300, Henrique de Moraes Holschuh wrote:
> Debian policy mandates that /bin/sh implement a _superset_ of POSIX, which
> is out of scope for "posh".

Regardless, posh implements all the additional features mandated by
10.4:

  echo -n, if implemented as a shell built-in, must not generate a
  newline.

  $ posh -c "echo -n x"
  x$

  test, if implemented as a shell built-in, must support -a and -o as
  binary logical operators.

  $ posh -c "test -z '' -a -z '' && echo x; test -z '' -a -n '' && echo y";
  x
  $

  $ posh -c "test -n '' -o -n '' && echo x; test -n '' -o -z '' && echo y";
  y
  $

  local to create a scoped variable must be supported, ...

  $ posh -c 'a=a; x() { local a b c=C; a=A; echo $a$c; }; x; echo $a$c'
  AC
  a

   The XSI extension to kill allowing kill -signal, where signal is
   either the name of a signal or one of the numeric signals ...

  $ posh -c 'kill -KILL $$'
  Killed
  $   

   The XSI extension to trap allowing numeric signals must be
   supported. In addition to the signal numbers listed in the
   extension, which are the same as for kill above, 13 (SIGPIPE) must
   be allowed.

$ posh -c 'trap "echo sigterm" TERM; kill -TERM $$'
sigterm
$





Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-02 Thread Russell Stuart
On Thu, 2014-10-02 at 18:05 -0700, Russ Allbery wrote:
> Up until dash changes, and then you have absolutely no idea what to do
> with that sort of policy.  There's a reason why no standards document I've
> ever seen says something like this.  The ISO C standard isn't going to say
> that anything that compiles with gcc is valid code.

Correct.  But we aren't ISO C or POSIX, and we aren't in the
standards business.  We are in the business of producing working,
reliable systems.  Yes, we do use standards to do that; we even write a
few of our own.  But unlike ISO and POSIX they are for our internal
consumption.  The thing we create and release to others is the archives,
and it would be a long stretch to call them a standard.

Onto your "then you have absolutely no idea what to do with that sort of
policy" comment.  

Why have policy 10.4 at all?  Presumably to make producing a reliable,
robust Debian easier.  But 10.4 doesn't prescribe a standard way of
doing that or of checking it has been done.  Unsurprisingly, adherence is
at best sporadic.

So now to the answer to your "then you have absolutely no idea what to
do with that sort of policy" comment.  This is what you do.  You, as a
developer, create unit tests for the scripts using /bin/posh (or whatever
shell implements the policy), and if you do your job well you deliver a
system that works reliably under a prescribed set of conditions (ie posh
is installed as /bin/sh).  You have a reasonable chance of this remaining
so because hopefully the unit tests are run every time the package
is built.  The standard may not be well enough defined for your tastes,
but in my world repeatable and reliable take a far higher precedence.

I can also tell you how "what to do with that sort of policy" applies to
the current policy.  What is done is we have the occasional "spat" about
bashisms and discussion on shell syntax.  Having bashisms (or not) is
NOT the same as working.  Yet, that's about the best we can do.  Thus
the policy is not robustly and objectively enforced (and worse, can not
be).  brian's observation should come as no surprise: when you configure
Debian with /bin/sh conforming strictly to 10.4, Debian doesn't boot.
Surely you aren't going to say this is an example of policy working
well?

AFAICT defining posh as the shell all "#!/bin/sh" scripts must work with
is better in every way, and wherever possible all Debian policy should
be like that.  Which is to say it should be obvious what you have to do
to comply, it should be automatically verifiable you have complied, and
if you comply it should help in ensuring Debian is robust and reliable.
The current policy 10.4 fails the first two tests.  Since it could be
re-worded so it doesn't, there is no doubt in my mind it could be
better.

This isn't to say we should delete the statements on what we expect of
the shell.  They are excellent documentation.  I guess in summing
up, my main point is good documentation doesn't necessarily make good
policy.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-03 Thread Russell Stuart
On Thu, 2014-10-02 at 20:43 -0700, Russ Allbery wrote:
> A lot of people miss this about Policy 10.4.  People seem to think that
> Policy 10.4 is about requirements for shell scripts.  But it's just as
> much a standard for /bin/sh.

You wrote it, so I guess you get to say what it means.  But if you
hadn't said so just now, I'd be using the "People"'s interpretation
rather than yours.  10.4 is after all titled "Scripts", not "a sh
language definition" or some such.  Where it does define the shell
language, it does so in this context: "Scripts may assume
that /bin/sh implements ...".  So to me it is addressing itself to
script writers, not sh language designers, and to the extent it describes
the shell language standard it does so purely to inform the script
writers what environment they can expect when they use "#!/bin/sh".

> This was important when we were debating switching from bash to something
> else, and needed to be clear about what behavior the rest of the
> archive could expect /bin/sh to always satisfy.

It looks to me like you are re-writing history.  Right back in
2.1.3.2 the policy was pretty much what it is now.  "#!/bin/sh" scripts had
to be POSIX - albeit with fewer extensions than we allow today.  If
policy was so good at informing the debate about what "#!/bin/sh"
scripts did, I'd be surprised there was much in the way of a debate.
You could have switched to any shell that implemented the POSIX subset
with very little pain.

So this statement of yours: "we ... needed to be clear about what
behavior the rest of the archive could expect /bin/sh to always satisfy"
is puzzling, because there was pain.  Everyone knew what /bin/sh did -
it was defined in the bash man page.  Since bashisms worked just fine,
and evidently, regardless of what policy said, no one cared whether you
used bashisms or not, they were used with gay abandon.  If as you
suggest Debian relied on policy for a clear description of how /bin/sh
scripts behaved, it was in for a rude shock.

It didn't, of course.  Debian got the clear description it needed by
writing automated checkers like checkbashisms, followed by threats to
change /bin/sh away from /bin/bash, mass bug filings, and finally in 2011
doing it.

I am not criticising any part of this process.  Standardising its shell
language was a huge undertaking for Debian, and it pulled it off almost
without a hitch.  It's the sort of thing that makes me proud of the
project.  What I am questioning is your assertion that policy that wasn't
verified, let alone enforced, was somehow key to it.

> I think people often don't realize what Policy is actually about, or what
> it can (and can't!) accomplish.  Policy is more a way for us to coordinate
> our work and agree on what we're actually talking about than an enforced
> set of rules that are followed.

Again, you've lost me.  Yes, policy that is followed and policed is very
useful.  It is very nice to have man pages for almost everything.  For me
it's essential I can rely on Debian's copyright policing.

But to use this example again, you are saying that agreeing
"#!/bin/sh" scripts shall be POSIX shell scripts, and then largely
ignoring it for 10 years because it is unverifiable, was helpful to the
project?  I don't see how it saved anyone any time.

> So yes, there's a lot of Policy that is ignored in practice.  You can take
> various attitudes towards that.  You can view that as meaning Policy is
> (at least partly) worthless because we're not enforcing it.  Or you can
> decide that Policy is more aspirational than descriptive.  Or you can
> focus on the change Policy has helped make happen.  I think all those
> viewpoints are accurate to a degree.

OK, but realise you are making life hard for some of us here.  Perhaps
you, as one of the policy authors, know which bits are hard and fast
rules and which bits are purely aspirational.  I don't.  I guess if we
less knowledgeable folk find ourselves disagreeing with some policy,
we can try assuming it's aspirational and ignore it.  Yes, it made me
cringe to write that.  But you are telling me it is the way Debian works
now.  And I get the impression you think this is a good thing.

> As bad as you think the compliance with Policy 10.4 is right now, I
> guarantee that the prospects of being able to use something else as 
> /bin/sh would be way worse if we did what you suggest.

Ah!  And here is our fundamental point of difference.  It is beyond
me how you could think that could be so.  So much so that I'm doubting
my comprehension abilities.

I do have this right - the goal is to ensure "#!/bin/sh" scripts use a
standardised subset of the shell languages out there?  That way, should a
user change to a different /bin/sh, he can be reasonably sure it will work
if it implements this well-defined subset.  (And yes, I acknowledge the
subset is well defined in the current policy - well done.)

To achieve that end you are proposing all we do is ask developers nicely
to use that subset.  The alternative I 

Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-06 Thread Russell Stuart
On Fri, 2014-10-03 at 09:20 -0700, Russ Allbery wrote:
> Russell Stuart  writes:
> > I looks to me like you are re-writing history.
> 
> I'm not sure how you meant this, but to note, this sentence made me very
> sad, since it felt like you believe I'm being intentionally dishonest with
> you.  I'm really not, although I'm not sure how to convince you of that.

You convinced me I was wrong a few sentences later.

> I thought that's what you were getting at when talking about
> testing

Not really.  I'm about documentation reflecting reality.  Think of
putting an electrical component whose documentation says it's rated to
200 degrees on a motherboard, only to find it fails at 190.  When you ask
why, is "well, we design it for 200, but only test it to 180" a satisfying
answer?

You have convinced me that in this case it's going to have to be that
way, my prejudices notwithstanding.  I've rationalised the pain away
by deciding it's not so bad, as any competent programmer could see that it
is only tested to 190 regardless of what the standards say.

> Oh!  I didn't realize or internalize that you were proposing switching the
> default shell to posh from dash.  Yes, that would certainly improve our
> compliance with Policy considerably.

It's attractive because it makes Policy more relevant - but only because
of that.  Now that I think about it, switching pbuilder to posh would be
almost as good.  Any additional pain would not be worth the effort.

If Debian was going to switch to another shell, I'd vote for the one in
busybox.  That's because on desktop machines it doesn't matter, but on
embedded architectures it does - and they use busybox.  So switching to
busybox would extend Debian's reach.

> If the speed is comparable

Here are two benchmarks.  I did others. These demonstrate the extremes:

$ time dash -c 'i=0; while [ $i -lt 1000 ]; do echo -n; i=$(($i + 1)); done'
real    0m16.695s
user    0m16.684s
sys     0m0.000s
$ time posh -c 'i=0; while [ $i -lt 1000 ]; do echo -n; i=$(($i + 1)); done'
real    0m41.899s
user    0m41.872s
sys     0m0.000s
$ time busybox sh -c 'i=0; while [ $i -lt 1000 ]; do echo -n; i=$(($i + 1)); done'
real    0m27.938s
user    0m25.160s
sys     0m2.760s
$ time bash -c 'i=0; while [ $i -lt 1000 ]; do echo -n; i=$(($i + 1)); done'
real    1m7.971s
user    1m7.928s
sys     0m0.000s

$ time dash -c 'x="aaa"; t() { local x=$1; echo $x; }; while [ "${x%b}" = "${x}" ]; do y=; while :; do z="${x#b}"; [ "$z" != "$x" ] || break; y=a$y x=$z; done; x=$(t ${y}b${x#a}); done'
real    0m1.577s
user    0m0.204s
sys     0m0.500s
$ time posh -c 'x="aaa"; t() { local x=$1; echo $x; }; while [ "${x%b}" = "${x}" ]; do y=; while :; do z="${x#b}"; [ "$z" != "$x" ] || break; y=a$y x=$z; done; x=$(t ${y}b${x#a}); done'
real    0m2.232s
user    0m0.316s
sys     0m0.536s
$ time busybox sh -c 'x="aaa"; t() { local x=$1; echo $x; }; while [ "${x%b}" = "${x}" ]; do y=; while :; do z="${x#b}"; [ "$z" != "$x" ] || break; y=a$y x=$z; done; x=$(t ${y}b${x#a}); done'
real    0m2.104s
user    0m0.284s
sys     0m0.516s
$ time bash -c 'x="aaa"; t() { local x=$1; echo $x; }; while [ "${x%b}" = "${x}" ]; do y=; while :; do z="${x#b}"; [ "$z" != "$x" ] || break; y=a$y x=$z; done; x=$(t ${y}b${x#a}); done'
real    0m4.849s
user    0m0.892s
sys     0m0.740s
$

It looks like moving to dash sped Debian up a little.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-12 Thread Russell Stuart
On Sun, 2014-10-12 at 22:05 +0200, Florian Weimer wrote:
> Array variables practically imply arithmetic evaluation, and this is a
> shell feature which is rather difficult to use correctly because
> compatibility with other shells encourages both recursive evaluation
> and access to the full shell language in a few corners.

I don't understand this.  I've had legions of bugs in shell scripts over
the years.  The number caused by arithmetic evaluation is tiny.  The
only trap I can recall is it being restricted to 32 bit signed
arithmetic, and that not being enough for times.

> If you need array variables, it's likely that the script has grown so
> complex that switching to another language is a good idea.

Not really.  One of, if not the primary function of the shell is to run
other programs.  One of the things you have to do when running programs
is construct and process argument lists.  An array variable is the only
sane way to represent an argument list in the shell scripting language.
The only other option is horrid hacks using `eval ...`.

Also, while I agree that shell script is a terrible language and nothing
over 100 lines or so should be written in it, real life doesn't always
pan out the way you plan.  It's probably true that any shell script I've
written started out under 100 lines, or at least I'm going to pretend I
thought they would when I started writing them.  But the things are
like dust bunnies in a cupboard.  They grow while you aren't paying
attention.

So now I have one 5000 line shell script, and a few around the 1000 line
mark.  No, this is not something I'm proud of, but I've got better
things to do in my life than rewrite 5000 line programs that have been
bug free for years.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-12 Thread Russell Stuart
On Mon, 2014-10-13 at 06:23 +0200, Dominik George wrote:
> foo='x[$(rm -rf /)]'
> echo $(( foo ))
>
> Guess when the array index is evaluated? Now mind that it could be 
> user-provided.

In dash it isn't executed, which means on Debian at least it's mostly
harmless.  That's another bouquet for dash.  It's almost enough to make
you forgive the reason it isn't parsed: dash's buggy argument parser.




Re: bash exorcism experiment ('bug' 762923 & 763012)

2014-10-16 Thread Russell Stuart
On Wed, 2014-10-15 at 23:36 +0100, Ian Jackson wrote:
> Actually, the problem is indeed in policy.  In its resolution of
> #539158 the TC decided unanimously (but unfortunately slightly
> implicitly) that printf ought to be provided by our /bin/sh.
>
> Unfortunately the policy has not been properly clarified.  This leaves
> us in the somewhat unsatisfactory situation where our real
> compatibility requirement is de facto rather than de jure.
> 
> As the maintainer of a minority shell, Thorsten has the most interest
> in regularising this.  Perhaps Thorsten would like to propose a
> suitable policy wording (with a view to changing posh to match).
> 
> Obviously that wording ought to be consistent with the TC's decision
> in #539158 - ie, it should specify printf as a builtin.

The arguments about printf in #539158 also apply to '['.  POSIX does
not say '[' must be a built-in (in POSIX's terminology, part of the
'Special Built-In Utilities').  Thus if the shell didn't implement '['
udev would fail, since it uses [ and sets PATH to /bin:/sbin.

The reality is that in a POSIX (or a minimal Policy 10.4) world shell
scripts must have access to the bulk of the stuff that is both covered in
the man1p pages and required in Debian.  It turns out only three commands
fall into that category: [, printf, and test.

And yes, to me the obvious fix is to say in policy that /bin/sh must have
those commands as builtins.
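
A quick way to check what a given shell actually provides as builtins
(on current dash all three report as shell builtins):

  $ dash -c 'type [ printf test'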




Re: GPL-3 & openssl: provide a -nossl variant for a library

2014-10-22 Thread Russell Stuart
On Thu, 2014-10-23 at 12:46 +1100, Brian May wrote:
> On 23 October 2014 04:03, Russ Allbery  wrote:
> It's usually more immediately useful to just
> upload the package with an explanation of the issues in
> debian/copyright
> and see what the ftp-master team says.
> 
> 
> This is probably getting off-track, however I have a package that has
> been stuck in NEW for over a month because ftp-master won't give
> feedback on what they see as a legal issue with my package. I
> disagreed with their verdict, gave good reasons, indicated that
> another package already in Debian would have the same issues, and got
> no response.

Yeah, that's been my experience too.  I waited a week for a reply, but
none was forthcoming.  I took that as a "no".  They are busy people
after all, and probably don't have time to engage in what could be long
discussions.  Particularly now when everyone is rushing to get in before
the freeze.

I wasn't happy at the time, but in retrospect it seems like a reasonable
process to me.  I assume they are as consistent as they can be, so their
decisions reflect Debian's current consensus (written or otherwise) on
what is allowed into Debian.  If you disagree strongly enough to want a
debate that changes it, that debate should be held here on debian-devel
where everyone can participate.

You don't need a debate or a reply to reach a compromise - just
re-submit with your compromises.  It has the advantage of forcing them
to give you an answer :D.  If you aren't prepared to compromise, either
have the debate or drop the package.





Re: Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files

2014-10-29 Thread Russell Stuart
On Wed, 2014-10-29 at 19:39 -0700, Russ Allbery wrote:
> But we shouldn't confuse that with the right way to check 
> for security updates for Debian systems.  People who
> care about security updates need to be subscribed to
> debian-security-announce and reading the DSAs.

If there are two "ways" and one requires a human and the other is
completely automatic, all other things being equal for me the "right"
way is the automated one.  I know my limitations - not being
conscientious when doing manual repetitive labour is one of them.

> It seems to me that if you want to lower the chances of a downgrade attack
> for your systems, setting the validity period on your systems is exactly
> the tool that you need.  There's no need for anything to change on the
> server side for you to get that protection.

Yes, I agree.  But for me apt.conf/Max-ValidTime is useless unless the
release file is guaranteed to be updated more frequently than its
"Valid-Until:" header implies.  Is it, and is that undertaking
documented somewhere?
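For reference, this is the knob I mean, from apt.conf(5) (the file name and
the two day figure below are just an illustration):

  // /etc/apt/apt.conf.d/99max-valid-time
  // Refuse a Release file more than two days old, whatever Valid-Until says.
  Acquire::Max-ValidTime "172800";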




Re: Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files

2014-10-29 Thread Russell Stuart
On Wed, 2014-10-29 at 21:58 -0700, Russ Allbery wrote:
> Also, this means that you completely miss security advisories that *don't*
> involve changing a package in the archive, like "this thing is a disaster,
> so we're pulling it from the archive entirely and suggest you stop using
> it."

If it is that much of a disaster that it warrants pulling a package
from stable, surely a little more notification than an email to a list
most people don't monitor would be warranted?  Something like replacing
it with a package that sends email daily to root explaining the
situation would be the very least you could do.

But then the bash function bug made my local TV news, and bash remains
in the archive.  If it warranted pulling a package from stable I'd wager
you would have to be living under a rock not to hear about it.




Re: Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files

2014-10-29 Thread Russell Stuart
On Thu, 2014-10-30 at 01:40 -0400, Michael Gilbert wrote:
> There are also end-of-life announcements, which maybe the
> debian-security-support package now addresses in a somewhat automated
> fashion.

I wasn't aware of that.  Thanks.

> Anyway, it is entirely understandable that reading can be hard, but at
> a minimum the truly security-conscious need to be to do so.

Yes, fine.  But a truly security conscious distribution doesn't depend
on its users being truly security conscious.

And on that note, I applaud the Debian Security Team for  demonstrating
their awareness of this by releasing debian-security-support.




Re: Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files

2014-10-30 Thread Russell Stuart
On Thu, 2014-10-30 at 16:06 +0100, Wouter Verhelst wrote:
> On Thu, Oct 30, 2014 at 03:59:33PM +1000, Russell Stuart wrote:
> > Yes, fine.  But a truly security conscious distribution doesn't depend
> > on its users being truly security conscious.
> 
> I would hope Debian never becomes a "truly security conscious"
> distribution by that definition. It implies the distribution thinks it
> knows better than its users what the right security trade-off is, and
> that way lies disaster.

You are reading way too much into it.  It's meant to express something
uncontroversial.

There is the spectrum ranging from: "The default install priorities
should be (...put your fetishes here - eye candy, small, have
everything, [not] run systemd...).  If the user wants security they can
customise it later".  To: "The default install should be as secure as
possible.  If the user wants to weaken that in favour of (...put your
fetishes here...), they can customise the system later".  IMO, on the
spectrum Debian must be heavily biased towards favouring security.

So it just expresses what I presume to be the consensus.  As such I
really should not have wasted your time by writing it, but there was an
element of conceit involved - I was taken with the turn of phrase.




Re: Alioth tracker

2014-05-11 Thread Russell Stuart
On Sun, 2014-05-11 at 21:38 +0200, Tollef Fog Heen wrote:
> I'm not disagreeing, I think we're providing a much poorer service level
> for Alioth than what we should do.  Sadly, I don't have the motivation
> to spend much time there nowadays.

I have for years hosted my own projects in a minimalistic fashion [0],
and as a consequence have been nagged to provide "modern amenities" like
issue trackers and a DVCS.  The obvious solution would be to move to free
hosting like sourceforge, but sourceforge is closed source.  Then I
joined Debian.  Alioth seemed like a natural fit, so I started moving my
projects to it.

But now I've decided that's not such a good idea.  Not because of some
of the issues mentioned in this thread - like DVCS support or lacking
features in Fusion Forge.  I can work around them.

My problem is Alioth isn't reliable enough.  In the week or so I used it:

a.  As Ole mentioned, there are 180 odd open support requests, dating
back 7 years.  It's not that things aren't being done - Stephen Gran
in particular appears to regularly attend to the list and close
issues.  However, there should be no support requests open for more
than a few weeks (and ideally at most a couple of days).  Many of
these old "support requests" are bugs or features, and there are
separate trackers for those.  In other words, the support list needs
to be triaged.  If after triage there are still support requests
more than a month old, the clearly Alioth needs extra admin
manpower.  Right now it is difficult to tell if manpower is really
the issue.

b.  For a while when I was using it, it was horribly slow (as in taking
minutes to send a response to a HTTP request).  I could not see
why.  After a day or so the issue went away and it became usable
again.

c.  Then I started getting mysterious failures.  After a bit of digging
around I noticed /tmp was at 100%.  Someone fixed this after a few
hours.

d.  At the same time I noticed disk space: it is sitting at 94% usage.
The amount of disk used is under 600GB.

e.  I suspect running out of disk space on /tmp caused a number of
other issues for me [1].  The details aren't relevant here.  What is
relevant is in order to diagnose what was going on I poked around the
file system, and noticed a number of other much older projects were
suffering from the same issues.  Since this means among other things
they can't use the DVCS, presumably they had been abandoned.

f.  After seeing all this, I decided I had better do some "due
diligence" and what backup arrangements were in place.  As far as
I can tell there aren't any.

At this point I reluctantly decided I had to use what I was trying to
avoid - a commercial provider running closed source.

If three things changed on Alioth I would move back.  They are:

A.  Solve the disk space problems.
B.  A backup system.
C.  Support list triaged, and its length viewed as a KPI.

If the Alioth team thinks I could be useful in getting these things done,
I'd be happy to become part of it.  Even if they don't, I'd be happy to
donate 2 x 4TB drives so the disk space issue can be fixed - assuming
there are remote hands available to fit them.

[0]  http://www.stuart.id.au/russell/files
[1]
https://alioth.debian.org/tracker/index.php?func=detail&aid=314680&group_id=1&atid=21




Re: Why not 03 ?

2014-05-29 Thread Russell Stuart
On Fri, 2014-05-30 at 07:21 +0900, Charles Plessy wrote:
> For a lot of scientific packages, -O3 is chosen by the upstream 
> author, and I always feel bad that if we make the programs slower 
> by overriding it to -O2, it will reflect poorly on Debian as a 
> distribution for scientific works.

In particular -O3 turns on auto-vectorisation.  It can provide a big
speed up to programs that can take advantage of it - and yes many
scientific programs fall into that category.  Big as in 300% [0].  So
you are correct in saying not turning it on will make Debian look slow
compared to a system that takes advantage of it.

Unfortunately the instructions needed to get the speed up vary by CPU.
Not only is AMD different to Intel, Intel turns them on and off
depending on the intended market.

This breaks Debian's "One binary rules them all" model unless the
upstream has gone to extraordinary lengths.  As in providing multiple
compiled versions of the same code path, and choosing the best one at
run time based on CPU model.  Projects that do that generally use hand
crafted assembler, usually inlined in C code.  Note that means they will
run fast without -O3.

As others have pointed out -O3 turns on optimisations that help on some
architectures and hinder on others.  Vectorisation sort of falls into
that category: hinder becomes "fail with a SIGILL".  That doesn't happen
normally because of another fail safe: even with -O3 gcc only generates
instructions the target CPU can execute [1].  Debian tells gcc to
generate code for a generic CPU.

Bottom line: the vectorisation provided by -O3 can provide big speed ups to
some scientific programs, but it is ineffective on Debian because by
necessity it tells gcc to compile code for the lowest common denominator CPU
which doesn't have the necessary instructions.


[0] http://felix.abecassis.me/2012/08/sse-vectorizing-conditional-code/
[1] See the -march option of gcc.  In particular, -march=native.
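To make the trade-off concrete, a sketch of the two compiles involved (the
file name is illustrative, the flags are from gcc(1)):

  # Debian's generic target leaves the vectoriser little to work with:
  gcc -O3 -march=x86-64 -fopt-info-vec -c hotloop.c
  # The same code tuned to the build machine's own SIMD units:
  gcc -O3 -march=native -fopt-info-vec -c hotloop.c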




Re: policy regarding redistributable binary files in upstream tarballs

2014-11-20 Thread Russell Stuart
On Thu, 2014-11-20 at 13:46 +0800, Paul Wise wrote:
> On Thu, Nov 20, 2014 at 1:14 PM, Ben Finney wrote:
> 
> > But a growing number of upstreams disagree, so those upstreams are
> > likely to be actively opposed to your recommendation to patches 
> > which remove non-source files from the VCS repository.
> 
> I wonder about the basis for that disagreement.

In the GNU model the tarball doesn't just provide sources.  It is a
complete packaging system.  It checks for build prerequisites, does the
build, checks for install prerequisites, does the install, and does
uninstall.  It does all that because upstream wants people to use their
project - regardless of whether their distro packages it.  Combine that
with wanting to keep the prerequisite list small, and including things like
minified jquery libraries is exactly the right thing to do.

> Putting all third-party libraries into a separate place (tarball,
> repo, branch or dir).
> 
> Putting all pre-built files into a separate place (tarball, repo,
> branch or dir).

Those suggestions may make things easier for Debian, but they do so by
making life harder for upstream's other users.  That isn't going to
happen, or at least for me it wouldn't.  If my DD alter ego asked my
upstream ego to make things much harder for his other users, he would be
politely told where he could shove his suggestion.

Personally, I think Debian passing judgement on what is in upstream
pristine tarballs is over the top.  It's upstream's original work, not
Debian's.  Ideally we are just mirroring it.  (That we often aren't is
part of the problem.)  We should be happy enough to accept their
assurances on having obtained whatever licenses they need for what is in
them. [0]

Admittedly this meshes well with my experience that they are often
fairly lax about what they put in those tarballs.  Their "make
distclean" scripts are often not as good as they could be, which means
all sorts of crap is left lying around. Vim .swp files and compiler
intermediates spring to mind.  I have no idea what license would apply
to a .swp file, but I do know that for all practical purposes it doesn't
matter and I'd rather Debian didn't insist I find out. [1]

That's just me being lazy I guess.  But there is a deeper issue.  For me
it is vital there be an audit trail from the pristine upstream tar ball
to the binaries we distribute. [2]  In pursuing licensing purity we have
been gradually destroying what little of that audit trail we used to
provide.  To put it bluntly: as a DD I do care about licensing, but when
it comes to my day job, where I have to ensure hundreds of computers are
reliable and secure, the licensing of tarballs I don't download let
alone use takes a distant second place to security.  So in my view we
are making life difficult for our users on the altar of FSF style
idealism.

Maybe if we were forced to choose between the two that would be the right
choice to make.  But technically there are ways to be FSF idealists
and provide something akin to an audit trail.  So we aren't forced to
choose - but we just deprive our users of the audit trail anyway.  That
is bad.

What follows is something I am sure has been covered before by someone
somewhere, before I started following the project in earnest.  I can't
find it - so I apologise in advance for the repetition.

I started my Linux life as a RedHat user, and I wrote RPM packages for my
own use.  Then about a decade ago I moved to Debian, and of course
started writing Debian packages.  During the transition I was struck by
how much better Debian's binary packaging was compared to RPM, and yet
RPM's source packaging was so much better than Debian's.

To explain why I'll step back a bit.  If I were writing a book on how to
design a packaging system it would start by introducing these 5 steps:

A.  The process is ideally [3] a pure function.  Its input is the
pristine source.  Its output is the binary packages.  So the 1st
step is to obtain the input - the pristine source, and record it
in the output so anyone else can reproduce what you have done.

B.  These inputs are fed to the packaging process: a program, written by
the packager, that implements the function doing the transformation.
In Debian, this is debian/rules.  This function is split into
standardised steps.  The second step is unpacking sources from
whatever format they are in into a build directory. [4]

C.  The third step is to tailor the pristine sources to match the
requirements of the distribution.  This is done in a standardised
way: by applying a series of patches in a well defined format, each
with a clearly documented purpose.

D.  Run the build process as supplied by upstream, but perhaps modified
by step (C).

E.  Collecting the output of the build process into binary packages. [5]

And that is exactly what RPM did over a decade ago.  Debian mashed
steps (A), (B) and (C) into what could only be described as a mess.

Time has moved on, and things

Re: policy regarding redistributable binary files in upstream tarballs

2014-11-21 Thread Russell Stuart
On Fri, 2014-11-21 at 17:39 +0800, Paul Wise wrote:
> On Fri, Nov 21, 2014 at 5:25 PM, Matthias Urlichs wrote:
> 
> > These days, they might just push their repo to github and let its machinery
> > generate the tarballs, which TTBOMK aren't guaranteed to be 1:1 identical to
> > another tarball of the same commit that's downloaded a week later. Or a
> > year.
> 
> I tried downloading a tarball just now and got identical results. I
> guess they are just using git archive, which produces identical
> results for me too.
> 
> https://github.com/whohas/whohas/archive/0.29.tar.gz

It doesn't matter whether git supplies a tool that provides reproducible
tar balls.  If there was a target in debian/rules responsible for it
something like this would work:

pristine-source:
rm -rf debian/pristine-source.tmp
mkdir debian/pristine-source.tmp
git clone http://... debian/pristine-source.tmp
cd debian/pristine-source.tmp && \
  git checkout $(get commitish from debian/changelog somehow)
dpkg-pristine-source --format=git pristine-source.tmp

The spec for dpkg-pristine-source is roughly:

- Inputs: source directory(s) and their formats.
- Outputs:
  - .orig.tar, and
  - hashes written to debian/pristine-source.hashes

Which is not what I said before, but this is WIP.

Now that I think about it, dpkg-pristine-source is possibly overkill.
Any repeatable process would do.  Even:

  find debian/pristine-source.tmp \
    -path debian/pristine-source.tmp/.git -prune -o ! -type d -print | \
    LANG=C sort >debian/x
  cpio -o -H ustar <debian/x | gzip -n \
    >../$(sed 's/^\(.*\) (\(.*\)).*$/\1_\2/;q' debian/changelog).orig.tar.gz
  rm -f debian/pristine-source.hashes
  xargs -d '\n' <debian/x cat | sha256sum | \
    sed s/-/URL/ >debian/pristine-source_sha256.hash

All of the above was written after a couple of glasses of wine and has
never been tested.  Regardless, I hope it demonstrates the point: it is
possible to compute an immutable hash if upstream provides a
reproducible way to retrieve the same sources.  As far as I know every
SCM post CVS does. 

That was a statement of the obvious I guess.  But it shows what I am
proposing is not pie-in-the-sky.  It's achievable, and not even that
hard.
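As a concrete sketch of "repeatable retrieval", using the whohas tarball
mentioned earlier in the thread (the tag and prefix are whatever upstream
publishes):

  # git archive is deterministic for a given tree, and gzip -n omits the
  # timestamp, so the resulting hash is stable for everyone who runs this.
  git archive --format=tar --prefix=whohas-0.29/ 0.29 | gzip -n | sha256sum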




Re: Who gets an email when with bugreports [was: Re: Unauthorised activity surrounding tbb package]

2015-01-19 Thread Russell Stuart
On Mon, 2015-01-19 at 10:03 +0100, Tomas Pospisek wrote:
> Am 19.01.2015 um 02:03 schrieb Ben Hutchings:
> > No, this would turn the BTS into a (worse) spam vector.
> > 
> > But the acknowledgement mail should tell you how to subscribe, if you
> > aren't already subscribed.
> 
> But isn't subscribing participants "natural"?

It may be natural, but IMO you are underestimating the spam vector
problem.

Debian's bug submission mechanism does not try to verify you control the
email address you are submitting from.  Most other bug tracking systems
do such authentication, usually by requiring you to create an account.
Since there is no verification it becomes trivial to sign someone up to
1000's of bugs using a script.

Treating every bug submission as a subscribe request (by putting a
subscribe link in the ack) is one compromise. (I am sort of surprised
that doesn't happen already.)  Automatically subscribing a DD to any bug
he sends a signed message to is another.

I am partial to the latter, even though it is a partial solution.  It
encourages DDs to sign their bug reports.  IMHO anything we can do to
encourage DD's to sign their emails to the project improves our
security.




Re: Who gets an email when with bugreports [was: Re: Unauthorised activity surrounding tbb package]

2015-01-19 Thread Russell Stuart
On Mon, 2015-01-19 at 16:57 -0500, Michael Gilbert wrote:
> Isn't the spam vector already wide open for
> nn-subscr...@bugs.debian.org, which isn't much (ab)used today?
> 
> I fail to see how any of the discussed changes open an abuse vector
> that doesn't already exist.

OK, so let me help you see.

The vector you are pointing to doesn't exist.  You can _not_ subscribe
to a bug by sending email to -subscr...@bugs.debian.org.  You
subscribe to a bug by sending an email to an address that looks like
this:

  
701234-subyes-8aba1368a9ac33362ea1f68c28446c15-65bf3bd3886fb8abfe59d40709c84...@bugs.debian.org

I presume this "invite" address is unforgeable (because Ian Jackson's
expertise is in crypto, and he said earlier he designed the system).

Sending an email to -subscr...@bugs.debian.org just asks the system
to send an invite containing such an address to someone.  I'm not sure
what email address gets the invite - it could be the envelope MAIL FROM,
or the Reply-To, or the From.  But really "who" doesn't matter.  All that
matters is that only a person controlling an email address is able to
subscribe it to a bug, not some random noob.

For what it's worth, the invitation contains the full text of the
subscription request, including all the RFC5322 headers.  If it was
someone doing something unpleasant it gives you some hope of tracking
them down, or blocking them.

In other words the current system contains robust defences against such
an attack.  All I (and I presume Ben) are saying is removing those
defences is not a good idea, given it's easy enough to design a system
that keeps them.  Currently most of the auto subscription proposals
appearing here do remove them.




Re: Who gets an email when with bugreports [was: Re: Unauthorised activity surrounding tbb package]

2015-01-21 Thread Russell Stuart
On Wed, 2015-01-21 at 21:10 -0500, Michael Gilbert wrote:
> So anyway, nn-subscribe can be used to spam confirmation messages
> currently, and general mail to the bts from an unknown address will
> end up doing the same, but it's basically a non-issue because it's a
> rather uninteresting thing to do for anyone that might consider
> wanting to do it.

I don't know how interesting it would be on an absolute scale, but it
certainly would be "more interesting than it is now" if we remove the
authentication we have.

The reason is all that happens now is you get one unwanted email and
that is the end of it.  In particular the attacker can't force you to do
something to prevent bugs.debian.org from sending further unwanted
emails.  If you get rid of authentication then the victim, be it you, or
your mother, or your local police constable, will have to tell the
Debian bugs system to unsubscribe them from a list they never subscribed
to in the first place.

Perhaps you can suggest a way of explaining the situation to our mothers
or local law enforcement agents so they don't end up blaming the Debian
bugs system for putting them in this predicament.  I'm struggling to come
up with something they would swallow once they learn we could have
designed the system to avoid it, but chose not to because we found it
convenient to inconvenience them.




Re: debian github organization ?

2015-04-16 Thread Russell Stuart
On Thu, 2015-04-16 at 19:37 +0200, Sven Bartscher wrote:
> On Thu, 16 Apr 2015 09:04:07 -0600
> Dimitri John Ledkov  wrote:
> 
> > I'd rather see gitlab.debian.net :)
> 
> I don't  a reason to have gitlab/github/someother git stuff for debian,
> since we already have alioth.
> Maybe someone can enlighten me.

Probably not.  UI's are a personal thing and if you've looked at the
others and still prefer the UI provided by FusionForge, that's unlikely to
change.

But do acknowledge that makes you unusual.  Github has all but
annihilated SourceForge in the hosting market place, and the stand out
change is its UI.  That is in spite of SourceForge's impressive mirror
network and SourceForge being VCS agnostic.  So it's not surprising some
DD's want to move away from the FusionForge UI.

I'm on SourceForge now.  [0]  I'd prefer to be on Debian's
infrastructure of course, but Alioth is so poorly maintained it was
unusable for me [1].

Of the suggestions so far only Kallithea is VCS agnostic, but Kallithea
only supports source code hosting - no Ticketing (eg bug tracking), no
web project web page, no release hosting (binaries).  Maybe that's an
advantage for Debian projects because it forces you to use Debian's
existing infrastructure for everything else, but for me it makes it a
no-go.

Gogs looks to be similar, but is unstable.  Gitlab is git only and
doesn't support releases.

SourceForge's Apollo is an open source project supporting all those
features plus a heap more, but the UI is not "code centric" like the
others - it feels more like FusionForge.  That said, unlike FusionForge it
supports modern work flows (forking, pull requests and the like) - it's
just that they aren't as prominent in the UI.



[0]  http://sourceforge.net/u/rstuart/

[1]  https://lists.debian.org/debian-devel/2014/05/msg00463.html

 That triggered this response, but it read like someone in denial
 rather than acknowledging the problem:

 https://lists.debian.org/debian-devel/2014/06/msg00435.html

 Acknowledging the problem is always the first step in fixing it,
 and I think it's significant the number of open bugs has gone up by
 20% since then.





Re: debian github organization ?

2015-04-17 Thread Russell Stuart
First a mea culpa.  I called the SourceForge system Apollo.  Its actual
name is Apache Allura.  Brain fart.

On Thu, 2015-04-16 at 23:13 -0700, Russ Allbery wrote:
> Er, they did, didn't they?  I could have sworn that they only supported
> CVS initially, and then only added Subversion, and getting Git support
> took forever.

Pretty much.  Of course that may have something to do with the respective
VCS being born in that order.  For comparison in the speed of addition,
GitHub opened for business in April 2008.  SourceForge added support for
git in March 2009.

> However, I still stand by the decision to only support a single VCS, at
> least when you start, because you can move a lot faster and implement a
> lot more functionality that people care a great deal about.

Whoa, slow down there.  Here I was thinking the discussion was about
spinning up a server using existing software.  Has the discussion moved to
writing our own or even modifying something to suit Debian's needs?  If
so, is that justified by history?  Was there a period when not only was
Alioth's bug queue serviced, but we actually did some heavy lifting?  If
not then any discussion of "adding functionality" is probably fanciful.

In any case using an existing project and contributing any changes
upstream sounds like a much better plan to me - particularly if the
project is packaged in Debian.  That means we can just rely on automatic
upgrades to keep it secure.

As for one DVCS to rule the world - that also sounds like a bit of a
stretch.  If we are going to do that, can we also settle on a preferred
computer language and force everyone to use a single debian packaging
method?  It would make life sooo much easier.




Re: Proposal: enable stateless persistant network interface names

2015-05-10 Thread Russell Stuart
On Sun, 2015-05-10 at 17:11 +0200, Vincent Bernat wrote:
> The disease is that actual servers running actual free software can
> break at each boot because we cannot have both a persistent naming
> scheme and use the eth* prefix is worse that the cure because old
> versions of Novell ZENworks may stop to work on upgrade?

Speaking as someone who runs Debian on his servers and laptop, I don't
care about what you guys choose.

I don't care on the laptop mostly because of the reasons Josh Triplet
pointed out earlier.  I manage the network through GUI interfaces, and
it doesn't care what the interfaces are called.

For servers the interfaces are all assigned fixed, well known names.
The reason is their configuration is completely automated through 100's
of scripts.  In all instances I've seen it done like this, well known
interface names are used.  It's not difficult to understand why if
you've done it - you can sort out what NIC is supposed to be doing at
install or boot up and rename it accordingly, or you can do it in every
script that deals with the network.  The choice is obvious.

You have the "disease" wrong.  These scripts have been around for over a
decade.  In that time various fashions in interface naming have come and
gone.  This is yet another one.  The disease isn't the kernel choosing
different names on boot up, it's people inventing new interface naming
schemes every few years, just as being done here.  Everyone who has had
to support many servers over a long period gets sick of this and writes
yet another script that does the renaming in the way they want, in a
way that will be stable forever more.  Thus they don't care what choice
is made here - as they have already given up relying on Debian to do it,
because Debian isn't stable enough.

That said when writing our own scripts most of us long ago came to the
conclusion the bus path to the interface was the most useful way to
identify it.  Again the reason is straight forward - if you have an
image tuned for a piece of hardware that you have deployed en-masse the
one thing that is the same on all of them is the paths to the NIC's.
Note that when, long ago, the kernels actually managed to repeatably
name the NIC's eth0, eth1 etc, it was because the buses were enumerated
in a repeatable order so the NIC's got seen in a repeatable order.  So
in effect the name was determined by the bus path.  Ergo, nothing has
changed in 20 years.
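Those home grown renaming scripts usually boil down to a one line udev rule
per NIC - a sketch, with the PCI path and the name invented for the example:

  # /etc/udev/rules.d/70-my-net-names.rules
  SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:02:00.0", NAME="lan0"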

I get the distinct feeling some people posting here consider ifup/down
"old fashioned".  Granted it doesn't have a nice GUI, but from the point
of view of someone who deploys lots of similar machines a GUI of any
sort is a negative, and it has a far nicer property - it is easily
scriptable.  In fact underneath the hood it's driven by scripts.  If
there is a network configuration it isn't capable of setting up, I
haven't seen it.  In my very brief look at networkd it didn't provide
anything like the same amount of flexibility.




Re: Proposal: enable stateless persistant network interface names

2015-05-11 Thread Russell Stuart
On Mon, 2015-05-11 at 09:29 +0200, Marc Haber wrote:
> For example, it doesn't know dependencies between Interfaces, which is
> rather common for a server jockey (consider a VLAN on a bridge which
> is connected to the network via a bonding device)

I haven't had to solve that example, but I have had a problem, again
involving bridges, that sounds similar.  It was solvable with ifup/down -
by calling ifup in the /etc/network/interfaces pre-up.  I'll grant you
it's not pretty, but I've only had to do it once so I forgive aj.


> [ifupdown] it doesn't handle IP addresses that come and go
> at run time (as it is to be expected on IPv6 networks).

Could you explain when IPv6 addresses come and go?  You are talking to an
IPv6 neophyte here.  In the IPv4 world addresses handed out by DHCP do
come and go.  It's true that isn't handled by ifupdown, but that's not a
problem because if you want to do something about it, you do it in the
dhclient hook.  It seems the right place to me.

That aside, I don't see anything in "man systemd.network" that allows
you to watch for IPv6 addresses coming and going, or for that matter
anything else coming and going except devices.

> And, when it comes to scriptability and flexiblity, systemd-networkd
> is vastly superior. It was made with a server in mind.

This para is the real reason I'm responding.  I must be missing
something, because nowhere in "man systemd.network" do I see a way to
run a script of any sort.  For me the acid test is "can it do totally
manual configuration", ie the equivalent of ifupdown's "manual" method.
(I occasionally use manual - it's a great way of ensuring there are no
surprises.)  systemd.network's [Match] section combined with the sort of
demonstrated by ifupdown's manual method would be a match made in
heaven.  But if it exists I've missed it.  You could perhaps emulate it
if it were possible to make a systemd.service depend on a
systemd.network, but that appears to be right out of scope.  As it
stands, networkd looks to be one of the least scriptable networking
configuration options I've seen since ... oh redhat 7.0 or so.
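For anyone unfamiliar with it, the "manual" method I'm referring to looks
like this in /etc/network/interfaces (interface name and address are
illustrative) - ifupdown configures nothing itself, the up/down commands do
all the work:

  auto eth0
  iface eth0 inet manual
      up   ip link set dev $IFACE up
      up   ip addr add 192.0.2.10/24 dev $IFACE
      down ip link set dev $IFACE down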

> Otoh, it is much harder to debug, extend and modify than ifupdown,
> which has a _very_ flexible script interface.

Up until recently I thought the systemd mob had solved the "visibility"
problem by logging everything written to stdout and stderr with
journald.  I was disabused of that notion just this weekend, when
apache2 failed to start after an apt-get dist-upgrade.  journalctl -xn
helpfully told me:

$ ssu journalctl -xn
-- Logs begin at Mon 2015-05-11 21:16:17 AEST, end at Mon 2015-05-11 
22:22:42 AEST. --
May 11 22:21:43 russell-laptop systemd[1]: apache2.service: control 
process exited, code=exited stat
May 11 22:21:43 russell-laptop systemd[1]: Failed to start LSB: Apache2 
web server.
-- Subject: Unit apache2.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit apache2.service has failed.
-- 
-- The result is failed.

Which is about as useful as a hip pocket in a singlet.  In the end this
told me what was going on:

$ _SYSTEMCTL_SKIP_REDIRECT=true /etc/init.d/apache2 start
[FAIL] Starting web server: apache2 failed!
[warn] The apache2 configtest failed. ... (warning).
Output of config test was:
apache2: Syntax error on line 141 of /etc/apache2/apache2.conf: Could 
not open configuration file /etc/apache2/mods-enabled/php5_cgi.conf: No such 
file or directory
Action 'configtest' failed.
The Apache error log may have more information.

Having to troll through scripts in /var/lib/lsb in order to figure out
how to disable the systemd redirect in order to see the error message
the sysV init script sent to stdout is NOT an improvement.  (The Apache
error log was empty.)

If debugging networkd stuff is harder, then ... *shudder*.




Re: Proposal: enable stateless persistant network interface names

2015-05-13 Thread Russell Stuart
On Wed, 2015-05-13 at 17:16 +0200, Vincent Lefevre wrote:
> Well, having some of the network traffic (more precisely, connections
> to machines that have an IPv6 address) re-routed to some unknown
> machine on the local network is not a nice feature.
> 
> IMHO, such a feature should be enabled only by the network management
> system, not by default at the kernel level.

Now I've looked up what Marc is referring to in an earlier reply, SLAAC
and DHCP look pretty similar to me.  Both have the "re-route your NIC to
some unknown machine" feature.  I'm sure everybody here will be the
victim of a rouge router sending NDP responses, just as everybody has
already been the victim of a rouge DHCP server.

Not having the "automatically make my NIC usable on bootup" feature
enabled by default would seem like a major omission to me.

The one difference between the two right now is dhclient makes it easy
for the client to watch for changes using scripts.  AFAICT, there is no
off the shelf way of doing it for SLAAC.  It's easy enough to do - just
have a daemon listen to kernel netlink messages and fire off a script.
The right place to put that now would be in systemd, but if they are
opposed to scripts as Marc says that ain't going to happen.  Sigh.
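A minimal sketch of that idea using nothing but iproute2 (the hook script
path is made up):

  # Fire a hook every time the kernel adds or removes an IPv6 address.
  ip -6 monitor address | while read -r event; do
      /usr/local/sbin/slaac-changed "$event"   # hypothetical hook script
  done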




Re: Bug#786902: O: ifupdown -- high level tools to configure network interfaces

2015-05-27 Thread Russell Stuart
On Wed, 2015-05-27 at 12:33 +0200, Marco d'Itri wrote:
> (I am shocked, shocked that there is no flood of people here rushing to 
> save ifupdown... :-) )

Until systemd-networkd can run scripts on events no defence is required.
It would be like comparing a calculator to a computer.  Sure, the
calculator is simple and lightweight, but there are some things you just
can't do with a calculator.

Right now systemd-networkd looks like a wasted opportunity - maybe it's
work in progress.  It has some wonderful pattern matching stuff but once
it matches the pattern ... there is very little you can do.  All the
dependency processing packaged with systemd.unit is oddly missing.




Re: Bug#786902: O: ifupdown -- high level tools to configure network interfaces

2015-05-27 Thread Russell Stuart
On Wed, 2015-05-27 at 19:27 +0800, Paul Wise wrote:
> Your mail is missing some things:
> 
> To: 786...@bugs.debian.org
> Control: retitle -1 ITA: ifupdown -- high level tools to configure
> network interfaces
> Control: owner -1 !

If you mean it has been orphaned, it will work for a while yet even if it
is unmaintained.  As the bug notes:

In current state ifupdown is probably good enough for what it is
used for

That's an understatement.  Its plugin based architecture, which makes it
such a flexible tool for sysadmins, has also meant it's been able to
adapt to newer technology for 15 years now.  So there is no rush.

I agree there is room for improvement.  Systemd-networkd's pattern
matching makes ifupdown's mapping look like a hack; ifupdown lacks any
dependency processing, I suspect the shell script architecture
responsible for its flexibility is also responsible for its bugs, and
it only provides tools for address assignment - other tasks like
firewalls and traffic control aren't there.

But they aren't there in the proposed replacements either, so we are a
long way from having a new solution.  Fortunately we do have time.




Re: git and https

2015-05-29 Thread Russell Stuart
On Fri, 2015-05-29 at 22:21 +1000, Riley Baird wrote:
> > LetsEncrypt will save us!
> 
> I just looked that up. What a wonderful idea!

I don't know how you missed it.  My tongue has been hanging out for a
year now.  Finally, sanity prevails.

An https cert is supposed to certify that whoever presents it really is
the party controlling the servers behind www.someone.com.  Traditionally, the
certifiers have relied on business names, trademarks, directors names -
all sorts of things that have one thing in common - they don't actually
prove you control www.someone.com.  The more things unrelated to whether
you controlled www.someone.com they asked you to prove, the more they
charged for the certificate.  If you wanted an example of marketing
triumphing over engineering, the CA system is it.

LetsEncrypt is a pure embodiment of the "you control it you own it"
principle.

They could do better.  A lot better.  For example they could insist you
control www.someone.com for a while - say repeatedly confirm over a
month.  This would thwart the guy who took over www.hotmail.com for a
while [0].  And they could allow people to register an interest in
someon.com or someon.co, so if someone registered a cert with it they
would know.

Regardless, when LetsEncrypt works we will have made a step forward.



[0] 
http://news.cnet.com/Good-Samaritan-squashes-Hotmail-lapse/2100-1023_3-234907.html
Fortunately for Microsoft it was a Linux nutter.  When he couldn't
access his hotmail account he diagnosed it, registered the domain,
and then gave it back to Microsoft.





Re: Metapackage dependencies: "Depends" or "Recommends"?

2015-07-31 Thread Russell Stuart
On Thu, 2015-07-30 at 08:57 +0200, David Kalnischkies wrote:
> This example makes it quite obvious that your requirements are "keep
> a minimal set of packages installed" while the requirement of libapt's
> autoremove is "suggest only packages for removal which are completely
> safe to remove".

If "Suggest only packages for removal which are completely safe to
remove"  is supposed to be "list all which are completely safe to
remove" then it doesn't manage to do that either.  It fails given
circular references, ie A depends on B depends on C depends on A.

I guess it's designed for the "cleanup packages I played with for a
short while on my laptop" use case.  It does a sort of OK job at that.
Only sort of, because when you move across flag days the dregs left
libapt leaves behind can get it confused over which packages should
removed so the upgrade can proceed.

For my servers it's different.  The inherent ambiguity of the Debian
dependency system that libapt tries to hide becomes intolerable, meaning I
don't want something to just choose between the possibilities on my
behalf; I want to be informed so I can see the choices and have my
decision explicitly recorded so I can repeat it.  Leaving garbage
packages (think "garbage" as in garbage collector) lying around on
servers that are supposed to be clones left alone to maintain themselves
for a while is equally unacceptable.
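Stock apt can at least record a choice once it has been made - a sketch of
that partial work-around (package names are illustrative):

  apt-mark manual nginx-full libfoo1       # pin my decision; never auto-removed
  apt-mark showmanual > /root/manual.list  # record it so a clone can replay it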

In the end I gave up on libapt and wrote my own dependency resolver.
Fortunately libapt makes that relatively easy because its API gives you
access to all its internal workings so you can re-use most of it.




Re: Preferred git branch structure when upstream moves from tarballs to git

2019-04-29 Thread Russell Stuart
On Tue, 2019-04-30 at 09:25 +0800, Paul Wise wrote:
> I like this option because it still works well if we ever decide to
> fix a fundamental flaw in the Debian source package layout.

I suspect whether that's a fundamental flaw is a matter of personal taste.
On this point my taste aligns with yours.

I've used both the rpm source format and the Debian one, and IMO the rpm
one is mostly better.  The primary reason is the one you've mentioned
here: they maintain the separation between the source, rpm spec, and
build areas far more cleanly than Debian does.  This makes some
common flaws one often finds in Debian packages just disappear: like
failing to clean up the source directory properly after a build.

Where the rpm format goes wrong is it then breaks that separation in the
actual .srpm format by putting the upstream source and rpm bits in
one file, which is of course what Debian gets right.  Sigh.

On the positive side, rpm and deb seem to be gradually converging in a
sort of co-evolution.





Re: Programs contain ads - acceptable for packaging for Debian?

2019-06-20 Thread Russell Stuart
On Thu, 2019-06-20 at 13:15 +0700, Bagas Sanjaya wrote:
> Suppose that an upstream has released a program which its license 
> conforms to DFSG (named ZZZ), but when I test it, ads placed by the 
> upstream appear (such as pop up ads). Since ads can affect user 
> experience of ZZZ, but at the same time the upstream get paid by ad 
> networks which he place the ads into ZZZ, would it acceptable to
> package ZZZ for Debian?

I don't know whether it's relevant to your question, but we already
have software in Debian that displays pop-up ads.  Zoneminder displays
a pop-up nag for donations.  You can turn it off, but unless you delve
into raw SQL hackery of the underlying DB you will see it at least
once.



Re: duplicate popularity-contest ID

2019-08-07 Thread Russell Stuart
On Wed, 2019-08-07 at 09:34 +0200, Marc Haber wrote:
> I am using Debian for two decades now, and I realized that necessity
> two days ago.

Ditto - except for me it was a few seconds ago.



Re: Command /usr/bin/mv wrong message in German

2024-03-31 Thread Russell Stuart

On 1/4/24 10:18, gregor herrmann wrote:

> % dpkg -S $(which mv)
> coreutils: /usr/bin/mv


On bookworm:

$ dpkg -S $(which mv)
dpkg-query: no path found matching pattern /usr/bin/mv

This is caused by the /bin -> /usr/bin shift.
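The work-around on bookworm is to ask about the path the package actually
shipped, since the dpkg database hasn't moved even though the filesystem has:

   $ dpkg -S /bin/mv
   coreutils: /bin/mv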

The reason I'm replying is after one, probably two decades this still
annoys me:

   $ dpkg -S /etc/profile
   dpkg-query: no path found matching pattern /etc/profile

It was put there by the Debian install, and I'm unlikely to change it.
It's fairly important security wise.  It would be nice if "dpkg -S" told
me base-files.deb installed it.  It would be nice if debsums told me if
it changed.  There are lots of files like this, such as /etc/environment
and /etc/hosts.  There are some directories like /etc/apt/trusted.gpg.d/
which should only have files claimed by some .deb.

To put it another way, Debian's audit trail of files managed / used by
the distro has never been great.  There was a modest proposal ages ago
(by aj I think) to improve this, but it was rejected.  To me it looks
more important now than it was then, and it was pretty important then.




Re: Making Debian available

2021-01-16 Thread Russell Stuart

On 17/1/21 3:27 am, Russ Allbery wrote:
> "Andrew M.A. Cater"  writes:
>> Wifi is the tough one: The companies that dominate laptop chipsets
>> - Broadcom/Realtek/Qualcomm don't make it easy to find out which
>> particular chipset it is. For USB Wifi connectors it's even harder
>> - lots of Realtek chipsets in cheap dongles seem to require you to
>> go and get a repository from git and build DKMS - RTL8812* springs
>> to mind.
>
> For the record, this is always the problem I have.  The last system I
> installed didn't have an Ethernet port.  I think there was some way
> to make Ethernet work over USB, but I didn't have the right
> hardware.


To me, this is *the* problem. I always use netinst. Once the thing is
running editing /etc/apt isn't hard, so all netinst has to do is a base
image install. But, if I can't get the network up and see the disks I
can't install.

Not seeing the disks is invariably caused by the kernel not recognising a
modern chipset, which means I must install with testing, which has a
matching modern kernel. Working WiFi almost always requires firmware.
Testing doesn't produce netinst with non-free firmware and I'm normally
installing several of these things, so I have now become proficient in
rolling my own non-free netinst.

I guess it might be possible to make it harder to install Debian on a
new laptop, but rolling your own netinst sets a pretty high bar so you
would have to work at it. I occasionally get asked to install Debian,
and it's hard enough that I carry a USB with one of these netinst's on
me at all times, along with my GPG fingerprints.

So for me at least, the fix doesn't require putting all of non-free on
the install media. It just requires firmware-*. (Which brings me to
another whinge: why doesn't firmware-linux-nonfree include things like
firmware-iwlwifi so I don't have to go on a wumpus hunt for every
firmware package. Sigh.)

I happen to strongly agree with the sharp distinction between free and
non-free in the archive. But in this case, I think we should be carving
out an exception.

If you want a softener for those rigid ideals (I need one myself), try
this: these firmware blobs are peculiar. They don't run on the same CPU,
we talk to them with strictly open API's and protocols. In that way,
they aren't anything new. We already talk to and depend on megabytes of
nonfree software to get our laptops booted, but we tolerate it because
it lives in ROM. We don't consider firmware in ROM to be part of Debian
even though it must be running for our Debian machines to run. It's true
the difference is these problematic firmware blobs do live in Debian
packages, but it's not because we package them in most of the usual
senses: we don't compile them or modify them, and no software packaged
by Debian inspects their contents. For these firmware packages the
packaging system has been reduced to something that does a good
imitation of a copper wire: faithfully copying some bits from the
manufacturer to the hardware they made.

My suggestion is we could create a new section of the archive, one that
places far stronger restrictions on what Debian is allowed to do with the
packages' contents (to wit: be a conduit between two external parties,
and nothing else), and in return we tolerate those packages on our
install images.

We already tolerate something similar. fwupdate pulls down non-free
blobs onto our Debian boxes, and installs them so our Debian machines
can use the firmware therein. It's in free, and AFAICT no one has a
problem with that. This is a step beyond fwupdate of course, as the
firmware is coming from our servers rather than someone else's. But I
would have thought that would make the situation better, not worse. This
is software we can vet at our leisure, take down / revert if it turns
out it violates our sensibilities, like say discovering a new version
has critical CVE's.  Firmware coming from our own servers, signed by our
keys gives us far more say on what non-free software we allow on our
machines than fwupdate does now.




Re: Firmware - what are we going to do about it?

2022-04-20 Thread Russell Stuart

On 19/4/22 10:27, Steve McIntyre wrote:

>   5. We could split out the non-free firmware packages into a new
>      non-free-firmware component in the archive, and allow a specific exception
>      only to allow inclusion of those packages on our official media. We would
>      then generate only one set of official media, including those non-free
>      firmware packages.


The motivation here for splitting non-free firmware into a separate 
component is so we can install Debian on modern hardware.  That's a good 
reason, but I've always thought there was at least one other good reason.


It doesn't belong in Debian.

Unlike everything else, we usually don't have the source, which neuters 
many of the nice security properties inherent with open source.  We 
don't compile it, because even if we did have the source it's probably 
for a CPU & silicon we don't support.  Ergo reproducible builds are out 
of the question: it could literally contain, copy or do anything the 
hardware allows and none of us would be the wiser.  Peculiarly, we don't 
care about the licence, beyond being allowed to distribute it in the 
first place.


One of Debian's foundations is the DFSG but when it comes to this stuff, 
freedoms?  We don't even have the freedom to avoid it.  I'm genuinely 
surprised the project has managed to be in denial and pretend it had a 
choice for this long.


In short, non-free packages we have the source for are one thing.  These 
binary opaque blobs are quite another.  They should be in a different 
component.  Non-free-firmware sounds far too innocent to me.  How about 
"not-debian", or "under-sufferance".




Re: A mail relay server for Debian Members is live

2022-07-17 Thread Russell Stuart

On 17/7/22 10:37, Ansgar wrote:

> On Sun, 2022-07-17 at 10:29 +0200, Dominik George wrote:
>> tl;dr: DKIM-signed mail is verifiable, but only the headers; the body
>> can be tampered with;
>
> This is just wrong. There is no reason to sign mails to ensure
> authenticity if one can just change the body...


The goal of this isn't to provide end to end authentication.  It's to 
hold the domain in the From: address accountable for the content of the 
message.  Without DKIM the From: address can be easily forged, and any 
spammer can impersonate the From: address.


Despite what Dominik says the body is always included in the DKIM 
signature.  In fact just about everything in the email can be 
covered by the signature.  In an ideal world you would cover everything, 
but in our world relaying SMTP servers regularly append stuff to bodies 
and mangle headers (eg, spamassassin will shove stuff into the 
Subject:), and that breaks the signature.  DKIM "solved" this by 
allowing the sender to specify what headers in the message the signature 
covers.  You also get to specify a body length, so all characters after 
that length are ignored, thus "solving" the append problem.
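An illustrative (entirely made up) DKIM-Signature showing the two knobs just
described - "h=" names the signed headers, "l=" caps how much of the body the
hash covers:

  DKIM-Signature: v=1; a=rsa-sha256; d=example.org; s=mail2022;
      c=relaxed/relaxed; h=from:date:to:cc:subject; l=2048;
      bh=<base64 body hash>; b=<base64 signature>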


rfc6376 does supply a recommended list of what to include in the signature, 
but perhaps takes compatibility too far by allowing everything to be 
optional, bar the From: field.  No receiver would accept purely the 
From: field signed as that makes replay attacks trivial, so each 
receiver sets its own standards.  And yes, that's messy.


But in practice, what has to be signed to make it impractical for 
spammers to use existing signatures in replay attacks isn't much.  The 
From:, Date:, To:, Cc: is enough.  The fact that most email clients 
order your inbox by date means the Date: field will ensure an existing 
signature is only good for a few days before the spam appears too far 
down to be noticed.  So it all works out for its intended purpose.


Its intended purpose isn't to replace gpg signed emails.  It can't 
really, as it's not signed by the sender, it's signed by the domain. 
That means an ISP can forge a signature from anybody using them.  That 
doesn't matter for DKIM, as its intended purpose is to hold the ISP 
accountable for spam sent from its domain.  The assumption is the ISP 
will police its users.


Nonetheless, I personally use it as a form of authentication.  My wife 
once fell for a PayPal phishing attack that cost us well over $1,000 
from memory.  The curious thing is the phishing email had a valid DKIM 
signature from paypal.com.  I contacted them immediately and a full 
refund happened very quickly.  I'm pretty sure the refund would have 
happened anyway, but it was nice to have the DKIM signature on my side.




Re: Init systems and docker

2019-10-11 Thread Russell Stuart
On Fri, 2019-10-11 at 19:25 -0400, Jose-Luis Rivas wrote:
> There's not much sense in using systemd inside a docker container, to
> be honest.

To put it another way, in the container world the init system belongs
outside of the container.

That is because the closest equivalent to a container is not a
box running Debian doing multiple things, it's a single process.  In
fact if you are Google it's actually simpler - it's a process whose
executable is a single statically linked file.  It pulls its
configuration from something like etcd (something that lives in the
cloud), it uses the file system as an extension of its RAM - temporary
storage that can be blown away when the process exits, and if it needs
persistent storage that comes in the form of a database it connects to
using TCP/IP.

Its utility doesn't arise because it's more powerful and flexible than
the environment Debian provides.  On the contrary, it is far less
rich and flexible than the environment Debian provides.  Its power
arises because it reduces the dependencies to an absolute minimum.  It is
one file whose main configuration is the TCP endpoints it receives
commands from and sends results to, and the TCP endpoints of services, log
storage and databases it depends on.  Its only dependency outside of
that list of endpoints is an amd64 machine with some amount of RAM and
CPU cycles and the Linux syscall API - something that can be replicated
1000's or millions of times automatically and on demand.

A major part of its utility is that, unlike Debian dependencies (like say
a web server or a .so of the right version) which programmers tend to
assume are there or their program breaks in weird and wonderful ways,
programmers tend to assume this one dependency, a TCP connection, is
unreliable and ephemeral.  So you can create these things and destroy
them to handle varying loads, and outages in any one data centre can be
handled by shifting the load to another one.  The consistency of
persistent data is not a problem because it's stored in a full ACID
database.
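To make that concrete, this is roughly the shape such a container takes
when run by hand - a sketch only, the image name and endpoints below
are invented:

# one process, read-only root, throwaway /tmp, everything else reached over TCP
# (the image name and endpoints are invented for illustration)
docker run --rm --read-only --tmpfs /tmp \
  -e CONFIG_ETCD=etcd.internal:2379 \
  -e DATABASE=db.internal:5432 \
  -e LOG_SINK=logs.internal:514 \
  example/payments-api:1.4.2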

An init system of any sort in this is just unwanted complexity.  What
replaces the init system is something that starts up these containers
and connects them together on an as-needed basis.  Kubernetes is one
such system, but there are alternatives.  Docker Swarm is a much
simpler thing that creates something closer to a Debian box (a
collection of containers running on one physical box).

In the most evolved container systems it's not just the init system
inside a container that's considered an anti-pattern, a distribution
like Debian is also out of place.  None of the services Debian provides
(like packaging, dependencies, security upgrades) are helpful.  The
host box will have an OS - but it's just a support system for
Kubernetes and you don't really need an entire distribution just to run
one program.

However, that is the extreme edge.  It's where you have a team of
programmers creating a static app that includes all of the services it
needs, like a web server.  Us smaller guys will use pre-packaged
software provided by a distribution simply because we are familiar with
nginx or apache, and don't want to compile stuff to do off the shelf
functions.  The rest remains the same - a container is a single
process, configuration is primarily connecting TCP end points, things
are written to retry when a  TCP connection dies, security patches are
"installed" by rebuilding the container and restarting it, shells and
what not in containers are there merely to facilitate development,
debugging and fault analysis.

The most popular container distribution is I think Alpine Linux, which
has an impressively small base install.  If Debian wanted to compete in
this area it would have to start by shrinking its minbase install by a
factor of 10 or so.  Anybody arguing about what sort of init system
that install includes hasn't grokked containers yet.
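For a rough sense of the gap (the numbers vary by release, so treat
this as illustrative only):

# Alpine's base image is a handful of megabytes; a Debian minbase chroot is far larger
docker image ls alpine:latest
sudo debootstrap --variant=minbase stable ./debian-minbase http://deb.debian.org/debian
du -sh ./debian-minbase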




Re: [RFC] Proposal for new source format

2019-10-22 Thread Russell Stuart
On Tue, 2019-10-22 at 16:52 -0700, Russ Allbery wrote:
> That seems excessively pessimistic.  What about Git makes you think
> it's impossible to create a reproducible source package?

Has it been done?  Given this point has been raised several times
before, if it hasn't been done by now I think it's reasonable to assume
it's difficult, and thinking so is not excessively pessimistic.

I personally wonder how the mirrors are expected to handle .git
repositories.  That would increase the number of files they have to
handle by a couple of orders of magnitude.  What are the plans for
that?  Maybe you think they can handle it?  Maybe you plan to abandon
the mirror network in favour of something else like the CDN?  Maybe
you plan to remove the source from the mirrors?

Finally, there are more consumers of the source format than the Debian
packagers.  For example, I regularly download Debian source packages
just to figure why the hell something isn't working as I expect.  When
I do that, there are two things that are important to me:

1.  The download is as small as possible, and doesn't require a
specialised tool.  (Github and gitlab go to the trouble of
providing just such a thing, which I think is evidence it's
needed.)  The current format is pretty good in this area.  At
a pinch you can get away without using dpkg-source to unpack it.

2.  The point that has been raised here - reproducible builds of the
source package.  By that I mean a reproducible build should be a
pure function that is given the upstream source package and some
data in the form of patches or whatever, and ends up with the
source and build instructions.  Being a pure function it always
produces the same outputs given the same inputs.

Unfortunately Debian doesn't always do a good job of this
currently, albeit for good reasons - we can't distribute the
upstream source package so DD's rebuild it, but they are allowed
to do so in any way they please.

Any source format that handled the issues above would get the thumbs up
from me.  (Interestingly, despite the hairs it has in other areas, the
rpm source format has always done well on those issues.)
Unfortunately Bastian's proposal doesn't address them directly.




Re: [RFC] Proposal for new source format

2019-10-22 Thread Russell Stuart
On Tue, 2019-10-22 at 20:21 -0700, Russ Allbery wrote:
> This history has at least one commit per upload, although ideally 
> has the package maintainer's full revision history and upstream's 
> full revision history.

I understand you like the history.  A lot of people do.  But not
everyone values it, and I don't.  The only uses I've found for it are
git-bisect, reversing hasty deletes, and auditing who contributed what,
which is a handy weapon in a courtroom copyright battle.  I can count
the number of times I've done all of those things in my life on one
hand.  Regular backups do those jobs almost as well, and I have to do
them anyway.

Source code control becomes a real time saver when you have a lot of
people working on the same source - I'd go so far as to say
indispensable for that case.  Such merging of histories needs a small
amount of history to work with of course, and that makes that small
amount of history equally indispensable.  However the typical Debian
Developer scenario of the "lot of people" being you and upstream is a
fairly degenerate case, so there is understandably some argument about
whether a heavyweight tool like git adds much.  If you like it that's
great - but others thinking it's not worth the bother is also great.  I
doubt anybody who just wants a one off copy of the source is going to
see much in the way of greatness.

On Tue, 2019-10-22 at 20:21 -0700, Russ Allbery wrote:
> I don't agree with this definition of reproducibility.  You're
> defining reproducibility from inputs that I consider build artifacts,
> which to me is rather weird.

That's a perfectly understandable perspective from a Debian Developer. 
But let's take a different perspective, say a Debian user installing
audited-crypto-program-x. What you are dismissing as "artefacts" is
exactly the information the person installing this needs to assure
themselves the Debian version of audited-crypto-program-x is a
reasonably faithful reproduction of the original.  If the packaging is
done well it will be broken down into small changes, each with a
documented purpose.

(None of this is rocket science or new, we are fairly close to it now.
One of the reasons I am writing this is that it would be nice if we got
better at it - and definitely didn't get worse.)

The point of defining the process of constructing the Debian source
representation as a "pure function" is to guarantee it faithfully
reflects the original source and the documented changes _only_ - not
some random crap living in stale state carried across from years ago.

From my perspective there are lots of ways a Debian developer could
store this stuff.  Quilt patches with their headers are one and they
work well enough from this perspective - but a Git repository with
branches representing those same changes works equally well, although
it would be nice if a git branch (as opposed to a commit) could have a
rant associated with it about why it is there akin to the quilt patch
header.  (I guess this would be trivial to add by insisting each branch
adds a description file to the debian/branches directory.)  The Debian
developer is free to use whatever representation works best for them,
so long as when I download it, I can easily see the debian version of
openssl contained a patch that changed its random number generator to
getpid(), along with the reason why.
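To make that concrete, the workflow I have in mind is roughly the quilt
one - a sketch only, the patch and file names below are invented:

# record one small, documented change as its own patch
# (the patch and file names are invented for illustration)
quilt new no-getpid-rng.patch
quilt add crypto/rand.c
$EDITOR crypto/rand.c
quilt refresh
quilt header -e    # write the DEP-3 style rationale for the change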

AFAICT, dgit does not address this, at all.  It's written purely from
the perspective of the Debian developer.




Re: [RFC] Proposal for new source format

2019-10-27 Thread Russell Stuart
On Tue, 2019-10-22 at 20:21 -0700, Russ Allbery wrote:
> I define reproducibility as generating the same Debian source package
> from a signed Git tag of my packaging repository plus, for non-native 
> packages, whatever release artifacts upstream considers canonical
> (which may be a signed tarball or may be a Git tag or may be
> something else entirely).

That is a great definition of reproducibility if all you are interested
in is the Debian version of the package.  It is not so great if what you
want is the upstream version of the package - ie, it is important to
you that it behaves identically or at least diverges in accountable
ways.  In that case you want a clear audit trail from the upstream
source to the Debian binary.

On Tue, 2019-10-22 at 20:21 -0700, Russ Allbery wrote:
> All of this business with patches and whatnot is an implementation
> detail.

If you are thinking of patches in terms of .dpatch files in
debian/patches then we both agree, as I don't consider the
representation to be particularly important.  It could be branches
stored in git for all I care, perhaps managed by a tool like gquilt. 

What is important to me is that the source contains an audit trail of
how Debian got from the upstream source to the Debian package.  If I
understand your position correctly, your proposal boils down to adding a
(single) branch to the upstream .git for the debian changes.  My
problem isn't with using git - it's with the word "single".  It isn't
even with you using a single branch, as perhaps that's appropriate for
the packages you maintain (which it would be if the only change is to
add a debian directory).  My problem is the implication that since it is
good enough for you, it's good enough for every package.  It's not.
When you are carrying a lot of changes it's bloody horrible.

Perhaps an illustration may help.  I used to be a consumer of RedHat
kernels.  Back in the 2.6 days they carried hundreds if not thousands of
individual patches for stuff they backported from Linux 3.0.  (I gather
they still do carry a lot of patches for their LTS releases.)  When you
wanted to add your own modification there were invariably conflicts, and
without knowing what patches it conflicted with and why it was just
impossible.  Then Oracle released their "own" Linux distribution.  It
was a copy of RedHat, something Oracle didn't go out of its way to
acknowledge.  Effectively Oracle was garnishing for themselves part of
RedHat's revenue stream (support fees) using a rebadged RedHat product.
RedHat responded by doing effectively what you are suggesting.  They
replaced the source rpm's audit trail of every change they made and why
with one humongous, uncommented patch.  Technically they were operating
in accordance with the lawyers' reading of the GPL I guess - they were
distributing the source.  But it sure as hell wasn't in accordance with
a programmer's definition of "source" (which is along the lines of
something you can edit), as porting a patch from the .orig kernel to
RedHat's became damned near impossible.

A second illustration is the kernel development process itself.  One
huge patch is not considered acceptable.  They must be smaller, easily
understood, digestible patches.  The quilt source format encourages
that approach - to the point of having lintian checks for it.  Nowhere do
you propose a similar mechanism - or even acknowledge it's important.

On Tue, 2019-10-22 at 23:20 -0700, Russ Allbery wrote:
> Checking reproducibility only back to a set of patches does *not*
> provide a real guarantee of reproducibility, since a supply-chain
> attack could still have introduced malicious code in the patch 
> generation process.

You are damning the good because it's not perfect.  It's true there are
still ways of attacking the code, it merely renders those attacks
visible and attributable.  In fact rendering all changes visible and
attributable by insisting they are signed is *precisely* the mechanism
the kernel uses to defend itself both from malware attacks of the type
you envisage and when someone attempts to add copyrighted code that
opens the kernel to legal attack later.  Turns out a bit of sunlight is
a great disinfectant.

On Tue, 2019-10-22 at 23:20 -0700, Russ Allbery wrote:
> like an argument for dropping all of the features that I want and 
> retaining only the feature that you want, when you can derive the 
> feature that you want (at some additional complexity cost, to be 
> sure) from the format that I'm arguing for.

I can see how you might think that.  The reality is different.  At no
stage have I suggested you should be prevented from using git, or
indeed any other mechanism you desire.  I have said if you adopt a new
system like dgit please figure out a way of implementing one feature of
the one you are replacing (quilt) - a way to audit changes.  But it has
been proposed that everybody be forced to drop whatever workflow they
might like in favour of dgit, and you look to be arguing in favour of
that idea.  If we moved

Re: [RFC] Proposal for new source format

2019-10-27 Thread Russell Stuart
On Wed, 2019-10-23 at 09:49 -0400, Theodore Y. Ts'o wrote:
> Generating a reproducible source package given a particuar git commit
> is trivial.  All you have to do is use "git archive".  For example:

It is indeed.  Almost a tautology.  But it's not what I'm interested in
doing.  The focus is on showing the connection between upstream's
source and Debian, not on reproducing Debian's source.
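(For reference, the sort of invocation Ted means is presumably along
the lines below - a sketch, the tag and package names are invented.
gzip's -n flag stops it embedding a timestamp, which matters if you
want the same bytes every time:)

# produce the same orig tarball from the same tag every time
# (tag and package names are invented for illustration)
git archive --format=tar --prefix=foo-1.2.3/ v1.2.3 | gzip -n > foo_1.2.3.orig.tar.gz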

Repeating my earlier example, I want to show whether openssl (insert
name of fully audited package here) in Debian is a bit for bit
reproduction of upstream's openssl.  It won't be, of course, so I want
the next best thing: an audit trail explaining exactly why it's
different.

Harking back to the time we removed the randomness generator from
openssl, it's very nice to have a single patch say "it was removed
because it wasn't exercised in the tests.  upstream didn't respond to
requests for comment" rather than having it interspersed with the 650
odd other lines of other changes we carry with no explanation.




Re: [RFC] Proposal for new source format

2019-10-27 Thread Russell Stuart
On Sun, 2019-10-27 at 20:29 -0700, Russ Allbery wrote:
> If you modify the upstream source, then by definition you do not have
> reproducibility of the upstream source, and you're now talking about
> something else (review of the changes, which I called audit in my
> previous message).

I think I'm guilty of a poor choice of words.

> I have no idea how you got that from my previous messages, but you
> have misunderstood.

Excellent.

> This is exactly my objection to reducing everything to patches rather
> than using the power of Git to represent the history and structure of
> the changes made for Debian.

Personally I don't see that the "power of git" adds much apart from history,
but really it doesn't matter for this discussion.

> am completely baffled by your belief that this is inherently easier
> to do with quilt than with Git.

I don't believe that.  I guess we are talking past each other.  Out of
curiosity, do you maintain the changesets manually in git, or use
something like gquilt?





Re: Heads up: persistent journal has been enabled in systemd

2020-02-04 Thread Russell Stuart
On Tue, 2020-02-04 at 18:10 -0800, Russ Allbery wrote:
> It does take a bit of retraining to use journalctl instead (and I'm
> personally not horribly fond of its UI, although that's probably
> because I'm using it wrong), but it's a lot better at effectively
> narrowing log messages to the things of interest once you get used to
> it.

journald has nits I mention below, but I was prepared to put up with
them and drop rsyslog until one day a server stopped in a nasty way and
journalctl refused to display what led up to the crash because its
binary logs were corrupt.  As far as I was concerned this made journald
unfit for use on production servers.  (rsyslog's logs also get 4k lumps
of nulls and other garbage in them in similar situations, but they
remain usable.)

That was a long time ago, and it may well be fixed now.  But if it
isn't IMO turning off rsyslog by default is a bad idea.  My view is the
main reason Debian exists is to serve as a reliable base for production
machines.  Debian Desktop is what I use on my personal machine and yes,
dropping rsyslog hardly matters there, but I wouldn't be using Debian
Desktop if I wasn't using Debian in production.   

Another journald anti-feature (which is probably an unfair attribution
as it is almost certainly a consequence of systemd's design), is that a
manually started service doesn't print the reason it refused to start
to stderr.  Having to fire up journalctl and wade through its crappy UI
to get something sysV used to put under my nose is a step backwards.

Finally, it may be I just don't know how to use it well, but looking
for a needle in a haystack of logs is slower with journalctl than it is
with grep, and not by a small margin.  Journald making the thing you
spend most time doing with logs slower doesn't help it in the
slightest.  But I don't spend a lot of time searching logs, so it
wouldn't stop me from dropping rsyslog.
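For what it's worth, the invocations that get me closest are along
these lines - a sketch, the unit name is just an example, and --grep
needs a newer journalctl than I had at the time:

# errors and worse from the previous boot
journalctl -b -1 -p err
# one unit's messages over the last hour (the unit name is an example)
journalctl -u ssh.service --since "1 hour ago"
# kernel messages matching a pattern (needs a newer journalctl)
journalctl -k --grep=oom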

Get rid of those problems, and dropping rsyslog becomes a no-brainer
for me.




Re: isc-dhcp-client sends DHCPDISCOVER *before* wpa_supplicant authenticates/associates/connects.

2020-07-12 Thread Russell Stuart
On Sat, 2020-07-11 at 22:12 -0400, The Wanderer wrote:
> I don't run either systemd or NetworkManager, and I'm not currently
> interested in changing either of those things, but I am interested in
> trying out an alternative to wpa_supplicant. Is there an appropriate
> similar procedure for such an environment, or would I have to
> experiment and play around trying to get things to work?

I don't use network manager, so I'm in a similar position.

From what I can see iwd lacks two features of wpa_supplicant.

Firstly, there doesn't seem to be a way to attach iwd to a particular
wireless interface.  Iwctl doesn't provide one, and I don't see any
other way to tell it.  Most people have just one wireless interface so
it may not be a huge issue but it is an impedance mismatch with
things like ifupdown that are focused on interfaces.  Maybe that's the
problem with network manager too.

Secondly, it doesn't support wpa_supplicant's priorities.  I use them a
fair bit.  For example, I tell wpa_supplicant to favour my phone WiFi
hotspot over others, so if I have a connectivity issue I can just turn
it on.  That said, I guess I could just use iwctl to manually connect
to the phone.
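The sort of thing I mean, as a sketch - the ssid and passphrase below
are invented:

# give the phone hotspot a higher priority than the default of 0, so
# wpa_supplicant prefers it whenever it is visible (ssid/psk are invented)
cat >> /etc/wpa_supplicant/wpa_supplicant.conf <<'EOF'
network={
    ssid="phone-hotspot"
    psk="example-passphrase"
    priority=10
}
EOF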




Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-11 Thread Russell Stuart
On Mon, 2016-07-11 at 23:51 +0200, Raphael Hertzog wrote:
> Well, if some KDE/XFCE/etc. packages work only with synaptics and not
> with libinput, then we should get those packages updated to depend on
> xserver-xorg-input-synaptics, no?

I don't know about KDE/XFCE, but in the etc category is LXDE, and it
works with both.  I'd be surprised if KDE and XFCE didn't work with
both too as libinput and synaptics are drivers, and as such are hidden
by the X API these window managers use.  The surprising thing for me is
GNOME evidently isn't using the X API, but instead talking to the
driver directly.

In my case, the thing that broke when xserver-xorg briefly switched to
using libinput instead of synaptics wasn't LXDE, it was me.  The
reasons are spelled out in the bug that was filed when the change was
made:

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=822835

Briefly: synaptics is a much better touchpad driver than libinput.
Libinput treats a touchpad as a mouse.  Modern Mac-like touchpads are
great input devices, but when you strip them of multitouch and gesture
recognition they become so unwieldy people give up and plug in a real
mouse.



Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-11 Thread Russell Stuart
On Tue, 2016-07-12 at 07:48 +0300, Lars Wirzenius wrote:
> After Raphael's mail yesterday, I switched from the synaptics driver
> to the xinput one (by removing xserver-xort-input-synaptics) and
> since then, I've not had a single case of moving the mouse or
> clicking by tapping by accident. When the opposite change happened a
> few weeks ago, the accidents started happening with such frequency
> that I could barely finish a sentence in the same window I started
> it.

For me at least the problem isn't palm detection, because in the end
it's a kludge that can at best only partially work.  For synaptics the
solution for this is syndaemon, and the problem is it's broken right now:

    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=818471

Maybe libinput does the typing detection syndaemon does, but I can't
find any evidence for it.

> Tapping and two-finger scrolling work perfectly fine with the xinput
> driver, too.

As a touchpad that doesn't move the mouse as I type is hugely
attractive, I tried libinput again.  Same result.  Palm detection
works wonderfully, but only because it doesn't recognise a tap from a
finger either.  Clicking (ie a tap heavy enough to activate the
mechanical switch underneath) does work.

If I configure synaptics to ignore TapButton1 taps (as opposed to
clicks), it also has faultless palm detection.



Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-12 Thread Russell Stuart
On Tue, 2016-07-12 at 10:07 +0300, Lars Wirzenius wrote:
> I've been using the following script, with variations on the
> parameters to find a working setup. The values below are the best I
> could manage, and they aren't any good.
> 
> #!/bin/sh
> 
> synclient \
>   TapButton1=1 \
>   TapButton2=2 \
>   TapButton3=3 \
>   PalmDetect=1 \
>   PalmMinWidth=50 \
>   PalmMinZ=10 \
>   VertScrollDelta=-41 \
>   HorizScrollDelta=-41 \
>   TouchpadOff=0 \
> "$@"

For what it's worth, for me on stretch synclient isn't too reliable.
Eg, "synclient TapButton1=0" has no effect - it should disable
one-fingered taps.  The libinput equivalent (xinput --set-prop) doesn't
work either.
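The sort of invocation I mean is below - a sketch only, since the
device name and property names vary with the hardware and the driver
version in use:

# find the touchpad's device id, then try to turn tapping off
# (the device and property names below vary by hardware and driver version)
xinput list
xinput --set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 0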

The only thing that has proved reliable is putting a file in
/etc/X11/xorg.conf.d/touchpad.conf:

Section "InputClass"
  Identifier"Touchpad twofinger scroll"
  MatchIsTouchpad "yes"
  Driver"synaptics"
  Option"ZAxisMapping"  "4 5"
  Option"HorizTwoFingerScroll"  "true"
  Option"VertTwoFingerScroll"   "true"
  Option"FastTaps"  "on"
  Option"PalmDetect""on"
  Option"AccelFactor"   "0.1028806" #2x
  Option"AdaptiveDeceleration"  "10"
  Option"MinSpeed"  "0.5"
  Option"MaxSpeed"  "4.75"
  Option"TapButton1""1"
  Option"TapButton2""3"
  Option"TapButton3""2"
EndSection

I would happily use libinput if it extricated me from this mess, but
right now it seems to be bugs all the way down.



Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-13 Thread Russell Stuart
On Thu, 2016-07-14 at 13:41 +1000, Peter Hutterer wrote:

Thanks for the thorough analysis.

> The second component is that apparently tapping doesn't work when
> enabled. That's most probably a bug, file one against libinput at
> bugs.freedesktop.org and it'll get fixed.

Done.  Having to file it against a Wayland component was novel, and
encouraging.



Re: synaptics vs libinput and GNOME 3.20 no longer supporting synaptics

2016-07-13 Thread Russell Stuart
On Thu, 2016-07-14 at 14:13 +1000, Russell Stuart wrote:
> The second component is that apparently tapping doesn't work when
> > enabled. That's most probably a bug, file one against libinput at
> > bugs.freedesktop.org and it'll get fixed.
> 
> Done.

For anyone still following along it is possible to get libinput to work
at least as well as synaptics on my laptop (and I presume all others):

    https://bugs.freedesktop.org/show_bug.cgi?id=96925#c6

Short form: there are bugs somewhere, but you can work around them.



Re: GR: Declassifying debian-private: First call for votes

2016-08-06 Thread Russell Stuart
On Sun, 2016-08-07 at 01:48 +0200, Debian Project Secretary - Kurt
Roeckx wrote:
> Hi,
> 
> This is the first call for vote on the General Resolution about
> declassifying debian-private.
> 
>  Voting period starts  2016-08-07 00:00:00 UTC
>  Votes must be received by 2016-08-20 23:59:59 UTC
> 
> The following ballot is for voting on declassifying parts of -private
> of
> historical interest.
> 
> This vote is being conducted as required by the Debian Constitution.
> You may see the constitution at https://www.debian.org/devel/constitu
> tion.
> For voting questions or problems contact secret...@debian.org.
> 
> The details of the general resolution can be found at:
> https://www.debian.org/vote/2016/vote_002
> 
> Also, note that you can get a fresh ballot any time before the end of
> the vote by sending a signed mail to
>    bal...@vote.debian.org
> with the subject "gr_private".
> 
> To vote you need to be a Debian Developer.
> 
> 
> BALLOT OPTIONS
> 
> Choice 1: Allow declassifying parts of debian-private
> =
> 
> Title: Declassifying parts of -private of historical interest
> 
> 1. The 2005 General Resolution titled "Declassification of debian-
> private
>    list archives" is repealed.
> 
> 2. Debian listmasters and/or other individuals delegated by the DPL
> to
>    do so are authorized to declassify excerpts of -private of
> historical
>    interest by any process which at minimum provides sufficient time
> and
>    opportunity for Debian Developers to object by GR prior to
>    declassification.
> 
> 3. In keeping with paragraph 3 of the Debian Social Contract, Debian
>    Developers are strongly encouraged to use the debian-private
> mailing
>    list only for discussions that should not be disclosed.
> 
> Choice 2: Further Discussion
> 
> 
> This is the default option. Rank this option higher than the
> unacceptable
> choices.
> 
> 
> HOW TO VOTE
> 
> To cast a vote, it is necessary to send this ballot filled out to a
> dedicated e-mail address, in a signed message, as described below.
> The dedicated email address this ballot should be sent to is:
> 
>   gr_priv...@vote.debian.org
> 
> The form you need to fill out is contained at the bottom of this
> message, marked with two lines containing the characters
> '-=-=-=-=-=-'. Do not erase anything between those lines, and do not
> change the choice names.
> 
> There are 2 choices in the form, which you may rank with numbers
> between
> 1 and 2. In the brackets next to your preferred choice, place a 1.
> Place a 2 in the brackets next to your next choice. Continue until
> you
> reach your last choice. Do not enter a number smaller than 1 or
> larger
> than 2.
> 
> You may skip numbers, leave some choices unranked, and rank options
> equally. Unranked choices are considered equally the least desired
> choices, and ranked below all ranked choices.
> 
> To vote "no, no matter what", rank "Further Discussion" as more
> desirable
> than the unacceptable choices, or you may rank the "Further
> Discussion"
> choice and leave choices you consider unacceptable blank. (Note: if
> the
> "Further Discussion" choice is unranked, then it is equal to all
> other
> unranked choices, if any -- no special consideration is given to the
> "Further Discussion" choice by the voting software).
> 
> Finally, mail the filled out ballot to: gr_priv...@vote.debian.org.
> 
> Don't worry about spacing of the columns or any quote characters
> (">") that
> your reply inserts.
> 
> NOTE: The vote must be GPG signed (or PGP signed) with your key that
> is
> in the Debian keyring. You may, if you wish, choose to send a signed,
> encrypted ballot: use the vote key appended below for encryption.
> 
> The voting software (Devotee) accepts mail that either contains only
> an
> unmangled OpenPGP message (RFC 2440 compliant), or a PGP/MIME mail
> (RFC 3156 compliant). To avoid problems I suggest you use PGP/MIME.
> 
> 
> VOTING SECRECY
> 
> This is a non-secret vote.  After the voting period is over the
> details on
> who voted what will be published.  During the vote itself the only
> information that will be published is who voted. 
> 
> You can encrypt your message to the voting system to keep your vote
> secret
> until the end of the voting period.  The software will also try to
> keep
> your vote secret and will encrypt the reply it sends to you.
> 
> 
> - - -=-=-=-=-=- Don't Delete Anything Between These Lines =-=-=-=-=-
> =-=-=-
> 4896c7a8-1d45-49db-ba5e-490da5ed275c
> [  1 ] Choice 1: Allow declassifying parts of debian-private
> [  2 ] Choice 2: Further Discussion
> - - -=-=-=-=-=- Don't Delete Anything Between These Lines =-=-=-=-=-
> =-=-=-
> 
> ---
> ---
> 
> The responses to a valid vote shall be signed by the vote key created
> for this vote. The public key for the vote, signed by the Project
> secretary, is appended bel

Re: When should we https our mirrors?

2016-10-27 Thread Russell Stuart
On Thu, 2016-10-27 at 08:35 +0200, Vincent Bernat wrote:
> Moreover, the download speed can be very slow, either from work or
> from home (100M fiber connection). Sometimes 100kbytes/s. That's a
> pain.
> 
> I am a bit worried for deb.debian.org to become a default as it
> doesn't work well for me. Am I alone to have such problems?

It's *far* better than httpredir.  But given adding httpredir as a
backup has become a net negative for me, that's not saying much. 
deb.debian.org so far has been net positive.

Yes, it's slow.  But for those of us who live on the east coast of
Australia our choices are ftp.au.debian.org (rock solid but 5Mm away,
connected to the east by wet string or carrier pigeon - pabs will
advise) or an Eastern mirror (fast but unpredictable and unreliable,
possibly because pabs doesn't live here[0]) - it's the reason we need a
backup.  I was hoping for better given fastly has POPs in Australia,
but apparently the nearest fastly POP for Debian is in the US.

The real problem is mirroring infrastructure for Debian badly needs
some love.  As Raphael spelt out when explaining the current state of
httpredir - the problem lies deeper than httpredir itself. 
deb.debian.org can bypass most the mess because they control both ends
- debian and the "mirror".

I'm sure Debian is riddled with people who could fix the mirror
network.  In the usual Debian way (why does heartbleed spring to
mind?) it will become an impossible-to-ignore itch before one of us
gets off our fat arses to do something about it. [1]



[0] pure speculation.

[1] Antarctic penguins spring to mind.  All hungry, all standing on the
ice staring at the sea wondering if there are leopard seals waiting
for dinner to deliver itself, all hoping someone else will be hungry
enough to test the dinner theory before them.



Re: unattended-upgrades by default?

2016-11-03 Thread Russell Stuart
On Thu, 2016-11-03 at 18:47 +, Steve McIntyre wrote:
> To solve the issue and provide security updates by default, I'm
> proposing that we should switch to installing unattended-upgrades by
> default (and enabling it too) *unless* something else in the
> installation is already expected to deal with security updates.

I am amazed we don't do this already.  Effectively it makes us insecure
by default.



Re: Crowd funding campaign to package browserify in debian

2016-12-23 Thread Russell Stuart
On Fri, 2016-12-23 at 21:36 +, Jonas Smedegaard wrote:
> This list is about development of Debian.
> 
> Not about how to raise money to ease developing Debian.

The first condition is fulfilled - the email is about getting
development done within Debian.  In fact given the ITPs I've seen floating
by, it's about something that is getting a lot of development done
within Debian.

Obviously it's the asking for money that rankles.  If he was asking for
some rare hardware to test on, or documentation, or even assistance you
would be fine with it.  And normally I'd agree - money is such a
difficult topic.  I also would intensely dislike seeing repeated posts
on this list begging for money on a promise of getting some work done. 
But it should be obvious by now that isn't what's happening here. He
hasn't raised much of his target, but the work is getting done anyway. 
Clearly his main passion is to get the JS development tools into
Debian.  Money is just lubricant to make the task easier.  

JS development - the techniques they use, the appalling version control,
the tendency to mash together various bits of code with no respect to
licences, the propensity to download code they are about to run from
random sites - must be contrary to the bulk of our standing
policies.  Twisting this mess into something that can be used within
Debian is understandably difficult.  But also necessary, given it is
the most active area of development on the planet right now. [0]

So given he is cracking a very tough nut that no one else has made much
of a dent in so far, maybe we can be a little flexible?



[0] I was proudly shown some production "web code" yesterday.  Cutting 
edge stuff, apparently.  A single file contained HTML, css, and JS.
 
The first thing that hit me is the JS didn't contain any semicolons -
something I found disquieting.  I was told "no, we don't use them
any more".  But what about the problems with that pointed out by
"Javascript - The Good Parts", I asked.  The reply was, "oh, no
one does this stuff without running it through an aggressive
linter, so it's completely safe - mostly strictly type safe in
fact".  (That was nice - evidently at least some of the scars
carried by the C-using forebears had been noticed.)

But how could a linter process that, I asked - it was some unholy
mess of 3 (? maybe more) intermixed languages.  It was gently explained
this was the source code form.  A large tool chain would digest 
it, turning it into something no sane human would look at.  It was 
broken into single language modules that were digestible by a 
browser, downloaded by some dynamic linker created by the tool 
chain that GETs the requisite parts as the running code links to
it while executing.  It was complete with debugging symbols packed 
into separate files, so they were there if needed.  From the 1000' 
view it was not unlike the m4 / cpp / gcc / ld / ldd GNU tool 
chain - but created in some parallel universe.

Then I noticed some JS/Typescript/? syntax I hadn't seen before.  
Not wanting to let any more grey show through I decided to chase 
it down myself.  It took a while - it was some variant of the 
new JS spread syntax - but it wasn't in ECMAScript 2015 and
wasn't recognised by any shipping browser.  I knew this new 
generation of our profession didn't share my aversion to 
releasing production code developed in a language that hasn't been 
standardised yet - but it wasn't in ECMAScript 2017 either.  
Turned out it was a proposed addition to ECMAScript 2017 
implemented by a grand total of 2 transpilers.  In production 
code!  (The mere existence of those bloody transpilers makes the 
statement "written in Javascript" near inscrutable.)

    Many of the bedrock rules I built my career on are viewed as
roadblocks to progress by this new generation, and treated accordingly.
It made me feel like I had been run over by the generation gap
bus.  They feel similarly I think - now they have containers they
configure with makefiles and rebuild nightly, they are 
understandably wondering where a monolithic solution like Debian 
fits in.  Pirate has evidently decided to work full time on
bringing these two worlds together.  Yes, he is using a rather
"novel" approach, but the entire situation is novel.  I say cut
him some slack.



Re: Crowd funding campaign to package browserify in debian

2016-12-26 Thread Russell Stuart
On Sun, 2016-12-25 at 19:17 +0100, Stéphane Blondon wrote:
> Perhaps I missed something, so I'm curious to learn more about it (a
> link or some keywords can be a good start).

The buzzword mix is:

- vue-loader
- webpack
- webpack plugin for .vue files (mix of HTML, CSS/sass/stylus, and JS).

and some browser plugin that automagically reflected any source code
changes on disk in the browser immediately, including tool messages
(eg compile errors).




Re: Can we kill net-tools, please?

2016-12-27 Thread Russell Stuart
On Tue, 2016-12-27 at 01:02 -0800, Josh Triplett wrote:
> The rest of net-tools aside (which have sensible replacements), what
> replaces netstat in the absence of net-tools?

/bin/ss, which is part of iproute2

It's probably wise to run 'dpkg -L iproute2 | grep bin/'.  They are the
tools provided by the current kernel network maintainers for
manipulating the kernel's network stack.  You will find some surprising
things in there, like tipc.
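For the common netstat invocations the rough mapping is below - a
sketch only, since the output formats (and some flags) differ, so
scripts still need updating:

ss -tlnp          # netstat -tlnp: listening TCP sockets and their processes
ip addr show      # ifconfig -a
ip route show     # route -n
ip -s link        # per-interface counters, roughly netstat -i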

If you are contemplating moving to iproute2 this web page is
invaluable:

  http://baturin.org/docs/iproute2/





Re: Can we kill net-tools, please?

2016-12-27 Thread Russell Stuart
On Wed, 2016-12-28 at 03:08 +, Wookey wrote:
> If we are supposed to change to something newer these days

We've been discussing doing that for 8 years now:

https://lists.debian.org/debian-devel/2009/03/msg00780.html

> a pointer to a 'conversion' document would be nice.

https://wiki.debian.org/NetToolsDeprecation

(There are links on that page).

It's not a changeover document, but as I said earlier my favourite
resource is: http://baturin.org/docs/iproute2/

> Like Andrew I don't like the tone of these 'get rid of this crap'
> messages.

The issue is ifconfig was the tool up until Linux 2.2, but then the
kernel developers favoured iproute2.  The kernel has moved on and they
maintained iproute2, but ifconfig has remained static.  It now doesn't
support the most mundane things like multiple IP addresses per
interface, let alone multiple routing tables, routing rules and the various
tunnelling protocols, or virtual ethernet devices needed by containers
to name but a few.
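A couple of the things I mean, as a sketch - the addresses are from the
documentation ranges:

# two IPv4 addresses on one interface (the second is invisible to ifconfig),
# plus a second routing table with its own default route; example addresses only
ip address add 192.0.2.10/24 dev eth0
ip address add 192.0.2.11/24 dev eth0
ip route add default via 192.0.2.1 table 100
ip rule add from 192.0.2.11 table 100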
 
I don't know whether "crap" is the right word, but it is certainly
baggage from a bygone era.  "Baggage" here means that if we are nice to
our users (ie, Debian sysadmins), we should not force them to know two
tools.  We only have one complete tool set available: iproute2.  This means at
the very least ifconfig can not appear in any conffile, nor can it really appear
in documented shell scripts like dhclient-script.

Unfortunately this pain does not end at ifconfig.  iwconfig has suffered
the same fate, and its replacement, iw, is the tool you have to use if
you want to use Linux wireless in anger.  It's doubly annoying because
of the "Do NOT screenscrape this tool, we don't consider its output
stable" warning iw issues.  It's not like you have much of a choice any
more.



Re: Can we kill net-tools, please?

2016-12-29 Thread Russell Stuart
On Thu, 2016-12-29 at 11:38 -0800, Russ Allbery wrote:
> It certainly doesn't provide a man page that doesn't start with a BNF
> syntax description.  The iproute2 documentation is awful.
> 
> Also, this is not at all easy to parse:
>
> # ip -o address
> 1: loinet 127.0.0.1/8 scope host lo\ valid_lft foreverpreferred_lft 
> forever

All true.  In particular the documentation produced by the kernel's
networking group is a pet hobby horse of mine.  To paraphrase an old
joke - you can't complain about most of it, because it doesn't exist.
[0]

When it comes to parsing they look equally bad once you have used them
for a while.  Worse from my point of view they are both unnecessarily
difficult to scrape in a script.  The "ip" tool does have one outstanding
attribute though - it is complete.  ifconfig doesn't list multiple ipv4
addresses (but does list multiple ipv6 addresses - what's up with that?).
route can't handle multiple routing tables let alone routing rules.
The equivalent of "ip tunnel" doesn't exist - and it goes on and on. 
net-tools might still be useful for configuring your laptop - but it's
now useless for any serious networking work.

To me this thread looks like a bunch of old men grumbling that the
young'ins have taken over what they created and turned the tools they
were comfortable with into something unrecognisable.  It's true - they
did do that, and it's true it was unnecessary.  They could have just
extended net-tools.  But this is how the young'ins have behaved since
time immemorial - when they take over the reins from the previous
generation and make it their own.  Look on the bright side.  They've
given the kernel's networking stack a large array of new tools that
weren't envisaged when net-tools was conceived - like QoS.


[0] Now I've started, the Linux kernel's networking stack is a mess.
From the outside it looks like a mob of warring tribes, each
developing their own way of doing the same thing.  To people
not familiar with it this will sound like a hyperbolic claim.  So
let's consider one simple task: dropping a packet.

- Did you know the routing table can drop a packet?
  "ip route add w.x.y.z/c blackhole" and
  "ip route add w.x.y.z/c prohibit" and
  "ip route add w.x.y.z/c unreachable" all do that.

- The traffic control engine can "police" packets.  You can "shot"
  a packet during policing.  (Being Australian, I find this odd,
  but I'm sure US citizens will be comfortable with it).

- Traffic control engine schedulers can also drop packets, (as well
  as move them like a bridge, create duplicates and a lot of other
  things).

- Iptables can drop packets.  This is how most people do it.

- The new nftables can drop packets. 
  
Not only can they drop packets, each has their own way of figuring
out what packets to drop.  Which means each must pull apart the
packet to see if it matches, so the same work is being repeated
over and over again.
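To make the point concrete, here is the same drop expressed four ways -
a sketch that assumes a classful qdisc with handle 1: is already
attached and an nftables inet "filter" table with an "input" chain
already exists; the prefix is from the documentation range:

# the same intent, four unrelated syntaxes
ip route add blackhole 203.0.113.0/24
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 203.0.113.0/24 action drop
iptables -A INPUT -s 203.0.113.0/24 -j DROP
nft add rule inet filter input ip saddr 203.0.113.0/24 drop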

This has real impacts.  One is that the spaghetti you see at the API
level is reflected underneath, making for one large, complex, hard
to understand and consequently fragile lump of code.  Another is
that the BSD networking stack is faster than Linux - sometimes near
an order of magnitude faster(!)

http://www.phoronix.com/scan.php?page=article&item=netperf-bsd-linux




Re: Can we kill net-tools, please?

2016-12-30 Thread Russell Stuart
On Fri, 2016-12-30 at 07:51 +0100, Vincent Bernat wrote:
> The same work is not repeated over and over again. The kernel keeps
> the needed information in a structure to avoid parsing the packet
> several times.

Yes, it does indeed keep the offset of the headers for the various
protocol layers (link, ip, transport) in the skbuff.  And yes, it uses
them where it can - for example when routing it knows it will be
looking at the destination ip address, and when routing it can use
those fields.  Netfilter (iptables) has a separate block of code for
each test, so it can use them because each test knows what it is
looking at.  Unfortunately this comes at a large cost - sequential
execution.  In the traffic control engine the main filter, u32, doesn't
have a clue what you are looking at, so it can't.  nftables could
conceivably do it - but internally it is structured like u32 so it
doesn't.  eBPF could also conceivably do it - but it has less of an
idea of what it is looking at than u32 so it doesn't either.

Linux provides what, 2(?) APIs for manipulating files - read/write and
memory mapped IO.  Want to count the number of ways it provides to
dissect and act upon network packets?  These aren't the esoteric things
the file system hides under ioctls either (which is arguably how the
main API remains so clean) - all these ways of pulling apart and
manipulating packets are in the fast path.

> When you need to decide how to route the packet, you need to do a
> route lookup. If the route entry you find happens to be a blackhole
> route, you drop the packet. You didn't do any additional work.

I bet the bash authors use that argument when they add another feature.
 In reality all code has a cost.  Do you remember Van Jacobson's
suggestion for speeding up Linux's network stack?  (It's mentioned in
his Wikipedia page.)  I was lucky enough to be at linux.conf.au when he
first presented it.  He provided working code and some very impressive
benchmarks.  It went nowhere because of that cost thing - the code
doesn't have to be executed in order to slow down development.  I think
Linux's networking mob pretty much gave up after the ipchains to
iptables transition.  Now stuff doesn't get ripped out and replaced,
instead new ideas are bolted onto the side - creating a bigger ball of
wax that's even harder to move.  Which is how we got to having the
ability to drop packets added in so many different places.

> Those benchmarks show huge differences between Linux distributions
> for a kernel-related task. Like all phoronix benchmarks, it should
> not be trusted.

Maybe you trust Facebook's opinion instead (it's weird - that wasn't
easy to write):

   http://www.theinquirer.net/inquirer/news/2359272/f



Re: Can we kill net-tools, please?

2016-12-30 Thread Russell Stuart
On Fri, 2016-12-30 at 10:42 +0100, Vincent Bernat wrote:
> > I bet the bash authors use that argument when they add another
> > feature.  In reality all code has a cost.
> 
> The only additional cost is the cost to check if the routing entry is
> a blackhole (while the check for anything else already exists). Even
> FreeBSD supports this feature since a long time.

I wasn't talking about the merits of doing it in any particular place. 
It was about the same thing being done in multitudes of places - in a
different way in each case, with a different API, with different code
pulling apart the packet.

> The flexibility is also what people like with Linux.

True.

> hence the need to be able to drop earlier (even adding filters
> directly in the NIC).

Of course - bypassing the entire stack is always going to be faster.

The reason it irritates me is because I don't bypass it.  I use
the multiple routing tables, the traffic control engine, iptables,
tunnels, bridges, tun/tap, ipsec, veth, vlans.  I tried to collect
information on flows - but that slowed throughput to the point people
actually noticed it on a 10M bit/sec link.  Thus my complaining about
having to use a different syntax every time I need to recognise a
packet, and how I have to set MARKs to communicate between different
levels of the stack.

And also thus my long and loud whining about the lack of documentation.

It turns out such whining isn't entirely pointless.  While looking at
the kernel code again I came across net/openvswitch.  Implemented
as yet another bolt-on, of course.  But if it does what it says on the
box it may be the answer to my complaints.  This quote from the web
site is heartening: "The goal with Open vSwitch is to keep the in-
kernel code as small as possible (as is necessary for performance)". 
Indeed.



Re: Can we kill net-tools, please?

2016-12-31 Thread Russell Stuart
On Fri, 2016-12-30 at 16:09 +0100, Bjørn Mork wrote:
> Fix it instead :)

I have submitted patches for the kernel network stack to improve QoS
for ADSL (ie where ATM cells are the link layer carrier).  I'm not
terribly forgiving of the long drawn out initiation rites the kernel
devs seem to demand, so I eventually gave up.  (It didn't help that
the kernel networking devs all seemed to use cable and so didn't
notice the issue.  As I was a heavy ADSL user it was a very pertinent
itch for me.)  My co-conspirator, Jesper Brouer, did persevere in the
area and is now a name some in kernel development might recognise. 
Years later he re-submitted the patch.  When it was rejected by the
same people in much the same way he responded "please let's not go
through this again", and it was accepted.

> FWIW, I find it much harder to document a new feature than
> implementing it. And I believe many others feel the same.  Any help
> improving the docs is greatly appreciated.

As it happens I did write documentation.  Not stuff most people consume
directly, but in an effort to use QoS I pored through the kernel source,
documented what it did and put the result up in a public place.  That
documentation is now referenced in a number of places.  For example
it's in the "SEE ALSO" of the tc-u32 man page, I think because it
copied large chunks of it.

> New networking features often have a narrow target and are added by a
> person knowing the specific area pretty well, but not necessarily 
> everything else...  Writing good docs in this context is difficult
> because your point of view is so different from the readers.

I acknowledge it's hard to write good documentation.  But I wasn't
talking about language skills.  I was talking about writing a single
word.  There wasn't one for tc for a long while.  Some modules of tc
are still only mentioned in examples - like the ifb device (which is
sad, because it's a key part of the traffic control puzzle).

Some get an introductory para only:

  atm - a scheduler for ATM VC's. 
  gact - probabilistic filter action.
  gred - generalised random early detection scheduler.
  hhf - Heavy-Hitter Filter scheduler.
  multiq - driver for multi queue devices, disguised as a scheduler
  rsvp - Match Resource Reservation Protocol filter.
  qfq - better version of pfifo, maybe.

At least a casual user can find the above and look up the kernel source
if they are desperate enough (I did it after all).  But there are other
parts that still haven't attracted a single word:

  blackhole - a scheduler that always drops packets.
  canid - a filter that looks at CAN bus ID's.
  mq - the default scheduler used for virtual devices.
  plug - scheduler allowing user space start / stopping of flows.
  teql - link bonder disguised as a scheduler.
  text - filter matching strings of text in a packet.

> Note that the reason there are two sets of tools is exactly because
> they *didn't* turn the existing tools into something
> unrecognisable.  The existing tools were left alone when the kernel
> API went from ioctls to netlink.

You learn something new every day.  I had read the kernel devs had
rewritten net-tools.  I presumed that was because the ioctl interface
had gone, so they were doing it out of the kindness of their hearts to
reduce the impact of the changeover.  Now that I've looked at the source I
see that isn't so.

> But I see no point in the subject of this discussion. 

It always puzzles me when I see someone make a point like this - right
after adding another 100 words to the very discussion they say is
pointless!

> Leave net-tools as they are for anyone used to them.

There are 2(!) versions of ifconfig available - one in inetutils and
one in net-tools.  I don't recall anybody suggesting we remove either.

The issue is the new maintainer of net-tools is changing its output.
This could reasonably be expected to break scripts scraping it.  Since
it's been superseded the suggestion is that packages using it switch to
iproute2 rather than simply adapting to the change.  Re-wording that in
Debian terms: that means no packages should depend on it.

Granted this simple suggestion has prompted several people to charge
off on vaguely related hobby horses, and I'm a prime if not the major
offender.  Undoubtedly this has distracted from the original
discussion.  That is unfortunate as the original idea seemed sound to
me - and not at all pointless.



Re: Feedback on 3.0 source format problems

2017-01-03 Thread Russell Stuart
On Tue, 2017-01-03 at 18:37 -0800, Russ Allbery wrote:
> Even if we never used tarballs, and instead our unit of operation was
> the upstream Git repository plus Debian branches, I would maintain a
> rebased branch of Debian changes to upstream

This is not a novel requirement.  Most projects I've worked with insist
you rebase your patches.  This is not new.  Before git they insisted
your patches applied cleanly - which amounts to the same thing. 
Breaking up large patches into a series of smaller independent patches
each with a simple and documented purpose isn't an unusual requirement
either.

To me this is just the software engineering 101 rule "break large
software projects into smaller, easily understood and documented
modules" applied to change control.  The reason is identical - it's so
someone coming along behind you can easily understand what you have
done and why.   For those of us at a certain age, that someone else
coming along behind might be ourselves a few months later.

Whether it's mandatory seems to depend on how big the team is.  A large
DD packaging team who submits patches upstream to a large project will
find it unavoidable.  (The Debian kernel currently carries 400 patches,
totalling 50K lines.  Managing that as a single lump would be
impossible.)  On the other hand a one person team who doesn't
contribute upstream could reasonably say it's pointless.  I know most
Debian packages are a one horse affair, but I am still surprised by the
number of DDs here claiming this software engineering 101 process is
_always_ pointless.

Source format "3.0 (quilt)" is a straightforward way of storing a
series of small documented patches.  This is in contrast to quilt(1)
the program, which is a way maintaining those patches.  I'm not fond of
quilt(1) as I regularly manage to get myself into states I can only get
out of using "rm -r; dpkg-source -x ...; reapply work done so far".

The kernel uses git as a better quilt (both are spin-offs from  the
kernel).  Gits adds some new tricks.  It doesn't get into impossible to
recover from states (yeh!).   The history it keeps allows it compare
and merge change sets - something that has to be done manually with
quilt.  That history also provides some extra features like git bisect,
and the ability to trace copyright - but they aren't so important.

What is important is that git, as used by the kernel devs, still
produces small, rebased, documented patches.  If it didn't I doubt the
kernel would be using it.  The central issue here appears to be that
none of the proposed ways of using git within Debian help with that
task.  Debian packages that do use git and have patches don't use git
to generate the patches.  Eg, the kernel team appears[0] to use
quilt(!) to maintain its patch series, even though they use git too. 
If you are using quilt anyway because you like small documented
patches, but you are a one horse team and so aren't concerned about
parallel work flows, a reasonable question is: why use git at all?


[0] I'm not on the kernel team, so I can't be sure about them using
quilt.  I guess they do because their debian/bin/test-patches tool
does use quilt.




Re: Converting to dgit

2017-01-03 Thread Russell Stuart
On Wed, 2017-01-04 at 14:47 +1000, Russell Stuart wrote:
> The central issue here appears to be that none of the proposed ways
> of using git within Debian help with that task.

On Wed, 2017-01-04 at 04:42 +, Colin Watson wrote:
> 
> git-dpm does too, and I agree it's nice.
> 
> It produces a patches-applied tree *and* the separated upstream
> patches in debian/patches/.  You never actually touch the latter by
> hand; they're purely an export format.

I stand corrected.  It seems to be precisely what is needed.
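As I understand it the round trip looks roughly like this - a sketch
only, the file name and commit message are invented:

# hedged sketch of the git-dpm round trip
git-dpm checkout-patched        # switch to the patches-applied branch
$EDITOR src/foo.c               # file name invented for illustration
git commit -a -m "Fix FTBFS on arm64"
git-dpm update-patches          # regenerate debian/patches/ from the commits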



Re: Feedback on 3.0 source format problems

2017-01-09 Thread Russell Stuart
On Mon, 2017-01-09 at 17:33 +, Ian Jackson wrote:
> All of this applying and unapplying of patches around build
> operations is complete madness if you ask me - but I don't see a
> better approach given the constraints.  dgit sometimes ends up doing
> this (and moans about it), which is even madder given that dgit has
> git to help it out.

To state the bleeding obvious, it arises because on day 1 Debian
decided to do the builds in the original source tree and then try to
recover the original source at the end by running "debian/rules clean".
When I moved from rpms to debs over a decade ago, I was surprised by
this.  Rpm creates a temporary build directory, so the "debian/rules
clean" step can be handled reliably 100% of the time by the rpm build
tool.  Debian insisting you write it creates extra work.  But when in
Rome do as the Romans do and all that.
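
To make the contrast concrete (greatly simplified, and from memory):

    # rpm: the tool unpacks a pristine copy into its own build tree and
    # builds there, so cleaning up is just deleting that tree.
    rpmbuild -bb foo.spec

    # deb: the build happens in the unpacked source tree itself, so getting
    # a pristine tree back depends on the package's own clean target.
    dpkg-buildpackage -us -uc    # debian/rules clean, build, binary - all in place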

Later, when I started to work on other people's packages, it became
apparent that many of the Romans didn't bother with it.  So the
debian/rules binary; dpkg --install; test; debian/rules clean; fix;
rinse and repeat cycle doesn't work at all for maybe 1/3 of packages,
and another 1/3 occasionally fails when something goes wrong during the
patch / build process.

When it goes wrong your only option is "rm -r; dpkg-source -x; manually
reapply your changes", which is tedious, error prone, and for me at
least conducive to losing a day's work.  It is indeed utter madness
that over a decade later we still allow this work flow.



Re: What can Debian do to provide complex applications to its users?

2018-02-27 Thread Russell Stuart
On Tue, 2018-02-27 at 14:13 +0100, Didier 'OdyX' Raboud wrote:
> > - we could ship those applications not as .deb but as container
> >   and let them have their own lifecycle
> 
> tl;dr: a new package format is needed, with a new non-suite-specific 
> repository is needed to bring the Debian added-value to these
> ecosystems.

To me 'container' is the right solution.  The problem is Debian doesn't
support building lightweight containers well.  In fact nobody does.
Docker makes an attempt, but distributing static file system images
that have to get their security updates installed by replacing the
entire image is ick.

If I were to do the entire thing over again, I would break Debian up
into a series of rings.  The innermost ring is like the innermost ring
of Linux: its filesystem(s) are readonly to all other rings.  In it sits
the code for dpkg.  But dpkg wouldn't do much beyond pulling down
packages and their security upgrades into a /debian directory, which
would look rather like /pool on the mirrors now, except the .deb's and
.dsc's would be directories rather than tar archives.

Other containers would run above this.  They create their /usr file
systems by linking into dpkg's /debian directory (which is readonly to
them).  Maintainer scripts would run when these containers are
built.  This means dpkg is no longer running maintainer scripts, so
just like an Android application a malicious package is limited in the
harm it could cause and in particular uninstall would always work.
These containers would be notified when packages they are running have
security upgrades installed, so they can swap to the new versions at a
convenient time.  We still get to keep the "one copy of each library so
we only have to fix a vulnerability once" advantage Debian has now (and
other current solutions notably lack).
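
To make that concrete, a purely illustrative sketch (every path and
version here is hypothetical - it's the shape that matters, not the
names):

    /debian/pool/openssl_1.1.0h-4_amd64/...   # unpacked .deb, written only by the inner ring
    /debian/pool/libc6_2.27-3_amd64/...
    /containers/salsa/usr/bin/openssl -> /debian/pool/openssl_1.1.0h-4_amd64/usr/bin/openssl
    /containers/salsa/data/                   # the container's only writable, persistent state

The inner ring's dpkg only populates /debian/pool; each container's /usr
is assembled as links into it when the container is built, and the
maintainer scripts run inside the container rather than in the ring
below it.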

Anybody who has fiddled with containers will have no trouble filling in
the rest of the picture.  It gives us two things: much better security
and a faster way to build containers (because the unpacking step has
already been done).

I realise it sounds grandiose and far fetched, however it can be broken
down into small(ish) steps.  Step 1 would be having dpkg unpack
everything to the /debian directory (including the state it currently
stores under /var) rather than installing it in place, and just placing
links in /usr, /etc and so on.  (I'm an optimist in that I think you
could pull that off without too many things noticing.)  Step 2 would be
to isolate /debian, so the rest of the world sits in its own container,
and to run the maintainer scripts from within that container.  (I'm
such an optimist that I think even this wouldn't require many changes
beyond dpkg.)  The next steps would be moving each application into
its own container.  They would be much harder, but I suspect once
you've done the refactoring to make dpkg-maintained containers
possible, the thing would take on a life of its own.

In this world, vdeb's are just packages that apt will only permit to be
installed in a container the user has somehow marked as insecure
(meaning no Debian QA, ie no security patches).  Anybody thinking "yeah, but not
that insecure" should probably read this bug report:

https://github.com/npm/npm/issues/19883

The idea that Debian would by default prevent that from trashing my
laptop is really appealing.



Bug#894696: ITP: nagios4 -- A host/service/network monitoring and management system

2018-04-03 Thread Russell Stuart
Package: wnpp
Severity: wishlist
Owner: Russell Stuart 

* Package name: nagios4
  Version : 4.3.4
  Upstream Author : Ethan Galstad 
* URL : http://www.nagios.org/
* License : GPLv2
  Programming Lang: C
  Description : A host/service/network monitoring and management system

Nagios is a monitoring and management system for hosts, services and
networks.  This is a metapackage installing both the monitoring daemon
and the web interface.

Nagios' features include:

 *  Monitoring of network services (via TCP port, SMTP, POP3, HTTP, NNTP,
PING, etc.)
 *  Plugin interface to allow for user-developed service checks
 *  Contact notifications when problems occur and get resolved (via email,
pager, or user-defined method)
 *  Ability to define event handlers to be run during service or host events
(for proactive problem resolution)
 *  Web output (current status, notifications, problem history, log file, etc.)

nagios4 is the upstream replacement for nagios3.




Re: concerns about Salsa

2018-06-05 Thread Russell Stuart
On Tue, 2018-06-05 at 15:44 +0100, Ian Jackson wrote:
> Packages are great for software which you can just install and use
> without much fuss.  That is often true for mature software.  But for
> services which are less mature, and more complex, and which have more
> tentacles, the admin is likely to need to change things.  This makes
> using packages awkward.

I think it's better to express the trade off in terms of consequences.

- Using software packaged by a distro requires very little effort,
  which makes packaged software the ultimate sysadmin force
  multiplier.  I know sysadmins who exploit it to the extent that they
  maintain literally thousands of working boxes.

- Taking a package straight from the developer gives you flexibility
  that distro-packaged software simply can't provide - or, worse, the
  software isn't packaged at all and this is your only option.  The
  downside is you become a nursemaid for the box it's installed on.
  Nursemaids are literally orders of magnitude less productive than one
  sysadmin maintaining an automated assembly line, so the end product
  had better be orders of magnitude more useful than the packaged
  version.

To better understand the consequences, compare Wordpress
and Drupal.  Both get their share of security issues.  However only
Wordpress is packaged for Debian, so a developer who uses Wordpress on
a Debian box with unattended-upgrades installed does not have to
spend much time worrying about patching security issues.  Reading
Drupal developers comments on the net after the recent Drupal exploits
gets you a common theme: "I've put together lots of customer sites and
now they all need upgrading from a variety of versions, but no one will
pay to do it and there is no way I have the time to do it myself."

That's the consequence of choosing the wrong model for the task at
hand.  I expect they would argue they had a hard requirement for some
Drupal feature, so the consequence of not using Drupal was the web site
didn't happen at all.   That's hard to swallow for a web site, but it's
not so hard for Salsa given the state of its dependencies in Debian.



Re: concerns about Salsa

2018-06-07 Thread Russell Stuart
On Thu, 2018-06-07 at 18:14 +0200, Tollef Fog Heen wrote:
> Packages does not imply automation (lots of people maintain machines
> by logging into each one and running apt by hand and $EDITOR on their
> configuration files; I suspect this applies to the majority of
> desktops and laptops by people on this list), and git repositories do
> not imply not-automation.  Those are simply transport mechanisms for
> bits and the level of automation you use is not decided by git-vs-
> packages.

No, distros are not "just transport mechanisms".  In particular they
allow security patch upgrades to be automated by doing several things.
One is automatically scanning for updates and installing them, which
only some rare packages provide themselves (eg, browsers); the second
is supplying backported security patches, which gives a good enough
guarantee of backward compatibility that I let them through without
testing.

I'll drive the point home with yesterday's (literally yesterday's)
headline: "Three months later, a mass exploit of powerful Web servers
continues".  The headline is referring to the thousands of unpatched
Drupal servers out there, unpatched because patching required upgrading
to the latest version, which is too hard.  Wordpress sites using the
Debian package with unattended-upgrades installed would likely have
been patched before news of the exploit made the headlines.

In a nod to Salsa's team, they have taken the road you suggest and
automated everything they can with Ansible.  And yes, it's true the
burden of supplying these backported security patches may fall on them,
so packaging it would not save them time.  But that's how it works for
DDs - we don't do this for our benefit, it's the rest of the world
that benefits.

> For debian.org hosts, the choice is primarily a matter of privilege
> separation: Service owners generally don't have root on the hosts,
> and so if they are to be able to update the service configuration,
> the service must run as a user they have access to or we need to
> build control planes with access controls which allow service owners
> to control their service.  DSA has root on the hosts and maintain
> those but  we don't run all our services, so we'd rather not be on
> the critical path for updating various services (which we'd need to
> be if those came from packages).

I accept that doesn't leave the Salsa team with much choice, but it
still leaves me scratching my head.  Containers / VPSs / VMs have been
a thing for years now.  They solve this separation problem in a way
that reduces the workload for everyone.



Re: concerns about Salsa

2018-06-08 Thread Russell Stuart
On Fri, 2018-06-08 at 10:11 +0800, Paul Wise wrote:
> In my experience the Wordpress upstream auto-upgrade system is
> typically faster than the Debian's handling of Wordpress.

I didn't realise Wordpress had an auto-upgrade system.  That puts it in
the same league as browsers like Chrome and Firefox.  I'm
impressed.

However, it's not the same service that Debian offers.  Wordpress
auto-upgrades you to the new version; Debian auto-applies security
patches to the version you already have.  To see the difference, try
googling "Wordpress upgrade breaks".  Or look at the howls of anguish
on this list directed at the upcoming Firefox ESR update for stable.
Both are examples of what happens when you update to the latest version
rather than just applying security patches.

The ultimate measure is the number of systems a person can maintain
using one system over the other.  For the Debian way of doing things
there really isn't a limit, or more accurately other limits (like
hardware failures, and dealing with network and power outages) will hit
you first.  In Wordpress's case there is a background rate of plugin
and theme breakage which will eventually overwhelm you.

The difference between the two is pretty obvious to the person paying
the bills.  I suspect that is the real reason Debian, a project that
has no income to speak of, somehow manages to have all the
infrastructure it does - 60TB servers for snapshots, a mirror network
and CDN, LWN subscriptions, free venues for its conferences and I
suspect lots of other things.  I don't know of another open source project
that gets even remotely close to this level of support.  It would be
downright peculiar, if it weren't for the fact that the value of the
service Debian provides can be judged by RedHat's turnover, which is
about $3 Billion/year.  For the firms throwing the occasional piece of
chump change Debian's way it must look like the bargain of the century.

> I also get the impression that the number of CVEs (let alone all
> security issues) is scaling faster than the amount of folks in Debian
> who are handling them.

Is there some public proxy measure for this?  For example, the number
of outstanding CVEs, or the average days it takes for a CVE to get fixed?



Re: concerns about Salsa

2018-06-09 Thread Russell Stuart
On Sat, 2018-06-09 at 13:52 +0100, Ian Jackson wrote:
> As a service owner who has chosen to run the service out of git
> for other reasons, I don't really care.  But someone who wants to run
> the service from packages might have a different view.

In my very limited experience with containers they don't need
the host privileges that come with root.  The only reason containers
want root is to continue doing privilege separation (eg, preventing the
web app from installing packages) in the way they've always done it.  For
example a fakeroot that persisted across reboots and somehow worked
with ldd / ld.so would be fine.  

It turns out that if this is all you need, it's already available.  Some
container systems can already map root inside of the container to a
less privileged user outside of the container.  Docker for example [0].

And it is generally all you need. A container seems to reduce to:

- Program(s) that run in their own little ephemeral POSIX universe
  (ephemeral in the sense when it is stopped all internal state is
  lost, as if it was stored in RAM and the power was cycled) that
  has no connections with the outside world whatsoever, except:

- They listen on TCP/UDP ports inside the container.  But these are
  completely isolated from the outside world until the sysadmin
  connects an outside IP Address / port to them, and

- Stores data that should be persisted or perhaps just visible (eg
  logs) in a few well known directories, onto which the sysadmin can
  bind mount appropriate storage (the convention seems to be /data),
  and

- Has unfettered access to the network as a client. 

Root is either not required for these things or easily avoided.  For
example even though the external world connects to Salsa on port 80 and
it would require root privileges to listen on port 80, Salsa can listen
on an unprivileged port inside the container, and if the sysadmin wants
it to serve port 80, he maps some_ip_addr:80 to that port.
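
To illustrate with current Docker (the image name, address and port
here are entirely made up):

    # /etc/docker/daemon.json: map root inside containers to an
    # unprivileged uid range outside (see [0])
    { "userns-remap": "default" }

    # run the service listening on an unprivileged port inside, and let
    # the admin decide which outside address:port, if any, reaches it
    docker run -d --name salsa \
        -p 203.0.113.10:80:8888 \
        -v /srv/salsa/data:/data \
        registry.example.org/salsa:latest

Nothing in there needs root inside the container, and with the userns
remap even "root" in there isn't root on the host.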

There are inefficiencies built into this system.  For example you may
end up with 100 containers all polling Debian for security updates and
installing them into ephemeral little worlds.  But these inefficiencies
don't seem like a good reason to avoid using containers.

[0] 
https://docs.docker.com/engine/security/userns-remap/#user-namespace-known-limitations



Re: Concerns to software freedom when packaging deep-learning based appications.

2018-07-12 Thread Russell Stuart
On Thu, 2018-07-12 at 18:15 +0100, Ian Jackson wrote:
> Compare neural networks: a user who uses a pre-trained neural network
> is subordinated to the people who prepared its training data and set
> up the training runs.

In Alpha-Zero's case (it is Alpha-Zero the original post was about)
there is no training data.  It learns by being run against itself. 
Intel purchased Mobileye (the system Tesla used to use, and maybe still
does) with largely the same intent.  The training data in that case is
labelled videos resembling dash cam footage.  Training the neural
network requires huge amounts of it, all of which was produced by
Mobileye by having humans watch the video and label it.  This was
expensive and eventually unsustainable.  Intel said they were going to
attempt to train the network with videos produced by game engines.  I
haven't seen much since Intel purchased Mobileye, however if they
succeed we are in the same situation - there is no training data.  In
both cases it is just computers teaching themselves.

The upshot is I don't think focusing on training data or the initial
weights is a good way to reason about what is happening here.   If Deep
Mind released the source code for Alpha-Zero anyone could in principle
reproduce their results if you define their result as I'm pretty sure
they do: produce an AI capable of beating any other AI on the planet at
a particular game.  The key words are "in principle" of course, because
the other two ingredients they used were 250 MWh of power (a wild
guess on my part) and enough computers to be able to expend that in 3
days.

A better way to think about this is that the AI they created is just
another chess (or Go or whatever) playing game, no different in
principle to chess games already in Debian.  However, its move
pruning/scoring engine was created by a non-human intelligence.  The
programming language that intelligence uses (the weights on a bunch of
interconnected polynomials) and the way it reasons (which boils down to
finding the minima of a high-dimensional surface by gradient descent -
sliding down the slope) is not something human minds are built to cope
with.  But even though we can't understand them these weights are the
source, as if you give them to a similar AI it can change the program.
In principle the DFSG is fulfilled if we don't discriminate against
non-human intelligences.

Apart from the "non-human" intelligence bit none of this is different
to what we _already_ accept into Debian.  It's very unlikely I could
make sensible contributions to the game engines of the best chess,
backgammon or Go programs Debian has now.  I have no doubt I could
understand the source, but it would take me weeks / months if not years
to understand the reasoning that went into their move scoring engines.
The move scoring engine happens to be the exact thing Alpha-Zero's AI
(another thing I can't modify) replaces.  In the case of chess at
least they will have a database of end games they rely on, a database
generated by brute force simulation using quantities of CPU
cycles I simply could not afford.

Nonetheless, cost is an issue.  To quantify it I presume they will be
able to rent the hardware required from a cloud provider - possibly we
could do that even now.  But the raw cost of that 250 MWh of power
is around $30K (250,000 kWh at roughly $0.12/kWh), and I could easily
imagine it doubling many times as it goes through the supply chain, so
as another wild guess you are probably looking at $1M to modify the
program.  $1M is certainly not "free" in any sense of the word, but
then in reality no other Debian development is free either.  All
development requires computers and power which someone has to pay for.
The difference now is merely one of a few added noughts, and those
noughts exclude almost all of us from working on the source.  But I'd
be surprised if there aren't Debian users out there who *do* have the
means to fiddle with these programs if they had the weights and the
source used to create them.  Which means anyone could work on them if
they had the means - but I don't have the means.  *shrug*

Which is how I reach the opposite conclusion to Ian.  If Deep Mind
released the Alpha-Zero source code under a suitable licence, plus some
example neural networks they generated with it (which happen to be the
bit everyone uses), Debian rejecting the example networks as "not
DFSG-free" would be a mistake.  I view one of our roles as advancing
free software, all free software.  Rejecting some software because we
humans don't understand it doesn't match that goal.



Re: "Ask HN: What do you want to see in Ubuntu 17.10?"

2017-04-02 Thread Russell Stuart
On Fri, 2017-03-31 at 21:48 +0100, Chris Lamb wrote:
> There's a very active conversation happening on Hacker News right
> now entitled «What do you want to see in Ubuntu 17.10?»:
> 
>   https://news.ycombinator.com/item?id=14002821
> 
> I haven't read every comment yet, but are there any feature requests
> that seem particularly relevant to us?
> 
> (As a meta-comment, I was quite struck by the number of requests that
> should "obviously" be requested upstream instead. I wouldn't want to
> label this as "wrong" but I think this might speak to the different
> perception we have towards the upstream → distro → user
> relationship.)

I was fascinated to see the two top ones were also my two top ones. 
But both are upstream issues.

The first is better HDPI handling.  This will require Wayland as X11
simply can't handle connecting to monitors with wildly different DPI
settings. But even just handling HDPI is problematic - fixing it with
my 300 dpi screen took multiple manual hacks including in Gimp's case
installing an icon pack not available in Debian.

The second was better gesture recognition.  I'd put the onscreen
keyboard into that basket too.  This particularly affects 2-in-1
laptops - these run a standard OS (ie, Windows / Linux) but can be used
either touch-only or touch + keyboard.  At the low end they appear to
be in the process of wiping out tablets, which is good for us.  But
because the onscreen keyboard doesn't work and gestures aren't
supported uniformly well by all apps these things aren't usable
currently, and they have to be usable out of the box with a standard
install - just like Windows manages to do.

Note these two cover the largest growth groups - the very low end, and
laptops with HDPI monitors.

The third one that got my attention is security updates that can be
trusted to work on a standard desktop install.  Where we take the
approach that creating our own security patches is just too hard, this
means stable must just follow upstream (like Firefox ESR),
automagically.  They don't do this reliably now for the default
install, even with unattended-upgrades, on a machine that's
intermittently on.  This one is definitely in our court, and it's an
absolute must IMO, but it's already been discussed to death here.



Re: "Ask HN: What do you want to see in Ubuntu 17.10?"

2017-04-02 Thread Russell Stuart
On Mon, 2017-04-03 at 15:35 +1000, Brian May wrote:
> On 2017-04-03 10:10, Russell Stuart wrote:
> > The first is better HDPI handling.  This will require Wayland ...
> >  
>  
> Did I miss something? I thought Ubuntu was doing their own thing and
> not using Wayland.
>  
> https://wiki.ubuntu.com/Mir/Spec

Yep.  But I was referring to Debian.



Re: "Ask HN: What do you want to see in Ubuntu 17.10?"

2017-04-04 Thread Russell Stuart
On Wed, 2017-04-05 at 11:18 +0800, Paul Wise wrote:
> Not AFAIK. I would guess that needrestart would need to be promoted
> to standard priority and needrestart-session would need to be added
> to tasksel's task-desktop package, or to each of the task-*-desktop
> packages; this adds wxWidgets to the default install though. The
> latter would allow different desktops to add different
> implementations, for example if someone wrote a GNOME Shell extension
> to highlight windows of applications that need restarting.

The original HN thread that triggered this was more about personal
machines, ie laptops and tablets.  That is where I'm coming from
anyway.  As it happens, Steve McIntyre was looking at the server side
and specifically excluded laptops from his auto-install security patch
deliberations, so nominally there isn't an overlap.

As far as I can tell, for laptops rebooting is a non-issue mainly
because suspend is not reliable enough to use safely [0] - so they are
rebooted every day.  Ergo just fixing bug #744753 would be the cure if
it is indeed the problem - but it doesn't sound like it to me as this
isn't a suspend issue.

The itch I'm trying to scratch is that I've convinced some co-workers
to ditch Windows for Linux.  All our infrastructure and development is
done under Linux, so it makes sense.  For the most part it works very
well, apart from the 3 issues I raised earlier.  Fortunately they don't
use the tablet mode and they don't have HDPI displays, so those aren't
issues for them.  But the not-installing-security-updates thing means I
have to remember to do it for them.


[0] By "not safe" I mean suspend can destroy hardware.  Not directly of
course.  The first issue is modern laptops have so much DRAM it
can drain the battery overnight, which makes suspend pretty useless
if you are expecting it to reliably save your work.  The solution
is to put the laptop into hibernate mode if it's been suspended too
long.  This mostly works - but it has one disastrous failure mode.
It must wake the laptop up to put it into hibernate mode but
sometimes it doesn't wake successfully. The result is the
motherboard is powered up, the laptop is in the bag with no 
ventilation and the thing cooks.






Re: "Ask HN: What do you want to see in Ubuntu 17.10?"

2017-04-05 Thread Russell Stuart
On Wed, 2017-04-05 at 12:38 +0100, Ian Jackson wrote:
> Me too.  I guess it depends very much on whether one can afford to
> buy a good laptop which works well with Linux.

Not in this case.  The laptop concerned is a Dell XPS 9550.  It wasn't
cheap and in the 12 months of ownership I'd describe the hardware as
better than "good".  Dell's part of the design is not a big part of the
total of course - Intel, Sony, Broadcom, Samsung to name a few all have
their fingers in the pie, as they do in every laptop.  But the bits Dell
did contribute are extraordinarily well done, with the exception of the
keyboard layout.  It's definitely the best laptop I've ever owned.

My pain is largely self inflicted: I covet shiny bits.  Lots of
companies sell new laptops with bits a couple of years old that work
with Debian stable.  Knowing this, I bought the XPS anyway.

Although there are many components in this laptop, almost all of the
pain came from one: Intel's Skylake CPU.  (The touchpad also
contributed, but the libinput maintainers were fantastic, going way
above and beyond the call of duty and contacting me directly when I
complained on LWN.  It now works wonderfully; worth the early adopter
pain and then some.)  Getting Intel's CPU and in particular its
internal GPU working took far longer and involved more pain than I
bargained for.  Just to put this into perspective: they didn't work on
Windows either.  Intel CPUs are not something you can avoid by buying a
more expensive laptop.

All this new hardware has meant I have had to run Debian Testing.
Combine shiny new hardware with the shiny new software needed to drive
it, and random little surprises become part of one's life.  Coming close
to dropping your new laptop because of a burning sensation as you
retrieve it from its bag wasn't surprising or even unexpected - not to
me anyway.

Anyway, this discussion prompted me to get off my bum and look at why
unattended-upgrades wasn't working.  Turns out the default install has
"label=Debian-Security", and all these laptops are running testing.  I
guess the assumption that people running testing have the wherewithal
to configure their machines properly isn't unreasonable.
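
(For anyone else caught by this: from memory the stock
/etc/apt/apt.conf.d/50unattended-upgrades only allows origins roughly
like

    Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
    };

which matches nothing on a box tracking testing, so unattended-upgrades
quietly does nothing there unless you widen the pattern.)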




Re: "Ask HN: What do you want to see in Ubuntu 17.10?"

2017-04-06 Thread Russell Stuart
On Thu, 2017-04-06 at 09:22 +1000, Russell Stuart wrote:
> Anyway, this discussion prompted me to get off my bum and look at why
> unattended-upgrades wasn't working.  Turns out the default install
> has "label=Debian-Security", and all these laptops are running
> testing.  I guess the assumption that people running testing have the
> wherewithal to configure their machines properly isn't unreasonable.

And ... that wasn't the full story.  The full story is when you install
unattended-upgrades it defaults to "off", or more precisely this
debconf setting defaults to "false":

unattended-upgrades/enable_auto_updates

This sort of thing drives me insane.  Unattended-upgrades doesn't do
anything if you don't set this to true, and why would you install it if
you didn't want it to run?  I guess it must be because some packages
depend on it, and maybe they run it themselves rather than relying on
anacron.  If that's the reason, the solution is to split it into two
packages: maybe "unattended-upgrades" which does do what it says on the
box, and "unattended-upgrades-common" which other packages can depend on
safely.
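
For reference, the non-interactive way to flip it - what the debconf
question ultimately does is write /etc/apt/apt.conf.d/20auto-upgrades -
is roughly (from memory, so check before relying on it):

    echo 'unattended-upgrades unattended-upgrades/enable_auto_updates boolean true' \
        | debconf-set-selections
    dpkg-reconfigure -f noninteractive unattended-upgrades

    # which leaves 20auto-upgrades containing:
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

so at least it can be baked into preseeds and config management.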



Re: UMASK 002 or 022?

2017-06-30 Thread Russell Stuart
On Fri, 2017-06-30 at 21:22 +1000, Scott Leggett wrote:
> If windows is different, it looks to be the outlier because macOS
> behaves the same way as Debian[0]:
> 
>   > For example, the default umask of 022 results in permissions of 644
>   > on new files and 755 on new folders. Groups and other users can read
>   > the files and traverse the folders, but only the owner can make
>   > changes.
> 
> [0] https://support.apple.com/en-us/HT201684

Windows being an outlier is a recent thing.  Earlier versions behaved
like the rest of us.  Such behaviour originated in a time when computer
users were mostly Uni students themselves.  They knew what file
permissions were and how to change them, and were smart enough not to
be scared of sharing as the default philosophy.  Unfortunately for gwmf
m...@openmailbox.org most Debian developers come from that cohort.

gwmf...@openmailbox.org is right in saying today's computer users don't
have the "sharing is what makes us bigger than the sum of the parts"
philosophy.  Where he goes wrong is in assuming they share their
computers.  While there was a time many people shared a single CPU,
today many CPUs share a person.  Or less obliquely, everyone has their
own phone / tablet / laptop, which they don't share with anyone except
US border agents.  In this environment umask is a quaint hallmark of a
bygone time.

The one example he gave of students sharing a University computer is a
furphy.  It's true such sharing still happens.  But the person
in charge of the machine isn't some naive first year pleb.  It's a
battle hardened university sysadmin who, god bless his black heart, has
faced down thousands of aspiring university students training in the
art he long ago mastered.  He knows how to wield a umask with power and
precision.  He doesn't whinge about pam_umask not being the default; he
fixes it, and while he's at it makes sure the shell scripts in
/etc/X11/Xsession.d/ get exactly the umask they deserve.
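
For completeness, the battle hardened recipe (from memory, so verify
before deploying it) is roughly:

    # /etc/pam.d/common-session
    session optional pam_umask.so

    # /etc/login.defs - pam_umask falls back to this when no umask= option is given
    UMASK 027

plus checking whatever /etc/X11/Xsession.d/ scripts the display manager
runs, since some of them set a umask of their own.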

TL;DR - this complaint is 20 years too late.


