dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock

There are various blogs guiding people to use a Debian Live CD for
managing PGP master keys

Has anybody thought of making a dedicated live CD image for this
purpose, with some kind of PGP quick setup wizard and attempting to
enforce a sane and secure workflow?

One page I came across suggested using the Tails environment, but it is
not clear that using Tails is a good idea.  The focus of Tails is using
the network anonymously, whereas a PGP master key is intended to assert
your identity and may facilitate tracking you.  Having a different image
for this purpose may be a simple way to maintain a distinction between
these concepts.

Some specific things that the live image could do:
- verifying there is no network connection, no DHCP daemon,
automatically shutting down if a network connection becomes active
- formatting 2 or 3 flash drives in a mirrored configuration (md or
Btrfs) to mount at ~/.gnupg
- formatting another flash drive for distributing the public key
- preparing smart cards
- key renewal
- storing and printing revocation certificate
- asking users for their user ID in a GUI and doing all the necessary
gnupg commands for them
- logging all the gnupg commands for advanced users to inspect
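
A rough sketch of what the mirrored-flash-drive step could run under the
hood (device names are only examples; a wizard would detect them):

mkfs.btrfs -f -m raid1 -d raid1 /dev/sdb1 /dev/sdc1
btrfs device scan                  # let the kernel see both members
mkdir -p ~/.gnupg && chmod 700 ~/.gnupg
mount /dev/sdb1 ~/.gnupg           # naming either member mounts the pair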




Re: Extending the build profile namespace

2016-04-25 Thread Manuel A. Fernandez Montecelo

2016-04-24 20:08 Helmut Grohne:


* The nocheck profile is the cousin of DEB_BUILD_OPTIONS=nocheck and
  must be used in conjunction with that option. Its sole purpose is to
  mark droppable dependencies and it seems to be used properly. I would
  be happy to see even wider adoption (e.g. #787044), because this is
  one of the easiest and safest ways to work on bootstrap problems.


I think that having this duplication (potentially with different
meanings or scopes, and maybe diverging further over time as used in the
field) is undesirable in the long run.

Perhaps these OPTIONS and PROFILES should be merged, in a way that if
one is enabled, the other also is.  (Is this the plan already?)

(Same applies for "nodoc*").
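
To make the duplication concrete: today a package typically carries both
spellings, and the builder has to pass both by hand (a rough sketch;
package and dependency names are made up):

# debian/control: the dependency is droppable via the *profile*
Build-Depends: debhelper (>= 9), python3-pytest <!nocheck>

# debian/rules: the test suite is skipped via the *option*
ifneq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
override_dh_auto_test:
endif

# invocation: both have to be given explicitly
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -B -Pnocheck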


Maybe this has already been considered and discarded, but from having
thought about this in the past, I think that an option to solve this
would be:

- to have fine-grained and well-defined knobs in DEB_BUILD_OPTIONS
 (e.g. nodoc, nocheck, noLANGbindings...), possibly more standardised
 than today, and usable independently of profiles -- this is also your
 idea, as far as I can tell

- and then DEB_BUILD_PROFILES turning on one or several of these knobs
 at once, consistently across packages (e.g. a feasible cross-stage1
 always implying both "nodoc" and "nocheck" and dropping all language
 bindings, among others).


Even if for many packages it wouldn't be necessary to drop some/most
functionality, the target for bootstrap builds should be IMO to only
enable the minimal necessary deps for the packages to work (to save time
and space when [re]bootstrapping, if nothing else).

(More on this below)



2) Given the mess with stage profiles, I think that we should provide
   some better way for generic feature profiles (like Gentoo USE flags)
   that are to be used consistently by multiple packages. Of course,
   Debian is not going to replicate the diversity of Gentoo's USE
   flags, but adding them driven by demand may be a sane choice.


Gentoo-style USE flags are based on the idea that, rather than having an
enable-all approach and having "negative" options (disabling specific
stuff), they should be changed to behave as "positive" (have a baseline
of "disable everything non-essential" and enable functionality as
needed/demanded).

Since Debian usually has full functionality enabled by default, and
this is what rdeps expect and so on, I think that this approach means
going against the tide.

Your proposal is more like DONTUSE flags :) -- like "enable disabling
functionality".



   For
   instance, audit, libcap-ng, libprelude and newt could use a "nopython"
   profile for disabling Python language bindings instead of gaining a
   meaningless "stage1" profile for breaking dependency cycles. I see
   immediate practical use (for replacing stages) in profiles disabling
   Go (nogolang), Java (nojava), Perl (noperl) and Python (nopython)
   bindings and vague need for disabling Apparmor support (noapparmor),
   Bluetooth support (nobluetooth), Lua bindings (nolua), SELinux
   support (noselinux), systemd integration (nosystemd), and systemtap
   support (nosystemtap).


I'd also consider a "noaudit"; audit has a lot of rdeps and the use of
this framework is not needed at the time of bootstrapping.
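
For reference, a feature profile of that kind would presumably look
roughly like this in an affected package's debian/control (a made-up
sketch; "foo" stands for whichever library grows the profile):

Build-Depends: debhelper (>= 9), python3-dev <!nopython>, swig <!nopython>

Package: python3-foo
Architecture: any
Build-Profiles: <!nopython>
Description: Python 3 bindings for foo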



Furthermore, having a way to distinguish "safe" (package set changing)
from "unsafe" (content changing) profiles would be very helpful.


Obviously you have more experience here, and I don't want to say that
this idea is without merits.

However, in general as a part-time porter, I don't think that
distinguishing between safe and unsafe is important for the general case
of bootstrapping an architecture (it's more important for the case of
checking whether everything is bootstrappable).

One assumes that the packages being built are just a means to an end,
that they will have to be rebuilt with full functionality in any case.
The failure (or unsafeness) of packages to behave normally doesn't
matter, as long as it eventually allows you to reach a stage where your
whole set can be used to rebuild and improve itself to a full-fledged
Debian.



Unless I hear objections, I will start using those profiles and ask
lintian maintainers to relax profile checks.


This message is not an objection, just thinking aloud in case the ideas
above help in some way (even if only to gauge the mindset).


Cheers.
--
Manuel A. Fernandez Montecelo 



Re: Extending the build profile namespace

2016-04-25 Thread Ian Jackson
Helmut Grohne writes ("Extending the build profile namespace"):
>  * The nodoc profile is a bit strange. It is supposed to drop
>documentation from packages or to drop documentation packages. The
>former leads to packages whose content varies with profiles (which
>generally is bad)

I think there is nothing wrong with a build profile producing the same
packages with different content.

IMO someone (human or software) who specifies the use of a build
profile is responsible for knowing the semantic effect of the profile.

For this reason each profile name should come with a specification.

Earlier in your mail you skipped over the nocheck profile and asserted
that its semantics are to `mark droppable dependencies'.  But the
semantics are also that when this profile is used, there is a greater
probability of generating nonfunctional packages (i.e. of failing to
detect bugs where without `nocheck', the brokenness would be detected
and the build would fail).

>  and the latter mostly drops Arch:all packages, so
>in many cases simply doing an arch-only build achieves the same
>effect. If the only Architecture: all package from a source package
>is a -doc package, then generally the nodoc profile is not necessary
>(e.g. cargo and rustc). Just populate Build-Depends-Indep properly.
>Maybe we should revise rules for this profile?

I don't agree with this analysis at all.  One thing you might want to
do is deliberately run builds of many packages in a situation where
you know you don't care about generating documentation (or would like
to save the space consumed by documentation).

For this to work, there has to be a way to specify such a build
without knowing the details of the package.


>Also some packages implement this as a non-profile
>DEB_BUILD_OPTIONS=nodoc (e.g. botch, brian, cython, dipy, isso, [...]

AIUI the difference between a build profile and a DEB_BUILD_OPTIONS
value is simply that a build profile is able to predictably[1] change
the set of build dependencies and/or the set of generated binary
packages.

[1] By predictably I mean that this is specified in a machine-readable
way, so that higher level build orchestration tools can make use of
the information.

ISTM that the nodoc build option should be phased out in favour of a
build profile.  For compatibility, the specification for the nodoc
profile should say that nodoc builds should always be done with the
nodoc build option too.  dpkg-buildpackage could handle this.
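
Until dpkg-buildpackage couples the two automatically, a package can
already honour either spelling with something like this in debian/rules
(a sketch; dpkg-buildpackage exports DEB_BUILD_PROFILES to the build):

ifneq (,$(filter nodoc,$(DEB_BUILD_OPTIONS) $(DEB_BUILD_PROFILES)))
override_dh_installdocs:
override_dh_installexamples:
endif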


>  * The various stage profiles serve vastly different needs. The common
>theme is breaking dependency cycles, but that's about where
>commonality ends.
...
>* It is generally not well defined what functionality is dropped in
>  what stages. Instead, the stages are derived from practical need
>  (which is good). Still, we lose track of whether these stages are
>  still needed and whether they still work over time. I believe that
>  this undefinedness is a bad property of these profiles and that we
>  should therefore stop using them whenever feasible. Instead, I'd
>  like to see specific profiles (e.g. drop Perl bindings). I
>  acknowledge that this is the path to becoming more like Gentoo (USE
>  flags), but maybe that's a good direction?

I don't think there is a real problem with unneeded stages, or broken
stages, lingering.

I could be wrong but I think the stage profiles are pretty easy to
keep in the affected packages and don't have a big maintenance cost.
If they are still needed, they will get tested, and be maintained.
If they aren't needed right now they may rot, but keeping them there
and half-working will make it easier if they are needed again.  I
think it is better to keep them there than to flail about introducing
and removing them.

The stage* system avoids doing a lot of work to determine and describe
precisely the situation in an abstract way; rather, it is an explicit
and manually maintained bootstrap sequence (with a cross-archive
representation).

In general in Debian we keep lots of things which we're not sure are
still useful and which we're not sure are still working.  Whenever we
remove such a thing it usually turns out someone was relying on it, or
that someone had been using it in a way that avoids the breakage.

> Given the above analysis, I see two immediate needs for change in the
> handling of build profiles:
> 
>  1) We should provide a namespace (profile name prefix) where packages
> can add their own custom profiles at will and where there is room
> for experimentation. Such a namespace would improve the metadata for
> packages like gcc-$VER, cyrus-sasl2, dnsmasq and reprepro. I propose
> that packages can use "pkg.$sourcepackage.$anything" whenever the
> maintainer of $sourcepackage agrees with that use.

Very good idea.
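
As a made-up illustration, such a namespaced profile could then appear in
the affected package's debian/control like this (the profile name below
is invented):

Build-Depends: libdbus-1-dev <!pkg.dnsmasq.nodbus>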

>  2) Given the mess with stage profiles, I think that we should provide
> some better way for generic fe

Re: dedicated live CD for PGP master key management

2016-04-25 Thread Ian Jackson
Daniel Pocock writes ("dedicated live CD for PGP master key management"):
> Some specific things that the live image could do:
> - verifying there is no network connection, no DHCP daemon,
> automatically shutting down if a network connection becomes active
> - formatting 2 or 3 flash drives in a mirrored configuration (md or
> Btrfs) to mount at ~/.gnupg
> - formatting another flash drive for distributing the public key
> - preparing smart cards
> - key renewal
> - storing and printing revocation certificate
> - asking users for their user ID in a GUI and doing all the necessary
> gnupg commands for them
> - logging all the gnupg commands for advanced users to inspect

These sound like very cool ideas.

Ian.

Oh, were you looking for help rather than merely encouragement? :-)



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock
On 25 April 2016 14:55:07 CEST, Ian Jackson  
wrote:
>Daniel Pocock writes ("dedicated live CD for PGP master key
>management"):
>> Some specific things that the live image could do:
>> - verifying there is no network connection, no DHCP daemon,
>> automatically shutting down if a network connection becomes active
>> - formatting 2 or 3 flash drives in a mirrored configuration (md or
>> Btrfs) to mount at ~/.gnupg
>> - formatting another flash drive for distributing the public key
>> - preparing smart cards
>> - key renewal
>> - storing and printing revocation certificate
>> - asking users for their user ID in a GUI and doing all the necessary
>> gnupg commands for them
>> - logging all the gnupg commands for advanced users to inspect
>
>These sound like very cool ideas.
>
>Ian.
>
>Oh, were you looking for help rather than merely encouragement ? :-)


I had already made up some live CDs for ready-to-run VoIP and remote hands 
purposes, so I can probably do some of what is required, but it seems like a 
good idea to avoid duplicating any other efforts in this area too.





Re: dedicated live CD for PGP master key management

2016-04-25 Thread Holger Levsen
On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock wrote:
> I had already made up some live CDs for ready-to-run VoIP and remote hands 
> purposes, so I can probably do some of what is required, but it seems like a 
> good idea to avoid duplicating any other efforts in this area too.
 
shouldn't most of the functionality of this go into (a) dedicated
package(s) which could then be used by several projects, e.g. by tails
and grml and debian live-cds?


-- 
cheers,
Holger




Re: dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock
On 25/04/16 16:23, Holger Levsen wrote:
> On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock wrote:
>> I had already made up some live CDs for ready-to-run VoIP and remote hands 
>> purposes, so I can probably do some of what is required, but it seems like a 
>> good idea to avoid duplicating any other efforts in this area too.
>  
> shouldn't most of the functionality of this go into (a) dedicated
> package(s) which then can be used by several, eg by tails and grml and
> debian live-cds?
>
Some parts of such a project could probably be packaged.

One of the ideas I had is that it should have a kernel compiled without
any networking support; in that case it may not make sense to mix bits
of the solution with other live CDs.

Another interesting idea may be having an application that runs in Tails
to download other people's keys from key servers, automatically using a
different Tor connection for each download.



Bug#822608: ITP: libdynamic-graph -- Dynamic graph C++ library development package

2016-04-25 Thread Rohan Budhiraja

Package: wnpp
Severity: wishlist
X-Debbugs-CC: debian-devel@lists.debian.org

Package name: libdynamic-graph
 Version: 3.0.0
Upstream Authors: Thomas Moulard 
  François Bleibel 
  François Keith 
  Nicolas Mansard 
  Olivier Stasse 
 URL: https://github.com/proyan/dynamic-graph


 License: LGPL-3
 Description: The dynamic graph library allows the representation of
  data-flow in C++.
  It provides fast graph evaluation and a simple script language to
  manipulate the graph actions.
  .
  This package contains development files (headers and pkg-config file).
  See http://stack-of-tasks.github.io/ for details of use, development
  and further documentation.



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Holger Levsen
On Mon, Apr 25, 2016 at 05:24:21PM +0200, Daniel Pocock wrote:
> Another interesting idea may be having an application that runs in Tails
> to download other people's keys from key servers, automatically using a
> different Tor connection for each download.

apt show parcimonie | $magic
Description: privacy-friendly helper to refresh a GnuPG keyring
 parcimonie is a daemon that slowly refreshes a gpg public keyring
 from a keyserver.
 .
 It refreshes one OpenPGP key at a time; between every key update,
 parcimonie sleeps a random amount of time, long enough for the
 previously used Tor circuit to expire.
 .
 This process is meant to make it hard for an attacker to correlate
 the multiple performed key update operations.
 .
 See the included design document to learn more about the threat
 and risk models parcimonie attempts to help coping with.


-- 
cheers,
Holger




Re: dedicated live CD for PGP master key management

2016-04-25 Thread Christian Seiler

Am 2016-04-25 17:24, schrieb Daniel Pocock:

On 25/04/16 16:23, Holger Levsen wrote:

On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock wrote:
I had already made up some live CDs for ready-to-run VoIP and remote 
hands purposes, so I can probably do some of what is required, but it 
seems like a good idea to avoid duplicating any other efforts in this 
area too.


shouldn't most of the functionality of this go into (a) dedicated
package(s) which then can be used by several, eg by tails and grml and
debian live-cds?


Some parts of such a project could probably be packaged

One of the ideas I had is that it should have a kernel compiled without
any networking support, then it may not make sense to mix bits of the
solution with other live CDs


Well, as Debian kernels are modularized, why not simply create a
package that blacklists all network drivers? Then you don't have
to compile your own kernel, but just make sure that the list of
networking-related kernel modules is up to date, which seems to
me to be a lot less work (especially since you can potentially
automate that by looking for stuff in drivers/net).

Plus a tool that looks at the list of loaded modules and checks
that there isn't any network driver loaded.
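
A rough sketch of both pieces (the module names are just examples; a real
blacklist would be generated from drivers/net):

# /etc/modprobe.d/pgp-clean-room.conf -- refuse to load network drivers
install e1000e /bin/false
install iwlwifi /bin/false
install cfg80211 /bin/false

# warn if anything living under drivers/net got loaded anyway
for m in $(lsmod | awk 'NR>1 {print $1}'); do
    modinfo -F filename "$m" 2>/dev/null | grep -q '/drivers/net/' \
        && echo "WARNING: network driver loaded: $m"
done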

Regards,
Christian



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock
On 25/04/16 17:34, Christian Seiler wrote:
> Am 2016-04-25 17:24, schrieb Daniel Pocock:
>> On 25/04/16 16:23, Holger Levsen wrote:
>>> On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock wrote:
 I had already made up some live CDs for ready-to-run VoIP and
 remote hands purposes, so I can probably do some of what is
 required, but it seems like a good idea to avoid duplicating any
 other efforts in this area too.
>>>
>>> shouldn't most of the functionality of this go into (a) dedicated
>>> package(s) which then can be used by several, eg by tails and grml and
>>> debian live-cds?
>>>
>> Some parts of such a project could probably be packaged
>>
>> One of the ideas I had is that it should have a kernel compiled without
>> any networking support, then it may not make sense to mix bits of the
>> solution with other live CDs
>
> Well, as Debian kernels are modularized, why not simply create a
> package that blacklists all network drivers? Then you don't have
> to compile an own kernel, but just make sure that the list of
> networking-related kernel modules is up to date, which seems to
> me to be a lot less work (especially since you can potentially
> automate that by looking for stuff in drivers/net).
>
> Plus a tool that looks at the list of loaded modules and checks
> that there isn't any network driver loaded.
>

I agree that is probably easier for development, although from a
security point of view the strategy would be to avoid having any
networking code in the environment at all

I've progressed the whole concept from vapourware to wikiware now:

https://wiki.debian.org/OpenPGP/CleanRoomLiveEnvironment

Does the workflow make sense?



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Christian Seiler
On 04/25/2016 06:38 PM, Daniel Pocock wrote:
> On 25/04/16 17:34, Christian Seiler wrote:
>> Am 2016-04-25 17:24, schrieb Daniel Pocock:
>>> On 25/04/16 16:23, Holger Levsen wrote:
 On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock wrote:
> I had already made up some live CDs for ready-to-run VoIP and
> remote hands purposes, so I can probably do some of what is
> required, but it seems like a good idea to avoid duplicating any
> other efforts in this area too.

 shouldn't most of the functionality of this go into (a) dedicated
 package(s) which then can be used by several, eg by tails and grml and
 debian live-cds?

>>> Some parts of such a project could probably be packaged
>>>
>>> One of the ideas I had is that it should have a kernel compiled without
>>> any networking support, then it may not make sense to mix bits of the
>>> solution with other live CDs
>>
>> Well, as Debian kernels are modularized, why not simply create a
>> package that blacklists all network drivers? Then you don't have
>> to compile an own kernel, but just make sure that the list of
>> networking-related kernel modules is up to date, which seems to
>> me to be a lot less work (especially since you can potentially
>> automate that by looking for stuff in drivers/net).
>>
>> Plus a tool that looks at the list of loaded modules and checks
>> that there isn't any network driver loaded.
>>
> 
> I agree that is probably easier for development, although from a
> security point of view the strategy would be to avoid having any
> networking code in the environment at all

Well, your live CD creator script could also drop the networking
modules from the image, then they aren't available on the live
CD/USB key at all. Unless there is some driver compiled into the
kernel (I haven't checked), this should also be sufficient.

> I've progressed the whole concept from vapourware to wikiware now:
> 
> https://wiki.debian.org/OpenPGP/CleanRoomLiveEnvironment
> 
> Does the workflow make sense?

In principle yes, however it doesn't quite fit with the workflow
I'd like to use something like that for: my master key is on two
separate SD cards, and I only have one SD card reader. So what I
do when I need to change something on the key is to insert one SD
card, copy the .gnupg directory to a tmpfs, do the modifications,
copy the directory back to the first SD card, and then copy the
directory back to the second SD card. Any RAID solution wouldn't
work for me, as I can't insert both cards at the same time. (Also,
many people only have a limited amount of USB ports, and not every
person wants to buy a hub just for this, so even with USB keys RAID
might not always be easily possible, especially if you need one
port for booting the system.)
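
Roughly, the copy-based flow described above (device names and the key id
are only examples):

mkdir -p /mnt/card /tmp/work
mount /dev/mmcblk0p1 /mnt/card                    # first SD card
mount -t tmpfs -o size=64M,mode=0700 tmpfs /tmp/work
cp -a /mnt/card/.gnupg /tmp/work/
GNUPGHOME=/tmp/work/.gnupg gpg2 --edit-key 0xDEADBEEF   # do the modifications
rsync -a --delete /tmp/work/.gnupg/ /mnt/card/.gnupg/   # copy back
umount /mnt/card
# then swap cards, mount the second one and repeat the rsync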

By the way, in addition to the passphrase for the GPG key, my SD
cards are also encrypted via LUKS (with a different passphrase) to
provide an additional layer of security. (For example, if gpg had
some bug that left some information behind in .gnupg pertaining to
the key, this would still prevent people that get a hold of the
card to access those.) LUKS should probably be optional, but
something to think about.
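
For the LUKS layer, something along these lines (device name is an
example):

cryptsetup luksFormat /dev/mmcblk0p1       # set the card's passphrase
cryptsetup luksOpen /dev/mmcblk0p1 masterkey
mkfs.ext4 /dev/mapper/masterkey
# ... mount and use it, then ...
cryptsetup luksClose masterkey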

Finally, I'm not sure I'd trust btrfs enough for this - especially
in RAID mode if both devices are your only copy. I can't think of
any feature in btrfs that might be required for this use case, so
I'd rather stick with a plain ext4, at least by default. When it
comes to this I'd rather be conservative.

Regards,
Christian





Re: Extending the build profile namespace

2016-04-25 Thread Helmut Grohne
On Mon, Apr 25, 2016 at 01:44:01PM +0100, Ian Jackson wrote:
> Helmut Grohne writes ("Extending the build profile namespace"):
> >  * The nodoc profile is a bit strange. It is supposed to drop
> >documentation from packages or to drop documentation packages. The
> >former leads to packages whose content varies with profiles (which
> >generally is bad)
> 
> I think there is nothing wrong with a build profile producing the same
> packages with different content.

That can be ok, but it can also break dependency contracts. The ensuing
breakage is usually hard to debug. Now documentation is special here,
because the policy says that nothing must rely on /usr/share/doc/$pkg,
but when did the last archive rebuild validate that property?

> IMO someone (human or software) who specifies the use of a build
> profile is responsible for knowing the semantic effect of the profile.

That's what we currently do. Some profiles are recorded at
https://wiki.debian.org/BuildProfileSpec. Yet even that specification is
violated in a fair number of places already. This is an attempt at
gaining more precision.

> >  and the latter mostly drops Arch:all packages, so
> >in many cases simply doing an arch-only build achieves the same
> >effect. If the only Architecture: all package from a source package
> >is a -doc package, then generally the nodoc profile is not necessary
> >(e.g. cargo and rustc). Just populate Build-Depends-Indep properly.
> >Maybe we should revise rules for this profile?
> 
> I don't agree with this analysis at all.  One thing you might want to
> do is deliberately run builds of many packages in a situation where
> you know you don't care about generating documentation (or would like
> to save the space consumed by documentation).
> 
> For this to work, there has to be a way to specify such a build
> without knowing the details of the package.

The question here is where you put that effort. Either you "pollute"
many packages with nodoc options/profiles or you put that logic into the
builder. From my pov (as the one building many packages), putting a rule
"if all arch:all packages are in section doc, skip indep" is much easier
than patching tens or even hundreds of packages.
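
Such a builder-side rule could be as crude as this (a sketch using
grep-dctrl from dctrl-tools; it ignores corner cases like the Section
field being inherited from the source stanza):

grep-dctrl -F Architecture all -s Package,Section debian/control
# if every package listed is in section "doc", dpkg-buildpackage -B is enough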

> ISTM that the nodoc build option should be phased out in favour of a
> build profile.  For compatibility, the specification for the nodoc
> profile should say that nodoc builds should always be done with the
> nodoc build option too.  dpkg-buildpackage could handle this.

I certainly wouldn't object to such a transition. Still I think that we
need to improve the tooling before starting it. Adding support for a
nodoc profile currently involves a fair number of steps touching both
d/control and d/rules and potentially switching to dh-exec for using
profiles in d/*.install. Instead, generic helpers (e.g. dh_installdocs)
should support that use case.

For these reasons, I believe that the maintenance cost of nodoc
currently outweighs the benefits for many packages. Of course, I welcome
efforts to lower that cost. From a bootstrap perspective, this has
rather low priority though, so I won't be doing that.

> The stage* system avoids doing a lot of work to determine and describe
> precisely the situation in an abstract way; rather, it is an explicit
> and manually maintained bootstrap sequence (with a cross-archive
> representation).

Can you point me to that manually maintained bootstrap sequence? Who is
maintaining it? Where are those stages described?

Instead, I see that bootstrapping requires intimate knowledge of which
existing stages are useful and which ones are best ignored (either
because they are unneeded or broken). It also requires knowledge of
which necessary stages only exist as patches in the BTS.

>The benefit of the current stage profiles is
> that it is easy for a cross bootstrap orchestration tool to know what
> to do.

My experience with maintaining a cross bootstrap orchestration tool is
exactly the opposite and the reason for writing the mail you replied to.

> Your suggestion, if implemented, would:
> 
>  1. Complicate the metadata for what is a pretty minority feature.
>This imposes more work on package maintainers in general and more
>work on those trying to get the boostrap to work.

No, it's about naming the profile. You can call it "stage1" or
"nopython". It's just a name. The only thing that really changes is that
instead of being a vague "stage1", it now comes with meaning and follows
a specification.

>  2. Require the implementation of a new bootstrap planner which would
>be able to mine the profile-specific build-dependency information
>to construct a bootstrap plan.

There is another reason to require a bootstrap planner: preserving sanity.
The days of manually ordering architecture bootstrap have long passed.
Everyone uses at least partially automatic ordering in one way or
another.

For instance, rebootstrap orders build

Re: Extending the build profile namespace

2016-04-25 Thread Helmut Grohne
On Mon, Apr 25, 2016 at 01:28:00PM +0100, Manuel A. Fernandez Montecelo wrote:
> Perhaps these OPTIONS and PROFILES should be merged, in a way that if
> one is enabled, the other also is.  (Is this the plan already?)

They serve different needs. Quite a few options do not make sense as
profiles. What do you do with parallel=n? None of nocheck, noopt or
nostrip should change any package relations. noddebs changes which
packages are built, but those are not part of d/control anyway. Then
there is the non-standard but frequently used debug option.

> (Same applies for "nodoc*").

As Ian pointed out, it may still make sense to transition some options
to profiles.

> Even if for many packages it wouldn't be necessary to drop some/most
> functionality, the target for bootstrap builds should be IMO to only
> enable the minimal necessary deps for the packages to work (to save time
> and space when [re]bootstrapping, if nothing else).

I actually think that it should be possible to use the result of a
bootstrap as is. You see, that's exactly what Yocto, Buildroot, PTXdist,
and others do. It's just that Debian has much better security support,
long term support, package quality and quantity than any of the
contenders.

> I'd consider also a "noaudit", it's a lot of rdeps and the use of this
> framework is not needed at the time of bootstrapping.

Yes. Obvious omission. It is not clear though whether bootstrapping with
or without audit is easier.

> However, in general as a part-time-porter, I don't think that
> distinguishing between safe and unsafe is important for the general case
> of bootstrapping and architecture (it's more important for the case of
> checking whether everything is bootstrappable).

It's crucial for an algorithm to be able to choose profiles. I want to
remove the need to have to think about which stages are necessary.

> The failure (or unsafeness) of packages to behave normally doesn't
> matter, as long as it eventually allows to reach a stage where your
> whole set can be used to rebuild and improve itself to a full-fledge
> Debian.

It matters as soon as your reverse dependencies fail to build. Spoiler:
They do occasionally.

Helmut



Re: Extending the build profile namespace

2016-04-25 Thread Holger Levsen
On Mon, Apr 25, 2016 at 08:06:07PM +0200, Helmut Grohne wrote:
> because the policy says that nothing must rely on /usr/share/doc/$pkg,
> but when did the last archive rebuild validate that property?

https://piuparts.debian.org/sid-nodoc/ at least tests that installation,
upgrade and removals work correctly.


-- 
cheers,
Holger




Re: dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock


On 25/04/16 19:03, Christian Seiler wrote:
> On 04/25/2016 06:38 PM, Daniel Pocock wrote:
>> On 25/04/16 17:34, Christian Seiler wrote:
>>> Am 2016-04-25 17:24, schrieb Daniel Pocock:
 On 25/04/16 16:23, Holger Levsen wrote:
> On Mon, Apr 25, 2016 at 04:03:26PM +0200, Daniel Pocock
> wrote:
>> I had already made up some live CDs for ready-to-run VoIP
>> and remote hands purposes, so I can probably do some of
>> what is required, but it seems like a good idea to avoid
>> duplicating any other efforts in this area too.
> 
> shouldn't most of the functionality of this go into (a)
> dedicated package(s) which then can be used by several, eg
> by tails and grml and debian live-cds?
> 
 Some parts of such a project could probably be packaged
 
 One of the ideas I had is that it should have a kernel
 compiled without any networking support, then it may not make
 sense to mix bits of the solution with other live CDs
>>> 
>>> Well, as Debian kernels are modularized, why not simply create
>>> a package that blacklists all network drivers? Then you don't
>>> have to compile an own kernel, but just make sure that the list
>>> of networking-related kernel modules is up to date, which seems
>>> to me to be a lot less work (especially since you can
>>> potentially automate that by looking for stuff in
>>> drivers/net).
>>> 
>>> Plus a tool that looks at the list of loaded modules and
>>> checks that there isn't any network driver loaded.
>>> 
>> 
>> I agree that is probably easier for development, although from a 
>> security point of view the strategy would be to avoid having any 
>> networking code in the environment at all
> 
> Well, your live CD creator script could also drop the networking 
> modules from the image, then they aren't available on the live 
> CD/USB key at all. Unless there is some driver compiled into the 
> kernel (I haven't checked), this should also be sufficient.
> 
>> I've progressed the whole concept from vapourware to wikiware
>> now:
>> 
>> https://wiki.debian.org/OpenPGP/CleanRoomLiveEnvironment
>> 
>> Does the workflow make sense?
> 
> In principle yes, however it doesn't quite fit with my the
> workflow I'd like to use something like that for: my master key is
> on a two separate SD cards, and I only have one SD card reader. So
> what I do when I need to change something on the key is to insert
> one SD card, copy the .gnupg directory to a tmpfs, do the
> modifications, copy the directory back to the first SD card, and
> then copy the directory back to the second SD card. Any RAID
> solution wouldn't work for me, as I can't insert both cards at the
> same time. (Also, many people only have a limited amount of USB
> ports, and not every person wants to buy a hub just for this, so
> even with USB keys RAID might not always be easily possible,
> especially if you need one port for booting the system.)
> 

We could make it work that way

One of the things that appeals to me about BtrFS is that if one flash
drive returns bad data, BtrFS will know which one has the good data
and the application won't have to compensate for that.

The solution you describe is feasible but would possibly require more
manual effort if one of the flash drives fails or if they become out of
sync by way of human error.

The Live DVD will have a terminal, like the installer, so anybody who
doesn't want to use the workflow can drop into the shell and do
whatever they want using the utilities that are present in the filesystem.

> By the way, in addition to the passphrase for the GPG key, my SD 
> cards are also encrypted via LUKS (with a different passphrase) to 
> provide an additional layer of security. (For example, if gpg had 
> some bug that left some information behind in .gnupg pertaining to 
> the key, this would still prevent people that get a hold of the 
> card to access those.) LUKS should probably be optional, but 
> something to think about.
> 

That is a valid consideration but maybe it should be a wishlist item.

> Finally, I'm not sure I'd trust btrfs enough for this - especially 
> in RAID mode if both devices are your only copy. I can't think of 
> any feature in btrfs that might be required for this use case, so 
> I'd rather stick with a plain ext4, at least by default. When it 
> comes to this I'd rather be conservative.
> 

The BtrFS features I'm after are the RAID1 support and checksumming.

MD and ext4 can do RAID1 but without checksumming.
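
As a sketch, the checksum/self-heal side boils down to something like
this (assuming the mirrored pair is mounted at ~/.gnupg):

btrfs scrub start -B ~/.gnupg     # read everything, repair from the good copy
btrfs device stats ~/.gnupg       # per-device error counters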

Regards,

Daniel



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Adam Borowski
On Mon, Apr 25, 2016 at 10:15:02AM +0200, Daniel Pocock wrote:
> There are various blogs guiding people to use a Debian Live CD for
> managing PGP master keys
> 
> Has anybody thought of making a dedicated live CD image for this
> purpose, with some kind of PGP quick setup wizard and attempting to
> enforce a sane and secure workflow?
>[...]
> Some specific things that the live image could do:
> - verifying there is no network connection, no DHCP daemon,
> automatically shutting down if a network connection becomes active

You can't verify that in software, at the very least not on Intel CPUs with
an Intel network chipset.  The AMT has its separate CPU, whole network
stack, a separate MAC address and complete access to the network card /
memory / main CPU.  Thus there's no way to be secure other than telling the
user to physically yank the network cable.

The AMD equivalent has AFAIK no such tight coupling with network cards but
it can probably still be nasty enough.  Fortunately pretty recent AMD CPUs
(Bulldozer/Piledriver?) are not yet backdoored, but as time passes,
they'll become less and less recent.

-- 
A tit a day keeps the vet away.



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Daniel Pocock


On 25/04/16 21:51, Adam Borowski wrote:
> On Mon, Apr 25, 2016 at 10:15:02AM +0200, Daniel Pocock wrote:
>> There are various blogs guiding people to use a Debian Live CD for
>> managing PGP master keys
>>
>> Has anybody thought of making a dedicated live CD image for this
>> purpose, with some kind of PGP quick setup wizard and attempting to
>> enforce a sane and secure workflow?
>> [...]
>> Some specific things that the live image could do:
>> - verifying there is no network connection, no DHCP daemon,
>> automatically shutting down if a network connection becomes active
> 
> You can't verify that in software, at the very least not on Intel CPUs with
> an Intel network chipset.  The AMT has its separate CPU, whole network
> stack, a separate MAC address and complete access to the network card /
> memory / main CPU.  Thus there's no way to be secure other than telling the
> user to physically yank the network cable.
> 
> The AMD equivalent has AFAIK no such tight coupling with network cards but
> it can probably still be nasty enough.  Fortunately pretty recent AMD CPUs
> (Bulldozer/Piledriver?) are not yet backdoored, but as the time passes,
> they'll become less and less recent.
> 

One of those ARM-based Chromebooks could be a useful solution to that.

I've added a section on known risks now:

https://wiki.debian.org/OpenPGP/CleanRoomLiveEnvironment#Known_risks



Re: dedicated live CD for PGP master key management

2016-04-25 Thread Christian Seiler
On 04/25/2016 08:54 PM, Daniel Pocock wrote:
> On 25/04/16 19:03, Christian Seiler wrote:
>>> Does the workflow make sense?
>>
>> In principle yes, however it doesn't quite fit with my the
>> workflow I'd like to use something like that for: my master key is
>> on a two separate SD cards, and I only have one SD card reader. So
>> what I do when I need to change something on the key is to insert
>> one SD card, copy the .gnupg directory to a tmpfs, do the
>> modifications, copy the directory back to the first SD card, and
>> then copy the directory back to the second SD card. Any RAID
>> solution wouldn't work for me, as I can't insert both cards at the
>> same time. (Also, many people only have a limited amount of USB
>> ports, and not every person wants to buy a hub just for this, so
>> even with USB keys RAID might not always be easily possible,
>> especially if you need one port for booting the system.)
>>
> 
> We could make it work that way
> 
> One of the things that appeals to me about BtrFS is that if one flash
> drive returns bad data, BtrFS will know which one has the good data
> and the application won't have to compensate for that.

Yes, but you'd still have to somehow communicate to the user that
one of the drives was bad and that it has to be replaced, so the
application will need to provide some kind of interface here.

Also, in this case, if the flash drive really does corrupt essential
data, this will immediately show up - as the public and private keys
won't match in that case.

> The solution you describe is feasible but would possibly require more
> manual effort if one of the flash drives fail

I don't believe so. btrfs replacement only works well if you have
both the failed and the new drive plugged in (and at least the
headers are accessible and good), otherwise you have to mount the
device in degraded mode, manually add the new drive and then
remove the missing one. Doable, but in my view this appears to be
more complicated than manually copying a directory around.
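
For reference, that degraded path looks roughly like this (device names
are examples):

mount -o degraded /dev/sdb1 ~/.gnupg
btrfs device add /dev/sdc1 ~/.gnupg        # the replacement drive
btrfs device delete missing ~/.gnupg       # drop the failed member, re-mirror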

> or if they become out of sync by way of human error.

Yes, this would be the only advantage.

On the other hand, at least with GnuPG 2.1+ (if I understand it
correctly) the private keys are now just the mathematical
parameters required for usage. This in turn means that only the
public keyring carries all the information (uids, expiry, etc.).
And public keys can be merged without a problem.

So for nearly all operations you could conceivably do (renewal,
revocation, adding new subkeys [*], signing other people's keys)
you don't actually need to keep them in sync explicitly, because
you always take your public key with you to your normal system,
so every time you boot the master key mgmt live CD, you have the
master key drives as input for the secret key, but the public
key can also come from the USB stick you use to interface the
live CD with the rest of the world.
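
In other words, something like this is enough to carry the public half
around (the key id is an example):

gpg2 --export --armor 0xDEADBEEF > pubkey.asc    # on the clean-room system
gpg2 --import pubkey.asc                         # merging is always additive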

I therefore don't see any real issue with synchronizing them.
Additionally, I think there is a perfectly valid workflow where
you don't modify all of the copies of the master key disk. For
example, let's say I want to store one of the copies with family
or in a safety deposit box, just as an additional backup against
the house burning down or so. Then I generate the GPG key, make
3 copies, keep 1 of those copies for myself, store one in a
safety deposit box at a local bank and give the other to e.g. my
parents for safekeeping. For daily usage I just use my own copy
of the master key, but if that fails (due to e.g. the flash
drive giving up), I can use one of the other backups.

> The Live DVD will have a terminal, like the installer, so anybody who
> doesn't want to use the workflow can drop into the shell and do
> whatever they want using the utilities that are present in the filesystem.

Sure. I still think semi-automatic copies are easier to handle
from an application standpoint than filesystem or block device
RAID.

Think of it this way: you need to put in 3 flash drives (be it
USB sticks or SD cards) for normal operation if you only have a
single copy: one for booting the live key management (unless you
still have a CD drive, which my laptop e.g. doesn't), one that
contains the master key, and one for exchanging data with the
outside world.

But if you just use copies, this is far easier: at the beginning
you ask the user to plug in the master key flash drive. The
.gnupg directory is then copied to a tmpfs ramdisk. The user is
then told that they can remove the master key flash drive again
and may now insert the flash drive used for data exchange with
the outside world. The user then performs the operations they
want to do (key renewal, signing of other users' keys, etc.). At
the end they are asked to insert the master key flash drive again,
causing the program to write any changes from the ram disk back.
Then they are asked if they have additional copies of the master
key flash drive and want to repeat the save operation. And if the
user de

Re: Extending the build profile namespace

2016-04-25 Thread Manuel A. Fernandez Montecelo

Hi,

2016-04-25 19:06 Helmut Grohne:

On Mon, Apr 25, 2016 at 01:28:00PM +0100, Manuel A. Fernandez Montecelo wrote:

Perhaps these OPTIONS and PROFILES should be merged, in a way that if
one is enabled, the other also is.  (Is this the plan already?)


They serve different needs. Quite a few options do not make sense as
profiles. What do you do with parallel=n? None of nocheck, noopt or
nostrip should change any package relations. noddebs changes which
packages are built, but those are not part of d/control anyway. Then
there is the non-standard but frequently used debug option.


(Same applies for "nodoc*").


As Ian pointed out, it may still make sense to transition some options
to profiles.


Reply to the above combined: I didn't mean to remove /all/ OPTIONS and
convert them to PROFILES (1st paragraph), but to merge them somehow when
they overlap or are the same (i.e., transition to profiles as in 2nd
paragraph), such as with "nocheck" and "nodoc*".



Even if for many packages it wouldn't be necessary to drop some/most
functionality, the target for bootstrap builds should be IMO to only
enable the minimal necessary deps for the packages to work (to save time
and space when [re]bootstrapping, if nothing else).


I actually think that it should be possible to use the result of a
bootstrap as is. You see, that's exactly what Yocto, Buildroot, PTXdist,
and others do. Just Debian has much better security support, long term
support, package quality and quantity than any of the contenders.


I think that people building tailor-made systems will always have (or
want) to do manual work to... well, create a tailor made system.  If you
want to make their life easier, that's fine.


From my perspective as a porter (which is what I wanted to contribute),
for the purpose of bootstrapping new Debian architectures, the finer
detail is probably not necessary -- even if it could be nice to have.
Just being able to build everything with as few interdependencies as
possible would be very nice, and then walk up from there enabling
features as necessary towards a "full" Debian system.



I'd consider also a "noaudit", it's a lot of rdeps and the use of this
framework is not needed at the time of bootstrapping.


Yes. Obvious omission. It is not clear though whether bootstrapping with
or without audit is easier.


IIRC it has bindings for other languages and depends on swig, go-related
packages... so it's one less thing to worry about.



The failure (or unsafeness) of packages to behave normally doesn't
matter, as long as it eventually allows to reach a stage where your
whole set can be used to rebuild and improve itself to a full-fledge
Debian.


It matters as soon as your reverse dependencies fail to build. Spoiler:
They do occasionally.


That doesn't fulfill the precondition of my statement above.  If, as part
of an automatic process, the intermediate dep is rebuilt with the extra
feature (because e.g. by the 2nd/3rd/etc. time that it's rebuilt it can
be built with the extra features needed for further phases), and that
then lets the rdeps build fine and progress continue, all is fine.

In any case, having to rebuild a few packages by hand with extra
features is the least of one's worries when bootstrapping a new
architecture, compared to all the other requirements / efforts.


Cheers.
--
Manuel A. Fernandez Montecelo