Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Cory Benfield

> On 12 Jun 2016, at 07:11, Theodore Ts'o  wrote:
> 
> On Sat, Jun 11, 2016 at 05:46:29PM -0400, Donald Stufft wrote:
>> 
>> It was a RaspberryPI that ran a shell script on boot that called
>> ssh-keygen.  That shell script could have just as easily been a
>> Python script that called os.urandom via
>> https://github.com/sybrenstuvel/python-rsa instead of a shell script
>> that called ssh-keygen.
> 
> So I'm going to argue that the primary bug was in how the systemd
> init scripts were configured.  In general, creating keypairs at boot
> time is just a bad idea.  They should be created lazily, in a
> just-in-time paradigm.

Agreed. I hope that if there is only one thing every participant has learned 
from this (extremely painful for all concerned) discussion, it’s that doing 
anything that requires really good random numbers should be delayed as long as 
possible on all systems, and should absolutely not be done during the boot 
process on Linux. Don’t generate key pairs, don’t make TLS connections, just 
don’t perform any action that requires really good randomness at all.

> So some people will freak out when the keygen systemd unit hangs,
> blocking the boot --- and other people will freak out if the systemd
> unit doesn't hang, and you get predictable SSH keys --- and some wiser
> folks will be asking the question, why the *heck* is it not
> openssh/systemd's fault for trying to generate keys this early,
> instead of after the first time sshd needs host ssh keys?  If you wait
> until the first time the host ssh keys are needed, then the system is
> fully booted, so it's likely that the entropy will be collected -- and
> even if it isn't, networking will already be brought up, and the
> system will be in multi-user mode, so entropy will be collected very
> quickly.

As far as I know we still only have three programs that were encountering this 
problem: Debian’s autopkgtest (which patched with PYTHONHASHSEED=0), 
systemd-cron [0] (which is moving from Python to Rust anyway), and cloud-init (not 
formally reported, but mentioned to me by a third party). It remains unclear to 
me why the systemd-cron service files can’t simply request to be delayed until 
the kernel CSPRNG is seeded: I guess systemd doesn’t have any way to express 
that constraint? Perhaps it should.

Of this set, only cloud-init worries me, and it worries me for the *opposite* 
reason that Guido and Larry are worried. Guido and Larry are worried that 
programs like cloud-init will be delayed by two minutes while they wait for 
entropy: that’s an understandable concern. I’m much more worried that programs 
like cloud-init may attempt to establish TLS connections or create keys during 
this two minute window, leaving them staring down the possibility of performing 
“secure” actions with insecure keys.

This is why I advocate, like Donald does, for having *some* tool in Python that 
allows Python programs to crash if they attempt to generate cryptographically 
secure random bytes on a system that is incapable of providing them (which, in 
practice, can only happen on Linux systems). I don’t care how it’s spelled, I 
just care that programs that want to use a properly-seeded CSPRNG can error out 
effectively when one is not available. That allows us to ensure that Python 
programs that want to do TLS or build key pairs correctly refuse to do so when 
used in this state, *and* that they provide a clearly debuggable reason for why 
they refused. That allows the savvy application developers that Ted talked 
about to make their own decisions about whether their rapid startup is 
sufficiently important to take the risk.
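
For concreteness, here is a rough sketch of the sort of tool I mean, assuming
Linux on x86-64. None of these names are a real stdlib API, and the syscall
number is architecture-specific:

    import ctypes, errno, os

    SYS_getrandom = 318       # x86-64 Linux only; assumption for illustration
    GRND_NONBLOCK = 0x0001    # fail instead of blocking if the pool is unseeded
    _libc = ctypes.CDLL(None, use_errno=True)

    def strong_random(n):
        # Ask the kernel CSPRNG for n bytes without blocking; refuse loudly
        # (rather than silently degrading) if it is not yet seeded.
        buf = ctypes.create_string_buffer(n)
        got = _libc.syscall(ctypes.c_long(SYS_getrandom), buf,
                            ctypes.c_size_t(n), GRND_NONBLOCK)
        if got < 0:
            err = ctypes.get_errno()
            if err == errno.EAGAIN:
                raise RuntimeError("kernel CSPRNG not seeded; refusing to "
                                   "generate key material")
            raise OSError(err, os.strerror(err))
        return buf.raw[:got]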

Cory


[0]: 
https://github.com/systemd-cron/systemd-cron/issues/43#issuecomment-160343989





Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Paul Moore
On 11 June 2016 at 22:46, Donald Stufft  wrote:
> I guess one question would be, what does the secrets module do if it’s on a
> Linux that is too old to have getrandom(0), off the top of my head I can
> think of:
>
> * Silently fall back to reading os.urandom and hope that it’s been seeded.
> * Fall back to os.urandom and hope that it’s been seeded and add a
> SecurityWarning or something like it to mention that it’s falling back to
> os.urandom and it may be getting predictable random from /dev/urandom.
> * Hard fail because it can’t guarantee secure cryptographic random.
>
> Of the three, I would probably suggest the second one, it doesn’t let the
> problem happen silently, but it still “works” (where it’s basically just
> hoping it’s being called late enough that /dev/urandom has been seeded), and
> people can convert it to the third case using the warnings module to turn
> the warning into an exception.
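
For concreteness, Donald's second option might look something like this rough
sketch (SecurityWarning is hypothetical, and os.getrandom is assumed to exist
and block until the kernel CSPRNG is seeded -- neither is a real secrets API):

    import os, warnings

    class SecurityWarning(RuntimeWarning):
        """Hypothetical category for the fallback warning."""

    def secret_bytes(n=32):
        try:
            # Assumed: an os.getrandom() that blocks until the kernel
            # CSPRNG is seeded (not available on older kernels/Pythons).
            return os.getrandom(n)
        except (AttributeError, OSError):
            warnings.warn("falling back to os.urandom(); the kernel CSPRNG "
                          "may not be seeded yet", SecurityWarning)
            return os.urandom(n)

    # The third option (hard fail) is then just:
    #     warnings.simplefilter("error", SecurityWarning)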

I have kept out of this discussion as I don't know enough about
security to comment, but in this instance I think the answer is clear
- there is no requirement for Python to protect the user against
security bugs in the underlying OS (sure, it's nice if it can, but
it's not necessary), so falling back to os.urandom (with no warning) is
fine. A warning, or even worse a hard fail, that 99.99% of the time
should be ignored (because you're *not* writing a boot script) seems
like a very bad idea.

By all means document "if your OS provides no means of getting
guaranteed secure random numbers (e.g., older versions of Linux very
early in the boot sequence) then the secrets module cannot give you
results that are any better than the OS provides". It seems
self-evident to me that this would be the case, but I see no reason to
object if the experts feel it's worth adding.

Paul


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Theodore Ts'o
On Sun, Jun 12, 2016 at 01:49:34AM -0400, Random832 wrote:
> > The intention behind getrandom() is that it is intended *only* for
> > cryptographic purposes. 
> 
> I'm somewhat confused now because if that's the case it seems to
> accomplish multiple unrelated things. Why was this implemented as a
> system call rather than a device (or an ioctl on the existing ones)? If
> there's a benefit in not going through the non-atomic (and possibly
> resource limited) procedure of acquiring a file descriptor, reading from
> it, and closing it, why is that benefit not also extended to
> non-cryptographic users of urandom via allowing the system call to be
> used in that way?

This design was taken from OpenBSD, and the goal with getentropy(2)
(which is also designed only for cryptographic use cases) was to ensure
that a denial of service attack (fd exhaustion) could not force an
application to fall back to a weaker -- in some cases, very weak or
non-existent -- source of randomness.

Non-cryptographic users don't need to use this interface at all.  They
can just use srandom(3)/random(3) and be happy.
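
In Python terms, the same split looks roughly like this sketch:

    import random

    # Non-cryptographic use: the default Mersenne Twister, the moral
    # equivalent of srandom(3)/random(3) -- fast, seedable, never blocks.
    rng = random.Random(42)
    print(rng.randrange(10))

    # Cryptographic use: random.SystemRandom reads the OS CSPRNG via
    # os.urandom, with all the early-boot caveats discussed in this thread.
    print(random.SystemRandom().randrange(10))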

> > Anyway, if you don't need cryptographic guarantees, you don't need
> > getrandom(2) or getentropy(2); something like this will do just fine:
> 
> Then what's /dev/urandom *for*, anyway?

/dev/urandom is a legacy interface.  It was intended originally for
cryptographic use cases, but it was intended for the days when very
few programs needed a secure cryptographic random generator, and it
was assumed that application programmers would be very careful in
checking error codes, etc.

It also dates back to a time when the NSA was still pushing very hard
for cryptographic export controls (hence the use of SHA-1 versus an
encryption algorithm) and when many people questioned whether or not
the SHA-1 algorithm, as designed by the NSA, had a backdoor in it.
(As it turns out, the NSA put a back door into DUAL-EC, so in retrospect
this concern really wasn't that unreasonable.)  Because of those
concerns, the assumption was that the few applications which really
wanted to get security right (e.g., PGP, which still uses /dev/random
for long-term key generation) would use /dev/random and deal with
entropy accounting, and with asking the user to type randomness on the
keyboard and move their mouse around while generating a random key.

But times change, and these days people are much more likely to
believe that SHA-1 is in fact cryptographically secure, and future
crypto hash algorithms are designed by teams from all over the world
and NIST/NSA merely review the submissions (along with everyone else).
So for example, SHA-3 was *not* designed by the NSA, and it was
evaluated using a much more open process than SHA-1.

Also, we have a much larger set of people writing code which is
sensitive to cryptographic issues (back when I wrote /dev/random, I
probably had met, or at least electronically corresponded with, a large
number of the folks who were working on network security protocols, at
least in the non-classified world), and these days, there is much less
trust that people writing code to use /dev/[u]random are in fact
careful and competent security engineers.  Whether or not this is a
fair concern, it is true that there has been a change in API design
ethos away from the "Unix let's make things as general as possible, in
case someone clever comes up with a use case we didn't think of"
school, to "idiots are ingenious, so they will come up with ways to
misuse an idiot-proof interface, so we need to lock it down as much as
possible."  OpenBSD's getentropy(2) interface is a strong example of
this new attitude towards API design, and getrandom(2) is not quite so
doctrinaire (I added a flags field when getentropy(2) didn't even give
those options to programmers), but it is following in the same tradition.

Cheers,

- Ted


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Theodore Ts'o
On Sun, Jun 12, 2016 at 11:40:58AM +0100, Cory Benfield wrote:
> 
> Of this set, only cloud-init worries me, and it worries me for the
> *opposite* reason that Guido and Larry are worried. Guido and Larry
> are worried that programs like cloud-init will be delayed by two
> minutes while they wait for entropy: that’s an understandable
> concern. I’m much more worried that programs like cloud-init may
> attempt to establish TLS connections or create keys during this two
> minute window, leaving them staring down the possibility of
> performing “secure” actions with insecure keys.

There are patches in the dev branch of:

 https://git.kernel.org/cgit/linux/kernel/git/tytso/random.git/

which will automatically use virtio-rng (if it is provided by the
cloud provider) to initialize /dev/urandom.  It also uses a much more
aggressive mechanism to initialize the /dev/urandom pool, so that
getrandom(2) will block for a much shorter period of time immediately
after boot on real hardware.  I'm confident it's secure for x86
platforms.  I'm still thinking about whether I should fall back to
something more conservative for crappy embedded processors that don't
have a cycle counter or a CPU-provided RDRAND-like instruction.
Related to this is whether I should finally make the change so that
/dev/urandom will block until it is initialized.  (This would make
Linux work like FreeBSD, which *will* also block if its entropy pool
is not initialized.)

> This is why I advocate, like Donald does, for having *some* tool in
> Python that allows Python programs to crash if they attempt to
> generate cryptographically secure random bytes on a system that is
> incapable of providing them (which, in practice, can only happen on
> Linux systems).

Well, it can only happen on Linux because you insist on falling back
to /dev/urandom --- and because other OS's have the good taste not to
use systemd and/or Python very early in the boot process.  If someone
tried to run a python script in early FreeBSD init scripts, it would
block just as you were seeing on Linux --- you just haven't seen that
yet, because arguably the FreeBSD developers have better taste in
their choice of init scripts than Red Hat and Debian.  :-)

So the question is whether I should do what FreeBSD did, which will
satisfy those people who are freaking out and whinging about how
Linux could allow stupidly written or deployed Python scripts to get
cryptographically insecure bytes, by removing that option from Python
developers.  Or should I remove that one line from changes in the
random.git patch series, and allow /dev/urandom to be used even when
it might be insecure, so as to satisfy all of the people who are
freaking out and whinging about the fact that a stupidly written
and/or deployed Python script might block during early boot and hang a
system?

Note that I've tried to do what I can to make the time that
/dev/urandom might block as small as possible, but at the end of the
day, there is still the question of whether I should remove the choice
re: blocking from userspace, ala FreeBSD, or not.  And either way,
some number of people will be whinging and freaking out.  Which is why
I'm completely sympathetic to how Guido might be getting a little
exasperated over this whole thread.  :-)

- Ted


Re: [Python-Dev] New hash algorithms: SHA3, SHAKE, BLAKE2, truncated SHA512

2016-06-12 Thread Christian Heimes
On 2016-05-25 12:29, Christian Heimes wrote:
> Hi everybody,
> 
> I have three hashing-related patches for Python 3.6 that are waiting for
> review. Altogether the three patches add ten new hash algorithms to the
> hashlib module: SHA3 (224, 256, 384, 512), SHAKE (SHA3 XOF 128, 256),
> BLAKE2 (blake2b, blake2s) and truncated SHA512 (224, 256).
> 
> 
> SHA-3 / SHAKE: https://bugs.python.org/issue16113
> BLAKE2: https://bugs.python.org/issue26798
> SHA512/224 / SHA512/256: https://bugs.python.org/issue26834
> 
> 
> I'd like to push the patches during the sprints at PyCon. Please assist
> with reviews.

Hi,

I have unassigned myself from the tickets and will no longer pursue the
addition of new crypto hash algorithms. I might try again when blake2
and sha3 are more widely adopted and the opposition from other core
contributors has diminished. Acceptance is simply not high enough to be
worth the trouble.

Kind regards,
Christian



Re: [Python-Dev] C99

2016-06-12 Thread Michael Felt

I am using IBM xlc aka vac - version 11.

AFAIK it will deal with C99 features.  (By default I set it to behave that
way, because a common 'issue' is C++-style comments appearing where they
should not be; FYI, I have not seen that in Python.)


IMHO, GCC is not just a compiler - it brings with it a whole set of
infrastructure requirements (aka a run-time environment, RTE). That is
certainly not an issue for GNU environments, but non-GNU (e.g., POSIX)
systems may have continual side effects from "competing" RTEs. At least
that was my experience when I was using gcc rather than xlc.



On 6/4/2016 9:53 AM, Martin Panter wrote:

Sounds good for features that are well-supported by compilers that
people use. (Are there other compilers used than just GCC and MSVC?)




Re: [Python-Dev] C99

2016-06-12 Thread Stefan Krah
Michael Felt writes: 
> I am using IBM xlc aka vac - version 11.
> 
> AFAIK it will deal with C99 features.  (By default I set it to behave that
> way, because a common 'issue' is C++-style comments appearing where they
> should not be; FYI, I have not seen that in Python.)

We had a couple of exotic build machines a while ago: xlc, the
HPUX compiler and a couple of others all support the subset of C99
we are aiming for.  In fact the support of the commercial Unix
compilers for C99 is quite good -- the common error messages
suggest that several of them use the same front end (Comeau?).


Stefan Krah



Re: [Python-Dev] writing to /dev/*random [was: BDFL ruling request: should we block ...]

2016-06-12 Thread Donald Stufft

> On Jun 11, 2016, at 8:16 PM, Stephen J. Turnbull  wrote:
> 
> This fails for unprivileged users on Mac.  I'm not sure what happens
> on Linux; it appears to succeed, but the result wasn't what I
> expected.


I think that on Linux it will mix whatever you write into the entropy 
pool, but it won’t increase the entropy counter for it.
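
A rough illustration of that distinction, assuming Linux (plain writes mix
but never credit; crediting needs the privileged RNDADDENTROPY ioctl):

    # Any user may write to /dev/urandom on Linux: the bytes are mixed
    # into the input pool, but the entropy estimate is NOT credited.
    with open("/dev/urandom", "wb") as f:
        f.write(b"some locally gathered seed material")

    # Crediting entropy requires root and the RNDADDENTROPY ioctl on
    # /dev/random (e.g. via fcntl.ioctl); plain writes never do it.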

—
Donald Stufft





Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Nathaniel Smith
On Jun 11, 2016 11:13 PM, "Theodore Ts'o"  wrote:
>
> On Sat, Jun 11, 2016 at 05:46:29PM -0400, Donald Stufft wrote:
> >
> > It was a RaspberryPI that ran a shell script on boot that called
> > ssh-keygen.  That shell script could have just as easily been a
> > Python script that called os.urandom via
> > https://github.com/sybrenstuvel/python-rsa instead of a shell script
> > that called ssh-keygen.
>
> So I'm going to argue that the primary bug was in how the systemd
> init scripts were configured.  In general, creating keypairs at boot
> time is just a bad idea.  They should be created lazily, in a
> just-in-time paradigm.
>
> Consider that if you assume that os.urandom can block, this isn't
> necessarily going to do the right thing either --- if you use
> getrandom and it blocks, and it's part of a systemd unit which is
> blocking further boot progress, then the system will hang for 90
> seconds, and while it's hanging, there won't be any interrupts, so the
> system will be dead in the water, just like the original bug report
> complaining that Python was hanging when it was using getrandom() to
> initialize its SipHash.

Hi Ted,

From another perspective, I guess one could also argue that the best place
to fix this is in the kernel: if a process is blocked waiting for entropy,
then the kernel probably shouldn't take that as its cue to turn off all the
entropy generation mechanisms, just like how if a process is blocked
waiting for disk I/O then we probably shouldn't power down the disk
controller. Obviously this is a weird case because the kernel is
architected in a way that makes the dependency between the disk controller
and the I/O request obvious, while the dependency between the random pool
and... well... everything else, more or less, is much more subtle and goes
outside the usual channels, and we wouldn't want to rearchitect everything
just for this. But for example, if a process is actively blocked waiting
for the initial entropy, one could spawn a kernel thread that keeps the
system from quiescing by attempting to scrounge up entropy as fast as
possible, via whatever mechanisms are locally appropriate (e.g. doing a
busy-loop racing two clocks against each other, or just scheduling lots of
interrupts -- which I guess is the same thing, more or less). And the
thread would go away again as soon as userspace wasn't blocked on entropy.
That way this deadlock wouldn't be possible.

I guess someone *might* complain about the idea of the entropy pool
actually spending resources instead of being quietly parasitic, because
this is the kernel and someone will always complain about everything :-).
But complaining about this makes about as much sense as complaining about the
idea of spending resources trying to service I/O when a process is blocked
on that ("maybe if we wait long enough then some other part of the system
will just kind of accidentally page in the data we need as a side effect of
whatever it's doing, and then this thread will be able to proceed").

Is this an approach that you've considered?

> At which point there will be another bug complaining about how python
> was causing systemd to hang for 90 seconds, and there will be demand
> to make os.urandom no longer block.  (Since by definition, systemd can
> do no wrong; it's always other programs that have to change to
> accommodate systemd.  :-)

FWIW, the systemd thing is a red herring -- this was Debian's configuration
of a particular daemon that is not maintained by the systemd project, and
the exact same thing would have happened with sysvinit if Debian had tried
using Python 3.5 early in their rcS.

-n


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Cory Benfield

> On 12 Jun 2016, at 14:43, Theodore Ts'o  wrote:
> 
> Well, it can only happen on Linux because you insist on falling back
> to /dev/urandom --- and because other OS's have the good taste not to
> use systemd and/or Python very early in the boot process.  If someone
> tried to run a python script in early FreeBSD init scripts, it would
> block just as you were seeing on Linux --- you just haven't seen that
> yet, because arguably the FreeBSD developers have better taste in
> their choice of init scripts than Red Hat and Debian.  :-)

Heh, yes, so to be clear, I said “this can only happen on Linux” because I’m 
talking about the world that we live in: the one where I lost this debate. =D

Certainly right now the codebase as it stands could encounter the same problems 
on FreeBSD. That’s a problem for Python to deal with.

> So the question is whether I should do what FreeBSD did, which will
> satisfy those people who are freaking out and whinging about how
> Linux could allow stupidly written or deployed Python scripts to get
> cryptographically insecure bytes, by removing that option from Python
> developers.  Or should I remove that one line from changes in the
> random.git patch series, and allow /dev/urandom to be used even when
> it might be insecure, so as to satisfy all of the people who are
> freaking out and whinging about the fact that a stupidly written
> and/or deployed Python script might block during early boot and hang a
> system?
> 
> Note that I've tried to do what I can to make the time that
> /dev/urandom might block as small as possible, but at the end of the
> day, there is still the question of whether I should remove the choice
> re: blocking from userspace, ala FreeBSD, or not.  And either way,
> some number of people will be whinging and freaking out.  Which is why
> I'm completely sympathetic to how Guido might be getting a little
> exasperated over this whole thread.  :-)

I don’t know that we need to talk about removing the choice. I understand the 
desire to commit to backwards compatibility, of course I do. My problem with 
/dev/urandom is not that it *exists*, per se: all kinds of stupid stuff exists 
for the sake of backward compatibility.

My problem with /dev/urandom is that it’s a trap, lying in wait for someone who 
doesn’t know enough about the problem they’re solving to step into it. And it’s 
the worst kind of trap: it’s one you don’t know you’ve stepped in. Nothing 
about the failure mode of /dev/urandom is obvious. Worse, well-written apps 
that try their best to do the right thing can still step into that failure mode 
if they’re run in a situation that they weren’t expecting (e.g. on an embedded 
device without hardware RNG or early in the boot process).

So my real problem with /dev/urandom is that the man page doesn’t say, in 
gigantic letters, “this device has a really nasty failure mode that you cannot 
possibly detect by just running the code in the dangerous mode”. It’s 
understandable to have insecure weak stuff available to users: Python has loads 
of it. But where possible, the documentation marks it as such. It’d be good to 
have /dev/urandom’s man page say “hey, by the way, you almost certainly don’t 
want this: try using getrandom() instead”.

Anyway, regarding changing the behaviour of /dev/urandom: as you’ve correctly 
highlighted, at this point you’re damned if you do and damned if you don’t. If 
you don’t change, you’ll forever have people like me saying that /dev/urandom 
is dangerous, and that its behaviour in the unseeded/poorly-seeded state is a 
misfeature. I trust you’ll understand when I tell you that that opinion has 
nothing to do with *you* or the Linux kernel maintainership. This is all about 
the way software security evolves: things that used to be ok start to become 
not ok over time. We learn, we improve.

Of course, if you do change the behaviour, you’ll rightly have programmers 
stumble onto this exact problem. They’ll be unhappy too. And the worst part of 
all of this is that neither side of that debate is *wrong*: they just 
prioritise different things. Guido, Larry, and friends aren’t wrong, any more 
than I am: we just rate the different concerns differently. That’s fine: after 
all, it’s probably why Guido invented and maintains an extremely popular 
programming language and I haven’t and never will! I have absolutely no problem 
with breaking “working” code if I believe that that code is exposing users to 
risks they aren’t aware of (you can check my OSS record to prove it, and I’m 
happy to provide references).

The best advice I can give anyone in this debate, on either side, is to make 
decisions that you can live with. Consider the consequences, consider the 
promises you’ve made to users, and then do what you think is right. Guido and 
Larry have decided to go with backward-compatibility: fine. They’re 
responsible, the buck stops with them, they know that. The same is true for 
you, Ted, with the /dev/urandom device.

If 

Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Theodore Ts'o
On Sun, Jun 12, 2016 at 11:07:22AM -0700, Nathaniel Smith wrote:
> But for example, if a process is actively blocked waiting
> for the initial entropy, one could spawn a kernel thread that keeps the
> system from quiescing by attempting to scrounge up entropy as fast as
> possible, via whatever mechanisms are locally appropriate (e.g. doing a
> busy-loop racing two clocks against each other, or just scheduling lots of
> interrupts -- which I guess is the same thing, more or less).

There's a lot of snake oil, or at least, hand waving, that goes on
with respect to what will actually work to gather randomness.  One of
the worst possible choices is a standard, kernel-defined workload that
tries to just busy loop two clocks against each other.  For one thing,
on many embedded systems, all of your clocks are generated off of a
single master oscillator anyway.  And in early boot, it's not
realistic for the kernel to measure network interrupt timings and
radio strength indicators from the WiFi -- inputs which are ultimately
far more likely to be unpredictable to an outside attacker sitting in
Fort Meade than anything you get by pretending that you can just
"schedule lots of interrupts".

Again, part of the problem here is that if you really want to be
secure, it needs to be a full stack perspective, where the hardware
designers, the OS developers, and the application level developers are
all working together.  If one side tries to exert a strong "somebody
else's problem field", it's very likely the end solution isn't going
to be secure.  Because in many cases this is simply not practical, we
all have to make assumptions at the OS and CPython interpreter level,
and hope that the assumptions we make are conservative
enough.

> Is this an approach that you've considered?

Ultimately, the arguments made by approaches such as Jitterbug are, to
put it succinctly and perhaps a little unfairly, "gee whillikers, the
Intel L1/L2 cache hierarchy is really complicated and it's a closed
hardware implementation so no one can understand it, and besides, the
statistical analysis of the output looks good".

To which I would say, "the first argument is an argument of security
through ignorance", and "AES(NSA_KEY, COUNTER++)" also has really
great statistical results, and if you don't know the NSA_KEY, it will
look very strong, and as far as we know we wouldn't be able to
distinguish it from a truly secure random number generator --- but it
really isn't secure.

So yeah, I don't buy it.  In order for it to be secure, we need to be
grabbing measurements which can't be replicated or determined by a
remote attacker.  So having the kernel kick off a kernel thread is not
going to be useful unless we can mix in entropy from the user, or the
workload, or the local configuration, or from the local environment.
(Using RSSI is helpful because the remote attacker might not know
whether your mobile handset is in the knapsack under the table, or on
the desk, and that will change the RSSI numbers.)  Remember, the whole
*point* of modern CPU designs is that the huge amounts of engineering
effort is put into making the CPU be predictable, and so spawning a
kernel thread in isolation isn't going to perform magic in terms of
getting guaranteed unpredictability.

> FWIW, the systemd thing is a red herring -- this was debian's configuration
> of a particular daemon that is not maintained by the systemd project, and
> the exact same thing would have happened with sysvinit if debian had tried
> using python 3.5 early in their rcS.

It's not a daemon.  It's the script in
/lib/systemd/system-generators/systemd-crontab-generator, and it's
needed because systemd subsumed the cron daemon, and developers who
wanted to not break users' existing crontab files turned to it.  I
suppose you are technically correct that it is not maintained by
systemd, but the need for it was generated out of systemd's lack of
concern for backwards compatibility.

Because FreeBSD and Mac OS are not using systemd, they are not likely
to run into this problem.  I will grant that if they decided to try to
run a python script out of their /etc/rc script, they would run into
the same problem.

- Ted


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Theodore Ts'o
On Sun, Jun 12, 2016 at 09:01:09PM +0100, Cory Benfield wrote:
> My problem with /dev/urandom is that it’s a trap, lying in wait for
> someone who doesn’t know enough about the problem they’re solving to
> step into it.

And my answer to that question is: absent backwards compatibility
concerns, use getrandom(2) on Linux, or getentropy(2) on *BSD, and be
happy.  Don't use /dev/urandom; use getrandom(2) instead.  That way
you also solve a number of other problems, such as the file descriptor
DoS attack issue, etc.

The problem with Python is that you *do* have backwards compatibility
concerns.  At which point you are faced with the same issues that we
are in the kernel; except I gather that the commitment to
backwards compatibility isn't quite as absolute (although it is
strong).  Which is why I've been trying very hard not to tell
python-dev what to do, but rather to give you folks the best
information I can, and then encouraging you to do whatever seems most
"Pythony" --- which might or might not be the same as the decisions
we've made in the kernel.

Cheers,

- Ted

P.S.  BTW, I probably won't change the behaviour of /dev/urandom to
make it be blocking.  Before I found out about Python Bug #26839, I
actually had patches that did make /dev/urandom blocking, and they
were planned for the next kernel merge window.  But ultimately, the
reason why I won't is because there is a set of real users (Debian
Stretch users on Amazon AWS and Google GCE) for which if I changed how
/dev/urandom worked, then I would be screwing them over, even if
Python 3.5.2 falls back to /dev/urandom.  It's not a problem for bare
metal hardware and cloud systems with virtio-rng; I have patches that
will take care of those scenarios.

Unfortunately, both AWS and GCE don't support virtio-rng currently,
and as much as some people are worried about the hypothetical problems
of stupidly written/deployed Python scripts that try to generate
long-term secrets during early boot, weighed against the very real
prospect of user lossage on two of the most popular Cloud environments
out there --- it's simply no contest.


Re: [Python-Dev] Reminder: 3.6.0a2 snapshot 2016-06-13 12:00 UTC

2016-06-12 Thread Larry Hastings


On 06/10/2016 03:23 PM, Ned Deily wrote:

Also note that Larry has announced plans to do a 3.5.2 release candidate 
sometime this weekend and Benjamin plans to do a 2.7.12 release candidate.  So 
get important maintenance release fixes in ASAP.


To clarify: /both/ 3.5.2rc1 /and/ 3.4.5rc1 were tagged yesterday and 
will ship later today.



/arry


Re: [Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

2016-06-12 Thread Nathaniel Smith
On Sun, Jun 12, 2016 at 4:28 PM, Theodore Ts'o  wrote:
> P.S.  BTW, I probably won't change the behaviour of /dev/urandom to
> make it be blocking.  Before I found out about Python Bug #26839, I
> actually had patches that did make /dev/urandom blocking, and they
> were planned for the next kernel merge window.  But ultimately, the
> reason why I won't is because there is a set of real users (Debian
> Stretch users on Amazon AWS and Google GCE) for which if I changed how
> /dev/urandom worked, then I would be screwing them over, even if
> Python 3.5.2 falls back to /dev/urandom.  It's not a problem for bare
> metal hardware and cloud systems with virtio-rng; I have patches that
> will take care of those scenarios.
>
> Unfortunately, both AWS and GCE don't support virtio-rng currently,
> and as much as some people are worried about the hypothetical problems
> of stupidly written/deployed Python scripts that try to generate
> long-term secrets during early boot, weighed against the very real
> prospect of user lossage on two of the most popular Cloud environments
> out there --- it's simply no contest.

Speaking of full-stack perspectives, would it affect your decision if
Debian Stretch were made robust against blocking /dev/urandom on
AWS/GCE? Because I think we could find lots of people who would be
overjoyed to fix Stretch before the next merge window even opens
(AFAICT the quick fix is literally a 1 line patch), if that allowed
the blocking /dev/urandom patches to go in upstream...

(It looks like Jessie isn't affected, because while Jessie does
provide a systemd-cron package for those who decide to install it,
Jessie's systemd-cron is still using python2, python2 doesn't have
hash randomization so it doesn't touch /dev/urandom at startup, and
systemd-cron doesn't have any code that would trigger access to
/dev/urandom otherwise. It looks like Xenial *is* affected, because
they ship systemd-cron with python3, but their python3 is still
unconditionally using getrandom() in blocking mode, so they need to
patch that regardless, and could just as easily make it robust against
blocking /dev/urandom at the same time. I don't understand the RPM
world as well, but I can't find any evidence that Fedora or SuSE ship
systemd-cron at all.)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Python-Dev] [RELEASED] Python 3.4.5rc1 and Python 3.5.2rc1 are now available

2016-06-12 Thread Larry Hastings


On behalf of the Python development community and the Python 3.4 and 
Python 3.5 release teams, I'm pleased to announce the availability of 
Python 3.4.5rc1 and Python 3.5.2rc1.


Python 3.4 is now in "security fixes only" mode.  This is the final 
stage of support for Python 3.4.  All changes made to Python 3.4 since 
Python 3.4.4 should be security fixes only; conventional bug fixes are 
not accepted.  Also, Python 3.4.5rc1 and all future releases of Python 
3.4 will only be released as source code--no official binary installers 
will be produced.


Python 3.5 is still in active "bug fix" mode.  Python 3.5.2rc1 contains 
many incremental improvements over Python 3.5.1.


Both these releases are "release candidates".  They should not be 
considered the final releases, although the final releases should 
contain only minor differences.  Python users are encouraged to test 
with these releases and report any problems they encounter.



You can find Python 3.4.5rc1 here:

   https://www.python.org/downloads/release/python-345rc1/

And you can find Python 3.5.2rc1 here:

   https://www.python.org/downloads/release/python-352rc1/ 




Python 3.4.5 final and Python 3.5.2 final are both scheduled for release 
on June 26th, 2016.


Happy Pythoneering,


/arry


[Python-Dev] [RELEASE] Python 2.7.12 release candidate 1

2016-06-12 Thread Benjamin Peterson
Python 2.7.12 release candidate 1 is now available for download. This is
a preview release of the next bugfix release in the Python 2.7.x series.
Assuming no horrible regressions are located, a final release will
follow in two weeks.

Downloads for 2.7.12rc1 can be found on python.org:
https://www.python.org/downloads/release/python-2712rc1/

The complete changelog may be viewed at
https://hg.python.org/cpython/raw-file/v2.7.12rc1/Misc/NEWS

Please test the pre-release and report any bugs to
   https://bugs.python.org

Servus,
Benjamin


Re: [Python-Dev] Stop using timeit, use perf.timeit!

2016-06-12 Thread Steven D'Aprano
On Sat, Jun 11, 2016 at 07:43:18PM -0400, Random832 wrote:
> On Fri, Jun 10, 2016, at 21:45, Steven D'Aprano wrote:
> > If you express your performances as speeds (as "calculations per 
> > second") then the harmonic mean is the right way to average them.
> 
> That's true in so far as you get the same result as if you were to take
> the arithmetic mean of the times and then converted from that to
> calculations per second. Is there any other particular basis for
> considering it "right"?

I think this is getting off-topic, so extended discussion should 
probably go off-list. But the brief answer is that it gives a physically 
meaningful result if you replace each of the data points with the mean. 
Which specific mean you use depends on how you are using the data 
points.

http://mathforum.org/library/drmath/view/69480.html


Consider the question:

Dave can paint a room in 5 hours, and Sue can paint the same room in 3 
hours. How long will it take them, working together, to paint the room?

The right answer can be found the long way:

Dave paints 1/5 of a room per hour, and Sue paints 1/3 of a room per 
hour, so together they paint (1/5+1/3) = 8/15 of a room per hour. So to 
paint one full room, it takes 15/8 = 1.875 hours.

(Sanity check: after 1.875 hours, Sue has painted 1.875/3 of the room, 
or 62.5%. In that same time, Dave has painted 1.875/5 of the room, or 
37.5%. Add the percentages together, and you have 100% of the room.)

Using the harmonic mean, the problem is simple:

data = 5, 3  # time taken per person
mean = 3.75  # time taken per person on average

Since they are painting the room in parallel, each person need only 
paint half the room on average, giving total time of:

3.75/2 = 1.875 hours

If we were to use the arithmetic mean (5+3)/2 = 4 hours, we'd get the 
wrong answer.
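
The same computation using the statistics module (harmonic_mean is new
in Python 3.6):

    from statistics import harmonic_mean  # new in Python 3.6

    times = [5, 3]                  # hours per room, per painter
    avg = harmonic_mean(times)      # 3.75 hours per room on average
    print(avg / 2)                  # 1.875 hours painting in parallel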



-- 
Steve