On Sun, Oct 1, 2017 at 03:33 Félix Cloutier <[email protected]> wrote:
> On Sep 30, 2017, at 08:22, Xiaodi Wu <[email protected]> wrote:
>
>> On Thu, Sep 28, 2017 at 12:16 PM, Félix Cloutier <[email protected]> wrote:
>>
>>> On Sep 27, 2017, at 17:29, Xiaodi Wu <[email protected]> wrote:
>>>
>>>> What I was trying to respond to, by contrast, is the design of a
>>>> hierarchy of protocols CSPRNG : PRNG (or, in Alejandro's proposal,
>>>> UnsafeRandomSource : RandomSource) and the appropriate APIs to expose
>>>> on each. This is entirely inapplicable to your examples. It stands to
>>>> reason that a non-instantiable source of random numbers does not
>>>> require a protocol of its own (a hypothetical RNG : CSPRNG), since
>>>> there is no reason to implement (if done correctly) more than a
>>>> single publicly non-instantiable singleton type that could conform to
>>>> it. For that matter, the concrete type itself probably doesn't need
>>>> *any* public API at all. Instead, extensions to standard library
>>>> types such as Int that implement conformance to the protocol that
>>>> Alejandro names "Randomizable" could call internal APIs to provide
>>>> all the necessary functionality, and third-party types that need to
>>>> conform to "Randomizable" could then in turn use `Int.random()` or
>>>> `Double.random()` to implement their own conformance. In fact, the
>>>> concrete random number generator type doesn't need to be public at
>>>> all. All public interaction could be through APIs such as
>>>> `Int.random()`.
>>>
>>> If there is a globally-available CSPRNG that people are encouraged to
>>> use, what is the benefit of a CSPRNG : PRNG hierarchy?
>>
>> There are plenty of use cases that do not require cryptographically
>> secure pseudorandom sequences but can benefit from the speed of an
>> algorithm like xoroshiro128+. For instance, suppose I want to simulate
>> Brownian motion in an animation; I do not care about the cryptographic
>> properties of my random number source.
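A rough sketch of the design described above, to make it concrete. All names here (RandomSource, SystemRandomSource, Randomizable) are hypothetical, following the naming discussed in the thread rather than any actual standard-library API, and the generator body is a placeholder constant:

```swift
/// Any pseudorandom number generator.
protocol RandomSource {
    mutating func next() -> UInt64
}

/// A cryptographically secure source refines the plain one.
protocol SecureRandomSource: RandomSource {}

/// The single, publicly non-instantiable generator. Its concrete type
/// need not expose any public API; it is reached only through
/// extensions such as `Int.random()` below.
struct SystemRandomSource: SecureRandomSource {
    static var shared = SystemRandomSource()
    private init() {}
    mutating func next() -> UInt64 {
        // Placeholder: a real implementation would call into the
        // platform CSPRNG (getrandom, getentropy, SecRandomCopyBytes).
        return 0x9E3779B97F4A7C15
    }
}

protocol Randomizable {
    static func random() -> Self
}

extension Int: Randomizable {
    static func random() -> Int {
        return Int(truncatingIfNeeded: SystemRandomSource.shared.next())
    }
}

// Third-party types conform by building on Int.random() instead of
// touching the generator type:
struct Point: Randomizable {
    var x, y: Int
    static func random() -> Point {
        return Point(x: Int.random(), y: Int.random())
    }
}
```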
>> However, the underlying function to generate a normally distributed
>> random number can rely on either a source of cryptographically secure
>> or cryptographically insecure uniformly distributed random numbers.
>> Put another way, a protocol hierarchy is justified because useful
>> functions that produce random values in various desired distributions
>> can be written that work with any PRNG, while there are (obviously)
>> uses that are suitable for CSPRNGs only.
>
> It was never in question that a CSPRNG can act as a PRNG, as far as
> generating numbers goes. That doesn't explain the usefulness of the
> CSPRNG interface if we're going to prefer *the* CSPRNG.
>
> - For cryptographic applications, the expectation is that you'll use
>   the global CSPRNG.
> - If you have a cryptographic application that requires you to
>   initialize a CSPRNG with some seed, then that algorithm has been
>   specified, and it doesn't matter if the generator implements a CSPRNG
>   protocol (on top of a PRNG protocol) because you're working with
>   concrete types.
> - If you have a non-crypto application that needs repeatable sequences
>   of elements, then any seedable PRNG implementation is sufficient and
>   there is no benefit to calling it crypto-secure (even if it is).
>
> It seems to me that a CSPRNG interface is useful if you need
> crypto-secure random numbers, but you can trust input from sources that
> don't come with the guarantees of the global CSPRNG, and from
> algorithms that you can't exhaustively list in advance. Does that sound
> right? Do you have a use case in mind, or an example of an application
> that has similar requirements?

TOTP, used for two-factor authentication, takes a shared secret as seed
and generates a deterministic sequence of cryptographically secure
one-time passwords.

>>> What is the benefit of clearly identifying an algorithm as
>>> crypto-secure if the recommendation is that you use *the*
>>> crypto-secure object/functions?
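To illustrate the "write distributions once, run on any source" argument: here is a sketch of a normal-distribution function that is generic over a generator protocol. The protocol is declared inline so the example is self-contained; the names are hypothetical, and SplitMix64 stands in for xoroshiro128+ purely for brevity:

```swift
import Foundation

// A minimal generator protocol, declared here so the example compiles
// on its own; the name is hypothetical.
protocol RandomSource {
    mutating func next() -> UInt64
}

// A fast, non-cryptographic generator (SplitMix64 step, used here in
// place of xoroshiro128+ to keep the sketch short).
struct FastPRNG: RandomSource {
    var state: UInt64
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

/// Box–Muller transform: a normally distributed Double from any source
/// of uniformly distributed bits. Written once; works equally with a
/// fast insecure PRNG or a CSPRNG conforming to the same protocol.
func normalRandom<R: RandomSource>(using source: inout R) -> Double {
    let u1 = Double(source.next() >> 11) / 9007199254740992.0  // 2^53
    let u2 = Double(source.next() >> 11) / 9007199254740992.0
    return (-2 * log(1 - u1)).squareRoot() * cos(2 * .pi * u2)
}

var rng = FastPRNG(state: 42)
let step = normalRandom(using: &rng)  // e.g. one Brownian-motion step
```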
>> The default source of random numbers is unseedable, but the user may
>> instead require repeatable generation of a sequence of random numbers.
>> The user may have a use case that requires the use of a particular
>> specified CSPRNG.
>
>>>> Again, I'm not only talking about urandom. As far as I'm aware,
>>>> every API to retrieve cryptographically secure sequences of random
>>>> bits on every platform for which Swift is distributed can
>>>> potentially return an error instead of random bits. The question is,
>>>> what design for our API is the most sensible way to deal with this
>>>> contingency? On rethinking, I do believe that consistently returning
>>>> an Optional is the best way to go about it, allowing the user to
>>>> either (a) supply a deterministic fallback; (b) raise an error of
>>>> their own choosing; or (c) trap--all with a minimum of fuss. This
>>>> seems very Swifty to me.
>>>
>>> With Linux's getrandom, if you read from urandom (the default) and
>>> ask for as much or less than 256 bytes, the only possible error is
>>> that urandom hasn't been seeded
>>> <http://man7.org/linux/man-pages/man2/getrandom.2.html>. (With more
>>> than 256 bytes, it also becomes possible for the system call to be
>>> interrupted by a signal.) OpenBSD's getentropy is literally just
>>> arc4random running in the kernel
>>> <https://github.com/openbsd/src/blob/master/sys/dev/rnd.c#L889>, and
>>> will only fail if you ask for more than 256 bytes because it is a
>>> hard-coded limit.
>>
>> Yes, but again, what do you think of the possible Swift API design
>> choices to accommodate these errors? We have to pick one, but none of
>> them are very appealing.
>
> Is that still the right thing to do then? How do these not very
> appealing alternatives compare against instance methods on Random
> objects, like random.any(1...6)?

I think I'm not stating my question clearly. Regardless of how you spell
it, what do you think is the optimal behavior when requesting a random
number fails for lack of entropy?
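To spell out the Optional-returning design quoted above: a single failable entry point, with the three recovery strategies (a), (b), (c) layered on by the caller. `Int.random()` returning an Optional is the hypothetical API under discussion, not actual standard-library API, and the body is a placeholder:

```swift
// Hypothetical failable API; the caller chooses the failure policy.

struct EntropyUnavailable: Error {}

extension Int {
    /// Returns nil when the system cannot supply random bits.
    static func random() -> Int? {
        // Placeholder standing in for a real getrandom()/getentropy()
        // call; here we pretend entropy was available.
        return 17
    }
}

// (a) supply a deterministic fallback
let fallback = Int.random() ?? 0

// (b) raise an error of the caller's own choosing
func randomOrThrow() throws -> Int {
    guard let value = Int.random() else { throw EntropyUnavailable() }
    return value
}

// (c) trap, with a minimum of fuss
let trapping = Int.random()!
```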
Should we return nil, throw, trap, or block? One of these has to happen
either when you try to initialize a generator or when you try to first
use the global/thread-local generator.

Meanwhile, getentropy() has the problem that, if there is insufficient
randomness to initialize the entropy pool, it will block; on systems
where there is never going to be sufficient randomness (i.e. a VM), it
will block forever. By contrast, getrandom() permits a flag that will
make this scenario non-blocking.

> Ironically, I don't know how you get entropy on Darwin. If it's more
> failable than this, I'd argue that it's time to improve a bit...
>
> Félix
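One way the nil / throw / trap / block choice could surface at generator-initialization time is a failable initializer, mirroring the non-blocking behavior getrandom() offers via its GRND_NONBLOCK flag. This is purely a hypothetical sketch; the type, its initializer, and the placeholder entropy check are all invented for illustration:

```swift
// Hypothetical: acquiring the secure generator is itself failable,
// so an entropy-starved system surfaces as nil rather than a hang.

struct SecureRandom {
    /// Fails (returns nil) if the entropy pool is not yet initialized,
    /// instead of blocking forever on, say, a fresh VM.
    init?() {
        // Placeholder: a real implementation would probe the platform
        // CSPRNG non-blockingly; here we pretend the pool is ready.
    }
    func nextByte() -> UInt8 { return 42 }  // placeholder bits
}

// The caller picks the policy at the point of first use:
if let rng = SecureRandom() {
    _ = rng.nextByte()   // proceed normally
} else {
    // fall back, throw, trap, or retry later (effectively blocking)
    fatalError("entropy pool not initialized")
}
```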
_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
