On Wed, Jun 13, 2007 at 11:28:52AM +0100, Luis Matos wrote:
> > > The current system is fine but:
> > > - the priority of unstable should be lower than testing or stable (as I
> > > think - though I'm not sure - happens nowadays). Experimental has a
> > > lower priority still.
> > > - There are no guarantees that testing is always working and stable.
> > > - There are no guarantees that testing is secure (please, security team,
> > > can you clarify this?)
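(For readers following along: the priority ordering Luis describes is what apt pinning expresses in /etc/apt/preferences. A minimal sketch — the numeric priorities below are illustrative choices, not a project recommendation; note that experimental is NotAutomatic and already defaults to a very low priority:)

```
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 800

Package: *
Pin: release a=unstable
Pin-Priority: 700

Package: *
Pin: release a=experimental
Pin-Priority: 500
```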
> > You won't find a contractual guarantee from Debian about either of these
> > things, for *any* of the Debian suites.

> look ... i don't want guarantees ... you know what i mean ... i want a
> place where it says "testing HAS security support; we focus on keeping it
> stable". I don't want a written contract ... i want a desktop user to be
> able to discard stable and use testing. For that, Debian needs to
> publicly advise the use of testing in these cases ... and i mean for
> real.

You are never going to get a statement from the Debian project telling
users to use one suite or another (or at least, you shouldn't); the most we
should be doing is giving users a list of pros and cons for each suite and
letting them decide which fits their needs. I'm all in favor of reducing
the number of decisions users have to make *in the software* :), but on
something as high-level as which distro/suite to use, misestimating a
user's needs is the kind of thing that will sour the user on Debian for a
very long time.

> > There is a testing security team that addresses unembargoed security
> > issues in testing. Fixes for embargoed security issues are generally not
> > prepared in advance for testing. However, more people have access to
> > work on the unembargoed security issues anyway (in the general case:
> > anyone can upload to t-p-u), so it's not definite that stable is always
> > more secure than testing.

> So, maybe, have stricter upload rules? Or, the other way around, let
> maintainers upload packages directly into testing (via t-p-u?).

More strict upload rules for what?

> > > - Testing simply moves too fast, and the automatic migration process
> > > between unstable and testing *DOES* break testing. For one example,
> > > package "foo" requires package "bar <= 0.3", but package "bar 0.4"
> > > automatically passes to testing.

> > Um, no. That does not happen automatically.
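(Aside for readers: the check being referred to here is the installability test that the migration scripts run before letting a package into testing. The following is a toy model only — not britney's actual code — and the package data and version comparison are invented for illustration:)

```python
# Toy model of the installability check that gates migration from
# unstable to testing. NOT the real britney implementation; the
# package data and naive string version comparison are illustrative.

def migration_breaks(testing_versions, depends, pkg, new_version):
    """List packages in testing whose dependencies would become
    unsatisfiable if `pkg` were updated to `new_version`."""
    proposed = dict(testing_versions)
    proposed[pkg] = new_version
    broken = []
    for name, constraints in depends.items():
        if name == pkg:
            continue  # only reverse dependencies matter for this check
        for dep, satisfied in constraints:
            if dep in proposed and not satisfied(proposed[dep]):
                broken.append(name)
                break
    return broken

# testing currently has foo 1.0 and bar 0.3; foo depends on bar <= 0.3.
testing_versions = {"foo": "1.0", "bar": "0.3"}
depends = {"foo": [("bar", lambda v: v <= "0.3")]}

# bar 0.4 is a migration candidate: the check flags foo as broken, so
# bar is held back in unstable rather than migrating automatically.
print(migration_breaks(testing_versions, depends, "bar", "0.4"))  # ['foo']
```

The point of the sketch is the last line: "bar 0.4" does not simply sail into testing past "foo"; migration is refused unless the release team explicitly overrides the check.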
> > In rare cases it happens because the release team has overridden the
> > installability check for a package, because maintainers have not
> > coordinated their transitions in unstable and as a result something
> > needs to be broken to ever get any of the packages updated -- you can't
> > get 300 maintainers to get their packages into a releasable state *and*
> > leave them alone long enough to transition to testing as a group.

> So please, don't do those "oh, let them pass" transitions ... they BREAK
> stuff ... for real.

What?

> > That's a problem of the packaging of those kernel modules, then, not a
> > problem of testing per se; even if you track stable and the problem
> > therefore only affects you once every two years, it's still a problem
> > that should be addressed -- e.g., with metapackages like
> > nvidia-kernel-2.6-686 (oh look, this one already exists).

> the kernel upgrades from 2.6.50 to 2.6.51 ... the nvidia packages don't
> build in time (they are not free, right?) ... the kernel passes to
> testing ...

That doesn't happen.

> this is a simple upgrade ... because kernel packages are always NEW, the
> kernel will pass because it has no reverse-dependency problems in
> testing.

False.

-- 
Steve Langasek                   Give me a lever long enough and a Free OS
Debian Developer                   to set it on, and I can move the world.
[EMAIL PROTECTED]                                   http://www.debian.org/