Hi Adam,
(long time no see!)
On Sun, Mar 07, 2010 at 04:13:56PM +, Adam Conrad wrote:
> On Mon, Mar 01, 2010 at 02:00:31AM +0100, Wouter Verhelst wrote:
> > On Sun, Feb 28, 2010 at 10:26:48AM -0800, Steve Langasek wrote:
> > >
> > > You could obviously just fall back to using the full .so in the case of
> > > initramfs generation.

On Mon, Mar 01, 2010 at 02:00:31AM +0100, Wouter Verhelst wrote:
> On Sun, Feb 28, 2010 at 10:26:48AM -0800, Steve Langasek wrote:
> >
> > You could obviously just fall back to using the full .so in the case of
> > initramfs generation.
If we can detect that the libc generated is unsuitable, then
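A rough sketch of what such a detect-and-fall-back step could look like during initramfs generation (illustrative only; the helper below, its invocation and its symbol handling are assumptions, not how mklibs actually does it): compare the dynamic symbols the packed binaries need against what the reduced library exports, and ship the full .so whenever anything is missing.

#!/usr/bin/env python3
# Hypothetical sketch: keep the reduced libc only if it satisfies every
# libc symbol the initramfs binaries actually import; otherwise fall back
# to copying the full library. Uses GNU nm via subprocess.
import shutil
import subprocess
import sys

def dynamic_symbols(path, which):
    # `which` is "--undefined-only" or "--defined-only".
    out = subprocess.run(["nm", "-D", which, path],
                         capture_output=True, text=True, check=True).stdout
    syms = set()
    for line in out.splitlines():
        fields = line.split()
        if fields:
            # Drop any "@GLIBC_x.y" suffix: this check deliberately ignores
            # symbol versioning, which is one of the details that makes
            # library reduction fragile in the first place.
            syms.add(fields[-1].split("@")[0])
    return syms

def pick_libc(binaries, full_lib, reduced_lib, dest):
    needed = set()
    for b in binaries:
        needed |= dynamic_symbols(b, "--undefined-only")
    # Only worry about symbols the full libc would provide; anything else
    # comes from some other library and is out of scope here.
    needed &= dynamic_symbols(full_lib, "--defined-only")
    missing = needed - dynamic_symbols(reduced_lib, "--defined-only")
    if missing:
        print("reduced libc lacks %d needed symbols (e.g. %s); using full .so"
              % (len(missing), sorted(missing)[0]), file=sys.stderr)
        shutil.copy2(full_lib, dest)
    else:
        shutil.copy2(reduced_lib, dest)

if __name__ == "__main__":
    # usage: pick_libc.py <full-libc> <reduced-libc> <dest> <binary>...
    pick_libc(sys.argv[4:], sys.argv[1], sys.argv[2], sys.argv[3])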

On Sun, Feb 28, 2010 at 10:26:48AM -0800, Steve Langasek wrote:
> On Sun, Feb 28, 2010 at 12:05:03PM +0100, Wouter Verhelst wrote:
> > This is not really a big deal in the case of d-i, since first, when
> > things fail, they fail for everyone who uses the same image, and second,
> > if the installe

On Sun, Feb 28, 2010 at 12:05:03PM +0100, Wouter Verhelst wrote:
> Don't know, but I'm not sure that's a very good idea. The library
> reduction as implemented by mklibs is a bit of a kludge IMO, which seems
> to fail every so often for one of our architectures, because that
> architecture's ABI us

On Sat, Feb 20, 2010 at 01:21:15PM -0800, Steve Langasek wrote:
> On Sat, Feb 20, 2010 at 12:02:24PM +0100, Goswin von Brederlow wrote:
> > The reason would be size. I don't see anything else there.
>
> > For network based boots, specifically high performance cluster, the size
> > can make a real difference. When you turn the cluster on it is not just
> > one system downloading an extra meg but 100+ nodes.

Goswin von Brederlow wrote:
> When 100 nodes all want to talk to the one boot server, that one poor
> port will be overwhelmed. With switches you won't have collisions like in
> the old days, when they would compound exponentially, but you still get
> slowdowns.
Add more switches. Add more network c
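To put rough numbers on the size argument (a back-of-the-envelope sketch with hypothetical figures: one shared uplink at the boot server, protocol overhead and retransmissions ignored, so real boots will only be worse):

#!/usr/bin/env python3
# Extra time the boot server's uplink spends serving an additional megabyte
# per node when the whole cluster powers on at once. Purely illustrative.
def extra_serve_time(nodes, extra_mib, uplink_bits_per_s):
    total_bits = nodes * extra_mib * 1024 * 1024 * 8
    return total_bits / uplink_bits_per_s  # seconds

for label, uplink in (("1 Gbit/s", 1e9), ("100 Mbit/s", 1e8)):
    for nodes in (1, 100, 500):
        t = extra_serve_time(nodes, 1.0, uplink)
        print("%-10s uplink, %3d nodes, +1 MiB each: %6.2f s" % (label, nodes, t))

On paper a gigabit uplink absorbs an extra megabyte per node in about a second even for 100 nodes; the pain shows up with slower or shared links, retransmissions, and protocols like TFTP that don't keep the pipe full.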

Philipp Kern writes:
> On 2010-02-20, Goswin von Brederlow wrote:
>> For network based boots, specifically high performance cluster, the size
>> can make a real difference. When you turn the cluster on it is not just
>> one system downloading an extra meg but 100+ nodes. That largely
>> increase

m...@linux.it (Marco d'Itri) writes:
> On Feb 20, Goswin von Brederlow wrote:
>
>> For network based boots, specifically high performance cluster, the size
>> can make a real difference. When you turn the cluster on it is not just
>> one system downloading an extra meg but 100+ nodes. That largel

On Sat, Feb 20, 2010 at 12:02:24PM +0100, Goswin von Brederlow wrote:
> The reason would be size. I don't see anything else there.
> For network based boots, specifically high performance cluster, the size
> can make a real difference. When you turn the cluster on it is not just
> one system downloading an extra meg but 100+ nodes.

On 2010-02-20, Goswin von Brederlow wrote:
> For network based boots, specifically high performance cluster, the size
> can make a real difference. When you turn the cluster on it is not just
> one system downloading an extra meg but 100+ nodes. That largely
> increases the network collisions, err

On Sat, Feb 20, 2010 at 12:02:24PM +0100, Goswin von Brederlow wrote:
> Michael Tokarev writes:
> > Goswin von Brederlow wrote:
> >> I googled a bit and found this old mail about a klibc only initramfs:
> >>
> >> http://lists.debian.org/debian-kernel/2006/07/msg00400.html
> >>
>> I would really like to do this and it has been close to 4 years since
>> that mail.

On Feb 20, Goswin von Brederlow wrote:
> For network based boots, specifically high performance cluster, the size
> can make a real difference. When you turn the cluster on it is not just
> one system downloading an extra meg but 100+ nodes. That largely
> increases the network collisions, errors

Michael Tokarev writes:
> Goswin von Brederlow wrote:
>> Hi,
>>
>> I googled a bit and found this old mail about a klibc only initramfs:
>>
>> http://lists.debian.org/debian-kernel/2006/07/msg00400.html
>>
>> I would really like to do this and it has been close to 4 years since
>> that mail. B

Goswin von Brederlow wrote:
> Hi,
>
> I googled a bit and found this old mail about a klibc only initramfs:
>
> http://lists.debian.org/debian-kernel/2006/07/msg00400.html
>
> I would really like to do this and it has been close to 4 years since
> that mail. But it doesn't look like there has be
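For anyone who wants to measure how far a generated initramfs actually is from being klibc-only, a quick scan along these lines could help (a sketch under assumptions: the soname test below, libc.so.6 for glibc versus anything else for klibc or static binaries, reflects how Debian ships them, and only direct DT_NEEDED entries are examined):

#!/usr/bin/env python3
# Walk an unpacked initramfs tree and report which ELF files still pull in
# glibc (libc.so.6) rather than klibc or nothing at all. Illustrative only.
import os
import subprocess
import sys

def needed_libs(path):
    # Return the DT_NEEDED entries of an ELF file, or None if it is not ELF.
    try:
        with open(path, "rb") as f:
            if f.read(4) != b"\x7fELF":
                return None
    except OSError:
        return None
    out = subprocess.run(["readelf", "-d", path],
                         capture_output=True, text=True).stdout
    libs = []
    for line in out.splitlines():
        if "(NEEDED)" in line and "[" in line:
            libs.append(line.split("[", 1)[1].rstrip().rstrip("]"))
    return libs

def report(tree):
    for root, _dirs, files in os.walk(tree):
        for name in files:
            path = os.path.join(root, name)
            # Skip symlinks and device nodes that an initramfs may contain.
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            libs = needed_libs(path)
            if libs is None:
                continue
            tag = "glibc" if "libc.so.6" in libs else "klibc/static"
            print("%-12s %s" % (tag, os.path.relpath(path, tree)))

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else ".")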