Problems with inittab

1997-05-30 Thread Goswin Brederlow

A short question in between:

Why is the Debian /etc/inittab different from the one used on Watchtower?

Mrvn


--
TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word "unsubscribe" to
[EMAIL PROTECTED] . 
Trouble?  e-mail to [EMAIL PROTECTED] .



Clean Debian installed, but a few problems left.

1997-05-30 Thread Goswin Brederlow

I got Debian installed on a clean disk now using base1.2tgz and the 
instfiles.tar, but I ran into some problems:

The /etc/inittab from instfiles.tar gave errors and failed to execute 
some programs (like dselect). Also, the inittab the script restored 
afterwards was taken from Watchtower and didn't work, because the 
format is different (why?).

After correcting this I was able to boot from the hard disk and run 
dselect manually without problems.

After hours of installing I was left with two problems, probably related 
to the inittab failures on the first boot:

/dev/mouse not found.
nslookup failed (so I couldn't configure smail, ...)

I will try to fix the nslookup problem myself, but for the mouse I could 
use a hint.

Two last little annoying bugs:
1. When supplying a base path for dselect (first question under 
   access/mounted) it complains about not being able to find 
   stable/binary-i386. Is that a bug in the source or just a broken 
   config file?

   I was able to work around the problem by answering "none" at the first 
   prompt and then supplying the correct path at the next one. Doing that 
   worked fine.

2. dselect doesn't give any message or warning when you try to install a 
   package you don't have. As a result, all packages that depend on a 
   missing package will fail, without the missing package itself being 
   reported. (Which left me confused for some time.)

-[ Hello to all fans in domestic surveillance ]
 security NORAD plutonium DES radar Semtex Clinton Uzi NSA domestic disruption 
 smuggle kibo nuclear fissionable Legion of Doom SDI cryptographic SEAL Team 6
-[ Hello to all fans in domestic surveillance ]






Re: Problems with inittab

1997-05-30 Thread Goswin Brederlow
On Fri, 30 May 1997, Mark Baker wrote:

> Date: Fri, 30 May 1997 13:58:00 +0100
> From: Mark Baker <[EMAIL PROTECTED]>
> To: Goswin Brederlow <[EMAIL PROTECTED]>,
> debian-devel@lists.debian.org
> Subject: Re: Problems with inittab
> 
> 
> In article <[EMAIL PROTECTED]>,
>   Goswin Brederlow <[EMAIL PROTECTED]> writes:
> > 
> > Short question inbetween:
> > 
> > Why is the Debian /etc/inittab different from one use on Watchtower?
> 
> Why should it be the same?
>

I don't mean 100% identical, just that the syntax should be the same, so 
an inittab that works on Watchtower should also work on Debian. 
Unfortunately they differ in syntax, so even something as basic as 
starting the getty for login fails, although the same kernel is used.
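For reference, the Debian sysvinit format uses id:runlevels:action:process entries, as described in inittab(5); a typical getty line looks like this (terminal and baud rate are just examples):

```
1:2345:respawn:/sbin/getty 38400 tty1
```

Any inittab in a different format would fail to start gettys on a system expecting this syntax, which matches the login failure described above.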

> 
> I don't know what the Watchtower one was like (I've never had the misfortune
> to run Watchtower), but there's no reason why it should be the same. It
> depends on what gettys etc you want to start (you can also get inittab to
> start your X server for you, although the debian inittab doesn't do this by
> default) and what kind of boot scripts should be run.
> 
> Look in inittab(5) for more information.
> .
> 







Re: How do we encourage bug reports?

1997-06-14 Thread Goswin Brederlow
Lars Wirzenius wrote:
> 
> Milan Zamazal:
> 
> > One problem with reporting bugs I feel personally is what to do to
> > avoid repeating reports.
> 
> I think it is not the job of the person reporting a bug to worry about
> duplicate reports. Especially not if the person is an end-user and not
> a Debian developer.
> 
> 1. User discovers a bug.
> 2. User reports a bug.
> 
> Eager people will check the bug system before they repeat a bug,
> if only because they might find a solution quicker, if the bug
> has already been reported. However, in no way should we indicate
> that bug reporters are expected to do this.
> 
> --
> Please read  before mailing me.

I think it's always a good idea to report bugs, even if you repeat
someone else's report. Most of the time you have different hardware and
a different setup, and it is much easier to track down a bug when you
know whether it is specific to one hardware configuration or fails more
generally.

May the Source be with you.
Mrvn





Bug in Boot-Disk Package?

1997-06-21 Thread Goswin Brederlow
I found a bug in the installation procedure on my Amiga, but the same
will probably happen on all systems.

When I try to partition the drive my root.bin is on during installation,
it pops up a requester asking to unmount / before starting fdisk on it.
After quitting fdisk it remounts /, but the installation routines
complain about / being read-only and nothing works anymore.

To reproduce this behavior on another system you need a spare partition
at least the size of root.bin (e.g. the swap partition). Dump root.bin
onto it and boot with it as root. Select the keyboard and then fdisk
that drive (be careful not to change anything that might erase your
data). Just quitting fdisk again should do the trick.
Before fdisk is started, the installation routine will complain about
root being mounted from that drive and unmount it. After fdisk it will
remount it, and you get the above bug.

Can somebody second this on another system or is it just my Amiga?


May the Source be with you.
Mrvn.





Re: Bug in Boot-Disk Package?

1997-06-21 Thread Goswin Brederlow
OK, it's unmounted then, but it should remount the drive if it is
untouched, or ask whether it should remount it. I'm not repartitioning
the drive, but I had to change the types of some partitions, because I
can't do that easily from AmigaOS (I don't know the hex for LNX\0). The
partition holding root is unchanged, so remounting it would be harmless.

The reason I used a partition to hold root.bin was that I tried to
install Debian with only 4 MB. With only 4 MB of RAM you don't have
enough space for the kernel and a ramdisk, so I used a spare partition
instead. It works fine, except for the reboot I had to make to get root
remounted again.


Bruce Perens wrote:
> 
> Yes, it would do that. The problem is that un-mounting / leaves it
> mounted read-only, not unmounted. You can't unmount root. If it
> remounts it at all, it doesn't do it correctly, and re-partitioning the
> disk that root is running on is problematical, to say the least.
> On the PC installation floppy root would be a RAM disk at this point,
> and this problem would never come up. I wonder if your boot parameters
> are wrong, or if it is a 68k-specific issue.
> 
>     Thanks
> 
> Bruce
> 
> From: Goswin Brederlow <[EMAIL PROTECTED]>
> > I found a bug in the installation procedure on my Amiga, but the same
> > will probably happen on all systems.
> >
> > When I try to partition the drive my root.bin is on during installation,
> > it pops up a requester asking to unmount / before starting fdisk on it.
> > After quitting fdisk it remounts /, but the installation routines
> > complain about / being read-only and nothing works anymore.
> >
> > To reproduce this behavior on another system you need a spare partition
> > at least the size of the root.bin (e.g. the swap partition). Dump the
> > root.bin onto it and boot with it as root. Select the keyboard and then
> > fdisk it (be carefull not to change anything that might erase youre
> > data). Just quiting it again should do the trick.
> > Before fdisk is started the installation routine will complain about the
> > root mounted from that drive and unmount it. After fdisk it will remount
> > it and you have the above bug.
> >
> > Can somebody second this on another system or is it just my Amiga?
> >
> >
> > May the Source be with you.
> >   Mrvn.
> >
> --
> Bruce Perens K6BP   [EMAIL PROTECTED]   510-215-3502
> Finger [EMAIL PROTECTED] for PGP public key.
> PGP fingerprint = 88 6A 15 D0 65 D4 A3 A6  1F 89 6A 76 95 24 87 B3





Unidentified subject!

1997-06-21 Thread Goswin Brederlow

I've downloaded the Debian 1.3 Source CD as an ISO image and 
concatenated all the pieces together. Mounting the file over a loopback 
device, I got an error from the filesystem: 'Invalid Audio Disk   
Length 0:00:00'.

Now I wonder which chunk has an error in it. Could somebody with the 
proper rights create checksums for the files, please, or check them 
against my checksums?

Thanks for your help.



---[ cksum * > ../source.cksum ]---

902672379 10485760 xaa
1964344229 10485760 xab
3103465723 10485760 xac 
3051413027 10485760 xad 
563480720 10485760 xae  
3676171232 10485760 xaf  
1496207183 10485760 xag
3982017721 10485760 xah
1996529395 10485760 xai
347485891 10485760 xaj
4224425993 10485760 xak
66454489 10485760 xal
694356895 10485760 xam
4230220127 10485760 xan
3211775232 10485760 xao
3288838828 10485760 xap
3541577135 10485760 xaq
281846891 10485760 xar
4206128507 10485760 xas
3941621231 10485760 xat
1702358845 10485760 xau
776497012 10485760 xav
2698546466 10485760 xaw
3091640865 10485760 xax
3661039644 10485760 xay
1887882477 10485760 xaz
1172648290 10485760 xba
1148886921 10485760 xbb
3528467631 10485760 xbc
4247028238 10485760 xbd 
3640030866 10485760 xbe 
989355418 10485760 xbf  
1177760836 10485760 xbg 
1760558446 10485760 xbh 
1705652203 10485760 xbi  
2413912409 10485760 xbj
4183587600 10485760 xbk
2077502854 10485760 xbl
1114555149 10485760 xbm
4249618532 10485760 xbn
3655281930 10485760 xbo
2189570894 10485760 xbp
2498623941 10485760 xbq
694282994 10485760 xbr
3161855180 10485760 xbs
890386295 10485760 xbt
3184349101 5382144 xbu

-[ end ]---
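For anyone who wants to reproduce the rejoin-and-verify step, here is a minimal sketch. The filenames mirror the split(1) naming used above, but the contents are tiny stand-ins, not the real 10 MB chunks:

```shell
set -e
mkdir -p /tmp/iso-demo && cd /tmp/iso-demo

# Stand-ins for the downloaded pieces xaa..xbu
printf 'first-piece'  > xaa
printf 'second-piece' > xab

# Concatenate in lexical order, exactly as split(1) named them
cat xa? > source.iso

# Record checksums to compare against a trusted list
cksum xaa xab > source.cksum
cat source.iso
```

With the real pieces, the image would then be loopback-mounted with `mount -o loop source.iso /mnt/cdrom` (which needs root and a valid ISO 9660 image).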







Re: Bug in Boot-Disk Package?

1997-06-21 Thread Goswin Brederlow
Bruce Perens wrote:
> 
> From: Goswin Brederlow <[EMAIL PROTECTED]>
> > OK, it's unmounted then, but it should remount the drive if its
> > untouched or ask if it should remount it. I'm not repartitioning
> > the drive, but I had to change the types of the partitions
> 
> I think it should not unmount the root, but it should complain
> before you partition a disk that the root is running on. I don't
> know how many people will hit this.
> 
> Thanks
> 
> Bruce
> --
> Bruce Perens K6BP   [EMAIL PROTECTED]   510-215-3502
> Finger [EMAIL PROTECTED] for PGP public key.
> PGP fingerprint = 88 6A 15 D0 65 D4 A3 A6  1F 89 6A 76 95 24 87 B3

It's the only choice if you have a low-memory system.
As Kai Henningsen pointed out:

>> The boot disks should probably force a reboot at that point.
>> The low memory boot disk probably does the same thing on the x86.

The script complains that you have filesystems mounted on that drive
when you try to partition it, which is perfectly valid.
The script should either reboot after the disk holding root is
partitioned, or try to remount root r/w. Rebooting is a bit annoying
when you only changed the type of another partition from DOS\0 to
LNX\0, whereas remounting might get stuck if the partition holding root
has changed name or place. So rebooting is probably the safest.

May the source be with you.
Mrvn





Re: Bug in Boot-Disk Package?

1997-06-21 Thread Goswin Brederlow
Kai Henningsen wrote:
> 
> [EMAIL PROTECTED] (Goswin Brederlow)  wrote on 21.06.97 in <[EMAIL 
> PROTECTED]>:
> 
> > OK, it's unmounted then, but it should remount the drive if its
> > untouched
> > or ask if it should remount it. I'm not repartitioning the drive, but I
> > had
> > to change the types of the partitions, cause I can't do it easily from
> > AmigaOS (I dunno the hex for LNX\0). The partition holding root is
> > unchanged
> > so remounting it would be harmless.
> 
> Of course, it can't know that.
> 
> In general, the kernel cannot reread the partition table when it has
> mounted something from that drive, even read-only, so the only proper
> choice after changing the partition table is to reboot.
> 
> You can re-mount the partition (mount -o remount,rw /), but the kernel
> will not know about any changes you made, which can be very dangerous.

You can't remount, because the install script goes into an endless loop
evaluating your system, failing, and popping up the colour/monochrome
menu. You can't even reboot. The only option is a keyboard reset, which
thankfully does a shutdown -r when Ctrl-Amiga-Amiga is pressed. (It just
resets on Ctrl-Alt-Del. Why?)

> 
> The boot disks should probably force a reboot at that point.

That's probably the best option.

> 
> The low memory boot disk probably does the same thing on the x86.
> 
> MfG Kai
> 

May the Source be with you.
Mrvn





Re: invalid CD

1997-06-21 Thread Goswin Brederlow
Bruce Perens wrote:
> 
> If it thinks your CD is an audio disk, it would be an error in the "xaa" file.
> The very first blocks on the CD tell what kind of CD it is.
> 
> Bruce
> --
> Bruce Perens K6BP   [EMAIL PROTECTED]   510-215-3502
> Finger [EMAIL PROTECTED] for PGP public key.
> PGP fingerprint = 88 6A 15 D0 65 D4 A3 A6  1F 89 6A 76 95 24 87 B3
> 

The filesystem always reports an audio disk when it can't understand
the ISO image. The error could be in xaa or in xbu, since both hold
vital information for the CD.

Is the cksum wrong for either of those files?

May the Source be with you.
Mrvn.





Re: FW: [NTSEC] (Fwd) DESCHALL Press Release

1997-06-23 Thread Goswin Brederlow
Jim Pick wrote:
> 
> > People did complain that we were promoting Debian to the
> > detriment of Linux.
> 
> Yes - but remember, some of the people participating in these
> contests were acting pretty infantile.  Instead of focusing on solving
> the problem, they want their team to be at the top of the
> list at all costs, including 'spamming' the servers to increase
> their odds.
> 
> The people who wrote to you complaining about the fact that there
> was a [EMAIL PROTECTED] team were just trying to get people to
> join their team - so they could get some more "nerd glory" or
> something.  I'm surprised that you've taken them so seriously,
> and that you think they even reflect the sentiments of even
> a fraction of the Linux community.
> 
> This is such a small thing -- nobody cares.  If you were to
> take a poll of Linux people about this, they'd overwhelmingly vote for
> 'go away, I don't care'.
> 
> BTW, in case you didn't notice - we do compete with the other Linux
> distributions every day -- for the honour of having our system installed
> on users computers.
> 
> But, I do agree that we shouldn't be competing against the wishes of the
> Linux community at large.
> 
> In summary:
> 
> Why the hell do we have to be so damn politically correct?
> 
> I'm mostly in this for fun.  :-)
> 
> Cheers,
> 
>  - Jim
> 

I have some computers running in that challenge and could easily
contribute their output to a Debian group, if we are going to have
one.

So will we have one, or will each of us do it on his own?

May the source be with you.
Mrvn





Bug#374029: Fixing inconsistent and unhelpful Build-Depends/Conflicts definition

2006-06-16 Thread Goswin Brederlow
Package: debian-policy
Severity: normal

Hi,

the current use and definition of Build-Depends/Conflicts[-Indep] in
Policy 7.6 don't match. Both the use and the definition also greatly
reduce the usefulness of these fields. This issue has come up again and
again over the last few years, and nothing has been done about it. I
hope this proposal provides an elegant and non-disruptive way out so we
can finally do something about it.


Currently policy reads:

| 7.6 Relationships between source and binary packages -
| Build-Depends, Build-Depends-Indep, Build-Conflicts,
| Build-Conflicts-Indep
|
| Source packages that require certain binary packages to be installed
| or absent at the time of building the package can declare
| relationships to those binary packages.
|
| This is done using the Build-Depends, Build-Depends-Indep,
| Build-Conflicts and Build-Conflicts-Indep control file fields.
|
|Build-dependencies on "build-essential" binary packages can be
|omitted. Please see Package relationships, Section 4.2 for more
|information.
|
|The dependencies and conflicts they define must be satisfied (as
|defined earlier for binary packages) in order to invoke the targets in
|debian/rules, as follows:[42]
|
|Build-Depends, Build-Conflicts
|
|The Build-Depends and Build-Conflicts fields must be satisfied
|when any of the following targets is invoked: build, clean,
|binary, binary-arch, build-arch, build-indep and binary-indep.

This comes down to: Build-Depends always have to be installed. Buildds
always, and only, install Build-Depends.

|Build-Depends-Indep, Build-Conflicts-Indep
|
|The Build-Depends-Indep and Build-Conflicts-Indep fields must be
|satisfied when any of the following targets is invoked: build,
|build-indep, binary and binary-indep.  

But buildds do call the build target (via dpkg-buildpackage) and don't
honor Build-Depends/Conflicts-Indep. And since build calls build-indep,
that means anything needed to build the architecture-independent part
has to be included in Build-Depends. This makes Build-Depends-Indep
quite useless.

[Side note: buildds/dpkg-buildpackage have no robust way of telling
whether the optional build-arch target exists, and so must call build.
This is wasteful in terms of both build dependencies and build time.]


Proposal:
---------

Two new fields are introduced:

Build-Depends-Arch, Build-Conflicts-Arch

The Build-Depends-Arch and Build-Conflicts-Arch fields must be
satisfied when any of the following targets is invoked:
build-arch, binary-arch.

The existence of either of the two makes build-arch mandatory.


The old fields change their meaning:

Build-Depends, Build-Conflicts

The Build-Depends and Build-Conflicts fields must be satisfied
when any target is invoked.

Build-Depends-Indep, Build-Conflicts-Indep

The Build-Depends-Indep and Build-Conflicts-Indep fields must be
satisfied when any of the following targets is invoked:
build-indep, binary-indep.

The existence of either of the two makes build-indep mandatory.

The use of Build-Depends/Conflicts-Arch/Indep is optional, but they
should be used in mixed architecture "all/any" packages. If any of them
is omitted, the respective Build-Depends/Conflicts field must already
be sufficient.

### End of Proposal ###
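As an illustration of the proposed fields, a mixed architecture any/all source package could then declare something like this (the package names are hypothetical):

```
Source: foo
Build-Depends: debhelper
Build-Depends-Arch: libbar-dev
Build-Depends-Indep: doxygen
```

A buildd invoking build-arch/binary-arch would then install only debhelper and libbar-dev; doxygen would only be needed for the build-indep/binary-indep targets.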



Why is this proposal better than others that have failed before?

- Non-disruptive: current buildd behaviour will continue to build all
  existing packages.

- Packages will not instantly have RC bugs.

- Simple to implement:
  + Trivial change in dpkg for the new fields.
  + dpkg-checkbuilddeps has to parse 3 fields (2 with the -B option)
    instead of 2 (1).
  + sbuild: the same change.
  + Simple change for 'apt-get build-dep'.

- Buildds/dpkg-buildpackage can use the build-arch target:
  + reduces the Build-Depends field and the time to install it.
  + build-indep is no longer called, reducing compile time and disk
    space.

- Build-Depends/Conflicts-Indep becomes useful, and build-indep becomes
  useful: large packages no longer have to build the
  architecture-independent stuff in the binary-indep target to avoid
  building it on buildds.

MfG
Goswin

-- System Information:
Debian Release: 3.1
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.16-rc4-xen
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)





Re: boot-floppies status from an insider (was Re: Deficiencies in Debian)

1999-10-01 Thread goswin . brederlow
Eric Delaunay <[EMAIL PROTECTED]> writes:

> Do we still want to support very old hardware, especially low memory system
> (eg. some old sparc, and maybe old 386, 486 as well) ?

Then you also need a 1.x kernel. 2.0 wastes memory and 2.2 doesn't boot
on low-memory machines. I think it's safe to assume 8 MB of RAM for a
Linux installation. Everybody else should take a boot disk from bo or
slink.

> In this case, we need to add a "swap on NFS" patch to the kernel.
> I found one patch for 2.0.35 (sparc) kernel.  And for 2.2, NBD could be used
> along with a patch to the networking subsystem
> (http://atrey.karlin.mff.cuni.cz/~pavel/nbd/nbd.html).
> I will try to build bootdisks for sparc based on them.

Hmm, if you can swap over NFS, you have access to another machine where
you can custom-build your kernel and copy it to the boot disk. If RAM
is a problem, you will want to compile a kernel anyway.

May the Source be with you.
Goswin



Re: Redesign of diskless NFS-root package & ITP diskless-image

1999-10-01 Thread goswin . brederlow
Brian May <[EMAIL PROTECTED]> writes:

> [1  ]
> On Wed, Sep 15, 1999 at 11:46:36AM +1000, Brian May wrote:
> > [description removed]
> 
> I have made most of the changed required for my redesign of diskless.
> Amazingly, it looks like no changed are required for dpkg. I haven't
> yet tested anything though, and implementing secure mode might be a bit
> awkward. Currently I am considering the following:
> 
> 1 installation moves /var, /dev, and /tmp into /rw/
> 2 symlinks are created: /var --> /rw/var,
> /dev --> /rw/dev, and
>   /tmp --> /rw/tmp.
> 
> On startup (non-secure mode) /var, /dev, and /tmp are mounted
> as usual, over the top of the directories under /rw/
> 
> (should I do the same thing for /etc too? I inclined not to
> - I think it should be read-only. Making /etc read-write
> might make adding extra hosts easier, as all host
> specific data can be copied and processed on startup)

Do you support multiple clients? I would have a normal /etc that links
most files to /share/etc and some to /rw/etc (the host-specific stuff
like hostname). /share would be on / and read-only, so it is available
at boot.
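The layout sketched above could look roughly like this (a hypothetical illustration in a scratch directory, not a real root filesystem):

```shell
set -e
# Shared config under share/etc, host-specific config under rw/etc,
# with etc/ itself holding only symlinks.
base=/tmp/diskless-demo
mkdir -p "$base/share/etc" "$base/rw/etc" "$base/etc"

echo '10.0.0.1 server' > "$base/share/etc/hosts"   # identical across the pool
echo 'client1'         > "$base/rw/etc/hostname"   # differs per host

ln -sf ../share/etc/hosts  "$base/etc/hosts"
ln -sf ../rw/etc/hostname  "$base/etc/hostname"
cat "$base/etc/hostname"
```

On a real diskless client, /share would be exported read-only to all hosts and /rw would be a small per-host writable area.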

> On startup (secure mode) the temp directory is mounted in a temp
> location (eg /copy), and files all files are copied from /rw to /copy.
> This means the read-write files are all contained on the one partition,
> that could easierly be erased/formatted on startup. At this stage, /copy
> is re-mounted over /rw. This would be easier if I didn't have to re-mount
> /copy twice, I will need to think about that.
> 
> The result will be that /var, /dev, and /tmp are read-write,
> but completely refreshed every-time the computer boots.

??? What do you mean here? Why copy anything at all? How could the
filesystems get broken? Isn't it the server's responsibility to keep
them working? Do I have to copy my 2 GB /rw partition to /copy and then
back, over my 40 KB/s PLIP link?

> Anyway, as part of this redesign, I propose to package a new package,
> diskless-image.
> 
> I am not sure if I normally have to ITP a new package when it is based
> on the same source package, but this one is a bit unusual...
> 
> ...it is not meant to be manually installed! Rather, when you create a
> new diskless-image with diskless-newimage, this script automatically
> installs diskless-image.deb with the dpkg --root parameter, so that it
> only effects the NFS-root image. I think it is important to have
> this in the Debian archive though, to make upgrades easier.
> 
> I believe that creating a separate package makes diskless for modular
> and easier to maintain. I suspect that it will make installing different
> architectures easier, as diskless-image can be installed a host other then
> the server, but I can't test this.
> 
> As it is possible that diskless-image could break your computer if
> installed on the root directory by mistake, I have a check in the preinst
> file to ensure the directory /etc/diskless-image exists. If it doesn't
> installation will abort.
> 
> Any comments?

Try installing for an m68k/ppc/alpha diskless station on your i386. I
might test i386/m68k/alpha in any combination of server and client if I
can find the disk space for it.

May the Source be with you.
Goswin



Re: /usr/etc and /usr/local/etc?

1999-10-05 Thread goswin . brederlow
Richard Kaszeta <[EMAIL PROTECTED]> writes:

> Martin Schulze writes ("Re: /usr/etc and /usr/local/etc?"):
> >Aaron Van Couwenberghe wrote:
> >> Just a quick inquiry --
> >> 
> >>   Why is it that we exclude /usr/etc from our distribution? FHS and FSSTND
> >
> >Because configuration belongs to /etc.  Period.

Good point, but /etc blows up to quite a size and can't be shared
across hosts.
...
> Config files are, by their nature, host-specific, and should not be in
> /usr

They are not. /etc/hosts, for example, should be the same across a
pool. Nearly all files in /etc can be shared, and none should be
rewritten on the fly.

Apart from /etc/mtab (which can be linked to /proc/mounts), normally
nothing gets written to /etc, and / can be read-only. For diskless
systems, /usr/etc and /usr/share/etc could reduce the size of the
ramdisk or root fs needed to boot, and more data could be shared across
a pool.

Alternatively, /etc/share/, /etc/arch and /etc/local could be used.
Whichever one likes.
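The /etc/mtab trick mentioned above is just a symlink to the kernel's own mount table; a sketch (done in a scratch directory here, since replacing the real /etc/mtab needs root):

```shell
set -e
mkdir -p /tmp/etc-demo

# Point mtab at the kernel's mount table so nothing has to write to /etc
ln -sf /proc/mounts /tmp/etc-demo/mtab

readlink /tmp/etc-demo/mtab    # where the link points
head -n 1 /tmp/etc-demo/mtab   # first mounted filesystem, as the kernel sees it
```

With this link in place, tools that read /etc/mtab see the kernel's view of mounted filesystems, and / can stay read-only.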

May the Source be with you.
Goswin



Re: Release-critical Bugreport for March 31, 2000

2000-04-03 Thread Goswin Brederlow
Ben Collins <[EMAIL PROTECTED]> writes:

> On Sat, Apr 01, 2000 at 10:13:38AM -0800, esoR ocsirF wrote:
> > Caution, IANAD. Just tring to help
> > 
> > Package: cricket (debian/main)
> > Maintainer: Matt Zimmerman <[EMAIL PROTECTED]>
> >   56948  cricket depends on non-existant package
> > 
> > Package: ftp.debian.org (pseudo)
> > Maintainer: Guy Maor <[EMAIL PROTECTED]>
> >   60707  cricket depends on a nonexistent package
> > 
> > Shouldn't these be merged?
> 
> No, you can only merge bugs on the same package. One is against the
> pacakge itself, the other is for ftp.debian.org to remove that package.

Shouldn't it be reassigned to the same package and merged?

If it's really a bug against ftp.debian.org, I could submit a lot of
such bug reports for non-i386 packages, many of them violating their
license.

May the Source be with you.
Goswin



Re: Bochs / VGA-Bios license question / freebios anyone?

2000-08-14 Thread goswin . brederlow
Ulrich Eckhardt <[EMAIL PROTECTED]> writes:

> On Sun, 13 Aug 2000, Roland Bauerschmidt wrote:
> > As Goswin mentioned earlier it's also possible to use bochs with some
> > other bios 
> [snip]
> 
> I'm not sure if this even touches this discussion, but what about using
> the bios that is already present on most computers?
> Wouldn't that reduce the dependencies to nil?

If you tell me how to read that BIOS out, and how to simulate the same
hardware it runs on down to the last bit.

If the user has a PIIX IDE controller, you must emulate that; if he has
a Promise IDE controller, you must emulate that. No way.

MfG
Goswin




Re: WG: Broken bootable SPARC CD#1, and why this happened

2000-08-17 Thread goswin . brederlow
Peter Makholm <[EMAIL PROTECTED]> writes:

> [EMAIL PROTECTED] writes:
> 
> > Not for me...
> 
> Life is nice isn't it?
> 
> 
> (And then stop sending this "Not for me"-answers all the time or
> something bad could happend)

I would guess that's a mail loop, like an away message.

MfG
Goswin




Re: policy changes toward Non-Interactive installation

2000-08-19 Thread goswin . brederlow
Joey Hess <[EMAIL PROTECTED]> writes:

> Manoj Srivastava wrote:
>... 
> Quite simply: This type of thing can not be handled before unpacking, so
> it isn't. Debconf allows package to ask questions in their postinst,
> this is just *strongly* discouraged. See the realplayer installer for a
> package that (rarely) has to use debconf interactively in its postinst.

My opinion on this is that packages that need to ask questions after
unpacking should not be set up together with packages that don't need
that.

There should be a preconfigure and a postconfigure, and the unpacking
in between should be non-interactive. There is no need to stop all
packages from being configured just because the kernel-image package
needs some input. Downtime of services should be minimal.
 
> > Let me try a concrete example; the kernel image postinst, and
> >  see how far we get. These are the questions the kernel image
> >  postinstall asks, and I have tried to figure out if the question can
> >  be pre asked. 
> > 
> > ==
> >  - Question depends on test on fie system
> > /  Question important (IMHO)
> > |/ --- Depends on previous answer 
> > ||/ -- Needs run time test
> > |||/ __ can be pre asked
> > /
> > |
> > |   
> > X...Y1) Ask to remove /System.map files
> > .X  Y2) ask to prepare a boot floppy
> > XXX.Y3) ask which floppy drive to use
> > .XX.?4) do I need to format the floppy?
> > .XXXN5) Insert floppy, hit return
> > .XXXN6) failure, retry?
> > .XXXN7) failure, you have formatted floppy?
> > .XXXN8) you have floppy, hit return when ready
> > .XXXN9) Failure writing floppy, retry?
> > .XXXN   10) failure, hit return when youhave new floppy
> > XX..Y   11) if conf exists ask if we should run $loader with old 
> > config
> > XXX.Y   12)Or else ask if a new $loader config
> > .XX.Y   13) Or else ask if loader needed at all
> > .XX.N   14) Install boot vlock on partition detected at runtime
> > N   15) Install mbr root disk
> > .XXXN   16) Failure writing mbr, do this manually, hit return 
> > .XX.N   17) make that partition active?
> > ==
> 
> Right. This is obviously a rather important package, which *cannot*
> fail, plus is is very dependant on the actual state of the system. As
> such, the best you'll be able to do is allow a few questions to be
> pre-asked, and defer the remainder to the appropriate maintainer script.

I think configuring the package twice is not so good. If you know that
you need to bother the user after unpacking anyway, let's do it all in
one step.
 
> (BTW, I'm ccing this to Wichert and AJ as a sterling example of why
> debconf has to continue to support this type of thing in maintainer
> scripts. I think both of them don't want it to have this capability, but
> it is, as you have noted, essential for some packages.

May the Source be with you.
Goswin




Re: policy changes toward Non-Interactive installation

2000-08-19 Thread goswin . brederlow
Joey Hess <[EMAIL PROTECTED]> writes:

> Manoj Srivastava wrote:
> > Right. I just do nt see these invariants being very useful. I
> >  would much rather have a mk-realplayer package that helps me create a
> >  realplayer-blah.deb; and the invariants are then natural and not
> >  artificially imposed. When that realplayer.deb is installed,
> >  realplayer is installed (duh), and the version of that package tells
> >  me what version I have installed. 
> > 
> > I can then move the .deb to my local apt-able tree, and all
> >  other machines in my environment can just install this.
> >
> > The mk-realplayer package does not have to be upgraded every
> >  time realplayer changes. I can install an older version of
> >  realplayer if I wish (getting it off a CD, or something).
> 
> Well I for one find being able to make sure I am upgraded to the current
> version is very useful, especially given the historical bugginess of
> realplayer.

Why not just ask in the preinst whether to update or not, and provide
a script to do so later as well?

Couldn't all the information needed for downloading be asked for in
the preinst as well? (I've never used that package.)

> 
> >  Joey> If you don't want to download realplayer right now, why are you
> >  Joey> installing the package?
> > 
> > It is not useful hectoring the user when they report a
> >  perceived problem.
> 
> I'm not hectoring, I'm asking a question. That is why my sentence ended
> with a question mark.

A good example of this is the xanim modules package. It asks whether
to download the modules or not, and one can start installing them at
any time later as well.

May the Source be with you.
Goswin




Re: how to setup an apt-getable site

2000-08-19 Thread goswin . brederlow
"Dr. Guenter Bechly" <[EMAIL PROTECTED]> writes:

> Hi,
> 
> I just have some weird problems making my uploaded unofficial deb-packages
> apt-getable. The referring site is http://www.bechly.de/debian/. I had all
> five Debian packages in a local directory called 'debian' and correctly ran
> dpkg-scanpackages on it. After adjusting my apt sources.list it was no
> problem at all to access the local directory with the Packages.gz file
> with apt-get or dselect. I then uploaded the complete directory to my
> website via ftp (binary mode). Now I can access the site with 'apt-get
> update' without errors, but as soon as I use 'apt-get install foo' the
> package foo is downloaded but the installation fails with the error
> 'Size mismatch'.
> I checked the referring manpages and docs, and several Debian books, and
> did not find the slightest clue (therefore PLEAAASE no RTFMs).
> Any help would be greatly appreciated.

Run dpkg-scanpackages again. You seem to have updated the debs after
building the Packages.gz. Another problem might be some http/ftp proxy
in between.

Compare the size entry in the Packages.gz with the actual size of the
debs.
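That comparison can be scripted; here is a self-contained sketch (the file names `good.deb`, `stale.deb` and the local `Packages.gz` are invented for the demo):

```shell
# Demo setup with made-up names: one correct entry, one stale entry.
printf 'hello' > good.deb                      # 5 bytes on disk
printf 'hello world' > stale.deb               # 11 bytes, index says 5
printf 'Filename: good.deb\nSize: 5\nFilename: stale.deb\nSize: 5\n' \
  | gzip > Packages.gz

# The actual check: recorded Size: vs. size on disk; any line printed
# is a candidate for apt's "Size mismatch" error.
zcat Packages.gz \
  | awk '/^Filename:/ {f=$2} /^Size:/ {print $2, f}' \
  | while read size file; do
        actual=$(wc -c < "$file" | tr -d ' ')
        [ "$size" = "$actual" ] || echo "MISMATCH: $file ($size vs $actual)"
    done
# prints: MISMATCH: stale.deb (5 vs 11)
```

Against a real mirror you would run the same pipeline in the directory the `Filename:` entries are relative to.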

May the Source be with you.
Goswin




Re: CD rom image for net install

2000-08-19 Thread goswin . brederlow
Kenneth Scharf <[EMAIL PROTECTED]> writes:

> Sometime ago someone here mentioned the existance of a bootable cd rom
> image that contained only the contents of the boot floppies to allow
> install over the network on a computer with NO os installed.  Anyone
> know the URL where I can find this image for Potato?

I don't know of a finished ISO, but you can download the disks directory
and burn it with the 2.88MB rescue disk as the boot image.

May the Source be with you.
Goswin




Re: new developper with new packages

2000-08-19 Thread goswin . brederlow
"Christophe Prud'homme" <[EMAIL PROTECTED]> writes:

> Hi
> 
> I am a developer on two projects:
> 1- corelinux (LGPL):   http://corelinux.sourceforge.net OOA and OOD for Linux
> 2- freefem(GPL):   http://kfem.sourceforge.net Finite Element Code and
> 
> 
> I have created rather involved debian packages for them and I would like to 
> submit them to woody ( see on the respective web sites )
> I read some stuff on the developer's corner, but the steps to become
> a Debian developer are not clear to me.
> I was one a long time ago but for a short time.
> 
> It seems that I am an applicant and some kind of applicant manager should
> review my application to see if I am worthy enough.
> so what is the process?
> 
> I have other packages in store:
> 1- vtk Visualisation ToolKit http://www.kitware.com
> 2- vtkqgl a Qt widget for vtk 
> 
> thx for any help,
> Christophe

Please subscribe to debian-mentors and ask there for a mentor for your
packages. Normally a few people with similar interests respond. You
choose one of them to be your mentor, and he helps you with the
packages and will upload them for you.

Getting a mentor first is a good thing, because it's easy and
fast. You have a direct connection to someone experienced if you have
any questions, and you have someone watching over you a bit, pointing
out little mistakes you might make or possible improvements.

Becoming a maintainer is a more lengthy process, and debian-mentors
and your mentor will help you with any questions about that as well.

May the Source be with you.
Goswin




Re: build dependencies

2000-08-19 Thread goswin . brederlow
Peter S Galbraith <[EMAIL PROTECTED]> writes:

> # apt-get source --compile gri
> 
> Or have I missed something?

What is still missing is

apt-get --compile dist-upgrade

That should download all sources, build them in the correct order,
move the debs to apt's cache, and install them.

May the Source be with you.
Goswin




Re: Problem with apt on slink systems

2000-08-19 Thread goswin . brederlow

> Where the heck does the word 'stable' come from? I removed my whole
> /var/state/apt/ and I do not know where it comes from. Hardcoded
> somewhere perhaps? Or did I miss something grave?
> 
> 
> MfG/Regards, Alexander

What revision of slink do you have? slink 2.1R3 doesn't have that
problem.

Try to update to potato, since that's now stable.

May the Source be with you.
Goswin




Re: corelinux debian packages

2000-08-19 Thread goswin . brederlow
"Christophe Prud'homme" <[EMAIL PROTECTED]> writes:

> Hi,
> 
> I am waiting for my debian maintainer application to take place.
> In the mean time, I want to provide my work to the masses
> 
> Here is the apt line to add if you want corelinux (OOA and OOD library for 
> Linux)
> These packages were compiled using WOODY
> 
> deb http://augustine.mit.edu/~prudhomm/debian ./
> 
> more packages (not corelinux related) to follow in the future:

How about source? Does

deb-src http://augustine.mit.edu/~prudhomm/debian ./

work?

May the Source be with you.
Goswin

PS: Get a mentor to upload those. :)




rsync'ing pools (Was: Re: DEBIAN IS LOOSING PACKAGES AND NOBODY CARES!!!)

2001-01-01 Thread Goswin Brederlow
>>>>> " " == Tinguaro Barreno Delgado <[EMAIL PROTECTED]> writes:

 > Hello again.

 > On Sun, Dec 31, 2000 at 02:22:45PM +, Miquel van
 > Smoorenburg wrote:
>>  Yes. The structure of the archive has changed because of
>> 'package pools'.  You need to mirror 'pool' as well.
>> 
>> Also, "woody" is no longer "unstable". "sid" is. "woody" is
>> "testing".
>> 
>> Mike.
>> 

 > Ok. Thanks to Peter Palfrader too. Then, there is a more
 > complicated issue for those who has a partial mirror (only i386
 > for me), but I think that is possible with rsync options.

There was a script posted here to do partial rsync mirrors.

I used that script and added several features to it. What's missing is
support for the debian-installer in sid, but I'm working on that.

Changes:
- multiple architectures
- keep links from woody -> potato
- mirror binary-all
- mirror US and non-US pools
- use last version as template for new files
- mirror disks

People interested in only one arch and only woody/sid should remove
binary-all and should resolve the links.

Joey, can you put this where the original came from, or next to the
original script? Any changes to the script from your side?

So here's the script for all who care:

----------
#!/bin/sh -e
# Anon rsync partial mirror of Debian with package pool support.
# Copyright 1999, 2000 by Joey Hess <[EMAIL PROTECTED]>, GPL'd.
# Add ons by Goswin Brederlow <[EMAIL PROTECTED]>

# Update potato/woody files and Packages.gz, or use the old ones? If
# you already have new enough ones, say yes. This is for cases where
# you restart a scan after the modem died.
# No is the safe answer here, but wastes bandwidth when resuming.
HAVE_PACKAGE_FILES=no

# Should a Contents file be kept updated? Saying no won't delete old
# Contents files, so when resuming you might want to say no here
# temporarily.
CONTENTS=yes

# Flags to pass to rsync. More can be specified on the command line.
# These flags are always passed to rsync:
FLAGS="$@ -rlpt --partial -v --progress"
# These flags are not passed in when we are getting files from pools.
# In particular, --delete is a horrid idea at that point, but good here.
FLAGS_NOPOOL="$FLAGS --exclude Packages --delete"
# And these flags are passed in only when we are getting files from pools.
# Remember, do _not_ include --delete.
FLAGS_POOL="$FLAGS"
# The host to connect to. Currently must carry both non-us and main
# and support anon rsync, which limits the options somewhat.
HOST=ftp.de.debian.org
# Where to put the mirror (absolute path, please):
DEST=/mnt/raid/rsync-mirror/debian
# The distributions to mirror:
DISTS="sid potato woody"
# Architectures to mirror:
ARCHS="i386 alpha m68k"
# Should source be mirrored too?
SOURCE=yes
# The sections to mirror (main, non-free, etc):
SECTIONS="main contrib non-free"
# Should symlinks be generated to every deb, in an "all" directory?
# I find this is very handy to ease looking up deb filenames.
SYMLINK_FARM=no

###

mkdir -p $DEST/dists $DEST/pool

# Snarf the contents file.
if [ "$CONTENTS" = yes ]; then
for DIST in ${DISTS}; do
for ARCH in ${ARCHS}; do
echo Syncing $DEST/dists/${DIST}/Contents-${ARCH}.gz
rsync $FLAGS_NOPOOL \
  $HOST::debian/dists/$DIST/Contents-${ARCH}.gz \
  $DEST/dists/${DIST}/
echo Syncing $DEST/non-US/dists/${DIST}/non-US/Contents-${ARCH}.gz
rsync $FLAGS_NOPOOL \
  $HOST::debian-non-US/dists/$DIST/non-US/Contents-${ARCH}.gz \
  $DEST/non-US/dists/${DIST}/non-US/
done
done
fi

# Generate list of archs to download
ARCHLIST="binary-all"
DISKS_ARCHLIST=""
NONUS_ARCHLIST="binary-all"

for ARCH in ${ARCHS}; do
ARCHLIST="${ARCHLIST} binary-${ARCH}"
DISKS_ARCHLIST="${DISKS_ARCHLIST} disks-${ARCH}"
NONUS_ARCHLIST="${NONUS_ARCHLIST} binary-${ARCH}"
done

if [ "$SOURCE" = yes ]; then
ARCHLIST="${ARCHLIST} source"
NONUS_ARCHLIST="${NONUS_ARCHLIST} source"
fi

# Download packages files (and .debs and sources too, until we move fully
# to pools).

if [ x$HAVE_PACKAGE_FILES != xyes ]; then
for DIST in ${DISTS}; do
for section in $SECTIONS; do
for type in ${ARCHLIST}; do
echo Syncing  $DEST/dists/$DIST/$section/$type
mkdir -p $DEST/dists/$DIST/$section/$type
rsync $FLAGS_NOPOOL \
  

Re: finishing up the /usr/share/doc transition

2001-01-01 Thread Goswin Brederlow
> " " == Joey Hess <[EMAIL PROTECTED]> writes:

 > So it will need to:

 > 1. Remove all symlinks in /usr/doc that correspond to symlinks
 >or directories with the same names in /usr/share/doc
 > 2. If there are any directories with the same names in /usr/doc
 >and /usr/share/doc, merge them. (And probably whine about it,
 >since that's a bug.)
 > 3. Move any remaining directories and symlinks that are in
 >/usr/doc to /usr/share/doc
 > 4. Move any files in /usr/doc to /usr/share/doc (shouldn't be
 >necessary, but just in case).
 > 5. Remove /usr/doc
 > 6. Link /usr/doc to /usr/share/doc

What is the reason for linking /usr/doc to /usr/share/doc (or
share/doc)?

Maybe I have architecture-dependent documentation that should not be
in share.

This has probably been answered a thousand times, but please, just
once more for me.

MfG
Goswin

PS: and don't say it's so that users looking in /usr/doc find the docs
in /usr/share/doc; users should adapt. :)




Re: finishing up the /usr/share/doc transition

2001-01-01 Thread Goswin Brederlow
>>>>> " " == Joey Hess <[EMAIL PROTECTED]> writes:

 > Goswin Brederlow wrote:
>> What is the reason for linking /usr/doc to /usr/share/doc (or
>> share/doc)?

 > So that packages that are not policy compliant and contain
 > files only in /usr/doc still end up installing them in
 > /usr/share/doc.

So bugs won't be noticed. Maybe a simple grep in the Contents files
would be enough to find all such packages.
Does lintian check for /usr/[share/]doc?

/debian/dists/woody% zgrep "usr/doc" Contents-i386.gz \
  | while read FILE PACKAGE; do echo $PACKAGE; done | sort -u | wc
748 748   12849

Seems to be a lot of packages still using /usr/doc.

>> Maybe I have architecure dependent documentation that should
>> not be in share.

 > Er. Well policy does not allow for this at all. If you do
 > actually have such a thing (it seems unlikely), perhaps you
 > should bring it up on the policy list and ask for a location to
 > put it.

I don't have any and I don't think anyone can make a good case for
any. What reason could there be that I can't read some i386-specific
documentation on an alpha and use it, e.g., in plex or bochs?
The only exception would be documentation in executable form, which is
a) evil and b) should be in /usr/bin.

MfG
Goswin




Re: bugs + rant + constructive criticism (long)

2001-01-02 Thread Goswin Brederlow
> " " == Erik Hollensbe <[EMAIL PROTECTED]> writes:

 > Some packages refuse to install, and of course, break apt in
 > the process.  Right now, I'm *hopefully* going to be able to
 > repair a totally hosed server that failed an apt-get because
 > MAN AND GROFF failed to install properly, ending the upgrade
 > process and therefore stopping the install of all the
 > perl/debian-perl packages except the binary, rendering apt
 > practically useless.

Try to configure the unpacked packages with "dpkg --configure
--pending". That helps a lot most of the time. Apart from that, have a
look at what gets updated. If you update 200 packages of unstable in
one go, you will kill your system with 99% certainty. Be a bit
selective and do "apt-get install " for the major components like
libc, perl, apt, and dpkg before updating all the other stuff.

I know that should not be necessary, but with unstable being unstable,
I found this a good way to reduce the likelihood of unnecessary
packages breaking vital ones.

 > No doubt the failure of man and groff has to do with the
 > problem that i've been having with many other packages, which I
 > will detail below.

 > Please, please, please, please... Checking your shell scripts
 > for SYNTAX ERRORS is not a bad idea before you submit it to the
 > package repository!  You have no idea how many times, that I
 > have helped people in #debian on OPN fix shell script errors
 > for packages like mysql-server, which, could have easily
 > rendered a semi-production system completely dead (hopefully
 > they compile from source, but that's not the point, is it?)
 > simply because someone forgot a bracket or used the wrong 'set'
 > parameters in their script.

 > Other issues with apt in general - there is no OBVIOUS way
 > (short of reading the APT/DPKG perl classes) to force certain
 > flags.

RTFM

 > For instance - install package 'realplayer', then, upgrade your
 > copy of xfree86-server or xfree86-common, and watch them fail
 > as it tries to write to a file in /etc/X11. I don't think I
 > need to go into detail about how much stuff like this pisses
 > off the average user. rpm anyone? (no, apt-get -f install does
 > not work, so don't even bother)

Did you file a bugreport?

 > And why are packages being REMOVED (lib-pg-perl for example)
 > when I dist upgrade?

RTFM, that's what dist-upgrade is for. Probably a conflict from some
updated package.

 > apt-get and it's kin need more simple getopt-style flags that
 > allow overriding of certain things, mainly conflicts. Also, an
 > option to actually view what's being upgraded before you
 > download 250 packages that are only going to break your system
 > would be nice as well.

RTFM:

apt-get -u dist-upgrade

Also do an "apt-get -u upgrade" first. That won't change which
packages are installed, but only update what's possible.

 > I dunno - I was using debian back when hamm was released, and I
 > have never seen such an utter mess of incompatibilities and
 > stupid human error even in the worst mess of unstable upgrades
 > (which happens, and is understandable). Almost all of this is
 > due to a significant lack of adequate testing by package
 > maintainers.

You are the tester; keep testing and FILE BUG REPORTS.

Altogether I must say that unstable has become better and better. For
the last 3 years I never had to reinstall stable after an unstable
update. For the last year I didn't need a rescue disk after an
unstable update. For the last month I haven't even had an error on
update (but I haven't updated for the last 3 weeks, so that might
explain it).

MfG
Goswin




maybe ITP rsync mirror script for pools

2001-01-02 Thread Goswin Brederlow
Hi,

I've been asked about my rsync mirror script, which is an extension
of Joey Hess's one, on irc and here several times.

So would there be interest in a deb of the script, coming with a
debconf interface for configuration, cronjob or ip-up support, and
whatever else is needed to keep an up-to-date mirror?

Or do you all prefer to do it your own way? I don't want to package
something just for 2 or 3 people.

MfG
Goswin

PS: Just send me a "yes, I want it" privately if you want such a package.




Re: autodetecting MBR location

2001-01-02 Thread Goswin Brederlow
> " " == Tollef Fog Heen <[EMAIL PROTECTED]> writes:

 > * Russell Coker | My lilo configuration scripts need to be able
 > to infer the correct location | for the MBR.  I am currently
 > using the following algorithm: | take root fs device from
 > /etc/fstab and do the following: | s/[0-9]*// | s/part$/disc/

 > What is the use of the first s/?  Unless your first letter is a
 > digit, it will just remove the zero-width string '' between the
 > first / and the beginning of the string.

 > A better solution will probably be to

 > s/[0-9]$//

 > which will remove 5 from /dev/hda5.

You forgot /dev/hda17, which would become /dev/hda1 with your syntax.
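A greedy, anchored match handles any number of trailing digits; a quick sketch:

```shell
# Strip the whole trailing partition number, not just one digit.
echo /dev/hda5  | sed 's/[0-9]*$//'   # -> /dev/hda
echo /dev/hda17 | sed 's/[0-9]*$//'   # -> /dev/hda
```

(The devfs-style `s/part$/disc/` rule from the original algorithm would still be needed on top of this.)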

MfG
Goswin




Problem with start-stop-daemon and pidfile

2001-01-02 Thread Goswin Brederlow
Hi,

I want to use start-stop-daemon to start the debian-mirror script if
it's not already running. I don't trust the script, so I run it as user
mirror:nogroup.

But then start-stop-daemon can't write a pidfile to /var/run.

What's the right[tm] way to do this?

root:~% start-stop-daemon -S -m -c mirror:nogroup -u mirror \
    -p /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror
start-stop-daemon: Unable to open pidfile `/var/run/debian-mirror.pid'
for writing: Permission denied

May the Source be with you.
Goswin




Re: Problem with start-stop-daemon and pidfile

2001-01-02 Thread Goswin Brederlow
>>>>> " " == Adam Heath <[EMAIL PROTECTED]> writes:

 > On 3 Jan 2001, Goswin Brederlow wrote:
>> Hi,
>> 
>> I want to use start-stop-daemon to start the debian-mirror
>> script if its not already running. I don't trust the script, so
>> I run it as user mirror:nogroup.
>> 
>> But then start-stop-daemon can't write a pidfile to /var/run.
>> 
>> Whats the right[tm] way for this?
>> 
>> root:~% start-stop-daemon -S -m -c mirror:nogroup -u mirror -p
>> /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror
>> start-stop-daemon: Unable to open pidfile
>> `/var/run/debian-mirror.pid' for writing: Permission denied

 > Touch the file first, then chown it, before calling s-s-d.


OK, that helps somewhat. But now start-stop-daemon always starts the
script.

--
#!/bin/sh
#
# debian-mirror cron script
#
# This will start the debian-mirror script to update the local debian
# mirror, unless it's still running.

set -e

test -x /usr/sbin/debian-mirror || exit 0

touch /var/run/debian-mirror.pid
chown mirror.nogroup /var/run/debian-mirror.pid

touch /var/log/debian-mirror.log
chown mirror.nogroup /var/log/debian-mirror.log

start-stop-daemon -S -m -c mirror:nogroup -u mirror \
    -p /var/run/debian-mirror.pid -x /usr/sbin/debian-mirror \
    >> /var/log/debian-mirror.log &
--

That's how I start the script now, and "ps aux" shows:

mirror   20123  0.5  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20125  0.2  0.2  1516  640 pts/3S02:07   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian/dists/sid/Contents-i386.gz /mnt/raid/rsync-mirror/debian/dists/sid/

and cat /var/run/debian-mirror.pid:
20123

But running the script again starts a new instance:

mirror   20123  0.0  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20135  0.1  0.4  2076 1044 pts/3S02:07   0:00 sh -e 
/usr/sbin/debian-mirror
mirror   20137  0.0  0.2  1516  696 pts/3S02:07   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian/dists/sid/Contents-i386.gz /mnt/raid/rsync-mirror/debian/dists/sid/
mirror   20143  0.0  0.2  1516  668 pts/3S02:08   0:00 rsync -rlpt 
--partial -v --progress --exclude Packages --delete ftp.de.debian.org 
:debian-non-US/dists/sid/non-US/Contents-i386.gz 
/mnt/raid/rsync-mirror/debian/non-US/dists/sid/non-US/

and cat /var/run/debian-mirror.pid:
20135



So what am I doing wrong there?

MfG
Goswin




rsync mirror script for pools - first pre alpha release

2001-01-03 Thread Goswin Brederlow
>>>>> " " == Goswin Brederlow <[EMAIL PROTECTED]> writes:

 > Hi, I've been asked about my rsync mirror script, which is an
 > extension from Joey Hess's one, on irc and here several times.

 > So would there be interest in a deb of the script coming with a
 > debconf interface for configuration, cronjob or ip-up support
 > and whatever else is needed to keep an up-to-date mirror.

 > Or do you all prefer to do it your own way? I don't want to
 > package something just for 2 or 3 people.

 > MfG Goswin

 > PS: Just send me a yes I want privately if you want such a
 > package.

So here it is. The script works fine, but you need to configure
/etc/debian-mirror.conf manually at the moment. Once configured it
works fine for me.

On install it will ask you via debconf about debian-mirror being run
in cron.daily, and all the other options will be asked there as well
in the future.

It will create a user mirror when installed and run as mirror:nogroup
when started from /etc/cron.daily/debian-mirror.

The package is called debian-mirror and is available from:

deb ftp://rut.informatik.uni-tuebingen.de unstable main
deb-src ftp://rut.informatik.uni-tuebingen.de unstable main

Suggestions for the script are welcome, especially: how do I make
debconf pop up a checklist like:

+----------------------------------------+
| What distributions should be mirrored? |
|                                        |
| [ ] potato                             |
| [ ] woody                              |
| [ ] sid                                |
|                                        |
|                 < OK >                 |
+----------------------------------------+
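For what it's worth, debconf's multiselect question type renders exactly this kind of checklist in the dialog frontend. A sketch of a template (the template name and wording are my guesses, assuming a debconf version that already has multiselect):

```
Template: debian-mirror/dists
Type: multiselect
Choices: potato, woody, sid
Description: What distributions should be mirrored?
 Each selected distribution will be kept in the local mirror.
```

The package's config script would then ask the question with `db_input` and read the answer back with `db_get`.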

Happy testing,
Goswin




Re: maybe ITP rsync mirror script for pools

2001-01-04 Thread Goswin Brederlow
>>>>> " " == Marco d'Itri <[EMAIL PROTECTED]> writes:

 > On Jan 02, Goswin Brederlow
 > <[EMAIL PROTECTED]> wrote:
>> So would there be interest in a deb of the script coming with a
>> debconf interface for configuration, cronjob or ip-up support
>> and whatever else is needed to keep an uptodate mirror.
 > Please don't encourage private mirrors!

 > I have been the administrator of ftp.it.debian.org since a long
 > time, and I notice there are many sites doing nightly mirrors
 > for their own use.  Mirroring is free for them because it's
 > done at night when offices are empty and there is nobody
 > downloading porn, but the aggregated traffic is significant for
 > me!  They could save bandwidth and disk space just by using a
 > correctly configured squid cache.

 > -- ciao, Marco

First, that's not my problem, sorry. I just provide the means to do it
efficiently.

People will mirror anyway, and my script is for any partial mirror
(which might be public or private, for a company or for people
burning CDs). I use it to heavily cut down the downloading time
(comfort) and to actually reduce traffic (because my modem is too slow
to keep an ftp mirror up-to-date).

If you don't want people to do nightly mirrors, tell them so, or deny
the service. Not providing a script for people needing one will only
make them write their own, probably less efficient, scripts.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-04 Thread Goswin Brederlow
> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > hi, [i'm not sure if this has been resolved, lart me if you
 > like.]

 > my proposal to resolve big Packages.gz is through package pool
 > system.


What's the problem with a big Packages file?

If you don't want to download it again and again just because of small
changes, I have a better solution for you:

rsync

apt-get update could rsync all Packages files (yes, not the .gz ones)
and thereby download only the changed parts. On uncompressed files
rsync is very effective, and the changes can be compressed for the
actual transfer. So on update you will practically get a diff.gz to
your old Packages file.
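The effect is easy to demonstrate locally (the file names are invented for the demo): make a one-line, same-length change and count differing bytes before and after compression.

```shell
# One small change touches 3 bytes of the plain file, so rsync's
# delta transfer wins; in the gzipped pair almost everything after
# the change point differs, so rsyncing the .gz saves next to nothing.
seq 1 1000 > a
sed 's/^500$/xxx/' a > b               # same-length, single-line change
cmp -l a b | wc -l                     # 3 differing bytes
gzip -c a > a.gz
gzip -c b > b.gz
cmp -l a.gz b.gz 2>/dev/null | wc -l   # many differing bytes
```

This is exactly why the uncompressed Packages file, not Packages.gz, is the thing to rsync.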

If that suits your needs, feel free to file a bug report on apt about
this.

MfG
Goswin




Re: useful tools for packages--any comprehensive list?

2001-01-04 Thread Goswin Brederlow
> " " == Mikael Hedin <[EMAIL PROTECTED]> writes:

 > Hi,
>> from time to time people mentions some nifty tools (mostly
>> scripts?)
 > to search for info about packages and similar.  Eg the citation
 > below.  Is there some list/collection/etc of such utilities?
 > Or for other usefull things like `apt-cache search ',
 > which can be found in the manuals but is a bit tricky to find?

 > I couldn't find an easy way to find the biggest packages
 > installed so I hacked a script, but I suppose lots of these
 > things are really done, just I can't find it.

console-apt: s (sort), s (sort by size) [well, the interface has
changed a bit, but you can still sort by installed size or similar]

MfG
Goswin




Re: Problem with start-stop-daemon and pidfile

2001-01-04 Thread Goswin Brederlow
>>>>> " " == Matt Zimmerman <[EMAIL PROTECTED]> writes:

 > On Wed, Jan 03, 2001 at 02:10:19AM +0100, Goswin Brederlow
 > wrote:
>> touch /var/run/debian-mirror.pid chown mirror.nogroup
>> /var/run/debian-mirror.pid
>> 
>> touch /var/log/debian-mirror.log chown mirror.nogroup
>> /var/log/debian-mirror.log

 > Please don't do this.  nogroup should not be the group of any
 > files, just as nobody should not be the owner of any files.

Oops, yes.

What should I use? root?

MfG
Goswin




Potato depopularisation, weird links

2001-01-05 Thread Goswin Brederlow
Hi,

it seems that more and more packages disappear from potato and are
replaced by links into the pools. And it's not new packages that are
becoming stable, but old ones getting moved.

Did I miss something there?

Also a link is placed in /debian/dists/potato/main/source for each
package that's now in the pools. Directly in source, not source/x11 or
similar.

First, why the links at all? And then, why not sorted into sections?

MfG
Goswin




tar -I incompatibility

2001-01-05 Thread Goswin Brederlow
Hi,

the author of tar changed the --bzip option again. This time it's even
worse than the last time, since -I is still a valid option but with a
totally different meaning.

This totally changes the behaviour of tar, and I would consider that a
critical bug, since backup software breaks horribly with the new
semantics.

Any good reason not to whack the author with a 50 pound unix manual
and revert the changes?

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Sami Haahtinen <[EMAIL PROTECTED]> writes:

 > On Fri, Jan 05, 2001 at 03:05:03AM +0100, Goswin Brederlow
 > wrote:
>> Whats the problem with a big Packages file?
>> 
>> If you don't want to download it again and again just because
>> of small changes I have a better solution for you:
>> 
>> rsync
>> 
>> apt-get update could rsync all Packages files (yes, not the .gz
>> ones) and thereby download only the changed parts. On uncompressed
>> files rsync is very effective and the changes can be compressed
>> for the actual transfer. So on update you will practically get a
>> diff.gz to your old Packages file.


 > this would bring us to, apt renaming the old deb (if there is
 > one) to the name of the new package and rsync those. and we
 > would save some time once again...

That's what the debian-mirror script does (it's about half of the
script just for that). It also uses old tar.gz, orig.tar.gz, diff.gz
and dsc files.

 > Or, can rsync sync binary files?

Of course, but forget it with compressed data.

 > hmm.. this sounds like something worth implementing..

I'm currently discussing some changes to the rsync client with some
people from the rsync ML which would uncompress compressed data on the
client side (no changes to the server) and rsync that. It sounds like
it wouldn't improve anything, but when reading the full description it
actually does.

Without that, rsyncing new debs against old ones hardly ever saves
anything. Where it helps is with big packages like xfree, where
several packages are identical between releases.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 5 Jan 2001, Goswin Brederlow wrote:

>> If that suits your needs, feel free to write a bugreport on apt
>> about this.

 > Yes, I enjoy closing such bug reports with a terse response.

 > Hint: Read the bug page for APT to discover why!

 > Jason

I couldn't find any existing bug report concerning rsync support for
apt-get in the long list of bugs.

So why would you close such a wishlist bugreport?
And why with a terse response?

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-05 Thread Goswin Brederlow
>>>>> " " == Junichi Uekawa <[EMAIL PROTECTED]> writes:

 > In 05 Jan 2001 19:51:08 +0100 Goswin Brederlow
 > <[EMAIL PROTECTED]> cum veritate
 > scripsit : Hello,

>> I'm currently discussing some changes to the rsync client with
>> some people from the rsync ML which would uncompress compressed
>> data on the client side (no changes to the server) and rsync
>> those. Sounds like not improving anything, but when reading the
>> full description on this it actually does.
>> 
>> Before that rsyncing new debs with old once hardly ever saves
>> anything. Where it hels is with big packages like xfree, where
>> several packages are identical between releases.

 > No offence, but wouldn't it be a tad difficult to play around
 > with it, since deb packages are not just gzipped archives, but an
 > ar archive containing gzipped tar archives?

Yes and no.

The problem is that deb files are special ar archives, so you can't
just download the member files and ar them together.

One way would be to download the files in the ar, ar them together and
rsync again. Since ar does not change the data in it, the deb has the
same data, just at different places, and rsync handles that well.

This would be possible, but would require server changes.

The trick is to know a bit about ar, but not too much. Just rsync the
header of the ar file up to the first real file in it, then rsync that
file recursively, then a bit more ar file data and another file, and
so on. Knowing when the subfiles start and how long they are is enough.
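The framing involved is small enough to sketch by hand. This builds a one-member archive with the same layout that `ar t foo.deb` parses (a real .deb has three members: debian-binary, control.tar.gz, data.tar.gz; the member here is illustrative):

```shell
# An ar file is an 8-byte global magic, then per member a 60-byte
# header followed by the member's data.
printf '!<arch>\n' > demo.deb                    # 8-byte global magic
# 60-byte member header:
#   name(16) mtime(12) uid(6) gid(6) mode(8) size(10) + 2-byte "`\n"
printf '%-16s%-12s%-6s%-6s%-8s%-10s' \
       'debian-binary/' 0 0 0 100644 4 >> demo.deb
printf '\140\n' >> demo.deb                      # header terminator
printf '2.0\n'  >> demo.deb                      # member data (4 bytes)
wc -c < demo.deb                                 # 8 + 60 + 4 = 72
```

With that framing known, an rsync client could checksum each member separately instead of treating the whole .deb as one opaque blob.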

The question will be how much intelligence to teach rsync. I like
rsync stupid, but still intelligent enough to do the job.

It's pretty tricky, so it will be some time before anything in that
direction is usable.

MfG
Goswin




Re: rsync mirror script for pools - first pre alpha release

2001-01-06 Thread Goswin Brederlow
> " " == esoR ocsirF <[EMAIL PROTECTED]> writes:

 > I would like to set up our local partial mirror to run without
 > attendance through multiple releases. If I hard code the
 > release candidate name into the mirror script, won't it just
 > break when testing goes stable?

The problem is that those are links, and rsync can either keep links
or follow them.

At the moment there are a lot of links in potato/woody, so I can't
follow links, and thus no mirroring under the names
stable/unstable/testing.

I could check what stable/testing/unstable is and then mirror what
they point to, but who cares. In a year or so an update of
debian-mirror will ask you whether you want to start mirroring the new
unstable and whether to drop the old stable.

MfG
Goswin




Re: Upcoming Events in Germany

2001-01-06 Thread Goswin Brederlow
> " " == Martin Schulze <[EMAIL PROTECTED]> writes:

 > May 19-20 Berliner Linux Infotage
 > http://www.belug.org/infotage/

Interesting. I'll have to check my calendar for a visit to my parents
in Berlin during that time.

 > July 5-8 LinuxTag 2001, Stuttgart http://www.linuxtag.org/
 > http://www.infodrom.ffis.de/Debian/events/LinuxTag2001/

I already planned to be there.

MfG
Goswin




Re: diskless package and devfs (Linux 2.4.x)

2001-01-06 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

 > Hello, would anyone object if I made the diskless package
 > depend on devfs support from 2.4.x in future versions?

Please do.

MfG
Goswin (a devfs fan).




Re: tar -I incompatibility

2001-01-06 Thread Goswin Brederlow
>>>>> " " == Scott Ellis <[EMAIL PROTECTED]> writes:

>> Goswin Brederlow wrote: > the Author of tar changed the --bzip
>> option again. This time its even > worse than the last time,
>> since -I is still a valid option but with a > totally different
>> meaning.  > > This totally changes the behaviour of tar and I
>> would consider that a > critical bug, since backup software
>> does break horribly with the new > semantic.
>> 
>> Yes, I think that this should definetely be changed back. The
>> first time I encountered this problem, I thought that the
>> tar.bz2 archive was broken from the error message tar
>> reported. (Not a valid tar archive or so.) This change is
>> confusing and unreasonable IMHO.

 > Of course the -I option to tar was completely non-standard.
 > The changelog explains why it changed, to be consistant with
 > Solaris tar.  I'd prefer portability and consistancy any day,
 > it shouldn't take that long to change any custom scripts you
 > have.  I always use long options for nonstandard commands when
 > building scripts anyway :)

The problem is that -I still works although it should completely break
everything. The only difference is that the tar file won't be
compressed anymore.

No warning, no error, and no one reads changelogs unless something
breaks (well, most people don't).

"mkdir bla"
"tar -cIvvf bla.tar.bz2 bla" should give:

"bla.tar.bz2: No such file" Since -I reads the files to be included
from a file.

"bla: Failed to open file, bla is a directory"  Since tar should try
to create a tra file named bla, which is a directoy.

or

"tar: cowerdly refusing to create empty archive"Since there
are no file given as parameters and none read from
bla.tar.bz2.

So where are the errors?

MfG
Goswin

PS: Why not change the Solaris version to be compatible with the widely
used Linux version? I'm sure there are more people and tools out there
for Linux using -I than there are for Solaris.




Re: [devfs users]: evaluate a patch please

2001-01-06 Thread Goswin Brederlow
> " " == Martin Bialasinski <[EMAIL PROTECTED]> writes:

 > Hi, there is a bug in the mc package, that most likely is
 > related to devfs. I can't reproduce it, nor does it seem to be
 > common.

 > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=57557&repeatmerged=yes

 > mc hangs occasionally on startup on the VC.

Oh, that's the reason why it hangs. I wondered about that.

 > There is a patch on the buttom of the report.

 > Could you tell me, if it is formally OK and if it fixes the
 > problem for you, if you can reproduce the bug?

Gotta test that.

I will be back.
Goswin




Re: What do you wish for in an package manager? Here you are!

2001-01-06 Thread Goswin Brederlow
> " " == Thorsten Wilmer <[EMAIL PROTECTED]> writes:

 > Hello Petr Čech wrote:
>> Adam Lazur wrote:
>>> The ability to install more than one version of a package
>>> simultaneously.
>>  Hmm. SO you install bash 2.04-1 and bash 2.02-3. Now what will
>> be /bin/bash 2.04 or 2.02 version? You will divert both of them
>> and symlink it to the old name - maybe, but but how will you
>> know, to what name it diverts to use it?
>> 
>> Give me please 3 sane examples, why you need this. And no,
>> shared libraries are NOT an excuse for this.

The only usable way is to have /bin/bash always point to a stable
version.

Apart from that, anyone who cares what version to use must use the
full path to the binary or a versioned name, like /bin/bash-2.04-1.

I would like binaries to be compiled to reside in versioned
directories, but I also see a lot of problems with that, especially
with /etc, /usr, /usr/share and so on. Every directory would need a
subdirectory for every package that has files there. What chaos.

Of course in special cases you can install all packages to
/usr/share/software/package-version/ and symlink, but that's not a
general solution to the problem. For stuff like /bin/sh a network
filesystem doesn't work.

MfG
Goswin




Re: What to do about /etc/mtab

2001-01-06 Thread Goswin Brederlow
> " " == s Lichtmaier  writes:

>> > >[EMAIL PROTECTED]:/tmp>mount -o loop foo 1 > Why dont we just
>> patch mount to use /var/run/mtab?  > I dont know about any
>> other program which modifies it.
>> 
>> because /var is not always on the same partition as /

 >  /etc/mtab shouldnt exist, all the information should be
 > handled by the kernel itself. But for the time being, I think I
 > have a better solution than the current one:

 >  Allocate a shared memory area. SHM areas are kept in memory
 > like small ramdisk. /etc/mtab is rather small, never longer
 > than a 4k page, besides the memory is swappable.

 >  And theres an advantage: With a SHM capable mount program
 > there would be no problem when mounting root read only.

umount /proc
cp /etc/mtab /proc/mounts
mount /proc
rm /etc/mtab
ln /proc/mounts /etc/mtab

Works fine even with 2.4 + devfs. Only find seems to still have a
slight bug there.

MfG
Goswin




Re: tar -I incompatibility

2001-01-06 Thread Goswin Brederlow
>>>>> " " == Sam Couter <[EMAIL PROTECTED]> writes:

 > Goswin Brederlow <[EMAIL PROTECTED]>
 > wrote:
>> PS: Why not change the Solaris version to be compatible with
>> the widely used linux version? I'm sure there are more people
>> and tools out there for linux using -I then there are for
>> solaris.

 > This is an incredibly Linux-centric point of view. You sound
 > worse than the BSD bigots.

It is just as Linux-centric as the other direction is Solaris-centric.

Letting an option die out is bad. Changing an option's name is
evil. Changing the meaning of an option to mean something else on the
fly is pure evil[tm].

I think Debian should patch -I back to the old meaning. If
compatibility with Solaris tar is wanted, then let -I print a warning
that it is deprecated, in a few months give an error, and maybe in a
year adopt a new meaning for -I (if that's really wanted).

 > There are many, many, many different unices that are *not*
 > Linux. You can't hope to change them all to be Just Like Linux
 > (tm). You'll be lucky if any of them follow Linux behaviour,
 > rather than the other way around.

I don't want to change them, but I also don't want to be changed by
them in ways that are plain stupid. And -I just changing meaning
without any warning is plain stupid.

 > Hint: Adopt some cross-platform habits like: "bzip2 -dc
 > foo.tar.bz2 | tar xf -"

 > Not only will you then become more immune to changes in
 > behaviour that was non-standard to begin with, you'll also find
 > adjustment to other systems a lot easier.


I like systems that don't change on a day-to-day basis. I don't want
"ls *" to do "rm *" tomorrow just because some other unix does it and
the author feels like it.


"tar -xIvvf file.tar.bz2" has been in use under linux for over a year
by pretty much everybody. Even if the author never released it as
stable, all linux distributions did it. I think that should count
something. Enough to at least ease the transition.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
> " " == Sam Vilain <[EMAIL PROTECTED]> writes:

 > On Fri, 5 Jan 2001 09:33:05 -0700 (MST) Jason Gunthorpe
 > <[EMAIL PROTECTED]> wrote:

>> > If that suits your needs, feel free to write a bugreport on
>> apt about this.  Yes, I enjoy closing such bug reports with a
>> terse response.  Hint: Read the bug page for APT to discover
>> why!

>> From bug report #76118:

 > No. Debian can not support the use of rsync for anything other
 > than mirroring, APT will never support it.

 > Why?  Because if everyone used rsync, the loads on the servers
 > that supported rsync would be too high?  Or something else?  --
 > Sam Vilain, [EMAIL PROTECTED] WWW: http://sam.vilain.net/ GPG
 > public key: http://sam.vilain.net/sam.asc

Actually the load should drop, provided the following features are
added:

1. cached checksums and pulling instead of pushing
2. client-side unpacking of compressed streams

That way the rsync servers would first serve the checksum file from
cache (200-1000 times smaller than the real file) and then just the
blocks the client asks for. So if even 1% of the file being rsynced
matches it breaks even, and everything above that saves bandwidth.

The current mode of operation of rsync works in reverse, so all the
computation is done on the server every time, which of course is a
heavy load on the server.
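A minimal sketch of feature 1, under invented conventions (4 KB blocks and a `.sums` sidecar file; rsync itself uses nothing of the sort): the server digests each block once, and clients pull the small sidecar instead of making the server checksum the whole file on every request.

```shell
# Hypothetical server-side precomputation: digest each 4 KB block once
# and store the result next to the file. A client fetches only the
# small .sums file, decides which blocks it is missing, and requests
# those, so no per-request checksum work is needed on the server.
seq 1 20000 > Packages          # stand-in for the real Packages file
split -b 4096 Packages blk_
md5sum blk_* > Packages.sums    # the "cached checksums" a client pulls
wc -l < Packages.sums           # one digest line per block
```

The sidecar here is roughly a hundredth the size of the file, which is the effect the "200-1000 times smaller" figure above is about.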

I hope both features will work without changing the server, but if not,
we will have to wait until servers catch up with the feature.

MfG
Goswin




Re: What to do about /etc/debian_version

2001-01-07 Thread Goswin Brederlow
> " " == Martin Keegan <[EMAIL PROTECTED]> writes:

 > Martijn van Oosterhout <[EMAIL PROTECTED]> writes:

>> Joey Hess wrote: > I think /etc/mtab is on its way out. A 2.4.x
>> kernel with devfs has a > /proc/mounts that actually has a
>> proper line for the root filesystem.  > Linking the two files
>> would probably actually work on such a system > without
>> breakage.
>> 
>> Does 2.4 now also include the information on which loop devices
>> are related to which filesystems? AFAIK that's the only thing
>> that went strange after linking /proc/mounts and /etc/mtab;
>> loop devices not being freed after unmounting.

No, not that I saw a change for it. How could it? Currently when
mounting a loop device, mount writes the filename that gets attached
to the loop device into /etc/mtab and then mounts /dev/loopX. Because
/etc/mtab is read-only, mount can't write the filename and thus doesn't
know what to detach when unmounting.

mount can't know the difference between
"mount -o loop file path"
and
"losetup /dev/loop0 file"
"mount /dev/loop0 path"

Maybe the mount or loopback interface could be changed to record that
the loop device has to be freed upon umount.

 > When doing this I had a problem with the mount programme
 > insisting on explicitly checking whether /etc/mtab were a
 > symlink and explicitly breaking if it were. Why is this?

Never had that problem.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Matt Zimmerman <[EMAIL PROTECTED]> writes:

 > On Sun, Jan 07, 2001 at 03:49:43PM +0100, Goswin Brederlow
 > wrote:
>> Actually the load should drop, providing the following feature
>> add ons: [...]

 > The load should drop from that induced by the current rsync
 > setup (for the mirrors), but if many, many more client start
 > using rsync (instead of FTP/HTTP), I think there will still be
 > a significant net increase in load.

 > Whether it would be enough to cause a problem is debatable, and
 > I honestly don't know either way.

When the checksums are cached there will be no CPU load caused by
rsync, since it will only transfer the file. And the checksum files
will be really small, as I said, so if some similarity is found the
reduction in data will more than make up for the checksum download.

The only increase is the space needed to store the checksums in some
form of cache.

MfG
Goswin




Re: RFDisscusion: Big Packages.gz and Statistics and Comparing solution

2001-01-07 Thread Goswin Brederlow
> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > Hi, [Sorry for the thread broken, my POP3 provider stopped.]
 > [Please Cc: me! <[EMAIL PROTECTED]>. Sorry! ;-)]

 > 1. RFDiscussion on big Packages.gz

 > 1.1. Some statistics

 > % grep-dctrl -P \
 >     -sPackage,Priority,Installed-Size,Version,Depends,Provides,Conflicts,Filename,Size,MD5sum \
 >     -r '.*' ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages \
 >     | gzip -9 > test.pkg.gz
 > % gzip -9 ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages
 > % ls -alF *.gz
 > -rw-r--r-- 1 zw zw 1157494 Jan 7 21:20 ftp.jp.debian.org_debian_dists_unstable_main_binary-i386_Packages.gz
 > -rw-r--r-- 1 zw zw  341407 Jan 7 21:23 test.pkg.gz

Ahh, what does it do? Just take out the descriptions?

 > This approach is simple and straight and almost compatible. But
 > could accpect 10K more packages come into Debian with little
 > loss. Worth consideration. IMHO.

 > Better, if `Description:' etc. could come into seperate gzipped
 > file along with the Debian package.

The problem is that people fairly often want to browse descriptions to
find a package, or just run "apt-cache show package" to see what a
package is about. So you need a method to download all descriptions.

Also, many small files compress far worse than one big file.
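That claim is easy to check with throwaway data (the files below are arbitrary stand-ins for per-package description files):

```shell
# Each separate gzip stream pays its own header and starts with an
# empty dictionary, so many small compressed files add up to more
# bytes than the same content compressed as one big file.
seq 1 200 | split -l 10 - desc_        # 20 small "description" files
separate=$(for f in desc_*; do gzip -c9 "$f"; done | wc -c)
together=$(cat desc_* | gzip -c9 | wc -c)
echo "separate: $separate bytes, together: $together bytes"
```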


 > 2. Compare with DIFF and RSYNC method of APT

 > 2.1. They need server support. (More than a directory layout
 > and client tool changing.)

As far as I can see, no server support is needed for rsync to operate
better on compressed files.

 > 2.2. If you don't update for a long time, DIFF won't
 > help. RSYNC help less.

If you update often, saving a byte every time is worth it. If you
update seldom, it doesn't really matter that you download a big
Packages.gz; you would have to download all the small Packages.gz
files as well.

And after that you download 500 MB of updates. So who cares about a
2 MB Packages.gz?

Also, diff and rsync do a great job even after a long time:

diff potato_Packages woody_Packages | gzip -9 | wc --bytes
 339831

% ls -l /debian/dists/woody/main/binary-i386/Packages.gz
-rw-r--r--1 mrvn mrvn   955259 Jan  6 21:03 
/debian/dists/woody/main/binary-i386/Packages.gz

So you see, between potato and woody diff saves about 60%.
Also note that rsync usually performs better than cvs, since it does
not include the to-be-removed lines in the download.

 > 3. Additional benefits

 > Seperate changelog.Debian and `Description:' etc. out into
 > meta-info file could help users: 1) reduce the bandwidth eaten
 > 2) help their upgrade decisions easily.

A global Description.gz might benefit from the fact that the
description doesn't change with each update, but the extra work needed
for this to really work is not worth it. It would only benefit people
that do daily mirroring, where rsync would do just as well.

MfG
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Marcus Brinkmann <[EMAIL PROTECTED]> writes:

 > On Sun, Jan 07, 2001 at 02:05:27AM -0500, Michael Stone wrote:
>> On Sun, Jan 07, 2001 at 04:25:43AM +0100, Marcus Brinkmann
>> wrote: > On Sun, Jan 07, 2001 at 03:28:46AM +0100, Goswin
>> Brederlow wrote: > > "tar -xIvvf file.tar.bz2" has been in use
>> under linux for over a year > > by pretty much everybody. Even
>> if the author never released it as > > stable, all linux
>> distributions did it. I think that should count > > something.
>> > > It tells a lot about the people making the distributions at
>> least.
>> 
>> Before making such snide comments, take a look at the
>> changelog.Debian entries relating to the switch from 1.13 to
>> 1.13.x.

 > I see. Well, I don't think that Bdale did something wrong with
 > including 1.13.x. But I find the reactions to the flag change
 > shown here by some people quite inappropriate. When using
 > unreleased software, people have to expect such changes,
 > especially for non-standard extensions. It happens all the
 > time.

On anything apart from Debian I wouldn't say a word about it.

BUT on Debian tar -I is a standard and it's stable. So I start
screaming. Since the Debian maintainer made -I stable with an unstable
upstream source, it's his responsibility to watch it.

It's the author's fault for not resolving the problem for so long and
then suddenly resolving it in such a disastrous way, but it is also the
Debian maintainer's fault not to warn us and ease our transition.

Fault might be too strong a word; I just mean that there should be a
new upload asap that either reverts the -I change or tells the user
about it. Having -I silently just do something else is not an option
in my eyes.

MfG
Goswin




Re: RFDisscusion: Big Packages.gz and Statistics and Comparing solution

2001-01-07 Thread Goswin Brederlow
>>>>> " " == zhaoway  <[EMAIL PROTECTED]> writes:

 > [A quick reply. And thanks for discuss with me! And no need to
 > Cc: me anymore, I updated my DB info.]

 > On Sun, Jan 07, 2001 at 05:51:26PM +0100, Goswin Brederlow
 > wrote:
>> The problem is that people want to browse descriptions to find
>> a package fairly often or just run "apt-cache show package" to
>> see what a package is about. So you need a method to download
>> all descriptions.

 > The big Packages.gz is still there. No conflict between the two
 > method.  And the newest, most updated information is always on
 > freshmeat.net. ;)

>> As far as I see theres no server support needed for rsync
>> support to operate better on compressed files.

 > Um, I don't know. But doesn't RSYNC need a server side RSYNC to
 > run?  Or, can I expect a HTTP server to provide RSYNC? (Maybe I
 > am stupid, I'll read RSYNC man page, later.)

Yes, either rsyncd or rshd/sshd needs to be running. But that's
already the case.

What I meant was that the new feature to uncompress archives before
rsyncing can (hopefully) be done without any changes to existing
servers and without unpacking on the server side. All old servers
should do fine. That's what I aim to achieve.

>> If you update often, saving 1 Byte every time is worth it. If
>> you update seldomely, it doesn't realy matter that you download
>> a big Packages.gz. You would have to downlaod all the small
>> Packages.gz files also.

 > There is an approach to help this. But that is another
 > story. Later.

>> So you see, between potato and woody diff saves about 60%.
>> Also note that rsync usually performs better than cvs, since it
>> does not include the to be removed lines in the download.

 > Pretty sounding argument. My only critic on DIFF or RSYNC now
 > is just server support now. (Again, I'll read RSYNC man page
 > later. ;-)

 > The point is, can a storage server which provides merely HTTP
 > and/or FTP service do the job for apt-get?

Nope, but rsync servers already exist. Time to push people to convert
their services by pushing users to use them.

Also think of the benefit when updating: with some extra code on the
client side (for example in apt) a pseudo deb can be created from the
installed version and then rsynced against the new version. You
wouldn't need a local mirror and you would still save a lot of
download.

Of course this all needs support in the rsync client for rsyncing
compressed archives in uncompressed form.

MfG
Goswin




apt maintainers dead?

2001-01-07 Thread Goswin Brederlow
Hi,

I tried to contact the apt maintainers about rsync support for
apt-get (a proof of concept was included) but haven't got an answer
back yet.

Is the whole team on vacation? Who is actually on that list?

From the number of bugs open against apt-get I would think they are
all dead. Please prove me wrong.

MfG
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
 > On Mon, Jan 08, 2001 at 12:12:59AM +1100, Sam Couter wrote:
>> Goswin Brederlow <[EMAIL PROTECTED]>
>> wrote: > Just as linux-centric as the other way is
>> solaris-centric.
>> 
>> Not true. There's the way GNU tar works, then there's the way
>> every other tar on the planet works (at least with respect to
>> the -I option). GNU tar is (used to be) the odd one out. Now
>> you're saying that not behaving like the odd man out is being
>> Solaris-centric? I don't think so.

I worked and still work on several platforms, and the first thing I
usually do is make them compatible:

compile bash, make, gcc, bash (again, to correct the stupid cc bugs),
make, automake, autoconf, zsh, xemacs, tar, gzip, bzip2, qvwm.

All the normal tools from commercial unixes are proprietary to their
system. The only way to standardise those is to use the free common
source of what we have under Linux.

So I say that what Debian uses can be the default on all unix systems.

Just my 2c.
Goswin




Re: tar -I incompatibility

2001-01-07 Thread Goswin Brederlow
> " " == Paul Eggert <[EMAIL PROTECTED]> writes:

>> Date: Sun, 7 Jan 2001 12:07:14 -0500 From: Michael Stone
>> <[EMAIL PROTECTED]>

>> I certainly hope that the debian version at least prevents
>> serious silent breakage by either reverting the change to -I
>> and printing a message that the option is deprecated or
>> removing the -I flag entirely.

 > Why would deprecating or removing the -I flag help prevent
 > serious silent breakage?  I would think that most people using
 > -I in the 1.13.17 sense would use it like this:

 > tar -xIf archive.tar

 > and this silently breaks in 1.13.18 only in the unlikely case
 > where "f" is a readable tar file.

% tar --version
tar (GNU tar) 1.13.18
Copyright 2000 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
% tar -cIvvf bla.tar.bz2 bla
tar: bla: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors
% mkdir bla
% tar -cIvvf bla.tar.bz2 bla
drwxr-xr-x mrvn/mrvn 0 2001-01-07 22:50:27 bla/
% file bla.tar.bz2  
bla.tar.bz2: GNU tar archive
% tar -tIvvf bla.tar.bz2 
drwxr-xr-x mrvn/mrvn 0 2001-01-07 22:50:27 bla/

As you see, -I is silently ignored, in violation of
/usr/share/doc/tar/NEWS.gz:

* The short name of the --bzip option has been changed to -j,
  and -I is now an alias for -T, for compatibility with Solaris tar.

That's part of the problem: people won't get any error message at the
moment. Everything looks fine until you compare the size, run file, or
try to bunzip2 the file manually.

As I said before, tar -I in its old usage should give one of several
errors, but doesn't. I can't remember the bug number, but it's in the
BTS.

 > I'm not entirely opposed to deprecating -I for a while -- but I
 > want to know why it's helpful to do this before installing such
 > a change.

If it's deprecated, people will get a message every time they use -I,
and cron jobs will generate a mail every time they run.
Just think how annoying a daily mail is and how fast people will change
the option.
BUT nothing will break, no data will be lost.

On the other hand, just changing the meaning or deleting the option
will result in severe breakage in 3rd party software, sometimes
without even giving a hint of the cause. You know how bad 3rd party
software can be. :)

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-07 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

 > It's commonly agreed that compression does prevent rsync from
 > profit of older versions of packages when synchronizing Debian
 > mirrors. All the discussion about fixing rsync to solve this,
 > even trough a deb-plugin is IMHO not the right way. Rsync's
 > task is to synchronize files without knowing what's inside.

 > So why not solve the compression problem at the root? Why not
 > try to change the compression in a way so it does produce a
 > compressed result with the same (or similar) difference rate as
 > the source?

 > As my understanding of compression goes, all have a kind of
 > lookup table at the beginning where all compression codes where
 > declared. Each time this table is created new, each time
 > slightly different than the previous one depending on the

Nope. Only a few compression programs use a table at the start of the
file. Most build the table as they go along; it saves a lot of
information not to copy the table.

gzip (I hope I remember that correctly) for example grows its table
with every character it encodes, so when you compress a file that
contains only 0s, the table will not contain any a's, so an a can't
even be encoded.

bzip2, on the other hand, re-sorts the input in some way to get better
compression ratios. You can't sort the input the same way with
different data; the compression rate would drop dramatically otherwise.

ppm, as a third example, builds a new table for every character that's
transferred and encodes the probability range of the real character in
one of the current contexts. And the contexts are based on all
previous characters. The first character will be plain text, and the
rest of the file will (most likely) differ if that character changes.
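The practical consequence for rsync can be shown with a toy file (this demonstration is mine, not from the thread): an insertion of two bytes at the front of the input, which rsync's rolling checksum would handle almost for free on the plain file, changes nearly the whole gzip output, because the adaptive model and bit alignment shift from the first symbol on.

```shell
# A two-byte insertion at the front of the input perturbs the Huffman
# tables and bit alignment of the compressed stream, so the two gzip
# outputs share almost nothing, and block matching finds no overlap.
seq 1 10000 > a
{ printf 'x\n'; cat a; } > b         # b = a plus two leading bytes
gzip -c9 a > a.gz
gzip -c9 b > b.gz
diffbytes=$(cmp -l a.gz b.gz 2>/dev/null | wc -l)
echo "compressed size: $(wc -c < a.gz), differing bytes: $diffbytes"
```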

 > source. So to get similar results when compressing means using
 > the same or at least an aquivalent lookup table.  If it would
 > be possible to feed the lookup table of the previous compressed
 > file to the new compression process, an equal or at least
 > similar compression could be achieved.

 > Of course using allways the same lookup table means a deceasing
 > of the compression rate. If there is an algorithmus which
 > compares the old rate with an optimal rate, even this could be
 > solved. This means a completly different compression from time
 > to time. All depends how easy an aquivalent lookup table could
 > be created without loosing to much of the compression rate.

Knowing the structure of the data can greatly increase the compression
ratio. Also knowing the structure can greatly reduce the differences
needed to sync two files.

So why should rsync stay stupid?

MfG
Goswin




Linux Gazette [Was: Re: big Packages.gz file]

2001-01-07 Thread Goswin Brederlow
> " " == Chris Gray <[EMAIL PROTECTED]> writes:

> Brian May writes:
> "zhaoway" == zhaoway  <[EMAIL PROTECTED]> writes:
zhaoway> 1) It prevent many more packages to come into Debian, for
zhaoway> example, Linux Gazette are now not present newest issues
zhaoway> in Debian. People occasionally got fucked up by packages

Any reason why the Linux Gazette is not present anymore?

And is there a virtual package for the Linux Gazette that always
depends on the newest version?

MfG
Goswin




Re: Drag-N-Drop Interface

2001-01-07 Thread Goswin Brederlow
> " " == Michelle Konzack <[EMAIL PROTECTED]> writes:

 > Hello and good evening.  Curently I am programing a new
 > All-In-One Mail-Client (for Windows- Changers ;-)) ) and I need
 > to program a Drag-N-Drop interface.

 > Please can anyone point me to the right resources ???  I
 > program in C.

Well, look at KDE and GNOME. AFAIK they share a common drag&drop
interface.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Brian May <[EMAIL PROTECTED]> writes:

>>>>> "Goswin" == Goswin Brederlow <[EMAIL PROTECTED]> writes:
Goswin> Actually the load should drop, providing the following
Goswin> feature add ons:

 > How does rproxy cope? Does it require a high load on the
 > server?  I suspect not, but need to check on this.

 > I think of rsync as just being a quick hack, rproxy is the
 > (long-term) direction we should be headed. rproxy is the same
 > as rsync, but based on the HTTP protocol, so it should be
 > possible (in theory) to integrate into programs like Squid,
 > Apache and Mozilla (or so the authors claim).  -- Brian May
 > <[EMAIL PROTECTED]>

URL?

Sounds more like encapsulation of an rsync-like protocol in HTTP, but
it's hard to tell from the few words you write. Could be interesting
though.

Anyway, it will not solve the problem with compressed files if it's
just like rsync.

MfG
Goswin




Re: apt maintainers dead?

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Goswin Brederlow wrote:

>> I tried to contact the apt maintainers about rsync support for
>> apt-get (a proof of concept was included) but haven't got an
>> answere back yet.

 > No, you are just ridiculously impatient.

 > Date: 06 Jan 2001 19:26:59 +0100 Subject: rsync support for apt

 > Date: 07 Jan 2001 22:42:02 +0100 Subject: apt maintainers dead?

 > Just a bit over 24 hours? Tsk Tsk.

Usually with people living worldwide someone is always reading their
mail, so an answer within minutes is possible.

 > The short answer is exactly what you should expect - No,
 > absolutely not.  Any emergence of a general rsync for APT

Then why did it take so long? :)

 > method will result in the immediate termination of public rsync
 > access to our servers.

I think that is something to be discussed. As I said before, I expect
rsync plus some added features to produce less load than FTP or HTTP.

Given that it doesn't need more resources than those two, is the
answer still no?

 > I have had discussions with the rproxy folks, and I feel that
 > they are currently the best hope for this sort of thing. If you
 > want to do something, then help them.

I'm still at the "designing technical details and specs" stage, so
anything is possible. I'll check rproxy out when I wake up again; I
hope I'll have a URL for it by then.

MfG
Goswin




Re: package pool and big Packages.gz file

2001-01-07 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Goswin Brederlow wrote:

>> Actually the load should drop, providing the following feature
>> add ons:
>> 
>> 1. cached checksums and pulling instead of pushing 2. client
>> side unpackging of compressed streams

 > Apparently reversing the direction of rsync infringes on a
 > patent.

When I rsync a file, rsync starts ssh to connect to the remote host
and starts rsync there in the reverse mode.

You say that the receiving end is violating a patent and the sending
end is not?

Hmm, which patent anyway?

So I have to fork an rsync-non-US because of a patent?

 > Plus there is the simple matter that the file listing and file
 > download features cannot be seperated. Doing a listing of all
 > files on our site is non-trivial.

I don't need to get a file listing, apt-get tells me the name. :)
Also I can do "rsync -v host::dir" and parse the output to grab the
actual files with another rsync. So file listing and downloading are
absolutely separable.

Doing a listing of all files probably results in a timeout anyway; the
hard drives are too slow.

 > Once you strip all that out you have rproxy.

 > Reversed checksums (with a detached checksum file) is something
 > someone should implement for debian-cd. You calud even quite
 > reasonably do that totally using HTTP and not run the risk of
 > rsync load at all.

At the moment the client calculates one rolling checksum and md5sum per
block.

The server, on the other hand, calculates the rolling checksum per
byte, and for each hit it calculates an md5sum for one block.

Given a 650MB file, I don't want to know the hit/miss ratios for the
rolling checksum and the md5sum. Must be really bad.

The smaller the file, the fewer wrong md5sums need to be calculated.

 > Such a system for Package files would also be acceptable I
 > think.

For Packages files even cvs -z9 would be fine. They are small compared
to the rest of the load, I would think.

But I, just as you do, think that it would be a really good idea to
have precalculated rolling checksums and md5sums, maybe even for
various block sizes, and let the client do the time-consuming guessing
and calculating. That would keep rsync from reading every file served
twice, as it does now when they are dissimilar.

May the Source be with you.
Goswin




Re: package pool and big Packages.gz file

2001-01-08 Thread Goswin Brederlow
>>>>> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 8 Jan 2001, Goswin Brederlow wrote:
 
>> I don't need to get a file listing, apt-get tells me the
>> name. :)

 > You have missed the point, the presence of the ability to do
 > file listings prevents the adoption of rsync servers with high
 > connection limits.

Then that feature should be limited to non-recursive listings or
turned off. Or .listing files should be created that are just served.

>> > Reversed checksums (with a detached checksum file) is
>> > something someone should implement for debian-cd. You could
>> > even quite reasonably do that totally using HTTP and not run
>> > the risk of rsync load at all.
>> 
>> At the moment the client calculates one rolling checksum and
>> md5sum per block.

 > I know how rsync works, and it uses MD4.

Oops, then s/5/4/g.

>> Given a 650MB file, I don't want to know the hit/miss ratios
>> for the rolling checksum and the md5sum. Must be really bad.

 > The ratio is supposed to only scale with block size, so it
 > should be the same for big files and small files (ignoring the
 > increase in block size with file size).  The amount of time
 > expended doing this calculation is not trivial however.

Hmm, in the technical paper it says that it creates a 16-bit external
hash table, each entry being a linked list of items containing the
full 32-bit rolling checksum (or the other 16 bits) and the md4sum.

So when you have more blocks, the hash fills up, you get more hits on
the first level, and you need to search a linked list. With a block
size of 1K, a CD image has 10 items per hash entry; it's 1000% full.
The time wasted just checking the rolling checksums must be huge.

And with ~650,000 rolling checksums for the image, there's a
~10/65536 chance of hitting the same checksum with a different
md4sum, so that's about 100 times per CD, just by pure chance.

If the images match, then it's ~650,000 times.

So the better the match, and the more blocks you have, the more cpu
it takes. Of course larger blocks take more time to compute an md4sum
over, but you will have fewer blocks then.

 > For CD images the concern is of course available disk
 > bandwidth, reversed checksums eliminate that bottleneck.

That anyway. And RAM.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Jason Gunthorpe <[EMAIL PROTECTED]> writes:

 > On 7 Jan 2001, Bdale Garbee wrote:

>> > gzip --rsyncable, already implemented, ask Rusty Russell.
>> 
>> I have a copy of Rusty's patch, but have not applied it since I
>> don't like diverging Debian packages from upstream this way.
>> Wichert, have you or Rusty or anyone taken this up with the
>> gzip upstream maintainer?

 > Has anyone checked out what the size hit is, and how well
 > ryncing debs like this performs in actual use? A study using
 > xdelta on rsyncable debs would be quite nice to see. I recall
 > that the results of xdelta on the uncompressed data were not
 > that great.

That might be a problem with xdelta; I heard it's pretty ineffective.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>>> So why not solve the compression problem at the root? Why not
>>> try to change the compression in a way so it does produce a
>>> compressed result with the same (or similar) difference rate
>>> as the source?
>>  Are you going to hack at *every* different kind of file format
>> that you might ever want to rsync, to make it rsync friendly?
>> 
 > No, I want rsync not even to be mentioned. All I want is
 > something similar to

 > gzip --compress-like=old-foo foo

AFAIK that's NOT possible with gzip. Same with bzip2.

 > where foo will be compressed as old-foo was, or as equivalent as
 > possible. Gzip does not need to know anything about foo except
 > how it was compressed. The switch "--compress-like" could be
 > added to any compression algorithm (bzip?) as long as it's
 > easy to retrieve the compression scheme. Besides, the following
 > is completely legal but probably not very sensible

 > gzip --compress-like=foo bar

 > where bar will be compressed as foo even if they might be
 > totally unrelated.

 > Rsync-ing Debian packages will certainly take advantage of this
 > solution but the solution itself is 100% pure compression
 > specific. Anything which needs identical compression could
 > profit from this switch. It's up to profiting application to
 > provide the necessary wrapper around.

>> gzip --rsyncable, already implemented, ask Rusty Russell.

 > The --rsyncable switch might yield the same result (I haven't
 > checked it so far) but will need some internal knowledge how to
 > determine the old compression.

As far as I understand the patch, it forces gzip to compress the file
in chunks of 8K. So every 8K there's a break where rsync can try to
match blocks. It seems to help somewhat, but I think it handles
movement of data in a file badly (like when a line is inserted).

 > As I read my mail again the syntax for "compressing like" could
 > be

 > gzip --compress=foo bar

 > where bar is compressed as foo was. Foo is of course a
 > compressed file (how else could the compression be retrieved)
 > while bar is not.

I wish it were that simple.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == Andrew Lenharth <[EMAIL PROTECTED]> writes:

 > What is better and easier is to ensure that the compression is
 > deterministic (gzip by default is not, bzip2 seems to be), so
 > that rsync can decompress, rsync, compress, and get the exact
 > file back on the other side.

gzip encodes timestamps, which makes identical files seem to be
different when compressed.

Given the same file with the same timestamp, gzip should always
generate an identical file.

Of course, that also depends on the options used.
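Python's gzip module makes the timestamp effect easy to demonstrate (the mtime values here are arbitrary):

```python
import gzip

data = b"identical file contents\n" * 100

# Same input, two different header timestamps: the outputs differ.
v1 = gzip.compress(data, mtime=979000000)
v2 = gzip.compress(data, mtime=979086400)
assert v1 != v2

# Pin the timestamp (gzip -n does the same on the command line)
# and the output becomes reproducible.
v3 = gzip.compress(data, mtime=0)
v4 = gzip.compress(data, mtime=0)
assert v3 == v4
assert gzip.decompress(v3) == data
```

So for rsync purposes, compressing with `gzip -n` (no name, no timestamp) is the deterministic mode.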

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-08 Thread Goswin Brederlow
> " " == John O Sullivan <[EMAIL PROTECTED]> writes:

 > There were a few discussions on the rsync mailing lists about
 > how to handle compressed files, specifically .debs. I'd like to
 > see some way of handling it better, but I don't think it'll
 > happen at the rsync end. Reasons include higher server cpu load
 > to (de)compress every file that is transferred and problems
 > related to different compression rates. See these links for
 > more info:
 > http://lists.samba.org/pipermail/rsync/1999-October/001403.html

Did you read my proposal from a few days back? That should do the
trick: it works without unpacking on the server side and actually
reduces the load on the server, because the server can then cache the
checksums, i.e. calculate them once and reuse them every time.

MfG
Goswin




Re: Debian unstable tar incompatible with 1.13.x?

2001-01-08 Thread Goswin Brederlow
> " " == safemode  <[EMAIL PROTECTED]> writes:

 > I have used tar with gzip and bzip2 in debian unstable and in
 > each case users who use older versions of tar ( like 1.13.11 )
 > were unable to decompress it.

Well, bzip2 is known. Just doesn't work anymore (see the big flamewar
here :)

 > [49: huff+mtf rt+rld]data integrity (CRC) error in data

That's strange. Do you have a small example as tar and tar.gz files?
Best would be the same data, once with the old and once with the new
tar.

 > and such error messages like that .  This troubles me greatly.
 > Any info about this?  I'm using debian unstable's current tar
 > and bzip2 and gzip to make the tarballs.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-09 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>> > gzip --compress-like=old-foo foo
>> 
>> AFAIK thats NOT possible with gzip. Same with bzip2.
>> 
 > Why not.

gzip creates a dictionary (that gets really large) of strings that
are used, and encodes references to them. At the start the dictionary
is empty, so the first char is pretty much unencoded and inserted
into the dictionary. The next char is encoded using the first one,
and so on. That way, longer and longer strings enter the dictionary.

Every sequence of bytes creates a unique (maybe not unique, but
pretty much so) dictionary that can be completely reconstructed from
the compressed data. Given the dictionary after the first n
characters, the n+1st character can be decoded and the next
dictionary can be calculated.

I think it's pretty much impossible to find two files resulting in
the same dictionary. It certainly is impossible at the speed we need.

You cannot encode two arbitrary files with the same dictionary, not
without adding the dictionary to the file, which gzip does not do
(since it's a waste).

So, as you see, that method is not possible.
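The "change one char and everything after it changes" behaviour is easy to see with zlib, which implements the same deflate algorithm gzip uses:

```python
import zlib

a = b"the quick brown fox jumps over the lazy dog\n" * 50
b = b"The" + a[3:]  # flip the case of a single early character

ca = zlib.compress(a, 9)
cb = zlib.compress(b, 9)

# One changed input byte alters symbol frequencies and back-references,
# so the compressed streams differ from near the start -- there is no
# shared static table a --compress-like switch could simply reuse.
assert len(a) == len(b)
assert ca != cb
```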

But there is a little optimisation that can be used (and is used by
the --rsyncable patch):

If the dictionary gets too big, the compression ratio drops. It
becomes ineffective. Then gzip flushes the dictionary and starts
again with an empty one.

The --rsyncable patch now changes the moments when that will
happen. It looks for blocks of bytes that have a certain rolling
checksum, and when it matches it flushes the dictionary. Most likely,
two similar files will therefore flush the dictionary at exactly the
same places. If two files are equal after such a flush, the data will
be encoded the same way and rsync can match those blocks.

The author claims that it takes about 0.1-0.2% more space for
rsyncable gzip files, which is a loss I think everybody is willing to
pay.
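The re-synchronising effect can be sketched with a content-defined boundary rule; the window size and trigger condition below are illustrative, not the constants from Rusty Russell's actual patch:

```python
import random

WINDOW = 4096

def chunk_boundaries(data: bytes, window: int = WINDOW) -> list:
    """Offsets where an rsyncable-style flush would occur: the rolling
    sum over the last `window` bytes hits a magic value (0 mod window)."""
    boundaries = []
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= window:
            rolling -= data[i - window]  # drop the byte leaving the window
        if i >= window and rolling % window == 0:
            boundaries.append(i + 1)
    return boundaries

rng = random.Random(0)
a = bytes(rng.randrange(256) for _ in range(200_000))
b = b"inserted!" + a  # shift everything by nine bytes

ba = chunk_boundaries(a)
bb = chunk_boundaries(b)

# Because each boundary depends only on the local window, every
# boundary of `a` reappears in `b`, just shifted by the insertion.
shifted = {x + len(b"inserted!") for x in ba}
assert shifted <= set(bb)
```

That is exactly why inserted data only perturbs compression until the next flush point instead of desynchronising the whole rest of the file.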

>> I wish it were that simple.
>> 
 > I'm not saying it's simple, I'm saying it's possible. I'm not a
 > compression specialist, but from the theory there is nothing
 > which prevents this except the actual implementation.

 > Maybe it's time to design a compression algorithm which has
 > this functionality (same difference rate as the source) from
 > the ground up.

There are such algorithms, but they either always use the same
dictionary or table (like some i386.exe runtime compressors that are
specialised to the patterns used in opcodes), or they waste space by
adding the dictionary/table to the compressed file. That's a huge
waste with all the small diff files we have.


The --rsyncable patch looks promising for a start and will greatly
reduce the downloads for source mirrors, if it's used.

MfG
Goswin




Re: big Packages.gz file

2001-01-09 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

> "sluncho" == sluncho  <[EMAIL PROTECTED]> writes:
sluncho> How hard would it be to make daily diffs of the Package
sluncho> file? Most people running unstable update every other day
sluncho> and this will require downloading and applying only a
sluncho> couple of diff files.

sluncho> The whole process can be easily automated.

 > Sounds remarkably like the process (weekly, not daily, though)
 > used to distribute Fidonet nodelist diffs. Also similar to
 > kernel diffs, I guess.

 > Seems a good idea to me (until better solutions like rproxy are
 > better implemented), but you have to be careful not to apply
 > diffs in the wrong order.  -- Brian May <[EMAIL PROTECTED]>

Or missing one, or having a corrupted file to begin with, or any
other of 1000 possibilities.

Also, mirrors will always lag behind, have erratic timestamps on
those files, and so on. I think it would become a mess pretty soon.

The nice thing about rsync is that it's self-repairing. It's also
more efficient than a normal diff.

MfG
Goswin




Re: Solving the compression dilema when rsync-ing Debian versions

2001-01-09 Thread Goswin Brederlow
> " " == Otto Wyss <[EMAIL PROTECTED]> writes:

>> > gzip --compress-like=old-foo foo
>> >
>> > where foo will be compressed as old-foo was or as equivalent
>> > as possible. Gzip does not need to know anything about foo
>> > except how it was compressed. The switch "--compress-like"
>> > could be added to any compression algorithm (bzip?) as long
>> > as it's easy to retrieve the
>> 
>> No, this won't work with very many compression algorithms.
>> Most algorithms update their dictionaries/probability tables
>> dynamically based on input.  There isn't just one static table
>> that could be used for another file, since the table is
>> automatically updated after every (or near every) transmitted
>> or decoded symbol.  Further, the algorithms start with blank
>> tables on both ends (compression and decompression), the
>> algorithm doesn't transmit the tables (which can be quite large
>> for higher order statistical models).
>> 
 > Well the table is perfectly static when the compression
 > ends. Even if the table isn't transmitted itself, its
 > information is contained in the compressed file, otherwise the
 > file couldn't be decompressed either.

Yes, THEY are. Most of the time each character is encoded by its own
table, which is constructed out of all the characters encoded or
decoded before. The tables are static but 100% dependent on the
data. Change one char and all later tables change (except when gzip
cleans the dictionary; see my other mail).

MfG
Goswin




Re: big Packages.gz file

2001-01-10 Thread Goswin Brederlow
> " " == Brian May <[EMAIL PROTECTED]> writes:

> "zhaoway" == zhaoway  <[EMAIL PROTECTED]> writes:
zhaoway> This is only a small part of the whole story, IMHO. See
zhaoway> my other email replying you. ;)

>>> Maybe there could be another version of Packages.gz without
>>> the extended descriptions -- I imagine they would take
>>> something like 33% of the Packages file, in line count at
>>> least.

zhaoway> Exactly. DIFF or RSYNC method of APT (as Goswin pointed
zhaoway> out), or just separate Descriptions out (as I pointed out
zhaoway> and you got it too), nearly 66% of the bits are
zhaoway> saved. But this is only a hack, albeit efficient.

 > At the risk of getting flamed, I investigated the possibility
 > of writing an apt-get method to support rsync. I would use this
 > to access an already existing private mirror, and not the main
 > Debian archive. Hence the server load issue is not a
 > problem. The only problem I have is downloading several megs of
 > index files every time I want to install a new package (often
 > under 100kb) from unstable, over a volume charged 28.8 kbps PPP
 > link, using apt-get[1].

I tried the same, but I used the copy method as a template, which was
rather bad. I should have used http as the starting point.

Can you send me your patch, please?

 > I think (if I understand correctly) that I found three problems
 > with the design of apt-get:

 > 1. It tries to down-load the compressed Packages file, and has
 > no way to override it with the uncompressed file. I filed a bug
 > report against apt-get on this, as I believe this will also be
 > a problem with protocols like rproxy too.

 > 2. apt-get tries to be smart and passes the method a
 > destination file name that is only a temporary file, and not
 > the final file. Hence, rsync cannot make a comparison between
 > local and remote versions of the file.

I wrote to the deity mailing list about those two problems, with two
possible solutions. So far the only answer I got was "NO, we don't
want rsync", after pressing the issue here on debian-devel.

 > 3. Instead, rsync creates its own temporary file while
 > downloading, so apt-get cannot display the progress of the
 > download operation because as far as it is concerned the
 > destination file is still empty.

Hmm, isn't there an informational message you can output to hint at
the progress? We would have to patch rsync to generate that style of
progress output, or fork and parse the output of rsync and pass on
altered output.

 > I think the only way to fix both 2 and 3 is to allow some
 > coordination between apt-get and rsync where to put the
 > temporary file and where to find the previous version of the
 > file.

Doing some more thinking, I like the second solution to the problem
more and more:

1. Include a template (some file that apt-get thinks matches best) in
the fetch request. The rsync method can then copy that file to the
destination and rsync on it. This would be the uncompressed Packages
file, a previous deb, or the old source.

2. Return whether the file is compressed or not simply by passing
back the destination filename with the appropriate extension (.gz),
so the destination filename is altered to reflect the file format.

MfG
Goswin




469 packages still using dh_undocumented, check if one is yours

2003-07-03 Thread Goswin Brederlow
Hi,

I came across some sources still using dh_undocumented, so I did a
quick search through sid's *.diff.gz files. Here is the result:

find -name "*diff.gz" | xargs zgrep ':+[[:space:]]*dh_undocumented' \
| cut -f 1 -d"_" | sort -u | cut -f6- -d"/"

./dists/potato/main/source/devel/fda
./dists/potato/main/source/libs/libgd-gif
./dists/potato/main/source/otherosfs/lpkg
./dists/potato/main/source/web/cern-httpd

3dwm

acfax acorn-fdisk adns adolc affiche alsaplayer am-utils amrita anthy
antiword apcupsd-devel apcupsd aprsd aprsdigi argus-client ascdc asd4
aspseek at-spi atlas august ax25-apps ax25-tools

bayonne bbmail bbpal bbsload bbtime bind binutils-avr bird bitcollider
blackbook bnc bnlib bonobo-activation bonobo-conf bonobo-config bonobo
bookview brahms bwbar

cam camstream canna capi4hylafax catalog cdrtoaster cfengine cftp
chasen checkmp3 chipmunk-log chrpath clanbomber clips codebreaker
console-tools cooledit coriander corkscrew courier cpcieject crack ctn
cutils cyrus-sasl

dancer-services db2 db3 db4.0 dcgui dcl dctc dds2tar dia2sql directfb
directory-administrator dotconf drscheme

eb eblook elastic electric elvis epwutil erlang ethstats evolution
ewipe ezpublish

fcron fidelio firebird flink fluxbox fnord fort freewnn ftape-tools
funny-manpages

gaby gasql gato gbase gbiff gcc-2.95 gcc-2.96 gcc-3.0 gcc-3.2 gcc-3.3
gcc-avr gcc-snapshot gconf-editor gconf gconf2 gcrontab gdis gdkxft
geda-doc geda-symbols gerris gimp1.2 gkrellm-newsticker
gkrellm-reminder glade-2 glbiff glib2.0 gnet gnome-db gnome-db2
gnome-doc-tools gnome-gv gnome-libs gnome-think gnome-vfs gnome-vfs2
gnomesword gnu-smalltalk gnudip gnumail goo gpa gpgme gpgme0.4 gpm
gpppkill gpsdrive gscanbus gstalker gtk+2.0 gtk-menu gtkglarea
gtkmathview gtkwave guile-core gwave

hamfax happy hesiod hmake hns2 htmlheadline hubcot hwtools

i2e ibcs ic35link icebreaker iceconf icom inews intltool iog ipv6calc
ipxripd ircp

jabber jack-audio-connection-kit jags jailtool jenova jfbterm jigdo
jless jlint jpilot junit-freenet

kakasi kdrill kfocus kon2 krb4 krb5 ksocrat

lablgtk lam lesstif1-1 lineakconfig linpqa lirc lm-batmon lm-sensors
libapache-mod-dav libax25 libbit-vector-perl libcap libcommoncpp2
libctl libdate-calc-perl libdbi-perl libgda2 libglpng libgnomedb
libgpio libjconv liblog-agent-logger-perl liblog-agent-perl
liblog-agent-rotate-perl libmng libnet-daemon-perl libnet-rawip-perl
libplrpc-perl libpod-pom-perl libprelude libproc-process-perl
libprpc-perl libquicktime librep libsdl-erlang libsigc++-1.1
libsigc++-1.2 libsigc++ libsmi libstroke libtabe libunicode libusb
libxbase

mailscanner makedev manderlbot mbrowse mdk medusa meshio mew mgetty
midentd mii-diag mingw32-binutils mingw32 mixer.app mmenu mnogosearch
mobilemesh mondo mosix motion mova mozilla-snapshot mozilla mp3blaster
mp3info mpatrol mped mpqc mtools mtrack multi-gnome-terminal
multiticker murasaki muse mwavem

namazu2 ncpfs net-snmp netcast netjuke nictools-nopci nmap notifyme
nte nvi-m17n

oaf obexftp ocaml octave2.1 oftc-hybrid oo2c openafs-krb5 openafs
opendchub opengate openh323gk openmash openmosix opensp openssl
overkill

pam pango1.0 parted passivetex pccts pdnsd peacock phaseshift phototk
pike pike7.2 pike7.4 pilot-link pimppa pinball pkf plum pong poppassd
postilion powertweak ppxp-applet ppxp progsreiserfs pronto ptex-bin
pybliographer pymol python-4suite python-stats python-xml-0.6
python2.1 python2.2 python2.3

qemacs qhull qm qstat quadra quota

radiusclient radiusd-livingston rblcheck rdtool read-edid
realtimebattle remem rfb rplay rubyunit rumba-manifold rumba-utils
rumbagui rumbaview

samhain sanitizer scandetd scanmail scite scrollkeeper scsitools
search-ccsb sg-utils sg3-utils shadow shapetools sidplay-base skkfep
smail sml-mode sms-pl sn snort soap-lite socks4-server sonicmail
sortmail soup soup2 sourcenav spass speech-tools speedy-cgi-perl
spidermonkey spong squidguard squidtaild stegdetect stopafter superd
sympa syscalltrack

tclex tclx8.2 tclx8.3 tcpquota tdb tetrinetx tex-guy texfam tik
tintin++ titrax tix8.1 tkisem tkpaint tkvnc tolua torch-examples
tptime tramp tsocks

ucd-snmp ucspi-proxy umodpack unixodbc usbmgr uw-imap

vdkxdb vdkxdb2 verilog vflib2 vflib3 vgagamespack vipec vtcl

w3cam webalizer webbase wine wings3d wmcdplay wmmon wmtime wwl

xbvl xclass xdb xdvik-ja xemacs21 xevil xfce xfree86 xfree86v3 xfreecd
xfs-xtt xirssi xitalk xkbset xlife xmacro xmms xnc xotcl xpa xpdf xpm
xracer xscorch xsmc-calc xstroke xsysinfo

zebra zmailer




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
Joey Hess <[EMAIL PROTECTED]> writes:

> Artur R. Czechowski wrote:
> > OTOH, maybe dh_undocumented should be removed from debhelper with prior
> > notice? "This program does nothing and should no longer be used."
> 
> As a rule I try to avoid causing less than 469 FTBFS bugs with any given
> change I make to debhelper. I have removed programs when as many as
> three packages still used them, after appropriate bug reports and a two
> month grace period.

Could the dh_undocumented program always fail with an error ("Don't
use me") as the next step? That way all new uploads would be forced
to care.

MfG
Goswin




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
Bernd Eckenfels <[EMAIL PROTECTED]> writes:

> On Sat, Jul 05, 2003 at 04:43:56PM +0200, Goswin Brederlow wrote:
> > Could the dh_undocumented program always fail with an error ("Don't
> > use me") as the next step? That way all new uploads would be forced
> > to care.
> 
> this will still create fail to build bugs for no good reason.

Some sort of "Hey you, you're doing something wrong but I will let it
pass this time" feedback from the autobuilders would be nice,
together with keeping such nearly-broken packages out of testing (to
give maintainers some incentive to fix the problem).

But that probably goes too far.

MfG
Goswin




Re: Resolvconf -- a package to manage /etc/resolv.conf

2003-07-05 Thread Goswin Brederlow
Thomas Hood <[EMAIL PROTECTED]> writes:

> Summary
> ~~~
> Resolvconf is a proposed standard framework for updating the
> system's information about currently available nameservers.
> 
> Most importantly, it manages /etc/resolv.conf , but it does 
> a bit more than that.

You should think of a mechanism for daemons to get notified about
changes in resolv.conf, like providing a function to register a
script and a list of arguments (such as the PID of the program to
notify). Whenever resolv.conf changes, all currently registered
scripts would be called with their respective arguments.

The simplest form would be:

resolv.conf-register /etc/init.d/squid reload

That would make squid reload its config each time a nameserver is
added or removed.

MfG,
Goswin




Re: 469 packages still using dh_undocumented, check if one is yours

2003-07-05 Thread Goswin Brederlow
"Artur R. Czechowski" <[EMAIL PROTECTED]> writes:

> On Sat, Jul 05, 2003 at 11:26:44AM -0400, Joey Hess wrote:
> > Goswin Brederlow wrote:
> > > Could the dh_undocumented program always fail with an error ("Don't
> > > use me") as the next step? That way all new uploads would be forced
> > > to care.
> > No. Breaking 400+ packages so our users cannot build them from source is
> > unacceptable.
> What about dh_undocumented looking like this:
> --
> #!/bin/bash
> if [ "$FORCE_UNDOCUMENTED" = 1 ]; then
>   echo "You are still using dh_undocumented, which is obsolete."
>   echo "Stop it."
> else
>   echo "You are using the obsolete dh_undocumented in your debian/rules."
>   echo "Please stop it and prepare a manpage for your package."
>   echo "If you really want to build this package, read (pointer to"
>   echo "documentation which explains how to set FORCE_UNDOCUMENTED, or how"
>   echo "to remove this from debian/rules and why using dh_undocumented is bad)."
>   exit 1
> fi
> --
> 
> Pro:
>   - it is possible to build the package with buildd or any other autobuilder
>   - a human building the package can force it to build too
>   - it requires interaction from the developer, but this interaction is not
> time consuming
> 
> This is a good compromise between technical and social means to achieve a
> goal.

I would have reversed it: use "$FAIL_UNDOCUMENTED" and have the
autobuilders set that.

Unknowing users aren't bothered and old sources still compile, but
new uploads are forced to handle the issue.

But Joey stopped reading this, so nothing will happen. EOD.

MfG
Goswin




Re: Debconf and XFree86 X servers

2003-07-05 Thread Goswin Brederlow
Branden Robinson <[EMAIL PROTECTED]> writes:

> [Please direct any XFree86-specific followup to debian-x.]
> 
> On Sat, Jul 05, 2003 at 08:46:00AM -0400, Theodore Ts'o wrote:
> > Yet another reasons for wanting to decouple installation and
> > configuration is if some hardware company (such as VA^H^H Emperor
> > Linux) wishes to ship Debian pre-installed on the system.  In that
> > case, installation happens at the factory, and not when the user
> > receives it in his/her hot little hands.

So they should just provide a "setup.sh" script that calls
dpkg-reconfigure for the relevant packages again.

Otherwise just type "dpkg-reconfigure --all" and spend hours
configuring your system as much as you like.

MfG
Goswin




Re: A success story with apt and rsync

2003-07-05 Thread Goswin Brederlow
Koblinger Egmont <[EMAIL PROTECTED]> writes:

> Hi,
> 
> From time to time the question arises on different forums whether it is
> possible to efficiently use rsync with apt-get. Recently there has been a
> thread here on debian-devel and it was also mentioned in Debian Weekly News
> June 24th, 2003. However, I only saw different small parts of a huge and
> complex problem set discussed at different places; I haven't found an
> overview of the whole situation anywhere.
...

I worked on an rsync patch for apt-get some years ago and raised some
design questions, some the same as you did in the deleted parts.
Let's summarize what I still remember:

1. debs are gzipped, so any change (even a change in time) results in
a different gzip. The rsyncable patch for gzip helps a lot there, so
let's consider that fixed.

2. most of the time you have no old file to rsync against. Only
mirrors will have an old file, and they already use rsync.

3. rsyncing against the previous version is only possible via some
dirty hack as an apt module. apt would have to be changed to give
modules access to its cache structure, or at least pass any previous
version as an argument. Some mirror scripts already use older
versions as templates for new versions.

4. (and this is the knockout) rsync support for apt-get is NOT
wanted. rsync uses too many resources (cpu and, more relevant, IO) on
the server side, and widespread use of rsync for apt-get would choke
the rsync mirrors and do more harm than good.

> conclusion
> --
> 
> The good news is that it is working perfectly.
> 
> The bad news is that you can't hack it on your home computer as long as your
> distribution doesn't provide rsync-friendly packages. Maybe one could set up
> a public rsync server with high bandwidth that keeps syncing the official
> packages and repacks them with rsync-friendly gzip/zlib and sorting the
> files.

There is a growing lobby to use gzip --rsyncable for Debian packages
by default. It's coming.


So what can be done?


Doogie is thinking about extending the BitTorrent protocol for use as
an apt-get method. I talked with him on irc about some design ideas,
and so far it looks really good, if he can get some mirrors to host
it.

The BitTorrent protocol organises multiple downloaders so that they
also upload to each other, thereby reducing the traffic on the main
server. The extension of the protocol should also utilise http/ftp
mirrors as sources for the files, spreading the load evenly over
multiple servers.

BitTorrent calculates a hash for each block of a file, very similar
to what rsync needs to work. Via another small extension, rolling
checksums for each block could be included in the protocol, and a
client-side rsync could be done. (I heard this variant of rsync is
supposedly patented in the US, but I never saw real proof of it.)

All together, I think an extended BitTorrent module for apt-get is by
far the better solution, but it will take some more time and design
work before it can be implemented.

MfG
Goswin




Re: Resolvconf -- a package to manage /etc/resolv.conf

2003-07-07 Thread Goswin Brederlow
Thomas Hood <[EMAIL PROTECTED]> writes:

> On Sun, 2003-07-06 at 01:00, Goswin Brederlow wrote:
> > You should think of a mechanism for daemons to get notified about
> > changes in resolv.conf.
> 
> There is already such a mechanism.  See below.
> 
> > Like providing a function to register a script
> > and a list of arguments (like the PID of the program to
> > notify). Whenever the resolv.conf changes all currently registered
> > scripts would be called with their respective arguments.
> > 
> > The simplest form would be:
> > 
> > resolv.conf-register /etc/init.d/squid reload
> > 
> > That would make squid to reload its config each time a nameserver is
> > added or removed.
> 
> Currently, scripts in /etc/resolvconf/update.d/ get run when
> resolver information changes.  So, would it suffice to create
> /etc/resolvconf/update.d/squid containing the following?
> #!/bin/sh
> /etc/init.d/squid reload
> 
> --
> Thomas Hood

Great.

MfG
Goswin




Re: A success story with apt and rsync

2003-07-07 Thread Goswin Brederlow
Michael Karcher <[EMAIL PROTECTED]> writes:

> On Sun, Jul 06, 2003 at 01:29:06AM +0200, Andrew Suffield wrote:
> > It should put them in the package in the order they came from
> > readdir(), which will depend on the filesystem. This is normally the
> > order in which they were created,
> As long as the file system uses an inefficient approach for directories like
> the ext2/ext3 linked lists. If directories are hash tables (like on
> reiserfs) even creating another file in the same directory may totally mess
> up the order.
> 
> Michael Karcher

ext2/ext3 has hashed dirs too if you configure it.

MfG
Goswin




OT (Re: Excessive wait for DAM - something needs to be done)

2003-08-05 Thread Goswin Brederlow
Martin Michlmayr - Debian Project Leader <[EMAIL PROTECTED]> writes:

> * Jamin W. Collins <[EMAIL PROTECTED]> [2003-07-21 18:52]:
> > Perhaps that is because only the DPL can appoint them (as far as I can
> > tell) and we haven't seen a request from you for them.
> 
> Requests for help are usually very ineffective; examples: apt's
> maintainer asked for help and didn't get any (with the exception that
> mdz has started more apt work, but he worked on apt before, so
> effectively there are no new volunteers). Bdale is looking for a
> co-maintainer for ntp, as is md for mutt.

Mails with suggestions and offers to help (and I especially mean apt
here :( ) also go unanswered, and patches for bugs or improvements
just get overlooked and never included.

Shit happens. Don't stop trying.

MfG
Goswin




Re: Excessive wait for DAM - something needs to be done

2003-08-05 Thread Goswin Brederlow
"Dwayne C. Litzenberger" <[EMAIL PROTECTED]> writes:

> On Sun, Jul 13, 2003 at 01:09:47PM -0600, Jamin W. Collins wrote:
> > 2001-01-24 - Dwayne Litzenberger <[EMAIL PROTECTED]>
> >http://nm.debian.org/nmstatus.php?email=dlitz%40dlitz.net
> 
> For the record, I'm still interested in becoming a DD.
> 
> Nice to see this finally being addressed!  Thanks!

http://nm.debian.org/nmstatus.php?email=brederlo%40informatik.uni-tuebingen.de

Received application: 2000-08-25

Still waiting too.

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
Steve Langasek <[EMAIL PROTECTED]> writes:

> On Mon, Jul 21, 2003 at 10:17:24AM -0400, Nathanael Nerode wrote:
> 
> > Martin Schulze is listed as the other DAM member.  He's also the Press 
> > Contact, so I certainly hope he has good communication skills!
> 
> And the Stable Release Manager, and a member of the Security Team, and
> a member of debian-admin.  What makes you think he would give a higher
> priority to DAM work than James currently does?
> 
> Actually, given that Joey is already listed as part of DAM, and isn't
> actively involved, doesn't this suggest he already gives a lower
> priority to this work?

As far as I've heard, Martin is only there in case the DAM dies. He won't
create or delete accounts on his own while the DAM is still breathing
(or thought to be).

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
[EMAIL PROTECTED] (Nathanael Nerode) writes:

> Steve Langasek said:
> >I don't think it irrelevant that those clamouring loudest for the DPL
> >to do something to fix the situation are people who don't actually have
> >a say in the outcome of DPL elections.  While I'm not happy to see such
> >long DAM wait times, I'm also not volunteering to take on the thankless
> >job myself.
> 
> No, it's not irrelevant.  It means precisely that Debian is in danger of 
> becoming an unresponsive, closed group which does not admit new people.  
> If this continues for, say, 2 more years, I would expect a new Project 
> to be formed, replicating what Debian is doing, but admitting new 
> people.  I'd probably be right there starting it.
> 
> That would be a stupid waste of effort, so I hope it turns out to be 
> unnecessary.

I know of several DDs and non-DDs thinking about creating a Debian2 (or
whatever it would be named) project due to this and other lack-of-response
problems, and the group is growing. The danger is already there and
should not be ignored.

MfG
Goswin




Re: NM non-process

2003-08-05 Thread Goswin Brederlow
Kalle Kivimaa <[EMAIL PROTECTED]> writes:

> Roland Mas <[EMAIL PROTECTED]> writes:
> > with.  The MIA problem is significant enough that NM might be the only
> > way to tackle with it seriously.  That means taking time to examine
> > applications.
> 
> BTW, has anybody done any research into what types of package
> maintainers tend to go MIA? I would be especially interested in a
> percentage of "old" style DD's, DD's who have gone through the NM
> process, people going MIA while in the NM queue, and people going MIA
> without ever even entering the NM queue. I'll try to do the statistics
> myself if nobody has done it before.

And how many NMs go MIA because they are still stuck in the NM queue
after years? Should we ask them? :)

MfG
Goswin




Re: Request for maintainer

2003-08-05 Thread Goswin Brederlow
[EMAIL PROTECTED] writes:

> Hi!
> 
> Subject: RFP: GRubik -- A 3D Rubik cube game
> Package: wnpp
> Version: N/A; reported 2003-08-06
> Severity: wishlist
> 
> * Package name: GRubik
>   Version : 1.16
>   Upstream Author : John Darrington <[EMAIL PROTECTED]>
> * URL : http://www.freesoftware.fsf.org/rubik/grubik.html
> * License : GPL-2
>   Description : A 3D Rubik cube game
> 
> This is an OpenGL / GTK+ package which I have written.  I've had
> positive feedback from the people who've looked at it so far. It's fully
> internationalised, has a number (5?) localisations.
> 
> I'm not a DD, and am not currently able to commit to being one.
> However if a DD wants to become a maintainer of this package I will
> co-operate with him/her (eg upstream patches where needed). 
> 
> I have created an unofficial deb for this software, and you can get it
> from my website http://darrington.wattle.id.au/deb if you want to use
> that as a starting point.  

Maybe you should ask on the new-maintainer list for some new
maintainer that's interested and needs a package to maintain. Just an
idea.

MfG
Goswin




Re: NM non-process

2003-08-06 Thread Goswin Brederlow
Andreas Barth <[EMAIL PROTECTED]> writes:

> * Goswin Brederlow ([EMAIL PROTECTED]) [030806 05:35]:
> > Kalle Kivimaa <[EMAIL PROTECTED]> writes:
> > > BTW, has anybody done any research into what types of package
> > > maintainers tend to go MIA? I would be especially interested in a
> > > percentage of "old" style DD's, DD's who have gone through the NM
> > > process, people going MIA while in the NM queue, and people going MIA
> > > without ever even entering the NM queue. I'll try to do the statistics
> > > myself if nobody has done it before.
> 
> > And how many NMs go MIA because they still stuck in the NM queue after
> > years? Should we ask them? :)
> 
> Many. While cleaning up the ITPs/RFPs I asked many packagers about the
> status of their package and got quite often a "package is more or less
> ready, but I'm waiting of DAM-approval because I don't want the hassle
> of another sponsored package", or, what's worse a "package was ok some
> time ago, but as Debian doesn't want me I stopped fixing it".
> 
> Sad.

Till this morning I was one of those NMs not wanting the hassle of a
sponsor, but now I had to change my maintainer email and fix some RC
bugs, so I did bully someone into sponsoring it.

You wait 5 months for the DAM and thus should become a DD any day
now. Would you really go hunting for a sponsor again? Now that I did, I'll
probably become a DD tomorrow, so it was a waste of time. .oO( Damn, now
I've jinxed becoming a DD again )

MfG
Goswin




Re: NM non-process

2003-08-06 Thread Goswin Brederlow
Tollef Fog Heen <[EMAIL PROTECTED]> writes:

> * Adam Majer 
> 
> | My definition of MIA for DD: Doesn't fix release critical bugs for
> | his/her package(s) within a week or two and doesn't respond to
> | direct emails about those bugs.
> 
> I guess I'm MIA, then, since I have an RC bug which is 156 days (or
> so) old, which is waiting for upstream to rewrite the program.

Tagged forwarded?

MfG
Goswin




Bug#204422: ITP: debix -- Live filesystem creation tool

2003-08-07 Thread Goswin Brederlow
Package: wnpp
Version: unavailable; reported 2003-08-07
Severity: wishlist

  Package name: debix
  Version : 0.1
  Upstream Author : Goswin von Brederlow <[EMAIL PROTECTED]>
  License : GPL
  Description : Live filesystem creation tool
  Sponsor : wanted

Debix is a collection of scripts to create live filesystems. Several
flavours are planned:

- Make a live filesystem image from any existing linux system
  Apart from a special initrd, a plain image of the existing system is
  made without changes. The image on CD is made seemingly writable via
  LVM2 snapshots by the initrd, and then the normal init is started.

- Pure live filesystem like knoppix (+zero reboot installation)
  The difference to Knoppix would be customizable size, being a pure
  Debian system, and the possibility to migrate the live filesystem to
  hard disk on-the-fly to get a running Debian system (with the
  drawback that the partitioning scheme is mostly fixed; using online
  ext2/3 resize patches could solve that).

- Make a live filesystem with boot-floppies or debian-installer
  Console and X subflavours included. The advantage over the normal
  CDs would be better autodetection and access to www, irc and local
  docs during installation (one could read the installation docs on
  www.debian.org in galeon while running boot-floppies in an xterm).
  A mixture of knoppix and installer.

A sponsor should be versed in /bin/sh and interested in creating live
filesystems. Having a CD-RW or DVD-RW burner would be a big plus, but
bochs or vmware will do to test stuff.

Sources aren't debianized yet but I have an example CD image made from
a normal woody system (flavour 1 from above) at
rsync://mrvn.homeip.net/images/

MfG
Goswin

-- System Information:
Debian Release: testing/unstable
Architecture: i386
Kernel: Linux dual 2.4.21-ac4 #1 SMP Sat Jul 5 17:53:13 CEST 2003 i686
Locale: LANG=C, LC_CTYPE=de_DE





debbugs: assign someone from a group of willing people to fix a bug

2002-08-29 Thread Goswin Brederlow
Package: debbugs
Version: 2.3-4
Severity: wishlist

Hi,

Ever noticed how many bugs there are? How many don't get fixed, get
ignored, get forgotten? A lot of bugs are years old and might not even
exist anymore.

I know (most :) maintainers do their best to fix bugs, but sometimes
there just isn't enough time or will. Or the problem is hard to
reproduce. Maintainers might also not have the same architecture or
setup as the reporter of a bug.

What to do?

I would like to propose a setup similar to the one used to translate
package descriptions:

If a bug is not dealt with for some time (no mails or status changes
indicating work being done), a person is selected out of a pool of
willing persons and is mailed the bug. He can then check out the bug
and fix it if possible, and has the right to do an NMU or close the bug
etc.

If nothing happens to the bug, or if the person sends a reject for the
bug, another person gets drafted, and so on.

Some commands could be introduced to control which person gets the bug
next, like selecting the architecture or some capabilities of the
person to be drafted next. Also, maintainers should be able to force or
stop drafting someone, so e.g. if a maintainer thinks it's an alpha
related problem he can tell the BTS to restrict the bug to people
having an alpha and draft someone immediately (without some lengthy
wait for the drafting to kick in).

The easiest way might be to always draft someone, but draft the
maintainer first. If he doesn't react, say within a month, the next
person is drafted from the pool.


The primary criterion for drafting someone from the pool should be
workload: take the person with the lowest number of bugs assigned.
Additionally, factors like architecture, dist
(stable/testing/unstable) and kernel should be matched to the bug
reporter if possible. Capabilities like knowing perl or C, or
preferences like loving games, could also be considered.

When starting out there should probably be a limit of a few bugs per
person, otherwise all the thousands of open bugs would be reassigned to
a few, then soon unwilling, helpers.
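The selection rule proposed above (lowest workload, optionally filtered by architecture, skipping people who rejected the bug) can be sketched in a few lines. This is only an illustration of the idea; the pool entries, field names and the draft() helper are hypothetical, not anything in debbugs.

```python
# Hypothetical volunteer pool -- names, architectures and counts are made up.
pool = [
    {"name": "alice", "arch": "i386",  "assigned": 3},
    {"name": "bob",   "arch": "alpha", "assigned": 1},
    {"name": "carol", "arch": "alpha", "assigned": 4},
]

def draft(pool, bug_arch=None, rejected=()):
    """Pick the matching volunteer with the lowest current workload.

    bug_arch restricts the bug to people on that architecture;
    rejected lists people who already sent a reject for this bug.
    """
    candidates = [p for p in pool
                  if p["name"] not in rejected
                  and (bug_arch is None or p["arch"] == bug_arch)]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["assigned"])

# An alpha-specific bug goes to the least-loaded alpha volunteer.
print(draft(pool, bug_arch="alpha")["name"])  # → bob
```

If bob rejects the bug, the same call with rejected=("bob",) drafts carol, which is exactly the "next person gets drafted" behaviour described above.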


Any comments? Maybe something more than "send a patch and we'll think
about it"?

May the Source be with you.
Goswin

-- System Information:
Debian Release: testing/unstable
Architecture: i386
Kernel: Linux dual 2.4.16 #19 SMP Sat Jul 6 04:37:14 CEST 2002 i686
Locale: LANG=C, LC_CTYPE=de_DE

Versions of packages debbugs depends on:
ii  ed0.2-19 The classic unix line editor
ii  exim [mail-transport-agent]   3.35-1 An MTA (Mail Transport Agent)
ii  libmailtools-perl [mailtools] 1.48-1 Manipulate email in perl programs
ii  perl [perl5]  5.6.1-7Larry Wall's Practical Extraction 

-- no debconf information




Re: RFD: Architecture field being retarded? [was: How to specify architectures *not* to be built?]

2002-08-30 Thread Goswin Brederlow
Andreas Rottmann <[EMAIL PROTECTED]> writes:

> > "Russell" == Russell Coker <[EMAIL PROTECTED]> writes:
> 
> Russell> On Sun, 11 Aug 2002 16:35, Geert Stappers wrote:
> >> When the cause of the buildproblem is in the package, fix the
> >> problem there. The package maintainer hasn't to do it by
> >> himself, he can/must/should cooperate with people of other
> >> architectures.  A sign like "!hurd-i386" looks to me like "No
> >> niggers allowed", it is not an invitation to cooperation.
> 
> Russell> So you think I should keep my selinux packages as
> Russell> architecture any, even though they will never run on on
> Russell> HURD or BSD?
> 
> Thanks, Russell, you are making my point. It is similiar with radvd,
> which was designed for Linux/BSD and won't work on the HURD, since it
> simply isn't supported upstream. I am not in the position to port
> radvd to the HURD, altough this would be the ideal way to go.

What about setting !hurd-i386 and filing a bug about it tagged
"help needed"?

That should encourage people to help and prevent autobuilders from
sending build-failed mails for every release.

Just a random thought.
Goswin




Re: Is there a limitation on swap parition size linux can use?

2002-08-30 Thread Goswin Brederlow
Walter Tautz <[EMAIL PROTECTED]> writes:

> I heard that 2Gb is the limit. If so I would have
> to create distinct swap partitions if I wanted to
> have more than 2Gb swap? Just wondering...

The older kernels only allowed swap partitions up to 128MB.
The newer kernels allow 2GB per swap partition or file.

Nobody said you can only have one partition. :)


In fact it's faster to spread the swap over several disks, with a
small partition on each, all at the same priority. Linux will then
automatically "raid0" them for greater speed.

If you need even more swap than 2GB per disk, you can have multiple
swap partitions or files on one disk. But then better keep them at
different priorities (the default) so they don't get used in parallel.
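The equal-priority setup described above could look like this in /etc/fstab (the device names are examples only; `pri=` sets the swap priority, and the kernel stripes across partitions that share one):

```
# /etc/fstab -- example entries, adjust device names to your disks
/dev/hda2  none  swap  sw,pri=1  0  0
/dev/hdb2  none  swap  sw,pri=1  0  0
```

With different priorities the kernel fills the higher-priority swap first instead of interleaving.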

MfG
Goswin




Re: RFD: Architecture field being retarded? [was: How to specify architectures *not* to be built?]

2002-08-30 Thread Goswin Brederlow
Adam Heath <[EMAIL PROTECTED]> writes:

> On Mon, 12 Aug 2002, Brian May wrote:
> 
> > This proposal would also allow, say bochs, to provide i386 too (although
> > I think more work might be needed here).
> 
> No, it wouldn't.
> 
> Say you install bochs on alpha.  If bochs provides i386, then this would tell
> dpkg that it is ok to install i386 binaries in the host.

There's also a more suitable project under way. Instead of emulating a
complete system, it just translates the assembler code and translates
syscalls to your architecture.

I don't know its name because I've only heard of it. Falk Hueffner is
trying to get his Mathematica (i386) running on his alpha with it.

From what he told me it's way faster than bochs and transparent. It could
probably be made into a binfmt_misc style module so the kernel supports
i386 ELF binaries.

MfG
Goswin




Re: Large file support?

2002-08-30 Thread Goswin Brederlow
Torsten Landschoff <[EMAIL PROTECTED]> writes:

> On Fri, Aug 09, 2002 at 01:42:04PM +0200, Andreas Metzler wrote:
>  
> > This does not solve the issue, LFS requires 2.4 or a patched 2.2
> > Kernel.
> > http://www.suse.de/~aj/linux_lfs.html
> 
> But with a standard 2.2 kernel it should still work for files < 2GB 
> I hope? I built openldap2 with lfs support and I am only running 2.4
> kernels. Can somebody tell me if it is going to break on 2.2?
> 
> Greetings
> 
>   Torsten

It's not. glibc has taken care of that since like forever.

Otherwise ls, dd, cat, tar, etc. would all be broken.

MfG
Goswin



