console-tools vs kbd
Hi there,

I'm using a non-standard keyboard that sends odd scancodes. I wrote a startup script that does a whole heap of "setkeycodes" commands, but I've found that the version of "setkeycodes" in the console-tools package does not work with kernel version 2.2+. The one with the "kbd" package works fine, but "kbd" seems to be a somewhat deprecated package.

I found a bug relating to this, entitled "console-tools: setkeycode does not use right IOCTL", which could be it:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=53977&repeatmerged=yes

And one entitled "setkeycodes completely broken":

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71768&repeatmerged=yes

Is there any news on a fix?

Happy New Year,

--
Sam Vilain, [EMAIL PROTECTED]
WWW: http://sam.vilain.net/
GPG public key: http://sam.vilain.net/sam.asc
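A startup script like the one described can be kept maintainable by generating the "setkeycodes" calls from a table; a minimal sketch, where the scancode/keycode pairs are made-up examples (the real values depend on the keyboard in question):

```python
# Sketch: render a boot-time remapping script from a table of
# scancode -> keycode mappings.  The pairs below are hypothetical
# illustrations, not the actual values from the keyboard above.
SCANCODE_MAP = {
    0xE056: 43,    # e.g. map scancode 0xe056 to keycode 43
    0xE06A: 105,
    0xE06B: 106,
}

def setkeycodes_script(mapping):
    """One 'setkeycodes' invocation per mapping, as shell lines."""
    lines = ["#!/bin/sh"]
    for scancode, keycode in sorted(mapping.items()):
        lines.append("setkeycodes %04x %d" % (scancode, keycode))
    return "\n".join(lines)

print(setkeycodes_script(SCANCODE_MAP))
```

The generated script would then be run at boot (e.g. from an init script) to install the mappings.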
Re: console-tools vs kbd, fix listed in bug report?!
As pointed out to me, bug #71768 has a workaround to my problem listed in it. However, I find it quite appalling that a fix involving a one-character change to a source file, where the fix has been listed in the bug report, has not been pushed through, given its age.

The console-tools package has a huge number of bugs associated with it, and given that it's a base package, that's not so good. Is it possible for maintainers to share packages, perhaps? Then these small things can get sorted out more quickly.

Personally, I'd just love it if the entire Debian core system was in CVS, with upstream updates being merged in. This would also give the advantage of being able to do OpenBSD-style fixing of code; e.g., one person finds a particular type of bad coding practice, looks for it and fixes it throughout the code base, and mails the authors to let them know about that bad coding practice.

Or perhaps the bug tracking system just needs some rework to be a bit nicer. I'm sure some people here have used Remedy's Action Request System in a reasonably large environment, so why not pick its best features and stick them in: some decent querying capabilities, and borrow some of its interface. Stick in the ability to assign bugs to other people. Perhaps even a standalone client. Maybe even get some "customer service" going, and draw up some guidelines for how long it should take for bugs to be responded to, etc.

Cheers,
Sam.

On Sun, 31 Dec 2000 17:54:56, Sam Vilain <[EMAIL PROTECTED]> wrote:

> Hi there,
>
> I'm using a non-standard keyboard that sends odd scancodes. I wrote
> a startup script that does a whole heap of "setkeycodes" commands,
> but I've found that the version of "setkeycodes" in the
> console-tools package does not work with kernel version 2.2+. The
> one with the "kbd" package works fine, but "kbd" seems to be a
> somewhat deprecated package.
> I found a bug relating to this, entitled "console-tools: setkeycode
> does not use right IOCTL", which could be it:
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=53977
>
> And one entitled "setkeycodes completely broken":
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71768
>
> Is there any news on a fix?

--
Sam Vilain, [EMAIL PROTECTED]
WWW: http://sam.vilain.net/
GPG public key: http://sam.vilain.net/sam.asc
Re: package pool and big Packages.gz file
On Fri, 5 Jan 2001 09:33:05 -0700 (MST), Jason Gunthorpe <[EMAIL PROTECTED]> wrote:

> > If that suits your needs, feel free to write a bugreport on apt about
> > this.
>
> Yes, I enjoy closing such bug reports with a terse response.
> Hint: Read the bug page for APT to discover why!

From bug report #76118:

  No. Debian can not support the use of rsync for anything other than
  mirroring, APT will never support it.

Why? Because if everyone used rsync, the load on the servers that supported rsync would be too high? Or something else?

--
Sam Vilain, [EMAIL PROTECTED]
WWW: http://sam.vilain.net/
GPG public key: http://sam.vilain.net/sam.asc
Re: package pool and big Packages.gz file
On Fri, 5 Jan 2001 19:08:38 +0200, [EMAIL PROTECTED] (Sami Haahtinen) wrote:

> Or, can rsync sync binary files?
> hmm.. this sounds like something worth implementing..

rsync can, but the problem is that with a compressed stream, if you insert or alter data early on in the stream, the data after that change is radically different. But... you could use it successfully against the .tar files inside the .deb, which are normally compressed. This would probably require some special implementation of rsync, or having the uncompressed packages on the server and putting the magic in apt.

Or perhaps a program "apt-mirror" is called for, which talks its own protocol to other copies of itself, and does a magic job of selectively updating mirror copies of the Debian archive using the rsync algorithm. This would be similar to the apt-get and apt-move pair, but actually sticking it into a directory structure that looks like the Debian mirror. Then, if you want to enable it, turn on the server version and share your mirror with your friends inside your corporate network! Or an authenticated version, so that a person with their own permanent internet connection could share their archive with a handful of friends - having an entire mirror would be too costly for them. I think this has some potential to be quite useful and reduce bandwidth requirements. It could use GPG signatures to check that nothing funny is going on, too.

Either that, or keep a number of patch files or .xd files for a couple of old revs per package against the uncompressed contents of packages, to allow small changes to packages to be quick. Or perhaps implement this as patch packages, which are special .debs that only contain the changed files and upgrade the package.

--
Sam Vilain, [EMAIL PROTECTED]
WWW: http://sam.vilain.net/
GPG public key: http://sam.vilain.net/sam.asc
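The point about compressed streams can be demonstrated directly. A sketch (not rsync itself, just an illustration of why rsync's block matching finds almost nothing to reuse in a compressed file after a small early change):

```python
import zlib

# Two payloads that differ by a single byte near the start.
original = (b"A" * 4096) + (b"B" * 4096)
changed = b"X" + original[1:]

c1 = zlib.compress(original)
c2 = zlib.compress(changed)

# Find the first byte at which the two compressed streams diverge.
# Everything from that point on is effectively new data as far as
# rsync's rolling-checksum block matching is concerned.
divergence = next(i for i, (x, y) in enumerate(zip(c1, c2)) if x != y)
print("compressed size:", len(c1), "streams diverge at byte:", divergence)
```

The streams diverge within the first few dozen bytes (the Huffman tables at the start of the deflate block already differ), so nearly the whole compressed file has to be re-transferred for a one-byte change to the input.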
Re: How to cope with patches sanely
Manoj Srivastava wrote:

>> Feature branches don't magically allow you to avoid merge conflicts
>> either, so this is a red herring. Once you've resolved the conflict,
>> then it becomes just another change. This change can become a diff in
>> a stack of diffs.
>
> This whole message is a red herring, since the feature branches
> do not attempt to handle merge conflicts -- that is not their purpose.
> They capture one single feature, independently from every other
> feature, and thumb their collective noses at merge conflicts.

Yes. Feature branches are effectively forks of a particular version of a project - this is not a problem, and is essential for efficient development. People jumbling together changes in "trunk" branches is perhaps one of the worst upshots of the 2002-2006 or so obsession with poorly designed centralised systems, and in my opinion it sank many projects.

> The history of the integration branch captures the integration
> effort; and the integration branch makes no effort to keep the
> integration work up to date with current upstream and feature
> branches.

Initially, perhaps. However, once a feature is considered ready for inclusion, it is important that its branch contains merges FROM the branch it is targeting. These mean that a later merge back the other way, to merge the feature branch into the target branch, can happen painlessly - ASSUMING that you're using a system which has commutative merge characteristics, such as git or mercurial.

> If you think you can extract an up to date integration patch
> from the entrails of the integration branch -- feel free to smack
> me down. But please provide some substance to the assertion that it is
> doable.

Perhaps I missed the context to this discussion - certainly, expressing a history containing merge nodes in patches is non-trivial and can't be done with the standard patch format - but I believe that this is certainly possible.
Can you express this problem with reference to a particular history of an integration branch? I will provide some short git commands to extract the information in the form you are after.

Sam

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: How to cope with patches sanely
Manoj Srivastava wrote:

>> Yes. Feature branches are effectively forking a particular version of
>> a project - this is not a problem, and is essential for efficient
>> development. People jumbling together changes in "trunk" branches is
>> perhaps one of the worst upshots of the 2002-2006 or so obsession with
>> poorly designed centralised systems and in my opinion sank many
>> projects.
>
> Err. If you go back and read this thread in the archive, you'll
> note that I have stated that my feature branches are always kept up to
> date with the latest upstream branch I am basing my Debian package
> on.

This technique is also called rebasing the patch set; it's fine, but it's just one approach.

> When I have been creating patches for inclusion with upstream, I
> essentially feed them the source patch and a changelog entry --
> essentially, creating a single patch series; squashing the underlying
> history. Most upstreams do not care about the messy history of my
> development; and most do not grok arch well enough to pull directly.

This is sometimes worthwhile and sometimes a bad idea. The driving motive, if you want to aim for patches to be easily reviewed, is that each patch should introduce a single change, which is well explained. I agree that the upstream will not want a messy history; which is why you reshape the individual changes using a tool such as Quilt, Stacked Git, Guilt, Mercurial Queues, etc., so that they are more easily reviewed.

>> They mean that a later merge back the other way, to merge the feature
>> branch into the target branch, can happen painlessly. ASSUMING that
>> you're using a system which has commutative merge characteristics,
>> such as git or mercurial.
>
> I use Arch.

Arch is critically deficient in this respect; it doesn't really have a concept of tracking branches, and merging is not commutative: if you merge a branch that just merged from your branch, an unnecessary new changeset is made.
But if you are rebasing, then you don't need to worry about that. As I said, it's just more work.

>> Can you express this problem with reference to a particular history of
>> an integration branch? I will provide some short git commands to
>> extract the information in the form you are after.
>
> http://arch.debian.org/cgi-bin/archzoom.cgi/[EMAIL PROTECTED]
>
> Take any package. Say, flex. Or flex-old. You have all my
> feature branches there. The --devo branch is the integration branch.
> Please show me an automated way you can grab the feature branches and
> generate a quilt series that gives you the devo branch. The diff.gz is
> how we get from upstream to the devo branch (modulo ./debian); if you
> can break that down nicely for the folks who want each feature
> separate, that would work as well.

Thanks for restating the problem clearly. While the underlying problem is easily approached and I would still call it trivial, the details of what you are asking for make it impossible - because quilt series cannot contain merges (someone correct me here if they can, and I can go forward).

Shipping changes for upstream inclusion as a *single* set of quilt patches is not possible if you are including merges. But if you allow the patches to be grouped, and introduce a new type of patch which encapsulates a merge (gitk has one example of this; it uses different identifiers to represent which file's lines are included), then it can be done. The apply-patches script would need extending to support this, but I don't think that's particularly show-stopping. However, ignoring the merges, so far we're not that far away from the "script" being 'git-log -p' or 'git format-patch upstreamrev'.

Also, having never really used arch: if you can provide me with the commands to get a copy of those branches (the man page is sadly not very forthcoming), I'll give the git-archimport script a whirl and see if I can get it imported and show how this can work in practice.
If someone with git-archimport experience can perform this and publish the repositories somewhere, I'd be very grateful.

> If your code works well enough every single time a new upstream
> comes around and I release a new version of flex or whatever, I'll
> throw in the generated quilt patches.

I think what is required is a rethink of the problem: what is being attempted, and are there other ways to achieve it which would solve the problem in a vastly more effective way?

Version control systems that have content-addressable filesystems (essentially, git and Monotone) are inherently efficient to distribute, as only the changes between versions need be distributed. The notion of stream-compressing tarballs is archaic compared with being able to search for deltas anywhere in the source tree. You can see this in effect with git, which is capable of very quickly identifying which objects are new, and sending them all in impressively small packs over the network.
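The content-addressable idea can be sketched in a few lines. This is a toy model (not git's actual object format): because an object's name is the hash of its contents, a receiver only needs the objects whose hashes it doesn't already have.

```python
import hashlib

def oid(data):
    """Content address: the hash of the bytes names the object."""
    return hashlib.sha1(data).hexdigest()

# Two versions of a small "tree"; only one file changes between them.
v1_files = {"README": b"hello\n", "main.c": b"int main(){}\n"}
v2_files = {"README": b"hello\n", "main.c": b"int main(){return 0;}\n"}

# What the receiver (a mirror holding version 1) already has.
mirror = {oid(data): data for data in v1_files.values()}

# Sending version 2: transfer only objects unknown to the receiver.
to_send = [data for data in v2_files.values() if oid(data) not in mirror]
print(len(to_send))  # only the changed file travels
```

The unchanged README hashes to an object the mirror already holds, so it is never re-sent; this is the property that lets git pack "only the new objects" regardless of how files were renamed or tarballs repacked.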
Let's kill this part of the discussion right away [was: Re: How to cope with patches sanely]
Manoj Srivastava wrote:

> And no, I can do this using plain old arch, and I don't really
> have to change my SCM.
>
> But not all Debian maintainers are using git;

>> Version control systems that have content-addressable filesystems
>> (essentially, git and Monotone) are inherently efficient to
>> distribute; [...]
>
> Which is great, but I fear it will not fly as the one and
> only

Ok, I apologise for leaving my note to this effect to the end of my e-mail. But let me repeat: I'm not asking that you change the SCM that you use.

I'll say it again, this time for the other subscribers. I'm NOT asking that you change the SCM that you use, or the way that you work.

All I'm talking about is using git as a replacement for the *source archive* format - how the files are archived and distributed. There are compelling reasons to want to do this; not least getting around the fact that shipping patch series in .diff.gz doesn't handle cases like integration branches without adding a new patch format (which is also something I think is probably useful, and a complementary approach).

Thanks for the rest of your e-mail, which contains constructive feedback which I will now digest and respond to!

Cheers,
Sam.
Re: track bugs in VCS, not the other way around
martin f krafft wrote:

> Let's assume for a minute that we accept that VCSs are the way
> forward and start to consider how we could track bugs in the VCS,
> alongside the code.
>
> Start to think about it this way, and stuff suddenly neatly aligns,
> at least in my world.
>
> Suddenly you can commit a patch and mark the bug fixed atomically.
>
> Suddenly, bug reports become commits in a branch, and keeping the
> branch empty is your goal.
>
> Divergence from upstream is represented in topic branches, and you
> want to keep the number of those minimal to not go insane.

Martin, I think this is an extremely sensible plan. IME, all bug trackers ever do is fill up with old, stale rubbish.

I would also add that a certain number of "best practices" be adopted; this will assist upstream maintainers in reviewing and adopting patches, no matter what VCS or system of development they use:

- All patches should be clearly described by:
  - a one-line subject description of the change;
  - an introduction to the patch: why it is necessary, what the code base did before it was introduced, and an outline of the approach taken (for trivial changes, this might be one sentence);
  - authorship and reviewer information ("From", "Signed-off-by", "Acked-by" lines, or even "Cc" in some cases).
- No reference should be made in the patch introduction to things which become useless after the patch is applied, such as which version it is for.
- Optionally, a comment about this revision of the patch - information that will not be useful to a maintainer of the patch - clearly delineated from the long-term description of the patch.
- No changes that are not a part of the description are allowed.
- Patches that require further minor changes later for acceptance should be rolled together, so that the upstream has less work to review and you don't get "this patch fixes that patch", etc.
  The question about what to do with revised/dropped patches would likely be solved on a per-VCS basis.
- Ideally, no single patch should break the product's test suite. Individual projects may be more lax about this requirement when applied to patch series.
- Where the upstream source control provides an easy way to do so - such as repositories hosted on repo.or.cz, github, etc. - a new branch should be created which contains the above patches, so that people visiting the source control of the product can see the outstanding changes requested by the Debian project as an explicit fork.
- Patches may simply update an ERRATA file, if they are a simple confirmed bug report.

Note that many of these are really just a restatement of best common practices since people started sending patches to mailing lists in the mid-80s. They have since been explicitly codified and rigidly enforced in the Git community, so people will see that these are essentially the Linux kernel development guidelines, just without any specification of the exact format of the patch. While it is tempting to standardise on the git-format-patch format, I don't think it is productive to bring that detail in right now.

Yes, I realise that those who are frequently cross-merging will have a hard time turning their feature branches into this form. However, the patches do not necessarily have to be applicable to the head version, as long as the above requirements are met.

Sam.
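A sketch of a patch description following these practices (the change, names, and sign-offs here are invented purely for illustration):

```text
Subject: frobnicate: handle empty input without crashing

frobnicate() dereferences its input buffer before checking the length,
so an empty file makes it segfault.  Check the length first and return
an empty result instead.

From: Jane Hacker <jane@example.org>
Signed-off-by: Jane Hacker <jane@example.org>
Acked-by: A. Reviewer <reviewer@example.org>
---
Revision note (not part of the permanent description): second version,
rebased onto current upstream after review comments.
```

Everything above the `---` is the long-term description; the revision note below it is the per-submission commentary that reviewers see but maintainers can discard.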
Re: chroot administration
Shaya Potter <[EMAIL PROTECTED]> wrote:

> > > I have written SE Linux policy for administration of a chroot
> > > environment. That allows me to give full root administration
> > > access (ability to create/delete users, kill processes running
> > > under different UIDs, ptrace, etc) to a chroot environment
> > > without giving any access to the rest of the system.
> >
> > Since no one else has apparently said it explicitly yet, I have to say
> > that's extremely cool :)
>
> argh. its so cool that you essentially stole my summer research. :(.
> Does this allow you to create any amount of chroot jails? We are also
> working on making "virtual IPs" that each jail would get. We are also
> working on being able to move the processes while running (w/ network
> connections) from machine to machine w/o needing any state on the
> initial machine.

You might want to investigate `security contexts', a new kernel feature that can be used for virtual IP roots, as well as for making processes in one context (even root) unable to see other contexts' processes. The userland utilities also offer a way to remove Linux capabilities (e.g., to disallow raw sockets or bypassing filesystem permissions).

http://www.solucorp.qc.ca/miscprj/s_context.hc

--
Sam Vilain, [EMAIL PROTECTED]        Easyspace: an accredited ICANN
GPG: http://sam.vilain.net/sam.asc   registrar & web hosting company
7D74 2A09 B2D3 C30F F78E             Have your domain run by techies
278A A425 30A9 05B5 2F13             with a clue. www.easyspace.com

Ambition is the curse of the political class. - anon.
Sandboxing Debian [was: Re: chroot administration]
Russell Coker <[EMAIL PROTECTED]> wrote:

> > http://www.solucorp.qc.ca/miscprj/s_context.hc
>
> Is someone going to package this for Debian?

One person has announced on the list that he is going to try, though he is not an official Debian developer. I have made a package too, and will make it available soon.

> There are some limitations with it. The biggest limitation when
> compared to my SE Linux work is its lack of flexibility. I can
> setup a SE Linux chroot, then do a bind mount of /home/www, and
> grant read-only access to the files and directories of user_home_t
> and search access to directories of type user_home_dir_t.

This stuff is accomplished through file immutability and Linux capabilities. It's not as flexible as the system you're describing sounds, but it does work with standard Linux filesystem features and represents a smaller departure from UNIX conventions.

> The advantage of the "security contexts" system described on that
> web page is a comprehensive solution to the IP address issue (I've
> got a design but no working code so far). I don't expect to ever
> get a solution that works as well as their solution unless/until new
> features are added to SE Linux.

It's not perfect, though - due to a shocking case of C programmer's disease. One big problem is what to do with the `localhost' interface. Currently, you can't have a guaranteed private interface and expect applications to work. This is because the IP jailing works by intercepting the `bind' call, and remapping binds to 0.0.0.0 to the first IP address listed in our `ip chroot', as well as binds to 127.0.0.1.

I tried adding an extra IP address - 127.0.0.X - to the IP chroot and defining that as `localhost' in /etc/hosts. Eventually, after finding that SSH local port forwarding (to pick on an application for which it didn't work) was always trying to bind to 127.0.0.1, I found this gem in glibc:

  /* Network number for local host loopback.  */
  #define IN_LOOPBACKNET  127

  /* Address to loopback in software to local host.  */
  #ifndef INADDR_LOOPBACK
  # define INADDR_LOOPBACK ((in_addr_t) 0x7f000001) /* Inet 127.0.0.1.  */
  #endif

so the getaddrinfo() call will always return 127.0.0.1 for the local host. Which is a bit of an arse, really, but I think I'd probably just get laughed at or ignored if I logged a bug against it. But if you don't mind `localhost' being the same as your external IP address, there's no problem.
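The same hard-coded constant is visible from userland too; a small sketch decoding it (Python's socket module exposes the identical value):

```python
import socket
import struct

# glibc defines INADDR_LOOPBACK as the integer 0x7f000001.  Python's
# socket module carries the same constant; packing it as a big-endian
# 32-bit integer and decoding it yields the baked-in dotted quad.
addr_bytes = struct.pack("!I", socket.INADDR_LOOPBACK)
print(socket.inet_ntoa(addr_bytes))  # → 127.0.0.1
```

This is why remapping `localhost' in /etc/hosts alone cannot move applications off 127.0.0.1: the address is compiled in as a number, not looked up by name.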