Re: Debian's progress inspite of events (was Re: Dunk-Tank and the DD strike)

2007-03-18 Thread D G Teed

On 3/17/07, Andy Smith <[EMAIL PROTECTED]> wrote:



A lot of hardware resellers are currently saying "Debian doesn't
work on this hardware" but when you investigate it turns out that
they heard that the default Sarge install does not support the SATA
controller, and they don't care to find out more.  In turn some
hosting companies pass that (not strictly true) message on.

I'll be glad when it goes from not strictly true to completely
untrue.

Cheers,
Andy



I agree that the kernel within the installer is something
that needs to be updated more often.  I regard Debian as a serious,
production-class server OS, but there are others who weigh
everything on the installer experience.  We know it is possible to
update the kernel after the install, but if the installer doesn't
support the motherboard chipset and disk controllers, it presents
a catch-22: you can't install the system in order to update its kernel.
There are workarounds, but they are very time consuming, and
a major strike against adopting Debian versus the commercial-brand
Linuxes.

We need to keep Debian from appearing to be a basement
hacker's work.  It makes less difference to me than to
managers who evaluate it and don't know (and refuse to learn)
the difference between the kernel and the OS.  Such managers
are already freaked out about open source and the
high number of one-developer Linux distros out there.  They don't
see Debian mentioned in many press announcements, so
it is difficult to demonstrate how prevalent and robust
Debian really is.

Making things work for current hardware is one of the main
things that will differentiate between a "works for me" type of
distro, and a "works for everybody" well supported distro.

--Donald


Re: Debian's progress inspite of events (was Re: Dunk-Tank and the DD strike)

2007-03-18 Thread D G Teed

On 3/18/07, Roberto C. Sánchez <[EMAIL PROTECTED]> wrote:


On Sun, Mar 18, 2007 at 07:51:07AM -0300, D G Teed wrote:
>
> I agree that the kernel within the installer is something
> needing to be updated more often.

Except that this means that the kernel in the installer needs to be
installable as a default kernel, which means that it must also be
supported by the kernel team and security.  There is a lot of work
involved in that.



There was another project by a small group which seemed to get through
this issue, at least for x86.  I don't think it is that bad.  The
Debian releases that come out almost quarterly could include such an
update to the installer's kernel.


> I regard Debian as serious
> production class server OS, but there are others who weigh
> everything on the installer experience.

I've heard it said that the reason Debian users generally don't care
about the installer is because it is something you only use once :-)



It sounds easy if you have one server.  Suppose you have 21 that you
want to migrate from FreeBSD.  If it takes a day to find and apply a
workaround, versus the usual 30 minutes to complete a direct install,
then I think this is more significant.  Anyway, anything that impacts
a manager's perception is something that impacts the possibility of
adopting Debian at all.

That is, the difficulty is enough for them to think, or even
decide, "forget it, do Redhat".  If you want my bosses to conclude
that, then the status quo of sarge's installer is the thing to keep.

Except that the commercial brand Linuxes suffer from the same problem.
They just tend to update their installers more frequently.  Windows XP
has the same problem, only many people never noticed once the installer
was 5 years old, since people get Windows preinstalled or get some
customized restore CD from the OEM.



Yep. That's all I'm asking to see happen.

Why are they freaked out by one-developer Linux distros out there?  They
don't even have to pay attention to such distros if they don't want to.



The appearance of the multitudes is an immediate indication that just
about anyone can make a distro.  If that is the case, then there are
obviously poor ones.  It just doesn't make the scene look professional.
They associate that type of outcome with Tucows and that sort of
crap-shoot of trying dozens of programs until you find one that
doesn't suck.


> They don't
> see Debian mentioned in many press announcements, so
> it is difficult to demonstrate how prevalent and robust
> Debian really is.
>
Netcraft is helpful in this respect :-)



Already done.  HP's announcement of Debian support is the sort of thing
we need more of.   That is visible to such managers without pointing them
to some resource - unknown to them - claiming Debian has some
sort of stats in production.


> Making things work for current hardware is one of the main
> things that will differentiate between a "works for me" type of
> distro, and a "works for everybody" well supported distro.
>
Why?  As I said, other distros have the same problem, they just tend to
update more frequently so it is not as visible.



It's more than just visible.  If you can't install to disk because the
kernel is 2 years old, then you need to use some non-standard
installation method.  This doesn't look good.  With a commercial distro
we wouldn't have to go to a third-party release - the vendor would be
providing an updated installer a few times per year.  A standard
installer with an updated kernel solves everything.  I can't see how it
is so difficult to accomplish.  The kernel isn't going to affect many
applications.  And as I pointed out, there is a small project that
accomplished this.  Several kernels could be made available on the same
boot CDROM to keep it a risk-free change to the installer.
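A hedged sketch of the "several kernels on one boot CD" idea, assuming
a syslinux/isolinux-based boot menu; the labels, paths, and the
existence of an updated kernel image are illustrative assumptions here,
not the actual Debian CD layout:

```
# isolinux.cfg fragment (hypothetical paths and labels)
default install
prompt 1
timeout 100

label install
  # stock installer kernel, as shipped with the release
  kernel /install/vmlinuz
  append initrd=/install/initrd.gz

label install-updated
  # newer kernel build carrying drivers for recent chipsets
  kernel /install/new/vmlinuz
  append initrd=/install/new/initrd.gz
```

Typing `install-updated` at the `boot:` prompt would select the newer
kernel while leaving the default path untouched.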

In a world where you decide everything, these things are simple.

In an environment where IT managers with half a clue want to
help make decisions, the road has to look good before they
want to go down it.  Seeking installers from outside of the
Debian project doesn't look good to them.  They are used to
a world where the good quality vendor takes care of everything.

--Donald


Re: Debian's progress inspite of events (was Re: Dunk-Tank and the DD strike)

2007-03-19 Thread D G Teed

On 3/19/07, Greg Folkert <[EMAIL PROTECTED]> wrote:




Of course, you could point out that about 60% of existing popular
distributions are originally derived and modified from Debian.



I spent all of last summer trying to educate the managers.  I've given
up.  They won't read or listen.  They have heard that Linux users tend to
be emotional fans of their particular distro and can present any
information to back up their favorite.  One particular person has the
personality of Captain Kirk.  But he is one that won't listen to his Spock.

It is pointless to say something like "it is straightforward if you know
how" when there is nothing hinted within the installer to tell
the user of the alternatives as they smash into the problem.
(Hint: put a message into the installer scenario where there is no
hard drive to install onto.)  My manager didn't want any hand-holding to
evaluate Debian.  He installed it on a notebook, had problems, and
thought it proved what he had read about Debian being old.

You are wrong about Redhat not updating the kernel.  We had a new 64-bit
Intel machine come in a few months back.  A techie tried to install his
standard RH 4 on it, and no go.  He got the "update 4" version of the
installer from Redhat, and away he went.

For the sake of the discussion: the hardware is not uniform, and each
server has a unique role (cyrus, tomcat, custom web apps, MX, postgres
DB, etc.).

The first step to solving a problem is recognizing that there is one,
but I don't think we are getting that far in these exchanges.  I feel
that Debian has more resources than any commercial distribution.  The
number of supported platforms is one piece of evidence; the number of
developers and users (including downstream distros) is another.  It is
just a matter of making things a priority and deciding how and where to
make this happen.  I'm also hinting strongly here that fixing this
issue would go a long way toward improving Debian's adoption in heavy
IT centres where management has too many thumbs in the pie.  It might
even cut the losses from people going to Ubuntu and the like.

--Donald


Re: Debian's progress inspite of events (was Re: Dunk-Tank and the DD strike)

2007-03-19 Thread D G Teed

On 3/19/07, Greg Folkert <[EMAIL PROTECTED]> wrote:



You gave him Sarge, right? Have him do a straight install of WindowsXP
with no other CD. Watch him crash and burn as well. Or if he "excuses"
the additional drivers disk(s) required to install WindowsXP then he is
not at all "unbiased".



Win XP installer prompts "F6 to add hardware drivers" or some such.
It makes the opportunity known.  It isn't impossible for the Debian
installer to do something to introduce the possibility as I mentioned
before.

Also obvious, you don't know Debian well enough to realize the other
methods.  The other methods I speak of are NOT for the "unwashed masses"
coming from pre-installed Windows.  They are for people who understand
what a chroot is, or how to extract a tar file onto a new system and
properly update it and re-run tasksel.



Perhaps you don't know Debian well enough to describe the "other
methods" you repeatedly refer to so vaguely.  Did you come here to
insult people or to be truly helpful?

I wrote a Gentoo how-to for using netboot to clone disks on the sparc
platform, rolling udpcast and its dependencies into an installer image
I made available.  Don't assume too much.  Netboot would be a bit of a
hassle to accomplish in our environment, but it would be possible.

I don't think you get it.  I like Debian.  My manager doesn't think it is
serious and that is the person who needs to see the light.  He would
never install Windows by netboot, so he doesn't think it is a standard
installation method.

I see, update 4 has extra drivers available because of backports to the
SAME version kernel.  Wait, RHEL4 uses which kernel?



It doesn't matter.  Redhat and Suse are always doing backports of
whatever drivers they want to inject in there.  Even in 2.4 days they
were taking stuff from 2.5.

I don't care, Debian can be easily cleaned and re-deployed *WITHOUT*
being re-installed.  Easily.  You only have to know this.  There are
tools for this.



Well, let's see.  One is a postgres database server with hardware
RAID 5 and 6 disks, while another example is a cyrus server with
software RAID.  They have different partitioning scheme requirements,
and the inode count for thousands of email accounts will need to be
higher than on an average filesystem.  For the time it takes to run
the net-based installer with base and pick out the packages I want, I
think I am further ahead with that approach than figuring out how to
reinvent a truck into a beetle and such.

Again, why do large deployments choose Debian?  Scalability and 19,000
easily available packages.  Also because of cost reductions, when they
don't have the money to spend on "One Vendor Solutions" and rely on the
employees they have to do the job.  They find out that these vendors
are just taking money.



You don't have to convince me Debian is great.  But anyway, Redhat has
very cheap licensing for campuses, so money isn't really an issue.

Debian's adoption is everywhere; for a list of Debian-based distros,
here is a non-comprehensive list I just used on Ben Hmeda:

Admanix, APLINUX, ASLinux, AbulEdu, Formerly Demudi now ANGULA,
ANTEMIUM Linux, Arrabix, Augustux, Backtrack, B2D Linux, BenHUr,
BEERnix, Biadix, BIG LINUX, Bioknoppix, BlackRhino, BRLSpeak,
Bonzai Linux, ClusterKnoppix, Catix, CensorNet, Clusterix,
Condorux, Corel Linux, Danix, Demolinux, DebXPde, Dizinha Linux,
Debian JP, Debian-BR-CDD, DeveLinux, Damn Small Linux(DSL), DCC,
ESware Linux, eduKnoppix, ERPOSS, Evinux, Euronode, Engarde,
emdebian, Ebuntu, FAMELIX, FeatherLinux, FoRK (Vital Data
Forensic or Rescue Kit), Freeduc-cd, Freeduc-Sup, Finnix,
Familiar, GEOLivre Linux, Gibraltar, GNIX-Vivo, Kinneret,
GNUstep Live, grml, GuadaLinex, Gnoppix, Hiweed Linux, Helix,
Hikarunix, IndLinux,  Impi Linux, Julex, K-DeMar, Kaella,
Knoppix Linux Azur, Kanotix, KlusTriX, knopILS, Knoppel,
Knoppix64, KnoppixSTD, KNOPPIX, KnoppiXMAME, KNOSciences,
Kurumin, Kalango Linux, Kunbuntu, KnoppMyth, LAMPPIX, LIIS
Linux, Libranet, LinEspa, Linspire, Linux-YeS, Linux Live Game
Project, Linux Loco, LinuxDefender Live! CD, Linux Router
Project, LiVix, Local Area Security Linux (L.A.S.), Luinux, Luit
Linux, Linex, Linuxin, Libranet(though now part of Mandriva),
MAX: Madrid Linux, MediainLinux, MEPIS, Metadistro-Pequelin,
MIKO GNYO/Linux, MoLinux, Munjoy Linux, Morphix, MeNTOPPIX,
Nature's Linux, NordisKnoppix, NepaLinux, NUbuntu,
OpenGroupware.org Knoppix CD, OverclockIX, Oralux, PAIPIX,
ParallelKnoppix, Parsix GNU/Linux, Penguin Sleuth Bootable CD,
PHLAK, PilotLinux, PingOO, Progeny Linux, Prosa, Quantian, RAYS
LX, Salvare, Santa Fe Linux, Slavix, Slix, Slo-Tech Linux,
Soyombo Mongolian Linux, SphinxOS, Stonegate, Stromix
Tecnologies' Storm Linux, Symphony OS, Skolelinux, Tablix on
Morphix, TelemetryBox, Tilix Linux,

Re: "I do consider Ubuntu to be Debian" , Ian Murdock

2007-03-20 Thread D G Teed

On 3/20/07, Carl Fink <[EMAIL PROTECTED]> wrote:


On Mon, Mar 19, 2007 at 10:37:34PM -0500, Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 03/19/07 22:11, Carl Fink wrote:

> > I just can't handle the absurdly-long release cycle any more.
>
> Sid?

Using.  Not developing.

I run Etch on my home box (the one I'm typing on now) but for servers it
isn't always practical to use Testing, and that means you can almost never
use a currently-in-production server with Debian, unless you want to
hand-compile at the very least a kernel.



Some people seem confused about what was said and why.  He said
"currently-in-production", which I believe means new, off-the-shelf
servers.

The struggle is installing Debian while the kernel on the installer is
over 2 years old.  Yes, there are non-CD install methods, but they are
not seen as a standard way to install, and it adds more complexity to
training others on how to install the OS.  If the installer can't see
the disk because the chipset drivers or disk controller drivers are not
available, then people don't get very far.  Some would rather switch
Debian for another distro at this point than spend their time and
research to adopt something like a net install.  The 19,000 available
packages and all that is of little interest to someone running a
single-purpose server.

As I mentioned in another thread, other distros/vendors do update the
kernel on their installer, possibly 4 times per version of their
distro.  Perhaps this is needed on certain platforms where the hardware
related to getting Debian on the disk (only this hardware) has changed
to the point that the Debian CD installer will fail to see a disk
target.

I'm a little amazed to be encountering some of the same level of "huh?"
within Debian users as I did on the Gentoo forums.  It seems to be a
gap of understanding between people who run hobby or single production
servers and those who have server rooms with a variety of hardware,
services, and OSes to maintain.  The latter group need much of what
Debian offers.

The installer's old kernel has been a thorn in the side for arguing
in favour of Debian in my shop.

It is one thing for a user who controls their environment completely
to start up non-standard solutions for installation.  It is another
thing for people who have managers that question everything,
want their thumbs in the pie, and prefer to see conservative
actions taken for something as typical as an OS install.
In such environments, even taking the time to set up a DHCP
server to support the PXE/netboot could be questioned.  As well,
some of the steps for supporting a network boot are
not the sort of thing I'd give to a junior admin that is already
capable of doing a normal install.
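For a sense of the setup overhead being weighed here: a PXE netboot
needs DHCP and TFTP service roughly like the following dnsmasq
fragment.  The interface name, address range and paths are placeholder
values for illustration, not a recommended configuration:

```
# /etc/dnsmasq.conf fragment (hypothetical values) for PXE-booting
# the debian-installer netboot image
interface=eth0
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-boot=pxelinux.0        # boot file name handed to PXE clients
enable-tftp
tftp-root=/srv/tftp         # unpack the netboot tarball here
```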

Some things to consider.

If you are interested in seeing Debian deployment go up, installer
developers should take heed. If you just want to argue that Debian
is better than brand X, don't bother - I already know it.

--Donald


A market perspective on the impact of dunc-tanc

2006-12-14 Thread D G Teed

Howdy,

I'm a sysadmin of the Unix half of a small University
main server room.  Recently we have been trying to
decide on a replacement for FreeBSD for 14 servers.

I favor Debian; however, I can't make that decision on
my own.  I found it a challenge to convince
others in the decision-making process that Debian
is solid and here to stay when Dunc-Tank causes
the Weekly News to drop out of consistent appearance.
I know there are alternate sources of information, but
one must consider that non-Linux users are amongst
the visitors to the Debian project web site.

It is the small things about the Debian web site which
make Debian look less maintained than it really is.
I understand the political tug-of-war the DWN editor
is involved in, but in the end, holding a gun to the head
of what you like isn't helping anything.  The missing DWN
is one missing piece of "product" continuity, and reading
the "why" just makes things worse.  The people involved
are shown to be struggling for their individual rights
on the same level as teenagers refusing to return
someone's possessions until the other person returns
something they are missing (regardless of whether
they really need it).  Principled self-righteousness is
something that even 6-year-olds can master (I have one).
Its absence in mature people is sometimes mistaken for
lack of awareness or apathy.

It would be great if snarls between developers' perspectives
had no impact on DWN and other aspects of Debian.  If a person
developing Debian truly loves what they are doing, Dunc-Tank
should have no impact on what they are contributing either way.
One way to protest it is to ignore it and stick to the essentials.
It might seem insane, but there are people in the world
who plant crops while bullets and mines are real threats.
There is no point protesting when what you need to do
is ensure you have food to live on in the future.

It is the same with Debian.  It will only grow stronger with the
continued efforts of volunteers.  If it wobbles and appears like the
project web site of something much smaller, decision makers will not
trust Debian as a mature, robust and trustworthy source of
Linux and Linux applications.

So far, I have failed to convince the other decision makers that
Debian deserves more roles in our server room, and we
are headed toward adoption of Redhat.  Yes, it is insane that
decisions like this are made by someone with 20 minutes
of experience installing Linux, but that is how it is.  They
might have been more willing to consider my opinion
if Debian's web appearance, newsletters, etc. demonstrated
that Debian is backed by a "thousand plus" highly talented developers.

Typically when I have criticism of something open source, I hear
back retorts of "why don't you volunteer to fix it?".  I will answer
that right now.  It takes all kinds of people to make Debian a
success, not only people writing code and documentation.
I have contributed to open source projects where I've
had the time and talent to do so.  At the current stage
of my life I don't have the time to do more.  So my main role in
supporting it will be advocate, user, product demonstrator
and, perhaps once in a while, commenter.

If there are other users who also feel this issue has degraded the
appearance of the Debian project and its web site, you might share
your view.

--Donald


Debian Hardware Compatibility list for 3.1 r2

2006-06-29 Thread D G Teed

Hi,

I've seen a reference to the Debian Hardware
Compatibility list in a Meta manual, but could
not locate it in places that made sense to me
under:

http://www.debian.org/doc/manuals/

Does anyone know where I can see the same type
of reference that FreeBSD provides for
hardware support from the installer
for each release?

I know that with a new kernel I can get anything
supported in Linux working, but the issue is
with hardware such as installing directly to
hardware RAID.  I need to know if a RAID controller
is supported before shelling out $1000 for 2
of them.
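In the absence of a published compatibility list, one rough self-serve
check is to match the controller's PCI ID (as shown by `lspci -n` on
any live CD) against the installer kernel's `modules.pcimap`.  The ID,
module name and map line below are made-up placeholders to show the
mechanics, not data for any real controller:

```shell
#!/bin/sh
# Sketch: does a given kernel know a driver for this PCI ID?
# On a real system the map lives at
# /lib/modules/$(uname -r)/modules.pcimap; here we use a sample file
# containing one line in the same "module vendor device ..." format.
pcimap=$(mktemp)
echo 'fakeraid             0x00001234 0x00005678 0xffffffff 0xffffffff' > "$pcimap"

vendor=1234; device=5678   # vendor:device pair as reported by `lspci -n`
if grep -qi "0x0000${vendor} 0x0000${device}" "$pcimap"; then
    echo "driver found: $(awk '{print $1; exit}' "$pcimap")"
else
    echo "no driver for ${vendor}:${device} in this kernel"
fi
rm -f "$pcimap"
```

With the sample line above, this prints "driver found: fakeraid"; on a
real installer kernel you would grep the actual pcimap for the real ID.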


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




ps showing 0:00 TIME for bind9 with -t chroot

2006-07-25 Thread D G Teed
Hi,

Here is a scenario... Two servers: both Debian 3.1 stable, both running
bind 9.2.4.1 installed by apt-get.  One bind runs with
-t /var/lib/named (bind's chroot option) while the other does not.

Both name servers are working properly and are performing fine.  The
chrooted bind will show 0:00 for processing time from ps -aux, while
the non-chrooted case will show some processing time has elapsed.

USER   PID %CPU %MEM   VSZ  RSS TTY  STAT START   TIME COMMAND
bind 28356  0.0  1.8 44548 16944 ?   Ss   10:02   0:00 /usr/sbin/named -u bind -t /var/lib/named

(The chrooted named server is far more busy as well, so it isn't simply
a case of an idle service.)

I checked the bind chroot howto and don't see anything I've missed.
Googling hasn't shown anything related to it thus far, and up to now my
LUG hasn't suggested a solution.  My feeling is that I'm missing
something in named's dev (I've got null, random and log) or similar.

--Donald Teed


dpkg --force-depends causes problems for apt-get later

2006-08-18 Thread D G Teed

Howdy,

I have a package, installed via alien, for the
Legato Networker backup client.  It comes
with X versions of the client, which I
don't need.  Therefore I want the install to
ignore the xlibs and other dependencies.

# dpkg --force-depends -i lgtoclnt_6.1-2_i386.deb
(Reading database ... 12716 files and directories currently installed.)
Preparing to replace lgtoclnt 6.1-2 (using lgtoclnt_6.1-2_i386.deb) ...
Unpacking replacement lgtoclnt ...
dpkg: lgtoclnt: dependency problems, but configuring anyway as you request:
lgtoclnt depends on libx11-6 | xlibs (>> 4.1.0); however:
 Package libx11-6 is not installed.
 Package xlibs is not installed.
lgtoclnt depends on libxext6 | xlibs (>> 4.1.0); however:
 Package libxext6 is not installed.
 Package xlibs is not installed.
lgtoclnt depends on libxt6 | xlibs (>> 4.1.0); however:
 Package libxt6 is not installed.
 Package xlibs is not installed.
Setting up lgtoclnt (6.1-2) ...

That method works, as does doing it with --ignore-depends=xlibs

However, later when I want to install anything with apt-get,
I get complaints about the previous installation status.
e.g.:

# apt-get install libncurses4
Reading Package Lists... Done
Building Dependency Tree... Done
You might want to run `apt-get -f install' to correct these:
The following packages have unmet dependencies:
 lgtoclnt: Depends: libx11-6 but it is not going to be installed or
xlibs (> 4.1.0) but it is not going to be installed
   Depends: libxext6 but it is not going to be installed or
xlibs (> 4.1.0) but it is not going to be installed
   Depends: libxt6 but it is not going to be installed or
xlibs (> 4.1.0) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or
specify a solution).

Of course apt-get -f install would install the whole bloody X Window
system, which I don't want to do on a nice minimal headless server.

I'm sure there is a solution to this by hand-massaging /etc/apt files
or something like it.

Anyone with a suggestion other than abusing the force and ignore
depends switches, or installing X?

--Donald






Re: dpkg --force-depends causes problems for apt-get later

2006-08-20 Thread D G Teed

Hey, thanks a bunch, Dwayne.

equivs was exactly the sort of solution I hoped
existed.  I knew Debian had a rational fix
for this but had never used equivs before.

First I had to uninstall lgtoclnt so that apt-get
would let me do things.

Then I installed equivs: apt-get install equivs

Then I built a simple control file:

# cat xlibs.ctl
Section: X11
Package: xlibs
Version: 4.1.1
Provides: xlibs
Description: Xlibs dummy package
This package provides dpkg with the information that
there is a xlibs package installed.
.
Now installing lgtoclnt will not push for X support


Then I built a dummy package:  equivs-build xlibs.ctl
And finally: dpkg -i xlibs_4.1.1_all.deb
dpkg -i lgtoclnt_6.1-2_i386.deb

dpkg -l shows that the package is not real:

# dpkg -l | grep xlib
ii  xlibs  4.1.1  Xlibs dummy package


--Donald Teed


On 8/19/06, Dwayne C. Litzenberger <[EMAIL PROTECTED]> wrote:

The --patch option for alien might also help you build a lgtoclnt package
that doesn't include the X clients.

Alternatively, the 'equivs' package might help.  It will let you build and
install a fake package that Provides: xlibs.  However, keep in mind that if
you *do* decide to install X later, you'll have to install the real
versions of whatever packages you faked with equivs.


--
Dwayne C. Litzenberger <[EMAIL PROTECTED]>











mga error after upgrade to xfree86 4.3.0.dfsg.1-14sarge2

2006-10-22 Thread D G Teed

Hello, Debian Users of the World...

I did have XFree86 working well with dual monitors, a 2.6.16.18 kernel
and the prior xfree86 (stable sources) from the spring and summer of
2006.

A recent apt-get upgrade brought in a newer xfree86 server, and when I
recently rebooted, I found X would not start.  I've tried building the
kernel again with no DRM/MGA modules, in case they were conflicting
with those from XFree86 4.3.0.dfsg.1-14sarge2.  No change.

Here are the types of errors in the XFree86.log file:


Required symbol MGAValidateMode from module
/usr/X11R6/lib/modules/drivers/mga_drv.o is unresolved!
Required symbol MGASetMode from module
/usr/X11R6/lib/modules/drivers/mga_drv.o is unresolved!
Required symbol MGASetMode from module
/usr/X11R6/lib/modules/drivers/mga_drv.o is unresolved!
{...}
Fatal server error:
Some required symbols were unresolved   

Searching in Google, I see others hit by this bug, but their solution
is to compile xorg-based drivers for mesa and such.  Given that this
XFree86 is Debian custom stuff, where do I go to do the equivalent of
the information on this web site:

http://dri.freedesktop.org/wiki/Building

Is there an apt-based way to build this from Debian sources and
resolve my conflicts?

Of course, I can run X with vesa, but that doesn't give me the
beauty and productivity of dual head display.

--Donald






Re: debian on xserve?

2009-05-21 Thread D G Teed
On Sat, Jan 12, 2008 at 2:29 PM, Jose Luis Rivas Contreras <
ghostba...@gmail.com> wrote:

>
> Mark Quitoriano wrote:
> > anyone tried to install debian on xserve? what architecture do i need to
> > use? x86_64?
> >
> > --
> > Regards,
> > Mark Quitoriano
> > http://asterisk.org.ph
> >
> > Fan the flame...
> > http://www.spreadfirefox.com/?q=user/register&r=19441
> > 
> According to [1] there seems to be an issue because XServe's running EFI
> instead of a BIOS actually. But seems is only an issue with x86_64, so
> if it's a Intel Core 2 Duo you can run i386 there which seems to have
> EFI support enabled.
>
> [1]
>
> http://forums.debian.net/viewtopic.php?p=121061&sid=ccc77e165b7101b9d0ad2d52371e1204
>
Howdy,

Some time has passed.  I google and don't see much newer
than this about the progress with booting with EFI.  I see
one reference to something making it into the newer kernels.

Has this made it into an installer, and is there a grub/lilo boot
loader which will work on the Apple-Intel Xserve?  Ours is a 64-bit
Xeon, so no chance of the i386 solution.

Also we have a nice Xserve RAID storage system with it - given to us by
a department which is finished with it or unable to use it.  I wonder
if the HBA (looks like near-line SAS connections) will be supported by
the Linux kernel.

Regards,

--Donald


Re: power-efficiency & non-x86 desktop systems

2008-11-14 Thread D G Teed
On Fri, Nov 14, 2008 at 6:49 PM, elijah rutschman <[EMAIL PROTECTED]> wrote:

> Hello,
>
> I have recently heard that ARM CPU's tend to be more power efficient
> than x86 CPU's.
> I know that several free operating systems, Debian GNU/Linux included,
> support some non-x86 architectures, such as ARM and MIPS.
>
> So, this brought 2 questions to mind:
> Which processor architecture, or specifically, which CPU lines are
> specifically designed to be power-efficient?
>
> Are there any non-x86 motherboards with PCI, SATA, USB, etc. that
> would be powerful enough for moderate desktop use, i.e. web surfing,
> compiling software packages, and playing movies?
>

You can build a desktop from an Intel Atom based motherboard
and CPU: 40 watts maximum for the CPU and mainboard.
With a high-efficiency power supply, I'm measuring 42 watts idle,
47 watts under load.  That is with twin 320 GB SATA drives, 2 GB
of RAM and a DVD drive.

I have one running Debian as a web server and mail server (light
duty), and I also run a desktop on it.  Low-cost hardware, low-cost
power consumption.  I am predicting this kind of product will be big
in the future, as energy costs rise.


Re: power-efficiency & non-x86 desktop systems

2008-11-15 Thread D G Teed
On Fri, Nov 14, 2008 at 11:30 PM, Ron Johnson <[EMAIL PROTECTED]> wrote:

> On 11/14/08 21:22, D G Teed wrote:
> [snip]
>
>>
>> You can build a desktop from an Intel Atom based motherboard
>> and CPU.  40 Watts maximum for the CPU and mainboard.
>> With a high efficiency power supply, I'm measuring 42 watts idle,
>> 47 watts under load.  That is with twin 320 GB SATA drives and 2 GB
>> of RAM and a DVD drive.
>>
>> I have one running Debian for web server and mail server (light duty) and
>> I also
>> run a desktop on it.  Low cost hardware, low cost power consumption.
>> I am predicting this kind of product will be big the future,
>> as energy costs rise.
>>
>
> What's it's performance when watching a Flash-heavy website, or playing a
> movie?
>

It is surprisingly good.  It has some form of hyperthreading, as
two processors show up in top.  Many say it is comparable to a
1400 MHz Pentium 4 Celeron.  I was coming from a dual PIII 550
system and noticed a dramatic improvement, partly from the FSB
and memory speed upgrade.  I have not tried QuickTime movie trailers,
just typical movie playback sites like YouTube, and it has no hiccups.

Mine is the Atom 230.  There is another, the 330, which is
dual core.  Someone benchmarked the Atom 270, 330, and a regular
Intel Core Duo here:

http://forum.eeeuser.com/viewtopic.php?id=43085

If you are really concerned about multimedia, get the 330.
In the Windows world, people are building home theatre PCs
around these.

For me, the 230 is fine for a light web and email server.
The only reason to get a traditional CPU is for heavy server
or heavy gaming use.

I figure mine will pay for itself through $400 of energy savings over 4 years.
Our electricity is going up by 10% in January, and that never seems to come
back down.
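That payback estimate can be sanity-checked with some back-of-the-envelope
arithmetic.  The wattages and starting electricity price below are
illustrative assumptions on my part, not measurements from the poster's
systems:

```shell
# Rough payback arithmetic (assumed numbers: old box ~120 W, Atom box ~45 W,
# running 24/7, electricity starting at $0.12/kWh and rising 10% per year).
awk 'BEGIN {
    delta_kw = (120 - 45) / 1000.0   # power saved, in kW
    rate = 0.12                      # starting price per kWh
    total = 0
    for (y = 1; y <= 4; y++) {
        total += delta_kw * 8760 * rate   # 8760 hours in a year
        rate *= 1.10                      # assumed 10% annual increase
    }
    printf "4-year savings: $%.0f\n", total
}'
# prints: 4-year savings: $366
```

With slightly higher assumed wattage for the old machine, this lands right
around the $400 figure mentioned above.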


Re: How to create an image of HD?

2007-12-28 Thread D G Teed
udpcast and g4u use a dd method, and are bootable from CD-ROM,
PXE, network, floppy, etc.
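As a minimal sketch of what these tools do under the hood - shown here on
throwaway image files rather than real block devices, so it is safe to run:

```shell
# Create a small fake "disk" image and clone it with dd.  On real hardware
# the if=/of= arguments would be device paths such as /dev/sda and /dev/sdb
# (double-check them: dd will happily overwrite the wrong disk).
dd if=/dev/zero of=src.img bs=1M count=4 2>/dev/null
printf 'some filesystem data' | dd of=src.img conv=notrunc 2>/dev/null

# Raw sector-by-sector copy; conv=noerror,sync keeps going past read
# errors, padding the bad blocks, which matters on failing disks.
dd if=src.img of=dst.img bs=64k conv=noerror,sync 2>/dev/null

cmp -s src.img dst.img && echo "clone matches"
# prints: clone matches
```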


On Dec 28, 2007 10:43 PM, Raj Kiran Grandhi <[EMAIL PROTECTED]> wrote:

> Max Hyre wrote:
> >> Amit Uttamchandani wrote:
> >>
> >> If you are planning on having the same partition size for your root
> >> partition, then you can simply use dd to clone the entire parition.
> >>
> >> Actually, the dd method will work even if you want to have a larger
> >> partition on your new drive.
> >
> >Couldn't you simply dd the entire drive (dd if=/dev/hda of=hdb), then
> > use parted to enlarge and adjust things to fit the new size?
>
> You might be able to, but it involves more effort, consumes more time
> and is not as reliable as cloning partitions. Say you want to clone an
> 80GB disk with four partitions of 20GB each onto a 160GB disk with four
> partitions of 40GB each. If you were to clone the entire disk, you will
> end up with four partitions on the 160GB disk which use only the first
> 80GB. The remaining 80GB may be inaccessible if the existing partitions
> are all primary. Even otherwise, there is no point in mucking about with
> moving the data around after cloning.
>
>
> --
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact
> [EMAIL PROTECTED]
>
>


Re: ClamAV update to 0.97

2011-03-15 Thread D G Teed
On Thu, Feb 17, 2011 at 1:53 PM, Camaleón  wrote:

> On Thu, 17 Feb 2011 11:20:44 -0600, Boyd Stephen Smith Jr. wrote:
>
> > On Thursday 17 February 2011 11:05:42 Camaleón wrote:
>
> >> > From what I understand, the clamav binaries are only updated in
> >> > stable (even in stable/volatile or stable-updates) when a new version
> >> > is needed in order to use the updated virus definitions, or for the
> >> > normal stable update criteria.
> >>
> >> Uh? Is that true? I thought the whole volatile repo was also handling
> >> "oldstable" packages? :-?
> >
> > I wasn't clear.  I mean that just because there is a new upstream
> > version of ClamAV, that doesn't mean it will get included in volatile.
> > It might be appropriate for volatile, but not all new upstream versions
> > are.
>
> Yes, I know that and I'm fine with that policy. What made me a bit
> nervous was not seeing much activity in volatile's mailing list.
>
> >> > However clamav (and more and more software) starts getting noisy as
> >> > soon as upstream provides a new version, for whatever reason.  Even
> >> > in A/V software, not every upgrade is appropriate for stable.
> >>
> >> Well, I don't read all and each of the ClamAV new released changelogs
> >> to see what has been patched, but being an AV I'd expect a new version
> >> corrects some severe bugs and not just "cosmetic" errors.
> >
> > While I don't think your expectation is well-founded, if it is the case
> > that the new version corrects some severe bugs, I would expect it not
> > only in lenny-volatile but also lenny-proposed-updates.  Maybe not
> > lenny-proposed- updates, but I think the RC-level bug fix policy in
> > oldstable is roughly the same as stable.
>
> Here is the changelog... you finally made me read it ;-)
>
>
> http://git.clamav.net/gitweb?p=clamav-devel.git;a=blob_plain;f=ChangeLog;hb=clamav-0.97
>
> >From 0.96.5 (released on Tue Nov 30) to 0.97 (released on Mon Feb 7) I
> can't see any patch that can be considered dangerous or remotely
> exploitable, so all seems okay. I'll patiently wait and see.
>
> Greetings,
>
> --
> Camaleón
>

What I'm seeing now puzzles me.  I'm considering the same situation, except
for squeeze.

The packages site says 0.97 is available in lenny-volatile,
but 0.97 is not showing up in squeeze-updates, which is supposed to
replace volatile.

I can understand the conservative path, but the whole point of the fork
in the tree is to give people the choice to run the more cutting-edge
releases of volatile-style packages.

This should not require compiling from source.  We choose Debian
over Slackware et al. because we prefer to work within a package management
system.  Some of us are not maintaining hobby boxes.

When it comes to virus scanning, there is little point in getting an update
that only supports last year's viruses.  This package needs to be current
to have any value at all.

I'm OK with seeing the warning from ClamAV for 30 days or so, but if there
are any massive glitches to be concerned about, they should show up within
that window, and we should be safe to upgrade.

I question whether squeeze-updates really works as a replacement for
volatile.  I don't see mention of it in the Debian packages reports, e.g.:

http://packages.debian.org/search?keywords=clamav

--Donald


Why policyd from 2007 in squeeze?

2011-04-01 Thread D G Teed
The version of policyd in squeeze shows a version number of 1.82,
and the files in that version's tar from the project are dated 2007.

Does anyone know why Debian squeeze is holding back on
the move to version 2.0?  I understand it is a rewrite - is the upgrade
path the only concern, or are there reasons to expect 2.0 needs time
to mature or become more stable?

In many other cases like this, Debian would provide both versions,
and allow the legacy cases to be supported, while new deployments
could opt for the newer version.

--Donald


Re: Why policyd from 2007 in squeeze?

2011-04-01 Thread D G Teed
On Fri, Apr 1, 2011 at 2:18 PM, Camaleón  wrote:

>
> Hum... there was a wishlist bug:
>
> postfix-policyd: Please package version 2.x
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=561085
>
> Maybe you can add yourself to it and ask for additional information on
> the matter (foreseen date for the package to be included, if they need
> help/volunteers to test the new packages, etc...)
>
> Greetings,
>
> --
> Camaleón
>
>
OK, thanks - it is now in sid and wheezy as the package postfix-cluebringer.

Strange name for something expected to be deployed in
this role.  Policyd was better.

--Donald


Way to have terminal/console application stack in gnome like in KDE 3.5's kicker?

2011-02-16 Thread D G Teed
Having lost KDE 3.5 in the squeeze update, and not being satisfied with the
new KDE 4.* (frankly, I think it is very poorly designed), I am looking
for a desktop which can stack running terminal sessions.

Let's say I have 50 Konsole or gnome-terminal windows open, each to a
different remote box.  I want to click on the panel area and select one by
name which is already open.  I could do that in KDE 3.5.  Firefox and other
apps could do this too.  How is this done in gnome or what options are there
for managing many open sessions of something?

--Donald


Re: Way to have terminal/console application stack in gnome like in KDE 3.5's kicker?

2011-02-16 Thread D G Teed
On Wed, Feb 16, 2011 at 9:13 PM, Dr. Ed Morbius wrote:

> on 20:51 Wed 16 Feb, D G Teed (donald.t...@gmail.com) wrote:
> > Having lost KDE 3.5 in the squeeze update, and not being satisfied with
> the
> > new KDE 4.* (frankly, I think it is very poorly designed), I am looking
> for
> > a desktop
> > which can stack running terminal sessions.
> >
> > Let's say I have 50 Konsole or gnome-terminal windows open, each to a
> > different remote box.  I want to click on the panel area and select one
> by
> > name which is already open.  I could do that in KDE 3.5.  Firefox and
> other
> > apps could do this too.  How is this done in gnome or what options are
> there
> > for managing many open sessions of something?
>
> For simply managing windows, I find WindowMaker vastly superior to KDE,
> GNOME, or XFCE4.  There's a window list by default (middle-mouse on
> desktop, or ).  This is pinnable, and it's pretty easy to select
> and walk through a set of windows quickly (though you can't, say,
> text-search through a list of names, which would be sort of cool).
>
>http://main.linuxfocus.org/~georges.t/menu.html
>
> You might even find a "mouseless" tiling/tabing WM (e.g.: ionwm) to be
> useful in this context.  You can designate sections of your desktop to
> specific apps, and stack up multiple instances of an app in one spot.
>
>http://en.wikipedia.org/wiki/Ion_(window_manager)
>
> Sadly, development on numerous good but older WMs has stagnated (ion's
> in stasis since late 2009, WindowMaker's last upstream commits were in
> 2005).
>
> I'd also suggest you look at your workflow if it requires you to keep 50
> open remote shell sessions:
>
>  - Generally: scripting remote interactions.
>  - Use 'dsh' or other tools to run similar commands on multiple
>systems.
>  - Manage systems via puppet, monit, etc., rather than interactively.
>  - Use the KDE Terminal / GNOME Terminal built-in multiplexing
>features.
>  - Use another terminal multiplexer such as screen or tmux.
>
>
> What are you doing that requires 50 terminal sessions?  How do you plan
> on managing this when your server count doubles?  Increases by an order
> of magnitude?
>
>
This is at a university, so each system is pretty much unique in purpose,
packages, etc.  There are, for example, roughly 10 Solaris SPARC boxes:
one is the financial system, another library management, another an
Oracle DB, another the student system, etc.  Most of the others are Linux.
Two of those are Cyrus mail servers, another two are MX, then one Moodle
system, 5 different systems for Computer Science, one for Icecast
streaming, LON-CAPA, and many specialized boxes, some for research grants,
etc.  There are not really more than 2 of the same thing, except in the
Math cluster, and I usually work on only one system from that cluster.

Anyway, this may be partially misunderstood.  I'm not looking for a
solution to manage the remote systems, and I'm not doing something on all
50 terminals at once.  But over the course of a few days I end up having
up to 50 terminals open from recent work, and it makes sense to reuse
those terminal sessions.

I merely want to pick a terminal session that is already open to the
system I want to work on, if it exists.  Likewise, to pick one of my web
browser windows from a stack of open windows.  Thus, the stacking in KDE
3.5's kicker was just the thing.

--Donald


Re: Way to have terminal/console application stack in gnome like in KDE 3.5's kicker?

2011-02-16 Thread D G Teed
On my LUG list, someone provided a clue: there is a solution
which works for GNOME.  It causes open windows to be grouped
in the GNOME panel.

It isn't obvious where this is.  In the bottom left corner of the screen,
you've got the widget to "hide all windows and show the desktop".
To the right of that, before your first open task, are three vertical dots.
Right-click on this small region and it has "Preferences" as an option.
I selected "Always group windows" - exactly what I wanted.
It's unbelievable that they bury this and don't expose it in the GNOME
control panel.

Thanks to all for reading, and your suggestions.

--Donald


Has anyone built postfix 2.8.0 with TLS support from source?

2011-02-22 Thread D G Teed
I have a good postfix setup for TLS, supporting secure SMTP with SASL auth
for roaming users.  It works fine on an earlier prerelease of Postfix 2.8,
compiled against the dev libraries in squeeze around November 2010,
prior to the squeeze release.

I have built the fully released Postfix 2.8.0 from source
with the same build options, against current dev packages in the fully
released Debian Squeeze.

If I update the binaries (make upgrade from the 2.8.0 source), the
smtpd daemon is killed with signal 11 when connecting over TLS.

I can simply cd to my earlier postfix 2.8 source from fall 2010,
do a make upgrade and the problem goes away.

One of the variables here is the new openssl dev package since 2010.  I
have not chanced doing a plain 'make' in the older postfix 2.8 source for
fear of losing my good binaries, although I suppose I could copy them away
for safekeeping and try it.

I've already asked on the postfix mailing list about smtpd being killed off.
The only suggestion so far is to try a postfix 2.8.1 RC.

So I thought I'd ask here if anyone has compiled postfix 2.8.0 from
source with working TLS support on squeeze.
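For reference, the build incantation I would expect for a TLS- and
SASL-enabled Postfix is roughly the one described in Postfix's own
TLS_README and SASL_README; the include path and library names below are
typical assumed values for a squeeze system, not the original poster's
actual build options:

```
# Hedged sketch: build Postfix from source with TLS and Cyrus SASL support.
# Adjust -I paths and AUXLIBS to match your installed -dev packages.
make tidy
make makefiles CCARGS='-DUSE_TLS -DUSE_SASL_AUTH -DUSE_CYRUS_SASL \
    -I/usr/include/sasl' AUXLIBS='-lssl -lcrypto -lsasl2'
make
make upgrade   # installs over an existing Postfix
```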


Re: Has anyone built postfix 2.8.0 with TLS support from source?

2011-02-22 Thread D G Teed
On Tue, Feb 22, 2011 at 8:54 PM, D G Teed  wrote:

>
> So I thought I'd ask here if anyone has compiled postfix 2.8.0 from
> source with working TLS support on squeeze.
>
>
>
I've since compiled postfix 2.8.1 from source, which also happened
to be released tonight.  It fixes the problem.

--Donald


problems booting IBM servers with 2.6.32 kernel

2011-06-17 Thread D G Teed
Thought I would share this solution/workaround.

I had a kernel oops during boot following the kernel update
to 2.6.32 for Debian squeeze on an IBM x345.  It looked like
this problem:

https://bugzilla.kernel.org/show_bug.cgi?id=26692

I tried removing 'quiet' from the kernel args, but that simply changed
where the kernel oops happened.  The old 2.6.24 from Debian 5 was
still booting fine.  Eventually I came across this page:

http://wiki.debian.org/InitramfsDebug

On another old IBM xSeries server some weeks ago I had
found rootdelay helped.

In the case for the x345, I used both of these:

rootdelay=9 scsi_mod.scan=sync

After this, 2.6.32 boots and I can continue with apt-get dist-upgrade
per the Debian upgrade instructions.
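For grub-legacy (still common on a squeeze machine upgraded from lenny),
these arguments go on the kernel line in /boot/grub/menu.lst.  The kernel
version and root device in this fragment are placeholders, not taken from
the x345 in question:

```
title  Debian GNU/Linux, kernel 2.6.32-5-686
root   (hd0,0)
kernel /boot/vmlinuz-2.6.32-5-686 root=/dev/sda1 ro rootdelay=9 scsi_mod.scan=sync
initrd /boot/initrd.img-2.6.32-5-686
```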

I hope this helps someone.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/BANLkTi=mwkjpthx3vsnwdy1m-ff717j...@mail.gmail.com



Fwd: clamav 0.97.1 not coming to squeeze-updates ?

2011-06-27 Thread D G Teed
On Mon, Jun 27, 2011 at 8:58 AM, Eric Viseur  wrote:

> Hi list,
>
> In my understanding, the stable-updates repo was established in order to
> replace the volatile repo.  Thus, updated clamav should be pushed in it, but
> I note it's only available in the testing branch.
> Did I misunderstand the role of the stable-updates repo, or is clam 0.97.1
> just coming in late ?  I have a server complaining about clam not being up
> to date every night, so it's getting a little annoying, and I'd like to
> avoid apt-pinning if possible.
>
>
I agree this is precisely the sort of thing volatile/squeeze-updates
was designed for.  Unfortunately, we sometimes see the warning from
clamav in the logs for a month or more before the package is updated.

Our fallback would be to reinstall from the base repos if the updated
version caused issues.  With clamav it is important to detect current
viruses, so I'd think this is one where we err on the side of freshness
to fulfill its usefulness.  I'd rather have an update that occasionally
had issues than a daemon which is solid as a rock but old and doesn't
detect current malware.


Debian safe-upgrade to 6.0.2 - don't run within X session

2011-06-27 Thread D G Teed
If you run Debian on the desktop, note that the current updates
coming down the pipe for 6.0.2 via safe-upgrade
may include an xserver package update (they did for me, and
my system was up to date before).

If you run your safe-upgrade from within an X
session, it will cause X to restart, interrupting the update.
If this happens, only some of your package updates
will have completed and some will be incomplete.
Nothing tragic will happen; you just have to run safe-upgrade
again outside of the X environment (not in an xterm, etc.).

I suspect that, for the same reasons, using Synaptic
to upgrade will also fail, but I did not verify its
behavior.

Someone might ask: what is a good way to upgrade desktop
systems in this case?

If you run a desktop, use Ctrl+Alt+F2 to get to
a virtual terminal, log in, and do your upgrade from
there.  During the upgrade to the 6.0.2 packages, X will
restart, shutting down any open X applications
without warning.  Use Ctrl+Alt+F8
to return to the X session once again.


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-06-27 Thread D G Teed
On Mon, Jun 27, 2011 at 11:52 AM, Camaleón  wrote:

> On Mon, 27 Jun 2011 13:58:43 +0200, Eric Viseur wrote:
>
> > In my understanding, the stable-updates repo was esthablished in order
> > to replace the volatile repo.  Thus, updated clamav should be pushed in
> > it, but I note it's only available in the testing branch. Did I
> > misunderstood the role of the stable-updates repo, or is clam 0.97.1
> > just coming in late ?
>
> Nope, your understanding is totally correct, but...
>
> > I have a server complaining about clam not being up to date every
> > night, so it's getting a little annoying, and I'd like to avoid apt
> > -pinning if possible.
>
> ... for the stable/oldstable branch my understanding is that only security
> bugfixes are corrected, so if the clamav update does not close any
> serious flaw you will keep seeing the clamav warning in the logs. But
> don't worry, your clients are still protected, your AV definitions updated
> and your files analyzed for any threat.
>

This is incorrect.  Here is the announcement of squeeze-updates,
with a list of reasons why it will push updates ahead of a release:
http://lists.debian.org/debian-volatile-announce/2011/msg0.html

It even mentions clamav as one which needs to be current to be useful.

Our expectation that squeeze-updates will release clamav ahead of stable,
merely to stay current, is correct.

Note you don't need to use squeeze-updates, so we are opting into something
which can be a little more bleeding edge.

If you only want security and bug fixes, those are handled by the
security repo and the standard stable repo.


Re: Debian safe-upgrade to 6.0.2 - don't run within X session

2011-06-28 Thread D G Teed
On Tue, Jun 28, 2011 at 12:25 AM, Scott Ferguson <
prettyfly.producti...@gmail.com> wrote:

>
> Opening a vt will do nothing to "protect" any running x-apps. If
> concerned about x-apps whilst doing an upgrade - logout of x and login
> to a console then shutdown x.
>

I'm afraid users could be confused by this and other statements.
There is a point to the original post which seems to be missed.

Logging out of X can be done, but it is optional as long as you
don't have important work to save within X applications.
What is "login to a console" in your description?
It is basically a VT, or an ssh session from another machine.  A VT is
very valuable, and we should avoid referring to it loosely, so everyone
understands regardless of their usual terminology.

I don't think some people "get it" in terms of why I posted this.
Please read and consider fully...

Doing a dist-upgrade is a major upgrade.  It is not done often, and it is
done with the Debian upgrade guide nearby.  That guide tells
us very clearly to do the dist-upgrade from a VT console or over an ssh
session, and not within X.  End of debate on that.

Doing a safe-upgrade (I refer to the aptitude command-line argument here)
is done very often, and many people like myself have done it
within a desktop X terminal window, for many years, quickly and routinely.
For 6.0.2, xserver is upgraded for the first time in a long time.
If desktop users don't realize what has happened,
the X restart could take them by surprise and they won't end up
truly upgrading all packages.  I can imagine this confusing
some users.

Personally, I don't want to close all my X session windows every time
I do an aptitude safe-upgrade, and I will continue to run it within X,
unless I see another case like 6.0.2 where part of X is being
updated.

6.0.2 safe-upgrade was atypical, and thus the purpose of the post.


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-06-28 Thread D G Teed
On Tue, Jun 28, 2011 at 8:59 AM, Camaleón  wrote:

>
> I'm still with lenny (now oldstable) but I was even told that not all
> security flaws reached votatile, just some, depending of the nature of
> the flaw...
>
> And again, if this policy has recently changed is more than very welcome,
> my clamav is also claiming for an update and oldstable is still supported.
>

If you are running oldstable, I don't know where you'll find a
current statement on what goes into volatile.  It is really
a different repo from squeeze-updates.

The statement of the squeeze-updates repo's
purpose makes it very clear it isn't for security:

http://lists.debian.org/debian-volatile-announce/2011/msg0.html

Quote:

 * The update is urgent and not of a security nature.  Security updates
   will continue to be pushed through the security archive.


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-06-28 Thread D G Teed
On Tue, Jun 28, 2011 at 10:31 AM, Camaleón  wrote:

> On Tue, 28 Jun 2011 09:54:23 -0300, D G Teed wrote:
> > The statement for squeeze-updates repo purpose makes it very clear it
> > isn't for security:
> >
> > http://lists.debian.org/debian-volatile-announce/2011/msg0.html
> >
> > Quote:
> >
> >  * The update is urgent and not of a security nature.  Security updates
> >will continue to be pushed through the security archive.
>
> The key point here is discerning if the update is "urgent" enough, and
> that's what I was referring to :-)
>
>
The precedent shows clamav was updated in volatile once since
squeeze came out, even with urgency set to low,
so you are probably in luck.  But you gotta
upgrade to squeeze sometime this year anyway.


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-06-29 Thread D G Teed
On Wed, Jun 29, 2011 at 11:52 AM, Camaleón  wrote:

> On Wed, 29 Jun 2011 15:04:28 +0200, Jochen Schulz wrote:
>
> > Camaleón:
>
> (...)
>
> >>> Lenny will reach its EOL in January 2012.
> >>
> >> Hey, but that was not my understanding for lenny. I know that was how
> >> it used to be but now aren't we based on a 2-year of release fixed
> >> cycle? :-?
> >>
> >> http://www.debian.org/News/2009/20090729
> >>
> >> In that announcement it can be read:
>
> (...)
>
> > Interesting, I don't remember that at all.
>
> Before installing a system, I carefully read what is the estimated/
> foreseen EOL for it. It's a must for me because I have servers to
> maintain and I can't go reinstalling every year.
>
> > I can only speculate about this, but I don't think this announcement is
> > relevant any more. The document is from July 2009 and predicted/promised
> > a squeeze release in early 2010. For that case only the authors promised
> > that you could skip the squeeze release. What actually happened is that
> > it took another whole year to release squeeze.
>
> Dunno, but I hope the commitment is still valid.
>

The Lenny wiki says the norm has been one year of support before EOL
after a new stable comes out, though this could change:

http://wiki.debian.org/DebianLenny#Debian.2BAC8-Lenny_Life_cycle

That wiki was last updated on Feb 7, 2011.

You cannot skip a version when upgrading, though of course
you could when re-installing.  A Debian system doesn't have
to be upgraded a major version every year - it is more like
once every 2 or 3 years, depending on the release timing.


Re: Debian safe-upgrade to 6.0.2 - don't run within X session

2011-06-29 Thread D G Teed
On Wed, Jun 29, 2011 at 12:30 AM, Scott Ferguson <
prettyfly.producti...@gmail.com> wrote:

>
> Yes.
>
> You are the only person I'm aware of reporting an incomplete upgrade.
> (I've just checked again this morning)
>
> Are you absolutely certain the update of the xserver package caused your
> upgrade to fail??
>

Yes, I believe it was during the configuration stage that
the X or gdm restart happened.  One poster here mentioned
a prompt asking whether it was OK to restart gdm.
This isn't a production system, so I wasn't paying careful
attention to whether it asked about that; I was working
on something else in another window at the time.
It may have warned that gdm would be restarted and I just
said go ahead without reading it, as it usually prompts
about services impacted by, for example, PAM updates,
and I'm never concerned about restarting those services.

Unfortunately I can't reproduce this and watch more carefully.

Later, when X was restarted and I redid aptitude safe-upgrade,
it showed about a dozen packages awaiting configuration and
another dozen or so to install.  I would have taken better notes
if I had thought this was a bug, but I assumed it was just me
not taking the precautions I should.  All I have as a record
is /var/log/aptitude, and it doesn't show failures or aborts.


> Despite testing on a number of different Gnome and KDE desktops we saw
> no problems with the upgrade - with the exception of a lack of a hint in
> the Samba upgrade message on how to exit the message. Users were advised
> how to exit that screen.


I did see the samba notification and read it; I remember
that happening.  less is already familiar to me as the pager,
so no surprises there.

Yes, I was watching the list to see if anyone else was bitten, and saw
no reports.  Perhaps my system was an oddball - it is an Atom 230
based system.


Re: Debian safe-upgrade to 6.0.2 - don't run within X session

2011-06-30 Thread D G Teed
On Thu, Jun 30, 2011 at 12:35 AM, Scott Ferguson <
prettyfly.producti...@gmail.com> wrote:

> On 30/06/11 02:41, D G Teed wrote:
> >
> >
> > On Wed, Jun 29, 2011 at 12:30 AM, Scott Ferguson
> >  > <mailto:prettyfly.producti...@gmail.com>> wrote:
> >
> 
> > Yes I believe it was during the configuration stage
> > it had the X or gdm restart.  One poster on here referenced
> > a prompt asking whether it was OK to restart gdm.
>
> Both gdm and kdm "should" save app states
>
>
I tried this on a second desktop system (aptitude run of update
followed by safe-upgrade) and it did not cause X to restart.

I wonder if my previous problem was triggered by the
time of day as it was just around midnight.

I noticed this in the aptitude output:

Setting up gdm3 (2.30.5-6squeeze3) ...
Scheduling reload of GNOME Display Manager configuration: gdm3.

I don't know how it does that, but perhaps it happened right away
and triggered a restart of X?

Anyway, I guess I've proved to myself that the problem wasn't
as generic as I had believed from the first experience.

--Donald Teed


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-07-08 Thread D G Teed
Back to the initial topic...

I did get an updated clamav yesterday on my home system.

Checking at work with the same repos (except Debian multimedia),
I don't see updates to the clamav packages as of 14:58:23 UTC 2011.

0.97.1+dfsg-1~squeeze1 is the package version I have at home.

The site :

http://packages.qa.debian.org/c/clamav.html

shows stable-updates should have the newer version.

Perhaps the repos are taking a while to get updated.

--Donald


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-07-08 Thread D G Teed
On Fri, Jul 8, 2011 at 12:18 PM, Camaleón  wrote:
>
> Packages are in there:
>
> http://ftp.debian.org/debian/pool/main/c/clamav/?C=M;O=D
>
> Did you refresh your repos (apt-get update)?
>

It is finicky.  I played with various repo sources and once did
see the clamav package group appear as an available update.  I said 'n'
to abort, because I wanted to understand exactly where the package was
coming from.

If I use only:

 deb http://ftp.debian.org/debian squeeze-updates main
 deb-src http://ftp.debian.org/debian squeeze-updates main

in my sources.list, then I can see the updates available.

--Donald





Re: Looking for an alternative to mysql

2011-07-08 Thread D G Teed
On Fri, Jul 8, 2011 at 4:02 PM, Wayne Topa  wrote:

> On 07/08/2011 12:57 PM, Roger Leigh wrote:
> > On Fri, Jul 08, 2011 at 12:50:36PM -0400, Wayne Topa wrote:
> >> Is postgresql more reliable then mysql
> >
> > Yes, without a shadow of doubt.
> >
> >> or are there other viable DB's?
> >
> > Yes, though I haven't used them myself, PostgreSQL serving my
> > needs quite nicely.  There are quite a number of alternatives;
> > google can help here.
> >
> Thanks for the reply Roger.
>
> I tried installing postgres-8.4 on Sid but it too has a problem :-(
>
> grave bugs of postgresql-8.4 (-> ) 
>  #632028 - postgresql 8.4.8 regression - failure to handle char(4) =
> bpchar (Fixed: postgresql-8.4/8.4.8-0squeeze2)
> serious bugs of postgresql-8.4 (-> ) 
>  #630569 - php5-pgsql and postgresql >= 8.4 seem to collude never to
> close idle persistent connections
> Summary:
>  postgresql-8.4(2 bugs)
>
> I can't win for losing.
>
> Wayne
>

I don't understand. Is your requirement to install a DB
package with zero bugs?  It doesn't exist anywhere.


Re: Looking for an alternative to mysql

2011-07-08 Thread D G Teed
On Fri, Jul 8, 2011 at 6:05 PM, Wayne Topa  wrote:

> On 07/08/2011 03:42 PM, D G Teed wrote:
> > On Fri, Jul 8, 2011 at 4:02 PM, Wayne Topa  wrote:
> >
> >> On 07/08/2011 12:57 PM, Roger Leigh wrote:
> >>> On Fri, Jul 08, 2011 at 12:50:36PM -0400, Wayne Topa wrote:
> >>>> Is postgresql more reliable then mysql
> >>>
> >>> Yes, without a shadow of doubt.
> >>>
> >>>> or are there other viable DB's?
> >>>
> >>> Yes, though I haven't used them myself, PostgreSQL serving my
> >>> needs quite nicely.  There are quite a number of alternatives;
> >>> google can help here.
> >>>
> >> Thanks for the reply Roger.
> >>
> >> I tried installing postgres-8.4 on Sid but it too has a problem :-(
> >>
> >> grave bugs of postgresql-8.4 (-> ) 
> >>  #632028 - postgresql 8.4.8 regression - failure to handle char(4) =
> >> bpchar (Fixed: postgresql-8.4/8.4.8-0squeeze2)
> >> serious bugs of postgresql-8.4 (-> ) 
> >>  #630569 - php5-pgsql and postgresql >= 8.4 seem to collude never to
> >> close idle persistent connections
> >> Summary:
> >>  postgresql-8.4(2 bugs)
> >>
> >> I can't win for losing.
> >>
> >> Wayne
> >>
> >
> > I don't understand. Is your requirement to install a DB
> > package with zero bugs?  It doesn't exist anywhere.
> >
>
> I am not in the habit of installing packages that known to not work.
>
> Thanks for your enlightening comment.  I look forward to your request
> for help in the future.
>
>
Sorry, but your question seemed naive and still seems so.  I was actually
hoping to hear that my understanding of your concern was wrong.

There are thousands of web sites, big and small, using MySQL and PostgreSQL.

They all have bugs, just as any Linux distribution or any other OS we choose
has bugs, some big, some small.

Oracle has bugs as well - some as serious as any found in a major open source
database project.

This inability to install a database is not a real inability, but rather
an irrational choice.  If MySQL were seriously flawed, it would surely fall
apart under the weight of its use to power Facebook.

If the Facebook example seems like a one-hit wonder, take a look at the
other "customers" of MySQL:

http://www.mysql.com/customers/

If stability is your concern, don't run with Sid.  It is for development
work and testing changes.  People might run Sid on the desktop to get
newer packages, but Sid is not for production use.


Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-07-11 Thread D G Teed
On Fri, Jul 8, 2011 at 6:01 PM, Jochen Schulz  wrote:
> D G Teed:
>>>
>>
>> It is finicky.  I played with various repo sources and once did
>> see clamav package group appear as possible.  I said 'n' to abort
>> because I wanted to understand exactly where the package was
>> coming from.
>
> You can use 'apt-cache policy $package' to see which source apt chooses
> for a given package.

Thanks for that tip.

I waited a while to see how this worked out, in case there was something
odd with a repo mirror being out of sync.  But the problem I've seen remains.

If I disable the main squeeze repo mirror in sources.list:

#deb http://mirror.its.dal.ca/debian/ squeeze main contrib non-free

then the update can be found in squeeze-updates:

# apt-cache policy clamav
clamav:
  Installed: 0.97+dfsg-2~squeeze1
  Candidate: 0.97.1+dfsg-1~squeeze1
  Version table:
 0.97.1+dfsg-1~squeeze1 0
500 http://mirror.its.dal.ca/debian/ squeeze-updates/main amd64 Packages
 *** 0.97+dfsg-2~squeeze1 0
100 /var/lib/dpkg/status

If I leave the main squeeze mirror (there are none other for plain
squeeze) in sources.list:

deb http://mirror.its.dal.ca/debian/ squeeze main contrib non-free

then the clamav update does not show as available:

# apt-cache policy clamav
clamav:
  Installed: 0.97+dfsg-2~squeeze1
  Candidate: 0.97+dfsg-2~squeeze1
  Version table:
 0.97.1+dfsg-1~squeeze1 0
500 http://mirror.its.dal.ca/debian/ squeeze-updates/main amd64 Packages
 *** 0.97+dfsg-2~squeeze1 0
990 http://mirror.its.dal.ca/debian/ squeeze/main amd64 Packages
100 /var/lib/dpkg/status

This jibes with what apt-get update; apt-get upgrade reports it
would upgrade.

I've never seen a case like this before.  The same behaviour occurs
if I change the base squeeze mirror repo to yorku.ca:

http://debian.yorku.ca/debian/ squeeze main contrib non-free

I wouldn't expect to have to comment out the core squeeze mirror entry
in order to use updates available in squeeze-updates.


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/camnr8_mtaah8ermldd2qsxasxovgxcjjnwme0f8bqy9_wxo...@mail.gmail.com



Re: clamav 0.97.1 not coming to squeeze-updates ?

2011-07-11 Thread D G Teed
On Mon, Jul 11, 2011 at 2:50 PM, Camaleón  wrote:
> On Mon, 11 Jul 2011 14:25:07 -0300, D G Teed wrote:

>>   Version table:
>>      0.97.1+dfsg-1~squeeze1 0
>>         500 http://mirror.its.dal.ca/debian/ squeeze-updates/main amd64
>>         Packages
>>  *** 0.97+dfsg-2~squeeze1 0
>>         990 http://mirror.its.dal.ca/debian/ squeeze/main amd64 Packages
>>         100 /var/lib/dpkg/status
>
> (...)
>
> 990 has more weight than 500.
>
> JFYI, my volatile repo (I updated clamav just a few days ago) shares the
> same priority that the main repo, that is, 500.
>

I have not pinned anything.

Where are these weights coming from?  Here is what I get when using
ftp.debian.org - the same story as with dal.ca or yorku.ca, with
squeeze-updates getting a lower weight:

# apt-cache policy
Package files:
 100 /var/lib/dpkg/status
 release a=now
 500 http://ftp.debian.org/debian/ squeeze-updates/non-free amd64 Packages
 release o=Debian,a=stable-updates,n=squeeze-updates,l=Debian,c=non-free
 origin ftp.debian.org
 500 http://ftp.debian.org/debian/ squeeze-updates/contrib amd64 Packages
 release o=Debian,a=stable-updates,n=squeeze-updates,l=Debian,c=contrib
 origin ftp.debian.org
 500 http://ftp.debian.org/debian/ squeeze-updates/main amd64 Packages
 release o=Debian,a=stable-updates,n=squeeze-updates,l=Debian,c=main
 origin ftp.debian.org
 990 http://ftp.debian.org/debian/ squeeze/non-free amd64 Packages
 release v=6.0.2.1,o=Debian,a=stable,n=squeeze,l=Debian,c=non-free
 origin ftp.debian.org
 990 http://ftp.debian.org/debian/ squeeze/contrib amd64 Packages
 release v=6.0.2.1,o=Debian,a=stable,n=squeeze,l=Debian,c=contrib
 origin ftp.debian.org
 990 http://ftp.debian.org/debian/ squeeze/main amd64 Packages
 release v=6.0.2.1,o=Debian,a=stable,n=squeeze,l=Debian,c=main
 origin ftp.debian.org
 990 http://security.debian.org/ squeeze/updates/non-free amd64 Packages
 release v=6.0,o=Debian,a=stable,n=squeeze,l=Debian-Security,c=non-free
 origin security.debian.org
 990 http://security.debian.org/ squeeze/updates/contrib amd64 Packages
 release v=6.0,o=Debian,a=stable,n=squeeze,l=Debian-Security,c=contrib
 origin security.debian.org
 990 http://security.debian.org/ squeeze/updates/main amd64 Packages
 release v=6.0,o=Debian,a=stable,n=squeeze,l=Debian-Security,c=main
 origin security.debian.org
Pinned packages:

Another system I checked does not have these weights - it is 500 for everything.
Neither system has anything under /etc/apt/preferences.d or
/etc/apt/sources.list.d.
Does anyone know where the 990 weight is being set?
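
[One common source of a 990 priority - stated here as general APT behaviour,
not something confirmed for this particular box - is the APT::Default-Release
option: per apt_preferences(5), versions belonging to a configured target
release get priority 990 instead of the usual 500.  A quick check for it
might look like:

```shell
# Look for a configured default release (a likely cause of priority 990).
# No output from either command means none is set.
apt-config dump 2>/dev/null | grep -i default-release
grep -ris default-release /etc/apt/apt.conf /etc/apt/apt.conf.d 2>/dev/null
```
]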


--Donald





Re: Best linux Distro 2011

2011-08-11 Thread D G Teed
On Sun, Aug 7, 2011 at 3:06 PM, Nico Kadel-Garcia  wrote:

> On Sun, Aug 7, 2011 at 9:45 AM, Anirudh Parui 
> wrote:
> > Hi Friends,
> >
> > The comparison between Linux Distros is a big matter of discussion.
> > And when it comes to finding out what is the best everyone has his own
> > point of view.
> > Well i found this link which does a good comparison in all domains and
> > want to share with you all.
> > http://www.tuxradar.com/content/best-distro-2011
> >
> > And Well Debian wins over all the Distros :)
>
> It also gets the origin of Linux as an operating system wrong. (The
> core GNU application swuite, of the compiler, compilation tools, core
> libraries, and core system utilities came first, not the kernel: the
> kernel simply completed the suite and led to the newly published OS's
> being called "Linux".) And it completely ignores the commercially
> supported Linux distributions, such as RHEL, OEL, and the (recently
> defunct) commercial SuSE. So while patting oneself on the back for the
> popularity of your favorite distro, take it with a grain of salt.
>
>
Two other aspects it misses are measurements of the time taken
to get a service-crippling bug report addressed, and the time taken
to patch a zero-day exploit.  These aspects are critical to know
when choosing an OS for production systems.

I agree they should have compared the commercial Linux varieties as well.
In addition, hardware compatibility for equipment you don't have in a desktop
system should be a consideration.  Who supports that recently released
SAS RAID card from Dell, or installing to an iSCSI device on the NAS?
Some info for people who are running data centre equipment, not just
hobby boxes.  Red Hat could be the winner here, but I'd be curious to
see if Debian is catching up in this category.

In my recent experience with identical bugs in Red Hat and Debian, and
comparing the security updates to address the ssh exploit from fall 2010,
Debian beats Red Hat in rapid response.

Another strange category, which could be useful for those of us forced
to use old hardware, is how well the latest distro handles installation
on older equipment like IBM xSeries servers.  Many people might assume this
is supported, but you'd be surprised.


Installing debian package independent from system

2011-08-23 Thread D G Teed
A user would like the latest and greatest zsh and we have
a deb package for it.  For security purposes I want to
keep the slightly older version of zsh, obtained and maintained
from Debian packages, as the system default zsh.

I'm willing to install the later version of zsh in an alternate directory,
say under their home or in /usr/local for the one user.

I thought perhaps dpkg --root /usr/local/zsh with a copy of
/var/lib/dpkg placed under /usr/local/zsh would do the trick,
but it isn't happy as some part of this still believes we
are working on the main system dpkg path:

dpkg --root /usr/local/zsh  -i ~username/zsh_4.3.12-1_i386.deb
(Reading database ... 73404 files and directories currently installed.)
Preparing to replace zsh 4.3.10-14 (using
.../username/zsh_4.3.12-1_i386.deb) ...
dpkg (subprocess): unable to execute old pre-removal script
(/var/lib/dpkg/info/zsh.prerm): No such file or directory
dpkg: warning: subprocess old pre-removal script returned error exit status 2
dpkg - trying script from the new package instead ...
dpkg (subprocess): unable to execute new pre-removal script
(/var/lib/dpkg/tmp.ci/prerm): No such file or directory
dpkg: error processing /home/username/zsh_4.3.12-1_i386.deb (--install):
 subprocess new pre-removal script returned error exit status 2
dpkg (subprocess): unable to execute installed post-installation
script (/var/lib/dpkg/info/zsh.postinst): No such file or directory
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 /home/username/zsh_4.3.12-1_i386.deb

Building from source would work too, but typically involves care-and-feeding
steps just to get all the dependencies in line.

What is the best way to use a deb package without having it become part
of the system's knowledge of installed packages?  It is OK if at runtime
zsh has a dependency on system libs.
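
[One option worth sketching - assuming the goal is only to run the binary
from an alternate prefix, not to track it with dpkg at all - is dpkg-deb's
extract mode, which unpacks a .deb without running maintainer scripts or
touching the package database:

```shell
# Unpack the .deb under /usr/local/zsh; nothing is registered with dpkg
# and no maintainer scripts run, so the system zsh stays untouched.
dpkg-deb -x ~username/zsh_4.3.12-1_i386.deb /usr/local/zsh
```

The binary then lands at a path like /usr/local/zsh/bin/zsh, which the
user can put on their PATH.]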

--Donald





Re: Installing debian package independent from system

2011-08-23 Thread D G Teed
On Tue, Aug 23, 2011 at 11:33 AM, Darac Marjal wrote:

> On Tue, Aug 23, 2011 at 11:24:38AM -0300, D G Teed wrote:
> > A user would like the latest and greatest zsh and we have
> > a deb package for it.  For security purposes I want to
> > keep the slightly older version of zsh obtained and maintained
> > from debian packages as the system default zsh.
> >
> > I'm willing to install the later version of zsh in an alternate
> directory,
> > say under their home or in /usr/local for the one user.
> >
> > I thought perhaps dpkg --root /usr/local/zsh with a copy of
> > /var/lib/dpkg placed under /usr/local/zsh would do the trick,
> > but it isn't happy as some part of this still believes we
> > are working on the main system dpkg path:
> >
> [cut: errors]
> >
> > Building from source would work too, but typically has care and feeding
> steps
> > just to get all the deps in line.
> >
> > What is the best way to use a deb package and not have it as part
> > of the system's knowledge of installed packages?  It is OK if at runtime
> > zsh has dependancy on system libs.
>
> Well, I can see this, at least, being a problem. What if, for example,
> the latest version of zsh depends on a version of a system library
> that's incompatible with your current libraries (i.e. an ABI change)?
>

We would probably keep updating the zsh installed in the alternate root.
I just want the system default zsh updated in the usual manner, and to
rest assured that the system default is patched often enough.

The alternate zsh can be updated, perhaps by the user, whenever they
want a newer version of zsh.  (Assuming I can get this working from
dpkg; otherwise we'll be building from a tarball - but I was hoping
Debian wouldn't force me into that.)


Re: Installing debian package independent from system

2011-08-23 Thread D G Teed
On Tue, Aug 23, 2011 at 12:25 PM, D G Teed  wrote:

>
>
> On Tue, Aug 23, 2011 at 11:33 AM, Darac Marjal 
> wrote:
>
>> On Tue, Aug 23, 2011 at 11:24:38AM -0300, D G Teed wrote:
>> > A user would like the latest and greatest zsh and we have
>> > a deb package for it.  For security purposes I want to
>> > keep the slightly older version of zsh obtained and maintained
>> > from debian packages as the system default zsh.
>> >
>> > I'm willing to install the later version of zsh in an alternate
>> directory,
>> > say under their home or in /usr/local for the one user.
>> >
>> > I thought perhaps dpkg --root /usr/local/zsh with a copy of
>> > /var/lib/dpkg placed under /usr/local/zsh would do the trick,
>> > but it isn't happy as some part of this still believes we
>> > are working on the main system dpkg path:
>> >
>> [cut: errors]
>> >
>> > Building from source would work too, but typically has care and feeding
>> steps
>> > just to get all the deps in line.
>> >
>> > What is the best way to use a deb package and not have it as part
>> > of the system's knowledge of installed packages?  It is OK if at runtime
>> > zsh has dependancy on system libs.
>>
>> Well, I can see this, at least, being a problem. What if, for example,
>> the latest version of zsh depends on a version of a system library
>> that's incompatible with your current libraries (i.e. an ABI change)?
>>
>
> We would probably keep updating the zsh installed in the alternate root.
> I just want to have the system default zsh updated in the usual manner
> and rest assured that the system default is patched often enough.
>
> The alternate zsh can be updated, perhaps by the user, whenever they
> want a later and greater version of zsh.  (Assuming I can get this
> working from dpkg, otherwise we'll be building from tarball - but
> I was hoping Debian wouldn't force me into that).
>

After searching further for a dpkg relocation feature, it appears this
is not an option.  The solution for me was to download the tarball,
then configure, make and make install, which placed an alternate
version of zsh under /usr/local as desired.  It has few dependencies,
so it wasn't as painful to install this way as some packages are.
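
[For anyone repeating this, the build amounts to roughly the following
(the tarball name is illustrative for this version):

```shell
# Build and install an alternate zsh under /usr/local, leaving the
# Debian-packaged zsh in place as the system default.
tar xf zsh-4.3.12.tar.bz2
cd zsh-4.3.12
./configure --prefix=/usr/local
make
make install   # run as root; installs /usr/local/bin/zsh
```
]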


Re: Installing debian package independent from system

2011-08-24 Thread D G Teed
On Tue, Aug 23, 2011 at 5:20 PM, Walter Hurry wrote:

> On Tue, 23 Aug 2011 11:24:38 -0300, D G Teed wrote:
>
> > A user would like the latest and greatest zsh and we have a deb package
> > for it.  For security purposes I want to keep the slightly older version
> > of zsh obtained and maintained from debian packages as the system
> > default zsh.
>
> Your reasoning does not seem logical to me. If you need to stick to an
> older version of a given package for "security purposes", then why allow
> one user access to an allegedly insecure version?
>
> On the other hand, if it is considered safe for that user to have access
> to the latest version, then why not just make it standard for everyone?
>
>
The user has a shell account and access to a compiler.  If they want
to, they can compile and create zsh or other software and run it
under their own home area.  There is no policy blocking that.
I'm merely helping them out a little, and gaining a bit of
organization in contrast to letting users create their own solution.

If there were a security issue against zsh, chances are that script kiddies
would be looking at the one in the default location, not the hand-compiled
one.

There is also a small risk that the hand-compiled one temporarily breaks
due to library updates, so it can't hurt to carry the supported version
as a fallback.


Re: Fwd: Billion 7800N

2011-08-25 Thread D G Teed
On a re-read of what I wrote earlier, it might be a little confusing where
I say you need a static network set up and then instruct on how to do DHCP.
Perhaps I can just make an assumption or two and give you some simple steps.
Assuming your ISP does use PPPoE, and you are using the router device
to connect your system to the ISP/Internet:

1.  Disable pppd.  This is important.  I think the command would be:
update-rc.d pppd disable

2. Ensure eth0 network device will be able to get an IP from the router
via DHCP.  This would be the entry in /etc/network/interfaces I mentioned
before:

allow-hotplug eth0
iface eth0 inet dhcp

3. Reboot to allow pppd to go away and dhcp client to kick in.

4. Verify your IP and routing are good:

route -n

(Here is mine:

route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth1

)

Yours would be similar, except with 192.168.1.x addresses and likely eth0.

ifconfig -a

I think the output of that is covered previously.

5. Start up a web browser and visit your Billion router at whatever IP the
router documentation says it is running on.

6. Configure the router for Internet (WAN) access according to the ISP's
information.

7. The Internet should now work from the Debian system.


Re: Fwd: Billion 7800N

2011-08-26 Thread D G Teed
I started another reply, and it had lots of steps to try to repair
this situation, but then I rethought.

If this is a fresh install, and you have no data to keep on the Debian
system, here is a bulletproof solution:

Reinstall.

When you reinstall, don't do anything fancy with network.  Just let
it do the default with DHCP.  Plug it into the router LAN jack as you
do the install.

Once Debian is installed, run a web browser and go to the router's web
interface as documented in the router manual.  In the router's
web interface (at something like 192.168.1.1), set up the Internet
connection to log in via PPPoE.  If you have already set up the router
with a web browser from your Mac or Windows system, you don't need to do
it again from Linux; it will just work.

If the Debian system can see the router's web interface but not the Internet
after setting up the router with the ISP info, try one reboot of Debian to
give it a chance to bring up networking now that the router is configured.

This is the shortest and simplest path to fixing up the botched Linux
networking setup you have.

Remember, the idea is to allow the router to be your path to the Internet.
It will handle everything, and the Linux system only needs to get on the LAN,
behind the firewall on the router.


Debian <--> Billion Router <--> ISP Modem <--> Internet


Re: Fwd: Billion 7800N

2011-08-28 Thread D G Teed
On Sun, Aug 28, 2011 at 10:14 PM, Heddle Weaver wrote:

>
>
> On 27 August 2011 11:41, D G Teed  wrote:
>
>>
>> I started another reply, and it had lots of steps to try to repair
>> this situation, but then I rethought.
>>
>> If this is a fresh install, and you have no data to keep on the Debian
>> system,
>> here is a bulletproof solution:
>>
>> Reinstall.
>>
>
> This is the latest fashion.
> It's not a new install, but I have my /home partition on an external 1TB
> expansion drive, so inconvenience is minimal and the revision factor won't
> hurt.
>

I suggested a reinstall as it would be the quickest way to get rid
of the ppp daemon if you didn't know how to disable the service.
But now that that mystery is resolved, there is no need to reinstall.

Once ppp is gone, set up DHCP on the Debian system to
get your IP from the router.

This would be the entry in /etc/network/interfaces I mentioned before:

allow-hotplug eth0
iface eth0 inet dhcp

Then reboot.  You are really not that far off from getting this up.


Re: Fwd: Billion 7800N

2011-08-30 Thread D G Teed
On Mon, Aug 29, 2011 at 9:49 PM, Heddle Weaver 
wrote:
>
>
> On 29 August 2011 13:05, D G Teed  wrote:
>> This would be the entry in /etc/network/interfaces I mentioned before:
>> allow-hotplug eth0
>> iface eth0 inet dhcp
>> Then reboot.  You are really not that far off from getting this up.
>
>
> Well, I've actually done this, but I didn't have too much success.
> Of course, I didn't give up trying.
> Unfortunately, I think I tried too much and too far and that's about all I
> have left in /etc/network/interfaces.

Oh.  Well, the only other thing you need is the loopback.
Typically listed first in the interfaces file:

auto lo
iface lo inet loopback
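
[Put together with the DHCP stanza from earlier in the thread, a minimal
/etc/network/interfaces (interface name assumed to be eth0) would read:

```text
# /etc/network/interfaces - minimal setup: loopback plus DHCP on eth0
auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
```
]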

> I thought I'd wait until I installed the new Debian config on the new,
> secondhand PC.
> I'm actually posting from that now.
> So easy when you can access the interface.
> Once I get a successful config on the new Debian install, I was going to
try
> copying the configuration over to the laptop.
> I know I've deleted something on the laptop in the 'network' config that I
> shouldn't have.

If you only mean the interfaces file, then the addition above may help.

If you mean unknown deleted files from the system, it might be
advisable to reinstall.  I think you mentioned there was a home
directory to preserve.  If that is a separate partition, the installer
writes nothing over /home, as long as you do not select to reformat
the /home partition.

> Doing it this way is the most constructive way to become more familiar
with
> things, I think.

To a point, this is true, but often learning is easier on something
that works.  If the system is mortally wounded in an unknown
way, and you are driving into town to use webmail to get help, and driving
back to try a couple more things each time, I think there
would be less greenhouse gas produced with a reinstall.


Re: Fwd: Billion 7800N

2011-08-30 Thread D G Teed
On Tue, Aug 30, 2011 at 11:16 AM, Andrew McGlashan <
andrew.mcglas...@affinityvision.com.au> wrote:

>
> I cannot believe this thread is still going -- it is way beyond funny now
> it's ludicrous to say the least
>
> Your Ethernet device is broken and if it is not broken, then get someone
> else to fix this problem for you as you are only going around in circles and
> getting nowhere.
>


There is actually some progress, and it might be resolved soon.  We have
no indication there is a hardware issue, but every indication networking
was not set up appropriately in Linux.  We finally got past the
assumptions that there is something wrong with the router
or that the ppp daemon was required on the Linux notebook, and now
we are making progress.  Everyone had to learn the basics some way.


Re: Fwd: Billion 7800N

2011-08-30 Thread D G Teed
On Tue, Aug 30, 2011 at 11:16 AM, Andrew McGlashan
>
>
> I cannot believe this thread is still going -- it is way beyond funny now
> it's ludicrous to say the least
>
> Your Ethernet device is broken and if it is not broken, then get someone
> else to fix this problem for you as you are only going around in circles and
> getting nowhere.
>
>
>
I should add, your earlier suggestion to boot from a Live CD is a very
good one.

It would provide a very quick method to determine if the hardware
or the OS install was faulty.  If the Live CD works to get on the 'net,
then the notebook hardware is fine and the install/configuration is botched.


Re: How to reduce number of loaded kernel modules?

2011-09-01 Thread D G Teed
On Thu, Sep 1, 2011 at 2:53 AM, Csanyi Pal  wrote:

> Darac Marjal  writes:
>
> > On Wed, Aug 31, 2011 at 03:39:41PM +0200, Csanyi Pal wrote:
> >> I have a rather impressive list of loaded modules. I'm not shure whether
> >> are they really needed?
> >>
> >> How can I know which modules I don't need so I can have those
> >> blacklisted?
> >
> > Generally speaking, the kernel only loads modules it needs. Typical
> > methods for this include udev discovering hardware (so the kernel loads
> > the driver for it) or modules or user-space software depending on other
> > modules (such as how the wireless system depends on some of the hashing
> > modules).
> >
> > So, in a normal system, the modules are loaded because they are needed.
> > (The corollary to this is that when modules are not needed, such as
> > removing a device, they are unloaded).
> >
> > Blacklisting is usually only needed if you have a broken modules or
> > there are two modules that service your needs and you need to use the
> > other one (for example, a USB device might be detected as needed
> > cdc-ether, but you know that actually it doesn't, so you blacklist
> > cdc-ether).
>
> I have an usb ethernet adapter that sometimes freezes my Debian SID
> system. I have mailed this problem to the Bugzilla Kernel org here:
> https://bugzilla.kernel.org/show_bug.cgi?id=40372
>
> The developers advices me to reduce loaded modules.
> How can I do that?


As I read it, they are trying to reduce the number of modules for
diagnostic purposes.  It isn't intended to fix the problem, but perhaps
allow a kernel dump to appear so there are some real details to
bite into for this bug report.

This Debian wiki page might help for disabling the auto-loading
of modules.  The update-initramfs step is important, as many
modules load from within the initramfs, before your root partition is
even mounted.

http://wiki.debian.org/KernelModuleBlacklisting
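
[As that wiki page describes, the blacklist itself is a one-line file
under /etc/modprobe.d (the module name below is only an example):

```text
# /etc/modprobe.d/local-blacklist.conf
blacklist cdc_ether
```

followed by update-initramfs -u and a reboot, so the blacklist also takes
effect for modules loaded from the initramfs.]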

Spend some time getting to know what the modules are, as disabling some can
cause the system not to boot, or to boot poorly.  I suspect you have
some sort of firewall package which is loading a lot of extra unnecessary
modules.  There is also a chance the bug is specific to an interaction
between the specialized iptables features provided by extra modules and
your USB ethernet adapter driver.
Some iptables modules are marked experimental and should be avoided.
Modules such as nf_conntrack_amanda are not standard for a basic iptables
rule set, so there is likely something bringing that along.


Re: DO NOT BUY Western Digital "Green" Drives (also present in WD "Elements" external USB cases)

2011-09-04 Thread D G Teed
On Sun, Sep 4, 2011 at 4:41 AM, shawn wilson  wrote:

>
> On Sep 4, 2011 3:23 AM, "Miles Bader"  wrote:
> >
> > lina  writes:
> > > just guess ...  might be wrong, might lots of people coming for WD,
> > > so the stores only sold WD.
> >
> > Dunno, but I've had extremely good experiences with WD drives in the
> > past, so I'd definitely favor them when I buy a new one...
>
> I have absolutely no oppinion. I was merely pointing out that the OP was
> presenting his oppinion as fact and I thought that pretty messed up.
>
> Grented, due to the inciteful the subject was, I'm sure this thread will
> keep going for at least a week and most of us will remember something bad
> about WD the next time we go buy a disc. Oh well. The OP probably  got his
> wish :)
>
Well, actually, you have all helped that to happen by broadening the topic.
It wasn't about a brand, but about a brand and model type specifically.
This is about Green WD drives, not all WD drives.

Before buying anything important to you, or something you don't want to
buy all over again because you are careful with money, you should research
the consumer reaction or solicit opinions on it.

I researched the green drives from all makers and learned they are
engineered to do one thing well: save power while in a desktop not in use.
They are not designed for servers, nor even for intense computing use like
gaming or RAID.  I also noticed consumer backlash on all green drives, and
cheaper pricing on green drives than any other kind (fire sale at some
retailers).  So I did not buy green drives.

I have used WD blue SATA 1TB drives in a couple of servers with RAID 1 and
there is no problem.  The most intense server using them runs cyrus mail
and horde webmail for about 3000 mailboxes, with probably 500 users
visiting their mailboxes every day.  The horde webmail on there also serves
users of another cyrus system with 4000 more active mailboxes on it.
The system is backed up nightly with EMC Networker, and runs
in a room with air conditioning.  Right now, early on a Sunday morning,
the load is only .35 and the hard drive internal temperature from smartctl
is 37 C.  I imagine it does go above 40 when loaded or under full backup duty.


Re: MTU and Postfix

2011-09-04 Thread D G Teed
On Wed, Aug 31, 2011 at 1:09 PM, Camaleón  wrote:

> Hello,
>
> I've been busy on these days trying to solve a problem with Postfix that
> drove me nuts.
>
> Sporadically (let's say one in hundred e-mails) my Postfix had problems
> for delivering messages with ~3 MiB of attachment to some e-mail hosts.
> DSN service returned the final notice of delivery to the user and logs
> displayed an error like "timed out while sending message body".
>
> These hosts were not of those "difficult" ones like Hotmail, Gmail,
> Yahoo! or the like that because to their high volume of traffic implement
> additional (and sometimes strambotic) measures to prevent spam and such
> "anti-all" systems that may require a different transport definiton in
> Postfix to get e-mails delivered.
>
> Moreover, these hosts were not e-mail servers that are behind Cisco PIX
> devices or using MS Exchange servers that are also well-known to be
> conflictive to "dialogue" with.
>
> Nope, I was having problems for delivering to common, small hosts of mid-
> size companies, one of the hosts running a Debian system, like mine. So I
> had to run some tests to find out what could be the problem here.
>
> I first tried to define a less conservative values (by increasing the
> time) for "smtp_data_done_timeout", "smtp_data_xfer_timeout" and
> "smtp_data_init_timeout" but this had no effect at all and again, some e-
> mails were still undelivered.
>
> Googling around I found some posts and articles¹ pointing to the MTU
> value (my bonded interface was set by default to 1500) and as I had
> nothing to lose, I changed this and lowered to 1400.
>
> This turned out to work wonders and since then (that's more than a week
> ago) I still had no other DSN delivery errors. Besides, e-mails in
> deferred queue that could not be sent in that time, after lowering the
> MTU value were also delivered with no apparent problems.
>
> I'm still monitoring this but if this is the "cure" to prevent such
> errors, are there any expected drawbacks for lowering MTU "system-wide"?
>
> The server has dual gigabit NIC which are bonded (in backup mode) and
> server itself is behind a FTTH gigabit router. The server also hosts a
> web server.
>
> Any comments or experiences on this are welcome :-)
>
> ¹http://www.hsc.fr/ressources/cours/postfix/doc/faq.html#timeouts
>
>
You might want to try the postfix mailing list and see if they have any
ideas.  Be prepared for cold, hard, terse answers.  They don't chat much -
busy I guess.

Was the previous MTU of 1500 a value you had set, or the default when
queried?

I'm wondering because of a recent experience I had tweaking MTU.  I wanted
to try jumbo frames to improve samba throughput on large video files.
With the MTU set to 9000 on both Linux and Windows, throughput increased
about 8 times.  But it caused problems with the web service on Linux,
which was running a domain under dyndns.  I set the MTU on the Linux side
back to unspecified, but left jumbo frames active on the Windows side.
The performance was still very good for large samba file transfers.
I might remember this wrong, but it seemed performance was worse when
the Linux side specified 1500, so I left it unspecified and it has
worked well.  I still get high transfer speeds in samba with an
unspecified MTU on Linux but a jumbo 9000 MTU on the Windows side.

BTW, 1492 is a common MTU seen in FAQs.  You might do just as well with
that.
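
[For what it's worth, a persistent MTU can be set per interface in
/etc/network/interfaces; ifupdown supports an mtu option for the static
method (addresses below are illustrative):

```text
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    mtu 1400
```
]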


Re: DO NOT BUY Western Digital "Green" Drives (also present in WD "Elements" external USB cases)

2011-09-04 Thread D G Teed
On Sun, Sep 4, 2011 at 2:27 PM, Doug  wrote:

> On 09/04/2011 03:41 AM, shawn wilson wrote:
>
>
> On Sep 4, 2011 3:23 AM, "Miles Bader"  wrote:
> >
> > lina  writes:
> > > just guess ...  might be wrong, might lots of people coming for WD,
> > > so the stores only sold WD.
> >
> > Dunno, but I've had extremely good experiences with WD drives in the
> > past, so I'd definitely favor them when I buy a new one...
>
> I have absolutely no opinion. I was merely pointing out that the OP was
> presenting his opinion as fact, and I thought that was pretty messed up.
>
> Granted, given how inflammatory the subject was, I'm sure this thread will
> keep going for at least a week and most of us will remember something bad
> about WD the next time we go buy a disc. Oh well. The OP probably got his
> wish :)
>
>
> It's been a few years since I retired, but I remember the IT guys replacing
> a _lot_ of Western Digital drives.  I guess the company bought them because
> they were cheaper, but I don't think they saved any money.  For my own use,
> I have been using Seagate and Hitachi, and have had no trouble in quite
> some time.  Obviously, YMMV, but what I saw would not encourage me to buy
> WD.
>
> --doug
>

There are sometimes bad batches, in any brand.  I remember WD having a bad
batch back in the mid-'90s which was said to be due to painting in the
plant causing contamination.  I'd expect that sort of thing would be a
lesson learned and avoided.

If you google it, you can find people swearing off Seagate and saying they
are safe with Western Digital, or swearing off Western Digital and saying
they are safe with Hitachi, and every possible combination, all due to
their own personal experiences, even in quantities greater than one or two.

But again, this thread is about WD green drives, not all WD drives, which
is a specific question of engineering fit.


Re: DO NOT BUY Western Digital "Green" Drives (also present in WD "Elements" external USB cases)

2011-09-04 Thread D G Teed
On Sun, Sep 4, 2011 at 5:19 PM, Nicolas Bercher  wrote:

> On 03/09/2011 23:03, shawn wilson wrote:
>
>> So, I can understand your frustration but, 4 discs out of how many
>> thousands they make
>> every day? That's not that conclusive. That said, iirc the reviews about a
>> year ago did
>> say that this was a very consumer drive. I don't remember hearing them
>> break but...
>>
>
> Right.  Moreover, it is well known that a good raid1 array is built upon
> disks from different batches.  This means you would actually have to
> order WD hard drives from various resellers, for example.
>

Um, I don't think anyone does this as standard practice in industry.
Typically data centres buy systems from Dell and the like, and the vendor
provides the drives, which are always the same model within one system.
Maybe in hand-built lab machines you'd do this, but that is for someone
repurposing old hardware or for some specialized purpose, not a standard
production computing platform.


> In my lab, we used several WD15EARS-32M (WD Green 1.5TB) and a few died
> early, as well as other HDs from other brands.  So, really, I think the
> issue is most of the time a batch issue rather than a brand or design
> issue.  Maybe these WD HDs have design issues, but for sure the experience
> depicted here is not really conclusive.
>

If Western Digital themselves are telling us not to use green drives in
RAID 1, doesn't that mean something?  It isn't really a "design issue" so
much as a question of engineering purpose.  You don't haul a fridge in the
trunk of a VW Jetta; you get a van or truck for that job.  Similarly, you
don't take a green drive, engineered for desktop users with lots of
downtime (the typical green-drive user: check mail, make toast, check the
web for news, make coffee, answer a phone call, answer email - lots of
spurts of minor activity with sleep cycles in between), and repurpose it
for a server or anything else that runs the drive hard.  It's like
anything else: there are washing machines for home use and for commercial
use, and if you use the home version commercially it will destroy itself.
Likewise with printers, copiers, etc. - a $40 consumer printer is not
designed to print large documents in an office setting.


Drive failure rates (was Re: DO NOT BUY Western Digital "Green" Drives (also present in WD "Elements" external USB cases))

2011-09-05 Thread D G Teed
On Sun, Sep 4, 2011 at 8:41 PM,  wrote:
>
>  - Original Message -
> From: Brad Rogers
> To: Debian Users ML
> Sent: 9/4/2011 6:26:48 PM
> Subject: Re: DO NOT BUY Western Digital "Green" Drives (also present in
> WD "Elements" external USB cases)
>
> On Sun, 04 Sep 2011 13:27:51 -0400
> Doug  wrote:
>
> Hello Doug,
>
> > It's been a few years since I retired, but I remember the IT guys
> > replacing a _lot_ of Western Digital drives. I guess the
>
> In the same vein, I remember lots of Seagate drives being replaced. For
> a while the company had a nickname of Seacrate. Possibly because that's
> what most of their gear was worth at the time; Crating up, and chucking
> in the sea.
>
> At various times, products from certain companies go through a bad
> time. Usually, it can be attributed to some factor or other. For
> example, one drive manufacturer's drives started failing prematurely
> because the wrong type of bearing oil had been used. Such issues often
> go unnoticed until quite large numbers of faulty products are in use.
> The offending company earns a bad reputation until the next company comes
> along and makes a cock-up and everyone forgets about the first one.
>
> WD, Seagate, and just about every other drive manufacturer has gone
> through these cycles. It's nothing new, and will continue for years to
> come.
>
> --
> Regards, Brad
>
> The two most recent studies (one based on Google hardware and one from
> Carnegie-Mellon) provide two interesting insights:
>
> 1.  While there does not appear to be a strong correlation between failures
> and manufacturers there is a strong correlation between drive models from a
> manufacturer and failures.  The inference is that WD may not be
> failure-prone but some WD products are failure-prone.
>
> 2.  There is not a strong dependency between drive temperature and
> failures.
>
>
> This is now a different topic than the original one on green drives.

Here is a study by Google on their drive failure rates.  It is from a few
years ago, but I believe the correlations would hold true today:

http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en//papers/disk_failures.pdf

The failure rate rises at the three-year mark under higher temperatures.
This is consistent with how I understood the relationship between
temperature and electronics: heat doesn't kill components immediately, but
it stresses them and shortens their lifespan.

For some reason, there is more failure among young drives at low
temperatures.  I suspect there is another variable in there they have not
isolated, such as low humidity and static electricity, or vibration -
something that would have been common to the drives operating in a colder
data centre.

There is also this 2007 study, showing that failure rates are much higher
than the theoretical numbers put out by drive manufacturers:

http://www.pcworld.com/article/129558/study_hard_drive_failure_rates_much_higher_than_makers_estimate.html

However, it is based on consumer returns, not actual verified bad disks.
Some of the manufacturers' responses in that article seem to want the
blame offloaded onto the customer.

A comment from 2010 following the article says that at current low prices,
drives are at a commodity level.  Essentially he is saying there is no
room for quality to be built into a $60 hard drive, and that you should
stock drives the way bakers stock bags of flour.

Personally, I've seen an overall increase in electronics failures straight
off the shelf.  Indications are that motherboard makers, flash memory
makers, etc. do not test or burn in newly manufactured equipment to pull
out the usual small percentage of manufacturing flaws.  With cheap
electronics, they can't afford to QA the final product - it is cheaper to
let the customer test it for them and RMA it.


Re: What happens to kernel.org ?

2011-09-09 Thread D G Teed
On Fri, Sep 9, 2011 at 6:12 AM, Jerome BENOIT  wrote:
> Hello List:
>
> Thanks for your replies:
> indeed it is `down for maintenance' according to itself.
>
> Jerome
> On 09/09/11 02:02, Tom H wrote:
>>
>> On Thu, Sep 8, 2011 at 6:12 PM, Jerome BENOIT
>>  wrote:
>>>
>>> does anyone know why `www.kernel.org' can't be found ?
>>
>> The servers are probably still being rebuilt after the recent hack.
>>
>> You can get the latest kernel(s) from github.

Wow, still down 18 hours later.  I think they have broken three nines.
But since their service only provides kernels and other code, I guess this
isn't really about system availability so much as hosted-code availability.
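As a back-of-the-envelope check on the "three nines" remark (the figures
below are just the standard availability arithmetic, not anything
kernel.org has promised):

```python
def downtime_per_year(nines):
    """Hours of downtime per year allowed by an N-nines availability target."""
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * 365.25 * 24

# "Three nines" (99.9%) allows roughly 8.77 hours of downtime per year,
# so a single 18-hour outage already blows the annual budget.
for n in (2, 3, 4):
    print(f"{n} nines: {downtime_per_year(n):.2f} h/year")
```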


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/camnr8_pkbt-zwccdavbuh4nftgcec6_cz8azr4nosefxl-n...@mail.gmail.com



Re: Worst Admin Mistake? was --> Re: /usr broken, will the machine reboot ?

2011-09-15 Thread D G Teed
>
> On Wed, Sep 7, 2011 at 1:00 PM, Bob Proulx  wrote:
> > jacques wrote:
> >> by error most of the binaries in /usr are erased (killing rm :-(
> >
> > Everyone has made that mistake at some point.  I know I have!
>
>
I was hunting for the disk hog using the curses-based ncdu utility.  I
found a large tar file that could be deleted without issue, in an Oracle
production directory area.  Due to a bug in ncdu
(http://sourceforge.net/tracker/?func=detail&aid=2829950&group_id=200175&atid=972449)
it didn't delete the highlighted item, but the one next to it, which was
the Oracle production database.  Recovery was easy to do from the
database's snapshot files.  I'll never use ncdu or a similar UI to delete
anything again - it is only rm for me.


Re: Worst Admin Mistake? was --> Re: /usr broken, will the machine reboot ?

2011-09-15 Thread D G Teed
On Thu, Sep 15, 2011 at 8:57 PM, Walter Hurry wrote:

> On Thu, 15 Sep 2011 20:34:38 -0300, D G Teed wrote:
>
> > I was hunting for the disk hog using the curses based ncdu utility. I
> > found a large tar file which could be deleted without issue. It was in
> > an oracle production directory area.  Due to a bug in ncdu, (
> > http://sourceforge.net/tracker/?
> func=detail&aid=2829950&group_id=200175&atid=972449)
> > it didn't delete the highlighted item, but the one next to it, which was
> > the oracle production database.  Recovery was easy to do from snapshot
> > files from the database.
>
> I don't think that is the full story. Oracle databases do not consist of
> a single file; there are many. Control files, tablespace files, undo
> segments, redo logs, etc, etc. And if the database is properly organised,
> loss of a single file should not present any problem at all.
>
>
It was a whole directory, not a single file, that was deleted by the bug in ncdu.


Re: Moving to Debian server. (Re)Visiting the Postfix or Exim decision. Asking for Debian-ites' opinions.

2011-09-17 Thread D G Teed
On Fri, Sep 16, 2011 at 7:00 PM,   wrote:
> Hi,
>
> We're evaluating our company's future server platform, and are pretty
> much decided on Debian.

At some companies, this would be regarded as a miracle to achieve.
I'm glad it worked out for you.

> I notice that Debian has settled on Exim as the default MTA, unlike many
> (most?) other distros which seem to use Postfix.

Most systems need some sort of mailer, at least to relay system messages
to another address the sysadmins use.  For this minimal purpose, sendmail
would do just as well.  I'd imagine this is what most people using the
default Exim are happy with - not MX or SMTP duties, but simple relay or
local mail duties.  Of course there are sites using Exim fully, but the
majority of Exim use in Debian will be with minimal features.

At our shop, we always install Postfix immediately after the Debian
install.  It triggers a removal of exim4, and we're happy with this.
Perhaps if it weren't this easy to substitute there would be more of an
outcry, but Debian is all about choices, so you pick what you want.

In Debian you are driving a car: you can get to many destinations easily.
In other distros you are riding a train (it goes quickly, but only to the
destinations they have laid track to) plus walking (when you can't easily
get a package from a limited repository, you compile from source).  It can
take a little getting used to finding your own way versus going along with
what the distro provides.

Postfix is what we use on our MX and SMTP systems, so we know it.  Like
many, we use the well-trodden path of ClamAV, amavisd-new and Postfix.

I don't know of anyone who was sorry they went with Postfix.

The responses on the Postfix mailing list can be terse and pithy, but they
do fully address your questions if you ask well-documented questions, and
the RTFM references are always precise to one's needs.  When Wietse
answers, it is as if Linus Torvalds had answered your question on kernel
compiling: you don't expect him to spend too much of his day on you, but
you're glad he did, even if it was to smack some sense into you.





Re: Wiping hard drives - Re: debian-user-digest Digest V2011 #1704

2011-09-19 Thread D G Teed
On Mon, Sep 19, 2011 at 1:21 PM, Lee Winter wrote:

> On Mon, Sep 19, 2011 at 10:27 AM, Aaron Toponce 
> wrote:
> > On Sat, Sep 17, 2011 at 08:59:14AM +0200, Ralf Mardorf wrote:
> >> If you want to be safe, you need to overwrite the data several times,
> >
> > Have anything to back that up? If you're using drives that used the old
> > MFM or RLL encoding schemes, and had massive space for bits per linear
> > inch, then sure, but on today's drives, with perpendicular encoding and
> > the extremely dense bit capacity, going more than once is silly.
>
> I perform this service for commercial recyclers.


Or in other words, it must be true because the service provided depends on
it being true.

It remains an urban legend as long as no proof is offered otherwise.

I'm not saying it is true or false, just that there has never been a
public demonstration of getting data off a drive after a complete zeroing.
So it remains unknown, and never demonstrated.

Perhaps if you have military-grade secrets to protect you'll want every
method applied.  They will buy the whole protection package out of
paranoia, the same way the U.S. many times overestimated the capabilities
of the USSR in the Cold War.  That doesn't make the threat real; it just
makes the approach a solution in the vein of "better safe than sorry".
You don't want what seems like a solution today to be overrun by
tomorrow's technology, so go for total destruction.

Personally, my drives are so old when discarded that they have no reuse
value, so I don't zero them.  I use physical destruction, and it takes
only a minute.  But out of paranoia, I can't say publicly how I do it.


Re: Wiping hard drives - Re: debian-user-digest Digest V2011 #1704

2011-09-19 Thread D G Teed
On Mon, Sep 19, 2011 at 3:08 PM, Lee Winter wrote:

>
> You also failed to consider the asymmetry between the possible
> outcomes once the "truth" becomes known.  If one-pass overwrite is
> sufficient, but one uses multiple passes, then one has lost a small
> increment of time.  If one pass overwrite is not sufficient and you
> use only one pass, then you have a disaster on your hands.
>
> The way to resolve uncertainty is not to guess or flip a coin.  It is
> to carefully evaluate the risk vs. cost tradeoff.  People who perform
> that evaluation tend to be conservative about assessing unknown
> potential risks against known, fixed, and minor costs.
>

That is what I said; I called it "better safe than sorry" rather than
giving it a business-speak spin.

>
> Paranoia is whole 'nother story.  I suspect you use the term for
> dramatic purposes rather than for the purpose of clarity.  It devalues
> all of your comments.
>
I don't mean clinical paranoia, just political: an overly cautious
overreaction to the unknown capabilities of an adversary.  It is widely
mentioned in history, never recognized at the time, but usually some
decades later in hindsight.

If the data is military or similar, it probably makes sense to terminate
hard drives with prejudice, because capabilities could change in the
future.  But for most people, DBAN is probably appropriate (if the drive
still works; if not, apply power tools or a hammer until the deformation
is to your satisfaction).
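The single-pass zeroing being discussed is what `dd if=/dev/zero
of=/dev/sdX bs=1M` does.  Here is a minimal sketch of the same idea in
Python, demonstrated on a scratch file rather than a real block device
(which would require root and, obviously, destroy the disk):

```python
import os
import tempfile

def zero_fill(path, block_size=1024 * 1024):
    """Overwrite every byte of `path` with zeros, one block at a time."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining:
            chunk = min(block_size, remaining)
            f.write(b"\x00" * chunk)
            remaining -= chunk
        f.flush()
        os.fsync(f.fileno())  # make sure the zeros actually reach the media

# Demonstration on a throwaway file standing in for the drive:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"secret data" * 100)
    name = tmp.name
zero_fill(name)
with open(name, "rb") as f:
    print(f.read() == b"\x00" * 1100)  # → True: nothing readable remains
os.unlink(name)
```

Note that on a real drive this says nothing about sectors the firmware has
internally reallocated, which is one reason the truly paranoid reach for
ATA Secure Erase or physical destruction.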

To make the flip side of your argument of "you don't know 'cause
it would be a secret": if the NSA/FBI/CIA had no way to recover
data from a simply wiped drive, would they let the public know?


Re: Wiping hard drives - Re: debian-user-digest Digest V2011 #1704

2011-09-20 Thread D G Teed
On Mon, Sep 19, 2011 at 11:07 PM, Scott Ferguson
 wrote:
> > On Mon, Sep 19, 2011 at 12:57 PM, D G Teed  wrote:
> >
> >> It remains an urban legend as long as there is no proof offered otherwise.
>
> No - *that's* piss-poor logic - the sort espoused by TV talk show hosts
> and radio shock jocks.
>
> I use Newtonian physics around the farm - does that disprove Quantum
> physics?
>
> Never confuse a neat sounding argument with evidence - it just makes you
> sound like a pompous moron (which you're not). But that's the difference
> between something untested that confirms beliefs, and a fact.
>
> I thought the sophists were long dead... :-)

You like listening to yourself type.  None of this debates the issue.

> >> I'm not saying it is true or not, but just that there has never been
> >> a demonstration made public of getting data off drives after
> >> a complete zeroing.
> >
> > That you know of.  I suspect I read much more of this literature than you 
> > do.
>
> Gutmann (et al)
>
> >
> >> So it remains an unknown, and never demonstrated.
>
> Unknown to D. G. Teed *may* simply mean "not shown on Discovery Channel"
> - "never demonstrated"... to whom?
>
> The plural of anecdote is not evidence.

Does that mean anything in this context or were you talking
to someone else on the phone at the time?

> For non-military/investigative (sensitive) evidence - how do you think
> data *has* (and is still) been recovered from fragments of shattered,
> partially melted, hard drive platters from September 11? Yes - much of
> the procedures are classified or considered proprietary secrets - but
> some data reconstruction algorithms, and 3D magnetic field
> visualization papers, have been published... could be that current
> technology is based on them.

So you are saying just before the towers fell, someone initiated
a drive zeroing application?  Amazing.  Do try to compare apples
to apples.

You like to hear the name Gutmann.  Here is an article which questions his
1996 paper and references him many times:

http://www.nber.org/sys-admin/overwritten-data-gutmann.html

I think it is healthy to have a dose of skepticism with these things.

> Agreed - the lack of convincing evidence on-hand (sufficient to overcome
> dogmatic belief) is *not* evidence to support a belief - it's the "risk
> management" of morons.

If you read over what I actually wrote again, carefully, with full reading
and comprehension bits enabled, you'll see I do recommend drive
destruction if the data on the drive is military grade or equivalent, to
guard against the case that a future technology is developed.

For those of us who don't expect the NSA or equivalent to spend a few
million in an effort to read a discarded drive, a simple zeroing will do.
It has been demonstrated that a zeroed drive cannot be recovered by a
typical data-recovery service business.

There is also this outstanding challenge for someone to recover
data from a zeroed drive:

http://hostjury.com/blog/view/195/the-great-zero-challenge-remains-unaccepted

Kinda like the reward put up for psychics to prove themselves.  The only
caveat which could get in the way is the rule against disassembly.  That
would preclude using a more powerful read head or other methods, but at
the same time it does demonstrate that if it is simply a personal drive
you want to zero and sell on eBay, zeroing will cover the situation well
enough.

The skeptics here await links illustrating data has been
accurately recovered from zeroed drives.





saslauthd in squeeze requires restart once in awhile

2010-12-05 Thread D G Teed
Hello,

I'm using sasl support with postfix for TLS/SSL support.
saslauthd is set for pam authentication, and this is configured
to use winbind.  It all works!

Once in a while - twice a month, I think it has been - users report that
logins to SMTP are failing.  When I test saslauthd with testsaslauthd -s
smtp, it fails.  If I restart only saslauthd and run the same
testsaslauthd from my command-line history, it works.  The test command
runs locally and uses authentication that relies on the chain
saslauthd -> pam -> winbind.

I don't see any easy way to control logging from saslauthd - the choice
seems to be either running it in debug mode (and therefore in the
foreground) or nothing.

Are there any suggestions on how I can trace what is happening
while at the same time not causing too much disruption for users?
Generally I have to get the service back up for them quickly,
but I might have a minute or so to gather some sort
of information when the thing fails.

The information I see in the mail logs looks like normal authentication
failures, which happen once in a while for the usual reasons, so there
isn't much to see there.

--Donald


Re: saslauthd in squeeze requires restart once in awhile

2010-12-06 Thread D G Teed
On Sun, Dec 5, 2010 at 10:00 PM, D G Teed  wrote:

> Hello,
>
> I'm using sasl support with postfix for TLS/SSL support.
> saslauthd is set for pam authentication, and this is configured
> to use winbind.  It all works!
>
> Once in awhile - twice a month I think it has been - users
> report logins to SMTP is failing.  When I test saslauthd,
> with testsaslauthd -s smtp, it fails.  If I restart only saslauthd
> and run the same testsaslauthd from my command line history,
> it works.  The test command is run local and is using authentication
> which would rely on the chain of : saslauthd-> pam -> winbind .
>
>
I was looking for errors in the mail log, but the errors are actually in
the authentication log: auth.log

The first error is:

Dec  4 17:32:04 myhostname saslauthd[32590]: PAM unable to
dlopen(/lib/security/pam_unix.so): /lib/security/pam_unix.so: cannot open
shared object file: Too many open files

I have a feeling this is a symptom, not the cause, of the problem.  SMTP
with SASL points to PAM.  The config at pam.d/smtp doesn't even list
pam_unix, but the same error shows for multiple PAM modules.
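A "Too many open files" error means the process has exhausted its
file-descriptor limit (RLIMIT_NOFILE), so dlopen() of any PAM module fails
regardless of which module it is.  One way to check for a descriptor leak,
sketched below, is to count the process's open descriptors via /proc and
compare against the limit (pointing this at saslauthd's PID, e.g. the
32590 from the log, is the assumption; the demo inspects its own process
so it is runnable anywhere):

```python
import os
import resource

def open_fd_count(pid="self"):
    """Count open file descriptors for a process via Linux /proc."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# The soft limit is what the process actually hits first.
soft_limit, hard_limit = resource.getrlimit(resource.RLIMIT_NOFILE)

used = open_fd_count()  # for the real check: open_fd_count(32590)
print(f"{used} of {soft_limit} descriptors in use")
```

If the count climbs steadily between restarts, something in saslauthd or a
library it loads (the winbind PAM path here) is leaking descriptors, and
`lsof -p <pid>` would show what kind of descriptors accumulate.
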

--Donald