As explained in:
https://mariadb.org/mariadb-dump-file-compatibility-change/
MariaDB versions later than Bookworm's
10.5.25, 10.6.18, 10.11.8, 11.0.6, 11.1.5, 11.2.4 and 11.4.2
introduce a breaking change to mariadb-dump (mysqldump) in order to prevent
shell commands being executed vi
On 15/08/2023 04:56, kjohn...@eclypse.org wrote:
I have done additional research, and it now appears that programs that
do extensive disk writes run much slower (3-6x) in Bookworm than they
did in Bullseye.
Have you compared kernel IO schedulers? May it be a case of SSD vs HDD
optimizing?
gr
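As a quick way to check the scheduler question above, the active scheduler for each block device can be read from sysfs without root (a sketch; device names and available schedulers vary by machine):

```shell
# Print the active I/O scheduler (the bracketed name) for each visible
# block device; e.g. "[mq-deadline] none" vs "[bfq] ..." can matter for
# HDD- vs SSD-oriented tuning.
for f in /sys/block/*/queue/scheduler; do
  [ -r "$f" ] || continue
  printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
done
```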
I have done additional research, and it now appears that programs that do
extensive disk writes run much slower (3-6x) in Bookworm than they did in
Bullseye. The two cases I have observed are 'svnadmin dump', and extracting an
SQL backup of a Bacula database from Postgresql (backup
A set of svnadmin dump commands that run as part of a backup procedure seem to
be _much_ slower in Bookworm than in Bullseye. Prior to the upgrade to
Bullseye, these commands took slightly less than one hour. After the upgrade,
similar commands (dumping a few more revisions) require more than
Greg Wooledge wrote:
...
> Well, that's interesting. You *can* specify an absolute directory by
> this mechanism. I guess I learned something today.
:)
> So, what exactly was the complaint? That songbird shot themselves in
> the foot by specifying an absolute directory for core dumps that w
Several dead-end man pages (why the hell
don't they have comprehensive SEE ALSO sections?!). Then gave up, then
decided to try "man -k sysctl". This led me to discover that there is
a sysctl(2) page in addition to sysctl(8) (the latter does NOT link
to the former).
sysctl(2) mentions c
which defines the path and
> filename of the core file.
It can do that too. Quoting the relevant part of the kernel's
documentation (admin-guide/sysctl/kernel.rst.gz) :
If the first character of the pattern is a '|', the kernel will treat
the rest of the pattern as a comma
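The current pattern can be inspected without root; this is a minimal sketch of what the quoted documentation is describing:

```shell
# Show the current core_pattern; a leading '|' means the kernel pipes
# the core image to the named helper program instead of writing a file.
cat /proc/sys/kernel/core_pattern
```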
The Wanderer wrote:
> On 2022-02-28 at 11:35, Greg Wooledge wrote:
>> On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote:
>>
>>> >> me@ant(14)~$ ulimit -a
>>> >> real-time non-blocking time (microseconds, -R) unlimited
>>> >> core file size (blocks, -c) unlimited
>>>
>>> i
On 2022-02-28 at 11:35, Greg Wooledge wrote:
> On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote:
>
>> >> me@ant(14)~$ ulimit -a
>> >> real-time non-blocking time (microseconds, -R) unlimited
>> >> core file size (blocks, -c) unlimited
>>
>> i had accomplished the ulimit ch
On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote:
> >> me@ant(14)~$ ulimit -a
> >> real-time non-blocking time (microseconds, -R) unlimited
> >> core file size (blocks, -c) unlimited
>
> i had accomplished the ulimit change already, but the lack of
> the proper permission
had accomplished the ulimit change already, but the lack of
the proper permission on the output directory meant that a core
file would not be generated.
also, i wanted to make sure that all programs running would
be able to dump core upon crashing because i have some kind of
strange lock up goin
On Mon, Feb 28, 2022 at 10:01:05AM -0500, songbird wrote:
>
> i had some fun trying to figure out why a regular user could not
> dump a core file
Just put 'ulimit -c unlimited' into the appropriate dot file to put
things back to how they used to be.
This changed a *really
i had some fun trying to figure out why a regular user could not
dump a core file and i had all the settings figured out. since it
was a silly and obvious thing but it stumped me for a bit i
figured it would be worth sharing. :)
the answer is at the end...
using Debian testing.
i have
On Wed, Sep 6, 2017 at 12:10 AM, John Conover wrote:
>
> Anytime mailx is invoked, it does a core dump:
>
> mail: mu_wordsplit failed: missing closing quote
> Segmentation fault
>
> Any suggestions?
>
It would be nice to document the problem first with full
Anytime mailx is invoked, it does a core dump:
mail: mu_wordsplit failed: missing closing quote
Segmentation fault
Any suggestions?
Thanks,
John
--
John Conover, cono...@rahul.net, http://www.johncon.com/
On Wed, 2014-04-23 at 22:13 +0100, Steve wrote:
> > I have a mediawiki site running on Mysql db; I just started getting this
> > message on my automated backup;
> >
> > mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as
> > crashed and last (automatic?) repair failed when using
> I have a mediawiki site running on Mysql db; I just started getting this
> message on my automated backup;
>
> mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as
> crashed and last (automatic?) repair failed when using LOCK TABLES
The table is marked as crashed, and an autom
On Wed, Apr 23, 2014 at 10:33:34AM -0500, John Foster wrote:
> I have a mediawiki site running on Mysql db; I just started getting this
> message on my automated backup;
>
> mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as
> crashed and last (automatic?) repair failed when usin
I have a mediawiki site running on Mysql db; I just started getting this
message on my automated backup;
mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as
crashed and last (automatic?) repair failed when using LOCK TABLES
Any ideas?
Thanks
John
--
To UNSUBSCRIBE, email to de
On Thu, Nov 22, 2012 at 12:14:19PM +0100, Ralf Mardorf wrote:
>
> All off-topic is caused by people who feel offended by harmless jokes,
> breaking threads, asking the wrong questions, carbon copy, brainstorming
> etc., not by the people who make harmless jokes, are breaking threads,
> asking the
Outrageous sexism - dump questions -
carbon copy - ignoring hidden ids - HTML - top posting - brainstorming -
etc.
Date: Thu, 22 Nov 2012 12:06:26 +0100
Is there anything else people are allowed to do on the Debian user mailing
list, besides dissing people for harmless jokes, breaking threads, asking the
wrong
Hi
On Fri, Oct 05, 2012 at 03:18:54PM +0100, Julien Groselle wrote:
> Hello Debian users,
>
> I want enable core dump on a debian squeeze server.
> So I have downloaded the source code of kernel (2.6.32) and i have activate
> this options :
>
> CONFIG_PROC_VMCORE=y
Hello Debian users,
I want to enable core dumps on a Debian squeeze server.
So I have downloaded the source code of the kernel (2.6.32) and I have
activated these options:
CONFIG_PROC_VMCORE=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
Compilation was successful, but I can
On Fri, Oct 14, 2011 at 6:18 PM, David Sastre
wrote:
On Tue, Oct 04, 2011 at 09:36:59AM -0300, Roberto Scattini wrote:
...
it is a standard package installation, apache2, php5 and
libapache2-mod-php5. i also
installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg.
You need apache2-d
Thank you. What should I do?
Did I do it in the right way in order to run a dump to back up the whole
system?
On 20/07/12 19:50, Mostafa Hashemi wrote:
sorry because of the last message, i accidentally pressed the send key
the complete message is :
hi guys
thank you all for your answers to my last questions.
i found out how dump/restore works, but i have a question.
i did this :
dump -0aj -f /tmp/1.bak
sorry because of the last message, i accidentally pressed the send key
the complete message is :
hi guys
thank you all for your answers to my last questions.
i found out how dump/restore works, but i have a question.
i did this :
dump -0aj -f /tmp/1.bak /
in order to run a dump to back up the whole
hi guys
thank you all for your answers to my last questions.
i found out how dump/restore works, but i have a question.
i did this :
dump -0aj -f /tmp/1.bak
On Fri, Oct 14, 2011 at 6:18 PM, David Sastre wrote:
> On Tue, Oct 04, 2011 at 09:36:59AM -0300, Roberto Scattini wrote:
>> ...
>> it is a standard package installation, apache2, php5 and
>> libapache2-mod-php5. i also
>> installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg.
>>
>>
> Yo
started to generate "segmentation faults" randomly.
> it is a standard package installation, apache2, php5 and
> libapache2-mod-php5. i also
> installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg.
>
> i generated an apache coredump file, but when i open dump file
n, apache2, php5 and
libapache2-mod-php5. i also
installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg.
i generated an apache coredump file, but when i open dump file with gdb the
only i get is this:
# gdb /usr/sbin/apache2 /var/cache/apache2/core
GNU gdb (GDB) 7.0.1-debian
Copyright (C
On Sun, 17 Jul 2011, Dom wrote:
On 17/07/11 21:00, Justin Piszcz wrote:
I have already submitted a bug report for this. It seems to have been caused
by the latest update of e2fslibs. If you downgrade e2fslibs to
1.41.12-4stable1 dump will work again. You should also downgrade e2fsprogs
On 17/07/11 21:00, Justin Piszcz wrote:
Hi,
Re-sending to correct list, per Mike:
I've been using dump for ~1-2+ years now and the syntax has not changed;
however, with the most recent update:
From: 0.4b43-1
To: 0.4b44-1
Dump no longer has the same behavior:
dump -0 -z9 -L 2011-07-
Hi,
Re-sending to correct list, per Mike:
I've been using dump for ~1-2+ years now and the syntax has not changed;
however, with the most recent update:
From: 0.4b43-1
To: 0.4b44-1
Dump no longer has the same behavior:
dump -0 -z9 -L 2011-07-16 -f file.ext4dump /
DUMP: Date of this
Andrei, Stan and Paul:
Thanks for the replies. I was unaware that "/dev/disk/*"
existed. I must have missed that lesson during the last upgrade.
I appreciate your assistance.
Regards from Calgary,
Dean
--
Dean Provins, P. Geoph.
dprov...@a
Andrei Popescu put forth on 4/17/2011 3:12 AM:
> /dev/disk/by-id/
> /dev/disk/by-label/ # assuming you defined labels
> /dev/disk/by-path/
> /dev/disk/by-uuid/
>
> I prefer labels since they can be set to something meaningful/mnemonic.
Yes, I use labels for partitions as well, more for organizat
On 20110417_111214, Andrei Popescu wrote:
> On Sb, 16 apr 11, 15:00:39, Dean Allen Provins, P. Geoph. wrote:
> >
> > This means that I must NOT rely on my automatic (crontab-based) dump
> > scripts, but interrogate the system manually, and if necessary, alter
> > /va
On Sb, 16 apr 11, 15:00:39, Dean Allen Provins, P. Geoph. wrote:
>
> This means that I must NOT rely on my automatic (crontab-based) dump
> scripts, but interrogate the system manually, and if necessary, alter
> /var/lib/dumpdates so that the script will run properly.
No, just adapt
Hello
I have used "dump" and "restore" to perform system backups for many years.
Since upgrading to Debian 6.x, I have not been able to obtain consistent and
reliable dumps for the following reason:
Sometimes, my single fixed disk is labeled as /dev/sda, but
at other
> "SP" == Stephen Powell writes:
SP> Perhaps the "--purge-unused" option of aptitude is what you are looking for.
Naw, that only controls the difference between purging and just removing.
OK, this helped:
# aptitude markauto linux-doc-2.6.37
The following packages will be REMOVED:
linux-doc-
On Sat, 02 Apr 2011 20:27:19 -0400 (EDT), jida...@jidanni.org wrote:
>
> Why do I always have to clean up older versions by hand?
>
> E.g., linux-doc-2.6 pulls in the latest version automatically,
> but if I don't want an ever growing number of older versions accruing, I
> have to remove them by
In <87d3l4tc0o@jidanni.org>, jida...@jidanni.org wrote:
>Why do I always have to clean up older versions by hand?
>
>E.g., linux-doc-2.6 pulls in the latest version automatically,
>but if I don't want an ever growing number of older versions accruing, I
>have to remove them by hand.
>
># apt-sh
Why do I always have to clean up older versions by hand?
E.g., linux-doc-2.6 pulls in the latest version automatically,
but if I don't want an ever growing number of older versions accruing, I
have to remove them by hand.
# apt-show-versions -r -p ^linux-doc
linux-doc-2.6/unstable uptodate 1:2.6.
On Ma, 07 dec 10, 14:11:14, Karl Vogel wrote:
>
>All filesystems are ext3, mounted like so:
>
> rw,nodev,noatime,nodiratime,data=journal
noatime implies nodiratime.
Regards,
Andrei
--
Offtopic discussions among Debian users and developers:
http://lists.alioth.debian.org/mailman/listi
Karl Vogel wrote:
>Here are some machine specifics for perspective.
Data! Excellent stuff.
>It's an IBM x3400, 2 Xeon 2GHz CPUs, 4Gb memory running RedHat.
> ...
>             total       used       free     shared    buffers     cached
>Mem:       1943948    1576708     367240
>> In an earlier message, I said:
K> This box has ~630,000 files using 640 Gbytes, but not many files change
K> hourly.
>> On Mon, 6 Dec 2010 21:33:01 -0700, Bob Proulx said:
B> Note that you must have sufficient ram to hold the inodes in buffer cache.
B> Otherwise I would guess that it would b
On Tue, Dec 7, 2010 at 1:29 AM, Peter Tenenbaum
wrote:
> On Sun, Dec 5, 2010 at 7:15 PM, Peter Tenenbaum
> wrote:
>> On Sat, Dec 4, 2010 at 4:14 PM, Peter Tenenbaum
>> wrote:
>>>
>>> In thinking this over, I think that the best approach is to simply have a
>>> daily rsync --archive from my main
Karl --
So on my first attempt, I realized that I need to exclude the /media
directory, or else the backup drive will attempt to back up itself. OK,
that's fine.
On the second attempt, the backup got into the /proc directory, complained
about some files disappearing, and then froze.
I don't hav
Karl Vogel wrote:
>I'm interested in seeing what kind of grief you're getting from rsync.
>I've had to argue with it in the past; feel free to reply privately if
>you'd rather.
Rsync has been a great performer for me as well.
>Don't rule out dumb and strong, it works great for me.
>> On Sun, 5 Dec 2010 19:15:25 -0800,
>> Peter Tenenbaum said:
P> After having some difficulty getting rsync to do exactly what I want,
P> I've become convinced to try rsnapshot. I'll let you know how it goes.
I'm interested in seeing what kind of grief you're getting from rsync.
I've ha
Well, after having some difficulty getting rsync to do exactly what I want,
I've become convinced to try rsnapshot. I'll let you know how it goes.
-PT
On Sat, Dec 4, 2010 at 4:14 PM, Peter Tenenbaum wrote:
> Jochen, Paul --
>
> In thinking this over, I think that the best approach is to simply
Peter Tenenbaum:
>
> In thinking this over, I think that the best approach is to simply have a
> daily rsync --archive from my main hard drive to the backup drive. While I
> understand that more sophisticated backup systems are often useful in a
> large system, the system in question is a home co
Jochen, Paul --
In thinking this over, I think that the best approach is to simply have a
daily rsync --archive from my main hard drive to the backup drive. While I
understand that more sophisticated backup systems are often useful in a
large system, the system in question is a home computer with
Peter Tenenbaum:
>
> Paul -- thanks for the suggestions. I guess that, since I am not using a
> tape drive for backup, there's no good reason to use dump rather than rsync,
> and the latter will leave me with a navigable file tree on the backup
> drive.
If you are going
Paul -- thanks for the suggestions. I guess that, since I am not using a
tape drive for backup, there's no good reason to use dump rather than rsync,
and the latter will leave me with a navigable file tree on the backup
drive. This is what I use at work to back up /home/ptenenbaum (a
On 20101201_215849, Peter Tenenbaum wrote:
> I've been using dump to perform backups of my home Debian workstation (I run
> squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps.
>
> For some reason the level 1 backups are almost as large as the level 0 (the
> le
On Wed, 01 Dec 2010 21:58:49 -0800, Peter Tenenbaum wrote:
> I've been using dump to perform backups of my home Debian workstation (I
> run squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps.
>
> For some reason the level 1 backups are almost as large as the lev
I've been using dump to perform backups of my home Debian workstation (I run
squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps.
For some reason the level 1 backups are almost as large as the level 0 (the
level 0 is 57.9 GB and the level 1 is 51.6 GB), even though we clearly
On Sun, May 23, 2010 at 3:21 PM, Avinash H.M. wrote:
> Thanks !!! this worked.
> I did
> ulimit -c unlimited.
>
> I tried tracking ulimit. If i do
> which ulimit, i am not getting anything. [ I expect the path of this binary
> ]
>
> Is it a built in bash command or something like that
>
Ye
Chris Bannister earthlight.co.nz> writes:
>
> On Sat, May 22, 2010 at 10:40:53PM +0530, Avinash H.M. wrote:
> > Hi All,
> >
> > I am using DSL [ damn small linux ] which is branched from Debian.
> > I am trying to use GCC, GDB. Able to install both of them.
>
> Although DSL is based on Debian
h -g, then take a look at man core. Not
>> every program that has received
>> a segfault signal dumps core. Look at gcore to see how to generate it.
On Sat, May 22, 2010 at 10:40:53PM +0530, Avinash H.M. wrote:
> Hi All,
>
> I am using DSL [ damn small linux ] which is branched from Debian.
> I am trying to use GCC, GDB. Able to install both of them.
Although DSL is based on Debian, it is not Debian. There are numerous
differences between th
dumps core. Look at gcore to see how to generate it.
Normally core dumps are disabled. You can find the maximum size of the core
file created using "ulimit -a"; normally it is 0.
Increase it using
ulimit -c
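A minimal sketch of the full incantation (this only affects the current shell and its children; put it in a shell dot file to make it persistent):

```shell
# Raise the core-file size limit for this shell and its children, then
# confirm the new value.
ulimit -c unlimited
ulimit -c
```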
On 05/22/2010 08:10 PM, Avinash H.M. wrote:
Hi All,
I am using DSL [ damn small linux ] which is branched from Debian.
I am trying to use GCC, GDB. Able to install both of them.
I am doing following
- run a helloworld.c program which has a while loop. So while
running, it's stuck in the while loop
Hi All,
I am using DSL [ damn small linux ] which is branched from Debian.
I am trying to use GCC, GDB. Able to install both of them.
I am doing following
- run a helloworld.c program which has a while loop. So while
running, it's stuck in the while loop.
- another shell, "kill -11 PID" [ PID of
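The "kill -11" step above can be reproduced with any throwaway process; a process killed by signal N is reported by the parent shell as exit status 128 + N:

```shell
# Send SIGSEGV (signal 11) to a child shell; the parent then sees exit
# status 128 + 11 = 139, the shell convention for death-by-signal.
sh -c 'kill -SEGV $$'
echo "exit status: $?"
```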
On Thu, Jan 29, 2009 at 06:23:42PM +0100, Johannes Wiedersich wrote:
> Martin McCormick wrote:
> > Nicolas KOWALSKI writes:
> >> Yes, dump is working well with ext3 filesystems, as long as they
> >> are not actively modified. LVM may come to help here.
> >
> &
Martin McCormick wrote:
> Nicolas KOWALSKI writes:
>> Yes, dump is working well with ext3 filesystems, as long as they
>> are not actively modified. LVM may come to help here.
>
> Many thanks. That is a problem more or less w
Nicolas KOWALSKI writes:
> Yes, dump is working well with ext3 filesystems, as long as they are not
> actively modified. LVM may come to help here.
Many thanks. That is a problem more or less with any
backup method that doesn't unmount and convert to read-only a
given
On Thu, 29 Jan 2009, Martin McCormick wrote:
> While looking on dselect, I saw there is a port of the BSD
> package of dump and restore but it says it is for the ext2 file
> system. Is there a Linux dump and restore utility anywhere that
> is safe to use with ext3?
Yes, dump is worki
While looking on dselect, I saw there is a port of the BSD
package of dump and restore but it says it is for the ext2 file
system. Is there a Linux dump and restore utility anywhere that
is safe to use with ext3?
The issue is that we have a mixed Unix environment.
There is a number of
Somewhat shamefaced...
Permissions. The log / log directories on the client were owned by root:
an artifact of copying over the old logs (as root) as part of the new
install.
I'd set amandad to be run as root in inted.conf -- that made amcheck
work,
On 11/22/08 06:14, Girish Kulkarni wrote:
[snip]
Thanks for the replies Ron, Mike and others. I guess I understand use
of od now (although that du puzzle is still with me).
From the top of the man page DESCRIPTION:
"Summarize disk usage of each FILE, recursively for directories."
The 2nd arg
On Wed, Nov 19, 2008 at 7:32 AM, Ron Johnson wrote:
> In my work I collect a lot of data from the serial port, spit out by
> other machines. od is VERY useful when I collect data from a new
> machine. Are there carriage returns? Are there other strange
> characters? After those questions are
On 11/18/08 18:44, mike wrote:
Ron Johnson wrote:
On 11/18/08 05:50, mike wrote:
Girish Kulkarni wrote:
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
Ron Johnson wrote:
On 11/18/08 05:50, mike wrote:
Girish Kulkarni wrote:
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
files?! And when people say they
Girish Kulkarni <[EMAIL PROTECTED]>:
>
> I was trying to understand octal dumps today that can be obtained
> using od. But two questions cropped up:
>
> 1. When would an octal dump be useful? Surely not in perusing text
Sure. ftp any text file from a Windows or Mac u
On 11/18/08 05:50, mike wrote:
Girish Kulkarni wrote:
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
files?! And when people say they use octal (or hex
Girish Kulkarni wrote:
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
files?! And when people say they use octal (or hex) dump to check
and edit binary
>
>
>
> Original Message
>From: [EMAIL PROTECTED]
>To: debian-user@lists.debian.org
>Subject: RE: octal dump
>Date: Mon, 17 Nov 2008 22:24:49 +0530
>
>>Hello,
>>
>>I was trying to understand octal dumps today that can be obtained
>>us
On Monday 17 November 2008, "Girish Kulkarni" <[EMAIL PROTECTED]> wrote
about 'octal dump':
>1. When would an octal dump be useful? Surely not in perusing text
> files?! And when people say they use octal (or hex) dump to check
> and edit binary files, ho
On 11/17/08 10:54, Girish Kulkarni wrote:
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
files?!
Nowadays? Hardly ever, if at all. But the PDP-7, which
On Mon, Nov 17, 2008 at 10:24:49PM +0530, Girish Kulkarni wrote:
>-Ad'. But I find that the offset of the last line does not match
>with the size of the file given by 'du -h'. Why is that?
du shows disk usage, which can be different from the size of a file
due to the block size of the fi
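That difference is easy to demonstrate with a throwaway file: the byte size (what od's final offset corresponds to) and the allocated disk usage (what du reports) diverge for any file smaller than a filesystem block.

```shell
# A 1-byte file still occupies at least one whole filesystem block, so
# the byte size (stat) and the disk usage (du) disagree.
f=$(mktemp)
printf 'x' > "$f"
stat -c %s "$f"   # byte size: 1
du -B1 "$f"       # allocated bytes: at least one block, e.g. 4096
```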
Hello,
I was trying to understand octal dumps today that can be obtained
using od. But two questions cropped up:
1. When would an octal dump be useful? Surely not in perusing text
files?! And when people say they use octal (or hex) dump to check
and edit binary files, how do they learn
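For what it's worth, one everyday answer to the first question is spotting invisible characters, such as CRLF line endings from a Windows file:

```shell
# -A d prints offsets in decimal, -c prints bytes as characters with
# backslash escapes, which makes a stray carriage return (\r) obvious.
printf 'hi\r\n' | od -A d -c
# 0000000   h   i  \r  \n
# 0000004
```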
On Tue, Sep 18, 2007 at 05:16:26PM +0530, Rajesh wrote:
> --
> With Regards
> From
> Rajesh
> Delhi 110 059
>
>
> Sir,
>
> I can't install the Debian 4.0 ISO image with the help of GRUB.
Read the installation manual in your language of choice then complete
and submit an installation repor
--
With Regards
From
Rajesh
Delhi 110 059
Sir,
I can't install the Debian 4.0 ISO image with the help of GRUB.
On Tue, 2007-01-23 at 21:08 -0300, Carlos Alberto Pereira Gomes wrote:
> I use wine in a debian/sid machine but in the last 2-3 weeks (after some
> apt-get update/upgrade) it just does not run any application it used to
> run without problems.
A good start could be to file a bug report and see if
Hi,
I use wine in a debian/sid machine but in the last 2-3 weeks (after some
apt-get update/upgrade) it just does not run any application it used to
run without problems. Following is its core dump:
wine: Unhandled page fault on read access to 0x7d481a6c at address 0xb7dc1141
(thread 0009
On Monday, Nov 14, 2005, at 11:42 Europe/Berlin, Marc Brünink wrote:
Hi,
I've a program which creates some subthreads. One of them hangs. I
need a stack dump of this very special thread. Any hints? I'm unable
to use gdb directly, because it just occurs on one machine. And it'
Hi,
I've a program which creates some subthreads. One of them hangs. I need
a stack dump of this very special thread. Any hints? I'm unable to use
gdb directly, because it just occurs on one machine. And it's a
production system. It's no problem if the thread gets ter
On Fri, Sep 16, 2005 at 11:13:31AM -0500, J French wrote:
> We are setting up Debian Linux on a new server for a PostGreSQL database. In
> the past, on FreeBSD, I used the dump utility with the live filesystem
> (snapshot) switch to backup the running database. Does dump on linu
general)? I need a robust backup because this will be a
> > production server. Advice is appreciated.
>
> You cannot use dump on a read/write mounted fs - the kernel does not keep
> writes from the fs coherent with the block device you read from (eg,
> /dev/hda0). You must at leas
On Fri, Sep 16, 2005 at 11:13:31AM -0500, J French wrote:
> Debian (or linux in general)? I need a robust backup because this will be a
> production server. Advice is appreciated.
You cannot use dump on a read/write mounted fs - the kernel does not keep
writes from the fs coherent with the
One of those 200G tape
drives running off a NetApp fileserver. Since the main filesystem was
on Raid5 I only did a weekly tape dump and stored the tapes in my
apartment. It worked fine as long as we only had 200G of data, but
manually changing tapes is an enormous hassle! I can't imagine a
I'll throw in a suggestion for bacula:
http://bacula.org/
On Fri, 2005-09-16 at 11:13 -0500, J French wrote:
> How are most people backing up to tape with Debian (or linux in
> general)? I need a robust backup because this will be a production
> server. Advice is appreciated.
I'm using Amanda, but Amanda uses dump or gtar, so the quest
On Fri, 16 Sep 2005, J French wrote:
> Hello,
> We are setting up Debian Linux on a new server for a PostGreSQL database. In
> the past, on FreeBSD, I used the dump utility with the live filesystem
> (snapshot) switch to backup the running database. Does dump on linux support
>
Hello,
We are setting up Debian Linux on a new server for a PostGreSQL database. In the past, on FreeBSD, I used the dump utility with the live filesystem (snapshot) switch to backup the running database. Does dump on linux support live filesystem backups as well? How are most people backing up
I have the same problem. Do you have the solution for this problem now?
Thanks,
Charles
Charles Yuan
The CPS Group  t: (02) 9968 5710  f: (02) 9953 8083
Roozemond, D.A. wrote:
Hi Matthijs,
Example:
lynx -dump
http://www.ticketmaster.nl/html/searchResult.htmI?keyword=carlton&l=NL
| grep resultaten
No need to understand all the above - if you change the '&' in the
webpage address to '\&', it works:
Alt
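The underlying shell rule, independent of lynx: an unquoted '&' terminates the command and runs it in the background, so everything after it never reaches the program. Quoting (or backslash-escaping) keeps the URL intact:

```shell
# Single quotes stop the shell from treating '&' as the background
# operator, so the whole URL is passed as one argument.
url='http://www.ticketmaster.nl/html/searchResult.htmI?keyword=carlton&l=NL'
printf '%s\n' "$url"
```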