Newer mariadb-dump output breaks on import

2024-07-25 Thread Gareth Evans
As explained in https://mariadb.org/mariadb-dump-file-compatibility-change/, versions of MariaDB later than Bookworm's, i.e. 10.5.25, 10.6.18, 10.11.8, 11.0.6, 11.1.5, 11.2.4 and 11.4.2, introduce a breaking change to mariadb-dump (mysqldump) in order to prevent shell commands being executed vi
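The change referenced here makes newer mariadb-dump write a sandbox-mode comment as the very first line of the file, which older clients reject at import time. A minimal workaround sketch, with placeholder file and database names (not from the post):

    # Strip the sandbox-mode marker from line 1, then import with the older client.
    sed -e '1{/^\/\*!999999/d;}' dump.sql | mariadb -u root -p mydb

Upgrading the importing client instead avoids editing the dump at all.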

Re: Disk writes much slower in Bookworm i386 [Was: svnadmin dump ...]

2023-08-14 Thread Max Nikulin
On 15/08/2023 04:56, kjohn...@eclypse.org wrote: I have done additional research, and it now appears that programs that do extensive disk writes run much slower (3-6x) in Bookworm than they did in Bullseye. Have you compared kernel IO schedulers? May it be a case of SSD vs HDD optimizing? gr
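The scheduler question can be checked directly; a quick sketch, with sda as a placeholder device name:

    cat /sys/block/sda/queue/scheduler            # active scheduler is shown in brackets
    echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler   # switch at runtime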

Re: Disk writes much slower in Bookworm i386 [Was: svnadmin dump ...]

2023-08-14 Thread Stefan Monnier
> I have done additional research, and it now appears that programs that do > extensive disk writes run much slower (3-6x) in Bookworm than they did in > Bullseye. The two cases I have observed are 'svnadmin dump', and extracting > an SQL backup of a Bacula database from P

Disk writes much slower in Bookworm i386 [Was: svnadmin dump ...]

2023-08-14 Thread kjohnson
I have done additional research, and it now appears that programs that do extensive disk writes run much slower (3-6x) in Bookworm than they did in Bullseye. The two cases I have observed are 'svnadmin dump', and extracting an SQL backup of a Bacula database from Postgresql (backup

svnadmin dump much slower in Bookworm

2023-08-10 Thread kjohnson
A set of svnadmin dump commands that run as part of a backup procedure seem to be _much_ slower in Bookworm than in Bullseye. Prior to the upgrade to Bookworm, these commands took slightly less than one hour. After the upgrade, similar commands (dumping a few more revisions) require more than

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread songbird
Greg Wooledge wrote: ... > Well, that's interesting. You *can* specify an absolute directory by > this mechanism. I guess I learned something today. :) > So, what exactly was the complaint? That songbird shot themselves in > the foot by specifying an absolute directory for core dumps that w

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread Greg Wooledge
everal dead end man pages (why the hell don't they have comprehensive SEE ALSO sections?!). Then gave up, then decided to try "man -k sysctl". This led me to discover that there is a sysctl(2) page in addition to sysctl(8) (the latter does NOT link to the former). sysctl(2) mentions c

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread Reco
h defines the path and > filename of the core file. It can do that too. Quoting the relevant part of the kernel's documentation (admin-guide/sysctl/kernel.rst.gz) : If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a comma

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread songbird
The Wanderer wrote: > On 2022-02-28 at 11:35, Greg Wooledge wrote: >> On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote: >> >>> >> me@ant(14)~$ ulimit -a >>> >> real-time non-blocking time (microseconds, -R) unlimited >>> >> core file size (blocks, -c) unlimited >>> >>> i

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread The Wanderer
On 2022-02-28 at 11:35, Greg Wooledge wrote: > On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote: > >> >> me@ant(14)~$ ulimit -a >> >> real-time non-blocking time (microseconds, -R) unlimited >> >> core file size (blocks, -c) unlimited >> >> i had accomplished the ulimit ch

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread Greg Wooledge
On Mon, Feb 28, 2022 at 11:25:13AM -0500, songbird wrote: > >> me@ant(14)~$ ulimit -a > >> real-time non-blocking time (microseconds, -R) unlimited > >> core file size (blocks, -c) unlimited > > i had accomplished the ulimit change already, but the lack of > the proper permission

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread songbird
had accomplished the ulimit change already, but the lack of the proper permission on the output directory meant that a core file would not be generated. also, i wanted to make sure that all programs running would be able to dump core upon crashing because i have some kind of strange lock up goin

Re: getting a regular user to dump core when a program crashes

2022-02-28 Thread Greg Wooledge
On Mon, Feb 28, 2022 at 10:01:05AM -0500, songbird wrote: > > i had some fun trying to figure out why a regular user could not > dump a core file Just put 'ulimit -c unlimited' into the appropriate dot file to put things back to how they used to be. This changed a *really
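A short sketch pulling together the advice in this thread: raise the per-shell limit, then make sure cores land somewhere the crashing user can write (the /var/crash path is illustrative, not from the thread):

    ulimit -c unlimited                                  # per-shell core size limit
    sysctl kernel.core_pattern                           # where the kernel writes cores
    sudo install -d -m 1777 /var/crash                   # world-writable, sticky directory
    sudo sysctl -w kernel.core_pattern='/var/crash/core.%e.%p'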

getting a regular user to dump core when a program crashes

2022-02-28 Thread songbird
i had some fun trying to figure out why a regular user could not dump a core file and i had all the settings figured out. since it was a silly and obvious thing but it stumped me for a bit i figured it would be worth sharing. :) the answer is at the end... using Debian testing. i have

Re: mailx(1) core dump, Debian 8, amd64

2017-09-05 Thread kamaraju kusumanchi
On Wed, Sep 6, 2017 at 12:10 AM, John Conover wrote: > > Anytime mailx is invoked, it does a core dump: > > mail: mu_wordsplit failed: missing closing quote > Segmentation fault > > Any suggestions? > It would be nice to document the problem first with full

mailx(1) core dump, Debian 8, amd64

2017-09-05 Thread John Conover
Anytime mailx is invoked, it does a core dump: mail: mu_wordsplit failed: missing closing quote Segmentation fault Any suggestions? Thanks, John -- John Conover, cono...@rahul.net, http://www.johncon.com/

Re: Mysql database dump quit working

2014-04-24 Thread John W. Foster
On Wed, 2014-04-23 at 22:13 +0100, Steve wrote: > > I have a mediawiki site running on Mysql db; I just started getting this > > message on my automated backup; > > > > mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as > > crashed and last (automatic?) repair failed when using

Re: Mysql database dump quit working

2014-04-23 Thread Steve
> I have a mediawiki site running on Mysql db; I just started getting this > message on my automated backup; > > mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as > crashed and last (automatic?) repair failed when using LOCK TABLES The table is marked as crashed, and an autom
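A sketch of the usual recovery for a MyISAM table in this state, using the database and table names from the error message (stop the server before touching the files with myisamchk):

    mysql -u root -p mywiki -e 'REPAIR TABLE searchindex;'
    # or, with mysqld stopped, repair the on-disk index file directly:
    myisamchk --recover /var/lib/mysql/mywiki/searchindex.MYI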

Re: Mysql database dump quit working

2014-04-23 Thread Henning Follmann
On Wed, Apr 23, 2014 at 10:33:34AM -0500, John Foster wrote: > I have a mediawiki site running on Mysql db; I just started getting this > message on my automated backup; > > mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as > crashed and last (automatic?) repair failed when usin

Mysql database dump quit working

2014-04-23 Thread John Foster
I have a mediawiki site running on Mysql db; I just started getting this message on my automated backup; mysqldump: Got error: 144: Table './mywiki/searchindex' is marked as crashed and last (automatic?) repair failed when using LOCK TABLES Any ideas? Thanks John

Re: [Fwd: [D-community-offtopic] Outrageous sexism - dump questions - carbon copy - ignoring hidden ids - HTML - top posting - brainstorming - etc.]

2012-11-22 Thread Mike McClain
On Thu, Nov 22, 2012 at 12:14:19PM +0100, Ralf Mardorf wrote: > > All off-topic is caused by people who feel offended by harmless jokes, > breaking threads, asking the wrong questions, carbon copy, brainstorming > etc., not by the people who make harmless jokes, are breaking threads, > asking the

[Fwd: [D-community-offtopic] Outrageous sexism - dump questions - carbon copy - ignoring hidden ids - HTML - top posting - brainstorming - etc.]

2012-11-22 Thread Ralf Mardorf
] Outrageous sexism - dump questions - carbon copy - ignoring hidden ids - HTML - top posting - brainstorming - etc. Date: Thu, 22 Nov 2012 12:06:26 +0100 Is there anything else people are allowed to do on the Debian user mailing list, besides dissing people for harmless jokes, breaking threads, asking the wrong

Re: Core dump problem

2012-10-05 Thread Karl E. Jorgensen
Hi On Fri, Oct 05, 2012 at 03:18:54PM +0100, Julien Groselle wrote: > Hello Debian users, > > I want to enable core dumps on a Debian Squeeze server. > So I have downloaded the source code of the kernel (2.6.32) and activated > these options: > > CONFIG_PROC_VMCORE=

Core dump problem

2012-10-05 Thread Julien Groselle
Hello Debian users, I want to enable core dumps on a Debian Squeeze server. So I have downloaded the source code of the kernel (2.6.32) and activated these options: CONFIG_PROC_VMCORE=y CONFIG_KEXEC=y CONFIG_CRASH_DUMP=y CONFIG_KEXEC_JUMP=y Compilation was successful, but I can't
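A quick check that the options above actually made it into the kernel being run, assuming a Debian-style /boot/config file is installed:

    grep -E 'CONFIG_(PROC_VMCORE|KEXEC|CRASH_DUMP|KEXEC_JUMP)=' /boot/config-"$(uname -r)"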

Re: Re: debug apache-php dump

2012-09-04 Thread Glenn B. Jakobsen
On Fri, Oct 14, 2011 at 6:18 PM, David Sastre wrote: On Tue, Oct 04, 2011 at 09:36:59AM -0300, Roberto Scattini wrote: ... it is a standard package installation, apache2, php5 and libapache2-mod-php5. i also installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg. You need apache2-d

Re: dump - restore

2012-07-20 Thread Mostafa Hashemi
Thank you. What should I do? Did I do it in the right way in order to run a dump to back up the whole system

Re: dump - restore

2012-07-20 Thread Dom
On 20/07/12 19:50, Mostafa Hashemi wrote: sorry because of last message, coincidentally i pressed send key the complete message is : hi guys thank you all for your answers to my last questions. i found how dump - restore works. but i have question : i did this : dump -0aj -f /tmp/1.bak

dump - restore

2012-07-20 Thread Mostafa Hashemi
sorry because of last message, coincidentally i pressed send key the complete message is : hi guys thank you all for your answers to my last questions. i found how dump - restore works. but i have question : i did this : dump -0aj -f /tmp/1.bak / in order to run a dump to back up the whole
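For the dump command shown here, a sketch of verifying and restoring the archive (the target directory is illustrative):

    restore -tf /tmp/1.bak            # list what the level-0 dump contains
    mkdir -p /mnt/target && cd /mnt/target
    restore -rf /tmp/1.bak            # full restore into the current directory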

dump - restore

2012-07-20 Thread Mostafa Hashemi
hi guys thank you all for your answers to my last questions. i found how dump - restore works. but i have question : i did this : dump -0aj -f /tmp/1.bak

Re: debug apache-php dump

2011-10-14 Thread Roberto Scattini
On Fri, Oct 14, 2011 at 6:18 PM, David Sastre wrote: > On Tue, Oct 04, 2011 at 09:36:59AM -0300, Roberto Scattini wrote: >> ... >> it is a standard package installation, apache2, php5 and >> libapache2-mod-php5. i also >> installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg. >> >> > Yo

Re: debug apache-php dump

2011-10-14 Thread David Sastre
arted to generate "segmentation faults" randomly. > it is a standard package installation, apache2, php5 and > libapache2-mod-php5. i also > installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg. > > i generated an apache coredump file, but when i open dump file

debug apache-php dump

2011-10-04 Thread Roberto Scattini
n, apache2, php5 and libapache2-mod-php5. i also installed apache2-dbg, libapr1-dbg, libaprutil1-dbg and php5-dbg. i generated an apache coredump file, but when i open dump file with gdb the only i get is this: # gdb /usr/sbin/apache2 /var/cache/apache2/core GNU gdb (GDB) 7.0.1-debian Copyright (C
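A non-interactive variant of that gdb invocation, run after the -dbg packages named in the thread are installed so the backtrace resolves symbols:

    gdb -batch -ex 'bt full' -ex 'info threads' /usr/sbin/apache2 /var/cache/apache2/core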

Re: dump (for extX) broken in latest release? 0.4b44-1

2011-07-17 Thread Justin Piszcz
On Sun, 17 Jul 2011, Dom wrote: On 17/07/11 21:00, Justin Piszcz wrote: I have already submitted a bug report for this. It seems to have been caused by the latest update of e2fslibs. If you downgrade e2fslibs to 1.41.12-4stable1 dump will work again. You should also downgrade e2fsprogs
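A sketch of the suggested downgrade; only the e2fslibs version is quoted in the thread, and the matching e2fsprogs version is assumed to be the same:

    apt-get install e2fslibs=1.41.12-4stable1 e2fsprogs=1.41.12-4stable1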

Re: dump (for extX) broken in latest release? 0.4b44-1

2011-07-17 Thread Dom
On 17/07/11 21:00, Justin Piszcz wrote: Hi, Re-sending to correct list, per Mike: I've been using dump for ~1-2+ years now and the syntax has not changed; however, with the most recent update: From: 0.4b43-1 To: 0.4b44-1 Dump no longer has the same behavior: dump -0 -z9 -L 2011-07-

dump (for extX) broken in latest release? 0.4b44-1

2011-07-17 Thread Justin Piszcz
Hi, Re-sending to correct list, per Mike: I've been using dump for ~1-2+ years now and the syntax has not changed; however, with the most recent update: From: 0.4b43-1 To: 0.4b44-1 Dump no longer has the same behavior: dump -0 -z9 -L 2011-07-16 -f file.ext4dump / DUMP: Date of this

arbitrary disk name assignment affects dump/restore

2011-04-18 Thread Dean Allen Provins, P. Geoph.
Andrei, Stan and Paul: Thanks for the replies. I was unaware that "/dev/disk/*" existed. I must have missed that lesson during the last upgrade. I appreciate your assistance. Regards from Calgary, Dean -- Dean Provins, P. Geoph. dprov...@a

Re: arbitrary disk name assignment affects dump/restore

2011-04-17 Thread Stan Hoeppner
Andrei Popescu put forth on 4/17/2011 3:12 AM: > /dev/disk/by-id/ > /dev/disk/by-label/ # assuming you defined labels > /dev/disk/by-path/ > /dev/disk/by-uuid/ > > I prefer labels since they can be set to something meaningful/mnemonic. Yes, I use labels for partitions as well, more for organizat
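A sketch of pointing a nightly dump at one of those stable names instead of /dev/sdX (the label is illustrative):

    ls /dev/disk/by-label/ /dev/disk/by-uuid/
    dump -0u -f /backup/root.dump /dev/disk/by-label/root   # -u updates /var/lib/dumpdates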

Re: arbitrary disk name assignment affects dump/restore

2011-04-17 Thread Paul E Condon
On 20110417_111214, Andrei Popescu wrote: > On Sb, 16 apr 11, 15:00:39, Dean Allen Provins, P. Geoph. wrote: > > > > This means that I must NOT rely on my automatic (crontab-based) dump > > scripts, but interrogate the system manually, and if necessary, alter > > /va

Re: arbitrary disk name assignment affects dump/restore

2011-04-17 Thread Andrei Popescu
On Sb, 16 apr 11, 15:00:39, Dean Allen Provins, P. Geoph. wrote: > > This means that I must NOT rely on my automatic (crontab-based) dump > scripts, but interrogate the system manually, and if necessary, alter > /var/lib/dumpdates so that the script will run properly. No, just adapt

arbitrary disk name assignment affects dump/restore

2011-04-16 Thread Dean Allen Provins, P. Geoph.
Hello, I have used "dump" and "restore" to perform system backups for many years. Since upgrading to Debian 6.x, I have not been able to obtain consistent and reliable dumps for the following reason: Sometimes my single fixed disk is labeled as /dev/sda, but at other

Re: apt makes it easy to track the new package but hard to dump the old

2011-04-04 Thread jidanni
> "SP" == Stephen Powell writes: SP> Perhaps the "--purge-unused" option of aptitude is what you are looking for. Naw, that only controls the difference between purging and just removing. OK, this helped: # aptitude markauto linux-doc-2.6.37 The following packages will be REMOVED: linux-doc-
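The two commands from this thread, side by side for reference:

    apt-show-versions -r -p ^linux-doc     # list every installed linux-doc package
    aptitude markauto linux-doc-2.6.37     # mark the old one auto so aptitude removes it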

Re: apt makes it easy to track the new package but hard to dump the old

2011-04-02 Thread Stephen Powell
On Sat, 02 Apr 2011 20:27:19 -0400 (EDT), jida...@jidanni.org wrote: > > Why do I always have to clean up older versions by hand? > > E.g., linux-doc-2.6 pulls in the latest version automatically, > but if I don't want an ever growing number of older versions accruing, I > have to remove them by

Re: apt makes it easy to track the new package but hard to dump the old

2011-04-02 Thread Boyd Stephen Smith Jr.
In <87d3l4tc0o@jidanni.org>, jida...@jidanni.org wrote: >Why do I always have to clean up older versions by hand? > >E.g., linux-doc-2.6 pulls in the latest version automatically, >but if I don't want an ever growing number of older versions accruing, I >have to remove them by hand. > ># apt-sh

apt makes it easy to track the new package but hard to dump the old

2011-04-02 Thread jidanni
Why do I always have to clean up older versions by hand? E.g., linux-doc-2.6 pulls in the latest version automatically, but if I don't want an ever growing number of older versions accruing, I have to remove them by hand. # apt-show-versions -r -p ^linux-doc linux-doc-2.6/unstable uptodate 1:2.6.

Re: Extremely large level 1 backups with dump

2010-12-12 Thread Andrei Popescu
On Ma, 07 dec 10, 14:11:14, Karl Vogel wrote: > >All filesystems are ext3, mounted like so: > > rw,nodev,noatime,nodiratime,data=journal noatime implies nodiratime. Regards, Andrei

Re: Extremely large level 1 backups with dump

2010-12-07 Thread Bob Proulx
Karl Vogel wrote: >Here are some machine specifics for perspective. Data! Excellent stuff. >It's an IBM x3400, 2 Xeon 2GHz CPUs, 4Gb memory running RedHat. > ... > total used free shared buffers cached >Mem: 19439481576708 367240

Re: Extremely large level 1 backups with dump

2010-12-07 Thread Karl Vogel
>> In an earlier message, I said: K> This box has ~630,000 files using 640 Gbytes, but not many files change K> hourly. >> On Mon, 6 Dec 2010 21:33:01 -0700, Bob Proulx said: B> Note that you must have sufficient ram to hold the inodes in buffer cache. B> Otherwise I would guess that it would b

Re: Extremely large level 1 backups with dump

2010-12-06 Thread Tom H
On Tue, Dec 7, 2010 at 1:29 AM, Peter Tenenbaum wrote: > On Sun, Dec 5, 2010 at 7:15 PM, Peter Tenenbaum > wrote: >> On Sat, Dec 4, 2010 at 4:14 PM, Peter Tenenbaum >> wrote: >>> >>> In thinking this over, I think that the best approach is to simply have a >>> daily rsync --archive from my main

Re: Extremely large level 1 backups with dump

2010-12-06 Thread Peter Tenenbaum
Karl -- So on my first attempt, I realized that I need to exclude the /media directory, or else the backup drive will attempt to back up itself. OK, that's fine. On the second attempt, the backup got into the /proc directory, complained about some files disappearing, and then froze. I don't hav
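A hedged sketch of the daily mirror being discussed, with the exclusions mentioned above (paths are placeholders):

    rsync --archive --delete --one-file-system \
          --exclude=/proc --exclude=/sys --exclude=/media \
          / /media/backup/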

Re: Extremely large level 1 backups with dump

2010-12-06 Thread Bob Proulx
Karl Vogel wrote: >I'm interested in seeing what kind of grief you're getting from rsync. >I've had to argue with it in the past; feel free to reply privately if >you'd rather. Rsync has been a great performer for me as well. >Don't rule out dumb and strong, it works great for me.

Re: Extremely large level 1 backups with dump

2010-12-06 Thread Karl Vogel
>> On Sun, 5 Dec 2010 19:15:25 -0800, >> Peter Tenenbaum said: P> After having some difficulty getting rsync to do exactly what I want, P> I've become convinced to try rsnapshot. I'll let you know how it goes. I'm interested in seeing what kind of grief you're getting from rsync. I've ha

Re: Extremely large level 1 backups with dump

2010-12-05 Thread Peter Tenenbaum
Well, after having some difficulty getting rsync to do exactly what I want, I've become convinced to try rsnapshot. I'll let you know how it goes. -PT On Sat, Dec 4, 2010 at 4:14 PM, Peter Tenenbaum wrote: > Jochen, Paul -- > > In thinking this over, I think that the best approach is to simply

Re: Extremely large level 1 backups with dump

2010-12-05 Thread Jochen Schulz
Peter Tenenbaum: > > In thinking this over, I think that the best approach is to simply have a > daily rsync --archive from my main hard drive to the backup drive. While I > understand that more sophisticated backup systems are often useful in a > large system, the system in question is a home co

Re: Extremely large level 1 backups with dump

2010-12-04 Thread Peter Tenenbaum
Jochen, Paul -- In thinking this over, I think that the best approach is to simply have a daily rsync --archive from my main hard drive to the backup drive. While I understand that more sophisticated backup systems are often useful in a large system, the system in question is a home computer with

Re: Extremely large level 1 backups with dump

2010-12-02 Thread Jochen Schulz
Peter Tenenbaum: > > Paul -- thanks for the suggestions. I guess that, since I am not using a > tape drive for backup, there's no good reason to use dump rather than rsync, > and the latter will leave me with a navigable file tree on the backup > drive. If you are going

Re: Extremely large level 1 backups with dump

2010-12-02 Thread Peter Tenenbaum
Paul -- thanks for the suggestions. I guess that, since I am not using a tape drive for backup, there's no good reason to use dump rather than rsync, and the latter will leave me with a navigable file tree on the backup drive. This is what I use at work to back up /home/ptenenbaum (a

Re: Extremely large level 1 backups with dump

2010-12-02 Thread Paul E Condon
On 20101201_215849, Peter Tenenbaum wrote: > I've been using dump to perform backups of my home Debian workstation (I run > squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps. > > For some reason the level 1 backups are almost as large as the level 0 (the > le

Re: Extremely large level 1 backups with dump

2010-12-02 Thread Camaleón
On Wed, 01 Dec 2010 21:58:49 -0800, Peter Tenenbaum wrote: > I've been using dump to perform backups of my home Debian workstation (I > run squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps. > > For some reason the level 1 backups are almost as large as the lev

Extremely large level 1 backups with dump

2010-12-01 Thread Peter Tenenbaum
I've been using dump to perform backups of my home Debian workstation (I run squeeze, btw). I do a weekly level 0 dump and daily level 1 dumps. For some reason the level 1 backups are almost as large as the level 0 (the level 0 is 57.9 GB and the level 1 is 51.6 GB), even though we clearly

Re: No CORE DUMP.

2010-05-23 Thread Javier Barroso
On Sun, May 23, 2010 at 3:21 PM, Avinash H.M. wrote: > Thanks !!! this worked. > I did > ulimit -c unlimited. > > I tried tracking ulimit. If i do > which ulimit, i am not getting anything. [ I expect the path of this binary > ] > > Is it a built in bash command or something like that > Ye
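The experiment from this thread in script form; ulimit is a bash builtin, which is why 'which ulimit' prints nothing:

    ulimit -c unlimited      # allow cores in this shell
    ./helloworld &           # the looping test program from the post
    kill -11 $!              # deliver SIGSEGV
    wait; ls core*           # the core file should now be in the working directory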

Re: No CORE DUMP.

2010-05-23 Thread Avinash
Chris Bannister earthlight.co.nz> writes: > > On Sat, May 22, 2010 at 10:40:53PM +0530, Avinash H.M. wrote: > > Hi All, > > > > I am using DSL [ damn small linux ] which is branched from Debian. > > I am trying to use GCC, GDB. Able to install both of them. > > Although DSL is based on Debian

Re: No CORE DUMP.

2010-05-23 Thread Avinash H.M.
h -g, then take a look at man core. Not >> every program that has received >> a segfault signal dumps core. Look at gcore to see how to generate it.

Re: No CORE DUMP.

2010-05-23 Thread Chris Bannister
On Sat, May 22, 2010 at 10:40:53PM +0530, Avinash H.M. wrote: > Hi All, > > I am using DSL [ damn small linux ] which is branched from Debian. > I am trying to use GCC, GDB. Able to install both of them. Although DSL is based on Debian, it is not Debian. There are numerous differences between th

Re: No CORE DUMP.

2010-05-22 Thread Anand Sivaram
dumps core. Look at gcore to see how to generate it. Normally core dump is disabled. You could find the maximum size of core file created using "ulimit -a", normally that is 0. Increase it using ulimit -c

Re: No CORE DUMP.

2010-05-22 Thread Aioanei Rares
On 05/22/2010 08:10 PM, Avinash H.M. wrote: Hi All, I am using DSL [ damn small linux ] which is branched from Debian. I am trying to use GCC, GDB. Able to install both of them. I am doing the following - run a helloworld.c program which has a while loop. So while running, it's stuck in while

No CORE DUMP.

2010-05-22 Thread Avinash H.M.
Hi All, I am using DSL [ damn small linux ] which is branched from Debian. I am trying to use GCC, GDB. Able to install both of them. I am doing the following - run a helloworld.c program which has a while loop. So while running, it's stuck in while. - another shell, "kill -11 PID" [ PID of

Re: Dump and Restore Utility

2009-01-30 Thread Douglas A. Tutty
On Thu, Jan 29, 2009 at 06:23:42PM +0100, Johannes Wiedersich wrote: > Martin McCormick wrote: > > Nicolas KOWALSKI writes: > >> Yes, dump is working well with ext3 filesystems, as long as they > >> are not actively modified. LVM may come to help here. > > > &

Re: Dump and Restore Utility

2009-01-29 Thread Johannes Wiedersich
Martin McCormick wrote: > Nicolas KOWALSKI writes: >> Yes, dump is working well with ext3 filesystems, as long as they >> are not actively modified. LVM may come to help here. > > Many thanks. That is a problem more or less w

Re: Dump and Restore Utility

2009-01-29 Thread Martin McCormick
Nicolas KOWALSKI writes: > Yes, dump is working well with ext3 filesystems, as long as they are not > actively modified. LVM may come to help here. Many thanks. That is a problem more or less with any backup method that doesn't unmount and convert to read-only a given
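A sketch of the LVM idea mentioned above: dump a short-lived snapshot so the filesystem is not being modified mid-backup (volume group and names are placeholders):

    lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home
    dump -0 -f /backup/home.dump /dev/vg0/home-snap
    lvremove -f /dev/vg0/home-snap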

Re: Dump and Restore Utility

2009-01-29 Thread Nicolas KOWALSKI
On Thu, 29 Jan 2009, Martin McCormick wrote: > While looking on dselect, I saw there is a port of the BSD > package of dump and restore but it says it is for the ext2 file > system. Is there a Linux dump and restore utility anywhere that > is safe to use with ext3? Yes, dump is worki

Dump and Restore Utility

2009-01-29 Thread Martin McCormick
While looking on dselect, I saw there is a port of the BSD package of dump and restore but it says it is for the ext2 file system. Is there a Linux dump and restore utility anywhere that is safe to use with ext3? The issue is that we have a mixed Unix environment. There is a number of

[SOLVED] am check / dump probs

2008-11-23 Thread ghe
Somewhat shamefaced... Permissions. The log / log directories on the client were owned by root: an artifact of copying over the old logs (as root) as part of the new install. I'd set amandad to be run as root in inetd.conf -- that made amcheck work,

du (was Re: octal dump)

2008-11-22 Thread Ron Johnson
On 11/22/08 06:14, Girish Kulkarni wrote: [snip] Thanks for the replies Ron, Mike and others. I guess I understand use of od now (although that du puzzle is still with me). From the top of the man page DESCRIPTION: "Summarize disk usage of each FILE, recursively for directories." The 2nd arg

octal dump

2008-11-22 Thread Girish Kulkarni
On Wed, Nov 19, 2008 at 7:32 AM, Ron Johnson wrote: > In my work I collect a lot of data from the serial port, spit out by > other machines. od is VERY useful when I collect data from a new > machine. Are there carriage returns? Are there other strange > characters? After those questions are
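A couple of od invocations of the kind described here (file names are placeholders):

    od -c serial-capture.log | head     # -c makes \r, \n and other control bytes visible
    od -A d -t x1 firmware.bin | less   # decimal offsets, hex bytes, for binary files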

Re: octal dump

2008-11-18 Thread Ron Johnson
On 11/18/08 18:44, mike wrote: Ron Johnson wrote: On 11/18/08 05:50, mike wrote: Girish Kulkarni wrote: Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text

Re: octal dump

2008-11-18 Thread mike
Ron Johnson wrote: On 11/18/08 05:50, mike wrote: Girish Kulkarni wrote: Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text files?! And when people say they

Re: octal dump

2008-11-18 Thread s. keeling
Girish Kulkarni <[EMAIL PROTECTED]>: > > I was trying to understand octal dumps today that can be obtained > using od. But two questions cropped up: > > 1. When would an octal dump be useful? Surely not in perusing text Sure. ftp any text file from a Windows or Mac u

Re: octal dump

2008-11-18 Thread Ron Johnson
On 11/18/08 05:50, mike wrote: Girish Kulkarni wrote: Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text files?! And when people say they use octal (or hex

Re: octal dump

2008-11-18 Thread mike
Girish Kulkarni wrote: Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text files?! And when people say they use octal (or hex) dump to check and edit binary

RE: octal dump

2008-11-17 Thread owens
Original Message > From: [EMAIL PROTECTED] > To: debian-user@lists.debian.org > Subject: RE: octal dump > Date: Mon, 17 Nov 2008 22:24:49 +0530 >> Hello, >> I was trying to understand octal dumps today that can be obtained >> us

Re: octal dump

2008-11-17 Thread Boyd Stephen Smith Jr.
On Monday 17 November 2008, "Girish Kulkarni" <[EMAIL PROTECTED]> wrote about 'octal dump': >1. When would an octal dump be useful? Surely not in perusing text > files?! And when people say they use octal (or hex) dump to check > and edit binary files, ho

Re: octal dump

2008-11-17 Thread Ron Johnson
On 11/17/08 10:54, Girish Kulkarni wrote: Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text files?! Nowadays? Hardly ever, if at all. But the PDP-7, which

Re: octal dump

2008-11-17 Thread lee
On Mon, Nov 17, 2008 at 10:24:49PM +0530, Girish Kulkarni wrote: >-Ad'. But I find that the offset of the last line does not match >with the size of the file given by 'du -h'. Why is that? du shows disk usage, which can be different from the size of a file due to the block size of the fi
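A quick way to see the difference being explained, using GNU coreutils options:

    du -h datafile                    # disk usage, rounded up to whole filesystem blocks
    du --apparent-size -h datafile    # the file's byte length instead
    ls -l datafile                    # byte count, matching od's final offset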

octal dump

2008-11-17 Thread Girish Kulkarni
Hello, I was trying to understand octal dumps today that can be obtained using od. But two questions cropped up: 1. When would an octal dump be useful? Surely not in perusing text files?! And when people say they use octal (or hex) dump to check and edit binary files, how do they learn

Re: Installation problem using iso dump on harddisk

2007-09-18 Thread Douglas A. Tutty
On Tue, Sep 18, 2007 at 05:16:26PM +0530, Rajesh wrote: > -- > With Regards > From > Rajesh > Delhi 110 059 > > > Sir, > > I am not able to install the Debian 4.0 iso image with the help of Grub. Read the installation manual in your language of choice then complete and submit an installation repor

Installation problem using iso dump on harddisk

2007-09-18 Thread Rajesh
-- With Regards From Rajesh Delhi 110 059 Sir, I am not able to install the Debian 4.0 iso image with the help of Grub.

Re: wine core dump

2007-01-24 Thread Sven Arvidsson
On Tue, 2007-01-23 at 21:08 -0300, Carlos Alberto Pereira Gomes wrote: > I use wine in a debian/sid machine but in the last 2-3 weeks (after some > apt-get update/upgrade) it just does not run any application it used to > run without problems. A good start could be to file a bug report and see if

wine core dump

2007-01-23 Thread Carlos Alberto Pereira Gomes
Hi, I use wine in a debian/sid machine but in the last 2-3 weeks (after some apt-get update/upgrade) it just does not run any application it used to run without problems. Following is its core dump: wine: Unhandled page fault on read access to 0x7d481a6c at address 0xb7dc1141 (thread 0009

Re: dump the stack of a running subthread (solved)

2005-11-14 Thread Marc Brünink
On Montag, Nov 14, 2005, at 11:42 Europe/Berlin, Marc Brünink wrote: Hi, I've a program which creates some subthreads. One of them hangs. I need a stack dump of this very special thread. Any hints? I'm unable to use gdb directly, because it just occurs on one machine. And it'
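A sketch of attaching non-interactively to collect every thread's stack; gdb pauses the process only while it runs (the process name is a placeholder):

    gdb -batch -ex 'thread apply all bt' -p "$(pidof hungprog)"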

dump the stack of a running subthread

2005-11-14 Thread Marc Brünink
Hi, I've a program which creates some subthreads. One of them hangs. I need a stack dump of this very special thread. Any hints? I'm unable to use gdb directly, because it just occurs on one machine. And it's a production system. It's no problem if the thread gets ter

Re: Tape Backup advice needed - dump, tar etc.

2005-09-20 Thread Dave Carrigan
On Fri, Sep 16, 2005 at 11:13:31AM -0500, J French wrote: > We are setting up Debian Linux on a new server for a PostGreSQL database. In > the past, on FreeBSD, I used the dump utility with the live filesystem > (snapshot) switch to backup the running database. Does dump on linu

Re: Tape Backup advice needed - dump, tar etc.

2005-09-20 Thread wim
general)? I need a robust backup because this will be a > > production server. Advice is appreciated. > > You cannot use dump on a read/write mounted fs - the kernel does not keep > writes from the fs coherent with the block device you read from (eg, > /dev/hda0). You must at leas

Re: Tape Backup advice needed - dump, tar etc.

2005-09-19 Thread Tom Vier
On Fri, Sep 16, 2005 at 11:13:31AM -0500, J French wrote: > Debian (or linux in general)? I need a robust backup because this will be a > production server. Advice is appreciated. You cannot use dump on a read/write mounted fs - the kernel does not keep writes from the fs coherent with the

Re: Tape Backup advice needed - dump, tar etc.

2005-09-18 Thread Ben Pearre
ne of those 200G tape drives running off a NetApp fileserver. Since the main filesystem was on Raid5 I only did a weekly tape dump and stored the tapes in my apartment. It worked fine as long as we only had 200G of data, but manually changing tapes is an enormous hassle! I can't imagine a

Re: Tape Backup advice needed - dump, tar etc.

2005-09-16 Thread Angelo Bertolli
I'll throw in a suggestion for bacula: http://bacula.org/

Re: Tape Backup advice needed - dump, tar etc.

2005-09-16 Thread Glenn English
On Fri, 2005-09-16 at 11:13 -0500, J French wrote: > How are most people backing up to tape with Debian (or linux in > general)? I need a robust backup because this will be a production > server. Advice is appreciated. I'm using Amanda, but Amanda uses dump or gtar, so the quest

Re: Tape Backup advice needed - dump, tar etc.

2005-09-16 Thread Andrew Perrin
On Fri, 16 Sep 2005, J French wrote: > Hello, > We are setting up Debian Linux on a new server for a PostGreSQL database. In > the past, on FreeBSD, I used the dump utility with the live filesystem > (snapshot) switch to backup the running database. Does dump on linux support >

Tape Backup advice needed - dump, tar etc.

2005-09-16 Thread J French
Hello, We are setting up Debian Linux on a new server for a PostGreSQL database.  In the past, on FreeBSD, I used the dump utility with the live filesystem (snapshot) switch to backup the running database.  Does dump on linux support live filesystem backups as well?  How are most people backing up
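One simple tar-to-tape variant from the "dump, tar etc." space, sketched with an illustrative device name; it does not by itself address the live-filesystem concern raised in the replies:

    tar -cvf /dev/st0 --one-file-system /    # /dev/st0 is the first SCSI tape device
    mt -f /dev/st0 rewind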

Re: dump problems

2005-08-10 Thread Charles YUAN
I have the same problem. Do you have the solution for this problem now? Thanks, Charles. Charles Yuan, The CPS Group, t: (02) 9968 5710, f: (02) 9953 8083

Re: Lynx doesn't dump to stdout?

2004-10-16 Thread Joost Witteveen
Roozemond, D.A. wrote: Hi Matthijs, Example: lynx -dump http://www.ticketmaster.nl/html/searchResult.htmI?keyword=carlton&l=NL | grep resultaten No need to understand all the above - If you change the '&' in the webpage address to '\&', it's working: Alt
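The same fix shown with quoting instead of escaping; otherwise the shell splits the command at the '&' and runs lynx in the background without the rest of the query string:

    lynx -dump 'http://www.ticketmaster.nl/html/searchResult.htmI?keyword=carlton&l=NL' | grep resultaten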
