On Thu, 2003-07-31 at 19:48, Aly Dharshi wrote:
> Hi Folks,
>
> I hope that you are well, I need to find out if I can use the dump
> utility with ReiserFS or SGI's XFS. Man pages say for use with ext2 and
> by extension ext3 I guess.
>
> Secondly I was think
On Thu, 2003-07-31 at 18:48, Aly Dharshi wrote:
> Hi Folks,
>
> I hope that you are well, I need to find out if I can use the dump
> utility with ReiserFS or SGI's XFS. Man pages say for use with ext2 and
> by extension ext3 I guess.
XFS comes with its own version
Hi Folks,
I hope that you are well, I need to find out if I can use the dump
utility with ReiserFS or SGI's XFS. Man pages say for use with ext2 and
by extension ext3 I guess.
Secondly I was thinking of installing XFS, the only question I have is
if I upgrade the kernel
Hi,
Can someone help clarify the core dump file mechanism on Linux 2.4.x,
please?
a) By default, where is the core dump file written? Is it written to the
current working directory or the location of the executable?
b) Can the location of the core dump file be
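On (a): a minimal sketch, assuming a 2.4-style default setup. The core file is written to the crashing process's current working directory, not the executable's location; it is named `core` (or `core.<pid>` when `/proc/sys/kernel/core_uses_pid` is 1), and newer kernels can redirect it via `/proc/sys/kernel/core_pattern`:

```shell
# The core lands in the process's current working directory,
# regardless of where the binary itself lives.
ulimit -c unlimited               # core files are often disabled (limit 0)
dir=$(mktemp -d) && cd "$dir"
sh -c 'kill -SEGV $$'             # child dies from SIGSEGV: status 128+11
echo "exit status: $?"            # prints: exit status: 139
ls core* 2>/dev/null || true      # "core" shows up here when core_pattern
                                  # is a plain file name
```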
On 07:25 20 Feb 2003, Stamper, Steve <[EMAIL PROTECTED]> wrote:
| We have an application that is having a problem. I would like to get a core dump of
| the program (presumably as I kill it) so I can look around inside. Any ideas how to
| force a core dump??
Please post plain text, no
On Thu, Feb 20, 2003 at 07:25:16AM -0500, Stamper, Steve wrote:
> We have an application that is having a problem. I would like to get a
> core dump of the program (presumably as I kill it) so I can look
> around inside. Any ideas how to force a core dump??
>
> Stev
Title: How to core dump
We have an application that is having a problem. I would like to get a core dump of the program (presumably as I kill it) so I can look around inside. Any ideas how to force a core dump??
Steve
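A sketch of one way to force it, assuming cores are enabled for the shell that started the program: send a signal whose default action is to terminate with a core dump (SIGABRT, SIGQUIT, SIGSEGV), then inspect the core with gdb.

```shell
ulimit -c unlimited            # raise the core size limit first
sh -c 'kill -ABRT $$'          # stand-in for: kill -ABRT <pid-of-your-app>
echo "exit status: $?"         # prints: exit status: 134  (128 + SIGABRT=6)
# Then look around inside:  gdb /path/to/your/app core
```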
> Is there a way I could dump all the information that is coming to a serial
> port into a file, so that afterwards I could rearrange it? And where should I
> modify the settings for the port (baud rate, parity, etc.)?
Minicom may be the easy way to do it.
> Also is there any one who bui
Hi!
Is there a way I could dump all the information that is coming to a serial
port into a file, so that afterwards I could rearrange it? And where should I
modify the settings for the port (baud rate, parity, etc.)?
Also is there any one who built a car mp3-player with a LCD display and
keypad
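A minimal capture sketch, assuming a standard serial device node such as `/dev/ttyS0` (the device path and line settings here are examples, not from the original post): configure the line with `stty`, then read everything into a file.

```shell
# Configure the port and capture its stream into a file until interrupted.
capture_port() {
    port=$1; out=$2
    # Apply line settings only to a real character device, so the function
    # can also be exercised against a plain file for testing.
    if [ -c "$port" ]; then
        stty -F "$port" 9600 cs8 -parenb -cstopb raw
    fi
    cat "$port" > "$out"
}
# Usage: capture_port /dev/ttyS0 /tmp/serial.log
```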
When I run printconf in a terminal (logged in as su), I get the following error messages:
(printconf:1786): GLib-GObject-WARNING **: invalid cast from (NULL) pointer to `GObject'
(printconf:1786): GLib-GObject-CRITICAL **: file gobject.c: line 972 (g_object_get): ass
Specifically, I was referring to XFS with ACLs, which I use. However,
any FS with extended attributes will not be backed up correctly by
anything other than the fs-specific dump.
Another FS with extended attributes is ext3, as patched by the group at
http://acl.bestbits.at/ (down at the m
>> > Yes, Amanda will make very reliable backups on your network, and yes
> >> > dump is broken.
> >> > However Amanda being the very powerful system it is, look to tar, cpio
> >> > as alternatives.
> >>
> >> Bitch
>>>>> "ms" == Matthew Saltzman <[EMAIL PROTECTED]> writes:
ms> On 2 Jul 2002, Gordon Messmer wrote:
>> On Tue, 2002-07-02 at 12:45, Ray Curtis wrote:
>> >
>> > Yes, Amanda will make very reliable backups on your network
On 2 Jul 2002, Gordon Messmer wrote:
> On Tue, 2002-07-02 at 12:45, Ray Curtis wrote:
> >
> > Yes, Amanda will make very reliable backups on your network, and yes
> > dump is broken.
> > However Amanda being the very powerful system it is, look to tar, cpio
> > a
On Tue, 2002-07-02 at 12:45, Ray Curtis wrote:
>
> Yes, Amanda will make very reliable backups on your network, and yes
> dump is broken.
> However Amanda being the very powerful system it is, look to tar, cpio
> as alternatives.
Bitch is that filesystems with extended attrib
>>>>> "ms" == Matthew Saltzman <[EMAIL PROTECTED]> writes:
ms> I am looking at instituting Amanda-based backups on a small network.
ms> Amanda uses dump to actually back up the files. I've seen various
ms> references to problems with t
I am looking at instituting Amanda-based backups on a small network.
Amanda uses dump to actually back up the files. I've seen various
references to problems with the reliability of dump with kernel 2.4 and/or
ext3. For example, I've heard tell that "Linus says that dump in kern
> Every so often, I get some bad mojo info appear my server console screen.
How have you done this ?
I'm interested in having something like FreeBSD on my Red Hat server.
I mean, FreeBSD (maybe the other BSDs as well) tells you when someone port scans
your computer, or when someone logs on to the comp
Hello,
I did a fresh install of RH 7.2 recently. I decided to make the OS
partitions ext3 (Note mount command below). I had some disks / partitions
from the previous install that I left ext2. I want to migrate some data from
the ext2 partitions to the ext3 partitions using dump / restore
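One common pattern for this, hedged as a sketch (the device and mount point below are hypothetical): pipe dump straight into restore, e.g. `dump -0f - /dev/hda7 | (cd /mnt/ext3 && restore -rf -)`. The same pipe idea with tar, shown here because it is easy to verify:

```shell
# Copy a tree between filesystems through a pipe, preserving layout.
src=$(mktemp -d); dst=$(mktemp -d)
echo "important data" > "$src/file.txt"
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
ls "$dst"                     # file.txt now lives on the target side
```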
On Thu 29 November 2001 19:45, you (Trond Eivind Glomsrød) wrote:
> > postfix instead of sendmail
> > proftpd instead of wu-ftpd
>
> proftpd isn't any better than wu-ftpd securitywise - vsftpd is, but
Why? What are the problems with proftpd?
> doesn't have all the features yet (virtual hosting m
Hello,
I have rsh working ok for sol8_host to lin6.2_host for dumping sol8_host "/"
to a tape on linux6.2_host:
ufsdump 0f linux6.2_host:/dev/nst0 /
but after starting the dump I get the error:
rmt:status: expected response size 24, got 28
Remote rmt daemon is not compatible.
Lost
On 29 Nov 2001, Trond Eivind Glomsrød wrote:
> Kevin MacNeil <[EMAIL PROTECTED]> writes:
>
> > I just came across the latest remote root exploit for wu-ftp, which I
> > dutifully installed on the small server I maintain. It's too bad
> > redhat released the patch early, but accidents happen an
Jason Costomiris <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 29, 2001 at 01:10:10PM -0500, Brian Ashe wrote:
> : Yes, but if you've read it [the postfix license], you
> : would see that it is much more Debian friendly then RH,
> : etc. friendly. The OSI rarely concerns itself with what
> : legal l
On Thu, Nov 29, 2001 at 01:10:10PM -0500, Brian Ashe wrote:
: I am quite aware of that. But, it proves that it is not the ultimate in
: programming as so many claim. I think it is excellent software, but if there
: are flaws in one place, should I assume that there can be no others?
And sendmail
My apologies for this message showing up twice. I originally posted
this last night and a weird bounce message showed up in my inbox
complaining about a mailbox being full, so I posted it again this
morning before seeing that the original had made it after all.
So please ignore this thread.
_
On Thu, Nov 29, 2001 at 03:48:32AM -0500, Brian Ashe wrote:
> KM> postfix instead of sendmail
>
> Postfix also is not GPL. It is under the IBM Public License. If you
> read it, you could see that there are certain provisions for
> commercial distribution. While they wouldn't stop you from
> dis
Kevin MacNeil <[EMAIL PROTECTED]> writes:
> I just came across the latest remote root exploit for wu-ftp, which I
> dutifully installed on the small server I maintain. It's too bad
> redhat released the patch early, but accidents happen and there's
> nothing to be done about it now.
>
> That as
Hi Jason,
On Thursday, November 29, 2001, 9:52:59 AM, you babbled something about:
JC> On Thu, Nov 29, 2001 at 03:48:32AM -0500, Brian Ashe wrote:
: KM>> postfix instead of sendmail
JC> :
JC> : Sendmail is the most common mail server available. There is no lack of
JC> : documentation. It has al
I just came across the latest remote root exploit for wu-ftp, which I
dutifully installed on the small server I maintain. It's too bad
redhat released the patch early, but accidents happen and there's
nothing to be done about it now.
That aside, I am wondering why the major distributions stick w
On Thu, Nov 29, 2001 at 03:48:32AM -0500, Brian Ashe wrote:
: KM> postfix instead of sendmail
:
: Sendmail is the most common mail server available. There is no lack of
: documentation. It has also been doing "better" than in the past. Postfix
: also just had a significant DoS against it as well
Dare's no way I'm going to wait for other po dunk distros ta fix their
ag. -eric wood
> It's too bad redhat released the patch early, as it is going to be a pita
> for the other distributions.
___
Redhat-list mailing list
[EMAIL PROTECTED]
https://
Hi Kevin,
On Thursday, November 29, 2001, 1:10:12 AM, you babbled something about:
KM> That aside, I am wondering why the major distributions stick with
KM> software like wu-ftpd, which have such poor security records, when
KM> better alternatives exist, e.g.:
Licenses, commonality, familiarity
I just came across the latest remote root exploit for wu-ftp, which I
dutifully installed on the small server I maintain. It's too bad
redhat released the patch early, as it is going to be a pita for the
other distributions. But accidents happen, and there's nothing to be
done about it now.
Tha
On Tue, 1 May 2001, Ajay Tikoo wrote:
> After running rpm --rebuild, I just tried to upgrade the kernel again and it
> worked!
> Thank you once again, Mikkel.
>
> Ajay
>
Great! Thanks for letting me know.
> Subject: RE: Core dump on kernel upgrade.
>
>
> Thank you Mikkel.
> I tried to run rpm --rebuilddb. It takes some time to execute and then
> returns to the shell prompt without any message displayed.
> Yes I did upgrade the rpm version before this problem start
On Tue, 1 May 2001, Ajay Tikoo wrote:
> Thank you Mikkel.
> I tried to run rpm --rebuilddb. It takes some time to execute and then
> returns to the shell prompt without any message displayed.
> Yes I did upgrade the rpm version before this problem started.
> What should I do now.
>
> Ajay
>
Try i
> [mailto:[EMAIL PROTECTED]] On Behalf Of Mikkel L. Ellertson
> Sent: Monday, April 30, 2001 6:28 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Core dump on kernel upgrade.
>
>
> On Mon, 30 Apr 2001, Ajay Tikoo wrote:
>
> > Hi,
> > I am somewhat new to linux world.
At which point did this happen? During compile? During running lilo?
During reboot? More info please :)
On Mon, 30 Apr 2001, Ajay Tikoo wrote:
> Hi,
> I am somewhat new to linux world. I was trying to upgrade the kernel from
> 2.2.17 to 2.2.19. While doing so, I get the error core dumped. What c
On Mon, 30 Apr 2001, Ajay Tikoo wrote:
> Hi,
> I am somewhat new to linux world. I was trying to upgrade the kernel from
> 2.2.17 to 2.2.19. While doing so, I get the error core dumped. What could be
> wrong?
> Thank You
>
Possibly a corrupted RPM database. Try running "rpm --rebuilddb" and
see
Hi,
I am somewhat new to the Linux world. I was trying to upgrade the kernel from
2.2.17 to 2.2.19. While doing so, I get the error "core dumped". What could be
wrong?
Thank You
___
Ajay Tikoo
On Wed, 18 Apr 2001 at 11:12pm (-0400), Wes Owen wrote:
> I'm trying to get lynx to parse a .html document and write it out to
> /tmp/file.txt. So I used the command:
>
> /usr/bin/lynx -dump -nolist
> -term=vt100 /home/www/htdocs/index.html
>
> Which works great when I
I'm trying to get lynx to parse a .html document and write it out to
/tmp/file.txt. So I used the command:
/usr/bin/lynx -dump -nolist
-term=vt100 /home/www/htdocs/index.html
Which works great when I run it while logged into the system, but when I
try to run it from cron,
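The usual culprit is cron's minimal environment (short PATH, no TERM). A crontab fragment along these lines, reusing the paths from the post, sets what lynx needs explicitly and captures errors too:

```
# m h dom mon dow  command  -- run nightly at 02:00
0 2 * * * TERM=vt100 /usr/bin/lynx -dump -nolist -term=vt100 /home/www/htdocs/index.html > /tmp/file.txt 2>&1
```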
When I back up my system with dump and a second volume
is requested, it doesn't seem to register that I put the
second tape in the drive. I put one in and
type yes, as requested by dump.
The message about the second tape redisplays when I put
the second tape in and typ
You'll benefit from software compression if the drive doesn't have
hardware compression. You can go about this many ways, but I can't advise
how to do it with dump. I just make manageable sized tarred/gzipped files
and tar them to tape myself (in scripts of course).
I have my hands on a scsi tape
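The "manageable sized tarred/gzipped files" approach can be sketched by splitting the compressed stream; the sizes and file names below are made-up examples:

```shell
# Produce fixed-size pieces of a compressed tar stream, then rejoin to read.
work=$(mktemp -d)
dd if=/dev/zero of="$work/big" bs=1k count=100 2>/dev/null
tar czf - -C "$work" big | split -b 20k - "$work/backup.tgz."
ls "$work"/backup.tgz.*                # one or more pieces: backup.tgz.aa, ...
cat "$work"/backup.tgz.* | tar tzf -   # lists the archived file: big
```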
On Wed, 24 Jan 2001, Vidiot wrote:
> >does that mean i can't compress files to tape
> >using dump.
>
> No, you let the drive compress the files.
>
> >the tape drive is a dds3 12gig native 24 gig compress
> >compliant drive.
>
> That
>does that mean i can't compress files to tape
>using dump.
No, you let the drive compress the files.
>the tape drive is a dds3 12gig native 24 gig compress
>compliant drive.
That is up to 24 GB compressed. Not all files compress equally.
>how do i use the built in compre
check out the man page for 'mt'
charles
- Original Message -
From: "Steve Lee" <[EMAIL PROTECTED]>
> does that mean i can't compress files to tape
> using dump.
>
> the tape drive is a dds3 12gig native 24 gig compress
> compliant dr
Does that mean I can't compress files to tape
using dump?
The tape drive is a DDS3, 12 GB native / 24 GB compressed
compliant drive.
How do I use the built-in compressor, if it exists on the drive?
Or is what I'm asking even possible?
I can dump and compress to disk but not to tape?
On W
It was worth a try.
On Wed, 24 Jan 2001, Mikkel L. Ellertson wrote:
> On Wed, 24 Jan 2001, Mike Burger wrote:
>
> > "dump -d 8000 -s 15 -f /dev/nst0 / | gzip"?
> >
> That will try and gzip the console output from dump, not what is going
> to the tape drive
>how do you gzip a file before dump puts its
>file to tape?
>
>dump -d 8000 -s 15 -f /dev/nst0 /
>this works but does not gzip the / directory before it gets
>to tape.
>
>Can someone show me some pipes etc. to make this possible.
Don't even attempt to do that.
On Wed, 24 Jan 2001, Mike Burger wrote:
> "dump -d 8000 -s 15 -f /dev/nst0 / | gzip"?
>
That will try and gzip the console output from dump, not what is going
to the tape drive. gzip will also produce an error when used this way.
I do not think you can compress the out
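The distinction is between dump's diagnostic output (what the pipe above catches) and its backup stream. Modern dump builds accept `-f -` to put the backup stream itself on stdout, where it can be compressed; a hedged sketch, with the tape device hypothetical, and the same pattern demonstrated with tar so it can be verified:

```shell
# dump form (untested sketch; requires a dump that supports -f -):
#   dump -0u -f - / | gzip -c | dd of=/dev/nst0 bs=32k
# Same stream-through-gzip pattern with tar:
work=$(mktemp -d)
echo "payload" > "$work/f"
tar cf - -C "$work" f | gzip -c > "$work/backup.tar.gz"
gzip -dc "$work/backup.tar.gz" | tar tf -      # lists: f
```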
"dump -d 8000 -s 15 -f /dev/nst0 / | gzip"?
On Wed, 24 Jan 2001, Steve Lee wrote:
> how do you gzip a file before dump puts its
> file to tape?
>
> dump -d 8000 -s 15 -f /dev/nst0 /
> this works but does not gzip the / directory before it gets
> to tape.
how do you gzip a file before dump puts its
file to tape?
dump -d 8000 -s 15 -f /dev/nst0 /
this works but does not gzip the / directory before it gets
to tape.
Can someone show me some pipes etc. to make this possible.
Thanks
___
Redhat
Every once in a while I get a core dump file:
core: ELF 32-bit LSB core file of 'netscape-commun' (signal 4), Intel
80386, version 1
Should I be concerned? Is this indicative of a hardware problem? man 7
signal says "Illegal Instruction" for signal 4.
TIA
Bill
e-ma
I am backing up my /home to a DAT tape with:
dump 0uadf 327670 /dev/st0 /dev/hda7
When it asks me to say a second tape is ready and I enter
"yes" it says it cannot open /dev/st0:
Change Volumes: Mount volume #2
DUMP: Is the new volume mounted and ready to go?: ("yes"
Hey, folks:
We have brought some simple backup scripts over from our Solaris boxes
to the Red Hat world and changed a few parameters in order to get some
weekly full/daily incremental backups going on important data.
I dump as follows:
/sbin/dump 0bdfsu 32 327000 /dev/nrst0 4 /dev
Bret Hughes wrote:
> Is fold|lpr the best way to get linewrapping on printing
> textfiles with long lines? How can I get this to happen
> automatically? Also is there a way to indent the wrapped
> line so the eye (mine) can easily see when the next line
> break occurs?
Isn't that what the 'pr'
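Beyond `pr`, one way to get both wrapping and an indent on continuation lines is a small awk filter; a sketch (the four-space indent and the helper name are made up):

```shell
# Wrap each input line at width w; indent every continuation by 4 spaces
# so the eye can spot where a long line was broken.
wrap_indent() {
    awk -v w="$1" '{
        line = $0
        prefix = ""
        while (length(line) > w) {
            print prefix substr(line, 1, w)
            line = substr(line, w + 1)
            prefix = "    "
        }
        print prefix line
    }'
}
# Usage: wrap_indent 70 < textfile | lpr
```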
Is fold|lpr the best way to get linewrapping on printing
textfiles with long lines? How can I get this to happen
automatically? Also is there a way to indent the wrapped
line so the eye (mine) can easily see when the next line
break occurs?
Bret
--
To unsubscribe: mail [EMAIL PROTECTED] with
I have just installed RH 6.0 running kernel 2.2.5-15, and every time I
restart my machine I get a core dump (core file in the root dir).
Any ideas what I may be doing wrong or can check?
thank you in advance.
Let's say that you make a dump. How would you make a stand-alone floppy boot
disk to do a "bare-metal-recovery" restore?
I have checked out
http://www.backupcentral.com/linux-bare-metal-recovery.html and it has great
information regarding using tar and dd, not using dump/restore..is ther
>So this is how I make two different dumps, then use the interactive mode of
>restore like this:
>
># dump -0u -b 126 -d 141000 -s 11500 -L hda1 -f /dev/nst0 /dev/hda1
># dump -0u -b 126 -d 141000 -s 11500 -L hda5 -f /dev/st0 /dev/hda5
No need for the mt rewind command, as the la
I find the following backup-related aliases to be helpful (adapt to your
own partition names):
# scsi-tape bkp for Linux partitions:
alias h2bkp 'dump 0ufs /dev/nst0 120 /dev/hda2'
alias h6bkp 'dump 0ufs /dev/nst0 120 /dev/hda6'
alias rewind 'mt -
On Wed, 5 Apr 2000, Steven Hildreth wrote:
=>So I write the first dump like this:
=>
=># dump -0u -b 126 -d 141000 -s 11500 -L hda1 -f /dev/nst0 /dev/hda1
=>
=>then:
=>
=># dump -0u -b 126 -d
Ok, so I think I got it (thick head I know; Microsoft will do that to ya,
read = MCSE)...
So this is how I make two different dumps, then use the interactive mode of
restore, like this:
# dump -0u -b 126 -d 141000 -s 11500 -L hda1 -f /dev/nst0 /dev/hda1
# dump -0u -b 126 -d 141000 -s 11500 -L
So I write the first dump like this:
# dump -0u -b 126 -d 141000 -s 11500 -L hda1 -f /dev/nst0 /dev/hda1
then:
# dump -0u -b 126 -d 141000 -s 11500 -L hda5 -f /dev/nst0 /dev/hda5
then rewind it? Is this needed? I am just going to back up the partitions and
then change the tape for the next day
On Wed, 5 Apr 2000, Steven Hildreth wrote:
=>Could someone tell me how to create multiple dumps on a single tape.
=>
=>I would like to backup all my hard drive(s) to a single tape.
=>
=># dump -0u -b 126 -d 14
# Could someone tell me how to create multiple dumps on a single tape.
#
# I would like to backup all my hard drive(s) to a single tape.
#
# # dump -0u -b 126 -d 141000 -s 11500 -f /dev/st0 /dev/hda1
#
# Then continue with /dev/hda5, /dev/hda7, /dev/hdb1, etc.
#
# Thanks. Havents fou
Could someone tell me how to create multiple dumps on a single tape.
I would like to backup all my hard drive(s) to a single tape.
# dump -0u -b 126 -d 141000 -s 11500 -f /dev/st0 /dev/hda1
Then continue with /dev/hda5, /dev/hda7, /dev/hdb1, etc.
Thanks. Haven't found squat about it, bu
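For reference, the usual recipe (a sketch; devices hypothetical) is to write each dump to the non-rewinding device and position between backups with mt. The several-backups-on-one-medium idea can be verified with tar's append mode, using a plain file standing in for the tape:

```shell
# dump recipe (untested sketch):
#   dump -0u -b 126 -d 141000 -s 11500 -f /dev/nst0 /dev/hda1   # no rewind
#   dump -0u -b 126 -d 141000 -s 11500 -f /dev/nst0 /dev/hda5   # lands after #1
#   mt -f /dev/nst0 rewind                                      # when done
#   mt -f /dev/nst0 fsf 1     # before restoring, skip to the 2nd dump
#
# The same idea with tar appending to one "tape" file:
tape=$(mktemp); d=$(mktemp -d)
echo one > "$d/hda1.txt"; echo two > "$d/hda5.txt"
tar cf "$tape" -C "$d" hda1.txt        # first backup
tar rf "$tape" -C "$d" hda5.txt        # second backup, appended
tar tf "$tape"                         # lists both members
```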
> > are, I believe strongly it is a failed attack, but on what sort of
> > exploit, and what are they trying to accomplish in order to get root
> > access? Or DoS?
> >
> > Mar 2 19:54:01 mchn3 portmap[27354]: connect from 210.65.216.151 to
> > dump(): request from un
> Mar 2 19:54:01 mchn3 portmap[27354]: connect from 210.65.216.151 to
> dump(): request from unauthorized host
>From a quick Deja search I found that the dump() function of portmap is to
provide a list of all available RPC services. To see what that would
return, run `/usr/sbin/rpcinfo -p
51 to
dump(): request from unauthorized host
Mar 2 19:54:05 mchn3 portmap[27355]: connect from 210.65.216.151 to
dump(): request from unauthorized host
Mar 2 19:54:05 mchn3 portmap[27356]: connect from 210.65.216.151 to
dump(): request from unauthorized host
Mar 2 19:54:05 mchn3 portmap[27357]: co
Gustav
Gustav Schaffter wrote:
>
> Hi,
>
> I was just trying out dump/restore. Seems to be a nice package. Most
> importantly, I should be able to do a restore even if my system becomes
> rather 'crippled' since restore doesn't require X.
>
> My problem is when
FYI, you can do this with arkeia's arkc command line tool too.
On Sat, 22 Jan 2000, Bret Hughes wrote:
>
> BTW I went with amanda as a backup solution partly for the very reason you
> describe of not needing an X environment for restorations. One of our
> boxes does not even have X on it (firew
On Sat, Jan 22, 2000 at 12:37:37PM -0600, Steve Borho wrote:
> On Sat, Jan 22, 2000 at 12:32:01PM -0600, Bret Hughes wrote:
> > > Hi,
> > >
> > > I was just trying out dump/restore. Seems to be a nice package. Most
> > > importantly, I should be able to
On Sat, Jan 22, 2000 at 12:32:01PM -0600, Bret Hughes wrote:
> > Hi,
> >
> > I was just trying out dump/restore. Seems to be a nice package. Most
> > importantly, I should be able to do a restore even if my system becomes
> > rather 'crippled' since resto
I don't think dump follows symbolic links by default to both keep from
backing up a particular file multiple times as well as to eliminate the
possibility of a recursive loop during the backup.
I don't have easy access to the man pages right now (on my windows laptop
at home) but it se
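tar shows the same default: the link itself is archived, not the file it points to. A quick check, with made-up paths:

```shell
# Archive a symlink and confirm it is restored as a link, not a copy.
d=$(mktemp -d)
echo real > "$d/target"
ln -s target "$d/link"
mkdir "$d/out"
(cd "$d" && tar cf - link target) | (cd "$d/out" && tar xf -)
ls -l "$d/out/link"        # still a symbolic link pointing at "target"
```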
Hi,
I was just trying out dump/restore. Seems to be a nice package. Most
importantly, I should be able to do a restore even if my system becomes
rather 'crippled' since restore doesn't require X.
My problem is when doing dump:
I had the idea to create a directory containing only
I wrote earlier asking for help using dump with a DAT drive, and I
received replies from Steven W Orr <[EMAIL PROTECTED]> and
[EMAIL PROTECTED] I just wanted to thank them both for
their helpful responses.
To sum up, setting the -s option to a large enough number will let
dump thin
What he's saying is that the 4 & 8mm drives work like a VHS VCR. The read &
write heads *fly* across the tape diagonally instead of along its length (like a
cassette, remember those?), which means that he calculated the *length* of the
tape as far as dump is concerned as being the ph
The practical solution is to run your dump with a tape length set to
something like 67000 meters. Maybe less for a 90 m tape, but that's the
sort of number I use for a 120. Sorry I don't have the exact number on me,
as my 'puter is at home.
--
--Time flies like the wind. F
Backup gurus-
I was trying to use dump for a rudimentary backup with a 4mm DAT
drive. The drive capacity is 4-8G and the tape capacity is 2-4G (90
meters, I believe). I have about 1-2G to back up. But if I run dump
with the default settings, only a small amount of data is written
before dump
999 11:20:14 +1100
> From: tom minchin <[EMAIL PROTECTED]>
> Reply-To: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: Dump/restore still broken
> Resent-Date: 14 Nov 1999 00:20:25 -
> Resent-From: [EMAIL PROTECTED]
> Resent-cc: recipient list not shown: ;
>
On Sat, Nov 13, 1999 at 05:27:25PM -0500, Randy Carpenter wrote:
>
> Does anyone at Red Hat know when they are going to fix the
> bug on the dump/restore package? (bug #5923 specifically). I am quite
> desperately awaiting a version where the dump and restore commands *
Does anyone at Red Hat know when they are going to fix the
bug on the dump/restore package? (bug #5923 specifically). I am quite
desperately awaiting a version where the dump and restore commands *both*
work.
-Randy
I'm very curious about this one. I use dump myself, and whenever I've had
to go to a tape I never had to pass over the label. When I give the
extract command, however, it does ask me for a label number to start at. I
have no clue what that's all about; I just say 0.
I'd l
Subtle hint to get you past the tape label: skip the first file!
To restore, try:
mt -f /dev/whatever fsf 1
restore -f /dev/whatever
Does anyone have a decent simple logging script for backups that sends
mail when it is finished, but just uses dump - not rdump. All the nice stuff seems to
Hi all,
Need some help. Here is the problem:
1. Made backup of a filesystem using
dump 0usf /dev/nst0 /
everything went well, and the dump was completed without any errors
2. Want to see what is in tape:
restore -i -f /dev/nst0
got the following message:
Tape is not a dump tape
I'm using RH5.0 on a Pentium, 48MB RAM using fvwm2 configured
to look like AfterStep. I replaced xclock with asclock from
the contrib directory (ftp site). Everything worked ok, now
asclock core dumps (segmentation fault) as user, but works fine
as root.
Can anyone help me isolate this? I've b
> I just thought I'd editorialize here for a second on the benefits of dump
> because I think we should all feel lucky to have it.
>
Why don't you have a look for a good backup software called Arkeia.
http://www.knox-software.com/
They have time limited eval version
Dear all,
I'm having serious trouble with dump; it aborts with the message
"master/slave protocol botched". An example:
[root@eddie /]# dump 0ubBdf 128 9757000 61000 /dev/ntape /dev/sdb2
DUMP: Date of this level 0 dump: Wed May 20 11:06:55 1998
DUMP: Date of last level 0
I just thought I'd editorialize here for a second on the benefits of dump
because I think we should all feel lucky to have it.
1. It is true that you have to tell dump a really big lie for the length
of the tape. A 120m 4mm dat has an effective length of 44000 feet because
of the funny way
-Original Message-
From: Vidiot <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Sunday, May 17, 19
>That's really what they're for; to end a dump before tape runs out, so you
>don't walk into the machine room and find a big reel ripped to shreds from
>going around and around after it reached the end.
>
>Doesn't matter with modern tapes. dump is really old.
-Original Message-
From: Vidiot <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Sunday, May 17, 1998 4:13 AM
Subject: dump and T3000 drive
>But, when using dump, there are two values that I don't know; density and
>size. The blocks I obtaine
> But, when using dump, there are two values that I don't know; density and
> size. The blocks I obtained from the HOWTO: 58. I have them set real high
> at this point. If you use the T3000 and dump, please let me know what
> values you use.
I've been using dump and
> But, when using dump, there are two values that I don't know; density and
> size. The blocks I obtained from the HOWTO: 58. I have them set real high
> at this point. If you use the T3000 and dump, please let me know what
> values you use.
Hmm.. haven't used dump on
I bought a HP T3000 drive today for the Windoze 95 machine and decided to
look at the ftape HOWTO to see if it was supported under Linux. By golly,
it is. Sure enough it works when I write to it.
But, when using dump, there are two values that I don't know; density and
size. The blo
I've been getting a segv with dump-0.3-8. I'm currently running
Red Hat 4.2. I'll be moving to a newer release soon, but wanted to
back up everything first! I'll use amanda with tar to try to work
around this, but it sure seemed strange.
stefen
--
PLEASE read the Red H