Ritesh Raj Sarraf <[EMAIL PROTECTED]> writes:

> Yes, it does happen that way, but even then I never had to hard boot my
> client machine (in your case Host B) because I had soft mounted /home
> on my client machine. :-)
Soft mounting writable file systems over NFS is a bad idea, because few programs handle failed writes gracefully (especially when the same write has previously succeeded). Your system won't hang, but you will likely end up with corrupt data instead. Hard mounting with the intr option lets you kill (or interrupt) hung processes, which gives you some control over the problem (see the sample mount entries at the end of this message).

The infinite timeout in NFS was a conscious design decision: it allows file servers to crash and be rebooted without any long-term effect on the client. In other words, once the server comes back, you just continue with your work as if nothing had happened. There are other ways to implement this, but they require more complex clients and servers. Keep in mind that NFS was invented when UNIX servers with 4 MB of RAM were commonplace.

Like most software, NFS works well when used properly. That includes sharing user credentials with NIS (or something else, as mentioned in a previous post) and carefully crafted automount maps. I have worked with some excellent NFS setups and with some poor ones. Well-designed setups are a pleasure to work with.

-- 
tim writer <[EMAIL PROTECTED]>          starnix inc.
tollfree: 1-87-pro-linux                thornhill, ontario, canada
http://www.starnix.com                  professional linux services & products
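[As a rough illustration of the hard/intr advice above: the server name "fileserver", the export path /export/home, and the map file names are placeholders, not details from the original setup. A hard, interruptible mount in /etc/fstab might look something like this, with the discouraged soft variant commented out for contrast:]

    # /etc/fstab
    fileserver:/export/home   /home   nfs   rw,hard,intr   0 0
    # soft mount (avoid for writable data; failed writes can corrupt files):
    # fileserver:/export/home  /home   nfs   rw,soft,timeo=30,retrans=3   0 0

[And a minimal indirect automount map of the sort mentioned above, again with made-up names:]

    # /etc/auto.master
    /home   /etc/auto.home

    # /etc/auto.home (mounts each user's home directory on demand)
    *   -rw,hard,intr   fileserver:/export/home/&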