Roger Leigh wrote:
> Michael Stone <[EMAIL PROTECTED]> writes:
> > Roger Leigh wrote:
> >>1) The file can be unlinked by another process in between the stat and
> >>   the unlink, causing the unlink to fail.  ln should be aware this
> >>   can happen, and not treat this as a fatal error.
> >
> > Why?
> 
> Because otherwise the command fails when it should not:

I don't think there ever has been a promise that these commands have
atomic behavior.

  ln -sf SOURCE DESTINATION

is basically the same as the following, only more conveniently
packaged as a single shell command.

  rm -f DESTINATION
  ln -s SOURCE DESTINATION
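For illustration, here is a minimal demo of that equivalence; all
names below are made up and created in a scratch directory.

```shell
# Demo: `ln -sf` replaces an existing DESTINATION, just like rm -f + ln -s.
# Every path here is hypothetical, created under a temporary directory.
dir=$(mktemp -d)
touch "$dir/SOURCE"
ln -s "$dir/SOURCE" "$dir/DEST"       # DEST now exists
ln -sf "$dir/SOURCE2" "$dir/DEST"     # -f unlinks DEST first, then links
readlink "$dir/DEST"                  # now points at SOURCE2
```

Note that `ln -sf` still performs the unlink and the symlink as two
separate system calls, so there is a window in which DESTINATION does
not exist at all.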

Here is a reference to a similar discussion from upstream:

  http://lists.gnu.org/archive/html/bug-coreutils/2005-05/msg00033.html

If you have suggestions for how to improve the behavior, I am sure
they would be appreciated.  But I don't see how this can be made
atomic, and it does not seem like something that can be fixed simply.

> This *is* a real problem, which we are seeing with the sbuild part of
> buildd.  It has (perl):
> 
>   system "/bin/ln -sf $main::pkg_logfile $conf::build_dir/current-$main::distribution";
>   system "/bin/ln -sf $main::pkg_logfile $conf::build_dir/current";

Ew, that is ugly.  And why the hard-coded paths?

> When buildd schedules builds concurrently, sbuild is showing the
> problem intermittently.

If you need an atomic operation then you will need to use a call that
is atomic, such as rename(2), which is accessible through mv.  Here is
an example in shell, off the top of my head; you would want a better
temporary name than the `.$$` suffix used here.

  ln -sf "$pkg_logfile" "$build_dir/current-$distribution.$$"
  mv "$build_dir/current-$distribution.$$" "$build_dir/current-$distribution"
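As one way to get a better temporary name, mktemp(1) could be used;
this is just a sketch, with $build_dir and $pkg_logfile replaced by
stand-in values for the example.

```shell
# Sketch: atomically replace a "current" symlink via rename(2).
# $build_dir and $pkg_logfile are stand-ins for the real sbuild values.
build_dir=$(mktemp -d)
pkg_logfile=/var/log/example-build.log
tmp=$(mktemp -u "$build_dir/current.XXXXXX")   # unique name, same directory
ln -sf "$pkg_logfile" "$tmp"
mv "$tmp" "$build_dir/current"                 # rename(2): atomic replace
readlink "$build_dir/current"
```

The temporary link must live on the same filesystem as the final name;
otherwise mv falls back to copy-and-unlink and loses atomicity.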

But of course you are in Perl and can call rename directly from there.

Michael Stone wrote:
> Well, calling ln from perl via system is fugly anyway. I'd suggest using 
> something like:
> 
> unlink("$conf::build_dir/current-$main::distribution");
> if (!symlink($main::pkg_logfile,
>              "$conf::build_dir/current-$main::distribution"))
> {
>   handle_error() if (not $! =~ m/File exists/);
> }

Let me suggest the following instead:

  unlink("$conf::build_dir/current-$main::distribution.$$");
  symlink($main::pkg_logfile,
          "$conf::build_dir/current-$main::distribution.$$")
      or die;
  rename("$conf::build_dir/current-$main::distribution.$$",
         "$conf::build_dir/current-$main::distribution")
      or die;

I tested the principle using this script.

  #!/usr/bin/perl
  unlink($ARGV[1].$$);
  symlink($ARGV[0],$ARGV[1].$$) || die;
  rename($ARGV[1].$$,$ARGV[1]) || die;

> I'm not sure what you expect the end result to be if two processes try 
> and operate at the same time. It may be that you need a locking 
> mechanism for this to make sense.

Agreed.  But avoiding the need for locking, when possible, is better.
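When the ordering between concurrent writers does matter, one
conventional approach is to wrap the update in flock(1).  A sketch,
assuming the util-linux flock is available; the paths are again
stand-ins for the real sbuild values.

```shell
# Sketch: serialize symlink updates with flock(1) (assumption:
# util-linux flock is installed).  Paths are hypothetical.
build_dir=$(mktemp -d)
pkg_logfile=/var/log/example-build.log
(
  flock 9                                        # wait for exclusive lock
  ln -sf "$pkg_logfile" "$build_dir/current.$$"
  mv "$build_dir/current.$$" "$build_dir/current"
) 9> "$build_dir/.current.lock"
readlink "$build_dir/current"
```

The lock only serializes writers that agree on the same lock file; it
does not change the rename itself, which is already atomic.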

Bob

