Chris Friesen wrote:
I'm not sure who I should address this to... I hope this is correct.
If I share memory between two processes, and protect access to the
memory using standard locking (fcntl(), for instance), do I need to
specify that the memory is volatile? Or is the fact that I'm using
fcntl() enough to force the compiler to not optimise away memory
accesses?
As an example, consider the following code, with <lock>/<unlock>
replaced with the appropriate fcntl() code:
int *b;

int test()
{
	b = <address of shared memory>;

	while (1) {
		<lock>
		if (*b) {
			break;
		}
		<unlock>
	}
	<unlock>

	return *b;
}
Without the locks, the compiler is free to only load *b once (and in
fact gcc does so). Is the addition of the locks sufficient to force
*b to be re-read each time, or do I need to declare it as
volatile int *b;
Officially you have to declare volatile int *b. I can't be sure, but
looking at this sample of code gcc will probably re-read the value
anyway: since fcntl() lives in a separate binary, gcc can't tell that
the call won't change *b, so it will write the value out to memory
before the function call and read it back afterwards. It's a little
dodgy, but in the past I've often made sure gcc re-reads memory
locations by passing a pointer to them to a function compiled in a
separate unit. If gcc ever gets some kind of super-funky cross-unit
binary optimisation, this might get optimised away, but I wouldn't
expect such a thing soon :)
Chris
Thanks,
Chris