On Tuesday, 18.01.2022, 09:31 +0100, Richard Biener wrote:
> On Mon, Jan 17, 2022 at 3:11 PM Michael Matz via Gcc <gcc@gcc.gnu.org> wrote:
> > Hello,
> > 
> > On Sat, 15 Jan 2022, Martin Uecker wrote:
> > 
> > > > Because it interferes with existing optimisations. An explicit
> > > > checkpoint has a clear meaning. Using every volatile access that way
> > > > will hurt performance of code that doesn't require that behaviour for
> > > > correctness.
> > > 
> > > This is why I would like to understand better what real use cases of
> > > performance sensitive code actually make use of volatile and are
> > > negatively affected. Then one could discuss the tradeoffs.
> > 
> > But you seem to ignore whatever we say in this thread.  There are now
> > multiple examples that demonstrate problems with your proposal as imagined
> > (for lack of a _concrete_ proposal with wording from you), problems that
> > don't involve volatile at all.  They all stem from the fact that you order
> > UB with respect to all side effects (because you haven't said how you want
> > to avoid such total ordering with all side effects).

Again, this is simply not what I am proposing. I don't
want to order UB with respect to all side effects.

You are right that there is no concrete proposal yet. But
at the moment I simply want to understand the impact of
reordering traps and volatile accesses.
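
To make this concrete, here is a minimal sketch of the kind of code
I have in mind (the names are made up):

volatile int log_reg;      /* assume a memory-mapped log register     */

int scale(int num, int den)
{
    log_reg = num;         /* volatile access: observable side effect */
    return num / den;      /* UB if den == 0 (or INT_MIN / -1);       */
                           /* may trap at run time                    */
}

The question is simply whether a possible trap from the division may
be realized before the volatile store, i.e. whether the two may be
reordered.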

> > As I said upthread: you need to define a concept of time at whose
> > granularity you want to limit the effects of UB, and the borders of each
> > time step can't simply be (all) the existing side effects.  Then you need
> > to have wording of what it means for UB to occur within such time step, in
> > particular if multiple UB happens within one (for best results it should
> > simply be UB, not individual instances of different UBs).
> > 
> > If you look at the C++ proposal (thanks Jonathan) I think you will find
> > that if you replace 'std::observable' with 'sequence point containing a
> > volatile access' that you basically end up with what you wanted.  The
> > crucial point being that the time steps (epochs in that proposal) aren't
> > defined by all side effects but by a specific and explicit thing only (new
> > function in the proposal, volatile accesses in an alternative).
> > 
> > FWIW: I think for a new language feature reusing volatile accesses as the
> > clock ticks are the worse choice: if you intend that feature to be used
> > for writing safer programs (a reasonable thing) I think being explicit and
> > at the same time null-overhead is better (i.e. a new internal
> > function/keyword/builtin, specified to have no effects except moving the
> > clock forward).  volatile accesses obviously already exist and hence are
> > easier to integrate into the standard, but in a given new/safe program,
> > whenever you see a volatile access you would always need to ask 'is this
> > for clock ticks, or is it a "real" volatile access for memmap IO'.
> 
> I guess Martin wants to have accesses to volatiles handled the same as
> function calls where we do not know whether the function call will return
> or terminate the program normally.  As if the volatile access could have
> a similar effect (it might actually reboot the machine or so - but of course
> that and anything else I can imagine would be far from "normal termination
> of the program").  That's technically possible to implement with a yet unknown
> amount of work.

Yes, thanks! Semantically this is equivalent to what I want.
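
As a rough sketch of that equivalence (the helper function is purely
illustrative, not an existing interface):

extern void opaque_call(void);  /* unknown: may exit, loop forever, ...   */
volatile int dev;

int h(int a, int b)
{
    opaque_call();              /* compiler must assume this may not      */
                                /* return, so the trap from a / b below   */
                                /* cannot be realized before it           */
    dev = a;                    /* under the proposal, this volatile      */
                                /* store would be treated the same way    */
    return a / b;               /* potentially trapping division          */
}

If the volatile store is modelled like the opaque call, later UB could
not be realized before the store, which is the behaviour I am after.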

> Btw, I'm not sure we all agree that (*) in the following program doesn't make
> it invoke UB and thus the compiler is not free to re-order the offending
> statement to before the exit (0) call.  Thus UB is only "realized" if a stmt
> containing it is executed in the abstract machine.
> 
> #include <stdlib.h>
> 
> int main()
> {
>    exit(0);
>    1 / 0;  /// (*)
> }

Yes, this is not clear, although there seems to be some
understanding that there is a difference between
compile-time UB and run-time UB, and I think the
standard should make clear which is which.
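
A small sketch of how I read the difference (everything here is just
for illustration):

#include <stdlib.h>

static int dead(void)
{
    return 1 / 0;     /* UB only if actually evaluated        */
}

int main(void)
{
    exit(0);          /* normal termination                   */
    return dead();    /* unreachable in the abstract machine  */
}

Under a run-time-UB reading, the division is never executed by the
abstract machine and must not affect observable behaviour; under a
compile-time-UB reading, its mere presence could already make the
program invalid.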

Martin




