Please improve wording of explanation of a Gfortran extension

2018-03-23 Thread Cyril Magnin
On the page
https://gcc.gnu.org/onlinedocs/gfortran/Unary-operators.html#Unary-operators,
we find the following statement:

6.1.11 Unary operators
>
> As an extension, GNU Fortran allows unary plus and unary minus operators
> to appear as the second operand of binary arithmetic operators without the
> need for parenthesis.
>
>X = Y * -Z
>
>
The meaning is quite clear, but the wording can be improved. An "operator
appears as the second operand"? Perhaps this could say, "...unary plus or
minus operators to be applied to the second operand of..." or "unary plus or
minus operators to be prefixed to the second operand of ...".

Thanks.


Re: Please improve wording of explanation of a Gfortran extension

2018-03-23 Thread Jonathan Wakely
CCing the fortran list.

On 23 March 2018 at 11:55, Cyril Magnin  wrote:
> On the page
> https://gcc.gnu.org/onlinedocs/gfortran/Unary-operators.html#Unary-operators,
> we find the following statement:
>
> 6.1.11 Unary operators
>>
>> As an extension, GNU Fortran allows unary plus and unary minus operators
>> to appear as the second operand of binary arithmetic operators without the
>> need for parenthesis.
>>
>>X = Y * -Z
>>
>>
> The meaning is quite clear, but the wording can be improved. An "operator
> appears as the second operand"? Perhaps this could say, "...unary plus or
> minus operators to be applied to the second operand of..." or "unary plus or
> minus operators to be prefixed to the second operand of ...".

Or "allows the result of unary plus and unary minus operators to appear ..."


Re: Can I use -Ofast without libmvec

2018-03-23 Thread Richard Biener
On Thu, Mar 22, 2018 at 7:47 PM, Jakub Jelinek  wrote:
> On Thu, Mar 22, 2018 at 11:08:58AM -0700, Steve Ellcey wrote:
>> I have a question about the math vector library routines in libmvec.
>> If I compile a program on x86 with -Ofast, something like:
>>
>> #include <math.h>
>>
>> void foo(double * __restrict x, double * __restrict y, double * __restrict z)
>> {
>>   for (int i = 0; i < 1000; i++) x[i] = sin(y[i]);
>> }
>>
>> I get a call to the vector sin routine _ZGVbN2v_sin.  That is fine, but
>> is there some way to compile with -Ofast and not use the libmvec vector
>> routines?  I have tried -fopenmp, -fopenmp-simd, -fno-openmp, and -fno-
>> openmp-simd and I always get a call to _ZGVbN2v_sin.  Is there any way
>> to stop the use of the vectorized calls (without turning off -Ofast)?
>
> Why do you want to avoid that?  You could use -Ofast -U__FAST_MATH__,
> because then the glibc headers won't use the simd attribute, but you won't
> get the finite stuff either etc.
> Or you can use
> double my_sin (double) __asm ("sin");
> and use my_sin instead of sin.
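
(A minimal compilable sketch of that second suggestion; the my_sin
declaration is Jakub's, the surrounding file is illustrative only:)

/* The asm label makes my_sin resolve to the same symbol as sin, but this
   declaration carries no simd attribute, so the vectorizer has no vector
   variant to use and the call stays scalar.  */
double my_sin (double) __asm ("sin");

void foo (double * __restrict x, double * __restrict y)
{
  for (int i = 0; i < 1000; i++)
    x[i] = my_sin (y[i]);   /* no _ZGVbN2v_sin call is emitted */
}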

Or you can

#define __simd__ foobaz

and deal with "unknown attribute foobaz" warnings.
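
(A sketch of how that macro trick would be used, assuming glibc's math.h
spells the attribute __simd__; foo is just an example caller:)

#define __simd__ foobaz   /* turns glibc's simd attribute into an unknown one */
#include <math.h>

void foo (double * __restrict x, double * __restrict y)
{
  /* sin now has no usable simd declaration, so no _ZGVbN2v_sin is called;
     expect "unknown attribute 'foobaz'" warnings instead.  */
  for (int i = 0; i < 1000; i++)
    x[i] = sin (y[i]);
}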

Richard.

> Jakub


Re: GSOC Question about the parallelization project

2018-03-23 Thread Sebastiaan Peters
>On Wed, Mar 21, 2018 at 8:04 PM, Sebastiaan Peters
> wrote:
>>>On Wed, Mar 21, 2018 at 9:50 AM, Sebastiaan Peters
>>> wrote:
>On Tue, Mar 20, 2018 at 3:49 PM, David Malcolm  wrote:
>> On Tue, 2018-03-20 at 14:02 +0100, Richard Biener wrote:
>>> On Mon, Mar 19, 2018 at 9:55 PM, Richard Biener
>>>  wrote:
>>> > On March 19, 2018 8:09:32 PM GMT+01:00, Sebastiaan Peters
>>> > <7...@hotmail.com> wrote:
>>> > > > The goal should be to extend TU wise parallelism via make to
>>> > > > function wise parallelism within GCC.
>>> > >
>>> > > Could you please elaborate more on this?
>>> >
>>> > In the abstract sense you'd view the compilation process as separated
>>> > into N stages, each function being processed by each stage. You'd
>>> > assign a thread to each stage and move the work items (the functions)
>>> > across the set of threads, honoring constraints such as an IPA stage
>>> > needing all functions to have completed the previous stage. That
>>> > allows you to model the constraints due to shared state (like no
>>> > pass operating on two functions at the same time) more easily than
>>> > a model where you assign a thread to each function.
>>> >
>>> > You'll figure that the easiest point in the pipeline to try this
>>> > 'pipelining' is after IPA has completed and until RTL is generated.
>>> >
>>> > Ideally the pipelining would start as soon as the front ends
>>> > have finished parsing a function, and ideally we'd have multiple
>>> > functions in the RTL pipeline.
>>> >
>>> > The main obstacle will be the global state in the compiler, of
>>> > which there is the least during the GIMPLE passes (mostly cfun and
>>> > current_function_decl, plus globals in the individual passes, which
>>> > are most easily dealt with by not allowing a single pass to run in
>>> > multiple threads at the same time). TLS can be used for some of the
>>> > global state, and of course some global data structures need
>>> > locking.
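
(A minimal sketch of that per-stage model in plain C with pthreads; the
queue layout and all names are invented for illustration, not GCC code, and
a real IPA-like stage would additionally wait for all functions to finish
the previous stage:)

#include <pthread.h>
#include <stdio.h>

#define N_FUNCS  4   /* work items: the functions */
#define N_STAGES 3   /* pipeline stages, one thread each */

/* queue[s][i] != 0 means function i is ready for stage s; row N_STAGES
   collects finished work.  One lock/condvar pair guards everything.  */
static int queue[N_STAGES + 1][N_FUNCS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;

static void *stage_worker (void *arg)
{
  int s = *(int *) arg;
  for (int done = 0; done < N_FUNCS; done++)
    {
      int i;
      pthread_mutex_lock (&lock);
      for (;;)
        {
          for (i = 0; i < N_FUNCS; i++)
            if (queue[s][i])
              break;
          if (i < N_FUNCS)
            break;                      /* found a ready function */
          pthread_cond_wait (&ready, &lock);
        }
      queue[s][i] = 0;
      pthread_mutex_unlock (&lock);

      /* One thread per stage means a pass never runs twice concurrently.  */
      printf ("stage %d processing function %d\n", s, i);

      pthread_mutex_lock (&lock);
      queue[s + 1][i] = 1;              /* hand it to the next stage */
      pthread_cond_broadcast (&ready);
      pthread_mutex_unlock (&lock);
    }
  return NULL;
}

int main (void)
{
  pthread_t t[N_STAGES];
  int id[N_STAGES];
  for (int i = 0; i < N_FUNCS; i++)
    queue[0][i] = 1;                    /* all functions start at stage 0 */
  for (int s = 0; s < N_STAGES; s++)
    {
      id[s] = s;
      pthread_create (&t[s], NULL, stage_worker, &id[s]);
    }
  for (int s = 0; s < N_STAGES; s++)
    pthread_join (t[s], NULL);
  return 0;
}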

 This would mean that all the passes have to be individually analyzed for
 which global state they use, and locked/scheduled accordingly?
>>>
>>>Their private global state would be exempt, by ensuring that a pass never
>>>runs twice at the same time.
>>>
>>>The global state that remains should be the same for all passes we are
>>>talking about (during the late GIMPLE optimization phase which I'd tackle).
>>>
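
(For the TLS idea mentioned above, a rough sketch; struct function and the
variable names are stand-ins, not GCC's actual declarations:)

/* Opaque stand-in for GCC's per-function state.  */
struct function;

/* A thread-local "current function" lets each stage thread work on its
   own function without locking this state.  */
static __thread struct function *cfun_tls;

static void
run_passes_on (struct function *fn)
{
  cfun_tls = fn;   /* visible only to the calling thread */
  /* ... run the GIMPLE passes against cfun_tls ... */
}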
 If this is the case, is there any documentation that describes the
 pre-reqs for each pass? I have looked at the internal documentation;
 however, it seems that all of this still has to be created?
>>>
>>>The prereqs are actually the same and not very well documented (if at all).
>>>There's the global GC memory pool where we allocate statements and
>>>stuff like that from (and luckily statements themselves are function
>>>private).
>>>Then there's global trees like types ('int') where modification needs to be
>>>appropriately guarded.  Note that "modification" means for example
>>>building a new type for the address of 'int', given that all different
>>>pointer types to 'int' are chained in a list rooted in the tree for 'int'.
>>>That means (a few?) tree building helpers need to be guarded with locks.
>>>I don't have a great idea how to identify those apart from knowing them in
>>>advance or running into races ... my gut feeling is that there's not a lot
>>>to guard, but I may be wrong ;)
>>
>> What does it mean to be a node of a type tree?
>> Does it describe information about that type,
>> or does it keep a reference to where something
>> of that type has been declared?
>
>On the GIMPLE level we have two main kinds of data objects: one is
>'gimple' (and derived classes) for statements, and the other is 'tree' for
>operands, types and declarations.  'tree' is also what GENERIC is
>represented in, the IL that gets produced by the frontends and is lowered
>to GIMPLE by the gimplifier.
>
>On GENERIC everything is a 'tree' (there's a class hierarchy of trees,
>implemented in C fashion via unions): types are trees, and so are decls
>and expressions.  You can look at tree.h (and tree-core.h
>for implementation details) and for example see
>
>#define TYPE_POINTER_TO(NODE) (TYPE_CHECK (NODE)->type_common.pointer_to)
>
>where tree.c:build_pointer_type_for_mode does
>
>  /* First, if we already have a type for pointers to TO_TYPE and it's
>     the proper mode, use it.  */
>  for (t = TYPE_POINTER_TO (to_type); t; t = TYPE_NEXT_PTR_TO (t))
>    if (TYPE_MODE (t) == mode && TYPE_REF_CAN_ALIAS_ALL (t) == can_alias_all)
>      return t;
>
>so if a pass builds a new pointer type that doesn't already exist, we
>modify this list.  This means an existing type tree is modified even if
>the type itself isn't modified.
>
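
(A sketch of the kind of guarding such a helper would need; the lock, the
type_node layout and the lookup condition are invented for illustration and
simplified from what build_pointer_type_for_mode actually checks:)

#include <pthread.h>
#include <stdlib.h>

/* Illustrative stand-in for a type tree: each type roots a chain of the
   pointer types already built to it, as TYPE_POINTER_TO does.  */
struct type_node
{
  struct type_node *pointer_to;    /* head of the pointer-type chain */
  struct type_node *next_ptr_to;   /* next entry in that chain */
  struct type_node *pointed_to;    /* for pointer types: the pointee */
};

static pthread_mutex_t type_chain_lock = PTHREAD_MUTEX_INITIALIZER;

/* The lookup-or-insert on the shared chain must be atomic; otherwise two
   threads can race and chain duplicate pointer types into the list.  */
struct type_node *
build_pointer_type_sketch (struct type_node *to_type)
{
  pthread_mutex_lock (&type_chain_lock);
  for (struct type_node *t = to_type->pointer_to; t; t = t->next_ptr_to)
    if (t->pointed_to == to_type)    /* stands in for the mode checks */
      {
        pthread_mutex_unlock (&type_chain_lock);
        return t;
      }
  struct type_node *t = calloc (1, sizeof *t);
  t->pointed_to = to_type;
  t->next_ptr_to = to_type->pointer_to;   /* chain into the existing list */
  to_type->pointer_to = t;
  pthread_mutex_unlock (&type_chain_lock);
  return t;
}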
 As to how this would be implemented, it seems the easiest way would be to
 extend the macros to accept a pre-req pass. This w