https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96613

--- Comment #6 from Steve Kargl <sgk at troutmask dot apl.washington.edu> ---
On Mon, Aug 17, 2020 at 06:03:31PM +0000, anlauf at gcc dot gnu.org wrote:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96613
> 
> --- Comment #5 from anlauf at gcc dot gnu.org ---
> (In reply to kargl from comment #4)
> > I thought that this might be a good candidate for frontend fix.  
> > Similar to Thomas's frontend optimizations, except the 
> > transformation is always done.  That is, we should be able to do
> > substitutions based on Table 16.3 before we even get to backend.
> 
> If frontend optimization is preferred, I'll step out of the way.
> 
> Nevertheless, one also needs to address issues like:
> 
> % cat maxmin.f90
> program p
>   implicit none
>   print *, min (2.0, 1.d0)
>   print *, min (2.d0, 1.0)
>   print *, kind (min (2.0, 1.d0))
>   print *, kind (min (2.d0, 1.0))
> end program p
> 
> % gfc-11 maxmin.f90 && ./a.out 
>    1.00000000    
>    1.0000000000000000     
>            4
>            8
> 
> The only compiler I found with the same behavior is PGI/NVIDIA.
> 
> OTOH ifort (and similarly sunf95, g95(!)) result in the expected:
> 
>    1.00000000000000     
>    1.00000000000000     
>            8
>            8

Personally, I would rather issue an error if the types of the
arguments are not the same, but that ship sailed years ago.
To fix the above, you'll need to look at iresolve.c:

static void
gfc_resolve_minmax (const char *name, gfc_expr *f, gfc_actual_arglist *args)
{
  gfc_actual_arglist *a;

  f->ts.type = args->expr->ts.type;
  f->ts.kind = args->expr->ts.kind;

and re-do the conversion stuff.  AFAICT, type conversion is
not handled correctly.  The largest kind is found regardless
of the type, and that kind, combined with the type of the first
argument, is used to do the conversion.
