CfP: Sixth Workshop on Accelerator Programming using Directives (WACCPD 2019)

2019-07-18 Thread Thomas Schwinge
Hi!

If you're doing cool things in the area of "Accelerator Programming using
Directives" or related things, please consider submitting a paper.  (I'm
on the Program Committee again.)

| 
| Sixth Workshop on Accelerator Programming using Directives (WACCPD 2019)
| (in conjunction with SC19)
| November 18, 2019 - https://waccpd.org
| 
| 
| Call for Papers
| 
| The ever-increasing heterogeneity in supercomputing applications has given
| rise to complex compute-node architectures offering multiple, heterogeneous
| levels of massive parallelism. As a result, the 'X' in MPI+X demands more
| focus. Extracting the maximum available parallelism from such systems
| requires sophisticated programming approaches that provide scalable and
| portable solutions without compromising performance. Programmers in the
| scientific community expect solutions that allow a single code base to be
| maintained wherever possible, avoiding duplicated effort.
| 
| Raising the level of abstraction in the code is an effective way to reduce
| the burden on the programmer while improving productivity. Abstraction-based
| programming models such as OpenMP and OpenACC have served this purpose over
| the past several years as compiler technology has steadily improved. These
| programming models address the 'X' component by providing programmers with
| high-level, directive-based approaches to accelerate and port scientific
| applications to heterogeneous platforms.
| 
| Recent architectural trends indicate that future exascale machines will rely
| heavily on accelerators for performance. To that end, the workshop will
| highlight improvements over the state of the art through the accepted papers
| and will prompt discussion through keynotes and panels that draw the
| community's attention to key areas facilitating the transition to
| accelerator-based high-performance computing (HPC). The workshop aims to
| showcase all aspects of heterogeneous systems, discussing innovative
| high-level language features, lessons learned while using directives to
| migrate scientific legacy code to parallel processors, and compilation and
| runtime scheduling techniques, among others.
| 
| WACCPD 2019 will be co-located with SC19 in Denver. Over the past five
| years, WACCPD has been one of the major forums at SC for bringing together
| programming-model users, developers, and the tools community to share
| knowledge and experience in tackling emerging, complex parallel computing
| systems.
| 
| 
| Topics of interest for workshop submissions include (but are not limited to)
| ---------------------------------------------
| * Programming experiences porting applications in any scientific domain
| * Compiler and runtime support for current and emerging architectures
|   (e.g. heterogeneous architectures, low-power processors)
| * Experiences in implementing compilers for accelerator directives on
|   newer architectures
| * Language-based extensions and their prototypes for directive-based
|   programming models
| * Abstract handling of complex/heterogeneous memory hierarchies
| * Extensions to and shortcomings of current directives for heterogeneous
|   systems
| * Comparisons against lower- or higher-level abstractions
| * Application performance evaluation, validation, and lessons learned
| * Modeling, verification, and performance analysis tools
| * Auto-tuning and optimization strategies
| * Parallel computing using hybrid programming paradigms (e.g. MPI,
|   OpenMP, OpenACC, OpenSHMEM)
| * Asynchronous execution and scheduling (task-based approaches)
| * Interoperability of scientific libraries with directive-based models
| * Power/energy studies and solutions targeting accelerators or
|   heterogeneous systems
| 
| 
| Workshop Important Deadlines
| ---------------------------------------------
| Submission deadline: August 22, 2019 (AoE)
| Author notification: September 30, 2019
| Workshop-ready deadline: October 10, 2019 (AoE)
| Camera-ready papers due: December 10, 2019 (AoE)
| 
| 
| Submission Process & Proceedings
| ---------------------------------------------
| WACCPD papers will be peer-reviewed and selected for presentation at the
| workshop. The papers presented will be published as post-proceedings in
| Lecture Notes in Computer Science (LNCS) with Springer. Papers should be
| submitted electronically via the SC19 Submission Page
| (https://submissions.supercomputing.org/?page=Submit&id=SC19WorkshopWACCPDSubmission&site=sc19)
| and must follow the Springer LNCS format. Submissions are limited to 20
| pages. The 20-page limit includes figures, tables, and appendices, but does
| not include references, for which there is no page limit. Authors are
| encouraged to provide an artifact appendix in line with SC19's
| reproducibility initiative. If an Artifact Description (AD) is provided, the

Re: Can LTO minor version be updated in backward compatible way ?

2019-07-18 Thread Richard Biener
On Wed, Jul 17, 2019 at 7:30 PM Andi Kleen  wrote:
>
> Romain Geissler  writes:
> >
> > I have no idea of the LTO format and whether it can easily be updated
> > in a backward-compatible way. But I would say it would be nice if it
> > could; it would allow adoption for projects spread across many teams
> > that depend on each other and are unable to rebuild everything at each
> > toolchain update.
>
> Right now any change to a compiler option breaks the LTO format
> in subtle ways.

Indeed - that one is quite awkward.  I wonder if we could try mitigating
that by streaming a hash in front of the actual data for optimization/target
nodes, which we can then verify is correct.  Such a change would be local to
optc-save-gen.awk and materialize in cl_*_stream_{in,out}.

I think that's the only place where streaming is auto-generated.
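
A rough sketch of the idea, with made-up names rather than the code that
optc-save-gen.awk actually generates: hash a descriptor of the streamed
option fields, emit the hash ahead of the payload, and have the reader
compare it against the hash of its own descriptor.

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

/* FNV-1a over a string; any stable hash would do.  */
static std::uint64_t
descriptor_hash (const std::string &s)
{
  std::uint64_t h = 1469598103934665603ull;
  for (unsigned char c : s)
    {
      h ^= c;
      h *= 1099511628211ull;
    }
  return h;
}

/* Hash the names/types of all streamed option fields, so the value changes
   whenever an option is added, removed, or retyped.  */
static std::uint64_t
option_layout_hash (const std::vector<std::string> &fields)
{
  std::string all;
  for (const std::string &f : fields)
    {
      all += f;
      all += ';';
    }
  return descriptor_hash (all);
}

/* Writer side: emit the layout hash before the option payload.  */
void
hypothetical_stream_out_header (std::vector<std::uint64_t> &out,
                                const std::vector<std::string> &fields)
{
  out.push_back (option_layout_hash (fields));
}

/* Reader side: reject the section if the producer's layout differs.  */
void
hypothetical_stream_in_header (const std::vector<std::uint64_t> &in,
                               std::size_t &pos,
                               const std::vector<std::string> &fields)
{
  if (in.at (pos++) != option_layout_hash (fields))
    throw std::runtime_error
      ("optimization/target node layout mismatch: object produced by an "
       "incompatible compiler");
}

A mismatch would then fail loudly at stream-in time instead of silently
misreading option values.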

Richard.

> In fact even the minor changes that are currently
> done are not frequent enough to catch all such cases.
>
> So it's unlikely to really work.
>
> -Andi
>


Re: Can LTO minor version be updated in backward compatible way ?

2019-07-18 Thread Florian Weimer
* Jeff Law:

> On 7/17/19 11:29 AM, Andi Kleen wrote:
>> Romain Geissler  writes:
>>>
>>> I have no idea of the LTO format and whether it can easily be updated
>>> in a backward-compatible way. But I would say it would be nice if it
>>> could; it would allow adoption for projects spread across many teams
>>> that depend on each other and are unable to rebuild everything at each
>>> toolchain update.
>> 
>> Right now any change to a compiler option breaks the LTO format
>> in subtle ways. In fact even the minor changes that are currently
>> done are not frequent enough to catch all such cases.
>> 
>> So it's unlikely to really work.

> Right and stable LTO bytecode really isn't on the radar at this time.

Maybe it's better to serialize the non-preprocessed source code instead.
It would need some (hash-based?) deduplication.  For #include
directives, the hash of the file would be captured for reproducibility.
Then, if the initial #defines are known, the source code after preprocessing
can be reproduced exactly.

Compressed source code is a surprisingly compact representation of a
program, usually smaller than any (compressed) IR dump.
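
A rough sketch of the kind of hash-based deduplication I have in mind
(purely illustrative data structures, not a concrete proposal for GCC):

#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

/* Each distinct source file is stored once, keyed by a hash of its contents;
   a header shared by many translation units costs a single blob.  */
struct source_store
{
  std::map<std::uint64_t, std::string> blobs;

  std::uint64_t
  intern (const std::string &contents)
  {
    /* Placeholder hash; a real implementation would want a strong one.  */
    std::uint64_t h = std::hash<std::string> () (contents);
    blobs.emplace (h, contents);
    return h;
  }
};

/* Per translation unit: the primary source, the hash of every file pulled in
   via #include, and the macro state on entry; together these are enough to
   replay preprocessing bit-for-bit.  */
struct translation_unit_record
{
  std::uint64_t main_file;
  std::vector<std::pair<std::string, std::uint64_t>> includes;
  std::vector<std::string> initial_defines;
};

Replaying is then just looking up each blob by its hash and re-running the
preprocessor with the recorded defines.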

Thanks,
Florian


gcc-7-20190718 is now available

2019-07-18 Thread gccadmin
Snapshot gcc-7-20190718 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/7-20190718/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 7 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-7-branch 
revision 273586

You'll find:

 gcc-7-20190718.tar.xz   Complete GCC

  SHA256=c87c75f35653089868acef5b048e334b0c726612cef58ae25b30b6cd3d8d711a
  SHA1=51477f444e7a2281c7b3846cee7d387556a287eb

Diffs from 7-20190711 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-7
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: [EXT] Re: Can LTO minor version be updated in backward compatible way ?

2019-07-18 Thread Romain Geissler
On Thu, 18 Jul 2019, Florian Weimer wrote:

> > Right and stable LTO bytecode really isn't on the radar at this time.
>
> Maybe it's better to serialize the non-preprocessed source code instead.
> It would need some (hash-based?) deduplication.  For #include
> directives, the hash of the file would be captured for reproducibility.
> Then if the initial #defines are known, the source code after processing
> can be reproduced exactly.
>
> Compressed source code is a surprisingly compact representation of a
> program, usually smaller than any (compressed) IR dump.

Hi,

That may fly in the open-source world; however, I expect some vendors
shipping proprietary code would be fine with an assembly/LTO representation
of their product, but not with source.

From your different answers, it looks like for now it is hopeless to expect
good compatibility between minor releases. With that in mind, do you think
it might be worth implementing some kind of flag, say
-flto-fallback-to-fat-objects={error,warning,silent}, where the default
value "error" would just report that we have an LTO version mismatch,
"warning" would print the version mismatch but fall back to the fat objects'
assembly for the conflicting libraries, and "silent" would do the same
fallback silently? Or are we really the only users of fat LTO objects, and
the only ones facing this kind of issue, where rebuilding everything all the
time is not easy/possible?
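
A rough sketch of the intended behavior (the flag and its modes exist only
as the proposal above; names are illustrative):

#include <cstdio>

enum class lto_fallback_mode { error, warning, silent };

/* Called at link time for an object whose LTO bytecode version does not
   match the current compiler.  Returns true if the link can proceed by
   falling back to the fat object's native code.  */
bool
handle_lto_version_mismatch (lto_fallback_mode mode, const char *object,
                             bool has_fat_code)
{
  switch (mode)
    {
    case lto_fallback_mode::error:
      std::fprintf (stderr, "error: %s: LTO version mismatch\n", object);
      return false;                     /* link fails, as it does today */
    case lto_fallback_mode::warning:
      std::fprintf (stderr,
                    "warning: %s: LTO version mismatch, using fat object "
                    "code\n", object);
      return has_fat_code;              /* fall back if native code exists */
    case lto_fallback_mode::silent:
      return has_fat_code;              /* same fallback, no diagnostic */
    }
  return false;
}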

Cheers,
Romain