Fernando Rodriguez <frodriguez.developer <at> outlook.com> writes:

> > albeit in its infancy. Naturally it's going to take a while to
> > become mainstream useful, but that's more like a year or two, at most.
> 
> The value I see in that technology for desktop computing is that we get
> GPUs for what they're made for (graphics processing), but their resources
> go unused by most applications; not in buying powerful GPUs for the
> purpose of offloading general-purpose code. If that's the goal, you're
> better off investing in more general-purpose cores that are more suited
> for the task.


I think most folks purchasing a workstation include a graphics card on
the list of items to buy. So my suggestions were geared towards informing
folks about some of the new features of gcc that may entice them to
consider the graphics card's resources as part of an expanded vision of
the general resources of their workstation.


> To truly take advantage of the GPU, the actual algorithms need to be
> rewritten to use features like SIMD and other advanced parallelization
> features; most desktop workloads don't lend themselves to that kind of
> parallelization.

Not true, if what OpenACC hopes to achieve does indeed become a reality.
Currently, you are mostly correct. Things change; I'm an optimist because
I see what is occurring in embedded devices, arm64, and cluster codes.
ymmv.
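
To make that concrete, here is a minimal sketch of what an OpenACC
program looks like. The file name and values are hypothetical, but the
directive and the -fopenacc flag are real GCC 5+ features, assuming a
GCC built with an offloading backend:

/* vecadd.c -- minimal OpenACC sketch: vector addition offloaded to an
 * accelerator.  The loop body is ordinary serial C; the directive is
 * the only change from a non-accelerated version.
 *
 * Build (assuming GCC configured with nvptx offloading):
 *     gcc -fopenacc -O2 vecadd.c -o vecadd
 */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5f;
        b[i] = i * 2.0f;
    }

    /* Ask the compiler to find and offload the parallelism. */
    #pragma acc kernels copyin(a, b) copyout(c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}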

> That is why, despite similar predictions about how OpenMP-like parallel
> models would obsolete the current threads model since they were first
> proposed, it hasn't happened yet.

Yes, it's still a new technology, and controversial, just like systemd,
clusters, and Software Defined Networks.


> Even for the purpose of offloading general-purpose code, it seems that
> with all the limitations on OpenACC kernels, few desktop applications
> can take advantage of it (and noticeably benefit from it) without major
> rewrites. Off the top of my head: audio, video/graphics encoders, and a
> few other things that max out the CPU and can be broken into independent
> execution units.


You are taking a very conservative view of things. Codes being worked
out now for clusters will find their way into expanding the use of video
card resources for general-purpose work. Most of this will occur as
compiler enhancements, not rewriting by hand or modifying the algorithmic
designs of existing codes; see the build sketch below. Granted, they are
going to apply mostly to multi-threaded application codes.
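
As a hypothetical illustration of that compiler-enhancement path: with
a GCC configured with the nvptx offload backend, the same source (the
vecadd.c sketch above) builds for host-only or GPU execution purely by
changing flags, with no source changes:

    gcc -O2 vecadd.c -o vecadd                    # plain host build
    gcc -fopenacc -foffload=nvptx-none -O2 vecadd.c -o vecadd   # offload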


When folks buy new hardware, it is often a good time to look at what is
on the horizon for the computers they use. All I have pointed out is a
very active area that folks would benefit from reviewing for themselves.
I'm not pushing expenditures of any kind on any hardware.

Caveat Emptor.

James
