It was discovered that the OpenCL AstroPulse application in SETI can't work 
properly under the 340.52 NVIDIA driver on pre-Fermi hardware (single pulses 
are missing; the issue has already been reported to and reproduced by NVIDIA).
To avoid wrong results, Eric created two plan classes that should prevent 
sending work to pre-Fermi GPUs running under that faulty driver.

But it was then discovered (see 
http://setiweb.ssl.berkeley.edu/beta/forum_thread.php?id=2182&postid=52632 ) 
that multi-GPU hosts containing both Fermi and pre-Fermi GPUs can receive 
tasks under another common plan class intended for Fermi devices and then 
execute those tasks on the pre-Fermi devices. That is, although the server 
obeys the plan class and ignores pre-Fermi devices on the 340.52 driver, the 
BOINC client does not: it schedules tasks meant for Fermi devices onto 
pre-Fermi ones (when both are present in the host).

Could someone confirm this issue? And how should one exclude pre-Fermi 
devices on the 340.52 driver in such a case of mixed NVIDIA devices in one host?
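As a possible host-side workaround until the client scheduling is sorted out, 
affected users might try the client's exclude_gpu option in cc_config.xml, 
which keeps a given project off a specific device. A minimal sketch, assuming 
the pre-Fermi card is the one the client enumerates as device 1 (check the 
client's startup messages) and substituting your project's actual master URL:

```xml
<cc_config>
  <options>
    <!-- Keep this project off GPU device 1 (assumed here to be the
         pre-Fermi card; the device number comes from the client's
         startup log and may differ on your host). -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>1</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```

This only excludes the device per project on that one host, of course; it 
doesn't fix the underlying client behaviour of ignoring the plan class.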



-- 
Raistmer the Sorcerer
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
