David, I hope you understand that this cannot be the solution to this issue, even
if it is true.
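
As a client-side workaround for such mixed-GPU hosts, the pre-Fermi device can be excluded per project with the <exclude_gpu> option in cc_config.xml, so the client never schedules that project's GPU tasks on it. A minimal sketch follows; the project URL and device number are assumptions (check the device index reported in the client's startup messages), not taken from this thread:

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     Restart the client (or use "Read config files") to apply. -->
<cc_config>
  <options>
    <exclude_gpu>
      <!-- URL and device_num are illustrative assumptions. -->
      <url>http://setiweb.ssl.berkeley.edu/beta/</url>
      <device_num>1</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```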



Thu, 02 Oct 2014 16:57:41 +0200 от David Anderson <[email protected]>:
>I assume these are cases where the user has set <use_all_gpus>.
>If that flag is removed the problem should go away.
>-- David
>
>On 02-Oct-2014 4:28 PM, Raistmer the Sorcerer wrote:
>> It was discovered that the OpenCL AstroPulse application in SETI cannot work
>> properly under the 340.52 NVIDIA driver on pre-Fermi hardware (single pulses
>> are missing; the issue has been reported to and reproduced by NVIDIA). To
>> avoid wrong computations, Eric created two plan classes that should prevent
>> work from being sent to pre-Fermi GPUs running under that faulty driver.
>>
>> But it was discovered (see
>>  http://setiweb.ssl.berkeley.edu/beta/forum_thread.php?id=2182&postid=52632 )
>> that multi-GPU hosts containing both Fermi and pre-Fermi GPUs are able to
>> receive tasks under another common plan class for Fermi devices and execute
>> those received tasks on pre-Fermi devices. That is, although the server obeys
>> the plan class and ignores pre-Fermi devices on the 340.52 driver, the BOINC
>> client does not obey the plan class and schedules tasks meant for Fermi
>> devices onto pre-Fermi ones (when both are present in the host).
>>
>> Could someone confirm this issue? And how should one act to exclude pre-Fermi
>> devices on the 340.52 driver in such a case of mixed NVIDIA devices in a host?
>>
>>
>>
>_______________________________________________
>boinc_dev mailing list
>[email protected]
>http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>To unsubscribe, visit the above URL and
>(near bottom of page) enter your email address.

