That's why I wrote about this here, on the dev forum, 3 or 4 years ago: a device-centric approach. I hope this will change in the future. Back then it was just an idea with an abstract need; now we have particular examples where it is needed, and needed immediately...
Thu, 02 Oct 2014 17:18:26 +0200 from David Anderson <[email protected]>:

>Not in the current design.
>-- D
>
>On 02-Oct-2014 5:09 PM, McLeod, John wrote:
>> Is there some way to use both GPUs and get the correct app running on the
>> correct device?
>>
>> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>>
>> -----Original Message-----
>> *From:* David Anderson [[email protected]]
>> *Received:* Thursday, 02 Oct 2014, 11:06AM
>> *To:* McLeod, John [[email protected]]; [email protected]
>> [[email protected]]
>> *Subject:* Re: [boinc_dev] BOINC scheduling & plan class possible issue
>>
>> Yes.
>> You can also use <exclude_gpu> to be more specific
>> about which GPU to use for which project.
>>
>> On 02-Oct-2014 4:59 PM, McLeod, John wrote:
>>> Would that then prevent BOINC from running tasks on some of the GPUs
>>> for any tasks?
>>>
>>> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>>>
>>> -----Original Message-----
>>> *From:* David Anderson [[email protected]]
>>> *Received:* Thursday, 02 Oct 2014, 10:58AM
>>> *To:* [email protected] [[email protected]]
>>> *Subject:* Re: [boinc_dev] BOINC scheduling & plan class possible issue
>>>
>>> I assume these are cases where the user has set <use_all_gpus>.
>>> If that flag is removed the problem should go away.
>>> -- David
>>>
>>> On 02-Oct-2014 4:28 PM, Raistmer the Sorcerer wrote:
>>>> It was discovered that the OpenCL AstroPulse application in SETI can't
>>>> work properly under the 340.52 NVIDIA driver on pre-Fermi hardware
>>>> (single pulses are missed; the issue has already been reported to and
>>>> reproduced by NVIDIA). To avoid incorrect computations, Eric created
>>>> two plan classes that should prevent sending work to pre-Fermi GPUs
>>>> running under that faulty driver.
>>>>
>>>> But it was then discovered (see
>>>> http://setiweb.ssl.berkeley.edu/beta/forum_thread.php?id=2182&postid=52632)
>>>> that multi-GPU hosts containing both Fermi and pre-Fermi GPUs are able
>>>> to receive tasks under another common plan class for Fermi devices and
>>>> execute those received tasks on pre-Fermi devices. That is, although
>>>> the server obeys the plan class and ignores pre-Fermi devices on the
>>>> 340.52 driver, the BOINC client does not obey the plan class and
>>>> schedules tasks intended for Fermi devices on pre-Fermi ones (when
>>>> both are present in the host).
>>>>
>>>> Could someone confirm this issue? And how should one act to exclude
>>>> pre-Fermi devices on the 340.52 driver in such a case of mixed NVIDIA
>>>> devices in a host?
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
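For readers hitting the same problem: the <exclude_gpu> workaround David mentions goes in the client's cc_config.xml. A minimal sketch, assuming the pre-Fermi card shows up as device 1 in the client's startup messages (the device number, project URL, and app name here are illustrative, not taken from the thread; check your own event log and project for the real values):

```xml
<cc_config>
  <options>
    <!-- Keep this project's work off the pre-Fermi GPU only.
         <device_num> comes from the client's startup device list;
         omitting it would exclude ALL GPUs of the given type. -->
    <exclude_gpu>
      <url>http://setiweb.ssl.berkeley.edu/beta/</url>
      <device_num>1</device_num>
      <type>NVIDIA</type>
      <!-- Optional: restrict the exclusion to one app; without <app>,
           the exclusion applies to every app of the project. -->
      <app>astropulse</app>
    </exclude_gpu>
  </options>
</cc_config>
```

The client reads cc_config.xml at startup (or on "Read config files" from the Manager), so the exclusion takes effect only after that, and any already-downloaded tasks are still scheduled by the client itself.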
