The project is only informed about the most capable devices and the total
number of GPUs.
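
For context, both options mentioned in this thread (<use_all_gpus> and
<exclude_gpu>) live in the client's cc_config.xml, in the BOINC data
directory. A minimal sketch; the project URL and device number here are
illustrative and must be adjusted to the host's actual setup:

```xml
<cc_config>
  <options>
    <!-- With use_all_gpus off, the client uses only the most capable GPU(s) -->
    <use_all_gpus>0</use_all_gpus>
    <!-- Keep a specific device away from a specific project -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>1</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```

The client picks up changes on restart, or via "Read config files" in the
Manager's Advanced menu. But as noted below, this is client-side and
requires user intervention.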

On Thu, Oct 2, 2014 at 8:59 AM, McLeod, John <[email protected]> wrote:

> Can the project send work that will run on the least capable device and
> have it run on both?
>
> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>
> -----Original Message-----
> From: Raistmer the Sorcerer [[email protected]]
> Received: Thursday, 02 Oct 2014, 11:47AM
> To: [email protected] [[email protected]]
> Subject: Re: [boinc_dev] BOINC scheduling & plan class possible issue
>
> This solution requires user intervention. The question is how to avoid
> this server-side. The user may not be aware that his host generates
> invalid results, and the project can't prevent it now.
>
>
> Thu, 02 Oct 2014 17:06:05 +0200 from David Anderson <[email protected]>:
> >Yes.
> >You can also use <exclude_gpu> to be more specific
> >about which GPU to use for which project.
> >
> >On 02-Oct-2014 4:59 PM, McLeod, John wrote:
> >> Would that then prevent BOINC from running tasks on some of the GPUs
> >> for any tasks?
> >>
> >> Sent from my Android phone using TouchDown (www.nitrodesk.com)
> >>
> >> -----Original Message-----
> >> *From:* David Anderson [[email protected]]
> >> *Received:* Thursday, 02 Oct 2014, 10:58AM
> >> *To:*  [email protected] [[email protected]]
> >> *Subject:* Re: [boinc_dev] BOINC scheduling & plan class possible issue
> >>
> >> I assume these are cases where the user has set <use_all_gpus>.
> >> If that flag is removed the problem should go away.
> >> -- David
> >>
> >> On 02-Oct-2014 4:28 PM, Raistmer the Sorcerer wrote:
> >>> It was discovered that the OpenCL AstroPulse application in SETI
> >>> can't work properly under the 340.52 nVidia driver on pre-FERMI
> >>> hardware (single pulses missing; the issue has already been reported
> >>> to and reproduced by NV). To avoid wrong computations, Eric created
> >>> 2 plan classes that should avoid sending work to pre-FERMI GPUs
> >>> running under that faulty driver.
> >>>
> >>> But it was discovered (see
> >>> http://setiweb.ssl.berkeley.edu/beta/forum_thread.php?id=2182&postid=52632)
> >>> that multi-GPU hosts containing both FERMI and pre-FERMI GPUs are
> >>> able to receive tasks under another common plan class for FERMI
> >>> devices and execute those received tasks on pre-FERMI devices. That
> >>> is, though the server obeys the plan class and ignores pre-FERMI
> >>> devices on the 340.52 driver, the BOINC client doesn't obey the plan
> >>> class and schedules tasks meant for FERMI devices on pre-FERMI ones
> >>> (when both are present in the host).
> >>>
> >>> Could someone confirm this issue? And how should one act to exclude
> >>> pre-FERMI devices on the 340.52 driver in such a case of mixed NV
> >>> devices in a host?
> >>>
> >>>
> >>>
> >> _______________________________________________
> >> boinc_dev mailing list
> >>  [email protected]
> >>  http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
> >> To unsubscribe, visit the above URL and
> >> (near bottom of page) enter your email address.
>
>
>
