Re: [Mesa-dev] How to merge Mesa changes which require corresponding piglit changes

2019-12-02 Thread Tapani Pälli

Hi;

On 11/15/19 8:41 PM, Mark Janes wrote:

Michel Dänzer  writes:


On 2019-11-15 4:02 p.m., Mark Janes wrote:

Michel Dänzer  writes:


Now that the GitLab CI pipeline tests a snapshot of piglit with llvmpipe
(https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2468), the
question has come up how to deal with inter-dependent Mesa/piglit
changes (where merging only one or the other causes some piglit
regressions).


First of all, let it be clear that just merging the Mesa changes as-is
and breaking the GitLab CI pipeline is not acceptable.


From the Mesa POV, the easiest solution is:

1. Merge the piglit changes
2. In the Mesa MR (merge request), add a commit which updates piglit[0]
3. If the CI pipeline is green, the MR can be merged


In case one wants to avoid alarms from external CI systems, another
possibility is:


For the Intel CI, no alarm is generated if the piglit test is pushed
first.  Normal development process includes writing a piglit test to
illustrate the bug that is being fixed.


Cool, but what if the piglit changes affect the results of existing
tests? That was the situation yesterday which prompted this thread.


We attribute the status change to piglit in the CI config, within a few
hours.  The test shows up as a failure in CI until it is triaged.


I think we have a problem with the current GitLab CI process.

Right now, if someone needs to update the piglit commit used by CI, they 
also end up fixing and editing .gitlab-ci/piglit/quick_gl.txt (and 
glslparser+quick_shader.txt), because CI reports numerous failures caused 
by completely unrelated changes: in the meantime people have added, 
removed and modified other tests. I think we should turn such warnings on 
only once we have a more sophisticated algorithm for detecting actual 
regressions (not just any 'state change', such as an added or removed test).



1. In the Mesa MR, add a commit which disables the piglit tests broken
by the Mesa changes.
2. If the CI pipeline is green, the MR can be merged
3. Merge the piglit changes
4. Create another Mesa MR which updates piglit[0] and re-enables the
tests disabled in step 1

I hope that covers it; don't hesitate to ask questions if something's
still unclear.


It might help developers if CI generated the patch to make their pipeline
pass.


It does for the test result list, if that's what you mean.

However, that patch shouldn't be applied mechanically, but only after
confirming that all changes in test results are expected. Ideally,
whenever there are any new tests, the corresponding CI jobs should be
run several times to make sure the new results are stable, and any
flaky tests should be excluded.


--
Earthling Michel Dänzer   |   https://redhat.com
Libre software enthusiast | Mesa and X developer





// Tapani

Re: [Mesa-dev] Hardware assisted (VDPAU) decoding of MPEG-2 causes GPU lockup on Radeon HD6320

2019-12-02 Thread Christian König
> The reason we had to switch to VDPAU with Ubuntu 16.04 is that we saw
> a major regression with mpeg2 playback using va-api.

What regression was that? The difference between VDPAU and VA-API is 
only marginal for codec support.


> During our testing we put Ubuntu 19.10 on one of these boxes and
> noticed that full software decoding has improved to the point that
> neither VA-API nor VDPAU was required for VLC to render the mpeg2 and
> mpeg4 streams correctly.

Well, how was the stack configured then? Pure software playback?

As long as you don't do any high resolution H264 decoding, the CPU in 
that package should be capable of decoding both MPEG2 and MPEG4 with 
software decode.


In general I would also try with mpv instead of vlc to rule out player 
issues.


Regards,
Christian.

On 02.12.19 at 15:06, Will DeBerry wrote:


> well that's the very first APU generation and unfortunately nobody
> is working on that old hardware any more.


Agreed, definitely old hardware. Unfortunately we have 10,000 of these 
things in production, and they had been playing hardware-accelerated 
mpeg2 fine until we upgraded to Ubuntu 16.04 and the new mesa package. 
To be specific, on the previous version of Linux on these systems we 
were using an older software stack and video acceleration pipeline, and 
it worked perfectly, so we know the hardware is capable.


*Old Software Stack:*

  * vlc 2.1.5
  * mesa 11.0.6
  * va-api hardware acceleration
  * libva info: VA-API version 0.38.1
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/va/drivers/r600_drv_video.so
libva info: Found init function __vaDriverInit_0_35
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.38 (libva 1.6.2)
vainfo: Driver version: Splitted-Desktop Systems VDPAU backend for
VA-API - 0.7.4
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG4Simple            : VAEntrypointVLD
      VAProfileMPEG4AdvancedSimple    : VAEntrypointVLD
      VAProfileH264Baseline           : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD

*New Software Stack:*

  * vlc 2.2.2
  * mesa 18.2.8-2~bpo9+1
  * vdpau hardware acceleration

The reason we had to switch to VDPAU with Ubuntu 16.04 is that we saw 
a major regression with mpeg2 playback using va-api. It was capable of 
playing back mpeg4 without any issues. Now that we have switched to 
VDPAU, however, we are seeing this GPU thread lockup bug, which causes 
X and other GUI-related programs to crash and requires a reboot to 
recover.


Changing out the hardware for the next best thing is not an option at 
our scale, and we know the hardware is capable from past experience. We 
just need assistance from someone who knows the stack much better than 
we do, to help dig down to the root cause of the lockup or to help us 
get VA-API working for mpeg2 on 16.04.


> So the best approach is probably to not use hardware acceleration
> for MPEG2 clips in general.


With software decoding, the performance doesn't produce something that 
is watchable. One interesting tidbit to note: during our testing we put 
Ubuntu 19.10 on one of these boxes and noticed that full software 
decoding has improved to the point that neither VA-API nor VDPAU was 
required for VLC to render the mpeg2 and mpeg4 streams correctly. Is 
this something that could potentially be backported to Ubuntu 16.04? I 
know this is a much bigger task than the one-sentence ask alludes to, 
but I figured I'd ask anyway.


We are more than willing to work together on this, especially since the 
hardware is older and probably hard to find. We just need to find a 
solution so we can move forward with upgrading the software on this 
older hardware.


On Thu, Nov 28, 2019 at 7:15 AM Christian König wrote:


Hi Will,

well that's the very first APU generation and unfortunately nobody
is working on that old hardware any more.

MPEG2 is known to not be fully supported by that chipset in
general. So the best approach is probably to not use hardware
acceleration for MPEG2 clips in general.

Regards,
Christian.

On 27.11.19 at 18:35, Will DeBerry wrote:

Hi all,

I am reaching out hoping to get some assistance with resolving a
bug/crash that we see with the GPU when using VDPAU hardware
acceleration on Ubuntu 16.04. This is specific to the r600
drivers interacting with VDPAU when trying to play back certain
mpeg2 content.

*GPU in question per lspci: *
00:01.0 VGA compatible controller

Re: [Mesa-dev] How to merge Mesa changes which require corresponding piglit changes

2019-12-02 Thread Michel Dänzer
On 2019-12-02 3:15 p.m., Tapani Pälli wrote:
> On 11/15/19 8:41 PM, Mark Janes wrote:
>> Michel Dänzer  writes:
>>
>>> On 2019-11-15 4:02 p.m., Mark Janes wrote:
 Michel Dänzer  writes:

> Now that the GitLab CI pipeline tests a snapshot of piglit with
> llvmpipe
> (https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2468), the
> question has come up how to deal with inter-dependent Mesa/piglit
> changes (where merging only one or the other causes some piglit
> regressions).
>
>
> First of all, let it be clear that just merging the Mesa changes as-is
> and breaking the GitLab CI pipeline is not acceptable.
>
>
> From the Mesa POV, the easiest solution is:
>
> 1. Merge the piglit changes
> 2. In the Mesa MR (merge request), add a commit which updates
> piglit[0]
> 3. If the CI pipeline is green, the MR can be merged
>
>
> In case one wants to avoid alarms from external CI systems, another
> possibility is:

 For the Intel CI, no alarm is generated if the piglit test is pushed
 first.  Normal development process includes writing a piglit test to
 illustrate the bug that is being fixed.
>>>
>>> Cool, but what if the piglit changes affect the results of existing
>>> tests? That was the situation yesterday which prompted this thread.
>>
>> We attribute the status change to piglit in the CI config, within a few
>> hours.  The test shows up as a failure in CI until it is triaged.
> 
> I think we have a problem with the current GitLab CI process.
> 
> Right now, if someone needs to update the piglit commit used by CI, they
> also end up fixing and editing .gitlab-ci/piglit/quick_gl.txt (and
> glslparser+quick_shader.txt), because CI reports numerous failures caused
> by completely unrelated changes: in the meantime people have added,
> removed and modified other tests.

This is at least somewhat intentional, as the results of any newly added
tests should be carefully checked for plausibility.


> I think we should turn such warnings on only once we have a more
> sophisticated algorithm for detecting actual regressions (not just any
> 'state change', such as an added or removed test).

It's unclear what exactly you're proposing. In order to catch
regressions (e.g. pass -> warn, pass -> fail, pass -> skip, pass ->
crash), we need a list of all tests on at least one side of each
transition. We're currently keeping the list of all
warning/failing/skipped/crashing tests, but not passing tests (to keep
the lists as small as possible).
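
To make the terminology concrete, the distinction between a real
regression and a mere state change boils down to something like the
following sketch (illustrative only, not the actual CI code; the status
strings and the NULL convention for added/removed tests are assumptions):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch, not the actual CI code: classify one test's
 * transition between two piglit runs. NULL means "the test is not in
 * the result list on that side" (newly added or removed). */
static bool is_regression(const char *old_status, const char *new_status)
{
   if (old_status == NULL)   /* newly added test: a state change */
      return false;
   if (new_status == NULL)   /* removed test: also a state change */
      return false;
   /* pass -> warn/fail/skip/crash counts as a regression. */
   return strcmp(old_status, "pass") == 0 &&
          strcmp(new_status, "pass") != 0;
}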

One possibility might be to remove the summary at the end of the lists.
That would allow new passing tests to be silently added, but it would
mean we could no longer catch pass -> notrun regressions.


-- 
Earthling Michel Dänzer   |   https://redhat.com
Libre software enthusiast | Mesa and X developer

Re: [Mesa-dev] How to merge Mesa changes which require corresponding piglit changes

2019-12-02 Thread Tapani Pälli


On 12/2/19 5:25 PM, Michel Dänzer wrote:

On 2019-12-02 3:15 p.m., Tapani Pälli wrote:

On 11/15/19 8:41 PM, Mark Janes wrote:

Michel Dänzer  writes:


On 2019-11-15 4:02 p.m., Mark Janes wrote:

Michel Dänzer  writes:


Now that the GitLab CI pipeline tests a snapshot of piglit with
llvmpipe
(https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2468), the
question has come up how to deal with inter-dependent Mesa/piglit
changes (where merging only one or the other causes some piglit
regressions).


First of all, let it be clear that just merging the Mesa changes as-is
and breaking the GitLab CI pipeline is not acceptable.


From the Mesa POV, the easiest solution is:

1. Merge the piglit changes
2. In the Mesa MR (merge request), add a commit which updates
piglit[0]
3. If the CI pipeline is green, the MR can be merged


In case one wants to avoid alarms from external CI systems, another
possibility is:


For the Intel CI, no alarm is generated if the piglit test is pushed
first.  Normal development process includes writing a piglit test to
illustrate the bug that is being fixed.


Cool, but what if the piglit changes affect the results of existing
tests? That was the situation yesterday which prompted this thread.


We attribute the status change to piglit in the CI config, within a few
hours.  The test shows up as a failure in CI until it is triaged.


I think we have a problem with the current GitLab CI process.

Right now, if someone needs to update the piglit commit used by CI, they
also end up fixing and editing .gitlab-ci/piglit/quick_gl.txt (and
glslparser+quick_shader.txt), because CI reports numerous failures caused
by completely unrelated changes: in the meantime people have added,
removed and modified other tests.


This is at least somewhat intentional, as the results of any newly added
tests should be carefully checked for plausibility.



I think we should turn such warnings on only once we have a more
sophisticated algorithm for detecting actual regressions (not just any
'state change', such as an added or removed test).


It's unclear what exactly you're proposing. In order to catch
regressions (e.g. pass -> warn, pass -> fail, pass -> skip, pass ->
crash), we need a list of all tests on at least one side of each
transition. We're currently keeping the list of all
warning/failing/skipped/crashing tests, but not passing tests (to keep
the lists as small as possible).

One possibility might be to remove the summary at the end of the lists.
That would allow new passing tests to be silently added, but it would
mean we could no longer catch pass -> notrun regressions.



Yeah, the last point is what I had in mind, but it is tricky... I don't 
really have a good concrete proposal at the moment, but I was hoping 
someone might come up with one :)


I guess my issue boils down to the difference vs. the Intel CI: there we 
track piglit master, so the overall state is 'fresher'. With the current 
GitLab CI the issues surface late, after many piglit commits may have 
happened, so the person dealing with the issue (updating the tag) does 
not have the context of those changes, or maybe even the expertise to 
judge them (and what the expected result was); it should all have been 
caught earlier.


It could also be that I'm trying to update too big a chunk at once; I 
should go commit by commit and see what happens to the results.


// Tapani

Re: [Mesa-dev] Hardware assisted (VDPAU) decoding of MPEG-2 causes GPU lockup on Radeon HD6320

2019-12-02 Thread Alan Swanson
On Mon, 2019-12-02 at 15:16 +0100, Christian König wrote:
> > The reason we had to switch to VDPAU with Ubuntu 16.04 is that we
> > saw a major regression with mpeg2 playback using va-api.
>  What regression was that? The difference between VDPAU and VA-API is
> only marginal for codec support.

Guessing that it'll be the MPEG2 corruption under VA-API which was
fixed by:

st/va: reverse qt matrix back to its original order
https://cgit.freedesktop.org/mesa/mesa/commit/?id=d507bcdcf26b417dea201090165af651253b6b11

-- 
Alan.


Re: [Mesa-dev] How to merge Mesa changes which require corresponding piglit changes

2019-12-02 Thread Mark Janes
Michel Dänzer  writes:

> On 2019-12-02 3:15 p.m., Tapani Pälli wrote:
>> On 11/15/19 8:41 PM, Mark Janes wrote:
>>> Michel Dänzer  writes:
>>>
 On 2019-11-15 4:02 p.m., Mark Janes wrote:
> Michel Dänzer  writes:
>
>> Now that the GitLab CI pipeline tests a snapshot of piglit with
>> llvmpipe
>> (https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2468), the
>> question has come up how to deal with inter-dependent Mesa/piglit
>> changes (where merging only one or the other causes some piglit
>> regressions).
>>
>>
>> First of all, let it be clear that just merging the Mesa changes as-is
>> and breaking the GitLab CI pipeline is not acceptable.
>>
>>
>> From the Mesa POV, the easiest solution is:
>>
>> 1. Merge the piglit changes
>> 2. In the Mesa MR (merge request), add a commit which updates
>> piglit[0]
>> 3. If the CI pipeline is green, the MR can be merged
>>
>>
>> In case one wants to avoid alarms from external CI systems, another
>> possibility is:
>
> For the Intel CI, no alarm is generated if the piglit test is pushed
> first.  Normal development process includes writing a piglit test to
> illustrate the bug that is being fixed.

 Cool, but what if the piglit changes affect the results of existing
 tests? That was the situation yesterday which prompted this thread.
>>>
>>> We attribute the status change to piglit in the CI config, within a few
>>> hours.  The test shows up as a failure in CI until it is triaged.
>> 
>> I think we have a problem with the current GitLab CI process.
>> 
>> Right now, if someone needs to update the piglit commit used by CI, they
>> also end up fixing and editing .gitlab-ci/piglit/quick_gl.txt (and
>> glslparser+quick_shader.txt), because CI reports numerous failures caused
>> by completely unrelated changes: in the meantime people have added,
>> removed and modified other tests.
>
> This is at least somewhat intentional, as the results of any newly added
> tests should be carefully checked for plausibility.

If a piglit (or any other suite) commit causes a test failure, the
failure is not a Mesa regression, by definition.  CI is for identifying
regressions.  The simple fact that a failure is due to a non-Mesa commit
means it can be immediately masked in CI.

>> I think we should turn such warnings on only once we have a more
>> sophisticated algorithm for detecting actual regressions (not just any
>> 'state change', such as an added or removed test).
>
> It's unclear what exactly you're proposing. In order to catch
> regressions (e.g. pass -> warn, pass -> fail, pass -> skip, pass ->
> crash), we need a list of all tests on at least one side of each
> transition. We're currently keeping the list of all
> warning/failing/skipped/crashing tests, but not passing tests (to keep
> the lists as small as possible).

CI must track the development of the test suites to capture the
required transitions for tests.

If CI does not track each test suite commit, then some developer (e.g.
Tapani) has to go and triage test results from other piglit committers
in order to deploy tests in CI.  This is a barrier to test-first
development, and it is also unfair to the developers who are diligent
about testing.

Piglit and Crucible are maintained by the Mesa community and it makes
sense that Mesa CI should track their development.

Tracking other test suites (dEQP, CTS, etc.) means that the Mesa
community may be distracted by test failures that are bugs in the suite
instead of bugs in Mesa.  Mesa developers are not in a position to fix
bugs in dEQP.  However, tracking external suites also identifies new
conformance requirements that Mesa will eventually be required to pass.

In practice, some test suites are easy to track and have developers who
are eager to resolve issues identified by the Mesa community
(e.g. dEQP, VulkanCTS).  Other suites are in a constant state of build
churn and are hard to track (Skia).

Tracking test suites can be done without too much effort, but it
requires a centralized role similar to a release manager.

> One possibility might be to remove the summary at the end of the lists.
> That would allow new passing tests to be silently added, but it would
> mean we could no longer catch pass -> notrun regressions.
>
>
> -- 
> Earthling Michel Dänzer   |   https://redhat.com
> Libre software enthusiast | Mesa and X developer

Re: [Mesa-dev] Hardware assisted (VDPAU) decoding of MPEG-2 causes GPU lockup on Radeon HD6320

2019-12-02 Thread Christian König

>> Well, how was the stack configured then? Pure software playback?
>
> In 19.10, yes, the whole stack was told to use software playback and
> decoding.


I would investigate in that direction. 1920x1080 is not a high 
resolution and should decode just fine on the CPU.



> Our older Gentoo based setup with the old software stack worked fine


The hardware generally does not support some interlaced frame/field 
features rarely used in today's MPEG2 streams, and a software fallback 
isn't easily doable with VA-API/VDPAU, so it was never implemented.


Are you sure that the Gentoo based setup wasn't using software decoding?

Regards,
Christian.

On 02.12.19 at 16:02, Will DeBerry wrote:


> What regression was that? The difference between VDPAU and VA-API
> is only marginal for codec support.


The regression revolved around deinterlacing the content. If we had to 
deinterlace 1080i, for instance, the playback was very choppy and 
dropped frames.


> Well, how was the stack configured then? Pure software playback?


In 19.10, yes, the whole stack was told to use software playback and 
decoding.


> As long as you don't do any high resolution H264 decoding, the CPU
> in that package should be capable of decoding both MPEG2 and MPEG4
> with software decode.


That's part of the problem. We do have high resolution (1920x1080) 
mpeg2 at the sites where we are installed. We have no control over what 
content is available but have to support it.


Our older Gentoo-based setup with the old software stack worked fine, 
but the Ubuntu 16.04 stack does not play back the same content without 
switching to VDPAU, which introduces the GPU thread lockup issue. 
Ubuntu 18.04 looks to have the same VDPAU GPU lockup issue as well, and 
cannot use software playback/decoding successfully either.


On Mon, Dec 2, 2019 at 9:16 AM Christian König
<christian.koe...@amd.com> wrote:



The reason we had to switch to VDPAU with Ubuntu 16.04 is that we
saw a major regression with mpeg2 playback using va-api.

What regression was that? The difference between VDPAU and VA-API
is only marginal for codec support.


During our testing we put Ubuntu 19.10 on one of these boxes and
noticed that full software decoding has improved to the point that
neither VA-API nor VDPAU was required for VLC to render the mpeg2
and mpeg4 streams correctly.

Well how was the stack configured then? Pure software playback?

As long as you don't do any high resolution H264 decoding the CPU
in that package should be capable of decoding both MPEG2 and MPEG4
with software decode.

In general I would also try with mpv instead of vlc to rule out
player issues.

Regards,
Christian.

On 02.12.19 at 15:06, Will DeBerry wrote:


well that's the very first APU generation and unfortunately
nobody is working on that old hardware any more.


Agreed, definitely old hardware. Unfortunately we have 10,000 of
these things in production, and they had been playing
hardware-accelerated mpeg2 fine until we upgraded to Ubuntu 16.04
and the new mesa package. To be specific, on the previous version
of Linux on these systems we were using an older software stack
and video acceleration pipeline, and it worked perfectly, so we
know the hardware is capable.

*Old Software Stack:*

  * vlc 2.1.5
  * mesa 11.0.6
  * va-api hardware acceleration
  * libva info: VA-API version 0.38.1
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/va/drivers/r600_drv_video.so
libva info: Found init function __vaDriverInit_0_35
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.38 (libva 1.6.2)
vainfo: Driver version: Splitted-Desktop Systems VDPAU
backend for VA-API - 0.7.4
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG4Simple            : VAEntrypointVLD
      VAProfileMPEG4AdvancedSimple    : VAEntrypointVLD
      VAProfileH264Baseline           : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD

*New Software Stack:*

  * vlc 2.2.2
  * mesa 18.2.8-2~bpo9+1
  * vdpau hardware acceleration

The reason we had to switch to VDPAU with Ubuntu 16.04 is that we
saw a major regression with mpeg2 playback using va-api. It was
capable of playing back mpeg4 without any issues. Now that we
have switched to VDPAU, however, we are seeing this GPU thread
lockup bug, which causes X and other GUI-related programs to

Re: [Mesa-dev] [PATCH] util/u_thread: don't restrict u_thread_get_time_nano() to __linux__

2019-12-02 Thread Marek Olšák
Reviewed-by: Marek Olšák 

Marek

On Sat, Nov 30, 2019 at 10:17 AM Jonathan Gray  wrote:

> pthread_getcpuclockid() and clock_gettime() are also available on at least
> OpenBSD, FreeBSD, NetBSD, DragonFly, Cygwin.
>
> Signed-off-by: Jonathan Gray 
> ---
>  src/util/u_thread.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/util/u_thread.h b/src/util/u_thread.h
> index 6fc923c10e6..461d30bdd12 100644
> --- a/src/util/u_thread.h
> +++ b/src/util/u_thread.h
> @@ -149,7 +149,7 @@ util_get_L3_for_pinned_thread(thrd_t thread, unsigned cores_per_L3)
>  static inline int64_t
>  u_thread_get_time_nano(thrd_t thread)
>  {
> -#if defined(__linux__) && defined(HAVE_PTHREAD)
> +#if defined(HAVE_PTHREAD)
> struct timespec ts;
> clockid_t cid;
>
> --
> 2.24.0
>
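
For context, here is a self-contained sketch of what the code path
guarded by this #if does: query the thread's CPU-time clock with
pthread_getcpuclockid() and read it with clock_gettime(). This uses
pthread_t directly rather than Mesa's thrd_t wrapper, and is an
illustration, not the exact Mesa code:

#include <pthread.h>
#include <stdint.h>
#include <time.h>

/* Minimal sketch of what u_thread_get_time_nano() does: return the CPU
 * time consumed by the given thread, in nanoseconds. */
static int64_t thread_cpu_time_nano(pthread_t thread)
{
   clockid_t cid;
   struct timespec ts;

   if (pthread_getcpuclockid(thread, &cid) != 0)
      return 0;
   if (clock_gettime(cid, &ts) != 0)
      return 0;
   return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
}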

Re: [Mesa-dev] [PATCH] util/futex: use futex syscall on OpenBSD

2019-12-02 Thread Marek Olšák
Pushed. Thanks!

Marek

On Sat, Nov 30, 2019 at 10:19 AM Jonathan Gray  wrote:

> Make use of the futex syscall added in OpenBSD 6.2.
>
> Signed-off-by: Jonathan Gray 
> ---
>  src/util/futex.h | 18 ++
>  1 file changed, 18 insertions(+)
>
> diff --git a/src/util/futex.h b/src/util/futex.h
> index 268af92882a..cf8dd0206c9 100644
> --- a/src/util/futex.h
> +++ b/src/util/futex.h
> @@ -85,6 +85,24 @@ static inline int futex_wait(uint32_t *addr, int32_t value, struct timespec *tim
> return _umtx_op(addr, UMTX_OP_WAIT_UINT, (uint32_t)value, uaddr, uaddr2) == -1 ? errno : 0;
>  }
>
> +#elif defined(__OpenBSD__)
> +
> +#include <sys/time.h>
> +#include <sys/futex.h>
> +
> +static inline int futex_wake(uint32_t *addr, int count)
> +{
> +   return futex(addr, FUTEX_WAKE, count, NULL, NULL);
> +}
> +
> +static inline int futex_wait(uint32_t *addr, int32_t value, const struct timespec *timeout)
> +{
> +   struct timespec tsrel, tsnow;
> +   clock_gettime(CLOCK_MONOTONIC, &tsnow);
> +   timespecsub(timeout, &tsnow, &tsrel);
> +   return futex(addr, FUTEX_WAIT, value, &tsrel, NULL);
> +}
> +
>  #endif
>
>  #endif /* UTIL_FUTEX_H */
> --
> 2.24.0
>
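
For the record, a minimal usage sketch of these helpers (assuming the
futex_wait()/futex_wake() helpers from src/util/futex.h are in scope;
the timeout is an absolute CLOCK_MONOTONIC time, which is why the
OpenBSD version above converts it to a relative one):

#include <limits.h>
#include <stdint.h>
#include <time.h>

/* Not part of the patch: one thread blocks until another flips 'flag'.
 * A real implementation would use atomics; plain accesses keep the
 * sketch short. */
static uint32_t flag = 0;

static void wait_for_flag(void)
{
   struct timespec deadline;

   while (flag == 0) {
      /* futex_wait() takes an absolute CLOCK_MONOTONIC deadline. */
      clock_gettime(CLOCK_MONOTONIC, &deadline);
      deadline.tv_sec += 1;
      futex_wait(&flag, 0, &deadline);  /* wakes early once flag != 0 */
   }
}

static void set_flag(void)
{
   flag = 1;
   futex_wake(&flag, INT_MAX);  /* wake all waiters */
}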

Re: [Mesa-dev] [PATCH] util/u_thread: don't restrict u_thread_get_time_nano() to __linux__

2019-12-02 Thread Marek Olšák
I've pushed this. Thanks!

Marek

On Mon, Dec 2, 2019 at 5:12 PM Marek Olšák  wrote:

> Reviewed-by: Marek Olšák 
>
> Marek
>
> On Sat, Nov 30, 2019 at 10:17 AM Jonathan Gray  wrote:
>
>> pthread_getcpuclockid() and clock_gettime() are also available on at least
>> OpenBSD, FreeBSD, NetBSD, DragonFly, Cygwin.
>>
>> Signed-off-by: Jonathan Gray 
>> ---
>>  src/util/u_thread.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/src/util/u_thread.h b/src/util/u_thread.h
>> index 6fc923c10e6..461d30bdd12 100644
>> --- a/src/util/u_thread.h
>> +++ b/src/util/u_thread.h
>> @@ -149,7 +149,7 @@ util_get_L3_for_pinned_thread(thrd_t thread, unsigned cores_per_L3)
>>  static inline int64_t
>>  u_thread_get_time_nano(thrd_t thread)
>>  {
>> -#if defined(__linux__) && defined(HAVE_PTHREAD)
>> +#if defined(HAVE_PTHREAD)
>> struct timespec ts;
>> clockid_t cid;
>>
>> --
>> 2.24.0
>>

[Mesa-dev] [PATCH] radeonsi: Add support for midstream bitrate change in encoder

2019-12-02 Thread Satyajit Sahu
Added support for changing the bitrate mid-stream, while encoding.

Signed-off-by: Satyajit Sahu 

diff --git a/src/gallium/drivers/radeon/radeon_vce.c b/src/gallium/drivers/radeon/radeon_vce.c
index 84d3c1e2fa4..7d7a2fa4eb3 100644
--- a/src/gallium/drivers/radeon/radeon_vce.c
+++ b/src/gallium/drivers/radeon/radeon_vce.c
@@ -268,7 +268,8 @@ static void rvce_begin_frame(struct pipe_video_codec *encoder,
       enc->pic.rate_ctrl.rate_ctrl_method != pic->rate_ctrl.rate_ctrl_method ||
       enc->pic.quant_i_frames != pic->quant_i_frames ||
       enc->pic.quant_p_frames != pic->quant_p_frames ||
-      enc->pic.quant_b_frames != pic->quant_b_frames;
+      enc->pic.quant_b_frames != pic->quant_b_frames ||
+      enc->pic.rate_ctrl.target_bitrate != pic->rate_ctrl.target_bitrate;
 
    enc->pic = *pic;
    si_get_pic_param(enc, pic);
diff --git a/src/gallium/drivers/radeon/radeon_vcn_enc.c b/src/gallium/drivers/radeon/radeon_vcn_enc.c
index aa9182f273b..c4fb9a7bd92 100644
--- a/src/gallium/drivers/radeon/radeon_vcn_enc.c
+++ b/src/gallium/drivers/radeon/radeon_vcn_enc.c
@@ -247,6 +247,17 @@ static void radeon_enc_begin_frame(struct pipe_video_codec *encoder,
 {
    struct radeon_encoder *enc = (struct radeon_encoder*)encoder;
    struct vl_video_buffer *vid_buf = (struct vl_video_buffer *)source;
+   bool need_rate_control = false;
+
+   if (u_reduce_video_profile(enc->base.profile) == PIPE_VIDEO_FORMAT_MPEG4_AVC) {
+      struct pipe_h264_enc_picture_desc *pic = (struct pipe_h264_enc_picture_desc *)picture;
+      need_rate_control =
+         enc->enc_pic.rc_layer_init.target_bit_rate != pic->rate_ctrl.target_bitrate;
+   } else if (u_reduce_video_profile(picture->profile) == PIPE_VIDEO_FORMAT_HEVC) {
+      struct pipe_h265_enc_picture_desc *pic = (struct pipe_h265_enc_picture_desc *)picture;
+      need_rate_control =
+         enc->enc_pic.rc_layer_init.target_bit_rate != pic->rc.target_bitrate;
+   }
 
    radeon_vcn_enc_get_param(enc, picture);
 
@@ -266,6 +277,10 @@ static void radeon_enc_begin_frame(struct pipe_video_codec *encoder,
       flush(enc);
       si_vid_destroy_buffer(&fb);
    }
+   if (need_rate_control) {
+      enc->begin(enc, picture);
+      flush(enc);
+   }
 }
 
 static void radeon_enc_encode_bitstream(struct pipe_video_codec *encoder,
-- 
2.17.1
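
The control flow this patch adds is easiest to see in isolation. Below
is a self-contained sketch of the begin-frame logic; the struct and
helper functions are simplified stand-ins for the radeon encoder state,
not the real Gallium types:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the radeon encoder state. */
struct enc_state {
   uint32_t programmed_bitrate;  /* what the hardware was last given */
};

static void program_rate_control(struct enc_state *enc, uint32_t bitrate)
{
   printf("re-sending rate control: %u bit/s\n", bitrate);
   enc->programmed_bitrate = bitrate;
}

/* Mirrors the patch: compare the previously programmed target bitrate
 * with the one in the incoming picture description, and re-run the
 * rate-control setup only when they differ. */
static void begin_frame(struct enc_state *enc, uint32_t requested_bitrate)
{
   bool need_rate_control = enc->programmed_bitrate != requested_bitrate;

   /* ... normal per-frame work ... */

   if (need_rate_control)
      program_rate_control(enc, requested_bitrate);
}

int main(void)
{
   struct enc_state enc = { .programmed_bitrate = 4000000 };

   begin_frame(&enc, 4000000);  /* unchanged: nothing re-sent */
   begin_frame(&enc, 1500000);  /* midstream drop: rate control re-sent */
   return 0;
}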
