It is not a sustainable practice to examine the player timeline information and draw conclusions about the audio session timeline. Doing so would introduce additional bindings between the audio subsystem and player objects, and we need to drive the architecture toward making the two as independent as possible. It is also not a very accurate approach.

The audio subsystem comprises audio session, audio player, and audio stream objects. In its current form, this subsystem is not aware of duration. Audio players can be created, resumed, paused, and destroyed on the fly; the audio subsystem simply mixes all streams into one final stream to hand over to the audio device.

This new requirement calls for the concept of limiting the audio session timeline. To avoid problematic bindings between the player and the audio subsystem, a new interface needs to be created:
IHXTimelineLimit:
        HX_RESULT SetLimit(UINT32 ulLimitInMs);
        HX_RESULT GetLimit(UINT32& ulLimitInMs);
        HX_RESULT ResetLimit();

This interface should be exposed by the audio player object (as an optional interface). When the duration is computed or updated (in HXPlayer::AdjustPresentationTime(void)), the new presentation time is to be set on the audio player via SetLimit().
No limit is to be set for live or otherwise open-ended streams.

The audio player will record the limit in SetLimit() and call SetLimit() on the audio session upon transition to the playing state, or immediately if it is already in the playing state. The time information will need to be converted to the audio session timeline: ulLimitInMs - m_ulAPstartTime + m_ulADresumeTime
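A minimal sketch of that conversion (the function name is hypothetical; m_ulAPstartTime and m_ulADresumeTime are the member variables referenced above, passed in here as plain parameters):

```cpp
#include <assert.h>

typedef unsigned int UINT32;

// Illustrative helper: shifts a presentation-time limit onto the audio
// session timeline by subtracting the player's start offset and adding
// the device resume time, as described in the proposal above.
UINT32 ConvertLimitToSessionTime(UINT32 ulLimitInMs,
                                 UINT32 ulAPStartTime,
                                 UINT32 ulADResumeTime)
{
    return ulLimitInMs - ulAPStartTime + ulADResumeTime;
}
```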

When it receives a SetLimit() call, the audio session will query GetLimit() from all audio players and set the limit to the largest value discovered. The limit is not to be set if any of the GetLimit() calls returns HXR_NO_DATA, indicating that no limit has been set for that audio player. To limit the scope of changes, the limit should also be set only when the audio session is in power save mode.

The audio session should call SetLimit() on the audio device. If the audio device does not support the IHXTimelineLimit interface, the limit should fail to set and not be used.

If a limit is set, the audio session should perform mixing only up to the block that matches or exceeds the indicated limit.
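For illustration, assuming fixed-duration mix blocks, the number of blocks still to be mixed before the limit is reached could be computed as follows (all names here are hypothetical):

```cpp
#include <assert.h>

typedef unsigned int UINT32;

// Illustrative cutoff: how many mix blocks remain before the session
// timeline reaches the limit. ulBlockMs is the duration of one mixed
// block (the session granularity).
UINT32 BlocksUntilLimit(UINT32 ulCurTimeMs, UINT32 ulLimitMs, UINT32 ulBlockMs)
{
    if (ulCurTimeMs >= ulLimitMs)
    {
        return 0; // already at or past the limit: stop mixing
    }
    // Round up so the final block matches or just exceeds the limit.
    return (ulLimitMs - ulCurTimeMs + ulBlockMs - 1) / ulBlockMs;
}
```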

This power save topic is of interest to a number of members in the community.
It would be great to discuss other aspects of this functionality/design as well:

-> Starting and stopping power save mode.
E.g.
IHXPowerSave:
        StartPowerSave
        EndPowerSave
        IsInPowerSave
        CanStartPowerSave
        GetWakeUpInterval
        SetWakeUpInterval

The ClientEngine and AudioSession should implement the above API.

-> The player's Process Idle operation under power save:
        - in the playing state
        - in other states

-> Preroll modification in response to power save entry/exit

-> Audio session mixer operation in power save

-> Modifications for event time stamp handling in file source

-> How to determine when power save can be started

-> Turning on/off power save as new player/sources join and leave the presentation

-> Leaving power save on completion or clip transition

-> Playlist transitions


Thanks,
Milko


At 08:44 AM 10/14/2008, [EMAIL PROTECTED] wrote:
Eric & All,
  Any comments on this ?

The audio device could certainly do that - for each player, iterate
through all the streams and find out the ending time for all streams. It
would have to look at the "Delay" and "Duration"
properties of each stream.

<<Rajesh>> I could evaluate this option. But this approach does not stop
us from sending a lot of silence data to the device. If the decision is
not made at the AudioSession level, then (based on a high pushdown value)
it could send around 600 blocks of silence (for 30 sec of audio pushdown).


Thanks,
Rajesh.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of ext
[EMAIL PROTECTED]
Sent: Wednesday, October 08, 2008 11:34 AM
To: [EMAIL PROTECTED]; [email protected];
[EMAIL PROTECTED]
Subject: [Client-dev] RE: [Audio-dev] Audio pushdown & Stream End

Eric,
   Comments inlined.

Thanks,
Rajesh.


-----Original Message-----
From: ext Eric Hyche [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 07, 2008 8:12 AM
To: Rathinasamy Rajesh (Nokia-D-MSW/Dallas);
[email protected]; [EMAIL PROTECTED]
Subject: RE: [Audio-dev] Audio pushdown & Stream End

>What if there are more than one stream, should audio device iter
>through to find the max of all duration of stream ?
>

The audio device could certainly do that - for each player, iterate
through all the streams and find out the ending time for all streams. It
would have to look at the "Delay" and "Duration"
properties of each stream.

<<Rajesh>> I could evaluate this option. But this approach does not stop
us from sending a lot of silence data to the device. If the decision is
not made at the AudioSession level, then (based on a high pushdown value)
it could send around 600 blocks of silence (for 30 sec of audio pushdown).

Would playlists ever want to be used in this scenario? If so, then the
audio device would need to take groups into account.
Each "group" is a separate timeline which are sequenced one after the
other. For the case of a simple sequential playlist, think of each clip
in the playlist as being in a different group.

The scheduler would certainly need to be activated at the end of each
group, so that the player could switch groups. So I guess the audio
device would still only need to know the duration of each group.

Eric

=======================================
Eric Hyche ([EMAIL PROTECTED])
Principal Engineer
RealNetworks, Inc.


>-----Original Message-----
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED]
>Sent: Monday, October 06, 2008 11:20 AM
>To: [EMAIL PROTECTED]; [email protected];
>[EMAIL PROTECTED]
>Subject: RE: [Audio-dev] Audio pushdown & Stream End
>
>Hi Eric,
>  If we are looking at the specific info from decoder, then we might
>end up changing in all decoders for which this support is required.
>
>What if there are more than one stream, should audio device iter
>through to find the max of all duration of stream ?
>
>Let me also try to find whether this support is required only for DSP
>solution. If so, it would be easy to get that information as the
>MDFDecoder has a reference to the device.
>But I would prefer to find a solution that would fit for both ARM
>codecs and DSP codecs.
>
>Thanks for your time and comments. Please let me know if you could
>think of any other options.
>
>Thanks,
>Rajesh.
>
>
>-----Original Message-----
>From: ext Eric Hyche [mailto:[EMAIL PROTECTED]
>Sent: Friday, October 03, 2008 11:49 AM
>To: Rathinasamy Rajesh (Nokia-D-MSW/Dallas);
>[email protected]; [EMAIL PROTECTED]
>Subject: RE: [Audio-dev] Audio pushdown & Stream End
>
>I would think that your approach could work, but perhaps you might be
>able to reliably figure out the end of the audio data.
>
>You asked before if the audio session knows when the end of stream is.
>I said the audio session doesn't really know. However, the audio
>decoder *does*. In the IHXAudioDecoder::Decode() call, the last
>argument is a bEOF argument, which says that the input encoded buffer
>is the last one available. Therefore the decoder should be able to tell
>the timestamp of the last frame of decoded audio data it passes back
>to the renderer.
>
>In your scenario, is there any way for the audio device to retrieve
>this information from the decoder and then wait for that timestamp to
>be written to it?
>
>Eric
>
>=======================================
>Eric Hyche ([EMAIL PROTECTED])
>Principal Engineer
>RealNetworks, Inc.
>
>
>>-----Original Message-----
>>From: [EMAIL PROTECTED]
>>[mailto:[EMAIL PROTECTED] On Behalf Of
>>[EMAIL PROTECTED]
>>Sent: Wednesday, October 01, 2008 12:18 PM
>>To: [email protected]; [EMAIL PROTECTED]
>>Subject: [Audio-dev] Audio pushdown & Stream End
>>
>>Hello All,
>>  I'm trying to evaluate an idea of pausing & resuming scheduler for
>>local playback for CPU optimization.
>>
>>As a part of this exercise, the AudioPushDown is increased (say to 1 or
>>2 minutes). The scheduler will be paused and resumed based on the high
>>and low water marks of unplayed data in the device. AudioSession
>>(client\audiosvc) does not detect end of stream. This results in
>>sending a lot of silence data to the device. The amount of data pushed
>>to the device is based on the audio pushdown (block count). If the audio
>>pushdown is huge (say 1 min), and playback is close to the duration (say
>>50 msec to the end of playback), the audio session pushes 1 min of data
>>to the device (mostly just silence).
>>
>>Given the problem, shouldn't the audio pushdown be dependent on the
>>presentation duration? Assumption: the presentation duration is the
>>greatest among all stream playbacks.
>>
>>NumBlocksReqdToComplete = (PresentationDuration - CurrTime) / Granularity
>>
>>if (uNumBlocksPushed > NumBlocksReqdToComplete) {
>>    uNumBlocksPushed = NumBlocksReqdToComplete;
>>
>>    // Add a tolerance (say 2 blocks, around 100 msecs).
>>    uNumBlocksPushed += Tolerance;
>>}
>>
>>We will not see this problem most of the time if our audio pushdown
>>values are small.
>>
>>
>>Thanks,
>>Rajesh.



_______________________________________________
Client-dev mailing list
[EMAIL PROTECTED]
http://lists.helixcommunity.org/mailman/listinfo/client-dev

_______________________________________________
Audio-dev mailing list
[email protected]
http://lists.helixcommunity.org/mailman/listinfo/audio-dev
