On 11/28/17, Ronak <[email protected]> wrote:
> Hey,
>
> Yes, I have been going through the examples, and I am getting an EAGAIN
> error. I'd like to find out why and what I have to do.
>
> The samples are not really clear about what I have to do.

EAGAIN errors are OK; that just means you need to provide more data
before receiving any.
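Concretely, that is the feed/drain pattern the FFmpeg filtering examples use:
write a frame into the abuffer source, then keep pulling from the abuffersink
until it returns AVERROR(EAGAIN). A minimal sketch in plain C, assuming
buffer_ctx and sink_ctx come from a graph that has already been configured
(the function and variable names here are placeholders, not taken from the
code quoted below):

    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>
    #include <libavutil/frame.h>

    /* Push one input frame, then drain whatever the graph can produce so far.
     * Returns 0 on success or a negative AVERROR code. */
    static int feed_and_drain(AVFilterContext *buffer_ctx,
                              AVFilterContext *sink_ctx,
                              const AVFrame *in, AVFrame *out)
    {
        /* av_buffersrc_write_frame() returns 0 on success, negative on error. */
        int ret = av_buffersrc_write_frame(buffer_ctx, in);
        if (ret < 0)
            return ret;

        /* Drain: AVERROR(EAGAIN) only means "feed more input first". */
        while ((ret = av_buffersink_get_frame(sink_ctx, out)) >= 0) {
            /* ... consume out->extended_data[ch] here ... */
            av_frame_unref(out);
        }
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;
        return ret;
    }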
>
> Ronak
>
>> On Nov 28, 2017, at 3:21 PM, Paul B Mahol <[email protected]> wrote:
>>
>> On 11/28/17, Ronak <[email protected]> wrote:
>>> Hi Paul,
>>>
>>> Thanks for that, that fixed the problem. Now, I'm getting -35 errors
>>> when I try to read the audio out of the buffer sink:
>>>
>>> Why would that happen?
>>>
>>> - (void)allocateRenderResourcesAudioFormat:(AVAudioFormat * _Nonnull)format
>>>                                   capacity:(AVAudioFrameCount __unused)frameCapacity {
>>>
>>>     NSString *bufferArgs = [[NSString alloc]
>>>         initWithFormat:@"sample_rate=%f:channels=%d:sample_fmt=%s:channel_layout=%d",
>>>         format.sampleRate, format.channelCount,
>>>         av_get_sample_fmt_name(AV_SAMPLE_FMT_FLTP), AV_CH_LAYOUT_STEREO];
>>>
>>>     avfilter_graph_create_filter(&_bufferContext,
>>>         avfilter_get_by_name("abuffer"), "buffer_context",
>>>         bufferArgs.UTF8String, NULL, self.filterGraph);
>>>     avfilter_graph_create_filter(&_bufferSinkContext,
>>>         avfilter_get_by_name("abuffersink"), "buffer_sink",
>>>         NULL, NULL, self.filterGraph);
>>>
>>>     avfilter_graph_create_filter(&_bassFilterContext,
>>>         avfilter_get_by_name("bass"), "bass",
>>>         "gain=0:frequency=100:width_type=o:width=1", NULL, self.filterGraph);
>>>     avfilter_graph_create_filter(&_trebleFilterContext,
>>>         avfilter_get_by_name("treble"), "treble",
>>>         "gain=0:frequency=10000:width_type=o:width=1", NULL, self.filterGraph);
>>>     avfilter_graph_create_filter(&_equalizerFilterContext,
>>>         avfilter_get_by_name("equalizer"), "equalizer",
>>>         "gain=0:frequency=250:width_type=o:width=1", NULL, self.filterGraph);
>>>
>>>     avfilter_link(_bufferContext, 0, _bassFilterContext, 0);
>>>     avfilter_link(_bassFilterContext, 0, _trebleFilterContext, 0);
>>>     avfilter_link(_trebleFilterContext, 0, _equalizerFilterContext, 0);
>>>     avfilter_link(_equalizerFilterContext, 0, _bufferSinkContext, 0);
>>>
>>>     avfilter_graph_config(self.filterGraph, NULL);
>>> }
>>>
>>> - (void)processBuffer:(AudioBufferList * _Nonnull)buffer
>>>          outputBuffer:(AudioBufferList * _Nonnull)outputBuffer {
>>>
>>>     AVFrame *audioFrame = av_frame_alloc();
>>>     audioFrame->channels = 2;
>>>     audioFrame->channel_layout = AV_CH_LAYOUT_STEREO;
>>>     audioFrame->sample_rate = 44100.000000;
>>>     audioFrame->format = AV_SAMPLE_FMT_FLTP;
>>>     audioFrame->nb_samples = buffer->mBuffers[0].mDataByteSize /
>>>         sizeof(Float32) * 44100;
>>>     audioFrame->pts = audioFrame->nb_samples;
>>>
>>>     audioFrame->extended_data[0] = buffer->mBuffers[0].mData;
>>>     audioFrame->extended_data[1] = buffer->mBuffers[1].mData;
>>>     audioFrame->linesize[0] = buffer->mBuffers[0].mDataByteSize;
>>>
>>>     int result = av_buffersrc_write_frame(self.bufferContext, audioFrame);
>>>     if (result > 0) {
>>>         AVFrame *returnedFrame = av_frame_alloc();
>>>         int result3 = av_buffersink_get_frame(self.bufferSinkContext,
>>>             returnedFrame);
>>>
>>>         NSString *string = [[NSString alloc] initWithCString:av_err2str(result3)
>>>             encoding:NSUTF8StringEncoding]; <---- This shows a -35 error code
>>>         NSLog(@"The string is %@", string);
>>>
>>>         outputBuffer->mBuffers[0].mData = returnedFrame->extended_data[0];
>>>         outputBuffer->mBuffers[1].mData = returnedFrame->extended_data[1];
>>>     } else {
>>>         NSString *string = [[NSString alloc] initWithCString:av_err2str(result)
>>>             encoding:NSUTF8StringEncoding];
>>>         NSLog(@"The string is %@", string);
>>>
>>>         outputBuffer->mBuffers[0].mData = buffer->mBuffers[0].mData;
>>>         outputBuffer->mBuffers[1].mData = buffer->mBuffers[1].mData;
>>>     }
>>> }
>>>
>>> Is there something wrong with the frame I'm passing into the call to
>>> av_buffersink_get_frame?
>>
>> Check that return value is not EOF or EAGAIN, there are simple
>> examples in ffmpeg source tree.
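For reference, errno EAGAIN is 35 on Darwin, so the -35 that av_err2str() is
printing above is AVERROR(EAGAIN): the sink has nothing to return yet, which
is not a failure. One of the examples in the source tree is
doc/examples/filtering_audio.c, which builds the same kind of abuffer ->
filters -> abuffersink graph and checks every return value. A rough sketch of
the two graph endpoints in that style, assuming a 44100 Hz stereo float-planar
stream (the function and variable names are placeholders; note that abuffer's
channel_layout option is parsed as a layout string, so spelling it "stereo",
or "0x3", is safer than printing the AV_CH_LAYOUT_STEREO mask with %d):

    #include <stdio.h>
    #include <libavutil/samplefmt.h>
    #include <libavfilter/avfilter.h>

    /* Create the abuffer source and abuffersink sink inside an existing,
     * not yet configured graph; returns 0 or a negative AVERROR code. */
    static int create_endpoints(AVFilterGraph *graph,
                                AVFilterContext **src, AVFilterContext **sink)
    {
        char args[256];
        int ret;

        snprintf(args, sizeof(args),
                 "sample_rate=%d:sample_fmt=%s:channel_layout=stereo",
                 44100, av_get_sample_fmt_name(AV_SAMPLE_FMT_FLTP));

        ret = avfilter_graph_create_filter(src, avfilter_get_by_name("abuffer"),
                                           "in", args, NULL, graph);
        if (ret < 0)
            return ret;

        return avfilter_graph_create_filter(sink,
                                            avfilter_get_by_name("abuffersink"),
                                            "out", NULL, NULL, graph);
    }

The bass/treble/equalizer filters, the avfilter_link() calls and the final
avfilter_graph_config() deserve the same treatment: a bad argument string
shows up there rather than later at the sink.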
>>>
>>> This is just something simple I'm trying to get up and running, before I
>>> write production level code.
>>>
>>> Thanks for the help!
>>>
>>> Ronak
>>>
>>>> On Nov 28, 2017, at 11:58 AM, Paul B Mahol <[email protected]> wrote:
>>>>
>>>> On 11/28/17, Ronak <[email protected]> wrote:
>>>>> I managed to trace this down to av_frame_get_buffer returning -22.
>>>>>
>>>>> Here's the code that I tried:
>>>>>
>>>>>     AVFrame *audioFrame = av_frame_alloc();
>>>>>     audioFrame->channels = 2;
>>>>>     audioFrame->channel_layout = av_get_default_channel_layout(2);
>>>>>     audioFrame->sample_rate = 44100;
>>>>>     audioFrame->nb_samples = buffer->mBuffers[0].mDataByteSize /
>>>>>         sizeof(Float32) * 44100;
>>>>>     audioFrame->pts = audioFrame->nb_samples;
>>>>>     av_frame_get_buffer(audioFrame, 0); <--- returns -22
>>>>
>>>> You never set the sample format.
>>>>
>>>>>     audioFrame->extended_data[0] = buffer->mBuffers[0].mData;
>>>>>     audioFrame->extended_data[1] = buffer->mBuffers[1].mData;
>>>>>     audioFrame->linesize[0] = buffer->mBuffers[0].mDataByteSize;
>>>>>
>>>>>     AVFrame *otherFrame = av_frame_alloc();
>>>>>     int result2 = av_frame_ref(otherFrame, audioFrame); <--- returns -22
>>>>>
>>>>>     int result = av_buffersrc_write_frame(self.bufferContext, audioFrame);
>>>>>
>>>>> Why would av_frame_get_buffer return -22? Am I not supposed to call it?
>>>>> What about write frame?
>>>>>
>>>>>> On Nov 27, 2017, at 7:19 PM, Ronak Patel <[email protected]> wrote:
>>>>>>
>>>>>> Hi Paul,
>>>>>>
>>>>>> Do you mind pointing me to the relevant documentation?
>>>>>>
>>>>>> I tried setting up an AVFrame instance with the sample rate, channel
>>>>>> layout and data, but the calls to av_frame_ref are failing with -22
>>>>>> errors. I'm looking for any sample code that shows how to properly
>>>>>> initialize an AVFrame from an AudioBufferList.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Ronak
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>>> On Nov 26, 2017, at 2:17 PM, Paul B Mahol <[email protected]> wrote:
>>>>>>>
>>>>>>>> On 11/26/17, Ronak <[email protected]> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I'm trying to build a graphic equalizer using the ffmpeg library for
>>>>>>>> iOS, wrapping the AVFilter library in an AUAudioUnit.
>>>>>>>>
>>>>>>>> I'm having trouble figuring out how to convert an AudioBufferList's
>>>>>>>> data to an AVFilter and back. The input buffers are in stereo, so I'm
>>>>>>>> also unsure how to pass in both data arrays.
>>>>>>>>
>>>>>>>> Does anyone know how to do this?
>>>>>>>
>>>>>>> Have you read the already available documentation?
>>>>>>>
>>>>>>> AVFrame stores samples for a packed format in AVFrame->data[0], and
>>>>>>> for a planar format in AVFrame->extended_data[X], where X is the
>>>>>>> channel number.
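Tying that back to the AudioBufferList question: a rough, untested sketch of
how a stereo, non-interleaved Float32 AudioBufferList might be copied into a
planar-float AVFrame before handing it to av_buffersrc_write_frame(). It sets
the sample format, which is the missing field that makes av_frame_get_buffer()
fail with -22 (AVERROR(EINVAL)), and derives nb_samples from the buffer size
alone, without the extra "* 44100" factor used earlier in the thread. The
function name, the pts parameter and the fixed 44100 Hz rate are assumptions,
not code from this thread:

    #include <string.h>
    #include <AudioToolbox/AudioToolbox.h>     /* AudioBufferList, Float32 */
    #include <libavutil/channel_layout.h>
    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>
    #include <libavfilter/buffersrc.h>

    static int push_buffer_list(AVFilterContext *src_ctx,
                                const AudioBufferList *abl, int64_t pts)
    {
        AVFrame *frame = av_frame_alloc();
        int ret;

        if (!frame)
            return AVERROR(ENOMEM);

        frame->format         = AV_SAMPLE_FMT_FLTP;  /* must be set before get_buffer */
        frame->channel_layout = AV_CH_LAYOUT_STEREO;
        frame->channels       = 2;
        frame->sample_rate    = 44100;
        frame->nb_samples     = abl->mBuffers[0].mDataByteSize / sizeof(Float32);
        frame->pts            = pts;   /* e.g. a running sample counter */

        ret = av_frame_get_buffer(frame, 0);   /* allocates data/extended_data */
        if (ret < 0)
            goto end;

        /* Planar float: one plane per channel. */
        memcpy(frame->extended_data[0], abl->mBuffers[0].mData,
               abl->mBuffers[0].mDataByteSize);
        memcpy(frame->extended_data[1], abl->mBuffers[1].mData,
               abl->mBuffers[1].mDataByteSize);

        /* Returns 0 on success (not > 0) and keeps its own reference to the
         * data, so the local frame can be freed afterwards. */
        ret = av_buffersrc_write_frame(src_ctx, frame);

    end:
        av_frame_free(&frame);
        return ret;
    }

On the output side, copying the data returned by av_buffersink_get_frame()
into the output AudioBufferList and then calling av_frame_unref() avoids the
lifetime questions that come with pointing the output mData fields directly at
the frame's extended_data.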
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user
