Hello libav world,
I am currently pulling pixels off of the GPU (specifically via the Windows DXGI
Desktop Duplication API) and then H264-encoding them with h264_nvenc for
hardware acceleration. At the moment, that involves downloading the pixels
into CPU memory (in an AVFrame), followed by an immediate re-upload of them
onto the GPU for NVENC. *My question is: is it possible to cut out this
step and go straight from GPU/D3D11 memory to NVENC encoding?*
I noticed that there is a supported pixel format, AV_PIX_FMT_D3D11, which
takes a pointer to an ID3D11Texture2D in the data[0] slot (and the texture
array index in data[1]). However, configuring my code to use this pixel
format (including all the necessary hwdevice config, hwframe context and
allocation, etc.) still results in an error:
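For reference, a simplified sketch of the hwdevice/hwframes setup I mean
(error checking omitted; `width`, `height`, and `enc_ctx` stand in for my
actual values, and in my real code the device should be the same
ID3D11Device the duplication session uses; I picked NV12 as sw_format here,
which would imply a color conversion from the BGRA desktop texture):

```c
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_d3d11va.h>

AVBufferRef *device_ref = NULL, *frames_ref = NULL;

/* Create a D3D11VA device context (to truly share the duplication
 * device, one would use av_hwdevice_ctx_alloc and set hwctx->device). */
av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_D3D11VA, NULL, NULL, 0);

/* Allocate and configure a frames context tied to that device. */
frames_ref = av_hwframe_ctx_alloc(device_ref);
AVHWFramesContext *frames = (AVHWFramesContext *) frames_ref->data;
frames->format            = AV_PIX_FMT_D3D11; /* what the encoder sees   */
frames->sw_format         = AV_PIX_FMT_NV12;  /* underlying texture fmt  */
frames->width             = width;
frames->height            = height;
frames->initial_pool_size = 4;
av_hwframe_ctx_init(frames_ref);

/* Hand the frames context to the encoder before avcodec_open2(). */
enc_ctx->pix_fmt       = AV_PIX_FMT_D3D11;
enc_ctx->hw_frames_ctx = av_buffer_ref(frames_ref);
```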
    int write_gpu_video_frame(ID3D11Texture2D *gpuTex, AVFormatContext *oc,
                              OutputStream *ost) {
        AVFrame *hw_frame = ost->hw_frame;

        printf("gpuTex address = %p\n", (void *) gpuTex);

        hw_frame->data[0] = (uint8_t *) gpuTex;        /* the texture itself  */
        hw_frame->data[1] = (uint8_t *) (intptr_t) 0;  /* texture array index */
        hw_frame->pts     = ost->next_pts++;

        return write_frame(oc, ost->enc, ost->st, hw_frame);
        /* write_frame is identical to the sample code in the ffmpeg repo */
    }
Resulting error:

> gpuTex address = 0x4582f6d0
> [h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
> [h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
> Error sending a frame to the encoder: Unknown error occurred
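One approach I've been considering (but have not yet verified) is to stop
pointing data[0] at the raw duplication texture and instead copy it into a
frame allocated from the encoder's hwframes pool, on the theory that NVENC
only accepts textures created by the frames context. A rough sketch,
assuming the two textures have matching formats (in practice the BGRA
desktop texture would likely need a conversion pass to the pool's
sw_format first), with `d3d11_ctx` standing in for the shared device's
ID3D11DeviceContext:

```c
/* Get a fresh GPU frame from the encoder's hwframes pool. */
AVFrame *hw_frame = av_frame_alloc();
av_hwframe_get_buffer(enc_ctx->hw_frames_ctx, hw_frame, 0);

ID3D11Texture2D *dst = (ID3D11Texture2D *) hw_frame->data[0];
UINT dst_index       = (UINT) (intptr_t) hw_frame->data[1];

/* GPU-to-GPU copy of the duplicated desktop texture into the pool frame. */
d3d11_ctx->lpVtbl->CopySubresourceRegion(d3d11_ctx,
        (ID3D11Resource *) dst, dst_index, 0, 0, 0,
        (ID3D11Resource *) gpuTex, 0, NULL);

hw_frame->pts = ost->next_pts++;
write_frame(oc, ost->enc, ost->st, hw_frame);
av_frame_free(&hw_frame);
```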
I can post the other setup code if it's helpful to diagnose the problem.
Any immediate intuitions about this type of approach?
Best regards,
- Nathan