Hello all,

I'm having trouble getting GPU acceleration working when running cuttlefish 
virtual devices in a container. 

I followed the build instructions from:
    
https://github.com/google/android-cuttlefish/blob/main/BUILDING.md#building-the-docker-image

After sourcing setup.sh, I created the cvd container and then ran:

ssh vsoc-01@${ip_cuttlefish} -- './download-aosp -A -C -a $(uname -m)'

When I try cvd_start_cuttlefish with either --gpu_mode=gfxstream or 
--gpu_mode=drm_virgl, the launch fails.

Output log when gpu_mode=auto and boot completes:
    https://hastebin.com/monebixuci

Output log when gpu_mode=drm_virgl:
    https://hastebin.com/olayapayuz

Output log when gpu_mode=gfxstream:
    https://hastebin.com/okobogotet

I can launch cvds with --gpu_mode=gfxstream without issue when doing it 
outside of a container, as per:

https://android.googlesource.com/device/google/cuttlefish/#cuttlefish-getting-started

I'm not sure whether it's related to the present problem, but I had to 
make some changes to get my GPU recognized when running the build.sh 
script. I kept getting the message "Building without physical-GPU support". 
The problem was (I think) that ref_oem wasn't getting defined (
https://github.com/google/android-cuttlefish/blob/8dd9f222eca1aebdebd03213b82568f1867a1cbe/build.sh#L53)
 
because the function is_debian_distro doesn't exist. The closest I could 
find was is_debian_series, which is defined in utils.sh. I changed the call 
in build.sh to is_debian_series, but that was still failing, so I changed an 
expression within the is_debian_series function, and that cleared it up. 
Specifically, I changed:

Original line: (
https://github.com/google/android-cuttlefish/blob/8dd9f222eca1aebdebd03213b82568f1867a1cbe/utils.sh#L228
)

    if ls -1 /etc/*release | egrep -i "(buntu|mint)" > /dev/null 2>&1; then 

Changed to:

    if cat /etc/os-release | egrep -i "Ubuntu" > /dev/null 2>&1; then
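For what it's worth, here's a self-contained sketch of the direction I went 
in: checking the ID/ID_LIKE fields of /etc/os-release rather than matching 
filenames. The function body and the sample file below are my own 
illustration, not the actual utils.sh code:

```shell
# Illustrative distro check: parse ID/ID_LIKE in an os-release file
# instead of matching /etc/*release filenames. The function name echoes
# utils.sh, but this body is a sketch, not the upstream implementation.
is_debian_series() {
  grep -Eqi '^(ID|ID_LIKE)=.*(debian|ubuntu|mint)' "${1:-/etc/os-release}"
}

# Demonstration against a sample file (a real call would pass no argument):
printf 'NAME="Ubuntu"\nID=ubuntu\nID_LIKE=debian\n' > /tmp/os-release.sample
if is_debian_series /tmp/os-release.sample; then
  echo "debian series"
fi
```

This keys off structured fields rather than a hard-coded "Ubuntu" string, so 
it should also pass on Debian itself (relevant to the footnote below about 
the cloud machine).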

After making these changes, the GPU identification and dependency 
walking/building process proceeded as described in: 
https://github.com/google/android-cuttlefish/blob/main/BUILDING.md

I've tried this both on my local x86-64 system with an NVIDIA GPU and on 
a gcloud x86-64 instance with an NVIDIA GPU*, and I'm getting the same 
error message pertaining to crosvm_control.sock:

    [ERROR:external/crosvm/src/main.rs:2028] invalid value 
"/home/vsoc-01/cuttlefish_runtime.1/internal/crosvm_control.sock": this 
socket path already exists

Deleting the container directory /home/vsoc-01/cuttlefish_runtime.1 doesn't 
fix the problem (I had seen that mentioned elsewhere as a possible 
solution).
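
To be concrete about that attempt, this is roughly what the deletion looked 
like (a sketch; the runtime path is copied from the error log, and the 
stand-in file below just simulates the leftover socket):

```shell
# Rough sketch of the cleanup attempted between launches. The runtime
# path is taken from the crosvm error above; the ": >" line creates a
# stand-in file to simulate the stale control socket for illustration.
runtime_dir="$HOME/cuttlefish_runtime.1"
mkdir -p "$runtime_dir/internal"
: > "$runtime_dir/internal/crosvm_control.sock"

# Remove the runtime directory (and with it the stale control socket):
rm -rf "$runtime_dir"
[ ! -e "$runtime_dir" ] && echo "runtime directory removed"
```

As noted, this didn't resolve the error in my case.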


Any suggestions?

Thanks,

Mark

* The second change mentioned above was only made on my local machine 
(Ubuntu 20.04), not on the cloud machine (Debian 10). The first change 
was still required, however.

-- 
You received this message because you are subscribed to the "Android Building" 
mailing list.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/android-building?hl=en

To view this discussion on the web visit 
https://groups.google.com/d/msgid/android-building/b1fd762b-199a-4b0e-bad6-19998e8ef5d4n%40googlegroups.com.
