Hi,

I'm trying to integrate Ansible into kube-deploy to get a faster and 
smoother deployment. I've run into a strange issue: when I run the deploy 
script from an Ansible playbook, I can't mount GlusterFS volumes in pods. 
But if I run worker.sh/master.sh locally as root (or with sudo), everything 
works fine.

Here are the tasks from the role for a master node. In theory, everything 
should run under the root user with the correct environment variables.


- name: Download the kube-deploy files
  git:
    repo: https://github.com/kubernetes/kube-deploy.git
    dest: /opt/kube-deploy
    version: master

- name: Run the master deploy script
  shell: echo Y | ./master.sh
  args:
    chdir: /opt/kube-deploy/docker-multinode/
  environment:
    USE_CNI: true
    USE_CONTAINERIZED: true
    K8S_VERSION: v1.4.0-alpha.2
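
One thing I'm not sure about here: Ansible templates the environment values, 
so the unquoted YAML booleans above may reach the script as the Python 
strings "True"/"False" rather than "true"/"false". If master.sh does a 
lowercase string comparison, that alone would make the Ansible run differ 
from a manual USE_CNI=true ./master.sh. Quoting the values would pin them 
down:

  environment:
    USE_CNI: "true"              # quoted so the script sees "true", not "True"
    USE_CONTAINERIZED: "true"
    K8S_VERSION: v1.4.0-alpha.2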

And here is the playbook that applies the role:


---
- hosts: k8-master
  become: yes
  become_method: sudo
  gather_facts: yes
  roles:
    #- common
    - master
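
To rule out a user or environment difference between the Ansible run and the 
manual one, a throwaway task like the following (the output path is just an 
example) dumps the execution context so it can be diffed against what an 
interactive root shell sees:

- name: Dump execution context to compare with a manual root run
  shell: "{ id; echo '---'; env | sort; } > /tmp/ansible-context.txt"
  environment:
    USE_CNI: "true"
    USE_CONTAINERIZED: "true"
    K8S_VERSION: v1.4.0-alpha.2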

As an aside, I've tried using the command module instead of shell, removing 
the chdir, removing the "echo Y", and removing the /var/lib/kubelet folder 
directly, all without success. Whenever a master/worker is started from 
Ansible, it cannot mount a GlusterFS volume target. It seems related to how 
USE_CONTAINERIZED mounts the root volume? I'm using that flag because a 
GlusterFS volume target fails without it.
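
If it is mount-related, the first thing I'd check is mount propagation: as 
far as I understand, the containerized kubelet needs /var/lib/kubelet to be 
a shared mount so that volume mounts performed inside the kubelet container 
(GlusterFS included) propagate back to the host. A quick sketch of a check, 
assuming the nodes have a findmnt new enough to know the PROPAGATION column:

- name: Check mount propagation on /var/lib/kubelet
  shell: findmnt -o TARGET,PROPAGATION /var/lib/kubelet
  register: kubelet_mount
  changed_when: false
  failed_when: false   # not a mountpoint until the deploy script bind-mounts it

- name: Show the propagation flags (should include "shared")
  debug:
    var: kubelet_mount.stdout_lines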
