Hi Eric,
Thanks for your reply, it's really clear!

Unfortunately that will be difficult for us, since the jobs we submit
are Spark jobs. I don't really see how to combine the
DockerLinuxContainerRuntime with a specific entrypoint that ends up
running a Spark job on top of that container runtime, let alone make
all of that production-ready and stable.

Again thanks for your reply,
Michel


On Tue, Mar 31, 2020 at 00:40, Eric Badger <[email protected]>
wrote:

> The launch_container.sh script is created on the fly by the NodeManager
> for each task that is run. So you would need to change the NodeManager code
> to alter what goes into the launch_container.sh script.
>
> The control flow is:
> Nodemanager -> container-executor -> launch_container.sh
>
> The NodeManager launches the container-executor, which is a setuid binary
> so that it can switch to the user that the process should run as.
> Then the container-executor execs the launch_container.sh script, waits
> for the task to finish, and reads its stdout/stderr. Once the task is
> done, the container-executor handles cleanup as well.
>
> The only way I know of to inject a pre-launch script would be to use the
> DockerLinuxContainerRuntime and create an image with an Entrypoint. You
> could set that Entrypoint as the script that you want to run and then the
> Entrypoint script could end by exec'ing into launch_container.sh.
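A minimal sketch of what such an entrypoint might look like. This is an assumption, not a tested YARN setup: the file paths are placeholders, and the `echo` at the end stands in for the real launch_container.sh invocation that the container runtime would pass in as arguments.

```shell
# Write a hypothetical pre-launch entrypoint to a temp file (in a real
# image this would be baked in, e.g. via a Dockerfile ENTRYPOINT).
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
# Site-specific setup runs here, before YARN's generated script.
echo "pre-launch setup done"
# exec replaces this shell with the command passed in (ultimately
# something like: bash /path/to/launch_container.sh), so the exit
# code and signals reach the container-executor unchanged.
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# Simulate what the runtime would do: the real argument list would be
# the launch_container.sh command; an echo stands in for it here.
/tmp/entrypoint.sh echo "container task ran"
```

The key detail is the `exec`: because the entrypoint replaces itself with the launched command rather than forking it, the container-executor still sees the task's exit status directly.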
>
> Hope this helps,
>
> Eric
>
