Good morning,

Wietse Venema via Postfix-users <[email protected]> writes:
> Nico Schottelius via Postfix-users:
>> [Two-layer architecture: large IPv6-only Kubernetes clusters with
>> external connectivity, plus smaller dual-stack, transit-only,
>> Kubernetes clusters that gateway from/to external IPv4.]
>
>> So long story short, each mx instance will be a container, in total we
>> have planned 8 of them (4 edge nodes, 4 inside the clusters) and for
>> that we can use our home brewed containers, but I think that others would
>> also profit from official postfix containers that can just be
>> trustworthily downloaded and used.
>
> I may be missing something, but how would you customize a 'standard'
> Postfix container image for these widely-different use cases?
Very easily, by mounting a ConfigMap, a Secret or any other volume type
into it. While these are k8s specific, without k8s you would typically use
volume mounts from docker. Let me briefly describe the two scenarios:
a) k8s
You create a helm chart that carries configuration files,
potentially templating them using helm. helm reads a
"values.yaml" that steers the configuration of the chart, which
itself functions a bit like an "application definition", somewhat
similar to an API.
Inside the helm chart, ConfigMaps are generated, which are basically
files that can be mounted at arbitrary places inside the running
container. They can also be exported as environment variables, but
that usage has declined a bit recently.
If your configuration files contain secret information, you use
"Secrets", which add a small layer of convenience on top of
ConfigMaps to keep things a bit more secure.
If you need persistent storage of data, you use Persistent Volumes,
which are themselves created by Persistent Volume Claims. Usually these
are backed by something like NFS, Ceph, iSCSI or vendor specific
storage extensions.
The general interface in k8s is called "CSI".
If you do not like helm charts, a typical alternative is to generate
configurations using "kustomize", another templating/application
definition approach.
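To make the ConfigMap route concrete, here is a minimal hedged sketch; all
names (postfix-config, mx1.example.com, the image tag) are made up for
illustration and not an official layout:

```yaml
# Hypothetical sketch: a ConfigMap carrying main.cf, mounted over the
# container's /etc/postfix/main.cf at runtime. Names are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postfix-config
data:
  main.cf: |
    maillog_file = /dev/stdout
    myhostname = mx1.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: postfix
spec:
  containers:
    - name: postfix
      image: postfix:example        # hypothetical image name/tag
      volumeMounts:
        - name: config
          mountPath: /etc/postfix/main.cf
          subPath: main.cf
  volumes:
    - name: config
      configMap:
        name: postfix-config
```

Changing the configuration then means editing the ConfigMap (or the helm
values that render it), not rebuilding the image.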
b) plain docker / docker-compose
With docker you can use the --volume parameter, which is also exposed
in docker-compose. Volumes can be files or directories and behave
very similarly to standard (bind) mounts.
Assuming you'd want to run postfix+dovecot in docker compose, what
you'd do is:
- create a docker-compose.yml defining the containers and the volumes
- usually docker-compose uses a .env file that steers a shell script
  inside the container to set up the configuration
- the .env file as well as the shell script can reside outside the
  container
- the shell script can, but does not have to, be part of the image at
  build time
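The steps above could look roughly like this; a hedged sketch only, with
made-up service names, image tags and paths:

```yaml
# Hypothetical docker-compose.yml for postfix+dovecot; every name,
# image tag and path here is an example, not an official layout.
services:
  postfix:
    image: postfix:example           # hypothetical image
    env_file: .env                   # e.g. MYHOSTNAME=mx1.example.com
    volumes:
      - ./postfix/main.cf:/etc/postfix/main.cf:ro
      - ./entrypoint.sh:/entrypoint.sh:ro
      - maildata:/var/spool/postfix
  dovecot:
    image: dovecot:example           # hypothetical image
    volumes:
      - ./dovecot:/etc/dovecot:ro
volumes:
  maildata:
```

Both main.cf and the entrypoint script live next to the compose file on
the host, so the image itself stays generic.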
> If
> it involves another Dockerfile to add another layer and "RUN postconf"
> commands, why not prepend your preferred Dockerfile commands already?
It does not.
> FROM debian:12.7
> RUN apt-get update && apt-get install -y postfix && rm -rf
> /var/lib/apt/lists/*
> RUN postconf maillog_file=/dev/stdout
> RUN ...postconf commands for other settings...
Oh, no, don't do that, please!
Configurations are not supposed to be in the container image.
Practically speaking, there are two ways to solve this in the postfix
case (using k8s as an example; plain docker is the same, just more
primitive):
At runtime either:
a) copy in main.cf / master.cf from a ConfigMap
b) copy in a shell script that runs postconf based on some config
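Variant (b) could be a minimal entrypoint along these lines; the variable
names are invented for illustration, only postconf -e and "postfix
start-fg" are real Postfix commands:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: derive postfix settings from
# environment variables (e.g. injected from a ConfigMap or a .env
# file) and then start postfix in the foreground.
set -eu

# MYHOSTNAME and RELAYHOST are example variables, not a standard.
[ -n "${MYHOSTNAME:-}" ] && postconf -e "myhostname = ${MYHOSTNAME}"
[ -n "${RELAYHOST:-}" ]  && postconf -e "relayhost = ${RELAYHOST}"

# Log to stdout so the container runtime collects the mail log.
postconf -e "maillog_file = /dev/stdout"

exec postfix start-fg
```

The image stays configuration-free; everything specific arrives at
container start.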
> One reason to NOT distribute Postfix binaries is to avoid delays
> when a security bug needs to be fixed. When binary packages are
> built by distro maintainers, the workload is distributed, instead
> of making the Postfix project a bottleneck.
That makes a lot of sense and should not change.
Container image maintainers usually follow one of two approaches:
- a) Some build containers directly from *their* source, only using the
distribution inside the container as a help to build their own binaries.
advantages:
- the latest binary is available right away
- container image can be stripped down to contain your code only
(go binaries are often statically compiled) without an OS
disadvantage:
- it's like a ./configure && make && make install approach on an OS
- b) Some use the actual operating system and just run the usual apk
add/apt-get install
advantages:
- reusing primitives of the contained OS
- reusing other's work
disadvantage:
- requires waiting for others to update the package list before you
can publish your own container image
Personally I prefer approach (b), as it behaves consistently with a
normal OS and usually includes a /bin/sh, which in turn allows debugging
inside the container.
Assuming we'd go with approach (b), the extra workload for the postfix
project would be:
- Rerun a docker build & docker push as soon as the underlying OSes
update their package repositories
- Update the Dockerfile once the underlying operating system updates
its image (i.e. the debian based postfix image could have been based
on 12.7 and the included postfix version was 3.7.11. Now Debian bumps
to 12.8 and the included postfix version is 3.7.20. Then the postfix
Dockerfile would change "FROM debian:12.7" to "FROM debian:12.8" and
the resulting image tag would change from postfix:3.7.11-debian12.7
to postfix:3.7.20-debian12.8.)
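With approach (b) the Dockerfile itself stays tiny, so such a bump really
is a one-line change. A hedged sketch (the tag and CMD are illustrative,
not an official image definition):

```dockerfile
# Hypothetical approach-(b) Dockerfile: the postfix version follows
# whatever the base image's repository ships, so a Debian bump is a
# one-line change to the FROM line.
FROM debian:12.8
RUN apt-get update \
 && apt-get install -y --no-install-recommends postfix \
 && rm -rf /var/lib/apt/lists/*
# No postconf here: configuration is mounted or applied at runtime.
CMD ["postfix", "start-fg"]
```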
HTH & BR,
Nico
-- Sustainable and modern Infrastructures by ungleich.ch
