Some more ideas:

- Local copy of distfiles: two very easy ways to do this without
mirroring a whole official gentoo server:
  - Networked DISTDIR
  - Your own server first in GENTOO_MIRRORS, like "http://10.0.0.2/gentoo"
    - This is interesting if you have a "master" gentoo server and
other misc gentoos. For example, I do this on my laptop: when I am on
my local net it tries my local server first, and the distfiles I want
are usually there; if not, portage gets a 404 and tries the next
server. The same happens when I am away from my local net.
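The mirror trick is a single line in make.conf; the IP and path here
are just my example values:

```shell
# /etc/make.conf (sketch): portage tries mirrors left to right, so the
# local server goes first; on a 404 it falls through to the next one.
GENTOO_MIRRORS="http://10.0.0.2/gentoo http://distfiles.gentoo.org"
```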

- Use your own binary packages: this saves some nice compile time, but
the binary optimization has to be a common denominator for all the
architectures in use, or you keep different binary repositories for
different arches if they are "really" different (i.e. incompatible).
For example, if you have intel and amd servers you can optimize for
i686. I like this approach better on more homogeneous setups, like
everything optimized for, say, core2.
  - Another consideration here is that when you use binary packages
you fix the USE flags for those packages, but gentoo handles this very
nicely: when your USE flags match the binary package it uses the
binary; if your flags are different it uses the binaries for the
packages with the same flags and compiles only the packages that
differ. I find this very good, as on my servers more than 95% of the
packages usually don't need to be recompiled.
  - If you have a "master" gentoo server for binary packages, you can
use distcc on it to have the packages built very fast (distributed
grid compilation) and only one time, as the other servers then just
download a binary copy and unpack/install. This "master" or compiling
server does not necessarily need to have all the binary packages
installed, as you can use --buildpkgonly.
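A minimal sketch of this master/clients setup (the 10.0.0.2 address
and the core2 target are my example values, not requirements):

```shell
# make.conf on the "master" (build) server -- a common-denominator target:
CFLAGS="-O2 -march=core2 -pipe"
FEATURES="buildpkg"        # keep a binary package of everything it compiles
# build binaries without installing them locally:
#   emerge --buildpkgonly <package>

# make.conf on the other servers -- fetch binaries from the master,
# fall back to compiling only when USE flags differ:
PORTAGE_BINHOST="http://10.0.0.2/packages"
# then:  emerge --getbinpkg --update world
```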

- Reduce upgrade downtime by building binary packages before
installing, and keep a revert point: some services are safe to upgrade
and restart after the emerge of the new version completes, but what if
the new version does not start for some reason? And for a service that
you cannot upgrade while it is running, it goes like this: service
stop, emerge takes some compile time = downtime, service start.
  Everyone knows that in a perfect world you should not be upgrading
production servers directly, but test the upgrades on testing
servers/environments and only then put the well-tested stuff in
production. But let's face it: only a few of us are able to do that in
the real world; we are usually overloaded/underpaid sysadmins with
time constraints, lack of manpower, etc. So what is feasible to do
when short on time/resources?
 - Here goes my favourite approach:
   - First back up every affected package with: quickpkg --include-config=y
     - This makes it very easy to revert an unsuccessful upgrade and
is usually sufficient, but special attention must be given to
programs/services that use files not saved as config files (like
databases, for example).
   - emerge with --buildpkgonly; this way a binary package is built
but not installed, while the services keep running.
   - Now the upgrade is much faster: service stop, emerge the binary
package = very fast tar unpack, service start. If the service does not
start: emerge (again a very fast unpack) the previously backed-up
version, service start.
   - This can easily be automated with shell scripts (or, say,
semi-automated, as they should ask for confirmation on critical
operations).
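The cycle above can be sketched as a script like this. The package
atom, the service name and the DRYRUN guard are all my inventions; it
only prints the commands until you set DRYRUN=0 on a real box:

```shell
#!/bin/sh
# Low-downtime upgrade with a revert point (sketch).
PKG="${PKG:-www-servers/nginx}"   # example package atom
SVC="${SVC:-nginx}"               # example init script name
DRYRUN="${DRYRUN:-1}"             # set DRYRUN=0 to really run the commands

run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

confirm() {   # ask before critical operations
    printf '%s [y/N] ' "$1"; read -r a; [ "$a" = "y" ]
}

run quickpkg --include-config=y "$PKG"      # revert point, with configs
run emerge --oneshot --buildpkgonly "$PKG"  # compile while service runs
if confirm "stop $SVC and unpack the new binary package?"; then
    run /etc/init.d/"$SVC" stop
    run emerge --oneshot --usepkgonly "$PKG"   # fast tar unpack
    run /etc/init.d/"$SVC" start
fi
```

Reverting is the same last step with the quickpkg backup instead of
the new version.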

- Mail on GLSA-affected packages: it was mentioned on this list before
to put the emerge update (or eix-sync) in cron. After every update I'd
add a glsa-check to e-mail me the affected packages (security is never
too much =)
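Something like this as a daily cron script, for example (a sketch; the
recipient and the exact sync command are up to you):

```shell
#!/bin/sh
# /etc/cron.daily/glsa-mail (sketch): sync, then mail any GLSA hits.
emerge --sync --quiet            # or: eix-sync -q
affected=$(glsa-check --test all 2>/dev/null)
[ -n "$affected" ] && echo "$affected" | \
    mail -s "GLSA affected on $(hostname)" root
```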

- Gentoo server "template": many like to have a stage4 to back up or
replicate and customize servers. That is good for keeping a cd/dvd
copy in case of catastrophic raid/backup-server failure, but I enjoy
another approach as well: I keep a "template" root of a generic gentoo
on one of my file servers, which I find very handy and flexible. How I
use it:
  - Need a new server: boot with gentoo minimal (or better, sysresccd)
  - Partition it the way that is most appropriate for that server
(which usually differs a lot among my servers), usually on top of some
raid.
  - Mount the partitions, rsync the "template" root to the mounted
partitions
  - Change the unique configs, like hostname and ip
  - chroot, check if the kernel config is appropriate for that
machine; if not, adjust and recompile
  - grub install
  - reboot
  Enjoy a new server up and running with most things already
configured to your liking. I like this approach because I can change
things directly on the "template" server when I think they should
apply to all new servers, and it is also very easy to update it
regularly: just chroot and emerge world.
  - You can also use this approach to clone a running server, but then
you need a few more tricks, especially for rsyncing special dirs: skip
/dev and temporaries, exclude ssh keys, etc. I can provide details on
how I do this if someone is interested.
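The rsync step with the usual exclusions looks roughly like this; the
function name, the paths and the exclude list are my assumptions, so
tune them per site (add -AX if you use acls/xattrs):

```shell
#!/bin/sh
# Copy a template root (or a live server) onto freshly mounted
# partitions, skipping pseudo-filesystems, temporaries and host keys.
clone_root() {
    src="$1"   # e.g. fileserver:/srv/gentoo-template/
    dst="$2"   # e.g. /mnt/newroot/
    rsync -aH --numeric-ids --delete \
        --exclude='/dev/*'  --exclude='/proc/*' --exclude='/sys/*' \
        --exclude='/tmp/*'  --exclude='/var/tmp/*' \
        --exclude='/etc/ssh/ssh_host_*' \
        "$src" "$dst"
}
# afterwards: fix hostname/ip in the copy, chroot, kernel, grub, reboot
```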

- Versioned configs: you can put config dirs (like /etc) under version
control, like subversion or git. This makes it easy to track changes
and do reverts if needed. In polytheistic environments (you are not
the only god; there are other sysadmins) this is also a good way to
track who changed what, why and when.
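A git sketch of that; the helper name and the ignore rule are mine,
and with several admins you would also set per-user committer names
instead of the placeholder identity below:

```shell
#!/bin/sh
# Put a config dir under git to track who changed what, why and when.
etc_git_init() {
    dir="$1"               # e.g. /etc
    cd "$dir" || return 1
    git init -q
    echo 'shadow*' > .gitignore    # keep password hashes out of the repo
    git add -A
    git -c user.name=sysadmin -c user.email=root@localhost \
        commit -q -m "initial import of $dir"
}
# after every change:  git add -A && git commit -m 'why I changed it'
# audit trail:         git log --stat
# revert one file:     git checkout -- <file>
```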


So, and you guys, what are your gentoo-server tricks?

Regards,

Fabiano.



On Mon, May 23, 2011 at 9:40 PM, kashani <kashani-l...@badapple.net> wrote:
> On 5/23/2011 3:12 PM, la Bigmac wrote:
>>
>> Hello list,
>>
>> Seems to be a few people recently wanting to discuss Gentoo as a server
>> :-) so thought I would pose a question that has been bugging me.
>>
>> What would you guys recommend to manage multiple servers and the package
>> versions?
>>
>> While I have a central emerge server (rsync) and sync all of my
>> servers to it I still manually update the packages.
>>
>> Example, openssh how should I be updating openssh on all of my servers
>> other than logging onto each one in turn and running emerge openssh.
>>
>> Should I cron schedule an emerge --update world and control the
>> repository of packages or is there a more elegant solution?
>
>        I've become a huge Puppet nerd over the last year. I'm not managing
> Gentoo on it, but it's supported and Puppet Labs does seem to fix Gentoo
> bugs in a reasonable time.
>
> First you'll need Ruby 1.8.7, as Puppet's Ruby 1.9.2 support isn't quite there yet.
> I'd also run unstable for Puppet and Facter. You're better off jumping in at
> 2.6.x than 0.25.x.
>
> Puppet requires facter, which is very cool in its own right. It does local
> discovery of the OS, and those facts about your system can be used in
> templates to make decisions. Here's an example for setting higher thresholds
> on my large machines.
>
> <% if processorcount.to_i >= 12 then -%>
>
> and here's an example of a module to make sure sudo is the latest version
> and add a config file for my local sudoers additions.
>
> class sudo {
>  package { "sudo": ensure => latest, }
>
>  file { "/etc/sudoers.d/my_additions":
>    ensure  => present,
>    owner   => root, group => root, mode => 440,
>    require => Package["sudo"],
>    source  => "puppet:///modules/sudo/my_additions",
>  }
> }
>
>        In order to make this work you'd really need to have modules for each
> package in your world file and set ensure => latest rather than just
> present. However it does make it easy to keep configs, users, settings,
> and installed packages in sync across machines.
>
>        That's Puppet in a very very tiny nutshell. There are some unique
> challenges with using it well with Gentoo, but it would ultimately make your
> system easier to reproduce.
>
> kashani
>
>
