Andrew Gabriel said the following, on 07-04-13 10:34 AM:
Edward Ned Harvey (openindiana) wrote:
From: Ben Taylor [mailto:[email protected]]
Patching is a bit of arcane art. Some environments don't have
test/acceptance/pre-prod with similar hardware and configurations, so
minimizing impact is understandable, which means patching only what is
necessary.
This thread has long since become pointless and fizzled, but just for
the fun of it:
I recently started a new job, where updates had not been applied to
any of the production servers in several years. (By decree of former
CIO). We recently ran into an obstacle where some huge critical
deliverable was not possible without applying the updates. So we
were forced, the whole IT team working overnight on the weekend, to
apply several years' backlog of patches to all the critical servers
worldwide. Guess how many patch-related issues were discovered.
(Hint: none.)
Patching is extremely safe. But let's look at the flip side. Suppose
you encounter the rare situation where patching *does* cause a
problem. It's been known to happen; heck, it's been known to happen
*by* *me*. You have to ask yourself, which is the larger risk?
Applying the patches, or not applying the patches?
First thing to point out: Suppose you patch something and it goes
wrong ... Generally speaking you can back out of the patch. Suppose
you don't apply the patch, and you get a virus or hacked, or some
data corruption. Generally speaking, that is not reversible.
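On OpenIndiana in particular, the back-out path is helped by the fact that IPS integrates with boot environments. A minimal sketch of the idea (the BE name here is just an example, not anything standard):

```shell
# Snapshot the current system into a new boot environment before patching.
# (pkg update will normally create one for you, but doing it explicitly
# gives you a known-good name to fall back to.)
beadm create pre-update-2013-04

# Apply the pending updates.
pkg update

# If the patched system misbehaves, reactivate the old environment
# and reboot; the unpatched BE becomes the default again.
beadm activate pre-update-2013-04
```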
For the roughly two times in my life that I've seen OS patches cause
problems and had to back them out, I've seen dozens of times when
somebody inadvertently set a virus loose on the internal network, or a
server's memory or storage became corrupted by a misbehaving process
or subsystem, or a server developed some instability that required
periodic rebooting, or became incompatible with the current release of
some critical software or hardware, until the patches were applied.
Patches are "bug fixes" and "security fixes" for known flaws in the
software. You can't say "if it ain't broke, don't fix it." It is
broke, that's why they gave you the fix for it. At best, you can
say, "I've been ignoring it, and we haven't noticed any problems yet."
10 years ago, something like half of all support calls would never
have arisen if the system had been patched up to date. (I don't know
the current figure.)
OTOH, I have worked in environments where everything is going to be
locked down for 6-10 years. You get as current and stable as you can
for the final testing, and then that's it - absolutely nothing is
allowed to change. As someone else already hinted earlier in the
thread, the security design of such infrastructure assumes from the
outset that the systems are riddled with security holes, and they need
to be made secure in some other (external) way.
And a side effect would be dramatically reduced OPEX, since far fewer
people and far less tooling are needed to support such an environment.
--Roman
_______________________________________________
OpenIndiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss