Lifecycle patching

I am trying to understand how people manage lifecycle patching these days. I am not a sysadmin (I am a DB Architect), and what I am being told is that if there is any lag between patching a dev server and a prod server, we will likely get new patches in prod that have not had any soak time in dev. Back when I did sysadmin work, we downloaded the set of patches being applied in dev and then applied that same set to prod from our patch repository. Is this not done anymore?

I appreciate any help that can be offered to bring me up to speed on how patching is managed these days.

Is that host on the internet?
If so, patching should be done more frequently (security patches should be applied as soon as they are tested).
The standard dev - test - prod progression applies here as well (or a wider one).

Mostly you have a repository with patches (hopefully local, not pulled from the internet) for the specific version you wish to upgrade to (for instance a repository for Solaris 11.1.14.5), while the hosts are running a lower version (11.1.12.x or whatever) against an 11.1.12.x repository.

So using ZFS you can clone or send/receive the existing repository, upgrade it to 11.1.14.5, and define a new repository server instance which you then use for upgrading all the hosts (dev, test, prod).
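As a rough sketch of that clone-and-upgrade approach on Solaris 11 with IPS (the dataset names, origin URL, port, and SRU version string below are all placeholders, so adjust them to your environment):

```shell
# Snapshot and clone the dataset holding the current 11.1.12.x repository
zfs snapshot rpool/export/repo@pre-sru14
zfs clone rpool/export/repo@pre-sru14 rpool/export/repo-11.1.14.5

# Pull the newer package versions into the clone from the upstream origin
pkgrecv -s https://pkg.oracle.com/solaris/support \
        -d /export/repo-11.1.14.5 'entire@0.5.11-0.175.1.14.0.5.0'
pkgrepo refresh -s /export/repo-11.1.14.5

# On each host (dev first, then test, then prod), point the publisher
# at the new repository instance before upgrading
pkg set-publisher -G '*' -g http://repo-host.example.com:10001/ solaris
```

Every host then upgrades from the same frozen repository, so dev, test, and prod cannot drift apart.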

I don't see how production patched from the same repository from which you patched dev and test (bearing in mind that dev, test, and production should be on the same version of the operating system) can end up different.

If you don't require the new features / bug fixes a newer version brings, and security issues don't affect your machines (you're not running a compromised service, etc.), don't fix it if it's not broken.

Peasant, thank you very much for the response.

Unfortunately for us, our infrastructure group is trying to convince us that no one uses a local repository (unless they have to because their servers do not have access to pull patches directly). Currently the way our systems (Unix and Linux) are patched is that every system pulls whatever is available at the time it is patched. So if we have a 30-day soak for a set of patches applied to dev before applying them to qa, it is likely that qa will get patches that were never applied in dev... and so on for production. This scares me (knowing that a single library change can have disastrous effects on a system). I have not even been able to convince them that I need the same version of gcc across an environment (e.g. 4.1.2 20080704 on one box and 4.4.7 20120313 on another).

Your comments (and any others that might be willing to reply) are what I need to go to them and say no; we cannot keep doing things this way. It is a hard battle because they are seen by management as the experts in their area, so if they disagree with us, they usually win. It is forums like this (and experts like yourself) that I have to rely on to make sure that what we are being told is accurate.

I thank you very much for being there and being willing to take a moment of your time to offer your advice and opinions.

Mark

I agree that is not the way to do it (in general, not specifically for Solaris). You need to create (or use) some sort of baseline, either locally or somewhere else, that is the same for development, test, qa, and production; otherwise (because these environments should be patched in a certain order and never at the same time) there will be all sorts of differences between environments, or perhaps even between different hosts within the same environment, depending on the moment of update. Ideally there are some spare hosts on which patches can be tested prior to the dev or test environments.

There also needs to be a distinction between preventive and corrective patching, and patches need to be tested first. I personally am not of the school of "don't fix it if it ain't broken"; I think regular patching is a necessity. Security patches should be treated differently (perhaps also with a different frequency) from ordinary patching, especially if hosts are directly or indirectly exposed to the Internet. Besides the OS, applications (and their patching) also need to be taken into account.

At any rate there should also be a good roll-back mechanism in place; simply removing a patch is often not sufficient to return to a previous state if need be.
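On Solaris 11, for example, boot environments give you that whole-system roll-back; a minimal sketch (the boot environment name is arbitrary):

```shell
# Capture a bootable snapshot of the running system before patching
beadm create pre-sru14

# ... apply the patch set, reboot, soak and test ...

# If something breaks, activate the pre-patch environment and reboot into it
beadm activate pre-sru14
init 6
```

The point is rolling back the whole system state rather than trying to back individual patches out; on Linux, LVM snapshots or reimaging from a known-good build serve the same purpose.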

I feel so much better that the consensus so far agrees with my school of thought. Even though we may not all agree on the "to patch or not to patch" debate, the agreement seems to be that you must use a controlled and consistent set of patches from dev through production. As I mentioned initially, it has been some time since I have done real "sysadmin" work, but I do still manage many Oracle databases, and it would take someone with a set of nutcrackers on my knuckles to get me to even consider applying different patches in production than I had applied in lower environments. I was shocked when I found out that our servers were being handled differently, but I was not sure if the game had changed, so I wanted to make sure.

Thanks again for taking the time to share your expertise; it is appreciated,

Mark

Are you saying that all your servers have direct internet access and that they all download patches when the command is issued?

As some have already said, there needs to be an agreed set that you are installing, else your testing does not match what you put into production.

For AIX and HP-UX, I pull down a block of fixes to a directory for testing and then copy that directory to production servers. I don't get a fresh download for that very reason. There may be a neater way, but it's not a huge overhead.
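One cheap way to prove that the directory copied to production really is the tested fix set is a checksum manifest; a sketch along those lines (the directory and fileset names are made up, and a scratch directory with dummy files stands in for the real one here):

```shell
# Illustrative only: a scratch copy of the fix directory with dummy files
# (in real life this would be the directory of downloaded fixes on the
# test server, e.g. /var/patches/2014-06)
mkdir -p /tmp/patchset && cd /tmp/patchset
printf 'fix one\n' > IV12345.bff
printf 'fix two\n' > IV67890.bff

# On the test server: record a checksum manifest of the exact set tested
sha256sum *.bff > MANIFEST.sha256

# After copying the directory to a production server: verify it matches
# bit-for-bit before installing anything
sha256sum -c MANIFEST.sha256
```

Any corruption or stray extra download shows up as a failed check before it ever reaches production.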

For Red Hat Linux we use their Satellite Server, which means that servers with no business need are not directly on the internet, and it reduces our public traffic (which we pay for by usage). This allows us to set up cloned channels (as they call them) into which we can move fixes as we require; each OS still does a network pull, but from this controlled list. We can then be sure that production gets the same as testing. We then update the patches in the cloned channel and start testing the next updates, and round we go again.
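From a client's point of view, the effect is much like pointing yum at a frozen internal channel instead of the public mirrors. A hypothetical repo definition to illustrate the idea (Satellite wires this up automatically for registered systems, and every name and URL below is made up):

```ini
# /etc/yum.repos.d/internal-baseline.repo  (hypothetical)
[internal-baseline]
name=Frozen, cloned patch channel served internally
baseurl=http://satellite.example.com/pub/clone-rhel6-2014q2/
enabled=1
gpgcheck=1
```

Because the channel contents only change when an admin promotes fixes into it, every host that pulls from it gets the same set regardless of when it runs its update.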

I think CentOS has the same and I'm sure others do too.

You can even use Red Hat Satellite for Solaris patching (no roll-back, though).

Of course, this is always done if anyone agrees that we will actually do some patching. Let's not get into that debate here though. :rolleyes:

Robin

As a DBA, you must have some software patch responsibilities too. It's just common sense and you are right.

Yes, that is what they are doing. This all came up when I was told that the longer we "soak" a set of patches in dev, the more likely it is that a different (and untested) set of patches will go into production :eek:. Once I picked my jaw up off the floor (which I have to do often around here), we decided to "ask" for a patching server to be configured so that we can start controlling what goes in and when. The generous time you have all offered up on this thread is what I needed to substantiate our request, which I needed because they tried to make me believe that the "whole world" was doing it their way. My assumption was that this was an extremely inaccurate statement, but then again, I don't know everything, so I turned to those who know much more about this area than I... you all.

Again I say thanks for sharing both your time and your knowledge.

I'll add to the chorus of saying that is totally wrong.

The patches applied have to be identical. How you get there isn't really important, as long as it's guaranteed to be reproducible. (Downloading directly from the internet is almost certainly NOT reproducible because you're dependent on someone else hosting specific patch versions.)
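A low-tech way to verify that guarantee after the fact is to record the installed package set on each host and diff the manifests; a sketch (the manifests here are tiny made-up samples so the comparison is visible, and `rpm -qa` is just one example of how you would capture a real one):

```shell
# On each host, right after patching, record the exact package set, e.g.:
#   rpm -qa | sort > dev.manifest     (on the dev box)
#   rpm -qa | sort > prod.manifest    (on the prod box)
# Tiny sample manifests so the comparison can be shown end to end:
printf 'bash-4.1.2-15\nglibc-2.12-1.107\nopenssl-1.0.1e-16\n' > /tmp/dev.manifest
printf 'bash-4.1.2-15\nglibc-2.12-1.107\nopenssl-1.0.1e-30\n' > /tmp/prod.manifest

# Lines unique to either side are drift; empty output means identical sets
comm -3 /tmp/dev.manifest /tmp/prod.manifest
```

In the sample above the two differing openssl builds are printed, which is exactly the kind of silent drift a fresh-from-the-internet pull produces.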

This is definitely not the route to go. However, here is another thought to consider.

This all depends on the time it takes you to roll your patches through your environment.

We have a 4 environment development strategy.
Dev - Test - QA (PreProd) - Prod

We actually patch our preprod environment first then we roll to prod, dev, test.

I know, it sounds a little strange to do it this way, but our development cycle is fairly rapid and our patching schedule is not as fast, so we have run into issues with code developed on a newly patched DEV machine that wouldn't run in the other environments until they got patched. So we found that we patch the preprod systems first; if nothing breaks, we roll it into prod, and if it does break, we back out the patch or use the prod image to recover it.

Most of the time, code developed on an OS with older patches ran just fine on the OS that was up-to-date on patches. I think I only had two instances in five years where that didn't happen, and I believe that was because of Java and a path change.

I don't think that the sequence matters, because the original question (over 4 months back now) was all to do with guaranteeing that a consistent set of patches was applied.

Each OS supplier may have different methods, but they all have them and that is what was questioned.

Robin