It’s been said that you don’t know how good you are at your job until you have to handle failure. I’m not going to say I’m perfect at mine, but I did have to handle some pretty bad failure yesterday.
It began early in April. We’d been receiving some odd error messages from CommVault and our SQL Cluster. There was a problem with the install. To fix it, our hosting vendor (who also happens to be our systems administration service) had to remove the software from our servers, which meant failing over each node, one after the other, to finish the uninstall. So we waited for the next maintenance window (0400 to 0600) and made it so.
So far, so good.
The next day the software was reinstalled, but it didn’t take. The issue was escalated to CommVault, who worked through it and determined we needed one more reboot on each node. So I happily obliged during the following maintenance window.
Again, no problems, until we came back up and hit additional network problems. Our systems administrator came through with flying colors and patched us right up. Now to get CommVault reinstalled. Everything seemed to take, but the next backup failed again.
While our systems admin was debugging the software yet again, our cluster was failed over during production hours. That was a no-no in my book.
Internally I was upset. I got a little red in the face, I’ll admit. But I blew it off, calmed myself down, and made a call to our vendor to explain how that couldn’t happen again. Because I didn’t get loud and rude (like my ego wanted), they were very cool about it. I was assured it would never happen again.
They’ve kept their word, and I have no reason to believe they’ll go back on it.
I’m glad I kept my cool, because just two days later, when I had to fail over the cluster one more time to get the CommVault software removed again, the server didn’t come back up.
I freaked!
It was 0430 in the morning, and my cluster wasn’t coming back online. I immediately dove into the cluster log, took the error message (one I’d never seen before), and googled it.
While reading through the results, I opened up Event Viewer and kept both going at the same time. After 15 minutes I realized I wasn’t going to solve it on my own, so I made the call. I had narrowed it down to a problem in Active Directory, and I’m no expert there.
I’m not even a solid novice.
Our SA worked with me for an hour trying to resolve the issue. She took me through parts of Active Directory I’d never even thought to look at. Unfortunately, our cluster’s computer objects in AD were toast, and without them the cluster name resources couldn’t come online. We made every attempt to recover the objects, but no dice. They just couldn’t be brought back.
And here was my first painful lesson. We hadn’t been making backups of our Active Directory data.
Ouch.
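If I were scripting a sanity check today, it would look something like the sketch below: a minimal example (using Python’s ldap3 package, with made-up server, account, OU, and cluster names) that verifies the cluster’s computer objects still exist in AD before a maintenance window. This isn’t what we ran at the time; it’s the kind of check this incident taught me to want.

```python
# Minimal sketch: confirm a failover cluster's computer objects still exist in AD
# before touching the cluster. The server, credentials, OU, and cluster prefix
# below are hypothetical placeholders.
from ldap3 import Server, Connection, NTLM, ALL

DC_HOST = "dc01.corp.example.com"                      # a domain controller (placeholder)
SEARCH_BASE = "OU=Clusters,DC=corp,DC=example,DC=com"  # where the objects live (placeholder)
CLUSTER_PREFIX = "SQLCLUS"                             # cluster object names start with this (placeholder)

server = Server(DC_HOST, get_info=ALL)
conn = Connection(
    server,
    user="CORP\\svc_monitor",                          # placeholder service account
    password="********",
    authentication=NTLM,
    auto_bind=True,
)

# Look for the cluster name object and any virtual computer objects.
conn.search(
    SEARCH_BASE,
    f"(&(objectClass=computer)(cn={CLUSTER_PREFIX}*))",
    attributes=["cn", "whenChanged"],
)

if not conn.entries:
    print("WARNING: no cluster computer objects found -- do not fail over!")
else:
    for entry in conn.entries:
        print(f"Found {entry.cn} (last changed {entry.whenChanged})")
```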
Four hours into our production window, I made the call to rebuild the cluster. Our failover had failed us. Our disaster planning had failed us.
The whole time, our SA was handling my mixed responses. At times I kept my cool, but after a dozen calls from management, and after overhearing the calls coming in from our end users, I was losing that cool and calling her for updates.
Finally, after nearly 8 hours of unscheduled downtime, we were live again.
It was tough, but today I learned exactly where one of my limits is: I don’t know Active Directory as well as I should. I reached out on #sqlhelp and asked for a lesson plan for learning AD. Within minutes I was getting one from MVPs.
I will learn from this. I will improve. As a result, the servers under my care will be better managed. In the end, the companies I work with will have a stronger resource in me, because I’m going to grow stronger. I’ll be better the next time I face this situation.
I’ve seen the crowns and heads of conquered kings brought to my doorstep. I only have one response…
When you face failure… what’s your response?