SQL Best Practices: Backup Compression

Posted on April 4, 2011 by slowder

There are quite a few products out there that help you compress your backups: Acronis, Red Gate, and many more. That got me thinking about whether or not I should be compressing my own backups. I know I'm not going to get any additional budget for it this year, so I'm not going to invest the time in trying out third-party tools. Instead, I'm going to try out the built-in compression option in SQL Server 2008 R2.

I knew, by the very definition of compression, that I was going to save space on my backup LUN; I just didn't know how much. So, with a healthy baseline in place (all my databases had been backed up using the standard method for the previous two days), I switched my maintenance plans over to enable compression on both my full backups and my transaction log backups.
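If you script your backups instead of using maintenance plans, the change is just the WITH COMPRESSION option. Here's a minimal sketch; the database name and backup paths are placeholders, not my actual environment.

-- Hypothetical database name and paths; swap in your own.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_Full.bak'
WITH COMPRESSION, INIT, STATS = 10;

-- Same idea for the transaction log backups.
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_Log.trn'
WITH COMPRESSION, INIT, STATS = 10;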

I ran the first job manually because I wanted to be sure it ran, and if there were any issues I wanted to be standing by rather than finding out from an alert page. So, during our standard maintenance window, I ran the new maintenance plan, and I was actually surprised by the results. With 25 databases being backed up, I saw every backup shrink to somewhere between 14.7% and 27.16% of its original size. Overall, I was able to compress the backups to 24.17% of their original size. This means my current 132GB of backup storage will now go four times as far! That's better compression than you'd find on the detention level!
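If you want to check the numbers on your own server, msdb records both the raw and compressed size of each backup. A rough query along these lines will do it:

-- Compare original vs. compressed size for recent backups (type: D = full, L = log)
SELECT  database_name,
        type,
        backup_start_date,
        backup_size / 1048576.0                      AS original_mb,
        compressed_backup_size / 1048576.0           AS compressed_mb,
        100.0 * compressed_backup_size / backup_size AS pct_of_original
FROM    msdb.dbo.backupset
WHERE   backup_start_date >= DATEADD(DAY, -2, GETDATE())
ORDER BY backup_start_date DESC;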

[Embedded video: Star Wars clip, via MOVIECLIPS.com]

As a result, I'm planning on upping my on-server backup retention from 24 hours to 48, at least until I have to give up that space to other needs. By the way, after the backups run, I use a small PowerShell script to pull the backups down to our QA server so we can test updates against day-old data. It's a huge step forward from the previous environment, where you really didn't know how old your test data was!

After seeing huge savings in storage space, I looked at how long the backups would take now that I'm compressing them. To my surprise, the run time was down from 21:52 to 13:43! I actually expected it to take longer. After several run-throughs my findings are consistent: each compressed run is at least 5 minutes quicker than the uncompressed form.
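You can confirm the timing difference from msdb as well, comparing runs before and after the switch. A sketch, relying on the fact that compressed_backup_size simply equals backup_size when compression wasn't used:

-- Full-backup durations for the last week, flagged as compressed or not
SELECT  database_name,
        backup_start_date,
        DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
        CASE WHEN compressed_backup_size < backup_size
             THEN 'compressed' ELSE 'uncompressed' END          AS backup_style
FROM    msdb.dbo.backupset
WHERE   type = 'D'
  AND   backup_start_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY database_name, backup_start_date;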

Now that I've shared the pros of compression, I want to share a word of warning: compression does have an additional CPU cost. In my environment I saw slightly more than a 10% increase in CPU usage, according to SQL Sentry Performance Advisor. In my case that's not a huge increase, but 10% could be the difference between acceptable and an alert being fired.

Before turning on compression on your production boxes, make sure you test it during a period when CPU usage is low and access to the server is light. In fact, if you have access to a test server that mirrors your production server (at least in terms of hardware), test compression there first, then try it on production!
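Once you're comfortable with the results, you can also make compression the instance-wide default, so any backup that doesn't specify otherwise gets compressed automatically:

-- Make backup compression the server-wide default (SQL Server 2008 and later)
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;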

I think I'm going to put this down as a best practice. The only time I'll run without it is when a server is nearing its processor limit, and if that's happening, I should already be well on my way to resolving the high CPU usage by moving some of the load off that server or upgrading it to handle the load.

So, what do you think about making compression a best practice? Do you compress your backups now? If so, are you using the built-in tools, or are you running a third-party tool? I'd like to find out more about how you do things. Feel free to share your findings below!
