shannonlowder.com
Migration to Windows Azure SQL Database: Steps 4 and 5

Posted on November 18, 2013 by slowder

Now that you’ve finished analyzing your current database and worked through the list of items that won’t work with Windows Azure SQL Database (WASD), it’s time to begin testing your new solution.  Testing begins with setting up your test instance of WASD and loading your test data; then you point your application at that new instance and run through it end-to-end.  Let’s walk through those steps in more detail and cover some of the issues you might discover.

Push a copy of your data to the cloud

After you’ve set up your instance via manage.windowsazure.com, getting your data pushed is pretty simple.  You can use either the Migration Wizard or SSIS.  I prefer the Migration Wizard, since it has better throughput.  All you’ve got to do is pick your source database and tables, target your new instance and database, and click Go.  It takes care of the rest.

If you choose to use SSIS, remember you’ll have to use ADO.NET connections for your destinations.  Also, be sure to set the connection string so your package knows it’s sending the data to SQL Azure.  I haven’t had a connection disruption in the middle of a data flow task, but I have had packages fail between data flow tasks.
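To make that connection-string point concrete, here’s a minimal sketch of the kind of ADO.NET-style connection string a destination needs to point at a Windows Azure SQL Database server. The server, database, user, and password values are placeholders, not real endpoints; verify the exact format against your own instance.

```python
def wasd_connection_string(server: str, database: str, user: str, password: str) -> str:
    """Build an ADO.NET-style connection string for a *.database.windows.net server.

    SQL Azure expects the login in user@server form, listens on port 1433,
    and encrypted connections are strongly recommended.
    """
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"User ID={user}@{server};"
        f"Password={password};"
        "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
    )

# Placeholder values for illustration only.
print(wasd_connection_string("myserver", "TestDb", "migrator", "secret"))
```

The `user@server` login format and `Encrypt=True` are the two pieces people most often miss when retargeting an on-premises package at SQL Azure.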

Once all your test data is in Azure and you’ve verified it, you’re ready to move on to testing your application.
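The simplest verification pass is comparing row counts per table between the source database and the new WASD instance. The sketch below assumes you’ve already gathered counts from each side (e.g. with `SELECT COUNT(*)` queries); the table names and numbers are made up for illustration.

```python
def find_count_mismatches(source_counts: dict, target_counts: dict) -> list:
    """Return (table, source_count, target_count) tuples where the sides disagree."""
    mismatches = []
    for table, src in sorted(source_counts.items()):
        tgt = target_counts.get(table, 0)  # a missing table counts as zero rows
        if src != tgt:
            mismatches.append((table, src, tgt))
    return mismatches

# Hypothetical counts pulled from each side of the migration.
source = {"Customers": 1200, "Orders": 45310, "OrderLines": 190042}
target = {"Customers": 1200, "Orders": 45310, "OrderLines": 189977}
print(find_count_mismatches(source, target))
# → [('OrderLines', 190042, 189977)]
```

Row counts won’t catch corrupted values, so for critical tables you may also want to compare checksums or spot-check rows, but counts catch the most common failure: a data flow that quietly stopped partway through.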

Test your application

It’s a good idea to invite a small number of your system’s power users to join in on the testing.  They’re the ones who generally push your system the hardest, so you want them to bring that same focus to testing.  Systematically test everything your application does with the database; you want to go through everything now, before you turn this over to the end users.  There are automated tools for this, and I’d direct you to my application architect for more information on those automated test suites.

You want to make sure you go through every feature your application has.  When you find an issue, record as much detail as you can about how you reproduced it.  Once you’ve gone through the application once, you’ll bring those issues back to the programmers and start breaking down what has to change to fix them.

This is where you’ll discover things like connection pooling that isn’t set appropriately for SQL Azure.  You might discover your retry policy is too aggressive, or that you’re sending atomic updates one at a time rather than batching them together.  This is when you’ll find out whether you’ve been following programming best practices all along.  If you haven’t, no time like the present to improve those skills, right?
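A retry policy that hammers a throttled server only makes things worse. Here’s a sketch of a gentler policy: retry only on transient errors, with exponential backoff and jitter. The error numbers listed (40197, 40501, 10928, 10929, 40613) are the commonly documented SQL Azure throttling/transient codes, but treat the exact set as an assumption to verify against current Azure documentation; `TransientSqlError` is a stand-in for however your driver surfaces the error number.

```python
import random
import time

# Commonly documented transient/throttling error numbers (verify against Azure docs).
TRANSIENT_ERRORS = {40197, 40501, 10928, 10929, 40613}

class TransientSqlError(Exception):
    """Stand-in for a driver exception carrying the SQL error number."""
    def __init__(self, number: int, message: str = ""):
        super().__init__(message)
        self.number = number

def run_with_retry(operation, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Run operation(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientSqlError as err:
            if err.number not in TRANSIENT_ERRORS or attempt == max_attempts:
                raise  # permanent error, or out of attempts: give up
            # Back off exponentially, with jitter so retries don't synchronize.
            sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random() * 0.1))

# Usage: an operation that fails twice with "service busy", then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientSqlError(40501, "The service is currently busy.")
    return "ok"

print(run_with_retry(flaky, sleep=lambda _: None))
# → ok (after two retries)
```

The same idea addresses the batching point: wrap one batched round-trip in `run_with_retry` instead of retrying hundreds of row-at-a-time updates individually.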

Fixing those issues

As you discover issues, you’ll have to prioritize them and decide how to address them.  You can fix them now, fix them later, or drop the feature altogether.  These decisions come down to time and money, and I can’t give you a definitive answer on which issues belong in which bucket.

As you work through these issues, you’ll return to the testing phase several times, each time verifying that the issues you addressed are truly fixed.  Don’t forget to re-test the items that passed the first time as well; you want every test to pass every time.  The great news is, once you pass all your tests, you’re ready for the final step.

Go Live!

Going live is very similar to the testing process.  It starts with a database push to make sure all the data you need for production is in Azure.  Once you have that, you make sure all the applications are wired to hit the new Azure instance, then run through the test suite one more time.  If all your lights “go green”, you announce to your users that the migration is complete and invite the full user base to start using the new system.

And then you wait.

During our first migration, this wait was a little longer than usual.  We heard no feedback from the end users that first day.  All the diagnostics showed the system performing well: no unusual SQL errors, application response times well below their thresholds, and reports running as quickly as they had during testing.  Why weren’t we hearing anything from the users?

Finally we got a call on the second day.  I was manning the phone.  One of our most demanding users, one who wasn’t in the test group, was on the line.

“So I’m going through this new system…” she began.  She let that pause hang for far too long. “And it’s just so fast!”

I was completely relieved.  We spent the next 30 minutes discussing the effort we had put into performance-tuning the reports and working with the test users to tweak how we pulled data back from the data import processes.  The call ended with the biggest smile crossing my face.  It was a great day.

From start to finish, this process took a year.  There was plenty of work poured into the project, and it shows.  The customer now has a system far beyond the one he began with; a year after go-live, he’s doubled the number of clients using his system.  During that year I learned a ton about what Azure is and what it’s capable of, and I see the potential for it to become something much more.  I’ve also found a data architecture that many SQL professionals across the country aren’t familiar with: sharding.  It’s the real key to scaling out SQL Server.

And that’s the topic I’m tackling next.  Are you ready to see what the OLTP version of Parallel Data Warehouse looks like?  Stay tuned!

In the meantime, if you have any questions, send them in.  I’m here to help!
