Partitioned Views

Posted on August 24, 2010 (updated February 9, 2011) by slowder

Last week I covered partitioning. I explained how you could set up filegroups in your database, and then split data from a single table or index across those filegroups. That way you can reduce blocking in your objects by physically separating the data into parts. I even covered how you could use partitioning, along with SWITCH, to speed up archiving older data out of production tables. This week I want to dive into partitioned views.

Like partitioning tables and indexes across multiple filegroups, you can set up multiple tables and then reference them through a single view, making it appear you have one large table. These tables can all exist on the same server, or be spread across several servers. When you start using tables on other servers in your view, it's referred to as a distributed partitioned view. To set up distributed views, you're going to need to learn about Federated Database Servers.

I would like to point out that partitioned views are not the preferred method, but there can be times when they're your only choice. That's why I'm teaching you this topic.

Creating Partitioned Views

The data in the tables referenced by your partitioned view should be split into ranges of values based on one of the columns common to all the member tables. Each member table must have a CHECK constraint defined on that partitioning column. That way, when you execute a query against the partitioned view, the query optimizer can use those constraints to determine which member tables to hit in order to find the data you've requested.

Let's set up some demo tables in our KOTOR database. This time we're going to track sales of our items. In our example, we'll split the sales data into one table per year, then combine those tables into one partitioned view.

CREATE TABLE sales_2010 (
    saleID INT NOT NULL
  , customerID INT
  , saleDate DATETIME
      CHECK (DATEPART(yy, saleDate) = 2010)
  -- saleYear is the partitioning column: it carries the CHECK constraint
  -- and is part of the primary key
  , saleYear INT NOT NULL
      CHECK (saleYear = 2010)
  , saleTotal DECIMAL(9,2)
  , CONSTRAINT PK_Sales2010__saleID_saleYear PRIMARY KEY (saleID, saleYear)
)

This is our template. We'll also create tables for sales_2009, sales_2008, and sales_2007; the only things we have to change are the table name, the constraint name, and the year in the CHECK constraints.
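For example, sales_2009 would look like this:

CREATE TABLE sales_2009 (
    saleID INT NOT NULL
  , customerID INT
  , saleDate DATETIME
      CHECK (DATEPART(yy, saleDate) = 2009)
  , saleYear INT NOT NULL
      CHECK (saleYear = 2009)
  , saleTotal DECIMAL(9,2)
  , CONSTRAINT PK_Sales2009__saleID_saleYear PRIMARY KEY (saleID, saleYear)
)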

Once we have those defined, creating the VIEW is simple.

CREATE VIEW total_sales AS
  -- Exposing saleYear lets queries filter on the partitioning column,
  -- and lets inserts through the view supply a value for it.
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2010
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2009
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2008
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2007

Using Partitioned Views

Now that we have defined the total_sales view, you can use it like any other view. You can SELECT from it, and you can update it.
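For example, a query that filters on the partitioning column only has to touch one member table. Here's a quick sketch:

-- Only sales_2009 can satisfy saleYear = 2009, so the optimizer
-- can skip the other member tables entirely.
SELECT customerID, saleTotal
FROM total_sales
WHERE saleYear = 2009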

There are two ways you can update a partitioned view. If you have partitioned your tables using a column that's part of the primary key, then you can update the view directly (UPDATE viewname SET column = value). But if you didn't use the partitioning column as part of your primary key, then you'll have to create INSTEAD OF INSERT and UPDATE triggers for your view. If you go this route, make sure you build in some error handling to prevent duplicate rows from being inserted.
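Since our sales tables do include the partitioning column (saleYear) in their primary keys, we can take the direct route. A quick sketch, using made-up values:

-- The saleYear value tells SQL Server which member table gets the row.
INSERT INTO total_sales (saleID, customerID, saleDate, saleYear, saleTotal)
VALUES (1001, 42, '2009-06-15', 2009, 149.99)

-- This update only has to touch sales_2009.
UPDATE total_sales
SET saleTotal = 159.99
WHERE saleID = 1001
  AND saleYear = 2009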

And to be honest, that can be a pain. Make it easier on yourself, and develop your partitioned views to segment the data based at least in part on the tables’ primary key!

Implementing partitioned views can make it easy to archive data. You can create a new table to cover the new segment of data, then update the view to reference the new table and remove the reference to the old data (see the sketch below). For those of you not as familiar with partitioning tables, this may be a good way to get more comfortable with partitioning… just sayin'.
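Here's a minimal sketch of that rolling window, assuming we've already created a sales_2011 table with the matching CHECK constraints:

-- Re-point the view at the current window of years. sales_2007 drops
-- out of the view, but its data stays put, ready to be archived or
-- dropped on its own schedule.
ALTER VIEW total_sales AS
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2011
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2010
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2009
  UNION ALL
  SELECT saleID, customerID, saleDate, saleYear, saleTotal FROM sales_2008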

Next time, I'd like to cover some of the gotchas that can come up when you start running INSERT, UPDATE, and DELETE against your partitioned view. For now, let this sink in, and play around with the basics. If you have any questions, send them in! That's what I'm here for.
