So I have this table in my new database that is used as the base table for more than 50% of the queries run on my server. It's our product table. Along with all the basic information about products, there's a column that holds the status of each product: it's either active or inactive. Every time a user queries the database, somewhere in that query is a join to this table with the WHERE clause WHERE [Status] = 'a'.
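To make that concrete, the pattern looks something like this (the Orders table and its columns are made up just to illustrate; the real queries join to Product from all over the place):

SELECT o.OrderID, p.ProductID
FROM dbo.Orders AS o                  -- hypothetical table, just to show the pattern
JOIN dbo.Product AS p
    ON p.ProductID = o.ProductID
WHERE p.[Status] = 'a';               -- this filter shows up in over half our queries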
So I thought, now's a great time to implement a filtered index. The easiest way I have to think about a filtered index is your phone book. Let's say you only ever use the business numbers and never the residential ones. Rip out the residential section and only look through what's left.
There are fewer pages to maintain when numbers change (get added, updated, or removed), and fewer pages to look through when you're trying to find a number.
It's the same thing in SQL Server. When you run INSERT, UPDATE, or DELETE statements, it takes the server less time to maintain your selective index, since you're only indexing a subset of the rows rather than all of them. It also takes less time to do a seek, since there are fewer rows to search through in that index.
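A quick way to see how big that subset actually is, and therefore how much less work a filtered index would have to do, is just to count rows per status:

SELECT [Status], COUNT(*) AS RowsPerStatus
FROM dbo.Product
GROUP BY [Status];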
So I wanted to show how the filtered index can improve query times and costs.
I made a copy of the table on my development server and kept only the primary key, dropping every other index, since they might interfere with my testing. I wanted to show how a WHERE clause on the index could reduce maintenance costs and query times compared with a traditional covering index.
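If you want to set up something similar, it's roughly this (the source database name and constraint name here are placeholders, not my real ones):

-- Copy the table to the dev database, data only
SELECT *
INTO dbo.Product
FROM ProductionDB.dbo.Product;        -- placeholder source name

-- Re-add just the primary key; no other indexes
ALTER TABLE dbo.Product
    ADD CONSTRAINT PK_Product PRIMARY KEY CLUSTERED (ProductID);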
I started by adding the traditional covering index.
CREATE NONCLUSTERED INDEX [ix_Product__Status__include_ProductID] ON [dbo].[product] ([Status]) INCLUDE ([ProductID])
I used SQL Sentry’s Plan Explorer to help me highlight the differences in cost. I ran the query:
SELECT ProductID FROM Product WHERE [Status] = 'a'
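To capture the logical reads I mention below, I just wrap that query in the standard statistics options; nothing fancy:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT ProductID
FROM dbo.Product
WHERE [Status] = 'a';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;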
That's basically the part of every query that I'm trying to optimize. I saved the execution plan, opened it in SQL Sentry Plan Explorer, and looked at the CPU and IO cost of the Index Seek: CPU was 0.4141490 and IO was 0.55345000. SSMS showed 750 logical page reads for this query. I then dropped ix_Product__Status__include_ProductID and added:
CREATE NONCLUSTERED INDEX [ix_Product__Status__include_ProductID__WHERE_Status_a] ON [dbo].[Product] ([Status]) INCLUDE ([ProductID]) WHERE [status] = 'a'
Now we have a CPU of 0.4141490 and an IO of 0.3167960, an improvement of about 42.7% on the IO cost. SSMS showed 748 logical page reads for this query. Not a huge difference in reads, but once we get to a point where there's a larger number of inactive products, this index would reduce the number of pages read even further!
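If you want to see where that difference comes from, comparing the page and row counts of the index makes it pretty obvious. Run something like this after creating each version of the index (standard DMVs, nothing exotic):

SELECT i.name AS IndexName,
       ps.used_page_count,
       ps.row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.Product')
  AND i.type_desc = 'NONCLUSTERED';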
Yes, I know these numbers aren't really concrete. I still need to learn exactly what they indicate; I know they are relative cost measurements, but I wanted to show how a lookup can be sped up by using an index with a WHERE clause.
These numbers show the decrease in the IO cost of this query. As for showing the cost of maintaining the index… I don't have a script set up to test that yet, but the plan would be to script 10% inserts (approximately 10k rows) and sum up the CPU and IO for each statement, then 10% updates (again about 10k rows) and sum those up, then 10% deletes. Then change the index to a non-filtered index and repeat the tests; a rough sketch is below. I don't have that set up right now, but I've put it on my to-do list to try out.
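For what it's worth, here's a rough sketch of what that harness might look like. The row counts, the assumption of an IDENTITY key, and the single-column insert are all placeholders that would need adjusting to the real schema:

-- Run once with the filtered index in place and once with the non-filtered
-- version, then compare the CPU and IO output from STATISTICS TIME/IO.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ~10% inserts (assumes roughly 100k rows in the table and an IDENTITY key;
-- the other columns would need real placeholder values)
INSERT INTO dbo.Product ([Status])
SELECT TOP (10000) 'a'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

-- ~10% updates: flip a batch of rows between active and inactive
UPDATE TOP (10000) dbo.Product
SET [Status] = CASE WHEN [Status] = 'a' THEN 'i' ELSE 'a' END;

-- ~10% deletes
DELETE TOP (10000) FROM dbo.Product;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;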
It’d be nice to have exact numbers to point to. For now, we’ll leave it there. Do you have any questions about filtered indexes I can answer for you? Just send those in. I’m here to help you learn more about SQL. Let me know how I can do that for you!