This is the last article in my upcoming design patterns session. Today we’re going to look at speeding up your queries by replacing a cursor, a row-by-row operation, with a table-valued function. Before you start composing that hate mail on how I’m wrong for not understanding cursors, let me say this: there is a time and a place for everything, even cursors (think running totals). The problem is that I still see them used for row-by-row operations in SQL, and SQL is not optimized for row-by-row processing. It is optimized for set-based operations.
If you can rethink your T-SQL logic and consider the whole set of data you’ll be working on, rather than what you have to do to each row, you’ll find you’re rewarded by the SQL Engine. Your results come back in record time!
Let’s look at an example I created to illustrate a row-by-row operation. This cursor takes a column containing a user’s full name and breaks it into first and last name, using a space as the break point.
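Before we get to the cursor, here’s a quick standalone look at the split logic itself. The name here is just a made-up sample value, so you can see what CHARINDEX and SUBSTRING are doing:

--Standalone illustration of the split; 'Jane Doe' is only a sample value
DECLARE @sample VARCHAR(100) = 'Jane Doe'
DECLARE @pos INT = CHARINDEX(' ', @sample)

SELECT SUBSTRING(@sample, 1, @pos - 1) AS FirstName            --'Jane'
     , SUBSTRING(@sample, @pos + 1, LEN(@sample)) AS LastName  --'Doe'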
If you want to follow along, and you have my OriginalTroubledTable, use the following code to set up the NameSplitTable.
IF OBJECT_ID('adventureworks.dbo.NameSplitTable') IS NOT NULL
    DROP TABLE NameSplitTable
GO

CREATE TABLE [dbo].[NameSplitTable](
      [NameID] INT IDENTITY(1,1)
    , [FullName] VARCHAR(100) NULL
    , [FirstName] [varchar](50) NULL
    , [LastName] [varchar](50) NULL
) ON [PRIMARY]
GO

--I want to have more than 10k rows so repeat inserts until we get more!
WHILE (SELECT COUNT(*) FROM [NameSplitTable]) < 10000
BEGIN
    INSERT INTO [NameSplitTable] ([FullName])
    SELECT FirstName + ' ' + LastName
    FROM OriginalTroubledTable
END
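If you don’t have OriginalTroubledTable, a stand-in like the following will work just as well; the names are invented and only there to give the WHILE loop something to insert:

--Optional: seed made-up names if you don't have OriginalTroubledTable
WHILE (SELECT COUNT(*) FROM [NameSplitTable]) < 10000
BEGIN
    INSERT INTO [NameSplitTable] ([FullName])
    VALUES ('Jane Doe'), ('John Smith'), ('Maria Garcia'), ('Wei Chen')
END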
If we were doing a single row, the following code wouldn’t be that bad.
DECLARE @fullname VARCHAR(100)
DECLARE @NameID INT
DECLARE @splitPos INT

--Grab one row to work on
SELECT TOP 1
      @NameID = NameID
    , @fullname = FullName
FROM [NameSplitTable]

--Find the space between first and last name
SET @splitPos = CHARINDEX(' ', @fullname)

IF (@splitPos > 0)
BEGIN
    --The -1/+1 keep the space itself out of the names
    UPDATE [NameSplitTable]
    SET FirstName = SUBSTRING(@fullname, 1, @splitPos - 1)
      , LastName  = SUBSTRING(@fullname, @splitPos + 1, LEN(@fullname))
    WHERE NameID = @NameID
END
Look at the execution plan: the cost of the UPDATE isn’t horrible. In my test environment, the execution took less than one second.
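If you want to verify the timing on your own machine, wrapping the statement in SET STATISTICS TIME and IO is one easy way to do it. This isn’t part of the pattern, just how I’d check it:

--Optional: show elapsed/CPU time and logical reads for whatever you run in between
SET STATISTICS TIME ON
SET STATISTICS IO ON

--...run the single-row UPDATE from above here...

SET STATISTICS TIME OFF
SET STATISTICS IO OFF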
The real cost of this query comes in when you run this code once for every row of the table. In my test case, I have 11,760 rows, which means SQL Server is going to scan the table 11,760 times. That adds up quickly! Let’s see the cursor version of the code.
DECLARE @fullname VARCHAR(100)
DECLARE @NameID INT
DECLARE @splitPos INT

DECLARE FullNames CURSOR FOR
    SELECT NameID, FullName
    FROM [NameSplitTable];

OPEN FullNames;
FETCH NEXT FROM FullNames INTO @NameID, @fullname;

WHILE @@FETCH_STATUS = 0
BEGIN
    --Same split as before, just repeated once per row
    SET @splitPos = CHARINDEX(' ', @fullname)

    IF (@splitPos > 0)
    BEGIN
        UPDATE [NameSplitTable]
        SET FirstName = SUBSTRING(@fullname, 1, @splitPos - 1)
          , LastName  = SUBSTRING(@fullname, @splitPos + 1, LEN(@fullname))
        WHERE NameID = @NameID
    END

    FETCH NEXT FROM FullNames INTO @NameID, @fullname;
END

CLOSE FullNames
DEALLOCATE FullNames
You can run this on your machine if you have a lot of patience, but I capped my run at 60 seconds to see how many rows I could update in that time: 5,734. And that’s the big problem with cursors: each individual operation can be very quick, but the cumulative effect of running the code so many times adds up!
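In case you’re wondering how to cap the run at 60 seconds, here’s one way to reproduce it (a sketch, not necessarily the exact script I used): note the start time before opening the cursor, and bail out of the loop once DATEDIFF says a minute has passed. The FullNames60 cursor name and @startTime variable are just mine, nothing special.

--Sketch: the same cursor, stopped after roughly 60 seconds
DECLARE @fullname VARCHAR(100), @NameID INT, @splitPos INT
DECLARE @startTime DATETIME = GETDATE()

DECLARE FullNames60 CURSOR FOR
    SELECT NameID, FullName FROM [NameSplitTable];
OPEN FullNames60;
FETCH NEXT FROM FullNames60 INTO @NameID, @fullname;

--Stop when we run out of rows or when a minute has gone by, whichever comes first
WHILE @@FETCH_STATUS = 0 AND DATEDIFF(SECOND, @startTime, GETDATE()) < 60
BEGIN
    SET @splitPos = CHARINDEX(' ', @fullname)
    IF (@splitPos > 0)
    BEGIN
        UPDATE [NameSplitTable]
        SET FirstName = SUBSTRING(@fullname, 1, @splitPos - 1)
          , LastName  = SUBSTRING(@fullname, @splitPos + 1, LEN(@fullname))
        WHERE NameID = @NameID
    END
    FETCH NEXT FROM FullNames60 INTO @NameID, @fullname;
END

CLOSE FullNames60
DEALLOCATE FullNames60

--How far did we get in a minute?
SELECT COUNT(*) AS RowsUpdated FROM [NameSplitTable] WHERE FirstName IS NOT NULL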
In this case it’s the cumulative effect of doing that many table scans and table updates to do the work. How much faster would it be if I could update all the rows at once?
Let’s reset our test table and find out.
UPDATE [NameSplitTable]
SET FirstName = NULL
  , LastName  = NULL
First, in order to do this string manipulation across multiple rows, we’re going to need a table-valued function that performs the same substring logic we used in the cursor.
CREATE FUNCTION tvf_splitNames
(
    @fullName VARCHAR(100)
)
RETURNS @output TABLE
(
      [FirstName] [varchar](50) NULL
    , [LastName] [varchar](50) NULL
)
AS
BEGIN
    DECLARE @splitPos INT
    SET @splitPos = CHARINDEX(' ', @fullName)

    IF (@splitPos > 0)
    BEGIN
        --Same split as the cursor: everything before the space is the first name,
        --everything after it is the last name
        INSERT INTO @output
        SELECT SUBSTRING(@fullName, 1, @splitPos - 1) AS FirstName
             , SUBSTRING(@fullName, @splitPos + 1, LEN(@fullName)) AS LastName
    END
    ELSE
    BEGIN
        --No space found, so treat the whole value as the first name
        INSERT INTO @output
        SELECT @fullName AS FirstName, NULL AS LastName
    END

    RETURN
END
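Before wiring the function into the full query, it’s worth a quick sanity check against a single value (the name here is just a placeholder):

--Quick sanity check of the function on one made-up value
SELECT FirstName, LastName
FROM dbo.tvf_splitNames('Jane Doe')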
Now that we have a table-valued function, we can use the CROSS APPLY technique to apply it to every row in our NameSplitTable. We simply pass in the full name and pull out the resulting first and last names.
SELECT nst.FullName
     , tvf.FirstName
     , tvf.LastName
FROM [NameSplitTable] nst
CROSS APPLY tvf_splitNames(nst.FullName) tvf
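One thing worth knowing about CROSS APPLY: it behaves like an inner join, so any outer row for which the function returns no rows disappears from the results. Our function always returns exactly one row, so that doesn’t matter here, but if you ever write a version that can return nothing, OUTER APPLY keeps the outer row with NULLs instead:

--OUTER APPLY keeps every row from NameSplitTable even if the function returns nothing for it
SELECT nst.FullName
     , tvf.FirstName
     , tvf.LastName
FROM [NameSplitTable] nst
OUTER APPLY tvf_splitNames(nst.FullName) tvf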
Yes, the CROSS APPLY query is only a SELECT, but I wanted to show you how the 84% of the cost attributed to the table scan changed once we moved from row-by-row to set-based processing.
We reduced the cost of the table scan by adding a table-valued function and a nested loop. Yes, comparing a SELECT to the original UPDATE is apples and oranges, but run the code: we were able to select back all 11,760 rows, complete with their first and last names, in 9 seconds.
The cursor took 2:35 (two minutes, 35 seconds) to actually update all the rows. Here is the set-based UPDATE:
UPDATE nst
SET nst.FirstName = tvf.FirstName
  , nst.LastName  = tvf.LastName
FROM [NameSplitTable] nst
CROSS APPLY tvf_splitNames(nst.FullName) tvf
The CROSS APPLY version updated all of them in 1 second.
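A quick check that nothing was missed never hurts; any row still showing a NULL FirstName would mean the function skipped it:

--Verify every row got a first name; expect zero if every FullName had a value
SELECT COUNT(*) AS RowsStillNull
FROM [NameSplitTable]
WHERE FirstName IS NULL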
Now that is apples to apples.
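As a side note, the function above is a multi-statement table-valued function, which SQL Server treats as a bit of a black box. A single-statement (inline) version of the same split, which the optimizer can fold directly into the outer query, would look something like this. I haven’t benchmarked it against the timings above, so take it as a variation to try rather than a guaranteed win, and the _inline suffix is only there to keep the two functions separate:

--An inline (single-statement) variation of the same split
CREATE FUNCTION tvf_splitNames_inline (@fullName VARCHAR(100))
RETURNS TABLE
AS
RETURN
(
    SELECT CASE WHEN CHARINDEX(' ', @fullName) > 0
                THEN SUBSTRING(@fullName, 1, CHARINDEX(' ', @fullName) - 1)
                ELSE @fullName
           END AS FirstName
         , CASE WHEN CHARINDEX(' ', @fullName) > 0
                THEN SUBSTRING(@fullName, CHARINDEX(' ', @fullName) + 1, LEN(@fullName))
                ELSE NULL
           END AS LastName
)

You would use it exactly the same way, swapping tvf_splitNames_inline into the CROSS APPLY.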
One of the harder things in SQL is knowing how well your code will scale. If you write code assuming the workload will be a handful of records once a day, and that code actually ends up running several hundred thousand times per minute, you may not be happy with how quickly the work gets done.
The trick is always learning what SQL does well, and what it doesn’t. Once you understand that, you’ll begin to know what you have to change to speed things up!
That’s it for my design patterns session. As always, if you have any questions, please let me know!