Improve Performance by Replacing Cursors

Posted on January 25, 2012 (updated February 1, 2012) by slowder

This is the end...

This is the last article for my upcoming design patterns session. Today we’re going to look into speeding up your queries by replacing cursors, a row-by-row operation, with a table-valued function. Before you start composing that hate-mail on how I’m wrong for not understanding cursors, let me say this: there is a time and a place for everything, even cursors (think running totals). The problem is I still see them used for row-by-row operations in SQL, and SQL is not optimized for row-by-row work. It is optimized for set-based operations.

If you can rethink your T-SQL logic and consider the whole set of data you’ll be working on, rather than what you have to do to each row, you’ll find you’re rewarded by the SQL Engine. Your results come back in record time!

Let’s look at an example I created to illustrate a row-by-row operation. This cursor takes in a column containing a user’s full name, and breaks it into first and last name using a space as the break point.

If you want to follow along, and you have my OriginalTroubledTable, use the following code to set up the NameSplitTable.

IF OBJECT_ID('adventureworks.dbo.NameSplitTable') IS NOT NULL
	DROP TABLE adventureworks.dbo.NameSplitTable
GO
CREATE TABLE [dbo].[NameSplitTable](
	  [NameID] INT IDENTITY(1,1)
	, [FullName] VARCHAR(100) NULL
	, [FirstName] [varchar](50) NULL
	, [LastName] [varchar](50) NULL
) ON [PRIMARY]

GO
--I want to have more than 10k rows so repeat inserts until we get more!
WHILE (SELECT COUNT(*) from  [NameSplitTable]) < 10000
BEGIN 
	INSERT INTO [NameSplitTable]
	([FullName])
	SELECT  firstname + ' ' + LastName
	FROM OriginalTroubledTable
END
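If you don’t have OriginalTroubledTable from my earlier posts, a minimal stand-in will do. This sketch is my assumption of its shape, covering only the two columns the insert loop above reads; create and fill it before running that loop.

--Hypothetical stand-in for OriginalTroubledTable: just the two columns
--the insert loop above reads. Create and fill it before running that loop.
CREATE TABLE dbo.OriginalTroubledTable (
	  FirstName VARCHAR(50) NOT NULL
	, LastName VARCHAR(50) NOT NULL
)
GO

INSERT INTO dbo.OriginalTroubledTable (FirstName, LastName)
VALUES ('John', 'Smith'), ('Jane', 'Doe'), ('Sam', 'Jones')
GO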

If we were doing a single row, the following code wouldn’t be that bad.

DECLARE @fullname varchar(100)
DECLARE @NameID INT
DECLARE @splitPos INT

SELECT TOP 1  
	  @NameID = NameID
	, @fullName = FullName
FROM [NameSplitTable]

SET @splitPos = CHARINDEX(' ', @fullname)

IF(@splitPos > 0)
BEGIN
	UPDATE [NameSplitTable] SET
		FirstName = SUBSTRING(@fullName, 1, @splitPos - 1)
		, LastName = SUBSTRING(@fullName, @splitPos + 1, LEN(@fullName))
	WHERE
		NameID = @NameID
END

Look at the execution plan; the cost of the UPDATE isn’t horrible. In my test environment the execution took less than one second.

The real cost of this query comes in when you run this code once for every row of the table. In my test case, I have 11,760 rows, which means the table is going to be scanned 11,760 times. That will add up quickly! Let’s see the cursor version of the code.

DECLARE @fullname varchar(100)
DECLARE @NameID INT
DECLARE @splitPos INT

DECLARE FullNames CURSOR FOR
	SELECT NameID, FullName
	FROM [NameSplitTable];

OPEN FullNames;
FETCH NEXT FROM FullNames INTO @NameID, @fullname;

WHILE @@FETCH_STATUS = 0
BEGIN
	SET @splitPos = CHARINDEX(' ', @fullname)
	IF(@splitPos > 0)
	BEGIN
		UPDATE [NameSplitTable] SET
			FirstName = SUBSTRING(@fullName, 1, @splitPos - 1)
			, LastName = SUBSTRING(@fullName, @splitPos + 1, LEN(@fullName))
		WHERE
			NameID = @NameID
	END

FETCH NEXT FROM FullNames INTO @NameID, @fullname;
END

CLOSE FullNames
DEALLOCATE FullNames

You can run this on your machine if you have a lot of patience, but I’m only going to let it run for 60 seconds and see how many rows I can update in that time. I was able to update 5,734 rows. And that’s the big problem with cursors: each individual operation can be very quick, but the cumulative effect of running the code so many times adds up!
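If you stop the cursor early like I did, a quick count of the rows that already have a FirstName shows how far it got:

--Count how many rows the cursor managed to split before it was stopped
SELECT COUNT(*) AS RowsUpdated
FROM [NameSplitTable]
WHERE FirstName IS NOT NULL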

In this case it’s the cumulative effect of doing that many table scans and updates. How much faster would it be if I could update all the rows at once?

Let’s reset our test table and find out.

update [NameSplitTable] set FirstName = NULL, LastName = NULL

First, in order to do this string manipulation across multiple rows, we’re going to need a table-valued function that does the same substring work we did in the cursor.

CREATE FUNCTION tvf_splitNames ( @fullName VARCHAR(100) )
RETURNS @output TABLE (
	[FirstName] [varchar](50) NULL
	, [LastName] [varchar](50) NULL
)
AS
BEGIN
	DECLARE @splitPos INT
	SET @splitPos = CHARINDEX(' ', @fullname)
	IF(@splitPos > 0)
		INSERT INTO @output
		SELECT SUBSTRING(@fullName, 1, @splitPos - 1) AS FirstName
			, SUBSTRING(@fullName, @splitPos + 1, LEN(@fullName)) AS LastName
	ELSE
		INSERT INTO @output
		SELECT @fullname as FirstName, NULL as LastName

	RETURN 
END
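You can sanity-check the function against a single value before pointing it at the whole table; for example:

--Quick test of the function with a single literal value
SELECT FirstName, LastName
FROM dbo.tvf_splitNames('Shannon Lowder')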

Now that we have a table-valued function, we can use the CROSS APPLY technique to apply it to all the rows in our NameSplitTable. We simply pass in the full name and pull out the resulting first and last names.

SELECT
	nst.FullName
	, tvf.FirstName
	, tvf.LastName
FROM [NameSplitTable] nst
CROSS APPLY tvf_splitNames (nst.FullName) tvf

Yes, this is only a SELECT, but I wanted to show you how the 84% cost attributed to the table scan changes once we move from row-by-row to set-based processing.

We reduced the cost of the table scan by adding a table-valued function and a nested loop. Yes, comparing a SELECT to the original UPDATE is apples and oranges, but run the code: we were able to select back all 11,760 rows, with their first and last names, in 9 seconds.
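If you want to capture those timings yourself, SET STATISTICS TIME reports the CPU and elapsed time for each statement in the Messages tab; a minimal sketch:

--Report CPU and elapsed time for the set-based SELECT
SET STATISTICS TIME ON

SELECT
	nst.FullName
	, tvf.FirstName
	, tvf.LastName
FROM [NameSplitTable] nst
CROSS APPLY tvf_splitNames (nst.FullName) tvf

SET STATISTICS TIME OFF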

The cursor took 2:35 to actually update all the rows. Here’s the same update done set-based with CROSS APPLY:

UPDATE nst SET
	nst.FirstName = tvf.firstName
	, nst.LastName = tvf.LastName
FROM [NameSplitTable] nst
CROSS APPLY tvf_splitNames (nst.FullName) tvf

The cross apply updated it in 1 second.

Now that is apples to apples.
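If you want to verify the set-based update produced the same results the cursor would have, a quick spot-check of the table does it:

--Spot-check that both name columns were populated
SELECT TOP 10 FullName, FirstName, LastName
FROM [NameSplitTable]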

One of the harder things to do in SQL is knowing how well your code is going to scale. If you write code thinking your workload will be a handful of records once a day, and that code is actually run several hundred thousand times per minute, you may not be happy with how quickly the work is performed.

The trick is always learning what SQL does well, and what it doesn’t. Once you understand that, you’ll begin to know what you have to change to speed things up!

That’s it for my design patterns session. As always, if you have any questions, please let me know!
