Last time, we learned that removing duplicate rows from a table by hand can be a long, tedious process, especially when you have as many duplicates as I created in our troubled table. Today we’re going to look at using temp tables to remove the extra rows. I already gave you an overview of this approach, so today we’re going to explore several versions of this pattern and discuss some of the pros and cons of each step.
Before we get started: my demo is going to use a “real” table instead of a #temp table or a @table variable. I’ve talked about the differences between them before, and I will revisit that topic later. For the purposes of this article, our temp table is just an ordinary table we’re going to drop when we’re finished using it.
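If you’d rather follow along with one of the other flavors, here’s a quick sketch of what each declaration looks like (using the same columns as the TroubledTable below; pick one, and the rest of the steps are the same):

-- A "real" table (what this article uses): visible to every session, lives until dropped
CREATE TABLE TempTroubledTable (FirstName VARCHAR(50), LastName VARCHAR(50), email VARCHAR(255))

-- A #temp table: scoped to your session, stored in tempdb
CREATE TABLE #TempTroubledTable (FirstName VARCHAR(50), LastName VARCHAR(50), email VARCHAR(255))

-- A @table variable: scoped to the current batch
DECLARE @TempTroubledTable TABLE (FirstName VARCHAR(50), LastName VARCHAR(50), email VARCHAR(255))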
Creating your temporary table
SELECT DISTINCT FirstName, LastName, email
INTO TempTroubledTable
FROM TroubledTable
or
CREATE TABLE TempTroubledTable
(
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    email VARCHAR(255)
)

INSERT INTO TempTroubledTable (FirstName, LastName, email)
SELECT DISTINCT FirstName, LastName, email
FROM TroubledTable
There are discussions all over the internet on why you should use CREATE TABLE followed by INSERT INTO rather than SELECT … INTO. It comes down to the locks that are held while the data is copied over. Generally you’re removing duplicates on an ad-hoc basis, so you don’t think a lot about code re-use while you’re doing your work. But if you’re working on a system with a high transaction volume, and you know resources on your server are scarce, I would recommend creating your table before inserting the data.
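If you want to see those locks for yourself, here’s a quick sketch (the session id of 52 is a made-up example; substitute the spid of whichever session is running the copy):

-- Show the locks held by the session doing the SELECT ... INTO
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 52 -- hypothetical spid of the copying session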
Either way, we now have a distinct list of every record in our source table, including the ones that were duplicated.
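If you want some reassurance before moving on, here’s a sanity check I’d run (a sketch using the table names from above); the two counts should match:

-- How many distinct rows does the source hold?
SELECT COUNT(*) AS SourceDistinctRows
FROM (
    SELECT DISTINCT FirstName, LastName, email
    FROM TroubledTable
) AS d

-- And how many rows made it into the copy?
SELECT COUNT(*) AS TempTableRows
FROM TempTroubledTable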
Removing the duplicates from the source table
Again, we have a choice: DELETE or TRUNCATE. To make that call you need to think about how the source table is used.
- Do you have exclusive access to the source table? Will anyone try to read or write to the table while you delete the duplicate rows?
- If you can get exclusive access, TRUNCATE will remove all the rows (and therefore the duplicates) faster than a DELETE, because TRUNCATE logs only the page deallocations rather than every deleted row.
- If you can’t get exclusive access, DELETE will allow you to remove the records while maintaining other users’ access to the table. Be aware, though, that you could experience blocking because those other users are accessing the table at the same time, and you may need to delete the duplicate rows in batches (see the sketch after this list).
- Is your source table very large? Do you have 100,000 rows? A million rows? More? Or is your transaction log size limited in some way?
- If you have a very large table, the difference in how TRUNCATE is logged could save you from having your transaction log fill up your drive. That risk becomes very real the more rows you have to remove.
- If the source table isn’t very large, then DELETE will perform about as well as TRUNCATE would.
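For the batched approach, here’s a minimal sketch (my own illustration, using the table names from this series). Each pass deletes a limited number of rows, which keeps individual transactions small, gives the log a chance to clear between batches, and lets other sessions get a word in:

WHILE 1 = 1
BEGIN
    -- Remove up to 10,000 matching rows per pass
    DELETE TOP (10000) tt
    FROM TroubledTable tt
    INNER JOIN TempTroubledTable ttt
        ON tt.email = ttt.email

    -- Stop once a pass finds nothing left to delete
    IF @@ROWCOUNT = 0
        BREAK
END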
So, depending on how you answered the questions above, you could use either of the following commands to remove the duplicates.
DELETE tt
FROM TroubledTable tt
INNER JOIN TempTroubledTable ttt
    ON tt.email = ttt.email
This can be quite costly: both the join and the transaction log are working against us. (The join is really only needed if you wanted to keep some of the rows.) Or, if you can get that exclusive access:
TRUNCATE TABLE TroubledTable
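If you’re curious how differently the two commands treat the log on your system, one simple way to eyeball it (a suggestion of mine, not a required step) is to check log usage before and after each one:

-- Reports log size and percent used for every database
DBCC SQLPERF (LOGSPACE)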
Either way, once you verify you’ve removed all the duplicate rows, you’re ready to put the unique rows back into your source table.
SELECT email, COUNT(*)
FROM TroubledTable
GROUP BY email
HAVING COUNT(*) > 1
Putting the unique rows back into your source table
Since we only selected the DISTINCT rows into our temp table, we can restore the rows with a simple:
INSERT INTO TroubledTable (FirstName, LastName, email)
SELECT FirstName, LastName, email
FROM TempTroubledTable
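If you’re nervous about the window between emptying the source table and reloading it, here’s a more cautious variant I’d sketch out (my own suggestion, combining the TRUNCATE and the reload): wrapping both in one transaction means a failure part-way through rolls everything back rather than leaving the table empty.

BEGIN TRANSACTION

    -- Empty the source, then reload it from the de-duplicated copy;
    -- in SQL Server, TRUNCATE rolls back cleanly inside a transaction
    TRUNCATE TABLE TroubledTable

    INSERT INTO TroubledTable (FirstName, LastName, email)
    SELECT FirstName, LastName, email
    FROM TempTroubledTable

COMMIT TRANSACTION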
And now we can get rid of our temp table.
DROP TABLE TempTroubledTable
What if the INSERT back to the source fails?
Occasionally the transaction log is very limited in space, or it’s simply too slow for my liking. There is another way to get that data from your temp table back to the source: you can rename the temp table. Of course, that means you have to drop the current source table first, then do the rename.
DROP TABLE TroubledTable
GO
EXEC sp_rename @objname = 'TempTroubledTable', @newname = 'TroubledTable'
The great thing about this is that the rename is a data definition language (DDL) operation, a metadata-only change, so it runs much faster than the data manipulation language (DML) INSERT/SELECT would. It’s something to keep in mind when manipulating data like this. One caveat: any indexes, constraints, and permissions defined on the original table disappear when you drop it, so you’ll need to recreate them on the renamed table.
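Before dropping anything, it’s worth taking stock of what the original table has defined on it. Here’s a quick sketch against the system catalog views (using the names we’ve been working with):

-- Indexes on the table you're about to drop
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('TroubledTable')

-- Constraints and other dependent objects
SELECT name, type_desc
FROM sys.objects
WHERE parent_object_id = OBJECT_ID('TroubledTable')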
So, verify you’ve cleared all the duplicates out of the source table:
SELECT email, COUNT(*)
FROM TroubledTable
GROUP BY email
HAVING COUNT(*) > 1
And make sure you have data in your source table:
SELECT TOP 100 * FROM TroubledTable
And now you’re done!
Next time, we’ll cover how to do this using a self-join from the source table back to itself. It’s yet another way you can remove duplicates from your tables. Until then, if you have any questions, please let me know!