SQL Server to Databricks Profiler

Posted on January 26, 2023 (updated February 18, 2023) by slowder

Recently, a client expressed interest in migrating their data warehouse from Azure SQL DB to Databricks. They weren’t looking to move because of performance issues in Azure SQL DB; they were running on the Hyperscale offering. They wanted to share a common data architecture between their data warehouse and data science teams. To help them estimate the effort of moving to Databricks, I needed to profile their current Azure SQL DB instance and gather some information.

Databricks has profilers to do this for Snowflake and Teradata. These profilers are notebooks you run from a Databricks cluster to connect to a source system and collect some useful metrics. The problem was that there wasn’t one for SQL Server, so I took the opportunity to put one together. Many of these queries are inspired by Glenn Berry’s SQL Server performance scripts.

Notebook Configuration

I set up a few Databricks widgets to collect the connection information: server name, database name, username, and password. I then fill those values into a JDBC connection string and its connection properties.
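
The widget setup itself is only a few lines. A minimal sketch, assuming the widget names match the variable names used in the connection string below (in practice you’d probably pull the password from a secret scope rather than a plain text widget):

# Collect the connection details from notebook widgets (widget names are assumptions)
dbutils.widgets.text("SqlServerName", "")
dbutils.widgets.text("SqlDbName", "")
dbutils.widgets.text("UserName", "")
dbutils.widgets.text("Password", "")

SqlServerName = dbutils.widgets.get("SqlServerName")
SqlDbName = dbutils.widgets.get("SqlDbName")
UserName = dbutils.widgets.get("UserName")
Password = dbutils.widgets.get("Password")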

synServerlessJdbc = f"jdbc:sqlserver://{SqlServerName}.database.windows.net:1433;database={SqlDbName};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
JdbcconnectionProperties = {
  "user" : UserName,
  "password" : Password
}

Then I create a schema in the metastore catalog where I plan to write the output from all my profiler queries.

spark.sql(f'CREATE SCHEMA IF NOT EXISTS {SqlDbName}')

RunQuery function

Next, I defined a simple function that runs a query and stores the results in a table I can use later to analyze the source instance.

def runQuery(queryText, querySubject):
  # queryText must be a parenthesized query with an alias, e.g. "(SELECT ...) alias",
  # so the JDBC reader can treat it as a table expression.
  df = spark.read.jdbc(url=synServerlessJdbc, table=queryText, properties=JdbcconnectionProperties)
  # Write the results to the profiler schema, replacing any output from a previous run.
  df.write.mode("overwrite").option("overwriteSchema", "true").saveAsTable(f'{SqlDbName}.{querySubject}')

I’m skipping error handling for now; I’ll need to go back and add that in later.
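
If you want a starting point, a minimal sketch of that error handling might look like the following; the runQuerySafe name and the print-and-continue behavior are placeholders, not part of the original notebook:

def runQuerySafe(queryText, querySubject):
  # Hypothetical wrapper: catch failures so one bad query doesn't stop the whole profiling run
  try:
    df = spark.read.jdbc(url=synServerlessJdbc, table=queryText, properties=JdbcconnectionProperties)
    df.write.mode("overwrite").option("overwriteSchema", "true").saveAsTable(f'{SqlDbName}.{querySubject}')
  except Exception as e:
    print(f"Profiler query '{querySubject}' failed: {e}")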

Top-level Database Details

The first check I want to run against the client’s database looks at the instance to see if there are any questionable instance-level settings. I got this query from Glenn Berry’s script. In Azure SQL DB these settings are read-only, but if the database is running on a VM or a physical server in a client’s environment, these values could have been changed. If a value doesn’t match its value in use and is_dynamic is false, I want to know why the setting was changed. This tells me whether a non-standard configuration is in play, which might prevent us from continuing the migration.

SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
ORDER BY name OPTION (RECOMPILE);
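
To run that through runQuery, the T-SQL gets wrapped as a parenthesized subquery with an alias, the same pattern used for the rest of the checks. The ORDER BY and RECOMPILE hint have to be dropped because they aren’t valid inside a derived table, and the 'configurations' subject name here is just an example:

sqltext = '''(
    SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
    FROM sys.configurations WITH (NOLOCK)
) alias'''

runQuery(sqltext, 'configurations')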

I’ll also want to look at some database-level details. Again, I’m looking for features in use that could cause problems when we move to Databricks.

sqltext = f'''(
    SELECT 
        name
        , create_date
        , compatibility_level
        , collation_name
        , is_auto_shrink_on
        , state_desc
        , snapshot_isolation_state_desc
        , is_read_committed_snapshot_on
        , recovery_model_desc
        , page_verify_option_desc
        , is_auto_create_stats_on
        , is_auto_create_stats_incremental_on
        , is_auto_update_stats_on
        , is_auto_update_stats_async_on
        , is_fulltext_enabled
        , is_parameterization_forced
        , is_query_store_on
        , is_cdc_enabled
        , resource_pool_id
        , is_temporal_history_retention_enabled
        , is_memory_optimized_enabled
        , is_ledger_on
    FROM sys.databases 
    WHERE name = '{SqlDbName}'
) alias'''
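
That string goes straight to the helper; the 'database_details' subject name is just an example:

runQuery(sqltext, 'database_details')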

Then, let’s look at the size of the database. This is another query from Glenn’s collection.

SELECT 
        CAST(SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8192.) / 1024 / 1024 AS DECIMAL(15,2)) AS [Database Size In MB],
        CAST(SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8192.) / 1024 / 1024 / 1024 AS DECIMAL(15,2)) AS [Database Size In GB]
    FROM sys.database_files WITH (NOLOCK)
    WHERE [type_desc] = N'ROWS' OPTION (RECOMPILE);

Then, grab a few file-level details. If the client let the database balloon in size at some point but cleaned out unused data later, you may find a lot of free space. You can use this as an opportunity to help them resize their current database while you continue working toward the migration.

SELECT 
        f.name AS [File Name], 
        f.physical_name AS [Physical Name], 
        CAST((f.size/128.0) AS DECIMAL(15,2)) AS [Total Size in MB],
        CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS DECIMAL(15,2)) AS [Available Space In MB], 
        f.[file_id], 
        fg.name AS [Filegroup Name],
        f.is_percent_growth, 
        f.growth, 
        fg.is_default, 
        fg.is_read_only, 
        fg.is_autogrow_all_files
    FROM sys.database_files AS f WITH (NOLOCK) 
    LEFT OUTER JOIN sys.filegroups AS fg WITH (NOLOCK)
        ON f.data_space_id = fg.data_space_id
    ORDER BY f.[file_id] OPTION (RECOMPILE);

Another query I borrowed from Glenn helps me understand IO details: is the database mostly writes, mostly reads, or a mix? This is my first indication of how I’ll need to lay out the data in the data lake to get good IO performance once we’ve migrated to Databricks.

SELECT 
        DB_NAME(DB_ID()) AS [Database Name], 
        df.name AS [Logical Name], 
        vfs.[file_id], 
        df.type_desc,
        df.physical_name AS [Physical Name], 
        CAST(vfs.size_on_disk_bytes/1048576.0 AS DECIMAL(10, 2)) AS [Size on Disk (MB)],
        vfs.num_of_reads, 
        vfs.num_of_writes, 
        vfs.io_stall_read_ms, 
        vfs.io_stall_write_ms,
        CAST(100. * vfs.io_stall_read_ms/(vfs.io_stall_read_ms + vfs.io_stall_write_ms) AS DECIMAL(10,1)) AS [IO Stall Reads Pct],
        CAST(100. * vfs.io_stall_write_ms/(vfs.io_stall_write_ms + vfs.io_stall_read_ms) AS DECIMAL(10,1)) AS [IO Stall Writes Pct],
        (vfs.num_of_reads + vfs.num_of_writes) AS [Writes + Reads], 
        CAST(vfs.num_of_bytes_read/1048576.0 AS DECIMAL(10, 2)) AS [MB Read], 
        CAST(vfs.num_of_bytes_written/1048576.0 AS DECIMAL(10, 2)) AS [MB Written],
        CAST(100. * vfs.num_of_reads/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Reads Pct],
        CAST(100. * vfs.num_of_writes/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Write Pct],
        CAST(100. * vfs.num_of_bytes_read/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Read Bytes Pct],
        CAST(100. * vfs.num_of_bytes_written/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Written Bytes Pct]
    FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    INNER JOIN sys.database_files AS df WITH (NOLOCK)
        ON vfs.[file_id]= df.[file_id] OPTION (RECOMPILE);

Database Object Details

Next, I’ll get object counts and sizes. This covers tables, views, functions, and stored procedures. Be sure to exclude objects that ship with SQL Server (is_ms_shipped = 0).

SELECT 
        o.object_id,
        o.parent_object_id,
        SCHEMA_NAME(o.schema_id) AS schema_name,
        o.name AS object_name,
        o.type_desc,
        o.create_date,
        o.modify_date,
        e.location AS external_table_location,
        eff.name AS external_table_format,
        CASE WHEN e.location IS NOT NULL THEN 'EXTERNAL_USER_TABLE'
            ELSE o.type_desc END AS revised_table_type,
        i.[type_desc] AS [index_type_desc],
        ps.partition_id,
        SUM(ps.row_count) AS row_count,
        SUM(ps.reserved_page_count * 8.0) AS reserved_space_KB,
        SUM((ps.reserved_page_count * 8.0) / 1000) AS reserved_space_MB,
        SUM((ps.reserved_page_count * 8.0) / 1000000) AS reserved_space_GB,
        SUM((ps.reserved_page_count * 8.0) / 1000000000) AS reserved_space_TB,
        SUM(ps.used_page_count * 8.0) AS used_space_KB,
        SUM((ps.used_page_count * 8.0) / 1000) AS used_space_MB,
        SUM((ps.used_page_count * 8.0) / 1000000) AS used_space_GB,
        SUM((ps.used_page_count * 8.0) / 1000000000) AS used_space_TB
    FROM sys.objects o
    INNER JOIN sys.schemas s
        ON o.schema_id = s.schema_id
    LEFT JOIN sys.tables t -- left join so views, functions, and procedures are kept
        ON o.object_id = t.object_id
    LEFT JOIN sys.indexes i
        ON t.[object_id] = i.[object_id]
        AND i.[index_id] <= 1
    LEFT JOIN sys.dm_db_partition_stats ps -- join on object_id and index_id so stats match the right object
        ON i.[object_id] = ps.[object_id]
        AND i.index_id = ps.index_id
    LEFT JOIN sys.external_tables e
        ON o.object_id = e.object_id
    LEFT JOIN sys.external_file_formats eff
        ON e.file_format_id = eff.file_format_id
    WHERE 
        o.is_ms_shipped = 0
    GROUP BY
        o.object_id,
        o.parent_object_id,
        SCHEMA_NAME(o.schema_id),
        o.name,
        o.type_desc,
        o.create_date,
        o.modify_date,
        e.location,
        eff.name,
        CASE WHEN e.location IS NOT NULL THEN 'EXTERNAL_USER_TABLE'
            ELSE o.type_desc END,
        i.[type_desc],
        ps.partition_id
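
With those results saved, a quick aggregation shows which objects carry the most rows and space, which is usually where the migration effort concentrates. A sketch, assuming the results were saved under an 'object_details' subject name:

# Rank the largest objects by used space to see what the migration will move first
spark.sql(f"""
    SELECT schema_name, object_name,
           SUM(row_count) AS total_rows,
           SUM(used_space_GB) AS used_gb
    FROM {SqlDbName}.object_details
    GROUP BY schema_name, object_name
    ORDER BY used_gb DESC
    LIMIT 20
""").show(truncate=False)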

Column Details

Next, get some details about columns.

SELECT 
        o.object_id,
        ac.name,
        CAST(column_id AS INT) as column_id,
        CAST(system_type_id AS INT) as system_type_id,
        CAST(user_type_id AS INT) as user_type_id,
        max_length,
        precision,
        scale,
        is_nullable,
        is_identity,
        is_computed,
        encryption_type_desc,
        is_masked
    FROM sys.all_columns ac
    INNER JOIN sys.objects o
        on ac.object_id = o.object_id
    WHERE 
        o.is_ms_shipped = 0
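
One quick use of that output is flagging columns that will need special handling on the Databricks side, such as computed, masked, or encrypted columns. A sketch, assuming the results were saved under a 'column_details' subject name and that the bit flags arrive as booleans through the JDBC reader:

# Columns that will need extra attention during the migration
spark.sql(f"""
    SELECT *
    FROM {SqlDbName}.column_details
    WHERE is_computed OR is_masked OR encryption_type_desc IS NOT NULL
""").show(truncate=False)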

Workload Details

Now that we have object-level data, let’s look at the workload running against this database. This can help identify what to convert first and which objects are referenced most often. I look at both the Query Store and sys.dm_exec_query_stats.

SELECT
        q.query_id,
        p.plan_id,    
        t.query_sql_text,
        (rs.avg_duration / 1000) as avg_duration_ms,
        (rs.min_duration / 1000) as min_duration_ms,
        (rs.max_duration / 1000) as max_duration_ms,
        (rs.avg_duration / 1000000) as avg_duration_secs,
        (rs.min_duration / 1000000) as min_duration_secs,
        (rs.max_duration / 1000000) as max_duration_secs,
        CAST(q.last_compile_start_time as DATETIME) as last_compile_start_time,
        CAST(q.initial_compile_start_time as DATETIME) as initial_compile_start_time,
        CAST(q.last_execution_time as DATETIME) as last_execution_time,
        q.count_compiles,
        CAST(rs.last_execution_time  as DATETIME) as runtime_stats_last_execution_time
    FROM sys.query_store_query q
    INNER JOIN sys.query_store_query_text t 
        ON q.query_text_id = t.query_text_id
    INNER JOIN sys.query_store_plan p 
        ON p.query_id = q.query_id
    INNER JOIN sys.query_store_runtime_stats rs 
        ON rs.plan_id = p.plan_id

And here is the equivalent look at the plan cache through sys.dm_exec_query_stats (the elapsed times there are reported in microseconds):

SELECT 
         qs.sql_handle
       , qs.plan_handle
       , st.text AS query_sql_text
       , execution_count
       , (total_elapsed_time / 1000) / execution_count as avg_duration_ms
       , min_elapsed_time / 1000 as min_duration_ms
       , max_elapsed_time / 1000 as max_duration_ms
       , (total_elapsed_time / 1000000) / execution_count as avg_duration_secs
       , min_elapsed_time / 1000000 as min_duration_secs
       , max_elapsed_time / 1000000 as max_duration_secs
       , last_execution_time
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
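
Once both result sets are in the metastore, sorting by duration gives a first cut at the conversion backlog; the 'query_store_stats' subject name is just an example:

# The longest-running Query Store entries are usually the first candidates to convert
spark.sql(f"""
    SELECT query_id, query_sql_text, avg_duration_ms, count_compiles
    FROM {SqlDbName}.query_store_stats
    ORDER BY avg_duration_ms DESC
    LIMIT 25
""").show(truncate=False)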

Conclusion

You may decide to run more queries based on the results you get from these. This is just a way to estimate the effort required to migrate a database over to Databricks. In our case, the results were promising. We found that most of the logic to transform data from raw to silver, and then from silver to gold, lived in the database. The raw-to-silver transforms were a variation of a raw data vault load, so building a Spark version of that transformation would be straightforward. The more significant effort would be converting the silver-to-gold transforms from stored procedures to Spark-compatible code.

We discovered all the ingestion into the database was already in ADF, so a metadata-driven ADF pipeline generator let us replace that part with minimal effort.

Click here to get a copy of the notebook.
