Recently I had a client who expressed interest in migrating their data warehouse from Azure SQL DB to Databricks. They weren’t looking to move because of performance issues in Azure SQL DB; they were running on the Hyperscale offering. They wanted to share a common data architecture between their data warehouse and data science teams. To help them estimate the effort of moving to Databricks, I needed to profile their current Azure SQL DB instance and gather some information.
Databricks has profilers that do this for Snowflake and Teradata. These profilers are notebooks you run from a Databricks cluster; they connect to a source system and collect some useful metrics. The problem was there isn’t one for SQL Server, so I took the opportunity to put one together. Many of these queries are inspired by Glenn Berry’s SQL Server performance scripts.
Notebook Configuration
I set up a few Databricks widgets to collect the connection information: server name, database name, username, and password. I then fill those values into a JDBC connection string.
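The widget setup itself isn’t shown below, but here is a minimal sketch, assuming dbutils.widgets.text is used for each value (the widget names are my own):
# Collect the connection details from notebook widgets (widget names are illustrative)
dbutils.widgets.text("SqlServerName", "")
dbutils.widgets.text("SqlDbName", "")
dbutils.widgets.text("UserName", "")
dbutils.widgets.text("Password", "")

SqlServerName = dbutils.widgets.get("SqlServerName")
SqlDbName = dbutils.widgets.get("SqlDbName")
UserName = dbutils.widgets.get("UserName")
Password = dbutils.widgets.get("Password")  # in practice a secret scope is a better home for this
With those values in hand, the JDBC URL and connection properties come together like this: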
synServerlessJdbc = f"jdbc:sqlserver://{SqlServerName}.database.windows.net:1433;database={SqlDbName};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
JdbcconnectionProperties = {
    "user" : UserName,
    "password" : Password
}
Then I create a schema in the metastore catalog where I plan to write the output from all my profiler queries.
spark.sql(f'CREATE SCHEMA IF NOT EXISTS {SqlDbName}')
RunQuery function
Then I defined a simple function that runs a query and stores the results in a table I can use later to analyze the source instance.
def runQuery(queryText, querySubject):
    df = spark.read.jdbc(url=synServerlessJdbc, table=queryText, properties=JdbcconnectionProperties)
    df.write.mode("overwrite").option("overwriteSchema", "true").saveAsTable(f'{SqlDbName}.{querySubject}')
I am skipping any error handling for now. I need to go back and add that in later.
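Here is a minimal example of a call, using the instance-settings query from the next section (the querySubject value is my own naming). The query text gets wrapped in parentheses and aliased so spark.read.jdbc can treat it as a derived table:
# Example call; the query must be parenthesized and aliased so the JDBC
# reader can push it down as a derived table. ORDER BY and OPTION hints
# are left off because they are not valid inside a derived table.
configQuery = '''(
SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
) alias'''
runQuery(configQuery, 'configurations')  # lands in {SqlDbName}.configurations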
Top-level Database Details
The first check I want to do against the client’s database is to look at the instance and see if there are any questionable instance-level settings. I got this query from Glenn Berry’s script. In Azure SQL DB these settings are read-only, but if the database is running in a VM or on a physical server in the client’s environment, these values could have been changed. If a setting’s value is not equal to its value in use and is_dynamic is false, I want to know why it was changed. That could tell me a non-standard configuration is in play, which might keep us from continuing the migration.
SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
ORDER BY name OPTION (RECOMPILE);
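Once those results are saved, a quick check for the condition I described might look like this (assuming the output was written to a table named configurations, which is my own naming):
# Flag non-dynamic settings whose configured value differs from the value in use
spark.sql(f"""
SELECT name, value, value_in_use, description
FROM {SqlDbName}.configurations
WHERE value <> value_in_use AND NOT is_dynamic
""").show(truncate=False)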
I’ll also want to look at some database-level details. Again, I’m looking for features in use that could cause problems when we move to Databricks.
sqltext = f'''(
SELECT
name
, create_date
, compatibility_level
, collation_name
, is_auto_shrink_on
, state_desc
, snapshot_isolation_state_desc
, is_read_committed_snapshot_on
, recovery_model_desc
, page_verify_option_desc
, is_auto_create_stats_on
, is_auto_create_stats_incremental_on
, is_auto_update_stats_on
, is_auto_update_stats_async_on
, is_fulltext_enabled
, is_parameterization_forced
, is_query_store_on
, is_cdc_enabled
, resource_pool_id
, is_temporal_history_retention_enabled
, is_memory_optimized_enabled
, is_ledger_on
FROM sys.databases
WHERE name = '{SqlDbName}'
) alias'''
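After that lands in the metastore, a quick pass over the feature flags surfaces anything that needs a closer look before migration. A sketch, assuming the output was saved as database_details:
# Pull the feature flags that tend to complicate a migration
spark.sql(f"""
SELECT name, is_cdc_enabled, is_memory_optimized_enabled, is_ledger_on,
       is_fulltext_enabled, is_temporal_history_retention_enabled
FROM {SqlDbName}.database_details
""").show(truncate=False)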
Then, let’s look at the size of the database. This is another query from Glenn’s collection.
SELECT
CAST(SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8192.) / 1024 / 1024 AS DECIMAL(15,2)) AS [Database Size In MB],
CAST(SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8192.) / 1024 / 1024 / 1024 AS DECIMAL(15,2)) AS [Database Size In GB]
FROM sys.database_files WITH (NOLOCK)
WHERE [type_desc] = N'ROWS' OPTION (RECOMPILE);
Then, grab a few file-level details. If the client let their database balloon in size at one point but cleaned out unused data later, you may find a lot of free space. You can use this opportunity to help them resize their current database while you continue working toward the migration.
SELECT
f.name AS [File Name],
f.physical_name AS [Physical Name],
CAST((f.size/128.0) AS DECIMAL(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS DECIMAL(15,2)) AS [Available Space In MB],
f.[file_id],
fg.name AS [Filegroup Name],
f.is_percent_growth,
f.growth,
fg.is_default,
fg.is_read_only,
fg.is_autogrow_all_files
FROM sys.database_files AS f WITH (NOLOCK)
LEFT OUTER JOIN sys.filegroups AS fg WITH (NOLOCK)
ON f.data_space_id = fg.data_space_id
ORDER BY f.[file_id] OPTION (RECOMPILE);
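To put a number on that free space, something like this works against the saved results (the table name, and the assumption that the column aliases survive the write unchanged, are mine):
# Estimate how much of the allocated file space is actually unused
spark.sql(f"""
SELECT `File Name`,
       `Total Size in MB`,
       `Available Space In MB`,
       ROUND(100 * `Available Space In MB` / `Total Size in MB`, 1) AS free_pct
FROM {SqlDbName}.file_details
""").show(truncate=False)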
Another query I borrowed from Glenn helps me understand IO details: is the database mostly writes, mostly reads, or somewhere in between? This is my first indication of how I’ll need to lay out the data in the data lake to get good IO performance once we’ve migrated over to Databricks.
SELECT
DB_NAME(DB_ID()) AS [Database Name],
df.name AS [Logical Name],
vfs.[file_id],
df.type_desc,
df.physical_name AS [Physical Name],
CAST(vfs.size_on_disk_bytes/1048576.0 AS DECIMAL(10, 2)) AS [Size on Disk (MB)],
vfs.num_of_reads,
vfs.num_of_writes,
vfs.io_stall_read_ms,
vfs.io_stall_write_ms,
CAST(100. * vfs.io_stall_read_ms/(vfs.io_stall_read_ms + vfs.io_stall_write_ms) AS DECIMAL(10,1)) AS [IO Stall Reads Pct],
CAST(100. * vfs.io_stall_write_ms/(vfs.io_stall_write_ms + vfs.io_stall_read_ms) AS DECIMAL(10,1)) AS [IO Stall Writes Pct],
(vfs.num_of_reads + vfs.num_of_writes) AS [Writes + Reads],
CAST(vfs.num_of_bytes_read/1048576.0 AS DECIMAL(10, 2)) AS [MB Read],
CAST(vfs.num_of_bytes_written/1048576.0 AS DECIMAL(10, 2)) AS [MB Written],
CAST(100. * vfs.num_of_reads/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Reads Pct],
CAST(100. * vfs.num_of_writes/(vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)) AS [# Write Pct],
CAST(100. * vfs.num_of_bytes_read/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Read Bytes Pct],
CAST(100. * vfs.num_of_bytes_written/(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)) AS [Written Bytes Pct]
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
INNER JOIN sys.database_files AS df WITH (NOLOCK)
ON vfs.[file_id]= df.[file_id] OPTION (RECOMPILE);
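Once saved, the per-file stats roll up into a single read/write mix for the database; a sketch, assuming the results were written to a table named io_stats:
# Overall read vs. write mix across all files
spark.sql(f"""
SELECT ROUND(100 * SUM(`MB Read`) / (SUM(`MB Read`) + SUM(`MB Written`)), 1) AS read_pct,
       ROUND(100 * SUM(`MB Written`) / (SUM(`MB Read`) + SUM(`MB Written`)), 1) AS write_pct
FROM {SqlDbName}.io_stats
""").show()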
Database Object Details
Next, I’ll get object counts and sizes. This will get tables, views, functions, and stored procedures. Be sure to exclude objects that ship with SQL Server.
SELECT
o.object_id,
o.parent_object_id,
SCHEMA_NAME(o.schema_id) AS schema_name,
o.name AS object_name,
o.type_desc,
o.create_date,
o.modify_date,
e.location AS external_table_location,
eff.name AS external_table_format,
CASE WHEN e.location IS NOT NULL THEN 'EXTERNAL_USER_TABLE'
ELSE o.type_desc END AS revised_table_type,
i.[type_desc] AS [index_type_desc],
ps.partition_id,
SUM(ps.row_count) AS row_count,
SUM(ps.reserved_page_count * 8.0) AS reserved_space_KB,
SUM((ps.reserved_page_count * 8.0) / 1000) AS reserved_space_MB,
SUM((ps.reserved_page_count * 8.0) / 1000000) AS reserved_space_GB,
SUM((ps.reserved_page_count * 8.0) / 1000000000) AS reserved_space_TB,
SUM((ps.used_page_count * 8.0)) AS used_space_KB,
SUM((ps.used_page_count * 8.0) / 1000) AS used_space_MB,
SUM((ps.used_page_count * 8.0) / 1000000) AS used_space_GB,
SUM((ps.used_page_count * 8.0) / 1000000000) AS used_space_TB
FROM sys.objects o
INNER JOIN sys.schemas s
on o.schema_id = s.schema_id
LEFT JOIN sys.tables t
on o.object_id = t.object_id
LEFT JOIN sys.indexes i
ON t.[object_id] = i.[object_id]
AND i.[index_id] <= 1
LEFT JOIN sys.dm_db_partition_stats ps
ON i.[object_id] = ps.[object_id]
AND i.index_id = ps.index_id
LEFT JOIN sys.external_tables e
on o.object_id = e.object_id
LEFT JOIN sys.external_file_formats eff
on e.file_format_id = eff.file_format_id
WHERE
o.is_ms_shipped = 0
GROUP BY
o.object_id,
o.parent_object_id,
SCHEMA_NAME(o.schema_id),
o.name,
o.type_desc,
o.create_date,
o.modify_date,
e.location,
eff.name,
CASE WHEN e.location IS NOT NULL THEN 'EXTERNAL_USER_TABLE'
ELSE o.type_desc END,
i.[type_desc],
ps.partition_id
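With those results saved, object counts and total sizes fall out of a simple aggregation (again, the table name object_details is my own):
# Count objects and total their reserved space by type
spark.sql(f"""
SELECT revised_table_type,
       COUNT(DISTINCT object_id) AS object_count,
       ROUND(SUM(reserved_space_GB), 2) AS total_reserved_gb
FROM {SqlDbName}.object_details
GROUP BY revised_table_type
ORDER BY total_reserved_gb DESC
""").show(truncate=False)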
Column Details
Next, get some details about columns.
SELECT
o.object_id,
ac.name,
CAST(column_id AS INT) as column_id,
CAST(system_type_id AS INT) as system_type_id,
CAST(user_type_id AS INT) as user_type_id,
max_length,
precision,
scale,
is_nullable,
is_identity,
is_computed,
encryption_type_desc,
is_masked
FROM sys.all_columns ac
INNER JOIN sys.objects o
on ac.object_id = o.object_id
WHERE
o.is_ms_shipped = 0
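One thing I pull out of this right away is any column using Always Encrypted or dynamic data masking, since those deserve extra attention during the migration. A sketch, assuming the output was saved as column_details:
# Columns that are encrypted or masked at the source
spark.sql(f"""
SELECT object_id, name, encryption_type_desc, is_masked
FROM {SqlDbName}.column_details
WHERE encryption_type_desc IS NOT NULL OR is_masked
""").show(truncate=False)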
Workload Details
Now that we have object-level data, let’s look at the workload running against this database. This can help identify what to convert first and which objects are referenced most often. I look at both the Query Store and sys.dm_exec_query_stats.
SELECT
q.query_id,
p.plan_id,
t.query_sql_text,
(rs.avg_duration / 1000) as avg_duration_ms,
(rs.min_duration / 1000) as min_duration_ms,
(rs.max_duration / 1000) as max_duration_ms,
(rs.avg_duration / 1000000) as avg_duration_secs,
(rs.min_duration / 1000000) as min_duration_secs,
(rs.max_duration / 1000000) as max_duration_secs,
CAST(q.last_compile_start_time as DATETIME) as last_compile_start_time,
CAST(q.initial_compile_start_time as DATETIME) as initial_compile_start_time,
CAST(q.last_execution_time as DATETIME) as last_execution_time,
q.count_compiles,
CAST(rs.last_execution_time as DATETIME) as runtime_stats_last_execution_time
FROM sys.query_store_query q
INNER JOIN sys.query_store_query_text t
ON q.query_text_id = t.query_text_id
INNER JOIN sys.query_store_plan p
ON p.query_id = q.query_id
INNER JOIN sys.query_store_runtime_stats rs
ON rs.plan_id = p.plan_id
The second query pulls similar timings from the plan cache via sys.dm_exec_query_stats, whose elapsed-time columns are reported in microseconds.
SELECT
qs.sql_handle
, qs.plan_handle
, st.text AS query_sql_text
, execution_count
, (total_elapsed_time / execution_count) / 1000.0 as avg_duration_ms
, min_elapsed_time / 1000.0 as min_duration_ms
, max_elapsed_time / 1000.0 as max_duration_ms
, (total_elapsed_time / execution_count) / 1000000.0 as avg_duration_secs
, min_elapsed_time / 1000000.0 as min_duration_secs
, max_elapsed_time / 1000000.0 as max_duration_secs
, last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
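From there, ranking the results by duration or execution count points at the heaviest consumers and, by extension, what to convert first. A sketch, assuming the two result sets were saved as query_store_stats and exec_query_stats:
# Longest-running statements captured in Query Store
spark.sql(f"""
SELECT query_sql_text, avg_duration_ms, max_duration_ms
FROM {SqlDbName}.query_store_stats
ORDER BY avg_duration_ms DESC
LIMIT 20
""").show(truncate=False)

# Most frequently executed statements still in the plan cache
spark.sql(f"""
SELECT query_sql_text, execution_count, avg_duration_ms
FROM {SqlDbName}.exec_query_stats
ORDER BY execution_count DESC
LIMIT 20
""").show(truncate=False)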
Conclusion
You may decide to run more queries based on the results you get from these. This is just a way to estimate the effort required to migrate a database over to Databricks. In our case, the results were promising. We found that most of the logic transforming data from raw to silver, and then from silver to gold, lived in the database. The raw-to-silver transforms were a variation of a raw data vault load, so building a Spark version of that transformation would be straightforward. The more significant effort would be converting the silver-to-gold transforms from stored procedures to Spark-compatible code.
We discovered all the ingestion into the database was already in ADF, so a metadata-driven ADF pipeline generator let us replace that part with minimal effort.
Click here to get a copy of the notebook.