Process/Thread Context Switch Impact on SQL Server Performance


SQL Server, like any other application, runs as a process with multiple threads, so processor time must be shared fairly with the other processes in the operating system (Windows). To accomplish this, the operating system uses context switching: saving the state of a running process or thread and loading another's, so that execution can later resume from the same point. This technique is what allows operating systems to multitask.

Context switching has a performance cost, directly because the operating system's task scheduler must run, and indirectly because the CPU caches end up shared between multiple tasks. Be aware that switching between threads of a single process is cheaper than switching between two different processes.

Context switches are computationally intensive, and Windows Server is heavily optimized for them. Switching from one process to another requires additional time to save and load registers, switch memory maps, and update memory tables and lists inside the operating system. There are three triggers for a context switch: multitasking, interrupt handling, and user/kernel mode switching.

SQL Server command (query) execution goes through a few levels of process control, as the following image illustrates. Within SQLOS, SQL Server has components called schedulers, workers, tasks and threads; the threads themselves are created by the Windows operating system and handed over to the SQL Server process.
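To see this hierarchy on a live instance, the task, worker and thread DMVs can be joined together. This is a minimal sketch using documented columns of sys.dm_os_tasks, sys.dm_os_workers and sys.dm_os_threads:

```sql
-- One row per task that currently has a worker: which scheduler owns it,
-- which worker carries it, and which Windows thread backs that worker.
SELECT T.scheduler_id, T.task_address, T.task_state,
       W.worker_address, TH.os_thread_id
FROM sys.dm_os_tasks T
INNER JOIN sys.dm_os_workers W ON W.task_address = T.task_address
INNER JOIN sys.dm_os_threads TH ON TH.thread_address = W.thread_address;
```

The os_thread_id column is the Windows thread id, confirming that the threads are operating-system objects handed over to SQL Server.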

Let’s execute the following script to capture detailed information about SQL Server task execution for further analysis:-

SELECT T.session_id, R.command, R.sql_handle, T.task_state, T.task_address, TH.os_thread_id,
       T.context_switches_count AS [Task Context Switch], T.scheduler_id,
       W.context_switch_count AS [Worker Context Switch], W.start_quantum, W.end_quantum,
       (W.end_quantum - W.start_quantum) AS [Quantum]
INTO #TempTasks
FROM sys.dm_os_tasks T
INNER JOIN sys.dm_os_workers W ON W.task_address = T.task_address
INNER JOIN sys.dm_os_threads TH ON TH.thread_address = W.thread_address
INNER JOIN sys.dm_exec_requests R ON R.session_id = T.session_id
WHERE 1 = 0; -- create an empty #TempTasks with the right schema

WHILE (1 = 1)
BEGIN
    INSERT INTO #TempTasks
    SELECT T.session_id, R.command, R.sql_handle, T.task_state, T.task_address, TH.os_thread_id,
           T.context_switches_count, T.scheduler_id,
           W.context_switch_count, W.start_quantum, W.end_quantum,
           (W.end_quantum - W.start_quantum)
    FROM sys.dm_os_tasks T
    INNER JOIN sys.dm_os_workers W ON W.task_address = T.task_address
    INNER JOIN sys.dm_os_threads TH ON TH.thread_address = W.thread_address
    INNER JOIN sys.dm_exec_requests R ON R.session_id = T.session_id
    WHERE T.session_id = 52; -- replace 52 with the session id of the query you want to monitor
END
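The session id 52 in the script above is just an example from our system. A simple way to find the right value is to run the following in the query window whose execution you want to capture, before starting the capture loop:

```sql
-- @@SPID returns the session id of the current connection.
SELECT @@SPID AS [session_id];
```

Plug the returned value into the WHERE clause of the capture script.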

Once the capture loop is running, execute the following script in a separate session. It is a financial report query that uses the PIVOT operator.

SET STATISTICS TIME ON;

SELECT pvt.[SalesPersonID], pvt.[FullName], pvt.[JobTitle], pvt.[SalesTerritory],
       pvt.[2011], pvt.[2012], pvt.[2013], pvt.[2014]
FROM (SELECT soh.[SalesPersonID],
             p.[FirstName] + ' ' + COALESCE(p.[MiddleName], '') + ' ' + p.[LastName] AS [FullName],
             e.[JobTitle], st.[Name] AS [SalesTerritory], soh.[SubTotal],
             YEAR(DATEADD(m, 6, soh.[OrderDate])) AS [FiscalYear]
      FROM [Sales].[SalesPerson] sp
      INNER JOIN [Sales].[SalesOrderHeader] soh ON sp.[BusinessEntityID] = soh.[SalesPersonID]
      INNER JOIN [Sales].[SalesTerritory] st ON sp.[TerritoryID] = st.[TerritoryID]
      INNER JOIN [HumanResources].[Employee] e ON soh.[SalesPersonID] = e.[BusinessEntityID]
      INNER JOIN [Person].[Person] p ON p.[BusinessEntityID] = sp.[BusinessEntityID]) AS soh
PIVOT (SUM([SubTotal]) FOR [FiscalYear] IN ([2011], [2012], [2013], [2014])) AS pvt;

Once the above query has executed, the SET STATISTICS TIME ON option shows how many milliseconds were spent on query execution. On our system, the query executed as below:-

SQL Server parse and compile time: 
   CPU time = 47 ms, elapsed time = 47 ms.
 
(14 row(s) affected)
 
 SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 52 ms.

Now it is time to analyze the captured SQL Server task information. The captured data contains the following columns:-

  • Session_ID: The id of the session that executed the command.
  • Command: The type of command.
  • SQL_Handle: A reference to the T-SQL statement.
  • Task_State: The status of the task at the moment the snapshot was taken.
  • Task_Address: References task_address from sys.dm_os_tasks.
  • OS_Thread_ID: The thread id in the Windows operating system.
  • Task Context Switch: The number of context switches that occurred during this task's execution.
  • Scheduler_ID: References the scheduler from sys.dm_os_schedulers.
  • Worker Context Switch: The total number of context switches for the worker that owns the task.
  • Start_Quantum: The time, in milliseconds, at which the task started executing.
  • End_Quantum: The time, in milliseconds, at which the task stopped executing.
  • Quantum: The time, in milliseconds, for which the task executed.
  • Context_Switch_Quantum: The time spent off the scheduler between quanta; this column is derived by the next query rather than captured directly.

Run the following statement to get readable information from the captured task execution data:-

SELECT *, (LEAD(Start_Quantum) OVER(ORDER BY Start_Quantum) - End_Quantum) AS Context_Switch_Quantum FROM #TempTasks;
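The LEAD window function looks ahead to the next row's Start_Quantum; subtracting the current row's End_Quantum gives the gap during which the task was off the CPU. Here is a self-contained sketch with made-up quantum values showing the same arithmetic:

```sql
-- Made-up quanta: the task ran 100-104, 112-118 and 131-135 ms.
DECLARE @Quanta TABLE (Start_Quantum BIGINT, End_Quantum BIGINT);
INSERT INTO @Quanta VALUES (100, 104), (112, 118), (131, 135);

SELECT Start_Quantum, End_Quantum,
       LEAD(Start_Quantum) OVER (ORDER BY Start_Quantum) - End_Quantum
           AS Context_Switch_Quantum -- gaps of 8 and 13 ms; NULL for the last row
FROM @Quanta;
```

Note that LEAD requires SQL Server 2012 or later, and the last captured row always produces NULL because there is no following quantum.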

The above result set shows that the query spends more time switched off the scheduler between quanta than actually executing. There are many ways to reduce the context switch quantum of a query without changing hardware or the operating system.

Let’s run the optimized version of the financial report query while capturing the task execution information. The following query is optimized by removing the unnecessary string concatenation (the COALESCE expression that built [FullName]) and returning the name parts as separate columns instead:

SELECT pvt.[SalesPersonID], pvt.[FirstName], pvt.[MiddleName], pvt.[LastName],
       pvt.[JobTitle], pvt.[SalesTerritory], pvt.[2011], pvt.[2012], pvt.[2013], pvt.[2014]
FROM (SELECT soh.[SalesPersonID], p.[FirstName], p.[MiddleName], p.[LastName],
             e.[JobTitle], st.[Name] AS [SalesTerritory], soh.[SubTotal],
             YEAR(DATEADD(m, 6, soh.[OrderDate])) AS [FiscalYear]
      FROM [Sales].[SalesPerson] sp
      INNER JOIN [Sales].[SalesOrderHeader] soh ON sp.[BusinessEntityID] = soh.[SalesPersonID]
      INNER JOIN [Sales].[SalesTerritory] st ON sp.[TerritoryID] = st.[TerritoryID]
      INNER JOIN [HumanResources].[Employee] e ON soh.[SalesPersonID] = e.[BusinessEntityID]
      INNER JOIN [Person].[Person] p ON p.[BusinessEntityID] = sp.[BusinessEntityID]) AS soh
PIVOT (SUM([SubTotal]) FOR [FiscalYear] IN ([2011], [2012], [2013], [2014])) AS pvt;

The optimized version of the query executed as below:-

SQL Server parse and compile time: 
   CPU time = 343 ms, elapsed time = 379 ms.
 
(14 row(s) affected)
 
 SQL Server Execution Times:
   CPU time = 16 ms,  elapsed time = 32 ms.

Run the following statement again to get readable information from the captured task execution data (the WHERE clause filters out snapshots with an all-zero sql_handle):-

SELECT *, (LEAD(Start_Quantum) OVER(ORDER BY Start_Quantum) - End_Quantum) AS Context_Switch_Quantum FROM #TempTasks
WHERE sql_handle <> 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

The above result set shows that the context switch quantum values were reduced significantly by removing unnecessary functions and expressions from the query.

Conclusion

To achieve the lowest possible context switching in your SQL Server environment, it is recommended to:-

  • Provide dedicated hardware for SQL Server.
  • Perform a SQL Server hardware sizing exercise prior to installation and configuration.
  • Perform processor stress and stability tests to make sure there are no hardware-related issues.
  • Optimize queries as much as possible.
  • Avoid installing unnecessary software.
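As an ongoing sanity check after applying these recommendations, scheduler-level context switch counts can be watched with a query like the following. This is a sketch over the documented columns of sys.dm_os_schedulers, not a tuning tool in itself:

```sql
-- Context switches accumulated per scheduler since SQL Server started.
-- 'VISIBLE ONLINE' filters out internal (hidden) schedulers.
SELECT scheduler_id, cpu_id, current_tasks_count,
       runnable_tasks_count, context_switches_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
ORDER BY context_switches_count DESC;
```

A consistently high runnable_tasks_count alongside rapidly climbing context_switches_count on the same schedulers suggests CPU pressure worth investigating.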

Hamid J. Fard

I am a SQL Server Data Platform Expert with more than 9 years of professional experience. I am currently a Microsoft Certified Master: SQL Server 2008, Microsoft Certified Solutions Master: Charter-Data Platform, Microsoft Data Platform MVP and CIW Database Design Specialist. I also do Pig, Hive and Sqoop development on the Hadoop Big Data platform. After a few years as a production database administrator I jumped into the role of Data Platform Expert. Being a consultant allows me to work directly with customers to help solve questions regarding database issues for SQL Server.
