Advanced Indexing and Performance Monitoring Fundamentals

In an earlier post about Execution Plans, you saw that one of the most important decisions Microsoft SQL Server makes when executing a query is how to access the data in each of the tables involved, and how each step of the query is executed. In this post, you will see how nonclustered indexes can significantly enhance the performance of your applications. You will also learn to use tools that can help you design these indexes appropriately.

Methods for Obtaining Index Information

SQL Server provides many ways to obtain information about indexes. In SQL Server Management Studio, Object Explorer lists the indexes that are associated with each table, and the sp_helpindex system stored procedure returns details of the indexes on a specified table. Some of the more useful system views for obtaining index information are shown in the following table.

System view Description
sys.indexes Index type, filegroup, or partition scheme ID, and the current setting of index options that are stored in metadata
sys.index_columns Column ID, position within the index, type (key or nonkey), and sort order (ASC or DESC)
sys.stats Statistics associated with a table, including statistic name and whether it was created automatically or by a user
sys.stats_columns Column ID associated with the statistic
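The system views above can be joined to list every index on a table together with its key columns. The following sketch assumes a hypothetical Sales.Customers table; substitute your own object name:

```sql
-- List each index on a table with its key columns in index order.
-- Sales.Customers is an illustrative name, not from the original post.
SELECT  i.name              AS index_name,
        i.type_desc         AS index_type,
        c.name              AS column_name,
        ic.key_ordinal,                     -- position within the index key
        ic.is_descending_key                -- 1 = DESC, 0 = ASC
FROM    sys.indexes AS i
JOIN    sys.index_columns AS ic
          ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN    sys.columns AS c
          ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE   i.object_id = OBJECT_ID(N'Sales.Customers')
ORDER BY i.name, ic.key_ordinal;
```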

SQL Server provides a set of dynamic management objects that contain useful information about the structure and usage of indexes. Some of the more useful views and functions are shown in the following table.

Dynamic Management Object Description
sys.dm_db_index_physical_stats Index size and fragmentation statistics
sys.dm_db_index_operational_stats Current index and table I/O statistics
sys.dm_db_index_usage_stats Index usage statistics by access type
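For example, sys.dm_db_index_physical_stats can report fragmentation across the current database. A minimal sketch (the page-count threshold is a common rule of thumb, not a fixed rule):

```sql
-- Report average fragmentation for indexes in the current database.
-- 'LIMITED' scans only the B-tree's upper levels, which is cheapest.
SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
        i.name                              AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.page_count > 100     -- small indexes are rarely worth defragmenting
ORDER BY ips.avg_fragmentation_in_percent DESC;
```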

SQL Server also provides a set of functions that return information about the structure of indexes. Some of the more useful functions are shown in the following table.

Function Description
INDEXKEY_PROPERTY Index column position within the index and column sort order (ASC or DESC)
INDEXPROPERTY Index type, number of levels, and current setting of index options that are stored in metadata
INDEX_COL Name of the key column of the specified index
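These functions can be combined in a single query. The sketch below assumes a hypothetical Sales.Customers table whose clustered index has index_id = 1 and is named PK_Customers:

```sql
-- Inspect the first key column of index_id = 1 (usually the clustered index).
-- Sales.Customers and PK_Customers are illustrative names.
SELECT  INDEX_COL(N'Sales.Customers', 1, 1)                         AS first_key_column,
        INDEXKEY_PROPERTY(OBJECT_ID(N'Sales.Customers'), 1, 1,
                          'IsDescending')                           AS is_descending,
        INDEXPROPERTY(OBJECT_ID(N'Sales.Customers'), N'PK_Customers',
                      'IndexDepth')                                 AS index_depth;
```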

Filtered Indexes

You can use filtered indexes to create smaller, more focused indexes that deliver greater efficiency and better performance. Filtered indexes are nonclustered indexes that you define by including a WHERE clause in the CREATE INDEX statement. The WHERE clause filter limits the rows that the index will include.

There are scenarios in which filtered indexes are beneficial:

  • Huge tables. Users frequently query a huge table by filtering on a small subset of the data that it contains. In this scenario, a filtered index built on just the frequently accessed rows is beneficial: the cost of storing and maintaining the index is lower, and the improvement in query response time is more noticeable.
  • Tables that have many NULL values. When a column includes many NULL values, a filtered index that is built on the non-NULL rows is efficient.
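The NULL-values scenario above can be sketched as follows; the table and column names are assumptions for illustration:

```sql
-- Index only the rows where EndDate has a value. Queries that filter
-- on EndDate IS NOT NULL can use this much smaller index, and rows
-- with NULL EndDate add no storage or maintenance cost.
CREATE NONCLUSTERED INDEX IX_Orders_EndDate_NotNull
ON Sales.Orders (EndDate)
WHERE EndDate IS NOT NULL;
```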

The following example creates a filtered index that includes a WHERE clause to limit the rows that the index contains (the table and column names here are illustrative):

Creating a Filtered Index

CREATE NONCLUSTERED INDEX IX_Customers_City
ON Sales.Customers (
    CustomerName,
    City
) WHERE City='New York'

You can use an indexed view to achieve a similar result to a filtered index; you just need to specify a filter in the indexed view definition to exclude the unwanted rows. However, there are some important differences between the two:

  • An indexed view can be based on multiple tables, but a filtered index is based on a single table.
  • Filtered indexes only support simple comparison operators in their WHERE clause. So, for example, you cannot use the LIKE operator in a filtered index definition. If you need to filter by more complex logic, use an indexed view.
  • The query optimizer uses filtered indexes in more situations than indexed views, so by using a filtered index, you are more likely to improve performance.
  • You can rebuild a filtered index while it is online, but indexed views do not support online rebuilds.
  • Updates of filtered indexes generally require fewer CPU resources than updates to indexed views, which helps to minimize maintenance costs.
  • Filtered indexes do not need to be unique, but indexed views do, because the first index that is built on a view must be a unique clustered index.
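For comparison, here is a sketch of the indexed-view alternative; the view, table, and column names are assumptions for illustration, and the view must be schema-bound before it can be indexed:

```sql
-- An indexed view filtering the same rows as a filtered index would.
-- The first index on a view must be UNIQUE CLUSTERED.
CREATE VIEW Sales.vNewYorkCustomers
WITH SCHEMABINDING
AS
SELECT CustomerID, CustomerName, City
FROM   Sales.Customers
WHERE  City = 'New York';
GO
CREATE UNIQUE CLUSTERED INDEX IX_vNewYorkCustomers
ON Sales.vNewYorkCustomers (CustomerID);
```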

Performance Monitoring and Tuning

Many factors can affect database performance. Indexing, hardware, network performance, application design, logical and physical database design, data changes, and operating system configuration are just a few things that can have major effects. To achieve performance targets, you need to choose the metrics by which you measure performance carefully. There are no definitive rules about what a performance monitoring and tuning strategy should include. For example, some queries run more frequently, and you might choose to prioritize these; but even an infrequently run query might need to be prioritized because, when it does run, it must run quickly.

The tools that are built into SQL Server to help you tune your system include:

  • Database Engine Tuning Advisor recommends and creates indexes for your system. You supply it with either a trace file or the scripts that you want to optimize for. It will not prioritize one query over another, and its recommendations can be simplistic, but it is still useful to run as an automated review pass on your design. Because it can consume significant resources, run it when the database is least used, or run it against a mirrored system.
  • SQL Server Profiler is an essential performance tuning tool that captures events and stores them in a trace file that can be replayed at a later date. You can specify the events to capture by using the included templates, or by choosing exactly which events you require. Useful events include locking information, caching information, recompilation information, and the starting and stopping of stored procedures, scripts, and batches.

SQL Server Profiler is deprecated and will be removed from a future version of SQL Server. It is replaced by Extended Events and SQL Server Distributed Replay.

  • Performance Monitor is a Windows performance monitoring tool. You can view many counters in real time and see whether performance problems are caused by SQL Server itself or an underlying problem. Performance problems in SQL Server might be caused by another application or service, or by network congestion. You can also use Performance Monitor to record performance activity so that you can review it later.
  • The sys.dm_db_index_usage_stats DMV returns a large amount of information about index operations: how many times each type of operation was performed, and when it was last performed.
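A common use of sys.dm_db_index_usage_stats is to find nonclustered indexes that are being maintained but never read, which are candidates for review. A sketch (note that the DMV's counters reset when SQL Server restarts, so judge the results accordingly):

```sql
-- Nonclustered indexes in the current database with no seeks, scans,
-- or lookups recorded since the last SQL Server restart.
SELECT  OBJECT_NAME(i.object_id)    AS table_name,
        i.name                      AS index_name,
        us.user_updates             -- write cost still being paid
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
          ON us.object_id   = i.object_id
         AND us.index_id    = i.index_id
         AND us.database_id = DB_ID()
WHERE   i.type_desc = 'NONCLUSTERED'
  AND   COALESCE(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0;
```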

Establishing a Performance Baseline

After you optimize your system, you should create a baseline that includes the most important performance metrics, in addition to metrics that provide a general system overview. A baseline provides these benefits:

  • A starting point for troubleshooting.
  • A basis for hardware planning, because it enables you to spot trends and create projections for future hardware requirements. This approach is especially helpful when the hardware budget is limited.
  • A way to calculate the impact of changes in database design or hardware. After you make a change, use the baseline to verify that it achieves the desired improvement; if it does not, you can roll the change back. After you have made changes to a server, you should create a new baseline that reflects the new configuration.

When you are planning a performance baseline, you should create samples that monitor system resource usage over an extended period of time, to include periods of low, normal, and high usage. This will help you to gain a true picture of system performance, rather than just a snapshot of performance at a single point in time. To minimize the impact of monitoring, you should monitor your servers from a remote workstation, and connect to them by using Performance Monitor. Avoid using remote desktop connections to connect to a server and then running Performance Monitor on that server because this uses server resources.

After you create a baseline, you should periodically compare current server performance with the baseline figures and investigate any values that are significantly above or below baseline figures. You should investigate unexpected improvement in addition to unexpected performance degradation.

You can create a baseline by monitoring the following counters:

  • Counters for assessing memory:
    • Memory:Available Mbytes captures the amount of available memory on the server in megabytes.
    • Paging File:% Usage captures page file usage, and ideally should be a very low value. A high value indicates that the server has insufficient memory. You can also use the Memory:Pages/sec counter to verify this.
    • SQL Server:Buffer Manager:Buffer cache hit ratio indicates the percentage of pages that are read from the data cache without having to read from disk. Ideally, this figure should be over 90 percent; if it is lower than this, the impact of disk I/O can become a problem.
    • SQL Server:Buffer Manager:Page life expectancy indicates in seconds how long pages that are read into memory will remain in the cache before being removed to enable the caching of other pages. Higher values indicate that there is sufficient memory available; if the value falls, this could be because the workload has increased and you need to add more memory.
    • SQL Server:Memory Manager:Memory Grants Pending indicates the number of queries that are currently waiting to be allocated memory so that they can execute. The ideal value for this counter is 0. A value higher than this is a strong indication that the server has insufficient memory.
  • Counters for assessing physical disks:
    • Physical Disk:Avg. Disk sec/Read and Physical Disk:Avg. Disk sec/Write enable you to monitor average read and write times. You can use the following figures as general guidelines for assessing disk I/O by using these counters:
      • Less than 10 milliseconds represents very good performance for online transaction processing (OLTP) systems.
      • Between 10 and 20 milliseconds represents good or acceptable performance for OLTP systems.
      • Between 20 and 50 milliseconds represents below average to slow performance for OLTP systems.
      • More than 50 milliseconds typically on an OLTP system indicates a bottleneck.
      • Less than 30 milliseconds for a decision support system (DSS) typically represents good performance.
    • PhysicalDisk: Avg. Disk Queue Length. A value greater than 2 for an individual disk often indicates a potential bottleneck, particularly if you are also experiencing high disk latency.
    • Processor:% Privileged Time indicates the percentage of total time that a CPU or CPU core spends executing kernel commands, which includes SQL Server disk I/O requests. You can use it to identify over-utilized disk subsystems.
  • The counters above measure all disk activity, regardless of its source. To identify disk I/O that results specifically from SQL Server activity, you can use the following counters:
    • SQL Server:Buffer Manager: Page reads/sec
    • SQL Server:Buffer Manager: Page writes/sec
    • SQL Server:Buffer Manager: Checkpoint pages/sec
    • SQL Server:Buffer Manager: Lazy writes/sec
  • Counters for assessing CPUs:
    • Processor:% Processor Time indicates the percentage of time that a processor spends processing workloads (sometimes referred to as executing non-idle threads). If the value of this counter is consistently greater than 80 percent, it may indicate that the CPU or CPUs represent a bottleneck in the system.
    • System:Processor Queue Length indicates the number of threads that are waiting for CPUs to become available to be processed. On a single processor system, a value that is consistently greater than five can indicate that the CPU or CPUs represent a bottleneck in the system. On multiprocessor systems, you should divide the queue length by the number of processors to obtain the relevant value.
  • Counters for assessing network performance:
    • Network Interface:Bytes Total/sec captures the total number of bytes that are sent and received over a network connection for each second.
    • Network Interface:Current Bandwidth records the actual capacity (as opposed to the rated capacity) of a network interface card, in bits per second. Because Bytes Total/sec is measured in bytes, you can calculate network utilization for a specific network adapter in the following way: (Network Interface:Bytes Total/sec × 8 ÷ Network Interface:Current Bandwidth) × 100. If this figure is consistently greater than 90 percent, the network connection may represent a bottleneck.
    • IPv4:Datagrams/sec and IPv6:Datagrams/sec capture the number of IP datagrams that are sent and received per second; you can use these as a benchmark when you are testing network performance.
    • SQL Server:Availability Replica: Bytes Received from Replica/sec and SQL Server:Availability Replica: Bytes Sent to Replica/sec monitor the traffic between availability replicas in an AlwaysOn availability group.
  • In addition to the counters above, SQL Server includes a range of dedicated performance objects and counters that you can use to create a baseline and to troubleshoot issues:
    • SQL Server:General Statistics:User Connections establishes the number of user connections to a server; you can then monitor them over time.
    • SQL Server SQL Statistics:SQL Compilations/sec and SQL Server SQL Statistics:SQL ReCompilations/sec track the number of times SQL Server compiles and recompiles execution plans. Compiling an execution plan can be resource-intensive, so you typically want to see a small number of compilations and recompilations. You can compare the SQL Server SQL Statistics:SQL Compilations/sec counter against the SQL Server SQL Statistics:Batch Requests/sec counter to see how many of the batches require a compilation. The number of recompilations should be significantly lower than the number of compilations, ideally about 10 percent. If this figure is significantly higher, you should investigate the cause of the recompilations.
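The SQL Server-specific counters above can also be read directly from T-SQL through the sys.dm_os_performance_counters DMV, which is convenient for scripting a baseline capture. A minimal sketch:

```sql
-- Read selected Buffer Manager counters from within SQL Server.
-- Per-second counters here hold cumulative values; sample twice
-- and take the difference to get a true rate.
SELECT  counter_name, cntr_value
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%Buffer Manager%'
  AND   counter_name IN (N'Page life expectancy',
                         N'Page reads/sec',
                         N'Lazy writes/sec');
```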

In this post, we described the options for monitoring performance, and explained how you can create a baseline to aid troubleshooting. You should be very careful when choosing the metrics that you use to measure performance. Always measure your system against its specific technical and business requirements to ensure that you obtain the required levels of performance. Each situation is different, so take each system on a case-by-case basis: tune the system and remove the bottlenecks that prevent you from meeting your goals, benchmark the system to provide a performance baseline, and then monitor the system to ensure that you continue to meet that baseline. Hope this post was informative; please share it with others if you think it is worth reading. Stay tuned to learn more about SQL Server.
