SQL Server I/O Subsystem Simulation

The estimated reading time for this post is 4 minutes

I have been running hardware I/O subsystem stress tests before SQL Server installations for a long time, and in my experience and observation, not many DBAs or consultants do this. Before installing SQL Server on any hardware server, we need to run I/O tests that follow SQL Server's I/O patterns, simulating the production workload before an I/O subsystem bottleneck surfaces in production.

I personally used SQLIO until Microsoft replaced it with DiskSpd, a generic tool for testing an I/O subsystem against different I/O patterns.

During this kind of test, you need to know how SQL Server reads from and writes to disk during different operations, such as:

  • Checkpoint
  • Log Writer
  • Backup
  • Restore
  • Rebuild Index

The following table shows the typical I/O pattern for the major SQL Server I/O operations:

  Operation       Pattern      Typical I/O Size   Read/Write
  Data File       Random       8KB - 64KB         Read and Write
  Log Writer      Sequential   512B - 64KB        Write
  Backup          Sequential   up to 1MB          Read and Write
  Restore         Sequential   up to 1MB          Read and Write
  Rebuild Index   Sequential   up to 512KB        Read and Write

When you’re running DiskSpd, be aware that the load put on the environment could affect the performance of other virtual machines on the same physical machine. This could generate lots of load and disturb anyone else using other VMs in the same host, other LUNs on the same SAN, or other traffic on the same network.

If you use DiskSpd to write data to a physical disk, you can destroy the data on that disk. DiskSpd does not ask for confirmation. Be careful when you’re using physical disks (as opposed to files) with DiskSpd.

Before jumping into the analysis of the results, it is important to know the max IO throughput of the targets in order to determine whether the system is performing at acceptable level. Knowing what to expect from the hardware is key to understanding what values you can expect in the results from this tool. In turn, this lets you know if your results are “good” or “poor.” You will want to not only consider the max throughput of the disk(s) or VHDs themselves, but also of the HBA or I/O card. Also take into consideration any RAID penalties for the different types of RAID used when you determine your max throughput.
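The RAID-penalty arithmetic mentioned above can be sketched as follows. The penalty factors are the commonly quoted per-RAID-level write penalties, and the per-disk IOPS figure is illustrative, not a measurement of this system:

```python
# Effective front-end IOPS for a RAID group, applying the usual write penalty.
# Penalty factors below are the commonly cited values (assumed, not measured):
# RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6.
RAID_WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_iops(disk_iops, disks, read_pct, raid):
    """Usable IOPS given raw per-disk IOPS and a read/write mix (read_pct in 0..1)."""
    raw = disk_iops * disks
    write_pct = 1 - read_pct
    return raw / (read_pct + write_pct * RAID_WRITE_PENALTY[raid])

# Example: 14 SSDs at an assumed 10,000 IOPS each, 60% reads, RAID 5.
print(round(effective_iops(10_000, 14, 0.60, "RAID5")))  # → 63636
```

A 40% write mix on RAID 5 cuts the usable IOPS to less than half of the raw 140,000, which is why the penalty must be factored in before judging DiskSpd results.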

Test Zone

Let's run a few tests to examine the performance of the storage hardware for the following operations:

  • Data File (Read/Write)
  • Log File (Write)
  • Backup
  • Rebuild Index

All of these tests run against a NetApp storage array with 14 SSD drives and a total of 14TB capacity, connected to the server over 1Gb Fibre Channel. The server has an 8-core processor and 64GB of memory.

Data File (Read/Write)

To accomplish the Data File (Read/Write) test, I prepared the following configuration:

  • File Size: 200GB
  • Block Size: 64KB
  • Operation Type: Random
  • Threads: 16
  • Outstanding I/O: 1
  • Hardware Cache Level: No
  • Write/Read Percentage: 40/60
  • Test Duration: 60 Seconds
C:\diskspd.exe -c200G -b64k -r -t16 -o1 -h -L -w40 -d60 f:\Testdata.mdf

data read test

data write test

As the figure above shows, the average I/O latency per thread is less than a millisecond, which is ideal for SQL Server.

Log File (Write)

To accomplish the Log File (Write) test, I prepared the following configuration:

  • File Size: 20GB
  • Block Size: 4KB
  • Operation Type: Sequential
  • Threads: 2
  • Hardware Cache Level: No
  • Write/Read Percentage: 100/0
  • Test Duration: 60 Seconds
C:\diskspd.exe -c20G -b4k -t2 -h -L -w100 -d60 h:\Testdata.ldf

log file write test

As the result shows, the average I/O latency is less than a millisecond and the throughput is 49.92MB per second, which is ideal for SQL Server.
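The throughput and block size imply the IOPS rate directly, which is a useful sanity check on any DiskSpd result:

```python
# Sanity check on the log write test: throughput = IOPS x block size,
# so IOPS = throughput / block size.
block_kib = 4              # -b4k
throughput_mib_s = 49.92   # MB/s reported by DiskSpd for this run

iops = throughput_mib_s * 1024 / block_kib
print(f"{iops:.0f} IOPS")  # → 12780 IOPS
```

About 12,800 sequential 4KB writes per second at sub-millisecond latency is comfortably within what a transaction log workload needs.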

Backup

To accomplish the Backup test, I prepared the following configurations, separately for the read and write phases:

  • File Size: 200GB
  • Block Size: 1MB
  • Operation Type: Sequential
  • Threads: 1
  • Hardware Cache Level: No
  • Write/Read Percentage: 0/100
  • Test Duration: 60 Seconds
C:\diskspd.exe -c200G -b1M  -t1 -h -L -w0 -d60 f:\data.mdf

backup read

  • File Size: 200GB
  • Backup Devices: 4 Units
  • Block Size: 512KB
  • Operation Type: Sequential
  • Threads: 4
  • Hardware Cache Level: No
  • Write/Read Percentage: 100/0
  • Test Duration: 60 Seconds


C:\diskspd.exe -c200G -b512k  -t4 -h -L -w100 -d60 g:\Testdata.mdf

Backup Write

Based on the test results above, a full backup would take quite a long time to complete and might not fit into the recovery SLA; in terms of raw I/O throughput, however, the hardware is suitable for SQL Server.
For more information read SQL Server Backup Internals.
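The backup-window concern above comes down to simple arithmetic: time = data size / sustained throughput. The 320 MB/s figure below is an assumed value for illustration, not the measured result from the figures:

```python
# Rough backup-window estimate: time = data size / sustained write throughput.
def backup_minutes(size_gib, throughput_mib_s):
    """Minutes needed to stream size_gib gigabytes at throughput_mib_s MB/s."""
    return size_gib * 1024 / throughput_mib_s / 60

# Assumed sustained throughput of 320 MB/s across the 4 backup devices.
print(f"{backup_minutes(200, 320):.1f} min")  # → 10.7 min
```

Plugging in the actually measured throughput lets you compare the estimated backup window against the recovery SLA before the database ever exists.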

Rebuild Index

To accomplish the Rebuild Index test, I prepared the following configurations, separately for the read and write phases:

  • Index Size: 70GB
  • Block Size: 512KB
  • Operation Type: Sequential
  • Threads: 16
  • Hardware Cache Level: No
  • Write/Read Percentage: 0/100
  • Test Duration: 60 Seconds
C:\diskspd.exe -c70G -b512k  -t16 -h -L -w0 -d60 f:\data.mdf

rebuild index read

  • Index Size: 70GB
  • Block Size: 512KB
  • Operation Type: Sequential
  • Threads: 16
  • Hardware Cache Level: No
  • Write/Read Percentage: 100/0
  • Test Duration: 60 Seconds


C:\diskspd.exe -c70G -b512k  -t16 -h -L -w100 -d60 f:\data.mdf

rebuild index write

The test result shows that SQL Server might suffer a performance impact here, given the average I/O latency of almost 11 milliseconds.
Assume that you want to test a single SSD drive rated at 500 MB/sec or up to 10,000 IOPS for reads, and 450 MB/sec or up to 30,000 IOPS for writes, and that the I/O card can handle that rate of transfer. In this scenario, we should be able to send that volume of data per second at a reasonable transfer rate. Typical guidelines for the optimal Average Disk Sec/Transfer for ideal SQL Server performance are shown below.
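A quick calculation with the rated numbers above shows why block size decides which ceiling you hit first:

```python
# Rated figures from the example above: 500 MB/s or 10,000 IOPS for reads.
read_mb_s, read_iops = 500, 10_000

# Smallest I/O size (in KiB) at which the bandwidth ceiling, rather than
# the IOPS ceiling, becomes the limit: bandwidth / IOPS.
min_block_kib = read_mb_s * 1024 / read_iops
print(min_block_kib)  # → 51.2
```

So 64KB or larger reads can saturate the 500 MB/sec bandwidth, while 4KB reads would top out at 10,000 x 4KB, roughly 39 MB/sec, long before the bandwidth limit.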

data transfer table
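The guideline table can be sketched as a simple classifier. The cut-offs below are widely cited thresholds for the Average Disk sec/Transfer counter and are an assumption for illustration, not necessarily the exact figures from the table above:

```python
# Classify Average Disk sec/Transfer (milliseconds) using commonly cited
# thresholds (assumed values, not the author's exact table).
def rate_latency(ms):
    if ms < 10:
        return "good"
    if ms <= 20:
        return "acceptable"
    if ms <= 50:
        return "slow"
    return "critical"

# The two headline results from the tests above:
print(rate_latency(0.9), rate_latency(11))  # → good acceptable
```

By this yardstick, the sub-millisecond data and log results are excellent, while the 11ms rebuild-index latency sits at the edge of acceptable.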

Hamid J. Fard

I am a SQL Server Data Platform Expert with more than 9 years of professional experience. I am a Microsoft Certified Master: SQL Server 2008, Microsoft Certified Solutions Master: Charter-Data Platform, Microsoft Data Platform MVP, and CIW Database Design Specialist. I also do Pig, Hive, and Sqoop development on the Hadoop Big Data platform. After a few years as a production database administrator, I moved into the role of Data Platform Expert. Being a consultant allows me to work directly with customers to help solve database issues on SQL Server.


