SQL Server Compute On-Premise, Storage On-Cloud

The estimated reading time for this post is 4 minutes

Microsoft Azure offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol. Azure file shares can be mounted concurrently by cloud or on-premises deployments of Windows, macOS, and Linux. Azure Files enables sharing files between applications running in your virtual machines using familiar Windows APIs or the File REST API. Additionally, Azure File Sync allows caching and synchronization of Azure file shares on Windows Servers for local access. Since cloud cost also matters, below are the costs of storing your data in Azure file shares. The prices shown are the monthly charges per GB of data stored and vary based on the redundancy option that you choose.

  • Locally Redundant Storage (LRS): RM0.356 per GB
  • Geographically Redundant Storage (GRS): RM0.445 per GB

Locally Redundant Storage (LRS): Makes multiple synchronous copies of your data within a single datacenter.

Geographically Redundant Storage (GRS): Same as LRS, plus multiple asynchronous copies to a second datacenter hundreds of miles away.
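As a quick worked example, storing a 1 TB (1,024 GB) database file on LRS would cost roughly 1,024 × RM0.356 ≈ RM364.54 per month, while the same data on GRS would cost about 1,024 × RM0.445 ≈ RM455.68 per month.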

Azure inbound data transfer is free; outbound data transfer is charged, so I highly suggest referring to the Azure bandwidth pricing page for more information.

You might ask yourself, ‘Why keep our compute on-premise when we could have it on the Azure cloud?’ The answer is that some companies use machines with more than 64 cores to handle their databases, due to the nature of their queries and business processes, and moving such a database environment to Azure is much costlier than keeping it on-premise over a five-year period. Another factor is that the maintenance cost of a storage system is quite high compared to the cost of compute units and software licenses over the same period. The following figure illustrates how your database will logically be placed.

To make sure that this solution does not impact performance, you are required to provide SQL Server with a sufficient amount of main memory, so that it buffers as many data pages as possible rather than reading them from cloud storage. I performed some experiments and would like to share the results with you.
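As a minimal sketch of that memory sizing, the max server memory setting can be raised so the buffer pool has room to hold most of the working set. The 65536 MB value below is purely illustrative; size it to your own workload.

--A minimal sketch: give the buffer pool enough room to cache the working set.
--65536 MB (64 GB) is an illustrative value only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;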

The following script creates the database data file on the Azure File Server and the log file on-premise, to avoid a large number of IO operations against the Azure File Server and, at the same time, to increase database transaction performance.

--Trace flag 1807 allows database files on a network share; it is only needed
--on SQL Server 2005 and 2008 (later versions support UNC paths natively).
DBCC TRACEON(1807, -1);
GO
--xp_cmdshell is an advanced option, so enable 'show advanced options' first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE WITH OVERRIDE;
 
--Map the Azure file share using the storage account credentials.
EXEC xp_cmdshell 'net use Y: \\fardsqlbak.file.core.windows.net\fardsqldatafiles  /u:AZURE\fardsqlbak vmME19SkeWy25/s5gxaTVM0vAgwEZm5A3vqkSL90PW6t3f377oDIVDIJlAnKK1HnPsgnuaM9G4L***************';
GO
 
--Data file goes on the Azure file share; the log file stays on local disk (F:)
--to keep the write-intensive log IO on-premise.
CREATE DATABASE [OnPremise_Compute]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'OnPremise_Compute', FILENAME = N'\\fardsqlbak.file.core.windows.net\fardsqldatafiles\Data\OnPremise_Compute.mdf' , SIZE = 512MB , FILEGROWTH = 65536KB )
 LOG ON 
( NAME = N'OnPremise_Compute_log', FILENAME = N'F:\Log\OnPremise_Compute_log.ldf' , SIZE = 64MB , FILEGROWTH = 65536KB )
GO
ALTER DATABASE [OnPremise_Compute] SET COMPATIBILITY_LEVEL = 130
GO
USE OnPremise_Compute;
GO
--Heap table used to generate the test workload; RECPADDING simply pads each row.
CREATE TABLE STRESS (ID INT IDENTITY(1,1), RECPADDING BINARY(50) DEFAULT 0xFFFFF);
GO
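Once the database is created, a query like the following (a quick sanity check I would suggest, not part of the original setup) confirms where each file actually lives:

--Confirm the data file sits on the Azure UNC path and the log file on local disk.
SELECT name, physical_name, type_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'OnPremise_Compute');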

I created a table named STRESS in the database to populate a large amount of data while monitoring Azure File Server IOPS and database performance. The following stress test scripts may not be complicated or close to a real-world workload, but I believe that, due to the CHECKPOINT command, they will stress SQL Server in terms of checkpointing data to the data files placed on Azure File Server. You need to run each script in a different session, concurrently.

--SESSION 1: insert rows continuously.
SET NOCOUNT ON;
INSERT INTO dbo.STRESS DEFAULT VALUES
--Repeat the batch a very large number of times (the GO count must fit in a 32-bit integer).
GO 1000000000
 
 
--SESSION 2: overwrite random samples, then force a checkpoint every 2 seconds.
SET NOCOUNT ON;
WHILE (1 = 1)
BEGIN
	DECLARE @NewValue BINARY(50);
	SET @NewValue = CAST(SYSDATETIME() AS BINARY(50));
 
	WITH X AS
	(
	  SELECT TOP (1000) RECPADDING 
	  FROM dbo.STRESS TABLESAMPLE (1000 ROWS)
	)
	UPDATE X SET RECPADDING = @NewValue;
	WAITFOR DELAY '00:00:02';
	CHECKPOINT;
END
 
--SESSION 3: delete small random samples, then force a checkpoint every second.
SET NOCOUNT ON;
WHILE (1 = 1)
BEGIN
	DELETE TOP (10) FROM dbo.STRESS
	WHERE ID IN (SELECT ID FROM dbo.STRESS TABLESAMPLE (10 ROWS));
	WAITFOR DELAY '00:00:01';
	CHECKPOINT;
END

I ran the above stress scripts while monitoring the performance counters of Azure File Server and several SQL Server counters.

The above counter shows the Avg. Write Queue Length on Azure File Server, with a maximum value of 2.490; the graph demonstrates the high throughput of Azure File Server and the SMB 3.0 protocol.

The above counter shows that, thanks to a sufficient amount of memory, SQL Server does not need to read any data pages from Azure File Server.
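To roughly verify how much of the database sits in memory, a query along these lines can be used (a sketch of mine rather than part of the original experiment):

--Approximate amount of the test database currently held in the buffer pool.
--Each buffer descriptor represents one 8 KB page.
SELECT COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID(N'OnPremise_Compute');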

The Avg. Bytes/Write counter shows that Azure File Server throughput is very high and that it handles 8 KB per write IO, which matches the size of a SQL Server data page.

Another factor in achieving a high-performance database with this solution is making sure that the Network Output Queue Length is almost zero; this only happens if your network bandwidth is sufficient and Azure File Server is able to respond as fast as possible.

And the last two counters refer to the number of network packets sent or received by SQL Server to/from Azure File Server.
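Besides the Windows performance counters, the same write activity can be observed from inside SQL Server. The following is a minimal sketch using sys.dm_io_virtual_file_stats; it assumes only the OnPremise_Compute database created earlier:

--Per-file write counts and average write latency for the test database.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'OnPremise_Compute'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;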

Conclusion

The on-premise compute, on-cloud storage solution cuts storage maintenance costs and increases the availability of database data files. The performance of this solution highly depends on the network bandwidth between the on-premise servers and Azure File Server.

You are required to perform an assessment of your database environment prior to implementing such a solution.

Neither the author of this blog post nor Fard Solutions Sdn Bhd is responsible for any damage or business interruption caused by using the above solution without consulting a SQL Server expert.

 

 

Hamid J. Fard

I am a SQL Server Data Platform Expert with more than 9 years of professional experience. I am currently a Microsoft Certified Master: SQL Server 2008, Microsoft Certified Solutions Master: Charter-Data Platform, Microsoft Data Platform MVP, and CIW Database Design Specialist. I also do Pig, Hive, and Sqoop development on the Hadoop big data platform. After a few years as a production database administrator, I jumped into the role of Data Platform Expert. Being a consultant allows me to work directly with customers to help solve questions regarding database issues for SQL Server.
