SQL Server In-Memory Table Size Estimation Script

Microsoft introduced In-Memory OLTP technology in SQL Server 2014. I will not explain the technology itself here; you can visit our other blog post, SQL Server In-Memory Internals, to read more about it. It is quite tricky to estimate and provision enough memory for In-Memory tables. Many of our clients implemented this technology but eventually gave up on it because In-Memory OLTP did not deliver the performance they expected. Estimating the In-Memory table size is critical: without sufficient memory capacity, data has to be swapped in and out of memory to free space for other components.
An In-Memory table is not the same size as its disk-based counterpart; it needs extra memory space for row versioning, which is a core feature of this technology. For more information, see the SQL Server In-Memory Internals blog post.
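As a rough sketch of that overhead: every In-Memory row carries a 24-byte row header (begin/end timestamps) plus one 8-byte index pointer per index, on top of the row body itself. Assuming a hypothetical 127-byte row body and 8 indexes, the arithmetic looks like this:

```sql
-- Back-of-the-envelope In-Memory row size.
-- @RowBodyBytes and @NumIndexes are hypothetical example values.
DECLARE @RowBodyBytes INT = 127,  -- disk-based row body size
        @NumIndexes   INT = 8;    -- planned number of indexes on the table
-- 24-byte row header + 8 bytes per index pointer + row body
SELECT 24 + (8 * @NumIndexes) + @RowBodyBytes AS InMemRowSize_Bytes;  -- 215 bytes
```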

DBAs are responsible for making this calculation before migrating a table into memory, but one of my participants at a Microsoft Customer Immersion Experience event raised the following question: 'DBAs are either too busy maintaining the database or do not have enough knowledge on this matter to do the calculation, so why does SQL Server not provide this estimation as a report?' That question prompted me to develop the following script.

The following script creates a stored procedure that estimates the memory a disk-based table will need after migration to In-Memory. It accepts a few parameters to produce a close-to-accurate result.

CREATE PROCEDURE Sp_InMemTableSizeEst
(
	@SCHEMA AS SYSNAME,                     -- Schema of the source disk-based table
	@TableName AS SYSNAME,                  -- Name of the source disk-based table
	@NumHash AS TINYINT,                    -- Planned number of hash indexes
	@NumNonIndex AS TINYINT,                -- Planned number of nonclustered indexes
	@LongTransDurationInSec INT = 1,        -- Longest expected transaction duration, in seconds
	@PeakNumUpdateDeleteInSec TINYINT = 1,  -- Peak number of update/delete operations per second
	@TotalChangesPerc TINYINT = 20          -- Percentage of the table expected to change
)
AS BEGIN
 
IF(@NumHash + @NumNonIndex) > 8
BEGIN
	RAISERROR('The Number of Indexes Should Not Be More Than 8.',16,1);
	RETURN 0;
END
 
--ColumnSize
SELECT Column_Name,COL_LENGTH(Table_Schema+'.'+TABLE_NAME,COLUMN_NAME)+2 AS Size_Bytes
FROM INFORMATION_SCHEMA.COLUMNS  WHERE TABLE_NAME = @TableName AND TABLE_SCHEMA = @SCHEMA;
 
--RowSize
EXECUTE(N'
select SUM(COL_LENGTH(Table_Schema+''.''+TABLE_NAME,COLUMN_NAME)) +2 As DiskRowSize_Bytes,
(Select Count(*) from ['+@SCHEMA+'].['+@TableName+']) As [RowCount] Into ##RowSizeCount
from INFORMATION_SCHEMA.COLUMNS  where TABLE_NAME = '''+@TableName+''' AND Table_Schema = '''+@SCHEMA+''' ');
 
SELECT * FROM ##RowSizeCount;
 
--Bucket count is approximated by the table row count
EXECUTE(N'Select Count(*) As [InMemHashIndex_BucketCount] Into ##HashBuckets From ['+@SCHEMA+'].['+@TableName+']');
 
--Round the bucket count up to the next power of two, since hash index buckets are always allocated in powers of two
DECLARE @COUNT INT = 1;
DECLARE @powerby VARCHAR(500) = 'Select 2';
WHILE (1=1)
BEGIN
	EXECUTE(@powerby + ' AS [Bucket] Into ##ActualBucket');
	IF((SELECT A.Bucket - I.InMemHashIndex_BucketCount FROM ##HashBuckets I,##ActualBucket A)> 0)
		BREAK;
	SELECT  @powerby = @powerby+ '*2';
	--Print @Powerby;
	DROP TABLE ##ActualBucket;
	SET @COUNT = @COUNT +1;
END
 
--InMem Hash Index Size
SELECT B.[InMemHashIndex_BucketCount] AS [Bucket_Count] ,B.[InMemHashIndex_BucketCount] * 8 AS [Bucket_Usage_Size_Bytes] ,A.[Bucket] AS [Bucket_Allocation_Count], A.[Bucket]*8 AS [Bucket_Allocation_Size_Bytes] INTO ##InMemHashBuckets FROM  ##HashBuckets B, ##actualBucket A;
SELECT * FROM ##InMemHashBuckets;
 
--InMem Table Size
SELECT 24 AS [TIMESTAMP],(@NumHash+@NumNonIndex) * 8 AS [IndexSize_Bytes],R.[DiskRowSize_Bytes] AS [RowBodySize_Bytes], R.[DiskRowSize_Bytes] + 24 + (@NumHash+@NumNonIndex) * 8 AS InMemRowSize_Bytes_Total ,
((R.[DiskRowSize_Bytes] + 24 + (@NumHash+@NumNonIndex) * 8) * R.[RowCount])/1024 AS [InMemTableSize_KB] INTO ##InMemTableSize
FROM ##RowSizeCount R;
 
SELECT * FROM ##InMemTableSize;
 
--InMem NonClust Index Size 
SELECT (8+(R.DiskRowSize_Bytes/2)) AS [IndexRowSize_Bytes], ((8+(R.DiskRowSize_Bytes/2))*B.[InMemHashIndex_BucketCount])/1024 AS [InMemNonClustIndexSize_KB] INTO ##InMemNonIndex FROM ##RowSizeCount R,##HashBuckets B;
 
SELECT * FROM ##InMemNonIndex;
 
--Rowversioning Size
SELECT (@LongTransDurationInSec * @PeakNumUpdateDeleteInSec) * T.InMemRowSize_Bytes_Total AS [RowChangesSize_Bytes_Sec], 
(((@LongTransDurationInSec * @PeakNumUpdateDeleteInSec) * T.InMemRowSize_Bytes_Total) * ((@TotalChangesPerc*R.[RowCount])/100))/1024 AS [RowVersioningSize_KB]
INTO ##RowVersioning
FROM ##InMemTableSize T, ##RowSizeCount R;
 
SELECT * FROM ##RowVersioning;
 
--Summary
SELECT T.[InMemTableSize_KB],NIC.[InMemNonClustIndexSize_KB] * @NumNonIndex AS [Total_Non_Clust_Index_Size_KB],(H.[Bucket_Allocation_Size_Bytes]/1024) * @NumHash AS[Total_Hash_Index_Size_KB] , RV.RowVersioningSize_KB ,
(T.[InMemTableSize_KB]+(NIC.[InMemNonClustIndexSize_KB] * @NumNonIndex ) +((H.[Bucket_Allocation_Size_Bytes]/1024) * @NumHash) + RV.RowVersioningSize_KB ) AS [Total_InMem_Obj_Size_KB]
FROM ##InMemTableSize T , ##InMemNonIndex NIC , ##RowVersioning RV, ##InMemHashBuckets H;
 
--Drop tables
DROP TABLE ##RowSizeCount;
DROP TABLE ##HashBuckets;
DROP TABLE ##ActualBucket;
DROP TABLE ##InMemTableSize;
DROP TABLE ##InMemNonIndex;
DROP TABLE ##InMemHashBuckets;
DROP TABLE ##RowVersioning;
 
END
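The WHILE loop in the procedure finds the next power of two above the row count by repeated doubling, because hash index buckets are always allocated in powers of two at 8 bytes each. On SQL Server 2012 and later, the same rounding can be sketched in a single expression with the two-argument LOG function; the row count of 121,317 below is an assumption (the size of AdventureWorks' Sales.SalesOrderDetail):

```sql
-- Round a row count up to the next power of two; each allocated bucket costs 8 bytes.
DECLARE @RowCount BIGINT = 121317;  -- assumed row count (AdventureWorks Sales.SalesOrderDetail)
SELECT POWER(CAST(2 AS BIGINT), CEILING(LOG(@RowCount, 2)))     AS Bucket_Allocation_Count,  -- 131072
       POWER(CAST(2 AS BIGINT), CEILING(LOG(@RowCount, 2))) * 8 AS Bucket_Allocation_Bytes;  -- 1048576 (1024 KB per hash index)
```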

 

The following statement executes the stored procedure:

EXEC Sp_InMemTableSizeEst	@SCHEMA = 'Sales', @TableName = 'SalesOrderDetail' , @NumHash = 4 , @NumNonIndex = 4 , 
							@LongTransDurationInSec = 2 , @PeakNumUpdateDeleteInSec = 5 , @TotalChangesPerc = 30;

 

The statement above targets the 'Sales.SalesOrderDetail' table with 4 hash indexes and another 4 nonclustered indexes. The longest transaction duration is 2 seconds, the number of changes (update/delete) per second can peak at 5, and 30% of the table is expected to change.
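To see how these parameters feed the row versioning formula, here is the arithmetic spelled out, mirroring the procedure's calculation. The 215-byte In-Memory row size (127-byte body + 24-byte header + 8 index pointers) and the 121,317-row count are assumptions based on AdventureWorks' Sales.SalesOrderDetail:

```sql
DECLARE @InMemRowBytes INT = 215,     -- 127-byte row body + 24-byte header + 8 indexes * 8 bytes
        @RowCount      INT = 121317;  -- assumed AdventureWorks Sales.SalesOrderDetail row count
-- Versioned bytes per second = longest transaction (2 s) * peak changes per second (5) * row size,
-- then scaled by the 30% of rows expected to change.
SELECT (2 * 5) * @InMemRowBytes AS RowChangesSize_Bytes_Sec,                          -- 2150
       ((2 * 5) * @InMemRowBytes) * ((30 * @RowCount) / 100) / 1024 AS RowVersioningSize_KB;  -- 76415
```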

After execution, SQL Server returns the following result sets:

[Figure: InMemEstMemory – the result sets returned by the stored procedure]

As you can see in the figure above:

  • The row size of the disk-based table is 127 bytes.
  • The In-Memory table size is about 25,471 KB.
  • The total In-Memory nonclustered index size is 33,644 KB (8,411 KB per index, times 4 indexes).
  • The total In-Memory hash index size is 4,096 KB.
  • Row versioning needs 76,415 KB.
  • In total, the In-Memory object needs 139,626 KB.

It is better to provision an additional 20%–30% of memory on top of this estimate.
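Once the table has actually been migrated, the estimate can be checked against real consumption. A minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV, assuming the memory-optimized version of Sales.SalesOrderDetail already exists in the current database:

```sql
-- Compare the estimate with actual memory consumption after migration
SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS [Table],
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('Sales.SalesOrderDetail');  -- assumed table name
```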
