What’s new in the SQL Server 2016 CTP 3.3 ScriptDom Parser

We just made SQL Server 2016 CTP 3.3 publicly available a few days ago, and it has some new features which are also reflected in the managed T-SQL parser (Microsoft.SqlServer.TransactSql.ScriptDom). As readers of this blog may know, I am a big fan of this parser and to see it keeping pace with the latest preview release of the SQL Server engine feels great!

Here are the main changes you can leverage if you are using the SQL Server 2016 CTP 3.3 ScriptDom parser:

  • The ScriptDom parser is up to date with the new support for delayed compression of delta rowgroups in columnstore indexes: the new COMPRESSION_DELAY option in ALTER INDEX helps if you are using nonclustered columnstore indexes in a real-time operational analytics scenario. See this article for more details.
  • The RSA_3072 and RSA_4096 algorithms are now recognized by the ScriptDom parser. These are typically used in the CREATE ASYMMETRIC KEY syntax. A quick verification sketch for both changes follows this list.
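
As a quick illustration, here is a minimal sketch (with hypothetical object names MyCCI, MyTable and MyKey) of how you could round-trip the new syntax through the TSql130Parser class to confirm that the CTP 3.3 parser recognizes it:

// Minimal sketch: verify that the CTP 3.3 ScriptDom parser accepts the new syntax.
// Object names (MyCCI, MyTable, MyKey) are hypothetical, for illustration only.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;

class Ctp33SyntaxCheck
{
    static void Main()
    {
        var sql = @"
ALTER INDEX MyCCI ON dbo.MyTable SET (COMPRESSION_DELAY = 10 MINUTES);
CREATE ASYMMETRIC KEY MyKey WITH ALGORITHM = RSA_4096;
";

        IList<ParseError> errors;
        var parser = new TSql130Parser(true);
        parser.Parse(new StringReader(sql), out errors);

        // With the CTP 3.3 version of ScriptDom this should report 0 errors.
        Console.WriteLine("Parse errors: " + errors.Count);
    }
}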

I also want to point anyone who is planning to upgrade an existing SQL Server 2016 preview installation (typically CTP 3.2) to the release notes, which contain a very important step you must take BEFORE upgrading to CTP 3.3 if you are using FILESTREAM. So please take a minute and check the release notes prior to upgrading!

Hope this helps!


Changes in SQL 2014 RTM ScriptDom

It has been over 3 months now since SQL Server 2014 became generally available. The well-known and much talked about features such as In-Memory OLTP and Clustered ColumnStore indexes are big motivations to move your workload to the new platform.

With the new features, the T-SQL language surface has, of course, also changed. Under compatibility level 120, there are new grammar elements to consider. I recently went through one such round of impact analysis, specifically around the ScriptDom parser.

The good news is that the move from the SQL 2012 ScriptDom to the latest one is very incremental, and almost all of the changes are due to new syntax elements. I have prepared a list of these statements (as represented in the parser) for your convenience.

 

In-Memory OLTP support

It is no coincidence that the changes related to In-Memory OLTP are the biggest! This section also includes related support such as ATOMIC block statements, delayed durability, inline index definitions, etc.
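
To illustrate, here is a minimal sketch (with a hypothetical table name) that contrasts the SQL 2012 and SQL 2014 parsers on a memory-optimized table definition with an inline index; the expected error counts are my assumption about how the two parsers should behave, not output from the actual impact analysis:

// Minimal sketch: the SQL 2014 parser (TSql120Parser) accepts In-Memory OLTP
// syntax that the SQL 2012 parser (TSql110Parser) does not. Table name is hypothetical.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;

class InMemoryOltpParseCheck
{
    static void Main()
    {
        var sql = @"
CREATE TABLE dbo.MyHkTable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    OrderDate DATETIME2 NOT NULL INDEX ix_OrderDate NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
";

        IList<ParseError> errors2012, errors2014;
        new TSql110Parser(true).Parse(new StringReader(sql), out errors2012);
        new TSql120Parser(true).Parse(new StringReader(sql), out errors2014);

        // Expect the 2012 parser to report errors and the 2014 parser to report none.
        Console.WriteLine("TSql110Parser errors: " + errors2012.Count);
        Console.WriteLine("TSql120Parser errors: " + errors2014.Count);
    }
}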

 

Buffer Pool Extension, Resource Governor, HADR and FCI related Server DDL

This set of changes has to do with the support for configuring the BPE, I/O parameters in Resource Governor, and some minor but important DDL to configure HADR availability groups and FCI instances from T-SQL itself.

 

Clustered ColumnStore related

This group includes the index DDL and Archive compression related changes.

 

Incremental Statistics related

 

Managed Lock Priority related

Managed lock priority changes extend to quite a few operations such as index maintenance, partition switch etc.

 

Backup Encryption + Security DDL

Here you will notice the changes to support encryption during backup, as well as the new encryption algorithm enumeration that supports it. We also have several new event notifications available.

 

Miscellaneous

In conclusion, I believe I have covered all the changes (and I’m glad there seem to be no BREAKING changes as such). If you still find something which has been added or changed that I did not mention in this post, please feel free to leave a comment below!

Normalizing T-SQL text, part 2: using the TransactSql.ScriptDom classes

Happy New Year 2014 to all of you! With SQL Server 2014 set to release this year, I’m sure you are all excited about the months to come.

In my previous post I reviewed the classic ways of obtaining ‘normalized’ text for ad-hoc SQL queries. Do take a minute to glance at that post in case you have not already done so. Both methods described there are dynamic: they need an active workload to operate upon. So if you have a static set of queries captured somewhere, such as a .SQL file, we need an alternate method.

Algorithm

If you think about it, the core of normalizing these ad-hoc query text patterns is to identify literals and replace them with a generic / common value. Once the specific literal values are replaced with their generic ‘placeholders’, it becomes a relatively easy task to identify commonalities.

To identify commonalities we propose to use a hashing algorithm, conceptually similar to the one used in the previous approaches. However, when computing hashes, there is another problem to consider: minor differences in whitespace or letter case in the query text will cause different hash values to be computed for essentially identical text.

ScriptDom Implementation

The first thing to consider is what kind of literals we would replace. In the ScriptDom class hierarchy, we have the following derived classes for the parent Literal class:

  • IntegerLiteral: whole numbers
  • NumericLiteral: decimal numbers such as 0.03
  • RealLiteral: numbers written with scientific notation such as 1E-02
  • MoneyLiteral: values prefixed with currency symbol such as $12
  • BinaryLiteral: such as 0xABCD1234
  • StringLiteral: such as ‘Hello World’
  • NullLiteral: the NULL value
  • DefaultLiteral: the DEFAULT keyword
  • MaxLiteral: the MAX specifier
  • OdbcLiteral: ODBC formatted literal such as { T ‘blah’ }
  • IdentifierLiteral: a ‘special’ case when an identifier is used as a literal. I’ve never seen a real-world example of this.

We need to keep this list in mind when we write the visitor that traverses the AST.

Visitor definition

Next, we need to set up our visitor. We will use the Visitor pattern to do this, implementing overridden methods to handle the various types of literals described above. For each type of literal, we replace its value with a fixed, generic value. Here is an example for the RealLiteral visitor method:

public override void ExplicitVisit(RealLiteral node)
{
    // Replace the actual value with a fixed placeholder so that statements
    // differing only in their literal values normalize to identical text.
    node.Value = "0.5E-2";
    base.ExplicitVisit(node);
}
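
For reference, here is a minimal sketch of what the surrounding visitor class might look like (the class name myvisitor matches the invocation snippet below); only a few literal types are shown, the placeholder values are arbitrary, and a fuller implementation would override the remaining literal types from the list above in the same way:

// Minimal sketch of the normalizing visitor (a few literal types only).
// The placeholder values are arbitrary; any fixed value per literal type works.
class myvisitor : TSqlFragmentVisitor
{
    public override void ExplicitVisit(IntegerLiteral node)
    {
        node.Value = "1";       // e.g. WHERE i = 1 and WHERE i = 3 now normalize identically
        base.ExplicitVisit(node);
    }

    public override void ExplicitVisit(StringLiteral node)
    {
        node.Value = "str";     // 'Smith', 'Singh', 'Lee' all become 'str'
        base.ExplicitVisit(node);
    }

    public override void ExplicitVisit(RealLiteral node)
    {
        node.Value = "0.5E-2";  // as shown above
        base.ExplicitVisit(node);
    }

    // ... overrides for NumericLiteral, MoneyLiteral, BinaryLiteral, etc. follow the same pattern
}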

Visitor invocation

For performance reasons we will invoke the visitor using a Parallel.ForEach loop, which makes efficient use of multi-core CPUs:

Parallel.ForEach(
                 (frag as TSqlScript).Batches,   // frag is the parsed TSqlFragment
                 batch =>
                     {
                         // Each batch gets its own visitor instance, so no state is shared across threads.
                         myvisitor visit = new myvisitor();

                         batch.Accept(visit);

                         // ... regenerate the script and compute its hash (shown in the next sections) ...
                     });
This way, the T-SQL batches in the original script are visited in parallel across worker threads.

Regenerate the Script

Once the visitor has done its job of ‘normalizing’ the various literals encountered, the next step is to regenerate the script from the fragment we already have. That takes care of one of the two problems we spoke about, namely whitespace. We can do that using one of the many SqlScriptGenerator classes available (there is one for each compatibility level). In the code snippet below, scrgen is an instance of one of the SqlScriptGenerator classes and script holds the output:

scrgen.GenerateScript(batch, out script);
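
For context, here is a minimal sketch of how scrgen could be created; Sql110ScriptGenerator is used purely as an example (matching the compatibility level 110 argument in the command line shown later), and batch is the TSqlBatch from the Parallel.ForEach loop above:

// Minimal sketch: create a script generator for compatibility level 110 and
// regenerate consistently formatted text for the batch that was just visited.
SqlScriptGenerator scrgen = new Sql110ScriptGenerator();

string script;
scrgen.GenerateScript(batch, out script);

// 'script' now contains consistently formatted T-SQL, so incidental whitespace
// differences in the original batches no longer affect the hash.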

Calculate the hash

Once the normalized script is re-generated from the SqlScriptGenerator class, it can then be run through a proper hash algorithm (in this sample we use SHA1) to calculate the hash value of the given script. Here is where we also handle the case sensitive / insensitive nature of the script:

  • For the case-insensitive option, we simply convert the generated script to lower case before we compute the hash.
  • For the case-sensitive option, we compute the hash on the generated script as-is.

using (var hashProvider = new SHA1CryptoServiceProvider())
{
    if (caseSensitive)
    {
        // Case-sensitive: hash the regenerated script as-is.
        hashValue = Convert.ToBase64String(hashProvider.ComputeHash(Encoding.Unicode.GetBytes(script)));
    }
    else
    {
        // Case-insensitive: lower-case the script before hashing.
        hashValue = Convert.ToBase64String(hashProvider.ComputeHash(Encoding.Unicode.GetBytes(script.ToLowerInvariant())));
    }
}

Track unique hashes

We can use a simple Dictionary class in .NET to track these, along with usage counts for each bucket. Each bucket also tracks an example of the batch (the original text itself.)
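
To make the bookkeeping concrete, here is a minimal sketch; BatchBucket, hashValue and originalText are illustrative names (hashValue comes from the previous snippet, originalText stands for the raw batch text), and since batches are visited in parallel the shared dictionary is protected with a simple lock:

// Minimal sketch of tracking unique hashes; the names are illustrative only.
class BatchBucket
{
    public int UsageCount;
    public string ExampleText;     // an example of the original (un-normalized) batch text
}

var buckets = new Dictionary<string, BatchBucket>();
var bucketLock = new object();

// ... inside the per-batch work, once 'hashValue' and 'originalText' are known:
lock (bucketLock)   // the Parallel.ForEach loop means the dictionary is shared across threads
{
    BatchBucket bucket;
    if (buckets.TryGetValue(hashValue, out bucket))
    {
        bucket.UsageCount++;
    }
    else
    {
        buckets[hashValue] = new BatchBucket { UsageCount = 1, ExampleText = originalText };
    }
}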

Sample output

The sample project, when compiled and executed as shown below, gives the following output.

Command line

TSQLTextNormalizer.exe c:\temp\input.sql c:\temp\output.sql 110 false

Input

select * from ABC
where
i = 1
GO

select * from abC where i = 3
GO

Output

-- 2 times:
select * from ABC
where
i = 1
GO

That’s it! You can use this in many cases, limited only by your imagination. And more importantly, I hope it showed you the power and flexibility of the ScriptDom classes.

Download the sample project here

Disclaimer

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm.

Normalizing T-SQL text, part 1: using the RML Utilities and the DMVs

A common problem when dealing with workloads which issue ad-hoc SQL commands (i.e. without parameterization) is to find out the ‘normalized’ version of the pattern. For example, these three statements are essentially the same ‘template’:

  • SELECT BusinessEntityId FROM Person.Person WHERE LastName = ‘Smith’
  • SELECT BusinessEntityId FROM Person.Person WHERE LastName = ‘Singh’
  • SELECT BusinessEntityId FROM Person.Person WHERE LastName = ‘Lee’

And so are these three, except that they are a distinctly new pattern:

  • SELECT * FROM Person.person WHERE LastName IN (‘Smith’)
  • SELECT * FROM Person.person WHERE LastName IN (‘Smith’, ‘Singh’)
  • SELECT * FROM Person.person WHERE LastName IN (‘Smith’, ‘Singh’, ‘Lee’)

The requirement therefore is to detect only the common ‘template’ / ‘normalized’ text – in most cases a representative example of the ad-hoc statement. In this post, I will show you the classic ways to do that: using the RML Utilities (specifically using ReadTrace) or using the query_hash value in the DMVs.

Option 1: Using the RML Utilities / ReadTrace

The classic way to do this, of course, was to capture a profiler trace file (preferably using server-side tracing) and then run that trace through the ReadTrace utility. The neat thing about this utility is that it ‘aggregates’ similar patterns into hash buckets and then displays statistics for each hash ID. To do that successfully, it requires a SQL instance and database.

Here is a sample command line for using this utility:

readtrace -Ic:\temp\test.trc -S. -dTest -f

The above imports the trace (from c:\temp\test.trc) into a database called Test on my local default instance of SQL 2012. Once that is done, you can work your magic on the various tables which are created by this utility and extract the unique batches:

SELECT OrigText, HashID
FROM ReadTrace.tblUniqueBatches

Here is the sample output from my test, where I had issued those 6 statements (each delimited by a GO) which were shown at the beginning of this post:

  • SELECT BusinessEntityId FROM Person.Person WHERE LastName = ‘Smith’ => hash value of -8759399170292131263
  • select * from Person.person where LastName IN (‘SMith’) => hash value of 1644287078950160061

So the good news is that this utility found the underlying patterns very accurately, but it also requires extra effort (to provision a SQL instance and gather a trace file). One other advantage of the ReadTrace utility is that it also supports importing Extended Events (XEL) files (though that feature is still in beta).

For more details on the normalization capabilities within ReadTrace, Download RML Utilities for SQL Server (x64) and then read the PDF help file. The various patterns are described in the section ‘Normalization Patterns’.

Option 2: Using the query_hash value

Starting with SQL Server 2008, you can use the query_hash functionality to isolate these common patterns. Here is an example of a query which does that and lists one sample query for each ‘pattern’:

select * from
(select ST.text, ROW_NUMBER() OVER (PARTITION BY query_hash ORDER BY query_hash) Ranking from sys.dm_exec_query_stats QS
cross apply sys.dm_exec_sql_text (QS.sql_handle) ST
where objectid IS NULL
and ST.encrypted = 0
) AS Q
where Q.Ranking = 1

While this is easy to use and available out of the box, it does not work for SQL 2005, and obviously not for SQL 2000 either. (Why do I even bring up SQL 2000, given that it is out of support? Because when upgrading those few remaining workloads from SQL 2000 to newer versions, it is a good opportunity to tune such ad-hoc statements by replacing them with parameterized queries.)

Also, there is another problem with IN predicates. When there is a varying set of literals passed to the IN predicate, it turns out that the query_hash is different. Here is an example:

  • select * from Person.person where LastName IN (‘SMith’, ‘Singh’, ‘Lee’)  => hash value 0x6FAFADDF37AA7683
  • select * from Person.person where LastName IN (‘SMith’) => hash value 0x77A1391010F61AE7
  • select * from Person.person where LastName IN (‘SMith’, ‘Singh’) => hash value 0x9BEA60D8D6595AA5

Summary

I’d like to quickly compare these two approaches and then set up what’s coming next.

ReadTrace:

  • Offline method
  • Can consume TRC or XEL files
  • Supports SQL Server versions 2000, 2005, 2008, 2008 R2, 2012 and SQL 2014 CTP2 (version 9.04.0004 as of the time of writing this post)
  • Correctly handles many patterns, including IN lists

query_hash:

  • Online method
  • Requires SQL 2008 or above
  • Dependent on the query plan cache
  • Some limitations, such as not aggregating IN predicate lists correctly

In the next part I will show you how you can cook your own ‘aggregator’ / ‘normalizer’ using the Transact-SQL ScriptDom parser and then compare it with these two approaches. Stay tuned!

Slides and samples for my SQL PASS ScriptDom talk

Thank you very much, PASS AppDev for giving me the chance to share the capabilities and usage of the ScriptDom parser! The session was packed to capacity even before we got underway, something we did not quite expect given the specialized nature of the talk.

Request: if you want to see an ‘encore’ or want more, please leave a comment on this blog, or email the AppDev DL.

Edit 12 Nov 2013: The recording of this session is now available.

Download Samples

Download Slide deck

Disclaimer

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm.

SQL PASS AppDev Virtual Chapter meeting

I’ll be speaking about the SQL Server ScriptDom parser at the PASS AppDev Virtual Chapter meeting this Friday, 1st of November 2013. Here is the session abstract as published on the AppDev VC website:

Session title: DIY: T-SQL Swiss Knife Using the ScriptDOM T-SQL Parser

Abstract: Want to find out exactly how many DELETE statements there are in your database? Even better, how many of those are missing a WHERE clause? Or maybe you want to check your code for usage of the COMPUTE clause, but Upgrade Advisor does not help you do so? Or maybe you want to ‘pretty print’ your SQL scripts? Many such requirements can easily be taken care of by using the TransactSql.ScriptDom parser which ships with SQL Server 2012. This session is for DBAs and developers who want to dig DEEP into their T-SQL code with simple C# and PowerShell scripts, and thereby make their jobs (and lives) easier!
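
As a small taste of the kind of thing the session covers, here is a minimal sketch (my own illustration, not taken from the slide deck or sample code) of the "how many DELETE statements are missing a WHERE clause" check mentioned in the abstract:

// Minimal sketch: count DELETE statements that have no WHERE clause in a .SQL file.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;

class DeleteWithoutWhereVisitor : TSqlFragmentVisitor
{
    public int Count;

    public override void ExplicitVisit(DeleteStatement node)
    {
        // A missing WHERE clause shows up as a null WhereClause on the delete specification.
        if (node.DeleteSpecification.WhereClause == null)
        {
            Count++;
        }
        base.ExplicitVisit(node);
    }
}

class Program
{
    static void Main(string[] args)
    {
        var parser = new TSql110Parser(true);
        IList<ParseError> errors;
        TSqlFragment fragment;

        // args[0] is the path to the .SQL file to analyze.
        using (var reader = File.OpenText(args[0]))
        {
            fragment = parser.Parse(reader, out errors);
        }

        var visitor = new DeleteWithoutWhereVisitor();
        fragment.Accept(visitor);

        Console.WriteLine("DELETE statements missing a WHERE clause: " + visitor.Count);
    }
}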

If you are interested in T-SQL and programming, do join us on Fri, Nov 01 2013 at 07:00 (GMT-07:00) Mountain Time (US & Canada).