The SQL 2014 pre-release forums are now live on MSDN! Please post your questions / comments / feedback on the forum: http://social.msdn.microsoft.com/Forums/en-US/home?category=sql14prerelease
Error 41342 when creating In-Memory OLTP (‘Hekaton’) filegroup
A Twitter conversation unearthed a specific requirement for using In-Memory OLTP features, namely a processor instruction set requirement. The user was trying to create a ‘Hekaton’ database using the sample script from here, and was getting error message 41342:
The model of the processor on the system does not support creating filegroups with MEMORY_OPTIMIZED_DATA. This error typically occurs with older processors.
This error is documented on our (brand new) SQL 2014 Books Online page. According to that page, memory-optimized tables require a processor model that supports atomic compare-and-exchange operations on 128-bit values. In the Intel x86-64 instruction set, such 128-bit atomic operations are performed by the CMPXCHG16B instruction (assembly language, in case you were wondering).
With this background, and knowing that the user was running VirtualBox, I suspected this was a problem with the CPU emulation in VirtualBox. Personally, I am fortunate to be running Hyper-V on Windows 8, so I would never see this issue!
For VirtualBox, it turns out that we need to adjust the virtualization software settings as per this thread. More information is also available at this link. From my understanding, the VirtualBox setting is only effective if the physical hardware supports the instruction as well. If you get this error on physical hardware, check whether the processor supports the CMPXCHG16B instruction (older AMD processors, for example, might not support it).
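To make those checks concrete, here is a hedged sketch of what I would run from PowerShell: the Sysinternals Coreinfo utility reports a CX16 line indicating CMPXCHG16B support, and (if I recall the thread above correctly) the VirtualBox extra-data key below exposes the instruction to the guest. The VM name and installation path are placeholders – please verify the exact key against the VirtualBox documentation for your version.

# Check for CMPXCHG16B support with Sysinternals Coreinfo (look for the CX16 line)
.\coreinfo.exe | Select-String "CX16"

# For VirtualBox guests, expose CMPXCHG16B to the VM (run while the VM is powered off; name and path are placeholders)
& "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" setextradata "HekatonVM" VBoxInternal/CPUM/CMPXCHG16B 1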
Hope this helps!
Debugging story: Slowness due to NTFS short file (8.3) name generation
When I teach production debugging to my customers, I always tell them that to be successful you need to know not only the right tool and command syntax, but also the right methodology. Perhaps even more important when debugging certain types of issues is knowledge of Windows internals and the ability to ‘connect the dots’.
Recently, I had one such experience, where an application issue turned out to be related to an NTFS behavior (one I had previously read about in my SQL Server work); had I not ‘connected the dots’, I would not have been able to remediate the issue quickly. Wondering what it is all about? Read on!
Scenario
So this was a strange problem: a server had hundreds of processes which were all seemingly ‘hung’ and not doing anything. The server in question was hosting a vendor application, which had spawned those processes, but normally those processes should execute quickly and not accumulate the way they had done.
This appeared to me like the processes were hung in some way. Since the application was deployed as a service, these child processes were on a non-visible desktop, so maybe they had popped up a UI element – that was my first theory. But as with any hung user-mode process situation, I decided to collect hang-mode dumps and look at those to be sure about the root cause. I took sample dumps from 2 processes using DebugDiag 1.2.
Dump analysis
Once the dumps were available, I opened them in WinDbg (the x86 version, as these were 32-bit application processes). Here is the call stack from the dump of the first process:
0:000> kL
ChildEBP RetAddr
0018d368 74cd9a04 ntdll!NtSetInformationFile+0x12
0018d43c 74cecd44 kernel32!MoveFileWithProgressTransactedW+0x334
0018d474 74ceccec kernel32!MoveFileWithProgressTransactedA+0x5b
0018d494 74d3d93f kernel32!MoveFileWithProgressA+0x1b
0018d4b0 00401302 kernel32!MoveFileA+0x16
WARNING: Stack unwind information not available. Following frames may be wrong.
0018d4cc 00401aa2 fooapp+0x7777
0018ff94 76fb9ef2 kernel32!BaseThreadInitThunk+0xe
0018ffd4 76fb9ec5 ntdll!__RtlUserThreadStart+0x70
0018ffec 00000000 ntdll!_RtlUserThreadStart+0x1b
The above call stack is due to a file being moved. In this case, by dumping the parameters to MoveFileA, we found that the file was being renamed within the same folder (the file extension was being changed from .FOO to .BAR – file extensions changed to protect the identity). The call stack for the thread in the second process is given below:
0:000> kL
ChildEBP RetAddr
0018d38c 74c7c5dc ntdll!NtCreateFile+0x12
0018d430 74cc3f86 KERNELBASE!CreateFileW+0x35e
0018d45c 74cc53e4 kernel32!CreateFileWImplementation+0x69
0018d48c 004012b3 kernel32!CreateFileA+0x37
WARNING: Stack unwind information not available. Following frames may be wrong.
0018d4c0 004019df fooapp+0x8888
0018ff94 76fb9ef2 kernel32!BaseThreadInitThunk+0xe
0018ffd4 76fb9ec5 ntdll!__RtlUserThreadStart+0x70
0018ffec 00000000 ntdll!_RtlUserThreadStart+0x1b
The above is a call stack of the application creating a new file. Looking at both of these, it appeared to me that the hard disk was performing slowly. So we extracted the file name (the first parameter to the CreateFileA routine) and found that it was a new file within a folder on the D: drive. However, when we ran performance counter logs for the D: drive, it appeared to be performing just normally.
So it was quite puzzling to see the calls that create and rename files taking so long despite the physical drive performing quite well.
Root cause
We then looked a bit further by browsing to the folder on the D: drive which was being accessed by the above application. We could not even list its contents (Explorer would appear unresponsive). It then came to mind that when there are a large number of similarly named files in the same folder (which was exactly the case with this application), NTFS has to work extra hard to generate unique names under the 8.3 naming convention (so-called ‘short file names’). This TechNet article mentions a threshold of 300,000 files above which short name generation can become a bottleneck. The algorithm for short file name generation is also documented here, and there is another very interesting but really old KB article which shows how to achieve the name conversion in FoxPro.
Now, these names are largely a legacy remnant of DOS and the 16-bit world, and on most servers there should be no reason to use them. Thankfully, we have a way to disable the generation of these short file names (a hedged sketch of the relevant commands is shown below). I then tried to evaluate the effect that disabling this has; my test results follow.
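For reference, here is a minimal sketch of the commands involved, assuming the affected folder lives on the D: volume (the folder name is a placeholder). Run them from an elevated prompt, and note that the per-volume 8dot3name options shown here exist on Windows 7 / Windows Server 2008 R2 and later – verify the exact syntax with fsutil 8dot3name /? on your build.

# Check the current 8dot3 name generation setting for the volume
fsutil 8dot3name query D:

# Disable 8dot3 name generation on the D: volume only (1 = disabled)
fsutil 8dot3name set D: 1

# Optionally report on (or remove) short names already present; /s recurses, /t runs in test (report-only) mode
fsutil 8dot3name strip /s /t D:\FooAppFolder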
Testing
I executed a series of tests with a simple PowerShell script which creates and renames files in a tight loop. In each case, the folder already contained over a million similarly named files. My tests were done on a Windows 8 laptop with a single-spindle SATA hard disk.
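The original script is not reproduced here, but a minimal sketch of the test approach looks something like the following; the path and file count are placeholders, and the real tests used multiple concurrent processes, as per the table below.

# Minimal sketch: create and then rename similarly named files in a tight loop, timing the run.
# The target folder (pre-created and pre-populated to simulate the crowded directory) is a placeholder.
$folder    = "D:\ShortNameTest"
$fileCount = 25000

$elapsed = Measure-Command {
    for ($i = 0; $i -lt $fileCount; $i++) {
        $baseName = "LongSimilarFileName_{0:D7}" -f $i
        $fooPath  = Join-Path $folder "$baseName.FOO"

        Set-Content -Path $fooPath -Value "test"             # create the file
        Rename-Item -Path $fooPath -NewName "$baseName.BAR"  # rename it, mimicking the application
    }
}

"Elapsed: {0:N0} ms" -f $elapsed.TotalMilliseconds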
| Test description | File count per process | Avg. time (8dot3 ON) | Avg. time (8dot3 OFF) | % Improvement |
|---|---|---|---|---|
| Single process creating files | 25000 | 126028 | 102617 | 19% |
| 5 processes creating + renaming files | 25000 | 567611 | 375946 | 34% |
| 20 processes creating files | 25000 | 2420367 | 1422348 | 41% |
Conclusion
The benefits of disabling 8dot3 name generation are obvious from the test results above, and they validate the guidance given in the articles mentioned previously. Keep in mind that these results were obtained on Windows 8 / Windows Server 2008 R2, where the algorithm for 8dot3 name generation has been further tuned; the gains might be even more dramatic on Windows Server 2003.
So, in summary, we would like to confirm the benefits of disabling 8dot3 name generation on servers where there are large numbers of similarly named files in the same folder. For SQL Servers using FILESTREAM storage, this setting is recommended as well.
Further Reading
For more information on NTFS, I encourage everyone to take a look at the presentation that Neal Christiansen (Principal SDE, Microsoft) delivered at a community meeting: NTFS – The workhorse file system for the Windows Platform. Neal has also presented two excellent talks on Channel 9, which can be accessed here and here. Amazing information – take a look!
Jose Barreto, Principal PM at Microsoft, has blogged about the performance impact of 8.3 names. His TechEd 2011 presentation is also worth a look.
On a side note, Adi Oltean talks about potential security considerations when 8.3 filenames are used – do take a look!
PowerShell script to extract T-SQL task code from SSIS packages
Requirement
Some time back I presented a PowerShell script which extracts T-SQL code from RDL files. Remember that I created this script to assist me in doing code reviews of ad-hoc SQL snippets which were embedded in the report definition files.
Another common usage of ad-hoc T-SQL code is within SSIS packages, specifically the Execute SQL task or the (somewhat less commonly used) Execute T-SQL Statement Task. The sample script I provide in this blog post relates to the Execute SQL task.
Methodology
To do this efficiently we have a couple of choices:
- We could parse the DTSX file directly (since it is, after all, an XML file)
- We could use the managed API to load the package and iterate through the tasks
Given that our requirement is quite specific and we want to reduce any dependency on managed objects, I chose the XML parsing approach. It helps very much that the DTSX package schema is fully documented (and I have kept in mind the changes made to the DTSX schema in SQL 2012).
Scope: this script will focus on the Execute SQL task for the moment. Interested readers can modify the script to operate on the (rarely used) Execute T-SQL Statement Task. It will also skip any StoredProcedure invocation calls.
Usage
The TSQLfromDTSX.ps1 script takes 2 parameters: the path to the files and an option to recursively look at sub-folders.
powershell.exe .\TSQLfromDTSX.ps1 -RootFolder <folder path> -Recurse <0 or 1>
Important: if the folder path has embedded spaces, you need to surround the path with single quotes (') and NOT the usual double quotes you might expect.
Output
It outputs the extracted T-SQL code to the standard output, so you can simply redirect that to another file if required. An interesting feature I’ve plugged into the script is to prefix the T-SQL code with the name of the package and the specific task where that code was found. These ‘identity’ values are surrounded by the ~~ markers, which can make it easy for you to ‘tag’ these batches with some names in other activities.
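If you just want the gist of the XML-parsing approach, here is a minimal, hedged sketch: it simply walks each .dtsx file and emits any SqlStatementSource attribute it finds, ignoring namespaces, the SQL 2012 schema differences and the StoredProcedure filtering that the full script handles. The parameter names mirror the real script, but everything else here is illustrative only.

# Minimal sketch: pull ad-hoc SQL text out of Execute SQL tasks in .dtsx files (namespace-agnostic)
param(
    [string]$RootFolder = ".",
    [switch]$Recurse
)

Get-ChildItem -Path $RootFolder -Filter *.dtsx -Recurse:$Recurse | ForEach-Object {
    $packageName = $_.Name
    [xml]$pkg = Get-Content -Path $_.FullName

    # SqlStatementSource attributes carry the statement text for Execute SQL tasks
    foreach ($attr in $pkg.SelectNodes("//@*[local-name()='SqlStatementSource']")) {
        "~~{0}~~" -f $packageName    # tag the batch with the package name, as the full script does
        $attr.Value
    }
}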
This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.
This posting is provided “AS IS” with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm.
Effective technical presentations: a mind-map
Over the last year I’ve been fortunate enough to be trained by some top-notch professionals from the world of media and big-stage events on how to deliver world-class sessions – whether it be a multi-day training, a short TechEd session, an online meeting or webcast, or even a video recording.
From those trainings, and from my experience delivering world-class technical sessions and coaching my fellow colleagues to do the same, I’ve seen many patterns emerge. Some of these are ‘must-do’ items and some are ‘do NOT do this’ items.
So over time, I’ve been putting together a mind-map of what can help us make great technical presentations. I’m sharing this with the broader community now, in the hope that it helps you and more importantly to solicit your feedback and comments – that’s the way we ALL learn!
Here we go! And do not forget to comment and leave your questions, thoughts and feedback!
(Please click on the thumbnail below to see the full size view.)
PowerShell script to extract T-SQL code from RDL files
One of the things I do in my day job at Microsoft Services is review database design and T-SQL code. When it comes to code, there is a challenge just to find ALL the possible code which hits our database engine, because a non-trivial amount of T-SQL code is not in the form of stored procedures but is embedded in other places – such as Reporting Services (SSRS) reports.
Now, if you have hundreds of RDL files, extracting T-SQL from those files is a very tedious job: opening each file in Report Designer, you would have to comb through each dataset – a tedious and error-prone procedure. This is where knowledge of the RDL format is helpful. Luckily for us, the specification for RDL is documented and available publicly. Using this knowledge, I constructed a quick (and perhaps dirty) PowerShell script to scan through multiple RDL files and extract the T-SQL command text.
To use the script, just invoke it from a PowerShell command prompt and pass in the root folder path (-RootFolder) where your RDL files are stored. The -Recurse parameter is also useful in case your RDL files are spread across many subfolders.
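As a rough idea of how the script works, a minimal sketch might look like the one below. It relies on the CommandText element that the RDL specification defines under each dataset’s Query element, looked up in a namespace-agnostic way; the parameter names mirror the real script, but the rest is illustrative only.

# Minimal sketch: extract dataset command text from .rdl files
param(
    [string]$RootFolder = ".",
    [switch]$Recurse
)

Get-ChildItem -Path $RootFolder -Filter *.rdl -Recurse:$Recurse | ForEach-Object {
    $reportName = $_.Name
    [xml]$report = Get-Content -Path $_.FullName

    # CommandText under DataSet/Query holds the ad-hoc T-SQL
    foreach ($node in $report.SelectNodes("//*[local-name()='CommandText']")) {
        "-- From report: {0}" -f $reportName
        $node.InnerText
    }
}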
Do leave comments on whether this script was useful for you!
This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.
This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm.
SQL Server and Storage / SAN replication (mirroring) tips
Some customers have adopted block-level replication of the storage disks (LUNs) as their DR solution for SQL Server databases. Remote mirroring of storage disks sounds great – transparent to the application, replicates every block as-is (so very little reconfiguration is required on the other side), scales fairly linearly regardless of the number of databases, and so on. Sounds better than sliced bread, huh?
However, there are some finer points we would like to share from our experiences in the field.
Support Policy
Our KB articles sometimes read like legal documentation, and rightly so – in many cases they are the ‘rulings’ and ‘by-laws’ by which we and our customers reach conclusions in tricky situations. So there are some KB articles which are must-reads where storage is concerned.
KB 913945 states that Microsoft does not officially certify that specific third-party products will work with SQL Server. The most important part of that article is here:
Microsoft support documents may use the phrase "not supported by Microsoft" with regard to third-party products. This phrase means that the first line of support for such a product should be the third-party product vendor rather than Microsoft.
Next we come to KB 910716 which specifically talks about remote mirroring a.k.a. SAN replication. Some excerpts from this document:
- This article does not endorse any specific vendor’s solution.
- Work with your hardware vendor and your storage vendor to make sure that the remote mirroring solution that you want to use supports the I/O requirements of SQL Server.
- If your solution meets the requirements that are discussed in this article, Microsoft Customer Support Services will provide support for issues that are not related to storage devices.
- To obtain support from Microsoft Customer Support Services for issues that involve transactional throughput, you must be able to reproduce the issue in a mirror-free environment.
- To obtain support for issues that involve a storage device, see your storage vendor.
- To obtain support for issues that involve third-party replication technology, see your technology vendor.
Lastly, we have KB 943984, which talks about using storage mirroring to support a so-called geo-cluster. In such cases, remote disk mirroring is used to provide a consistent copy of the clustered storage disks on a remote site. This article has one very important line:
Please contact the data replication vendor for any issues related to accessing data on failover.
By now, you must be feeling very jittery about using such remote mirroring. But fear not – there are some basic precautions you can take to be sure.
SQL I/O Requirements
The fundamental issue is that the database and log files are now in two places rather than one, which means the basic I/O requirements that SQL Server assumes or requires must be met in both places. These requirements are described in the ages-old (but still relevant) SQL I/O Basics Part 1 document (FYI, the Part 2 document is also a very good read) and are given below:
- Stable Media: Any system running SQL Server must provide stable media capabilities for the database log and data files. If the system has something like a non-battery backed cache or perhaps caching enabled on disk, it is not safe for SQL Server installations. Ensure that your system has proper caching attributes to secure the data to stable media.
- Write Ordering: SQL Server is designed to maintain the WAL protocol as described earlier in this paper. The underlying environment plays a pivotal role in the maintenance of the protocol. Write ordering must be maintained. With any system, but specifically for remote mirroring implementations, write ordering is critical to maintain a point-in-time view of the data. Many remote mirroring vendors implement generation logic to maintain physical write order even when the write operations are transmitted between remote caches with alternate protocols.
- Torn I/O Prevention (Split I/Os): For SQL Server, the 8-KB blocks need to be handled as a single block of data. Systems that split I/Os must be configured to avoid splitting I/O requests into smaller blocks. Some dynamic disk and volume managers can be configured with block and cluster sizes that are smaller than 8 KB or not on 8-KB boundaries. These systems may split the SQL Server I/O request across physical system components. In doing so, they can introduce the possibility of torn pages and they can destroy true write ordering precedence.
How do I test for these?
The easiest way to test if a disk subsystem (standalone or mirrored) meets the above requirements is to simulate SQL Server I/O patterns using the SQLIOSim utility. This utility actually replaces the SQLIOStress utility described in Bob Dorr’s whitepaper mentioned previously.
Write ordering implementations
There are some reference implementations you can check for remote mirroring and SQL Server. I’ll keep adding to these as I come across more.
EMC SRDF
For example, this EMC paper talks about using something called Consistency Groups when using their SRDF technology to remote-mirror a group of LUNs. Consistency Groups are used by EMC to ensure write ordering, which, as you might have read above, is a very important criterion, especially when you have different sets of LUNs for log and data files.
Note: other manufacturers may have their own equivalent to ensure write ordering. I mention EMC’s solution purely as a sample. It is not to be taken in any way as a recommendation or guarantee of any sort.
Hyper-V Replicas
Hyper-V in Windows Server 2012 offers a feature called Replica, which is an easy way to provide DR for a VM. SQL Server is supported in such scenarios – see KB 956893 for details. The scenario is supported as long as the EnableWriteOrderPreservationAcrossDisks flag is set.
TEMPDB and remote mirroring / storage replication
One thing to definitely keep in mind when using remote mirroring is where TEMPDB is placed. If you are replicating the LUNs where the TEMPDB data / log files are placed, that is asking for big trouble! A lot of bandwidth on your storage replication channel will be wasted moving those bits across, when in reality TEMPDB will be recreated on startup of the DR instance.
One good thing here is that SQL Server 2012 allows TEMPDB data / log files to be placed on local disks even for clustered instances – a big help, as it avoids having to include TEMPDB in the remote mirroring setup.
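For reference, moving TEMPDB onto a local (non-replicated) disk is just a matter of pointing its files at the new location and restarting the SQL Server service. Here is a minimal sketch; the logical file names (tempdev / templog), the target paths and the instance name are assumptions – check sys.master_files on your instance for the actual values, and note that Invoke-Sqlcmd requires the SQL Server PowerShell module.

# Sketch: relocate TEMPDB files to a local disk; the change takes effect after the instance restarts.
$tsql = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'L:\TempDBData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDBLog\templog.ldf');
"@

Invoke-Sqlcmd -ServerInstance "MYSERVER\SQL2012INST" -Query $tsql   # instance name is a placeholder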
Wrap-up cheat sheet
Here is a summary of the key points to keep in mind when implementing storage replication / remote disk mirroring:
- Check with the manufacturer if they have addressed the SQL Server storage requirements, specifically the write-ordering implementation
- Test with SQLIOSim well before deployment of the actual SQL instances
- Always check that your HBA drivers and SAN firmware / software are up to date
- Ensure that TEMPDB is not being replicated using disk mirroring across the storage replication channel
That’s it for now. Hope you enjoyed this post! Please leave your feedback / questions in the comments, or use the ‘Email Author’ link on the top right section of the blog to reach me.
‘Cannot resolve the collation conflict’ error message with temp tables
Scenario
Today my customer told me that he was facing the error ‘Cannot resolve the collation conflict’ when executing a JOIN between two tables: a base table and a temporary table. They had recently created the database afresh from scripts and had been facing this issue ever since. On the existing ‘good’ server the problem did not occur.
The only obvious difference was that the ‘good’ database and the ‘bad’ database differed in collation.
It is a fairly common problem and one which has a simple solution, but it does need to be handled in T-SQL code to ensure that it works in all possible scenarios. Here is a simple reproduction of the issue.
Walkthrough
Firstly, let us check the SQL instance collation:
select SERVERPROPERTY('Collation')
go
It tells us that the instance is at the default: SQL_Latin1_General_CP1_CI_AS.
Case 1: Temporary tables without any specific column level collation specifier
Next, let us simulate the ‘bad’ database, which had a different collation from the instance collation, and create a table in that database.
CREATE DATABASE Ukrainian COLLATE SQL_Ukrainian_CP1251_CS_AS
GO
USE Ukrainian
GO
CREATE TABLE MyTab
(
someval VARCHAR(50)
)
GO
SELECT collation_name
FROM sys.columns
WHERE name = 'someval'
and object_id = object_id('MyTab')
GO
As you can see, the column inherits the database collation, namely SQL_Ukrainian_CP1251_CS_AS. Next, let us simulate the actual issue:
CREATE TABLE #something
(
somevaltemp varchar(50)
)
GO
SELECT *
FROM MyTab JOIN #something
ON someval = somevaltemp
GO
The error is reproduced:
Msg 468, Level 16, State 9, Line 3
Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "SQL_Ukrainian_CP1251_CS_AS" in the equal to operation.
This is interesting because tables created in TEMPDB inherit the collation of TEMPDB unless the collation is explicitly specified. A quick check from sys.columns in TEMPDB will confirm it:
SELECT collation_name, name
FROM tempdb.sys.columns
WHERE name = 'somevaltemp'
GO
The result is the same as the instance / TEMPDB collation: SQL_Latin1_General_CP1_CI_AS.
Case 2: Temporary tables with collation specified for the column
Next, let us try to create the temporary table with a collation value identical to the database from which it is created:
CREATE TABLE #somethingexplicit
(
somevaltempexplicit varchar(50) COLLATE DATABASE_DEFAULT
)
GO
We can also confirm the collation of the newly created column:
SELECT collation_name, name
FROM tempdb.sys.columns
WHERE name = 'somevaltempexplicit'
GO
As you will see it is now SQL_Ukrainian_CP1251_CS_AS. Let us attempt the JOIN again this time against the new temporary table:
SELECT *
FROM MyTab JOIN #somethingexplicit
ON someval = somevaltempexplicit
GO
You will see that it succeeds without any issues.
Case 3: Temporary tables generated by a SELECT…INTO
Let us try a final test, this time with a SELECT INTO a temporary table:
SELECT someval AS somenewvalinto
INTO #tempselectinto
FROM MyTab
GO
SELECT collation_name, name
FROM tempdb.sys.columns
WHERE name = 'somenewvalinto'
GO
As you can predict, the columns of the temporary table inherit the collation from the base table which we are selecting from. So the collation is retained as SQL_Ukrainian_CP1251_CS_AS.
SELECT *
FROM MyTab JOIN #tempselectinto
ON someval = somenewvalinto
GO
The JOIN also succeeds in the final case.
Conclusion
If you are creating a temporary table from your database code, and your database happens to use an explicit collation, it is your DUTY to specify the collation in the CREATE TABLE call for the temporary table. You can either specify the collation value explicitly or use the DATABASE_DEFAULT clause.
Using the TransactSql.ScriptDOM parser to get statement counts
Today there was a question on the #sqlhelp Twitter hashtag: “Is it possible to get UPDATE / SELECT / INSERT / DELETE statement counts from SQL Server?”
Implementation
This is a perfect use case for the SQLDOM parser, a.k.a. Microsoft.SqlServer.TransactSql.ScriptDom. I put together a sample C# application to demonstrate how easy it is to meet requirements like the one above. The assumption made below (for simplicity) is that the input is parsed from a .SQL script file. In the real world, we could easily script the object code from a live connection (using SMO, for example) and pass it as a MemoryStream to the TextReader below. But that is left, as they say, to the reader as an intellectual exercise.
Test case
Here is the sample code. Interestingly, if you run it on a script of AdventureWorks, it tells you that there is not a single DELETE statement embedded in any of the sample database’s stored procedures, views or functions (which is correct, and a minor eye-opener as well).
Found 37 SELECTs, 1743 INSERTs, 15 UPDATEs & 0 DELETEs
Conclusion
ScriptDom excels at simple tasks like this, but there are some complications to keep in mind for more advanced requirements. I’ll be posting some tips on how to overcome those complications later on.
Sample Code
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SqlServer.TransactSql.ScriptDom;
using System.IO;
namespace SampleSQLVisitor
{
    class Program
    {
        static void Main(string[] args)
        {
            TextReader txtRdr = new StreamReader("myscriptfile.sql");
            TSql110Parser parser = new TSql110Parser(true);
            IList<ParseError> errors;
            TSqlFragment sqlFragment = parser.Parse(txtRdr, out errors);
            // TODO report the parsing errors generated (if any)

            SQLVisitor myVisitor = new SQLVisitor();
            sqlFragment.Accept(myVisitor);
            myVisitor.DumpStatistics();
        }
    }

    internal class SQLVisitor : TSqlFragmentVisitor
    {
        private int SELECTcount = 0;
        private int INSERTcount = 0;
        private int UPDATEcount = 0;
        private int DELETEcount = 0;

        private string GetNodeTokenText(TSqlFragment fragment)
        {
            StringBuilder tokenText = new StringBuilder();
            for (int counter = fragment.FirstTokenIndex; counter <= fragment.LastTokenIndex; counter++)
            {
                tokenText.Append(fragment.ScriptTokenStream[counter].Text);
            }
            return tokenText.ToString();
        }

        // SELECTs
        public override void ExplicitVisit(SelectStatement node)
        {
            //Console.WriteLine("found SELECT statement with text: " + GetNodeTokenText(node));
            SELECTcount++;
        }

        // INSERTs
        public override void ExplicitVisit(InsertStatement node)
        {
            INSERTcount++;
        }

        // UPDATEs
        public override void ExplicitVisit(UpdateStatement node)
        {
            UPDATEcount++;
        }

        // DELETEs
        public override void ExplicitVisit(DeleteStatement node)
        {
            DELETEcount++;
        }

        public void DumpStatistics()
        {
            Console.WriteLine(string.Format("Found {0} SELECTs, {1} INSERTs, {2} UPDATEs & {3} DELETEs",
                this.SELECTcount,
                this.INSERTcount,
                this.UPDATEcount,
                this.DELETEcount));
        }
    }
}
Disclaimer
This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.
This posting is provided “AS IS” with no warranties, and confers no rights.
TechEd India 2013 – ‘T-SQL Horrors’ slides
TechEd 2013 was a grand success! Thank you – those of you who stayed till 6:15 PM at the Pune session – and even more to those who engaged me in Q&A till 7 PM that evening!
I was very impressed and happy to see people interested in my talk, even though it was at the end of a very long day for most.
As promised earlier I am publishing my content deck, with the sample scripts for all the scenarios which we discussed. All the examples will execute on AdventureWorks (2005/2008 or 2012 versions) and should give you ample practice on the concepts we discussed.
And if you liked the session, please do leave a comment, either here on the blog or on my Twitter account. That’s pretty much the best way you can say thanks!