SQL Server and ‘Instant File Initialization’ Under the Hood – Part 3

Welcome back! As promised last time around, here’s part 3 of the ‘Instant File Initialization’ (a.k.a. ‘IFI’) optimization for SQL Server series. If you missed the first two parts, you should definitely take some time to read them first before resuming this one, because the previous posts cover a lot of background that is assumed in this post.

With that background, this post will show you how IFI works (or rather, did not work, in a specific release!) in conjunction with the Buffer Pool Extension feature in SQL Server 2014 and above.

Buffer Pool Extension Overview

A few weeks ago, one of my colleagues asked this question internally: ‘does IFI (Instant File Initialization) have an impact on creating the BPE?’ That question was the inspiration for this entire series of posts, so I thank him for that. In order to answer the question we first need to understand conceptually how the Buffer Pool Extension (BPE) feature works. The Books Online topic for BPE is a good starting point, but here is my summary:

  • Think of the BPE as offering a ‘Level 2’ cache on top of the primary ‘Level 1’ cache, i.e. the classic Buffer Pool. Only if the DB page cannot be found in either the L1 or the L2 cache will the request spill over to a regular physical read from the data file.
  • The BPE cache mechanism is actually based on a file which is preferably hosted on very fast storage, such as an SSD. For example, in an Azure D-Series VM context, the D: drive is an excellent location for the BPE and/or TEMPDB – see this article from the SQL Server Product Team for some details.
  • The BPE file is created when you run the ALTER SERVER CONFIGURATION command to enable BPE, or at SQL Server startup (if BPE was already configured); see the example after this list.
  • The size of the BPE file can be up to a multiple of the configured ‘max server memory’ (the limit varies by SQL Server edition), but we do not generally recommend more than 4x the max server memory setting. The reason I mention this here is that the BPE file may be a very large file, and depending on which buffer page we are saving into it, the offset of that file write operation may be quite large.
  • Finally, the BPE file is deleted on SQL Server shutdown (and hence re-created on startup.)
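For reference, here is a minimal T-SQL sketch of enabling and then inspecting the BPE; the file path and size below are purely illustrative placeholders.

-- Enable the Buffer Pool Extension (file path and size are illustrative)
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'D:\BPE\SQL14_BPE.BPE', SIZE = 64 GB);

-- Inspect the current BPE configuration
SELECT path, file_id, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

-- Turning it off again deletes the BPE file
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;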

BPE Internals

As with other operations in SQL, the writes into the BPE are optimized using the WriteFileGather() API. And based on which buffer page is being written to the file, the offset into the BPE file itself can be quite large. If we run a Process Monitor trace during the BPE file operations, we will notice that in SQL 2014 RTM there are a number of Synchronous Paging I/O operations (the second highlighted line in the below screenshot) following a regular write operation to the BPE (which in the below screenshot is the first call to WriteFile at an offset of 196771840):

[Screenshot: Process Monitor trace showing Synchronous Paging I/O operations following the WriteFile call into the BPE file]

But as you learnt in the first two parts, writing into ‘random’ locations inside a file will cause the OS to silently ‘zero out’ the allocations from the previous valid data length to the new location, and indeed in the case of BPE writes as well, you will see the tell-tale signs of this:

[Screenshot: Process Monitor stack showing calls to CcZeroDataOnDisk during the BPE writes]

Notice the calls to CcZeroDataOnDisk above, which represent the zero-stamping at the Windows level. This zeroing is synchronous and will cause the top-level WriteFileGather() to block till the allocations are zeroed up to the current data length. What this means is that the SQL task which caused the buffer fetch in the first place will be blocked a bit longer than you would like.

Salvation!

From the above, it certainly looks like there was a potential gap / improvement possibility in SQL 2014 RTM, because the calls to write into the BPE would be effectively synchronous and slow down operations. Thankfully, our development team acted on the feedback from customers and introduced a call to SetFileValidData() in the BPE initialization code in SQL 2014 Service Pack 1, which is exactly what fixes this issue!

So if you now capture a Process Monitor trace during BPE initialization in SQL 2014 SP1, here is what you will see:

[Screenshot: Process Monitor trace of BPE initialization in SQL 2014 SP1, showing the valid data length being set and no zeroing writes]

Since the valid data length is being set proactively to the entire file size itself, initial writes to the BPE file at any random (high) offsets are no longer blocking due to underlying zeroing. This leads to a significant improvement for some customers.

Edit 28 May 2015: Now that SQL 2016 CTP2 is officially available, I’m glad to report that the above improvement is also present in SQL 2016 code base!

So with that, you now know one more way in which the IFI optimization is used within the SQL Server engine. There is one more place which we can talk about, but I’d like to challenge our readers to share their guesses on what that might be – please post your guesses as comments, and I will come back to you shortly with that information as well!


SQL Server and ‘Instant File Initialization’ Under the Hood – Part 2

This is part 2 of my series on ‘Instant File Initialization’ and how that ‘brand name’ actually works under the covers. This post will take a look at what really happens when a database file is created and how the ‘Instant File Initialization’ optimization really helps from a SQL Server perspective. Before you proceed, it is highly recommended that you read Part 1 of this series; if you missed it, please start there!

Before we begin, a big ‘thank you’ to Bob Dorr, who offered some valuable insight on this topic and also authored an excellent white paper on the overall SQL I/O topic. A shout-out as well to Bob Ward’s excellent ‘Inside SQL I/O’ talk at the SQL PASS Conference 2014. Links to both of their works are at the end of this blog post.

In the Beginning…

Let’s start simple: anyone who has worked with SQL Server knows that if you specify a very large file size for the data file, it takes a while (at least with the default setup) to finish this. You also probably know that this is because of ‘zeroing out’ of the underlying allocations.

Now, the million dollar question: when a database is created, ‘conceptually’ there is nothing inside it – right? So why would we need to do the zeroing at this time? Recall from Part 1 of the series that the first WriteFile() call triggered off the underlying zeroing at an OS level. So, though the data file is basically ‘empty’, maybe SQL is writing into some random file locations and causing this?

Now, why would SQL Server write into ‘random’ places at DB creation? The answer is that SQL still needs to perform some ‘metadata’ setup on the file, or on the newly grown section of the file. This ‘metadata’ consists of the internal allocation-related pages, namely the GAM, SGAM and PFS pages, which are scattered at predictable intervals throughout the length of the file.
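If you want to see these allocation pages for yourself, here is a small T-SQL sketch (the database name is just an example, and DBCC PAGE is an undocumented command, so use it on a test system). The first PFS page is page 1 of each data file, the first GAM page is page 2 and the first SGAM page is page 3; PFS pages then repeat roughly every 8,088 pages and GAM/SGAM pairs roughly every 4GB.

-- Route DBCC PAGE output to the client
DBCC TRACEON (3604);

-- ('MyTestDB' is an illustrative database name; file 1, pages 1 / 2 / 3)
DBCC PAGE ('MyTestDB', 1, 1, 3);   -- PFS page
DBCC PAGE ('MyTestDB', 1, 2, 3);   -- GAM page
DBCC PAGE ('MyTestDB', 1, 3, 3);   -- SGAM page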

GAM / PFS Initialization

Now, if you are like me, you would want to verify or see this in the debugger, and indeed some quick poking around with WinDbg will reveal the intricacies of why we are doing this random I/O immediately after resizing or creating the file (and therefore why the zeroing of clusters will normally happen unless you enabled the conditions to use ‘instant file initialization’.)

Firstly, you can poke around in the debugger (note that I used only public symbols for the below walkthrough – you can get started with WinDbg and SQL Server here) and if you get a bit savvy with the debugger you can uncover things like the below:

0:111> x sqlmin!Init*Pages
00007ff8`da328f90 sqlmin!InitGAMIntervalPages (<no parameter info>)
00007ff8`da329190 sqlmin!InitDBAllocPages (<no parameter info>)
00007ff8`da3286a0 sqlmin!InitPFSPages (<no parameter info>)

If you set a few breakpoints you will see the action around PFS and GAM initialization (you will see a lot more PFS pages than GAM pages, because the interval covered by a GAM page is much larger than that of a PFS page). Here is a sample call stack for PFS page initialization:

sqlmin!InitPFSPages
sqlmin!InitDBAllocPages
sqlmin!FileMgr::CreateNewFile
sqlmin!AsynchronousDiskAction::ExecuteDeferredAction
sqlmin!AsynchronousDiskWorker::ThreadRoutine
sqlmin!SubprocEntrypoint
sqldk!SOS_Task::Param::Execute
sqldk!SOS_Scheduler::RunTask

Please keep this aspect in mind because we will revisit this later.

Case 1: Without ‘Instant File Initialization’

Now, imagine this: if SQL were to directly start writing to the ‘random’ locations corresponding to the above GAM and PFS pages, then (as you saw in Part 1) we would expect the corresponding WriteFile() operations to cause the OS to issue underlying CcZeroDataOnDisk calls to zero out the intervening clusters. This would be inefficient, so what SQL does instead is proactively issue 8MB chunked I/O writes to zero out the file. You can easily verify this if you run a filtered Process Monitor trace, which I did, and the result is summarized below:

[Screenshot: filtered Process Monitor trace showing the 8MB WriteFile calls which zero out the data file]

If you dig a bit deeper, specifically use the Stack view inside of Process Monitor for one of the WriteFile() calls shown above, you can see all the details down to the WriteFileGather() routine which does the I/O in chunks of 8MB to zero out the file proactively:

[Screenshot: Process Monitor stack view for one of the WriteFile calls, showing WriteFileGather issuing the 8MB zeroing writes]

Notice that there are no calls by the kernel to CcZeroDataOnDisk. So we are in a way doing what the OS did in the earlier case, perhaps a bit more aggressively due to the larger I/O sizes (8MB.)

Now you can imagine why it takes a long time to zero out a large file. If you attended Bob Ward’s excellent ‘Inside SQL I/O’ session at SQL PASS 2014, he actually does some calculations to show you how long it would take to zero out a large data file. For example, if you have a 10GB data file and 150MB/sec of serial I/O throughput on the drive, you can estimate roughly 70 seconds (10240 MB ÷ 150 MB/sec ≈ 68 seconds) for the zero initialization. That can be a really long time, especially if you get an autogrow of that size!

Seed question: if you scroll through the ProcMon trace to the last of the 8MB WriteFile operations (which are the zeroing ones) then you will notice that there are some 8KB writes which follow. Why? The answer follows at the end of ‘Case 2’ walkthrough below!

[Screenshot: Process Monitor trace showing the 8KB writes which follow the last of the 8MB zeroing writes]

Case 2: With ‘Instant File Initialization’

Now, assume that the SQL Server service account has been granted SeManageVolumePrivilege (which allows the successful use of the SetFileValidData API I mentioned in the previous post); in that case SQL will attempt to use this ‘optimization’ to avoid the zeroing overhead. We captured a sample trace using Process Monitor while SQL was creating a 5GB data file. Here is a screenshot of how the Process Monitor log looks with the Instant File Initialization optimization enabled successfully:

[Screenshot: Process Monitor trace of the 5GB data file creation with Instant File Initialization enabled]

You can see the reference to SetValidDataLengthInformationFile (highlighted) followed by a series of 8KB writes. In the debugger, you will see the following call stack which proves that we do indeed call the SetFileValidData() API from the FCB::InitializeSpace() call:

KERNELBASE!SetFileValidData
sqlmin!FCB::InitializeSpace
sqlmin!FileMgr::CreateNewFile
sqlmin!AsynchronousDiskAction::ExecuteDeferredAction
sqlmin!AsynchronousDiskWorker::ThreadRoutine

Now we answer the previous question we seeded at the end of the Case 1 section: why do we still get the 8KB writes? If you recall from the ‘GAM / PFS Initialization’ section previously then this should be crystal clear! Here is a call stack of one of the 8KB writes:

[Screenshot: call stack for one of the 8KB writes, showing PFS page initialization]

As you can see above, this is for a PFS page initialization. So this explains the 8KB writes after the file was created.

Case 3: Sparse File Creation (Database Snapshot)

Next, let’s look at one of the special cases: Database snapshots in SQL Server are implemented using NTFS ‘sparse file’ functionality. Now, in the case of a sparse file, we do not use either of the two mechanisms mentioned above, and instead use a special mechanism to do the ‘zero initialization’. Why? Read on!

If you read the ‘Instant File Initialization’ (IFI) section in the SQL I/O Basics Chapter 2 white paper, you will see this sentence:

The algorithm used by SQL Server is more aggressive than the NTFS zero initialization (DeviceIoControl, FSCTL_SET_ZERO_DATA)

From MSDN it is clear that there is an optimization to set a range in a sparse file as all zeros without physically extending the file size:

If you use the WriteFile function to write zeros (0) to a sparse file, the file system allocates disk space for the data that you are writing. If you use the FSCTL_SET_ZERO_DATA control code to write zeros (0) to a sparse file and the zero (0) region is large enough, the file system may not allocate disk space.

AHA! So I hope that explains why we cannot use the conventional ‘zero stamping’ or the SetFileValidData mechanism for sparse files. But let’s see this for ourselves! Let’s start by creating a DB snapshot; before executing the statement below, I also put a breakpoint in WinDbg on kernelbase!DeviceIoControl().

-- Create the database snapshot
CREATE DATABASE ZN_test ON
( NAME = ZN, FILENAME =
'l:\temp\ZN_test.ss' )
AS SNAPSHOT OF ZN;
GO

Here is the corresponding Process Monitor trace:

[Screenshot: Process Monitor trace of the database snapshot creation, showing the DeviceIoControl call]

From WinDbg we can get the call stack. You can see that FCB::ZeroFile() calls the DeviceIoControl in this case:

KERNELBASE!DeviceIoControl
KERNEL32!DeviceIoControlImplementation
sqlmin!FCB::ZeroFile
sqlmin!FCB::InitializeSpace
sqlmin!FileMgr::CreateNewFile

Wow! So I hope you get a feel for how many optimizations we have in place for SQL Server from an I/O perspective.

Case 4: Log File Initialization

Last but not least, let us study the case of the transaction log file. Interestingly (and as is known and documented in many places) the log file is always zero-initialized. Here is a ProcMon trace (which was taken when IFI was already being leveraged for the data file creation):

[Screenshot: Process Monitor trace of the transaction log file being created and zero-initialized]

The above operations are largely related to zeroing out the entire file and then formatting the Virtual Log Files within the initial chunk. The log file (2MB in size) was zero-initialized in one shot in the above case; it took 30 milliseconds on my system. Obviously, more real-world sizes would take proportionately longer to finish.

FYI – you can see the progress of the log fixups by using undocumented trace flag 3004.
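For example, here is a minimal sketch of watching that progress; trace flags 3004 and 3605 are both undocumented (3605 routes the output to the error log), so treat this as a test-system-only exercise.

-- Undocumented trace flags: 3004 reports file zeroing details, 3605 sends that output to the error log
DBCC TRACEON (3004, 3605, -1);

-- Now create or grow a database and watch the SQL Server error log
-- for the 'Zeroing ...' style messages ('ZeroDemo' is an illustrative name)
CREATE DATABASE ZeroDemo;

DBCC TRACEOFF (3004, 3605, -1);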

What Next?

So that’s it, I hope you enjoyed this spelunking into the internals of the OS and SQL. Next up, we will see how this optimization applies (or does not apply) to other key components within SQL. For further reading, Bob Dorr’s SQL I/O Basics white papers and Bob Ward’s ‘Inside SQL I/O’ session are excellent resources on the topic of SQL I/O internals.

SQL Server and ‘Instant File Initialization’ Under the Hood – Part 1

Recently a colleague of mine asked a very interesting question: does the SQL Server ‘Buffer Pool Extension’ feature in SQL 2014 use the ‘instant file initialization’ optimization (or not)? While answering that question I found some useful information which I believe will help many of us. So here we go… firstly, we need to understand what ‘instant file initialization’ is really all about, from the Windows perspective.

Background

At the OS level every file has three important attributes which are recorded in the metadata of the NTFS file system:

  • Physical file size
  • Allocation file size
  • Valid data size

In this post, we are mostly concerned with the Physical and Valid Data sizes. More details are available at this MSDN page, but for simplicity, let me put it this way:

  • When you create a file with the CreateFile API, it starts with a 0 byte length
  • One way to ‘grow’ the file is of course to sequentially write bytes to it.
  • But if you want to ‘pre-size’ the file to a specific size, then you may not want to explicitly write data upfront.
  • In those cases the OS provides a SetEndOfFile() API to ‘resize’ the file, but as you will see below, there are still some things which will hold up the thread when the first write operation is done to the pre-sized file.

Let’s work through this step-by-step. A bit of programming knowledge will help, though it should be fairly easy to figure out what’s going on by reading the inline comments in the code!

Growing a file: C++ example

Here is a simple program which will demonstrate how you can grow a file to 3GB without having to write individual bytes till the 3GB mark:

#include <Windows.h>
#include <tchar.h>

int _tmain(int argc, _TCHAR* argv[])
{
    // create a file first. it will start as an empty file of course
    HANDLE myFile = ::CreateFile(L"l:\\temp\\ifi.dat",
        GENERIC_WRITE,
        0,
        NULL,
        CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL,
        NULL);

    if (INVALID_HANDLE_VALUE == myFile)
    {
        return -1;
    }

    // let's now make the file 3GB in size
    LARGE_INTEGER newpos;
    newpos.QuadPart = (LONGLONG) 3 * 1024 * 1024 * 1024;

    LARGE_INTEGER newfp;

    // navigate to the new 'end of the file'
    ::SetFilePointerEx(myFile,
        newpos,
        &newfp,
        FILE_BEGIN);

    // 'seal' the new EOF location
    ::SetEndOfFile(myFile);

    // now navigate to the EOF - 1024 bytes.
    newpos.QuadPart = (LONGLONG)3 * 1024 * 1024 * 1024 - 1024;
    ::SetFilePointerEx(myFile, newpos, &newfp, FILE_BEGIN);

    DWORD dwwritten = 0;

    // try to write 5 bytes at the 3GB-1024 location
    ::WriteFile(myFile,
        "hello",
        5,
        &dwwritten,
        NULL);

    ::CloseHandle(myFile);

    return 0;
}

When you execute the above code, you will see that though we used the SetEndOfFile() API to locate the EOF marker without explicitly writing anything, there is some work being done by the OS underneath our code to ‘zero out’ the contents of the clusters allocated to us. This is done for data privacy reasons, and since it is physical I/O, it does take a while. You may want to refer to the documentation for the SetFilePointerEx function:

Note that it is not an error to set the file pointer to a position beyond the end of the file. The size of the file does not increase until you call the SetEndOfFile, WriteFile, or WriteFileEx function. A write operation increases the size of the file to the file pointer position plus the size of the buffer written, leaving the intervening bytes uninitialized.

Snooping in with Process Monitor

You can actually see proof of what is happening under the hood by using Process Monitor from the Sysinternals suite. Here is a complete call stack of the application. Notice the call in the kernel to zero out data (CcZeroDataOnDisk). Notice that these are not our API calls; we simply called WriteFile() and that triggered off these underlying ‘zeroing’ writes.

[Screenshot: call stack of the test application, showing CcZeroDataOnDisk being invoked by the kernel]

In the same ProcMon trace you will also notice a bunch of I/O operations (corresponding to the above stack) just after I triggered my 5-byte I/O:

[Screenshot: Process Monitor trace showing the zeroing I/O issued just after the 5-byte write]

The key takeaway from this walkthrough is that when we called SetEndOfFile(), we did not affect the ‘valid data length’ of that file stream. In that case, the OS plays it safe by zeroing out the allocations from the previous valid data length (which in our case above was actually 0) up to the location of the write (which in our case is 1024 bytes prior to the physical end of the file). This operation is what causes the thread to block.

Growing a file – the ‘fast’ way

Instant File Initialization as we know it in SQL Server really reduces to an API call in Windows. To see that, we tweak the above sample and add in the ‘secret sauce’, which is the call to the SetFileValidData() API:

// 'seal' the new EOF location
::SetEndOfFile(myFile);

// now 'cleverly' set the valid data length to 3GB
// (add #include <stdio.h> to the sample for printf)
if (0 == ::SetFileValidData(myFile, newpos.QuadPart))
{
    printf("Unable to use IFI, error %lu", GetLastError());
}
else
{
    printf("IFI was used!!!");
}

// now navigate to the EOF - 1024 bytes.
newpos.QuadPart = (LONGLONG)3 * 1024 * 1024 * 1024 - 1024;
You will then see that the same code executes almost instantly. This is because the OS no longer needs to zero any bytes underneath the hood, since the valid data length (as set by the above API call) is now equal to the file size. This can be seen in Process Monitor as well:

[Screenshot: Process Monitor trace showing SetValidDataLengthInformationFile and no underlying zeroing writes]

Dangers of SetFileValidData()

The important thing to note is that SetFileValidData() is a dangerous API in a way, because it can potentially expose underlying fragments of previously deleted data. Much has been said about this, and you can check out Raymond’s blog post on this topic. The MSDN page for this API is also very clear on the caveats:

You can use the SetFileValidData function to create large files in very specific circumstances so that the performance of subsequent file I/O can be better than other methods. Specifically, if the extended portion of the file is large and will be written to randomly, such as in a database type of application, the time it takes to extend and write to the file will be faster than using SetEndOfFile and writing randomly. In most other situations, there is usually no performance gain to using SetFileValidData, and sometimes there can be a performance penalty.

What next?

Of course, if you are like me, you are probably wondering what all of this equates to. Remember, we are trying to explore the basis and background of the ‘instant file initialization’ optimization that SQL Server can leverage to quickly size new and grown chunks of data files. As the documentation and our team’s blog post explain in detail, this setting can be very useful in certain cases and is in fact recommended for deployments on Microsoft Azure IaaS VMs.

Next time, I will correlate this information we learnt above to how SQL Server leverages it in the process of creating new data files or growing existing ones. Till then, goodbye!

Beware: Compatibility Level 90 is no longer available in SQL 2014

This is a quick, perhaps Level 200 post (a rarity on my blog), but I am seeing this so often that I think it is important to get this out there.

Those who are running databases with compatibility level 90 on SQL Server 2008 / 2012 need to be aware that level 90 is no longer supported in SQL Server 2014. The SSMS 2014 UI is a bit confusing in this respect, because it does show you the option to set the level to 90:

[Screenshot: SSMS 2014 database properties dialog still listing compatibility level 90 in the drop-down]

But when you try to actually change this to 90, it fails:

Msg 15048, Level 16, State 3, Line 1
Valid values of the database compatibility level are 100, 110, or 120.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

This information is indeed documented in Books Online: “When a database is upgraded to SQL Server 2014 from any earlier version of SQL Server, the database retains its existing compatibility level if it is at least 100. Upgrading a database with a compatibility level of 90 sets the database to compatibility level 100. “

The call to action for everyone who plans to upgrade to SQL Server 2014 and is running a database at compatibility level 90 is to thoroughly test with compatibility level 120 (or perhaps 110, if you feel the changes in the cardinality estimator warrant more testing for your workload) before you upgrade. Both functional and stress testing must be done. Once the testing is signed off, you can confidently do the upgrade in production with the higher compatibility level: 120 (recommended, subject to testing outcome) or 110.
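As a starting point, here is a minimal sketch of checking and then raising the compatibility level once testing has signed off (the database name is just an example):

-- Check the current compatibility level ('MyAppDB' is an illustrative name)
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyAppDB';

-- After functional and stress testing, move to the SQL Server 2014 level
ALTER DATABASE MyAppDB SET COMPATIBILITY_LEVEL = 120;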

My team of Premier Field Engineers within Microsoft Services can also assist with your upgrade efforts and help detect and mitigate risks related to upgrading to the current version. Do engage us if you need help!

Microsoft Mumbai Tech Community meeting on 31 Jan 2015

A community is a social unit that shares common values, and what better binding force than technology? We are very excited to announce the launch of the Microsoft Mumbai Tech Community. Our goals for this community are to:

 

  1. Share knowledge through our experts here at Microsoft
  2. Create awareness about new product and service offerings and features
  3. Build a cohesive community

In line with this aspiration, we are kicking off the Microsoft Mumbai Tech Community with two expert sessions. If you are in Mumbai, India, please let my colleague Mahendraprasad know (details below) that you will be attending!

 

Date: 31 Jan 2015
Timing: 10AM – 1PM
Address: Microsoft Corporation India Pvt. Ltd.
Training rooms Oak / Juniper
4th Floor, ‘Windsor’,
off CST Road, Kalina,
Santacruz (East)
Mumbai, 400098
Contact: MADUBEY at microsoft dot com

Here is what our team is planning to talk about:

 

Presenter: Narendra Angane
Schedule: 10:00 till 11:15 AM
Topic: Introduction to Power BI
Abstract:

Power BI for Office 365 is a self-service business intelligence (BI) solution delivered through Excel and Office 365 that provides information workers with data analysis and visualization capabilities to identify deeper business insights about their data. We will see how we can use the components below for data analysis and visualization:

  • Power Query
  • Power Map
  • Power View
  • Q&A

Presenter: Mahendraprasad Dubey
Schedule: 11:30 AM till 12:45 PM
Topic: Evangelizing Data Collection Strategy
Abstract:

Data collection has always been a nightmare: several times we either cannot collect data because of the performance impact on the server, or we miss data that needs to be analyzed. Data collection has now been made simpler. Let’s discover the different ways of collecting data.

  • XEvent Metadata
  • Extended Events Tutorial
  • Finding Your Events
  • Actions and Targets
  • Demo
  • Q&A

We are looking forward to hosting you at this event. Do RSVP your participation to MADUBEY at microsoft dot com.

Error message “Copying cluster-resource …fssres.dll to C:\Windows\system32\fssres.dll failed…” during SQL Server 2014 setup

First and foremost, let me reproduce the complete error message I refer to above: “Copying cluster-resource C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Binn\fssres.dll to C:\Windows\system32\fssres.dll failed because the destination file exists and is read-only”

Investigation

I received the above error message while setting up a SQL Server 2014 instance side by side with an existing copy of SQL Server 2012. The error caused the “Database Engine” component of the setup to fail, and that component later had to be removed before re-attempting setup.

The root cause of such setup failures is often quite easy to locate when you look at the setup logs, which are conveniently placed under the “C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log” folder on your machine. When I looked at the Detail.txt file from the problematic server, I could spot the following section:

(01) 2015-01-04 08:09:34 Slp: UpdateClusterResourceAction: Resource type = ‘SQL Server FILESTREAM Share’ ResourceDllName = fssres.dll Source Location = ‘C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Binn’ Target Location = ‘C:\Windows\system32’.
(01) 2015-01-04 08:09:40 Slp: Type ‘SQL Server FILESTREAM Share’ not found. Performing copy directly …
…
(01) 2015-01-04 08:09:40 Slp: Failed to version-copy file ‘C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Binn\fssres.dll’ to ‘C:\Windows\system32\fssres.dll’. Exception data is: System.IO.IOException: The process cannot access the file ‘C:\Windows\system32\fssres.dll’ because it is being used by another process.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.File.InternalCopy(String sourceFileName, String destFileName, Boolean overwrite)
   at Microsoft.SqlServer.Configuration.Cluster.UpdateClusterResourceAction.VersionCopy(String source, String target).
…
Microsoft.SqlServer.Configuration.Cluster.UpdateClusterResourceException: Copying cluster-resource C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Binn\fssres.dll to C:\Windows\system32\fssres.dll failed because the destination file exists and is read-only. —> System.IO.IOException: The process cannot access the file ‘C:\Windows\system32\fssres.dll’ because it is being used by another process.
…
Microsoft.SqlServer.Configuration.Cluster.UpdateClusterResourceAction.VersionCopy(String source, String target)

From the above, it is quite clear that fssres.dll already existed at the target location. And when I checked the file version, it was clear that the existing copy was from SQL Server 2012:

[Screenshot: file properties of the existing fssres.dll, showing a SQL Server 2012 file version]

On executing a TASKLIST /M fssres.dll command, it quickly became clear that the process which already had this DLL loaded was the RHS.exe for the existing instance of SQL 2012. That instance already had an Availability Group configured, hence the RHS.exe was loading the DLL.

Given that the DLL was already loaded by the RHS.exe, there was no way for SQL setup to update it. That is why the SQL 2014 instance setup failed.

Workaround

The workaround I had to use in the above case was to first remove the SQL 2014 Database Engine instance, shut down the cluster service on the failed node and then re-attempt the SQL 2014 setup. Less than ideal, but until the above issue is handled in setup, we have very few options. I have also logged a Connect item requesting the product team to investigate this issue. If you think this issue should be fixed, please vote it up on Connect!

Correctly handling Linked Server objects in SSDT

Recently I spoke at the SQL Bangalore User Group about using SSDT and Visual Studio Online for an efficient and agile development process in the cloud-first world. After the talk, an attendee asked me about correctly handling four-part names (objects referenced on other servers).

Problem

In his case he had added the referenced DB as a reference, but was still getting the following warning (error if you set the ‘treat warnings as errors’ property):

SQL71561: View: [dbo].[myView] has an unresolved reference to object [remserver].[someDB].[dbo].[RefTable].

Here, myView is the referencing object and someDB is the referenced project (which he had already added as a database reference).

Solution

Here is how to meet this requirement. Firstly, you must have SSDT projects for both the databases in question. Add them to the same solution, and then set up a reference from your referencing project to the project which contains the object. The important thing is that when you do this, you can set the “Database Location” option. Make sure you set it to “Different database, different server”:

[Screenshot: Add Database Reference dialog with Database Location set to “Different database, different server”]

Once you add this reference using the above option, it is very easy to reference the linked server object in your main project. You do that by qualifying the referenced object with the server and database variables defined in the Database Reference. For example, in the above case it is [$(OtherServer)].[$(RefDB)].<schema>.<object>:

CREATE PROCEDURE [dbo].[Mismatch]
    @param1 int = 0
AS
    SELECT COUNT(*) FROM dbo.Orders
    where CustomerID = @param1

    SELECT Id FROM [$(OtherServer)].[$(RefDB)].dbo.RefTable
RETURN 0

And here is a sample view:

CREATE VIEW myView
    As
    SELECT Id FROM [$(OtherServer)].[$(RefDB)].dbo.RefTable

Hope this helps!

NUL or NULL?

Ever since the old MS-DOS days, the Windows OS family has supported a special ‘null device’ – NUL. Notice the single ‘L’. This is a ‘reserved’ device identifier which you can use as a path. The special thing about this null device (NUL) is that it essentially swallows whatever is sent to it.

NUL and SQL

So what does this have to do with, let’s say, SQL? Many of us use the null device to test raw backup throughput. The correct way to do this is to specify DISK = ‘NUL’ in the backup command. In that case, you will see an entry similar to the below in the error log, and no physical destination file will be created:

2014-09-20 21:47:43.890 Backup       Database backed up. Database: master, creation date(time): 2014/09/18(16:33:26), pages dumped: 459, first LSN: 234:304:73, last LSN: 234:352:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘nul’}). This is an informational message only. No user action is required.
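For reference, such a throughput test typically looks something like the below sketch (the options are illustrative):

-- Backup to the null device: exercises the read / backup path but writes nothing to disk
BACKUP DATABASE master
TO DISK = 'NUL'
WITH STATS = 10;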

Now, in some cases, you might accidentally or unknowingly type in NULL (with the double L) instead of NUL. What that does is actually create a physical backup file called NULL. You can see proof of that here:

2014-09-20 21:47:03.480 Backup       Database backed up. Database: master, creation date(time): 2014/09/18(16:33:26), pages dumped: 460, first LSN: 234:208:128, last LSN: 234:280:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Backup\null’}). This is an informational message only. No user action is required.

Conclusion

So if you are using the NUL device in your tests, make sure you spell it as NUL (without the double L!)

Have a good weekend!

Strange issue when enabling AlwaysOn AG feature in SQL 2014

Today I ran into an issue, which eventually turned out to be a really silly thing, but I think it is important that I share the details.

I was trying to install SQL 2014 side-by-side with an already existing SQL 2012 AG setup. Everything went fine till the customary step where I tried to ‘tick the box’ and enable the AG feature for the newly installed SQL 2014 instances. But that was not to be, and I got the following error dialog: “The AlwaysOn Availability Groups feature requires the x86(non-WOW) or x64 Enterprise Edition of SQL Server 2012 (or later version) running on Windows Server 2008 (or later version) with WSFC hotfix KB 2494036 installed.”

[Screenshot: error dialog shown when trying to enable the AlwaysOn Availability Groups feature]

The hotfix in question was already installed and, most importantly, my SQL 2012 AG was already functional on this cluster. So it was definitely not something wrong with the OS or the cluster.

Not to be deterred, I used the PowerShell option, but that failed as well:

PS SQLSERVER:\SQL\SQL108W2K8R22\SQL14> Enable-SqlAlwaysOn
Enable-SqlAlwaysOn : Could not obtain information about SQL Server Service ‘MSSQL$SQL14’.
At line:1 char:19
+ Enable-SqlAlwaysOn <<<<
    + CategoryInfo          : ResourceUnavailable: (MSSQL$SQL14:String) [Enable-SqlAlwaysOn], SqlPowerShellServiceException
    + FullyQualifiedErrorId : ServiceError,Microsoft.SqlServer.Management.PowerShell.Hadr.EnableSqlHADRServiceCommand

Providence!

Suddenly it dawned upon me that maybe in my haste I had installed an edition of SQL 2014 which did not permit the AG feature. And indeed, it was the Business Intelligence edition that I had installed, which does not permit the usage of AGs.
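A quick way to double-check the edition and the HADR state (a simple sketch) is to query SERVERPROPERTY:

-- Which edition is this instance, and is the AlwaysOn AG feature enabled?
SELECT SERVERPROPERTY('Edition')       AS Edition,
       SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;  -- 1 = enabled, 0 = disabled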

The cool thing though about SQL 2008 and above is the “Edition Upgrade” feature of the setup, which allows you to easily change editions (provided the upgrade path is allowed – for example you cannot ‘upgrade’ from BI edition to Evaluation) just by entering the correct product key.

The whole edition upgrade thing took just under a minute and after that I was able to successfully enable the SQL 2014 installation for AlwaysOn AGs.

Hope this tip helps someone!

Changes in SQL 2014 RTM ScriptDom

It has been over three months now since SQL Server 2014 became generally available. The well-known and much-talked-about features such as In-Memory OLTP and Clustered ColumnStore indexes are big motivations to move your workload to the new platform.

With the new features, the T-SQL language surface has also changed of course. Under compatibility level 120, there are new grammar elements to consider. I recently went through one such round of impact analysis, specifically around the ScriptDom parser.

The good news is that the move from the SQL 2012 ScriptDom to the latest one is very incremental, and almost all of the changes are due to new syntax elements. I prepared a list of these statements (as represented in the parser) for your convenience.

 

In-Memory OLTP support

It is no coincidence that the changes related to In-Memory OLTP are the biggest! This section also includes related support such as ATOMIC block statements, delayed durability, inline index definitions and so on; a small sample of the new syntax is shown below.
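To give a flavour of the new grammar, here is a small sketch of SQL 2014 In-Memory OLTP syntax (it assumes a database which already has a MEMORY_OPTIMIZED_DATA filegroup; all object names are illustrative):

-- A memory-optimized table with an inline (hash) index definition
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload   VARBINARY(4000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A natively compiled procedure using an ATOMIC block
CREATE PROCEDURE dbo.SaveSession @SessionId INT, @Payload VARBINARY(4000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT dbo.SessionState (SessionId, Payload) VALUES (@SessionId, @Payload);
END;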

 

Buffer Pool Extension, Resource Governor, HADR and FCI related Server DDL

This set of changes has to do with the support for configuring the BPE, I/O parameters in Resource Governor, and some minor but important DDL to configure HADR availability groups and FCI instances from T-SQL itself.

 

Clustered ColumnStore related

This group includes the index DDL and Archive compression related changes.
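For example (an illustrative sketch; the table and index names are placeholders):

-- Create an updatable clustered columnstore index (new in SQL 2014)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Rebuild one partition with the new archive compression option
ALTER INDEX CCI_FactSales ON dbo.FactSales
REBUILD PARTITION = 3
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);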

 

Incremental Statistics related

 

Managed Lock Priority related

Managed lock priority changes extend to quite a few operations such as index maintenance, partition switch etc.
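For instance, both online index rebuilds and partition switches picked up the WAIT_AT_LOW_PRIORITY clause; here is an illustrative sketch (object names are placeholders):

-- Online index rebuild that waits at low priority and then kills its blockers
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
    (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)));

-- Partition switch with the same managed lock priority options
ALTER TABLE dbo.Orders_Staging
SWITCH TO dbo.Orders PARTITION 3
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF));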

 

Backup Encryption + Security DDL

Here you will notice the changes to support encryption while backing up, along with the new encryption algorithm enumeration to support it. We also have several new event notifications available.
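For example (a sketch; it assumes a database master key and a certificate named BackupCert already exist in the master database, and the path is a placeholder):

-- Encrypted, compressed backup using the new SQL 2014 options
BACKUP DATABASE MyAppDB
TO DISK = N'X:\Backups\MyAppDB_enc.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);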

 

Miscellaneous

In conclusion, I believe I have covered all the changes (and I’m glad there appear to be no breaking changes as such). If you still find something which has been added or changed that I did not mention in this post, please feel free to leave a comment below!