Some notes on database snapshots


I recently conducted some tests to double check the exact behavior of database snapshots when:

  • Snapshots are created on volumes which are much smaller than the actual database size
  • Metadata-only operations such as DROP TABLE are executed
  • Data modification operations (such as DELETE) execute on the main database and the snapshot runs out of disk space

Here is my testing methodology and the customary summary of observations.


  • Drive C: contains the original data file for database TestSS. Data file size = 1.95GB
  • TestSS.dbo.T1 is a table which contains 1GB of data
  • TestSS_1 is a snapshot which was created on TestSS after the 1GB data insert was completed.

Test Case A: Can we create a snapshot on a volume smaller than the database size?

  1. The snapshot TestSS_1 was created on volume T: which is 10MB in size.
  2. Size on disk for snapshot at creation = 192KB
  3. The snapshot creation succeeded without any errors.
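For reference, the snapshot in this test was created with a statement along the following lines. The logical file name and the .ss path here are from my test setup; the NAME must match your own database's logical data file name:

```sql
-- Create a snapshot of TestSS on the small T: volume.
-- NAME must match the logical name of the source data file;
-- the sparse file starts tiny regardless of the source database size.
CREATE DATABASE TestSS_1
ON (NAME = TestSS, FILENAME = 'T:\TestSS_1.ss')
AS SNAPSHOT OF TestSS;
```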

If you are curious as to what I mean by 'size on disk', please refer to this screenshot:


It is a property of NTFS sparse files that the actual space occupied on disk is initially much smaller than the 'Size' displayed for the file in Explorer.

Test Case B: Metadata-only operations and snapshot files

  1. Next, I executed a DROP TABLE T1 on TestSS, which returned immediately without any errors. At this time the snapshot still did not have a large 'on disk size' – just 1.48MB
  2. At this time the data is still accessible through the snapshot just fine, whereas a select count(1) from TestSS..T1 fails (as expected, since the table no longer exists in the main database). So the main operation (DROP TABLE) worked just fine.
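Concretely, the pair of checks looks like this (TestSS_1 is the snapshot, TestSS the main database):

```sql
-- Succeeds: the snapshot still serves the pre-DROP image of the data
SELECT COUNT(1) FROM TestSS_1..T1;

-- Fails with 'Invalid object name': the table was dropped in the main database
SELECT COUNT(1) FROM TestSS..T1;
```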

How is this possible? It turns out that certain operations such as TRUNCATE TABLE, DROP TABLE etc. directly operate on the IAM chain and possibly other internal allocation pages to efficiently ‘clean up’ data and metadata. In such cases the copy-on-write to the snapshot is deferred till such time that the actual data page and extent are touched.
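If you prefer to watch the sparse file's 'size on disk' from within SQL Server rather than Explorer, a DMV query like the following works on SQL 2005 and later (file_id 1 assumes the snapshot has a single data file):

```sql
-- size_on_disk_bytes reports the actual space the sparse file occupies
SELECT size_on_disk_bytes / 1024.0 AS size_on_disk_KB
FROM sys.dm_io_virtual_file_stats(DB_ID('TestSS_1'), 1);
```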

Test Case C: What effect does a failure to write to the snapshot have on the main database?

  1. Next, I reverted the database TestSS to the snapshot TestSS_1 and then dropped the TestSS_1 snapshot. How do you revert a database to a snapshot? You use the RESTORE DATABASE … command like this: RESTORE DATABASE TestSS FROM DATABASE_SNAPSHOT = TestSS_1
  2. I recreated TestSS_1 immediately using the CREATE DATABASE … AS SNAPSHOT …
  3. Next, I executed DELETE T1 to force a logged data operation and it executed for some time before finally returning these errors:

Msg 5128, Level 17, State 2, Line 1
Write to sparse file '' failed due to lack of disk space.

Msg 5128, Level 17, State 2, Line 1
Write to sparse file '' failed due to lack of disk space.

Msg 5128, Level 17, State 2, Line 1
Write to sparse file '' failed due to lack of disk space.

Interestingly, though, the main DELETE did not fail; all rows from the table T1 were deleted. It is by design that failures to write to the snapshot do not affect the primary operation on the database.


So in conclusion this is what we can learn from the above. Most of it is logical when you think about it, but it is better to be sure than to guess!

  • Database snapshots can most definitely be created on volumes which are much smaller than the original database size
  • Metadata-only operations initially do not affect the snapshot. Only when the underlying pages are actually 'cleaned up' by subsequent system processes will copies of the affected pages be moved into the database snapshot's sparse file
  • Failures to write to the snapshot will not affect the original database on which the operation is being executed

Howto: Have a Clustered instance of SQL Server listen on multiple virtual IP addresses

Recently a colleague asked me if there were any changes specific to Windows Server 2008 which allow SQL Server (clustered instance) to listen on multiple subnets. There was only one change in Windows 2008 to my knowledge (which I have detailed in this post later on) but that had nothing specific to do with SQL listening on different subnets. I did some testing and have posted the results below.

Network configuration

Assuming the following network configuration on the local nodes, there are 3 networks on this cluster:

  • Cluster_Public: used only for mixed communications in the cluster
  • Second_Public: used only for public communications in the cluster
  • Cluster_Private: used only for private communications in the cluster


SQL Server 2000

This version had the capability to bind to multiple virtual IP addresses. When setting up SQL Server, on the following page, you can specify both the IP addresses. The same page can also be subsequently accessed by re-running the setup program and selecting the Advanced option to Maintain Failover Clustering.


Assuming that setup succeeds, you can check the SQL Server error log (shown below) and verify that the SQL instance is successfully listening on both virtual IPs:


The key to this is actually that the SQL Server resource in the cluster is now dependent on the 2 VIPs:


However, due to a limitation in SQL 2000, we cannot do this directly in Cluster Administrator; you always have to use the setup program for it. As we will see later on, this limitation no longer exists in SQL 2005 or 2008.

SQL Server 2005 / 2008

Things are different and easier in these versions. At setup time, if you want to configure SQL Server to listen on multiple virtual IPs, you specify them in the setup program:


However, later on if you want to maintain the cluster and ADD a new virtual IP, all you need to do to have SQL listen on multiple Virtual IP addresses is to follow the steps below:

  • Take the SQL Server network name offline
  • Add an additional IP resource with the new (additional) VIP for SQL to listen on
  • Add the new IP resource as an additional dependency for the SQL network name resource
  • Bring the SQL Server resource online; the errorlog should show SQL listening on the 2 IPs:
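On Windows 2003 the same steps can be scripted with cluster.exe. The resource, group and network names below are placeholders from a hypothetical configuration; substitute your own:

```
REM Take the SQL network name offline
cluster res "SQL Network Name (VSQL1)" /offline

REM Create and configure the additional IP address resource
cluster res "SQL IP Address 2" /create /group:"SQL Group" /type:"IP Address"
cluster res "SQL IP Address 2" /priv Address=192.168.2.50 SubnetMask=255.255.255.0 Network="Second_Public"

REM Add the dependency and bring the group online
cluster res "SQL Network Name (VSQL1)" /adddep:"SQL IP Address 2"
cluster group "SQL Group" /online
```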


Windows 2008 cluster, multiple subnets, ‘OR’ dependencies and SQL Server

So, with all this background, what has changed in Windows 2008? Well, it is a question of dependencies and how they affect the dependent resource (in this case SQL Server.) By default, in Windows 2000 and 2003, the failure of any one of multiple virtual IP address resource dependencies for SQL Server will cause the SQL Server resource to restart. This is because the dependencies, by default, work in an 'AND' mode.

In Windows 2008, there is now support for each node to be on a different subnet (see this MVP article for a nice explanation) due to which support for an OR dependency has been introduced. However, for SQL Server to co-exist in this environment, both the subnets must be up for SQL Server to function correctly. I will cover this topic (SQL 2005 / 2008 on Windows Server 2008 with nodes in different subnets) in a later post.

Concurrency of Full, Differential and Log backups on the same database

NOTE: As of Dec 2020, the information in this blog post should be considered out-of-date, and provided for historical reference only.


Many a time, we are asked by customers about the effect of overlapping backup schedules: for example, whether they can back up the transaction log at the same time that a long-running complete backup of the database is taking place. Books Online does talk about the concurrency, but does not cover all versions or combinations.

Here is a short test case to illustrate the concepts involved. At the end of the post, I have provided a summary of what works in what version.

Test Case

I ran the 3 types of backups on a SQL 2005 / 2008 database at the same time:

  • COMPLETE a.k.a. FULL (backup database mydb to disk = 'c:\temp\mydb.bak' with init)
  • DIFFERENTIAL (backup database mydb to disk = 'c:\temp\mydb_diff.bak' with differential, init)
  • LOG (backup log mydb to disk = 'c:\temp\mydb.trn' with init)

Here is the output of each command for verbose reference. Also included immediately below is a snapshot from sysprocesses showing the blocking of the DIFFERENTIAL backup and the concurrent execution of the LOG backup.

spid blocked waittype lastwaittype waitresource cmd
62 61 0x0004 LCK_M_U DB: 16 [BULKOP_BACKUP_DB] BACKUP DATABASE (this is the differential backup)
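That snapshot came from a query along these lines, run from a separate connection while the three backups were executing:

```sql
-- Observe blocking among the concurrent backup sessions (SQL 2000 / 2005 era view)
SELECT spid, blocked, waittype, lastwaittype, waitresource, cmd
FROM master..sysprocesses
WHERE dbid = DB_ID('mydb');
```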

Output from FULL backup

SQL Server parse and compile time:

CPU time = 0 ms, elapsed time = 0 ms.

Processed 17512 pages for database ‘mydb’, file ‘mydbData’ on file 1.

Processed 1 pages for database ‘mydb’, file ‘mydbLog’ on file 1.

BACKUP DATABASE successfully processed 17513 pages in 18.986 seconds (7.206 MB/sec).

SQL Server Execution Times:

CPU time = 70 ms,  elapsed time = 20256 ms.

SQL Server parse and compile time:

CPU time = 0 ms, elapsed time = 0 ms.

Output from DIFFERENTIAL backup

SQL Server parse and compile time:

   CPU time = 0 ms, elapsed time = 0 ms.

Processed 32 pages for database ‘mydb’, file ‘mydbData’ on file 1.

Processed 1 pages for database ‘mydb’, file ‘mydbLog’ on file 1.

BACKUP DATABASE WITH DIFFERENTIAL successfully processed 33 pages in 0.321 seconds (0.801 MB/sec). <-- actual time for the differential backup to complete

SQL Server Execution Times:

CPU time = 51 ms,  elapsed time = 19777 ms.   <-- total time to execute the command, including wait and runnable time

SQL Server parse and compile time:

CPU time = 0 ms, elapsed time = 0 ms.

Note that the time taken by the command is just 0.321 seconds but the elapsed time is 19.777 seconds, which clearly demonstrates the blocking.

Output from LOG backup

SQL Server parse and compile time:

   CPU time = 0 ms, elapsed time = 0 ms.

Processed 3 pages for database ‘mydb’, file ‘mydbLog’ on file 1.

BACKUP LOG successfully processed 3 pages in 7.958 seconds (0.002 MB/sec).

SQL Server Execution Times:

CPU time = 60 ms,  elapsed time = 15106 ms.

SQL Server parse and compile time:

CPU time = 0 ms, elapsed time = 0 ms.

The time taken by the command is around 52% of the elapsed time, which is probably because we are on a single-CPU system and the task had to wait for runnable time on the CPU.


Here is the summary of the testing I conducted. I hope the summary is useful to all of you.

Each row below describes the backup that is already running; the three entries state what happens when a full, differential, or log backup (in that order) is requested:

  • Complete (full) backup is running: a requested full backup is blocked in all versions *; a differential backup is blocked in all versions *; a log backup is concurrently allowed in SQL 2005 / 2008, not allowed in 2000.
  • Differential backup is running: a requested full backup is blocked in all versions *; a differential backup is blocked in all versions *; a log backup is concurrently allowed in SQL 2005 / 2008, not allowed in 2000.
  • Log backup is already running: a requested full backup is concurrently allowed in SQL 2005 / 2008; a differential backup is concurrently allowed in SQL 2005 / 2008; another log backup is blocked in all versions *.

* Note: the blocked process will show a lastwaittype of LCK_M_U and a waitresource containing [BULKOP_BACKUP_DB]

SQL Server setup log locations and more

In order to efficiently troubleshoot problems involving failed setup, we need to use the setup logs. Once you know where the log files are located, you can start troubleshooting. What to look for in the log files is a separate topic in itself, but at least if you know where the log files are, then you can attempt to look for logical clues in the files. So, where are these log files located? Here's a summary, listed by product and version. I took some effort to compile the list of available rollup hotfixes and their relevant KB articles; hopefully that in itself is very useful.

So, I hope the list and the notes below are very useful for you; please feel free to rate the article and provide comments as well!

For each product below, the list covers: product major version 1, product level 2, version post setup 3, bulletins / KB articles, and location of setup logs 4.

SQL Server 2000 (8.0)









Post-SP4 GDR security rollup 7


MS08-040; 948110

%WINDIR%\SQL2000-KB948110-v8.00.2050-x86-ENU.log; SQLServerClient_Hotfix_KB948110_*.log

[Recommended] Post-SP4 QFE security rollup 8.a




%WINDIR%\SQL2000-KB948111-v8.00.2273-x86-ENU.log; SQLServerClient_Hotfix_KB948111_*.log 11

IA64 versions of the security rollups 13

Per respective packages mentioned above



Clustered instance 14



%WINDIR%\Sqlclstr.log 5;

%WINDIR%\sqlstpN.log from both nodes


Service Pack only:

%WINDIR%\sqlspN.log from both nodes;

%WINDIR%\remsetup.log from active node


Hotfix only:

Relevant hotfix files as described above from both nodes

SQL Server 2005 (9.0)




%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Files




%WINDIR%\Hotfix 10




%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix; files will have KB921896 as part of their name.

Post-SP2 GDR security rollup (engine only)




Same as above; files will have KB948109 as part of their name.

Post-SP2 GDR security rollup (engine + Reporting Services)


MS08-052; 954606


Same as above; files will have KB954606 as part of their name.

Post-SP2 QFE security rollup (engine only)




Same as above; files will have KB948108 as part of their name.

Post-SP2 QFE security rollup (engine + Reporting Services)


MS08-052; 954607

Same as above; files will have KB954607 as part of their name.

CU #11 for SP2 8



Same as above; files will have KB958735 as part of their name.

[Recommended] SP3



Same as above; files will have KB955706 as part of their name.

CU #1 for SP3 8



Same as above; files will have KB959195 as part of their name.

Clustered instance


%WINDIR%\tasks\SchedLgU.Txt 6; Setup bootstrap will copy all the files from the remote node to the active node.

SQL Server 2008




%ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log 9

CU #2




Clustered instance


At the time of writing no special considerations are known.



  1. This is just the common name which we all tend to use; the value in brackets is the major version number used by the product itself.
  2. To determine the product level use SELECT SERVERPROPERTY('ProductLevel'). However this will not reflect the GDR / QFE hotfix status. It just shows the latest service pack.
  3. To determine the exact SQL Server build number / version, use SELECT SERVERPROPERTY('ProductVersion'). (More information about other product levels and version numbers is summarized in this KB article.)
  4. Here %WINDIR% refers to the location where Windows is installed (typically C:\WINDOWS) and %PROGRAMFILES% denotes the location of the Program Files folder (typically C:\Program Files).
  5. This file is only generated for SQL 2000 clustered instance installation. Service Pack installation will update this file, but subsequent hotfix installers will not update this file.
  6. The SQL Server 2005 RTM and service pack installers will launch the remote node setup program using Task Scheduler service. This log file is used to check if the remote task was actually started or not. Check this file (in addition to the regular setup files) on each node of your SQL failover cluster.
  7. GDR vs. QFE releases can be a confusing topic. Luckily they are explained very well in the following blog links:
  8. In general, note that a Cumulative Update should be applied only if you are facing specific issues fixed by that update; doing so will take you to the QFE tree, and you should then apply the appropriate QFE rollup (if and when available). This note applies to all rollups and Cumulative Updates.
    • 8.a However for SQL 2000 SP4, we might recommend being on the QFE tree due to the fact that there will be no further service packs for that product.
  9. There are no separate folders for RTM and hotfix / Service Pack installers. Instead, look at the relevant sub-folder of this folder with the latest time stamp to get to the latest installation files. Historical logs will be maintained in the subfolder with the respective timestamps.
  10. SQL Server 2005 SP1 is unique in the SQL 2005 chain because it stores its logs under the Windows\Hotfix folder. Under that there are subfolders for each component, such as SQL9, SQLTools9 etc.
  11. The exact filename is dependent on the name of the self-extracting package which was executed. The SQL 2000 QFE rollup generates many more log files than the GDR installer. This is expected because the GDR security rollup only contains security fixes. The QFE security rollup contains bug fixes and the requisite security fixes. See this KB article for details.
  12. For what to look for in the log files, consider this topic from SQL Books Online. An additional reference for troubleshooting Windows Installer logs is found here.
  13. The hotfix installer for SQL 2000 IA64 is described in this KB article.
  14. Cluster delta specific instructions only; regular log files still apply.

Windows, SQL Server and Multi-Core / Hyper-Threading

Updated 14 March 2019: the information in this blog post is NO LONGER RELEVANT. Official licensing information should be obtained through your regular Microsoft sales / licensing specialists.

Very often we get asked 'how many processors does my SQL Server 200x Edition really support?'. This post summarizes the considerations and the actual limits.

SQL Licensing is per socket

First, let's take a look at the Books Online topic 'Maximum Number of Processors Supported by the Editions of SQL Server'. It says: SQL Server is licensed per processor socket, not on a per-logical-CPU basis.

So what does that really mean? Again from the above Books Online topic:

For example, the following is considered a single processor for purposes of this table:

  • A single-core, hyper-threaded processor with 2 logical CPUs per socket.
  • A dual-core processor with 2 logical CPUs.
  • A quad-core processor with 4 logical CPUs.

Windows Licensing is also per socket

Just like SQL Server, Windows Server licensing has been based around sockets. This document summarizes it nicely:

For Microsoft software with processor limits, each processor counts as a single processor regardless of the number of cores and/or threads that the processor contains. For example, Windows Server 2003 R2 Standard Edition can be used on a four-processor system, whether the processors in the system are single-core, hyperthreaded, or multicore.

So, what’s in my box?

A natural question for customers is: how do I determine what is in my server hardware? Are those sockets, cores, or logical CPUs from HyperThreading?

Option 1: Use Intel’s utility

From our KB article on SQL and Hyperthreading, we recommend using Intel’s CPUCount utility to distinguish this for IA32 or Intel 64 class CPUs. The utility distinguishes between CPUs (sockets), cores, and logical CPUs (HyperThreading). The current article describing it is here and the sample code is here (both are from the Intel website.) If you build the samples using Visual Studio you can see the output below:

Option 2: Use SysInternals CoreInfo

Mark Russinovich has released CoreInfo which can also dump similar information. The only concern I have is that it uses the term ‘physical processor’ a little too freely, which can confuse someone. Here is sample output for a 1-socket, 2-core, no HT CPU:

Option 3: Windows Vista / Server 2008

MSInfo32 (a.k.a. System Information program) can be used to distinguish between sockets, cores and logical processors.
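On those platforms you can also query this from the command line via WMI; the Win32_Processor class exposes per-socket core and logical processor counts from Vista / Server 2008 onwards:

```
wmic cpu get DeviceID, NumberOfCores, NumberOfLogicalProcessors
```

Each output row corresponds to one socket.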

Option 4: Third Party Tools

A variety of 3rd party tools can report detailed CPU information, such as number of cores, logical processors and more.

Product Limitations

The Windows 2003 CPU (physical socket) limitations can be drawn from this page and an updated one for Windows 2008 at this page:

Windows Edition Number of physical sockets supported
Windows Server 2003 / 2008, Standard Edition 4
Windows Server 2003 / 2008, Enterprise Edition 8
Windows Server 2003 / 2008, Datacenter Edition 32 (32-bit) or 64 (64-bit)

SQL Server limitations from Books Online:

SQL Edition Number of physical sockets supported
SQL Server 2005 / 2008, Express Edition 1
SQL Server 2005 / 2008, Standard Edition 4
SQL Server 2005 / 2008, Enterprise Edition OS Maximum (as per the table above)
SQL Server 2000, Desktop Engine (MSDE) 2 on currently supported OS editions *
SQL Server 2000, Standard Edition 4 on currently supported OS
SQL Server 2000, Enterprise Edition Check link below *

* There are specific notes for older unsupported OS platforms such as Windows 98 and NT 4.0 which are described in this SQL 2000 Books Online entry.

Also note that SQL 2000 only introduced HyperThreading awareness in SP3, so that the number of logical processors is handled correctly when it comes to enforcing the licensing limits.

There is a related blog post from our CSS SQL brethren which I would recommend all to read.

T-SQL Anti-pattern of the day: ‘all-in-one’ queries


A common requirement for enquiry queries on an OLTP database is to have search criteria which are very specific ('get me details for OrderID = NNNN') alongside the occasional report which asks for all the orders ('get me all the orders, no questions asked'). Here is a sample from AdventureWorks which illustrates the problem:

    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE (SalesOrderID = @OrderID OR @OrderID IS NULL)

What is the meaning of the second predicate (OR @OrderID IS NULL) in the above WHERE clause? It is actually a 'special case' where the developer intends to get back all the rows, regardless of the OrderID. This 'special case' is triggered by passing in a value of NULL for the @OrderID parameter.
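For completeness, the test procedure used in the cases below presumably wraps that query like this (a reconstructed sketch; the original listing did not survive):

```sql
CREATE PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    -- The OR @OrderID IS NULL predicate doubles as the 'give me everything' switch
    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE (SalesOrderID = @OrderID OR @OrderID IS NULL)
END
```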


So while this construct looks good in theory, it lends itself to very poor performance. Take a look at the 2 cases where this procedure is executed.

Case A: with specific OrderID

EXEC RptOrder 43672

Case B: asking for all records

EXEC RptOrder NULL
The plan, it turns out, is the same for both cases, and a scan is used! This is despite a seekable index being present on the SalesOrderID column of the SalesOrderHeader table:


The reason the optimizer chooses to scan SalesOrderHeader (in this case via a non-clustered index scan) is that it has no way to determine, at compile and optimization time, what the specific value of @OrderID will be. Hence it has no way to 'fold' the (@OrderID IS NULL) expression, and therefore no option but to look at all the records.


‘IF-ELSE’ Workaround: The straightforward workaround in simple cases like the one above is to separate out the 2 cases into an IF-ELSE block:

ALTER PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    IF (@OrderID IS NOT NULL)
        SELECT *
        FROM Sales.SalesOrderHeader
        WHERE (SalesOrderID = @OrderID)
    ELSE
        SELECT *
        FROM Sales.SalesOrderHeader
END

Now, the 2 test cases work as expected. Here are the execution plans:

EXEC RptOrder 43672

EXEC RptOrder NULL

Dynamic SQL Workaround: However, as the number of predicates in the WHERE clause increases, and if all (or most) of those predicates have such 'catch-all' handling, the IF-ELSE construct becomes unviable. In those cases, a dynamic SQL construct should be considered. Of course, when dealing with dynamic SQL we must consider security first, including the possibility of SQL injection and also the execution context of the dynamic SQL statement. But that is a topic for another post. Right now, here is how we could handle something like that:

-- NOTE: This code is highly simplified and does not provide for any screening
-- or protection against SQL injection!!! Provided as-is, confers no warranties.

ALTER PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    DECLARE @sDynamicSQL nvarchar(4000)
    SELECT @sDynamicSQL = 'SELECT * FROM Sales.SalesOrderHeader'

    IF (@OrderID IS NOT NULL)
        SELECT @sDynamicSQL = @sDynamicSQL + ' WHERE (SalesOrderID = @OrderID)'

    EXEC sp_executesql @sDynamicSQL, N'@OrderID int', @OrderID = @OrderID
END

Different Code Paths: The cleanest way of course is to have separate procedures for each kind of query. For example, we can have a procedure called RptSpecificOrder for the case where we are searching by a specific OrderID, and another called RptAllOrders for the 'get-me-everything' case. This has the advantage of clean isolation, but it does not scale easily when the number of predicates is larger. It does, however, have the advantage that if we are querying for specific orders 99% of the time, that code path is simplified and optimized accordingly.
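A minimal sketch of that separation, using the procedure names suggested above:

```sql
CREATE PROCEDURE RptSpecificOrder(@OrderID int)
AS
    -- The optimizer can seek on the SalesOrderID index for this common case
    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE SalesOrderID = @OrderID
GO

CREATE PROCEDURE RptAllOrders
AS
    -- A scan is appropriate here: we genuinely want every row
    SELECT *
    FROM Sales.SalesOrderHeader
GO
```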


Beware of this T-SQL anti-pattern: it is one of the most common ones we see, and it has a huge (negative) impact on query performance. As you can see, if they are not designed with these patterns in mind, application and reporting requirements can have a detrimental effect on OLTP query execution. Separating reporting and OLTP workloads could be the key to solving these kinds of issues. But if separation is not possible, then careful use of separate code paths and stored procedures can help ensure that the most efficient execution plan is selected for each case. For complex queries, dynamic SQL may offer the simplest way out, but due care has to be taken to keep permissions and SQL injection issues in mind when dealing with dynamic SQL statements.

T-SQL Anti-pattern of the day: comparing DATETIME field with date-only literal


It is a pretty common situation to have a transaction date-time stored in a DATETIME field. The problems start with the fact that most applications use GETDATE() (or some equivalent) on the client side to record the order date-time stamp, so a typical entry for an OrderDate actually ends up with a time component as well. In versions prior to SQL Server 2008 there was no alternative such as the DATE data type.


While the INSERT code works adequately in this case, the problem starts when running reports. Most reports take parameters with the date (or date range) for which the report is required. The (anti-)pattern in the WHERE clause predicates of a typical report query (based on the AdventureWorks database) looks like this:

FROM Sales.SalesOrderHeader
WHERE CONVERT (varchar(10), OrderDate, 112) = '20040731'


FROM Sales.SalesOrderHeader
WHERE OrderDate LIKE '31 Jul 2004%'

Both these predicates are non-optimal for usage of any index (and hopefully it would be a clustered index, to support range scans efficiently) on OrderDate:

  • The first query fails to use an index (and hence scans) because it explicitly converts the OrderDate field to an ISO-format date string (varchar) in order to perform the comparison:


  • The second query has an implicit conversion to a varchar data type and hence again fails to use any indexes on OrderDate:



From a design perspective there are potentially 2 problems rolled into 1 in this scenario:

  • The fact that we are storing date and time values in the same datetime field, which is required for an OLTP system
  • The fact that the report is running on the same database as the OLTP system

In many cases the 2nd aspect (reporting and OLTP on the same database) is unavoidable due to hardware constraints. In such cases, the possible workarounds could be:

Option 1: Use explicit timestamps in the literal portion of the predicate in the WHERE clause

This one is perhaps the most practical workaround for most workloads. Here the query is re-written as follows:

FROM Sales.SalesOrderHeader
WHERE OrderDate BETWEEN '31 Jul 2004 00:00:00' AND '31 Jul 2004 23:59:59:997'

The ensuing query plan looks much more efficient. Of course, the cost of the key lookup is due to the 'SELECT *' usage, something which can be overcome by either:

  • Specifying only the relevant fields in the output list OR
  • Considering a clustered index on OrderDate (this probably makes sense in a reporting-only database)

Another thing to note is the use of 997 milliseconds in the above query. Why not 999 milliseconds? Because DATETIME is only accurate to roughly 3 milliseconds, and SQL Server would round 999 up to '1 Aug 2004 00:00:00'. Check the linked KB article for more details.
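An equivalent formulation that sidesteps the millisecond arithmetic altogether is a half-open range, which is just as index-friendly:

```sql
SELECT *
FROM Sales.SalesOrderHeader
WHERE OrderDate >= '20040731'   -- inclusive lower bound
  AND OrderDate <  '20040801'   -- exclusive upper bound: no rounding concerns
```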


Option 2: Store only the date portion

If the time portion of the order timestamp is not important, have the application INSERT a CONVERTed value (or better still, perform that truncation in the application) and store only the date portion. If you are in a position to upgrade to SQL Server 2008, the DATE data type may be of interest here.

Option 3: Have separate fields for datetime and date portion

Modify the database schema to have separate fields, OrderDateTime and OrderDate. Modify the INSERT code to insert the raw date-time stamp into OrderDateTime, and a CONVERTed date-only value into the OrderDate field. Use the OrderDate field for reports and OrderDateTime for other queries which need it.
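One way to get the second field without touching the INSERT code is a persisted computed column, which can then be indexed. This is a sketch; the column and index names are illustrative, and persisted computed columns require the expression to be deterministic (hence the explicit conversion styles):

```sql
-- Date-only shadow of OrderDate, maintained automatically by the engine
ALTER TABLE Sales.SalesOrderHeader
ADD OrderDateOnly AS
    CONVERT(datetime, CONVERT(varchar(8), OrderDate, 112), 112) PERSISTED;

CREATE INDEX IX_SalesOrderHeader_OrderDateOnly
ON Sales.SalesOrderHeader (OrderDateOnly);
```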


Separating reporting from OLTP processing is the key design aspect to always be kept in mind. In situations where that is not possible, some refactoring of the query can help. Table denormalization, coupled with minor changes in the application can also help in some situations to optimize query execution.

There may be other innovative solutions to this problem, and I am eager to hear about them from readers in the comments!