Using PowerShell to attach PST files to Outlook

This one is a really quick note. Yet another example to show off the power of the PowerShell pipeline and its interoperability with the COM world.

Recently I had to add 30+ existing PST files to an instance of Outlook. I hate using the mouse and keyboard for repetitive tasks (File –> Open Outlook Data File –> browse to the path –> click OK), so I set out to do the same through some form of scripting.

Luckily, PowerShell can create instances of COM objects, and Outlook provides a COM library to automate Outlook. It then took just a few minutes of searching on Bing to compose this script:

# Attach every PST found under the folder to the current Outlook profile
Add-Type -AssemblyName "Microsoft.Office.Interop.Outlook" | Out-Null
$outlook = New-Object -ComObject Outlook.Application
$namespace = $outlook.GetNameSpace("MAPI")
dir "c:\mypath\*.pst" | % { $namespace.AddStore($_.FullName) }

One interesting thing to keep in mind: you may need to run this with the version of PowerShell that matches the bitness of Outlook installed on your machine (the 32-bit PowerShell is at c:\windows\syswow64\windowspowershell\v1.0\powershell.exe).

If you do plan to use the above script, make sure you replace the path with the correct one. Also note that if a PST file path is specified incorrectly, Outlook will create a new PST at that location, so be careful!

The AddStore method is documented here. Do note, though, that this COM interface does not permit adding password-protected PST files; in that case you may want to explore the third-party Redemption library.

Disclaimer

Note: third-party links are provided as-is and Microsoft does not offer any guarantees or warranties regarding the content on the third-party site.

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm.


WOW64 madness: debugging through the confusion

Last week I was teaching unmanaged code debugging to one of my customers. They were running 32-bit applications on 64-bit servers, which does create some unique problems. For example, capturing a ‘hang’ dump by right-clicking the process in the Task Manager –> Processes list will create a ‘64-bit dump’ (if that makes sense), and the stacks visible by default will be those of the WOW64 emulation layer. That in turn causes problems for regular commands, for debugging extensions like PSSCOR / SOS, and even for some of the built-in extensions.

So here is a comprehensive listing of what to use in each ‘bitness’ combination when capturing a dump or an ETW log, or when using other common tools. A similarly themed post on all the tools we have is here.

 

Production live debugging
- 32-bit OS + 32-bit application: attach the 32-bit WinDbg / CDB.
- 64-bit OS + 32-bit application (WOW64): attach the 32-bit WinDbg / CDB.
- 64-bit OS + 64-bit application: attach the 64-bit WinDbg / CDB.

Gathering a ‘hang’ dump *
- 32-bit OS + 32-bit application: (1) use ProcDump, preferably with the -r switch; (2) use the Processes tab in DebugDiag; or (3) use the 32-bit WinDbg / CDB in non-invasive mode and issue the .dump command.
- 64-bit OS + 32-bit application (WOW64): same as the 32-bit case, but avoid using Task Manager.
- 64-bit OS + 64-bit application: (1) use ProcDump, preferably with the -r switch; (2) use the Processes tab in DebugDiag; or (3) use the 64-bit WinDbg / CDB in non-invasive mode and issue the .dump command.

Gathering a ‘crash’ dump *
- 32-bit OS + 32-bit application: (1) create a crash rule using DebugDiag x86 (it ‘just works’); (2) ProcDump -e ‘just works’; or (3) use the appropriate debugger as per the hang dump guidance above.
- 64-bit OS + 32-bit application (WOW64): (1) create a crash rule using DebugDiag x64 (it ‘just works’) #; (2) ProcDump -e ‘just works’; or (3) use the appropriate debugger as per the hang dump guidance above.
- 64-bit OS + 64-bit application: (1) create a crash rule using DebugDiag x64 (it ‘just works’); (2) ProcDump -e ‘just works’; or (3) use the appropriate debugger as per the hang dump guidance above.

Troubleshooting high CPU usage using XPerf
- 32-bit OS + 32-bit application: use the 32-bit Windows Performance Toolkit.
- 64-bit OS + 32-bit application (WOW64): use the 64-bit Windows Performance Toolkit.
- 64-bit OS + 64-bit application: use the 64-bit Windows Performance Toolkit.

Troubleshooting a performance issue using PerfView
- All combinations: PerfView works correctly regardless of the ‘bitness mix’.

Dump analysis of a dump containing only native code ^
- All combinations: any debugger should work, but the same ‘bitness’ is recommended as far as possible.

Dump analysis of a dump containing any managed code ^
- 32-bit OS + 32-bit application: debug using a 32-bit debugger and 32-bit PSSCOR / SOS.
- 64-bit OS + 32-bit application (WOW64): debug using a 32-bit debugger and 32-bit PSSCOR / SOS.
- 64-bit OS + 64-bit application: debug using a 64-bit debugger and 64-bit PSSCOR / SOS.

‘Live’ kernel debugging using LiveKD
- 32-bit OS + 32-bit application: LiveKD ‘just works’ provided you have installed the x86 Debugging Tools for Windows.
- 64-bit OS + 32-bit application (WOW64): LiveKD ‘just works’ provided you have installed the x64 Debugging Tools for Windows.
- 64-bit OS + 64-bit application: LiveKD ‘just works’ provided you have installed the x64 Debugging Tools for Windows.

GFLAGS utility
- 32-bit OS + 32-bit application: use the x86 version of GFlags.
- 64-bit OS + 32-bit application (WOW64): use the x86 version of GFlags.
- 64-bit OS + 64-bit application: use the x64 version of GFlags.

Application Verifier
- 32-bit OS + 32-bit application: install the x86 version of Application Verifier.
- 64-bit OS + 32-bit application (WOW64): install the x64 version of Application Verifier (it automatically includes the x86 binaries) and launch the ‘Application Verifier (WOW)’ program. $
- 64-bit OS + 64-bit application: install the x64 version of Application Verifier.

# Note that you cannot (and should not) install the 32-bit version of DebugDiag on a 64-bit OS.

* Note: with ProcDump, also use the -ma switch to capture a full memory dump; this is important for detailed debugging, especially for memory issues, and is a must for managed code dumps.

^ Note: Dump must be gathered ‘correctly’ with one of the above supported mechanisms.

$ Empirically, running either the native or the WOW version of the GUI seems to put entries under both sets of registry keys. Please check this post from Chris Jackson for some related details.

The other thing you need to understand before proceeding further is the role of the WOW64 layer. The following blog posts would help you in that case:

Hope this cheat sheet is useful. If it is, I would really appreciate you taking a minute to rate this post and leave a comment!

Correctly handling Linked Server objects in SSDT

Recently I spoke at the SQL Bangalore User Group about using SSDT and Visual Studio Online to build an efficient and agile development process for the cloud-first world. After my talk, an attendee asked me about correctly handling four-part names (objects referenced on other servers).

Problem

In his case he had added the referenced database as a database reference, but was still getting the following warning (or an error, if you set the ‘treat warnings as errors’ property):

SQL71561: View: [dbo].[myView] has an unresolved reference to object [remserver].[someDB].[dbo].[RefTable].

Here, myView is the referencing object and someDB is the referenced project (which he had already added as a database reference).

Solution

Here is how to meet this requirement. First, you must have SSDT projects for both databases in question. Add them to the same solution, and then set up a reference from your referencing project to the project which contains the referenced object. The important thing is that when you do this, you can set the “Database Location” option. Make sure you set it to “Different database, different server”:

[Screenshot: the Add Database Reference dialog with Database Location set to “Different database, different server”]

Once you add this reference with the above option, it is very easy to reference the linked server object in your main project. You do that by qualifying the referenced object with the SQLCMD variables defined in the database reference. For example, in the above case it is [$(OtherServer)].[$(RefDB)].<schema>.<object>:

CREATE PROCEDURE [dbo].[Mismatch]
    @param1 int = 0
AS
    SELECT COUNT(*) FROM dbo.Orders
    WHERE CustomerID = @param1

    SELECT Id FROM [$(OtherServer)].[$(RefDB)].dbo.RefTable
RETURN 0

And here is a sample view:

CREATE VIEW myView
AS
    SELECT Id FROM [$(OtherServer)].[$(RefDB)].dbo.RefTable

Hope this helps!

Potential data latency with AlwaysOn Availability Groups and Memory Optimized Tables

Today I was testing a scenario with a readable secondary in an Availability Group (AG). The database in the AG contained a memory-optimized table and I was testing read-only query support on the secondary.

The mystery

While changes to on-disk tables were replicated near-instantaneously (of course, in my test setup there was no other overhead) and were almost immediately visible on the secondary replica, I was surprised to see that this was not the case with the memory-optimized table.

For example, I would insert a row into the table at the primary replica. That row would not appear in the readable secondary, not even after waiting a few seconds. Note that there was no other workload related to memory optimized tables at the time.

Interestingly, if I inserted a new row into the table on the primary, it would cause the previously inserted row to show up on the secondary! I was quite puzzled by this behavior. On a hunch, I issued a manual CHECKPOINT on the primary. This ‘flushed’ the queue, and I was then able to see both rows on the secondary.

Truth revealed!

Since this was a bit strange, I dug around and found a gem in the documentation which explains this behavior. There is something called a ‘safe timestamp’, which is used on the readable secondary to return only rows older than a ‘safe point’; this safe point is normally updated by the garbage collection thread on the primary.

On an active system, this safe timestamp is updated and sent across to the secondary on a regular basis. But if there is not much activity on memory-optimized tables in the database (as in my case), there can be an inordinate delay in sending this safe timestamp across.

Conclusion

If such data latency for in-memory tables is not tolerable on the readable secondary, the product team recommends using ‘dummy’ transactions to propagate the safe timestamp more frequently:

Changes made by transactions on the primary replica since the last safe-timestamp update are not visible on the secondary replica till the next transmission and update of the safe-timestamp. If transactional activity on the primary replica stops before the internal threshold for safe-timestamp update is crossed, the changes made since the last update to safe-timestamp will not be visible on the secondary replica. To alleviate this issue, you may need to run a few DML transactions on a dummy durable memory-optimized table on the primary replica. Alternatively, though not recommended, you can force shipping of safe-timestamp by running a manual checkpoint.
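To make that concrete, here is a minimal sketch of such a dummy table and the DML against it (the table name is illustrative, and this assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup):

-- One-time setup: a durable (SCHEMA_AND_DATA) dummy memory-optimized table
CREATE TABLE dbo.DummySafeTsTable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 64)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Run periodically on the primary: a trivial pair of autocommit DML statements
-- is enough transactional activity to push the safe timestamp to the secondary
INSERT INTO dbo.DummySafeTsTable (Id) VALUES (1);
DELETE FROM dbo.DummySafeTsTable WHERE Id = 1;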

Be aware of this behavior if you plan to use a readable secondary and have phases of minimal activity in the system, but still need changes to in-memory tables to show up on the secondary as soon as possible.

NUL or NULL?

Ever since the old MS-DOS days, the Windows OS family has supported a special ‘null device’ – NUL. Notice the single ‘L’. This is a ‘reserved’ device identifier which you can use as a path. The special thing about this null device (NUL) is that it essentially swallows whatever is sent to it.

NUL and SQL

So what does this have to do with, say, SQL Server? Many of us use the null device to test raw backup throughput. The correct way to do this is to specify DISK = ‘NUL’ in the backup command. In that case, you will see an entry similar to the one below in the error log, and no physical destination file will be created:

2014-09-20 21:47:43.890 Backup       Database backed up. Database: master, creation date(time): 2014/09/18(16:33:26), pages dumped: 459, first LSN: 234:304:73, last LSN: 234:352:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘nul’}). This is an informational message only. No user action is required.
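For reference, the command behind such a throughput test looks roughly like this (the database name is just an example; COPY_ONLY is my addition so the test backup does not affect the differential base):

-- Throughput test only: everything written to the NUL device is discarded
BACKUP DATABASE MyTestDB
TO DISK = 'NUL'
WITH COPY_ONLY, STATS = 10;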

Now, in some cases you might accidentally or unknowingly type NULL (with the double L) instead of NUL. What that actually does is create a physical file called NULL. You can see proof of that here:

2014-09-20 21:47:03.480 Backup       Database backed up. Database: master, creation date(time): 2014/09/18(16:33:26), pages dumped: 460, first LSN: 234:208:128, last LSN: 234:280:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Backup\null’}). This is an informational message only. No user action is required.

Conclusion

So if you are using the NUL device in your tests, make sure you spell it as NUL (without the double L)!

Have a good weekend!

Strange issue when enabling AlwaysOn AG feature in SQL 2014

Today I ran into an issue, which eventually turned out to be a really silly thing, but I think it is important that I share the details.

I was trying to install SQL 2014 side-by-side with an already existing SQL 2012 AG setup. Everything went fine until the customary step where I tried to ‘tick the box’ and enable the AG feature for the newly installed SQL 2014 instances. But that was not to be; I got the following error dialog: “The AlwaysOn Availability Groups feature requires the x86 (non-WOW) or x64 Enterprise Edition of SQL Server 2012 (or later version) running on Windows Server 2008 (or later version) with WSFC hotfix KB 2494036 installed.”


The hotfix in question was already installed and, most importantly, my SQL 2012 AG was already functional on this cluster. So it was definitely not an issue with the OS or the cluster.

Not to be deterred, I used the PowerShell option, but that failed as well:

PS SQLSERVER:\SQL\SQL108W2K8R22\SQL14> Enable-SqlAlwaysOn
Enable-SqlAlwaysOn : Could not obtain information about SQL Server Service ‘MSSQL$SQL14’.
At line:1 char:19
+ Enable-SqlAlwaysOn <<<<
    + CategoryInfo          : ResourceUnavailable: (MSSQL$SQL14:String) [Enable-SqlAlwaysOn], SqlPowerShellServiceException
    + FullyQualifiedErrorId : ServiceError,Microsoft.SqlServer.Management.PowerShell.Hadr.EnableSqlHADRServiceCommand

Providence!

Suddenly it dawned on me that perhaps, in my haste, I had installed an edition of SQL 2014 which does not permit the AG feature. And indeed, I had installed the Business Intelligence edition, which does not support AGs.

The cool thing about SQL 2008 and above, though, is the “Edition Upgrade” feature of setup, which lets you easily change editions just by entering the correct product key (provided the upgrade path is allowed – for example, you cannot ‘upgrade’ from the BI edition to Evaluation).

The whole edition upgrade thing took just under a minute and after that I was able to successfully enable the SQL 2014 installation for AlwaysOn AGs.
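If you want to double-check the edition and whether the feature is now enabled from T-SQL, a quick query along these lines (a sketch) does the trick:

-- Edition in use, and whether AlwaysOn AGs are enabled (1) or not (0)
SELECT SERVERPROPERTY('Edition')       AS Edition,
       SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;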

Hope this tip helps someone!

Changes in SQL 2014 RTM ScriptDom

It has been over three months now since SQL Server 2014 became generally available. Well-known and much-talked-about features such as In-Memory OLTP and clustered ColumnStore indexes are big motivations to move your workload to the new platform.

With the new features, the T-SQL language surface has also changed of course. Under compatibility level 120, there are new grammar elements to consider. I recently went through one such round of impact analysis, specifically around the ScriptDom parser.

The good news is that the move from the SQL 2012 ScriptDom to the latest one is very incremental, and almost all of the changes are due to new syntax elements. I have prepared a list of these statements (as represented in the parser) for your convenience.

 

In-Memory OLTP support

It is no coincidence that the changes related to In-Memory OLTP are the biggest! This section also includes related support such as ATOMIC block statements, delayed durability, inline index definitions, etc.
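To give a flavor of the new grammar involved, here is a minimal sketch (names are illustrative, and it assumes a database with a MEMORY_OPTIMIZED_DATA filegroup):

-- Memory-optimized table with inline (column-level) index definitions
CREATE TABLE dbo.DemoInMem
(
    Id     INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Amount INT NOT NULL INDEX ix_Amount NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure with an ATOMIC block
CREATE PROCEDURE dbo.usp_AddDemoRow @Id INT, @Amount INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT dbo.DemoInMem (Id, Amount) VALUES (@Id, @Amount);
END;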

 

Buffer Pool Extension, Resource Governor, HADR and FCI related Server DDL

This set of changes covers support for configuring the BPE, the new I/O parameters in Resource Governor, and some minor but important DDL to configure HADR availability groups and FCI instances from T-SQL itself.
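For example, the new server-level and resource pool DDL looks roughly like this (file path, size and pool name are illustrative):

-- Enable the Buffer Pool Extension on a fast SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'F:\BPE\ExtensionFile.BPE', SIZE = 16 GB);

-- New I/O controls on a Resource Governor resource pool
ALTER RESOURCE POOL [PoolReports]
WITH (MIN_IOPS_PER_VOLUME = 20, MAX_IOPS_PER_VOLUME = 200);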

 

Clustered ColumnStore related

This group includes the index DDL and Archive compression related changes.
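For example (table and index names are illustrative):

-- Clustered columnstore index with the new archival compression option
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);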

 

Incremental Statistics related
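For example, on a partitioned table (names are illustrative):

-- Statistics that can be maintained incrementally, per partition
CREATE STATISTICS st_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH INCREMENTAL = ON;

-- Later, refresh only the partitions that changed
UPDATE STATISTICS dbo.Orders (st_Orders_OrderDate)
WITH RESAMPLE ON PARTITIONS (3, 4);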

 

Managed Lock Priority related

Managed lock priority changes extend to quite a few operations such as index maintenance, partition switch etc.
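For example, an online index rebuild can now specify how it waits for conflicting locks (names are illustrative):

-- Rebuild online, wait at low priority, and abort the rebuild itself if still blocked after 5 minutes
ALTER INDEX ix_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)));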

 

Backup Encryption + Security DDL

Here you will notice the changes to support encryption while backing up, along with the new encryption algorithm enumeration to support it. We also have several new event notifications available.
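For example (the certificate and path are illustrative; the certificate must already exist in the master database):

-- Encrypted backup using a server certificate
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);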

 

Miscellaneous

In conclusion, I believe I have covered all the changes (and I’m glad there seem to be no breaking changes as such). If you still find something which was added or changed that I did not mention in this post, please feel free to leave a comment below!