DIY: Apache Spark and ADLS Gen 2 support

Warning: this walkthrough is not to be considered official guidance or a recommendation from Microsoft. It is presented for educational purposes only, comes “as-is”, and confers no rights, warranties or guarantees.

There are several options for customers who want to deploy Apache Spark based solutions on Microsoft Azure, with Azure Databricks and Azure HDInsight being the most popular. There is also the open-source Azure Distributed Data Engineering Toolkit (AZTK) if you want a more IaaS-like experience. Of course, with Spark providing the analytical compute capabilities, what you also need is first-class cloud storage that offers HDFS-like capabilities: distributed data storage, redundancy and security. Azure Data Lake Storage Gen 2 (ADLS Gen 2) offers exactly that, with worldwide availability and competitive pricing.

In order to connect to ADLS Gen 2 from Apache Hadoop or Apache Spark, you need to leverage the ABFS driver, which was shipped publicly with Apache Hadoop 3.2.0. The associated work item HADOOP-15407 has some more information about this implementation, and best of all, the ABFS driver is part of the Hadoop source.

Given that most distributions of Spark tend to come with Hadoop 2.x versions, the ABFS driver is absent in those cases, leading to a blocker for customers who want to “roll their own” Spark infrastructure but also want to use ADLS Gen 2. I was curious to find out if there is a way to get (let’s say) Spark 2.3.3 to work with Hadoop 3.2.0 (which does include the ABFS driver) and thereby offer at least a path forward (albeit subject to the disclaimers around supportability and stability).

The good news is that Spark comes with a “Hadoop-free” binary distribution which does allow users to associate it with any release of Hadoop, thereby allowing them to “mix and match” Spark and Hadoop versions. Here’s a set of commands that I used to do exactly this on a dev setup, just to see if it works.

The first few steps just fetch the binary tarballs for Spark 2.3.3 (the “without Hadoop” build) and, separately, for Hadoop 3.2.0, and then extract both:

cd ~
wget https://www-eu.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-without-hadoop.tgz
wget https://www-eu.apache.org/dist/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
tar -zxvf spark-2.3.3-bin-without-hadoop.tgz
tar -zxvf hadoop-3.2.0.tar.gz

Then we proceed to set up the environment variables. The below also assumes that you have OpenJDK 8 installed. The crucial step is to specify SPARK_DIST_CLASSPATH, which, as described in the Spark documentation, tells Spark where to find the Hadoop JARs it needs. Notice that we also add hadoop/tools/lib/* to the classpath; that is where the ABFS driver lives. Unfortunately, the Spark documentation does not include this vital step.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 
export HADOOP_HOME=~/hadoop-3.2.0
export PATH=${HADOOP_HOME}/bin:${PATH}
export SPARK_DIST_CLASSPATH=$(hadoop classpath):${HADOOP_HOME}/share/hadoop/tools/lib/*
export SPARK_HOME=~/spark-2.3.3-bin-without-hadoop
export PATH=${SPARK_HOME}/bin:${PATH}
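
As a quick optional sanity check before launching Spark, you can confirm that the hadoop-azure JAR (which is where the ABFS driver lives) is actually present under the tools/lib folder we just appended to the classpath:

ls ${HADOOP_HOME}/share/hadoop/tools/lib/ | grep -i azure
# you should see hadoop-azure-3.2.0.jar (along with related Azure storage JARs) in the output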

Then, running spark-shell and reading from ADLS Gen 2 works out of the box! I used the sample code below to test with the SharedKey authentication option. I have not tested OAuth 2.0 authentication with this custom deployment, though.

spark.conf.set("fs.azure.account.key.<<storageaccount>>.dfs.core.windows.net",  "<<key>>")
spark.read.csv("abfss://<<container>>@<<storageaccount>>.dfs.core.windows.net/<<topfolder>>/<<subfolder>>/file").count

In closing, I want to re-emphasize that the above should strictly be considered an experiment and is by no means production-ready. For production workloads, I strongly recommend using services like Azure Databricks or Azure HDInsight, which are far more thoroughly tested and are fully supported by Microsoft CSS.

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code. This posting is provided “AS IS” with no warranties, and confers no rights.


Strange errors with SQLBindR

I recently encountered a strange problem while trying to bind a SQL Server 2016 instance to Microsoft R Server 9.1. On running the below command:

sqlbindR /list

I got the error:

An unexpected error has occurred: Value cannot be null.
Parameter name: input

I had a hunch that this was related to an ‘orphaned’ pre-release SQL Server 2017 instance which I had previously installed and manually deleted. I found some registry keys related to that (non-existent) instance and deleted those. But SQLBindR was still erroring out:

sqlbindR /list
An unexpected error has occurred: Version string portion was too short or too long

I finally figured out that there was still an entry for the non-existent SQL instance under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL: there was a value (in the right-hand pane when viewed in RegEdit) for that old instance. When I deleted that value, I was able to run SQLBindR successfully!
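
If you prefer to inspect that key from a command prompt rather than RegEdit, a quick illustrative check is:

reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"

Every value listed there should correspond to an instance that actually exists on the machine; a leftover value for a deleted instance is exactly what was tripping up SQLBindR in my case.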

Please note this troubleshooting tip is intended for non-production systems and for advanced users. If you have questions or concerns, or this error is happening on a production system, I strongly recommend you contact Microsoft Support.

Disclaimers

Please note that this troubleshooting tip is provided as-is and neither I nor Microsoft offers any guarantees or warranties regarding it. Using tools like RegEdit incorrectly can cause serious problems that may require you to reinstall your operating system. Use RegEdit at your own risk. This information is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS INFORMATION IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

Create your own database with all PASS Summit 2017 Sessions using SQL Server 2017 ML Services (Python)

Next week is the biggest event for the SQL Server community: PASS Summit 2017! I will be presenting the Real-world SQL Server R Services session with two of our customers – APT and Financial Fabric. I will also be part of the SQL Clinic, so I’m really excited and pumped about next week!

Being a Data Nut, it suddenly occurred to me today: what if we could import all the PASS Summit 2017 session titles, abstracts and other details into a SQL Server table? Once this thought was in my mind, there was no stopping me! The wonderful thing is that with Python in SQL Server 2017 you can leverage rich libraries such as Beautiful Soup 4 and urllib3 to parse HTML and then present it as a structured table (using pandas) which SQL can then consume.

The code below will do exactly that for you. It leverages the above-mentioned Python libraries, so prior to executing the script you must install two packages (bs4 and urllib3) using pip. (Note that pip.exe lives under the C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\scripts folder; change the drive letter as appropriate for your installation.) Also note that pip must be run from an administrative CMD prompt in Windows.
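
For illustration, assuming a default MSSQLSERVER instance on the C: drive (adjust the path for your installation), the install from an administrative command prompt looks roughly like this:

cd "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\scripts"
pip.exe install bs4 urllib3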

The other prerequisite for the script below to run correctly is that outbound Internet access must be allowed for the SQL instance’s R / Python scripts; by default, we block such access. To disable the rule temporarily (we strongly recommend not disabling it permanently), use wf.msc to open the Windows Firewall with Advanced Security console and locate the rule “Block network access for R local user accounts in SQL Server instance <instance name>” in the Outbound Rules section. Right-click and disable the rule for now, and DO NOT FORGET to re-enable it later!

Alright, here is the code!

CREATE DATABASE PASS2017
GO

USE PASS2017
GO

DROP TABLE IF EXISTS PASS2017Sessions

CREATE TABLE PASS2017Sessions
(Abstract nvarchar(max)
		, SessionLevel int
		, Speaker1 nvarchar(100)
		, Speaker2 nvarchar(100)
		, Speaker3 nvarchar(100)
		, Title nvarchar(4000)
		, Track nvarchar(50)
		, SessionId int
		);
GO

INSERT PASS2017Sessions
exec sp_execute_external_script @language = N'Python',
@script = N'
from bs4 import BeautifulSoup
import urllib3
import re
import pandas as pd

http = urllib3.PoolManager()
schedpage = http.request("GET", "http://www.pass.org/summit/2017/Sessions/Schedule.aspx")
schedpage.status
soup_schedpage = BeautifulSoup(schedpage.data, "lxml")
schedpage.close()

documents = []
processedsessions = []

sessioncells = soup_schedpage.find_all("div", class_ = "session-schedule-cell", recursive=True)
for currsess in sessioncells:
    hrefs = currsess.find_all("a")
    if (len(hrefs) >= 1):
        rowdict = {}

        # session title

        rowdict["Title"] = hrefs[0].text

        # session level
        sesslevel = currsess.find("p", id = re.compile("plcLevel"))
        if (sesslevel is not None):
            rowdict["Level"] = sesslevel.text.replace("Level: ", "")
        else:
            rowdict["Level"] = None

        # session track
        allps = currsess.find_all("p")
        rowdict["Track"] = allps[len(allps) -2].text

        # get into session page itself
        if ("href" in hrefs[0].attrs):
            sessurl = hrefs[0].attrs["href"]

            # session ID
            mtch = re.search(r"sid=(\d+)", sessurl)
            if (mtch is None):
                continue

            # check if this session ID was already processed
            sessionid = mtch.group(1)

            if (sessionid in processedsessions):
                continue

            processedsessions.append(sessionid)

            rowdict["sessionid"] = sessionid

            sesspage = http.request("GET", sessurl)
            soup_sesspage = BeautifulSoup(sesspage.data, "lxml")
            sesspage.close()

            # session abstract
            sessabstract = soup_sesspage.find("pre", class_ ="abstract")
            rowdict["Abstract"] = sessabstract.text
            if (len(rowdict["Abstract"]) == 0):
                continue

            # speakers
            allspeakers = soup_sesspage.find_all("a", id=re.compile("Detail.+lnkSpeaker"))

            rowdict["Speaker1"] = None
            rowdict["Speaker2"] = None
            rowdict["Speaker3"] = None

            if (len(allspeakers) >= 1):
                rowdict["Speaker1"] = allspeakers[0].text

            if (len(allspeakers) >= 2):
                rowdict["Speaker2"] = allspeakers[1].text

            if (len(allspeakers) == 3):
                rowdict["Speaker3"] = allspeakers[2].text
        else:
            continue

        documents.append(rowdict)

OutputDataSet = pd.DataFrame(documents)'
GO

select *
from PASS2017Sessions
GO

If all goes well you should see the results:

[Screenshot: results of SELECT * FROM PASS2017Sessions]

Isn’t that so cool! Play around with it and let me know what you think. And if you are at the Summit and interested in SQL Server ML Services, a friendly reminder to come to the Real-world SQL Server R Services session. See you later!

Disclaimer

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code. This posting is provided “AS IS” with no warranties, and confers no rights.

The ‘aggregate concatenation’ T-SQL (anti-)pattern

[Update 20 March 2019: this functionality is now natively provided in SQL Server 2017+ / Azure SQL DB through the STRING_AGG intrinsic]

Some time back, Dimitri Furman, a Senior Consultant on the Microsoft Services team, shared a very useful tip. It pertains to a usage pattern wherein strings from multiple rows are concatenated into a single large (typically comma-delimited) string. The usual way we see this being accomplished is with code like the below:

DECLARE @res NVARCHAR(max)

SELECT @res = COALESCE(@res + ',', '') + FirstName + LastName
FROM AdventureWorks2016CTP3.Person.Person

SELECT @res

While the above is quite simple to read, it is quite slow (as you will see soon) and, worse, it does not ‘inline’, meaning you cannot use it inside an inline table-valued function. This last property was very important for us because, in the case we were looking at, the COALESCE-based concatenation code was in a scalar UDF, which is clearly a big ‘no-no’ from a performance perspective. Also, we did not want to use SQLCLR, so the GROUP_CONCAT CLR UDA was ruled out.

Here is a much better way: refactor the above concatenation ‘loop’ into the query below:

SELECT STUFF((
SELECT ',' + FirstName + LastName
FROM AdventureWorks2016CTP3.Person.Person
FOR XML PATH('')
,TYPE
).value('.', 'nvarchar(max)'), 1, 1, '') AS RawResult

On first glance, it may seem counter-intuitive that the second version could be faster, but that is indeed the case. For the above queries, here is the comparison:

COALESCE loop: CPU time = 7969 ms, elapsed time = 7984 ms
FOR XML PATH: CPU time = 15 ms, elapsed time = 16 ms

The usage of FOR XML PATH in this way is not exactly ‘breaking news’, but the performance difference is dramatic. More importantly for the case we were discussing at work, the XML PATH based code can be inlined, making it suitable for use in TVFs. And in our real case, that’s exactly what we wanted to do (refactor the UDF into a TVF).
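
As noted in the update at the top of this post, if you are on SQL Server 2017 or later (or Azure SQL Database), the native STRING_AGG aggregate expresses the same idea far more simply; a minimal equivalent of the queries above would be something like:

-- CONVERT to NVARCHAR(MAX) so the aggregated result is not capped at 8000 bytes
SELECT STRING_AGG(CONVERT(NVARCHAR(MAX), FirstName + LastName), ',') AS RawResult
FROM AdventureWorks2016CTP3.Person.Person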

Have a great week ahead!

T-SQL ranking functions to the rescue once again!

This week I happened to assist with an internal application where there was a performance issue with a specific query. One of my observations with that query was that it was doing self-joins in order to determine the latest iteration of a specific record. That’s a common anti-pattern which can be fixed by using T-SQL’s windowing functions, such as ROW_NUMBER() or RANK().

The Inefficient Way

To give more context, here is an example from the Contoso Retail DW database. The requirement here (very similar to the actual customer case) is to obtain a list of all the customers who have placed an order with us and, for each of those customers, the date of their latest order. The implementation I first noticed used a self-join as given below; notice the special-case handling where a customer has two orders on the same day, in which case the order number (the OnlineSalesKey column here) becomes the tie-breaker:

SELECT l.CustomerKey
	,l.DateKey AS LatestOrder
FROM dbo.FactOnlineSales l
LEFT JOIN dbo.FactOnlineSales r ON l.CustomerKey = r.CustomerKey
	AND (
		l.DateKey < r.DateKey
		OR (
			l.DateKey = r.DateKey
			AND l.OnlineSalesKey > r.OnlineSalesKey
			)
		)
WHERE r.DateKey IS NULL

This query is *extremely* inefficient, burning 100% CPU on my i7 laptop with 8 logical CPUs! It runs for well over 10 minutes before I get impatient and cancel it. For reference, here is the estimated execution plan for the above query (scroll to the right and note the query cost of 167692).

[Image: estimated execution plan for the self-join query, with a cost of 167692]

Just for kicks, I ran the above query with MAXDOP 144 (NOT a good idea, but this was for fun) on our lab machine which has 144 logical CPUs and here is the picture I see there 🙂 Obviously NOT something you want in production!

[Image: CPU usage across all 144 logical processors on the lab machine]

Rewriting the query

Now, the efficient way to rewrite this is to use ranking functions in T-SQL. These have been around for a while (since SQL Server 2005, actually!) but I feel they are under-utilized. Here is the query rewritten using the ROW_NUMBER() function. This solution also elegantly takes care of the tie-breaking logic which required the disjunction (OR predicate) in the previous query.

WITH NumberedOrders
AS (
	SELECT CustomerKey
		,Datekey
		,ROW_NUMBER() OVER (
			PARTITION BY CustomerKey ORDER BY Datekey DESC
				,OnlineSalesKey DESC
			) AS RowNumber
	FROM FactOnlineSales
	)
SELECT CustomerKey
	,Datekey
FROM NumberedOrders
WHERE RowNumber = 1

Here is the new execution plan. Note that the cost is also much lower: 447.

[Image: execution plan for the ROW_NUMBER() query, with a cost of 447]

Here are the execution statistics for this query; it completes in 43 seconds, compared to the self-join approach which effectively never finishes:

  • Table ‘FactOnlineSales’. Scan count 9, logical reads 92516, physical reads 0, read-ahead reads 72663, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
  • Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 31263, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
  • SQL Server Execution Times: CPU time = 19231 ms, elapsed time = 43834 ms.

Moral of the story: the next time you see query patterns involving self-joins to find the ‘latest record of a particular type’, take a pause and see if T-SQL ranking functions can help!

Common sub-expression elimination in SQL Server: what you need to know

In working with our customers, I come across some very interesting patterns in T-SQL. Not all of these patterns are complex on the face of it, but their impact can sometimes be substantial. Here is a simple example to demonstrate what I am referring to:

SELECT CASE WHEN EXISTS (SELECT *
                         FROM   Person.Person AS P
                         WHERE  P.BusinessEntityID = SOD.ProductID) THEN 1 ELSE 0 END AS Col1,
       CASE WHEN EXISTS (SELECT *
                         FROM   Person.Person AS P
                         WHERE  P.BusinessEntityID = SOD.ProductID) THEN 1 ELSE 0 END AS Col2
FROM   Sales.SalesOrderDetail AS SOD;

This query is clearly fictional, but it will suffice for our walkthrough here 🙂 Now here’s the execution plan for the above statement (you can also test this by using AdventureWorks2012 and above):

[Image: execution plan for the original query, accessing Person.Person twice]

Observe that the Person.Person table is accessed twice in the above plan. In the real customer scenario that I was looking at, the table involved was accessed using a nested loop join, for a total of 13 million seeks. And those 13 million seeks were repeated for the second usage of the subquery.

My rewrite of the query uses LEFT OUTER JOIN syntax, as reflected in this simple example:

SELECT CASE WHEN P.BusinessEntityID IS NOT NULL THEN 1 ELSE 0 END AS Col1,
       CASE WHEN P.BusinessEntityID IS NOT NULL THEN 1 ELSE 0 END AS Col2
FROM   Sales.SalesOrderDetail AS SOD
LEFT OUTER JOIN Person.Person AS P ON P.BusinessEntityID = SOD.ProductID;

Here is the execution plan for the optimized version:

[Image: execution plan for the rewritten query, accessing Person.Person only once]

These two queries are functionally equivalent (because in this case there is a unique key on the BusinessEntityID column) and return the same set of results. As you can see, the Person table is accessed just once in the revised query. The cost of the rewritten query is also about 6 times lower than that of the original! This kind of optimization is referred to as common subexpression elimination. There are some strategies used by SQL Server (see this paper and this one as well for some ideas), but not every possible scenario (such as the one above) can be optimized. Also, this paper from Microsoft Research covers future ideas on the topic.

For now, it is best to review your T-SQL code for multiple instances of the same subquery and replace those with an explicit JOIN instead.

Unable to launch runtime for ‘R’ script: Check this first

This post is a quickie, hoping that the tip will come in handy to others who are facing the same issue.

Note: the steps below were written for SQL Server 2016 pre-release versions. They are NOT valid for RTM and subsequent releases.

Today I was trying to enable the R Services (a.k.a. Advanced Analytics) feature in SQL Server 2016, which brings the world of R into SQL Server. I thought that I had diligently followed the instructions on installing and configuring the integration. Next, I tried to execute the basic script below (sort of like a ‘loopback’ test for this feature) to check if the configuration was okay:

sp_execute_external_script @language = N'R',
    @script = N'OutputDataSet <- InputDataSet',
    @input_data_1 = N'SELECT 1 as Col'
WITH RESULT SETS ((col int not null));

Unfortunately it kept returning the error messages below:

Msg 39021, Level 16, State 1, Line 1
Unable to launch runtime for ‘R’ script. Please check the configuration of the ‘R’ runtime.
Msg 39019, Level 16, State 1, Line 1
An external script error occurred:
Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).
Msg 11536, Level 16, State 1, Line 1
EXECUTE statement failed because its WITH RESULT SETS clause specified 1 result set(s), but the statement only sent 0 result set(s) at run time.

I went back to the instructions and realized I had forgotten one step: downloading an updated version of the post-installation configuration script from the Microsoft Download Center. Clicking on the previous link actually gives us an updated registerRext.exe. Running registerRext.exe /uninstall followed by registerRext.exe /install with this updated version fixed my problem!
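
For reference, the two commands (run from an elevated command prompt, in the folder containing the updated script) are simply:

registerRext.exe /uninstall
registerRext.exe /install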

Do note that the errors mentioned above could have been caused by other issues, but at a minimum please ensure that you are using the updated registerRext.exe.

What’s new in the SQL Server 2016 CTP 3.3 ScriptDom Parser

We just made SQL Server 2016 CTP 3.3 publicly available a few days ago, and it has some new features which are also reflected in the managed T-SQL parser (Microsoft.SqlServer.TransactSql.ScriptDom). As readers of this blog may know, I am a big fan of this parser, and seeing it keep pace with the latest preview release of the SQL Server engine feels great!

Here are the main changes you can leverage if you are using the SQL Server 2016 CTP 3.3 ScriptDom parser:

  • The ScriptDom parser is up to date with the new support for delayed compression of delta rowgroups in Columnstore indexes: the new COMPRESSION_DELAY option in ALTER INDEX helps if you are using non-clustered indexes in a real-time analytics scenario. See this article for more details.
  • The RSA_3072 and RSA_4096 algorithms are now recognized by the ScriptDom parser. These are typically used in the CREATE ASYMMETRIC KEY syntax. (A quick syntax sketch of both additions follows this list.)
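
For illustration only (the table, index and key names below are hypothetical), these are the kinds of statements the updated parser now understands:

-- delayed compression of delta rowgroups
ALTER INDEX NCCI_Sales ON dbo.Sales
SET (COMPRESSION_DELAY = 10 MINUTES)

-- new RSA key lengths
CREATE ASYMMETRIC KEY SampleAsymKey WITH ALGORITHM = RSA_4096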

I also wanted to point anyone who is planning to upgrade a previous installation of the SQL Server 2016 preview (typically CTP 3.2) to the release notes, which contain a very important step (if you are using FILESTREAM) that you must take BEFORE upgrading to CTP 3.3. So please take a minute and check the release notes prior to upgrading!

Hope this helps!

Error 0xC1900101 – 0x20017 upgrading to Windows 10 build 10162

We are all excited about the upcoming release of Windows 10 on the 29th of July. In the meantime, you can install a preview version from the Windows Insider web page, or if you’d like to wait, you can reserve your free upgrade by following the steps here.

At the beginning…

At Microsoft, we are at the forefront of the ‘self-hosting’ ballgame. Some time back, I had installed Windows 10 preview build 10074. This week, with the newer builds being released in rapid succession, I decided to switch the Preview Build frequency to ‘Windows Insider Fast’ so that I could get hold of the latest (at the time) 10162 build:

[Screenshot: Preview Build frequency set to ‘Windows Insider Fast’]

Everything seemed to work alright with the update to 10162…

[Screenshot: the build 10162 update downloading and installing]

… till the point where it rebooted the computer:

[Screenshot: the update reaching the restart step]

 

Ahem…

Instead of seeing the usual ‘Upgrading Windows, sit tight..’ screen, this is what I saw:

[Screenshot: the ‘Choose your keyboard layout’ screen]

Well, the keyboard layout screen was certainly not expected! When I selected the US keyboard, it asked me whether I wanted to Troubleshoot or shut down, which again was quite strange:

[Screenshot: the screen offering Troubleshoot or shut down]

So I played along and tried to troubleshoot, at which stage I was basically left with options such as Reset the PC:

[Screenshot: troubleshooting options, including Reset the PC]

And even Reset PC failed with a helpful error message that the ‘drive where Windows is installed is locked’:

[Screenshot: Reset PC failing because ‘the drive where Windows is installed is locked’]

Left with no other option, I rebooted, and was then greeted with the infamous error 0xC1900101 – 0x20017, which has plagued some users during the Windows 8.1 upgrade as well:

[Screenshot: Windows Update reporting error 0xC1900101 – 0x20017]

 

Some clues

At this time, I also noticed a very strange issue: any USB mass storage device I plugged into the laptop was not assigned a drive letter. It would show up in Disk Management, but when I tried to give it a drive letter, Disk Management would pop up a cryptic message: ‘The operation failed to complete because the Disk Management console view is not up-to-date. Refresh the view by using the refresh task. If the problem persists close the Disk Management console, then restart Disk Management or restart the computer’.

Well, this was getting stranger and stranger! Somehow, it appeared that the two issues were interrelated. Internally, I was told that the error code 0xC1900101 – 0x20017 was also related to USB drivers, and the above Disk Management error had in the past been attributed to ‘ghost’ drives.

Hide and seek

With that clue in mind, I took a look at my Device Manager screen and enabled ‘Show Hidden Devices’ under the View menu. In my case, I found some devices under Storage Volumes which only showed up when hidden devices were enabled, and those were clearly not attributable to any device I had actually installed on my laptop (i.e. I had nothing related to Roxio Saib in reality on my laptop):

[Screenshot: Device Manager showing hidden Roxio Saib devices under Storage Volumes]

So I decided to delete these hidden Roxio devices. (In another case where I had seen an identical problem, the devices showed up as Unknown Device under Storage Volumes, and in that case I deleted those Unknown Devices instead.)

WARNING: Using Device Manager incorrectly can cause serious problems that may require you to reinstall Windows. Microsoft cannot guarantee that problems resulting from deleting incorrect devices can be solved. Use these steps at your own risk. Please note again that in this walkthrough, we have only deleted Hidden, Unknown devices which were listed under the Storage Volumes category.


There were two of these Roxio Saib devices (which clearly did not apply to my laptop) and I proceeded to clean both instances up, followed by a reboot:

[Screenshot: uninstalling the hidden Roxio Saib devices in Device Manager]

 

We’re there!

After the reboot, I re-tried the Update:

[Screenshot: retrying the update to build 10162]

At the end of this, and post-restart, lo and behold: the expected ‘Upgrading Windows’ screen appeared:

[Screenshot: the ‘Upgrading Windows’ screen]

And after around 20 minutes, the upgrade to 10162 was done successfully!

[Screenshot: the upgrade to build 10162 completed successfully]

For those of you who had the misfortune to run into error code 0xC1900101 – 0x20017 when upgrading to Windows 10, I hope this walkthrough helps you get unblocked! And in case you are still unsure or not confident about what to do, please contact Microsoft Customer Services and Support to get more direct help with your issue.