
Recursive U-SQL With PowerShell (U-SQL Looping)

In its natural form, U-SQL does not support recursive operations, and for good reason. This is a big data, scale-out, declarative language where the inclusion of procedural, iterative code would be very unnatural. That said, if you must pervert things, PowerShell can assist with the looping and, dare I say, the possibility of dynamic U-SQL.

A couple of caveats…

  • From the outset, I accept that this abstraction with PowerShell to achieve an iterative process in U-SQL is a bit of a hack, and very inefficient, certainly in the example below.
  • The scenario here is not perfect and was created as a simple situation for the purposes of explanation only. I know the same data results could be achieved just by extending the aggregate grouping!

Hopefully that sets the scene. As I’m writing, I’m wondering if this blog post will be hated by the purists out there. Or loved by the abstraction hackers. I think I’m both 🙂

Scenario

As with most of my U-SQL blog posts I’m going to start with the Visual Studio project available as part of the data lake tools extension called ‘U-SQL Sample Application’. This gives us all the basic start up code and sample data to get going.

Input: within the solution (Samples > Data > Ambulance Data) we have some CSV files for vehicles. These are separated into 16 source datasets covering 4 different vehicle IDs across 4 days.

Output: let’s say we have a requirement to find out the average speed of each vehicle per day. Easy enough with a U-SQL wildcard on the extractor. But we also want to output a single file for each day of data. Not so easy, unless we write one query for each day of data. Fine with samples covering only 4 days, not so fine with 2 years of records split by vehicle.

Scenario set, let’s look at how we might do this.

The U-SQL

To produce the required daily outputs I’m going to use a U-SQL query in isolation to return a distinct list of dates across the 16 input datasets, plus a parameterised U-SQL stored procedure to do the aggregation and output a single day of data.

First, getting the dates. The query below simply returns a text file containing a distinct list of all the dates in our source data.

DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
 
@DATA =
    EXTRACT 
        [vehicle_id] int,
        [entry_id] long,
        [event_date] DateTime,
        [latitude] float,
        [longitude] float,
        [speed] int,
        [direction] string,
        [trip_id] int?,
        [filename] string
    FROM 
        @InputPath
    USING 
        Extractors.Csv();
 
@DateList =
    SELECT DISTINCT 
        [event_date].ToString("yyyyMMdd") AS EventDateList
    FROM 
        @DATA;
 
OUTPUT @DateList
TO "/output/AmbulanceDataDateList.txt"
USING Outputters.Csv(quoting : false, outputHeader : false);

Next, the stored procedure below. This uses the same input files, but does the required aggregation and outputs a daily file matching the parameter passed, giving us a single output per day.

CREATE PROCEDURE IF NOT EXISTS [dbo].[usp_OutputDailyAvgSpeed]
    (
    @OutputDate string
    )
AS
BEGIN
 
    //DECLARE @OutputDate string = "20140914"; //FOR dev
    DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
    DECLARE @OutputPath string = "/output/DailyRecords/VehicleAvgSpeed_" + @OutputDate + ".csv";
 
    @DATA =
        EXTRACT 
            [vehicle_id] int,
            [entry_id] long,
            [event_date] DateTime,
            [latitude] float,
            [longitude] float,
            [speed] int,
            [direction] string,
            [trip_id] int?,
            [filename] string
        FROM 
            @InputPath
        USING 
            Extractors.Csv();
 
    @VAvgSpeed =
        SELECT DISTINCT 
            [vehicle_id],
            AVG([speed]) AS AverageSpeed
        FROM 
            @DATA
        WHERE
            [event_date].ToString("yyyyMMdd") == @OutputDate
        GROUP BY
            [vehicle_id];
 
    OUTPUT @VAvgSpeed
    TO @OutputPath
    USING Outputters.Csv(quoting : true, outputHeader : true);
 
END;

At this point, we could just execute the stored procedure with each required date, manually crafted from the text file. Like this:

[dbo].[usp_OutputDailyAvgSpeed]("20140914");
[dbo].[usp_OutputDailyAvgSpeed]("20140915");
[dbo].[usp_OutputDailyAvgSpeed]("20140916");
[dbo].[usp_OutputDailyAvgSpeed]("20140917");

Fine for small amounts of data, but we can do better for larger datasets.

Enter PowerShell and some looping.

The PowerShell

As with all things Microsoft, PowerShell is our friend, and the supporting cmdlets for the Azure Data Lake services are no exception. I recommend reading up on these if you haven’t yet written any PowerShell to control ADL Analytics jobs or upload files to ADL Storage.

Moving on. How can PowerShell help us script our data output requirements? Well, here’s the answer. In my PowerShell script below I’ve done the following:

  1. Authenticate against my Azure subscription (optionally create yourself a PSCredential to do this; see the sketch after the script).
  2. Submit the first U-SQL query as a file to return the distinct list of dates.
  3. Wait for the ADL Analytics job to complete.
  4. Download the output text file from ADL storage.
  5. Read the contents of the text file.
  6. Iterate over each date listed in the text file.
  7. Submit a U-SQL job for each stored procedure with the date passed from the list.
#Params...
$WhereAmI = $MyInvocation.MyCommand.Path.Replace($MyInvocation.MyCommand.Name,"")
 
$DLAnalyticsName = "myfirstdatalakeanalysis" 
$DLAnalyticsDoP = 10
$DLStoreName = "myfirstdatalakestore01"
 
 
#Create Azure Connection
Login-AzureRmAccount | Out-Null
 
$USQLFile = $WhereAmI + "RecursiveOutputPrep.usql"
$PrepOutput = $WhereAmI + "AmbulanceDataDateList.txt"
 
#Submit Job
$job = Submit-AzureRmDataLakeAnalyticsJob `
    -Name "GetDateList" `
    -AccountName $DLAnalyticsName `
    -ScriptPath $USQLFile `
    -DegreeOfParallelism $DLAnalyticsDoP
 
Write-Host "Submitted USQL prep job."
 
#Wait for job to complete
Wait-AdlJob -Account $DLAnalyticsName -JobId $job.JobId | Out-Null
 
Write-Host "Downloading USQL output file."
 
#Download date list
Export-AzureRmDataLakeStoreItem `
    -AccountName $DLStoreName `
    -Path "/output/AmbulanceDataDateList.txt" `
    -Destination $PrepOutput | Out-Null
 
Write-Host "Downloaded USQL output file."
 
#Read dates
$Dates = Get-Content $PrepOutput
 
Write-Host "Read date list."
 
#Loop over dates with proc call for each
ForEach ($Date in $Dates)
    {
    $USQLProcCall = '[dbo].[usp_OutputDailyAvgSpeed]("' + $Date + '");'
    $JobName = 'Output daily avg dataset for ' + $Date
 
    Write-Host $USQLProcCall
 
    $job = Submit-AzureRmDataLakeAnalyticsJob `
        -Name $JobName `
        -AccountName $DLAnalyticsName `
        -Script $USQLProcCall `
        -DegreeOfParallelism $DLAnalyticsDoP
 
    Write-Host "Job submitted for " $Date
    }
 
Write-Host "Script complete. USQL jobs running."

At this point I think it’s worth reminding you of my caveats above 🙂

I would like to point out the flexibility of the PowerShell cmdlet Submit-AzureRmDataLakeAnalyticsJob, which allows us to pass a U-SQL file (step 2, using the -ScriptPath switch) or build up a U-SQL string dynamically within the PowerShell script and pass that as the execution code (step 7, using the -Script switch). Very handy.

If all goes well you should have jobs being prepared and shortly after running to produce the daily output files.

I used 10 AUs for my jobs because I wanted to burn up some old Azure credits, but you can change this in the PowerShell variable $DLAnalyticsDoP.

Conclusion

It’s possible to achieve looping behaviour with U-SQL when we want to produce multiple output files, but only when we abstract the iterative behaviour away to our friend PowerShell.

Comments welcome on this approach.

Many thanks for reading.

 

Ps. To make life a little easier I’ve stuck all of the above code and sample data into a GitHub repository, to save you copying and pasting things from the code windows above.

https://github.com/mrpaulandrew/RecursiveU-SQLWithPowerShell

 

 


Calling U-SQL Stored Procedures with C# Code Behind

So friends, some more lessons learnt when developing with U-SQL and Azure Data Lake. I’ll try and keep this short.

Problem

You have a U-SQL stored procedure written and working fine within your Azure Data Lake Analytics service. But now you need to add some more business logic or something requiring a little C# magic. This is the main thing I love about U-SQL: having that C# code behind file where I can extend my normal SQL behaviour. So, being a happy little developer, you write your class and method to support the U-SQL and you recreate your stored procedure. Great!

Next, you try to run that stored procedure…

[ExampleDatabase].[dbo].[SimpleProc]();

But are hit with an error, similar to this:

E_CSC_USER_INVALIDCSHARP: C# error CS0103: The name ‘SomeNameSpaceForCodeBehind’ does not exist in the current context.


Why?

Submitting U-SQL queries containing C# code behind methods works fine normally. But once you wrap it up as a stored procedure within the ADL Analytics database the compiled C# is lost. Almost as if the U-SQL file/procedure no longer has its lovely code behind file at all!

Just to be explicit with the issue, here is an example stored procedure that I’ve modified from the Visual Studio U-SQL Sample Application project. Note my GetHelloWorld method that I’ve added just for demonstration purposes.

DROP PROCEDURE IF EXISTS [dbo].[SimpleProc];
 
CREATE PROCEDURE [dbo].[SimpleProc]()
AS
BEGIN
 
    @searchlog =
        EXTRACT UserId int,
                Start DateTime,
                Region string,
                Query string,
                Duration int?,
                Urls string,
                ClickedUrls string
        FROM "/Samples/Data/SearchLog.tsv"
        USING Extractors.Tsv();
 
    @WithCodeBehind =
        SELECT 
            *,
            SomeNameSpaceForCodeBehind.MyCodeBehind.GetHelloWorld() AS SomeText
        FROM @searchlog;
 
    OUTPUT @WithCodeBehind
    TO "/output/SearchLogResult1.csv"
    USING Outputters.Csv();
 
END;

This U-SQL file then has the following C#, with my totally original naming conventions. No trolls please, this is not the point of this post 🙂

namespace SomeNameSpaceForCodeBehind
{
    public class MyCodeBehind
    {
        static public string GetHelloWorld()
        {
            string text = "HelloWorld";
            return text;
        }
    }
}

So, this is what doesn’t work. Problem hopefully clearly defined.

Solution

To work around this problem, instead of using a C# code behind file for the procedure we need to move the class into its own assembly. This requires a little more effort and plumbing, but does solve the problem. Plus, this approach is probably more familiar to people who have worked with CLR functions in SQL Server and wanted to use them within a stored procedure.

This is what we need to do.

  • Add a C# class library to your Visual Studio solution and move the U-SQL code behind into a library namespace (a sketch of this class is shown after the new procedure below).

  • Build the library and use the DLL to create an assembly within the ADL Analytics database. The DLL can live in your ADL store root, be inlined, or be created from Azure Blob Storage. I have another post on that here if you’re interested.
CREATE ASSEMBLY IF NOT EXISTS [HelloWorld] FROM "assembly/ClassLibrary1.dll";
  • Finally, modify your stored procedure to use the assembly instead of the code behind name space. The new stored procedure should look like this.
DROP PROCEDURE IF EXISTS [dbo].[SimpleProc];
 
CREATE PROCEDURE [dbo].[SimpleProc]()
AS
BEGIN
 
    //Compiled library:
    REFERENCE ASSEMBLY [HelloWorld];
 
    @searchlog =
        EXTRACT UserId int,
                Start DateTime,
                Region string,
                Query string,
                Duration int?,
                Urls string,
                ClickedUrls string
        FROM "/Samples/Data/SearchLog.tsv"
        USING Extractors.Tsv();
 
    @WithCodeBehind =
        SELECT *,
               //Changed to use assembly:
               HelloWorld.ClassLibrary1.GetHelloWorld() AS SomeText
        FROM @searchlog;
 
    OUTPUT @WithCodeBehind
    TO "/output/SearchLogResult1.csv"
    USING Outputters.Csv();
 
END;

This new procedure executes without error and gets around the problem above.
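
For completeness, here is a minimal sketch of what the class library behind the [HelloWorld] assembly could contain. The namespace and class names below are assumptions inferred from the HelloWorld.ClassLibrary1.GetHelloWorld() call in the procedure above; align them with whatever your own library actually uses.

namespace HelloWorld
{
    //Same logic as the original code behind, now compiled into its own DLL.
    public class ClassLibrary1
    {
        public static string GetHelloWorld()
        {
            string text = "HelloWorld";
            return text;
        }
    }
}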

I hope this helps and allows you to convert those complex U-SQL scripts to procedures, while retaining any valuable code behind functionality.

Many thanks for reading

Using Azure Data Factory Configuration Files

Like most things developed, it’s very normal to have multiple environments for the same solution: dev, test, prod etc. Azure Data Factory is no exception to this. However, where it does differ slightly is the way it handles publishing to different environments from the Visual Studio tools provided. In this post we’ll explore exactly how to create Azure Data Factory (ADF) configuration files to support such deployments to different Azure services/directories.

For all the examples in this post I’ll be working with Visual Studio 2015 and the ADF extension available from the marketplace or via the link below.

https://marketplace.visualstudio.com/items?itemName=AzureDataFactory.MicrosoftAzureDataFactoryToolsforVisualStudio2015

Before we move on, let’s take a moment to say that Azure Data Factory configuration files are purely a Visual Studio feature. At publish time Visual Studio simply takes the config file content and replaces the actual JSON attribute values before deploying to Azure. That said, to be explicit:

  • An ADF JSON file with attribute values missing (because they come from config files) cannot be deployed using the Azure portal ‘Author and Deploy’ blade. It will simply fail validation due to the missing content.
  • An ADF JSON config file cannot be deployed using the Azure portal ‘Author and Deploy’ blade. It is simply not understood by the browser-based tool as a valid ADF schema.

Just as an aside: code comments in ADF JSON files are also purely a Visual Studio feature. You can comment your JSON in the usual way in Visual Studio, which at publish time will strip these out for you. Any comments left in code that you copy and paste into the Azure portal will return as syntax errors! I have already given feedback to the Microsoft product team that code comments in the portal blades would be really handy. But I digress.

Apologies in advance if I switch between the words publish and deploy too much. I mean the same thing. I prefer deploy, but in a Visual Studio ADF solution it’s called publish in the menu.

Creating an ADF Configuration File

First, let’s use the Visual Studio tools to create a common set of configuration files. In a new ADF project you have the familiar tree including Linked Services, Pipelines etc. Now right-click on the project and choose Add > New Item. In the dialogue presented, choose Config and add a Configuration File to the project with a suitable name.

I went to town and did a set of three 🙂

Each time you add a config file to your ADF project, or any component for that matter, Visual Studio tries to help you out by giving you a JSON template or starter for what you might want. This is good, but in the case of ADF config files it isn’t that intuitive. Hence this blog post. Let’s move on.

Populating the Configuration File

Before we do anything, let me attempt to put into words what we need to do here. Every JSON attribute has a reference of varying levels to get to its value. When we recreate a value in our config file we need to recreate this reference path exactly, from the top level of the component name. In the config file this goes in as a parent (at the same level as the schema) followed by square brackets [ ], which then contain the rest of the content we want to replace. Next, within the square brackets of the component, we need pairs of attributes (name and value). These represent the references to the actual component structure. In the ‘name’ value we start with a $. which represents the root of the component file. Then we build up the tree reference with a dot for each new level. Lastly, the value is as it says: just the value to be used instead of whatever may be written in the actual component file.

Make sense? Referencing JSON with JSON? I said it wasn’t intuitive. Let’s move on and see it.

Let’s populate our configuration files with something useful. What values you might want to switch between environments of course greatly depends on what your data factory is doing, but let’s start with a few common attributes. For this example let’s alter a pipeline’s schedule start, end and paused values. I always publish to dev as paused to give me more control over running the pipeline.

At the bottom of our pipeline component file I’ve done the following.

    //etc...
	//activities block
	],
    "start": "1900-01-01", /*<get from config file>*/
    "end": "1900-01-01", /*<get from config file>*/
    "isPaused": /*<get from config file>*/,
    "pipelineMode": "Scheduled"
  }
}

… Which means in my config file I need to create the equivalent set of attribute references and values. Note: the dollar for the root, then one level down into the properties namespace, then another dot before the attribute.

{
  "ExactNameOfYourPipeline": [ // <<< Component name. Exactly!
    {
      "name": "$.properties.isPaused",
      "value": true
    },
    {
      "name": "$.properties.start",
      "value": "2016-08-01"
    },
    {
      "name": "$.properties.end",
      "value": "2017-06-01"
    }
  ]
}

A great thing about this approach with the ADF tools in Visual Studio is that any attribute value can be overridden with something from a config file. It’s really flexible and each component can be added in the same way regardless of type (see the linked service example after the list below). There are, however, some quirks/features to be aware of, as below.

  • All parent and child name referencing within the config file must match its partner in the actual component JSON file exactly.
  • All referencing is case sensitive. But Visual Studio won’t validate this for you with IntelliSense or when building the project.
  • In the actual component file some attribute values can be left blank as they come from config. Others cannot and will result in the ADF project failing to build.
  • For any config referencing that fails, you’ll only figure it out when you publish and check the Azure portal, to find that the JSON file in place still has its original content. Fun.
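
To illustrate the same pattern against a different component type, here is a sketch of a config entry that overrides the connection string of an Azure SQL linked service. The component name and values are hypothetical placeholders; only the reference path pattern matters.

{
  "ExactNameOfYourLinkedService": [
    {
      "name": "$.properties.typeProperties.connectionString",
      "value": "Data Source=yourserver.database.windows.net;Initial Catalog=YourProdDb;User ID=YourUser;Password=YourPassword"
    }
  ]
}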

Right then. Hope that’s all clear as mud 🙂

Publishing using Different Configurations

Publishing is basically the easy bit, involving a wizard, so I don’t need to say much here.

Right click on the project in Visual Studio and choose Publish. In the publish items panel of the wizard simply select the config file you want to use for the deployment.

I hope this post is helpful and saves you some time when developing with ADF.

Many thanks for reading.


Writing a U-SQL Merge Statement

Unlike T-SQL, U-SQL does not currently support MERGE statements. Our friend that we have come to know and love since its introduction in SQL Server 2008. Not only that, but U-SQL also doesn’t currently support UPDATE statements either… I know… Open mouth emoji required! This immediately leads to the problem of change detection in our data and how, for example, we should handle the ingestion of a daily rolling 28-day TSV extract, requiring a complete year to date output. Well in this post we will solve that very problem.

Now, before we move on it’s worth pointing out that U-SQL is now our/my language of choice for working with big data in Azure using the Data Lake Analytics service. It’s not yet the way of things for our on-premises SQL Server databases, so relax. T-SQL or R are still our out-of-the-box tools there (SQL Server 2016). Also, if you want to take a step back from this slightly deeper U-SQL topic and find out ‘What is U-SQL’ first, I can recommend my purple amphibious colleague’s blog, link below.

https://www.purplefrogsystems.com/blog/2016/02/what-is-u-sql/

Assuming you’re comfortable with the U-SQL basics, let’s move on. For the below examples I’m working with Azure Data Lake (ADL) Analytics, deployed in my Azure MSDN subscription, although we can do everything here in the local Visual Studio emulator without the cloud service (very cool and handy for development). I also have the Visual Studio Data Lake tools for the service available and installed. Specifically for this topic I have created a ‘U-SQL Sample Application’ project to get us started. This is simply for ease of explanation and so you can get most of the setup code for what I’m doing here without any great difficulty. Visual Studio Data Lake tools download link below if needed.

https://www.microsoft.com/en-us/download/details.aspx?id=49504


Once we have this solution available, including its Ambulance and Search Log samples, please find where your Visual Studio Cloud Explorer panel (Ctrl + \, Ctrl + X) is hiding, as we’ll use this to access the local Data Lake Analytics database on your machine.

Database Objects

To get things rolling, open the U-SQL file from the sample project called ‘SearchLog-4-CreatingTable’ and execute (AKA ‘Submit’) it to run locally against your ADL Analytics instance. This gives us a database and target table to work with for the merge operation. It also inserts some tab-separated sample data into the table. If we don’t insert this initial dataset you’ll find joining to an empty table will prove troublesome.
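
If you don’t have the sample project to hand, the script does roughly the following. This is only a sketch; the exact index/distribution column and syntax may differ slightly in your version of the sample.

CREATE DATABASE IF NOT EXISTS SearchLogDemo;
USE DATABASE SearchLogDemo;

DROP TABLE IF EXISTS [dbo].[SearchLog];
CREATE TABLE [dbo].[SearchLog]
(
    [UserId] int,
    [Start] DateTime,
    [Region] string,
    [Query] string,
    [Duration] int?,
    [Urls] string,
    [ClickedUrls] string,
    INDEX sl_idx CLUSTERED ([UserId] ASC) DISTRIBUTED BY HASH ([UserId])
);

//Seed the table so the first merge run has something to join to.
@searchlog =
    EXTRACT UserId int,
            Start DateTime,
            Region string,
            Query string,
            Duration int?,
            Urls string,
            ClickedUrls string
    FROM "/Samples/Data/SearchLog.tsv"
    USING Extractors.Tsv();

INSERT INTO [dbo].[SearchLog]
SELECT * FROM @searchlog;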

Now, U-SQL is all about extracting and outputting at scale. There isn’t any syntax sugar to merge datasets. But do we really need the sugar? No. So, we are going to use a database table as our archive or holding area to ensure we get the desired ‘upsert’ behaviour, then write our MERGE statement longhand using a series of conventional joins. Not syntax sugar. Just good old-fashioned joining of datasets to get the old, the new and the changed.

Recap of the scenario; we have a daily rolling 28-day input, requiring a full year to date output.

Merging Data

Next open the U-SQL file from the sample project called ‘SearchLog-1-First_U-SQL_Script’. This is a reasonable template to adapt as it contains the EXTRACT and OUTPUT code blocks already.

For the MERGE we next need a set of three SELECT statements joining both the EXTRACT (new/changed data) and table (old data) together. These are as follows, in mostly English type syntax first 🙂

  • For the UPDATE, we’ll do an INNER JOIN. Table to file. Taking fields from the EXTRACT only.
  • For the INSERT, we’ll do a LEFT OUTER JOIN. File to table. Taking fields from the EXTRACT where NULL in the table.
  • To retain old data, we’ll do a RIGHT OUTER JOIN. File to table. Taking fields from the table where NULL in the file.

Each of the three SELECT statements can then have UNION ALL conditions between them to form a complete dataset including any changed values, new values and old values loaded by a previous file. This is the code you’ll want to add for the example in your open file between the extract and output code blocks. Please don’t just copy and paste without understanding it.

@AllData =
    --update current
    SELECT e1.[UserId],
           e1.[Start],
           e1.[Region],
           e1.[Query],
           e1.[Duration],
           e1.[Urls],
           e1.[ClickedUrls]
    FROM [SearchLogDemo].[dbo].[SearchLog] AS t1
         INNER JOIN
             @searchlog AS e1
         ON t1.[UserId] == e1.[UserId]
 
    UNION ALL
 
    --insert new
    SELECT e2.[UserId],
           e2.[Start],
           e2.[Region],
           e2.[Query],
           e2.[Duration],
           e2.[Urls],
           e2.[ClickedUrls]
    FROM @searchlog AS e2
         LEFT OUTER JOIN
             [SearchLogDemo].[dbo].[SearchLog] AS t2
         ON t2.[UserId] == e2.[UserId]
    WHERE
    t2.[UserId] IS NULL
 
    UNION ALL
 
    --keep existing
    SELECT t3.[UserId],
           t3.[Start],
           t3.[Region],
           t3.[Query],
           t3.[Duration],
           t3.[Urls],
           t3.[ClickedUrls]
    FROM @searchlog AS e3
         RIGHT OUTER JOIN
             [SearchLogDemo].[dbo].[SearchLog] AS t3
         ON t3.[UserId] == e3.[UserId]
    WHERE
    e3.[UserId] IS NULL;

This union of data can then OUTPUT to our usable destination doing what U-SQL does well before resetting our ADL Analytics table for the next load. By reset, I mean TRUNCATE the table and INSERT everything from @AllData back into it. This preserves our history/our old data and allows the MERGE behaviour to work again and again using only SELECT statements.

Replacing the OUTPUT variable from @searchlog, you’ll then want to add the following code below the three SELECT statements.

OUTPUT @AllData
TO "/output/SearchLogAllData.csv"
USING Outputters.Csv();
 
TRUNCATE TABLE [SearchLogDemo].[dbo].[SearchLog];
 
INSERT INTO [SearchLogDemo].[dbo].[SearchLog]
(
    [UserId],
    [Start],
    [Region],
    [Query],
    [Duration],
    [Urls],
    [ClickedUrls]
)
SELECT [UserId],
       [Start],
       [Region],
       [Query],
       [Duration],
       [Urls],
       [ClickedUrls]
FROM @AllData;

If all goes well you can edit the ‘SearchLog.tsv’ file, removing and changing data, and keep rerunning the U-SQL script performing the MERGE behaviour. Please test away. Don’t just believe me that it works. As a bonus you get this pretty job diagram too…


The only caveat here is that we can’t deal with deletion detection from the source file… unless we do something a little more complex for the current loading period. Let’s save that for a later blog post.

A couple of follow-up general tips:

  • Have a USE condition at the top of your scripts to ensure you hit the correct database (example below the list). Just like T-SQL.
  • If you’re struggling for fields to join on because you don’t have a primary key, you could use UNION instead of UNION ALL. But this of course takes more effort to work out the distinct values. Just like T-SQL.
  • Be careful with C# data types and case sensitivity. U-SQL is not as casual as T-SQL with such things.
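
For the first tip, that’s simply a one-liner at the top of the merge script, assuming the sample database name used above:

USE DATABASE [SearchLogDemo];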

That’s it. U-SQL merge behaviour achieved. I guess the bigger lesson here for the many T-SQL people out there is: don’t forget the basics, it’s still a structured query language. Syntax sugar is sweet, but not essential.

Hope this was helpful.

Many thanks for reading


Azure Data Lake Authentication from Azure Data Factory

To set the scene for the title of this blog post, let’s firstly think about other services within Azure. You’ll probably already know that most services deployed require authentication via some form of connection string and generated key. These keys can be granted various levels of access and also recycled as required, for example an IoT Event Hub seen below (my favourite service to play with).


Then we have other services like SQLDB that require user credentials to authenticate, as we would expect from the on-premises version of the product. Finally we have a few other Azure services that handle authentication in a very different way altogether, requiring user credentials initially and then giving us session and token keys to be used by callers. These session and token keys are a lot more fragile than connection strings and can expire or become invalid if the calling service gets rebuilt or redeployed.

In this blog post I’ll explore and demonstrate specifically how we handle session and token based authentication for Azure Data Lake (ADL), firstly when calling it as a Linked Service from Azure Data Factory (ADF), then secondly within ADF custom activities. The latter of these two ADF based operations becomes a little more difficult because the .Net code created and compiled is unfortunately treated as a distant relative to ADF, requiring its own authentication to ADL storage as an Azure Application. To further clarify, a Custom Activity in ADF does not inherit its authorising credentials from the parent Linked Service; it is responsible for its own session/token. Why? Because, as you may know from reading my previous blog post, Custom Activities get compiled and executed by an Azure Batch Service, meaning the compute for the .Net code is very much detached from ADF.

At this point I should mention that this applies to Data Lake Analytics and Data Lake Storage. Both require the same approach to authentication.

Data Lake as a Service Within Data Factory

The easy one first: adding an Azure Data Lake service to your Data Factory pipeline. From the Azure portal, within the ADF Author and Deploy blade you simply add a new Data Lake Linked Service, which returns a JSON template for the operation into the right-hand panel. Then we kindly get provided with an Authorize button (spelt wrong) at the top of the code block.

Clicking this will pop up the standard Microsoft login screen requesting work or personal user details etc. Upon completion of successful authentication, your Azure subscription will be inspected. If more than one applicable service exists, you’ll of course need to select which you require authorisation for. But once done you’ll return to the JSON template, now with a completed Authorization value and SessionId.

Just for information, and to give you some idea of the differences in this type of authorisation compared to other Azure services: when I performed this task for the purpose of creating screenshots for this post, the resulting Authorization URL was 1219 characters long and the returned SessionId was 1100! Or half a page of a standard Word document each. By comparison, an IoT Hub key is only 44 characters. Furthermore, the two values are customised to the service that requested them and can only be used within the context where they were created.

For completeness, because we can also now develop ADF pipelines from Visual Studio it’s worth knowing that a similar operation is now available as part of the Data Factory extension. In Visual Studio within your ADF project on the Linked Service branch you are able to Right Click > Add > New Item and choose Data Lake Store or Analytics. You’ll then be taken through a wizard (similar in look to that of the ADF deployment wizard) which requests user details, the ADF context and returns the same JSON template with populated authorising values.


A couple of follow up tips and lessons learnt here:

  • If you tell Visual Studio to reverse engineer your ADF pipeline from a currently deployed Azure factory where an existing ADL token and session ID are available, these will not be brought into Visual Studio and you’ll need to authorise the service again.
  • If you copy an ADL JSON template from the Azure portal ‘Author and Deploy’ area Visual Studio will not popup the wizard to authorise the service and you’ll need to do it again.
  • If you delete the ADL Linked Service within the portal ‘Author and Deploy’ area, the same Linked Service tokens in Visual Studio will become invalid and you’ll need to authorise the service again.
  • If you sneeze too loudly while Visual Studio is open you’ll need to authorise the service again.

Do you see now why I said earlier that the authorisation method is fragile? Very sophisticated, but fragile when chopping and changing things during development.

What you may find yourself doing fairly frequently is:

  1. Deploying an ADF project from Visual Studio.
  2. The deployment wizard fails, telling you the ADL tokens have expired or are no longer authorised.
  3. Adding a new Linked Service to the project just to get the user authentication wizard popup.
  4. Then copying the new token and session values into the existing ADL Linked Service JSON file.
  5. Then excluding from the Visual Studio project the new service you created just to re-authorise.

Fun! Moving on.

Update: you can use an Azure AD service principal to authenticate both Azure Data Lake Store and Azure Data Lake Analytics services from ADF. Details are included in this post: https://docs.microsoft.com/en-gb/azure/data-factory/v1/data-factory-azure-datalake-connector#azure-data-lake-store-linked-service-properties
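
Based on the linked documentation, a Data Lake Store linked service authenticated with a service principal looks roughly like the sketch below. All values are placeholders and the exact property set should be checked against the ADF version you are using.

{
    "name": "AzureDataLakeStoreLinkedService",
    "properties": {
        "type": "AzureDataLakeStore",
        "typeProperties": {
            "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": "<service principal key>",
            "tenant": "<tenant name or id>",
            "subscriptionId": "<subscription of the ADL account>",
            "resourceGroupName": "<resource group of the ADL account>"
        }
    }
}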

Data Factory Custom Activity Calling Data Lake

Next the slightly more difficult way to authenticate against ADL, using an ADF .Net Custom Activity. As mentioned previously the .Net code once sent to Azure as a DLL is treated as a third party application requiring its own credentials.

The easiest way I’ve found to get this working is firstly to use PowerShell to register the application in Azure, which, using the correct cmdlets, returns an application GUID and password that when combined give the .Net code its credentials. Here’s the PowerShell you’ll need below. Be sure to run this with elevated permissions locally.

# Sign in to Azure.
Add-AzureRmAccount
 
#Set this variables
$appName = "SomeNameThatYouWillRegoniseInThePortal"
$uri = "AValidURIAlthoughNotApplicableForThis"
$secret = "SomePasswordForTheApplication"
 
# Create a AAD app
$azureAdApplication = New-AzureRmADApplication `
    -DisplayName $appName `
    -HomePage $Uri `
    -IdentifierUris $Uri `
    -Password $secret
 
# Create a Service Principal for the app
$svcprincipal = New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
 
# To avoid a PrincipalNotFound error, I pause here for 15 seconds.
Start-Sleep -s 15
 
# If you still get a PrincipalNotFound error, then rerun the following until successful. 
$roleassignment = New-AzureRmRoleAssignment `
    -RoleDefinitionName Contributor `
    -ServicePrincipalName $azureAdApplication.ApplicationId.Guid
 
# The stuff you want:
 
Write-Output "Copy these values into the C# sample app"
 
Write-Output "_subscriptionId:" (Get-AzureRmContext).Subscription.SubscriptionId
Write-Output "_tenantId:" (Get-AzureRmContext).Tenant.TenantId
Write-Output "_applicationId:" $azureAdApplication.ApplicationId.Guid
Write-Output "_applicationSecret:" $secret
Write-Output "_environmentName:" (Get-AzureRmContext).Environment.Name

My recommendation here is to take the returned values and store them in something like the class library settings, available from the Visual Studio project properties. Don’t store them as constants at the top of your class, as it’s highly likely you’ll need them multiple times.

Next, what to do with the application GUID etc. Well, your Custom Activity C# will need something like the following. Apologies for dumping massive code blocks into this post, but you will need all of this in your class if you want to use details from your ADF service and work with ADL files.

class SomeCustomActivity : IDotNetActivity
{
	//Get credentials for app
	string domainName = Settings.Default.AzureDomainName;
	string appId = Settings.Default.ExcelExtractorAppId; //From PowerShell <<<<<
	string appPass = Settings.Default.ExceExtractorAppPass; //From PowerShell <<<<<
	string appName = Settings.Default.ExceExtractorAppName; //From PowerShell <<<<<
 
	private static DataLakeStoreFileSystemManagementClient adlsFileSystemClient;
	//and or:
	private static DataLakeStoreAccountManagementClient adlsAccountManagerClient;
 
	public IDictionary<string, string> Execute(
		IEnumerable<LinkedService> linkedServices,
		IEnumerable<Dataset> datasets,
		Activity activity,
		IActivityLogger logger)
	{
		//Get linked service details from Data Factory
		Dataset inputDataset = new Dataset();
		inputDataset = datasets.Single(dataset =>
			dataset.Name == activity.Inputs.Single().Name);
 
		AzureDataLakeStoreLinkedService inputLinkedService;
 
		inputLinkedService = linkedServices.First(
			linkedService =>
			linkedService.Name ==
			inputDataset.Properties.LinkedServiceName).Properties.TypeProperties
			as AzureDataLakeStoreLinkedService;
 
		//Get account name for data lake and create credentials for app
		var creds = AuthenticateAzure(domainName, appId, appPass);
		string accountName = inputLinkedService.AccountName;
 
		//Authorise new instance of Data Lake Store
		adlsFileSystemClient = new DataLakeStoreFileSystemManagementClient(creds);
 
		/*
			DO STUFF...
 
			using (Stream input = adlsFileSystemClient.FileSystem.Open
				(accountName, completeInputPath)
				)
		*/	
 
 
		return new Dictionary<string, string>();
	}
 
 
	private static ServiceClientCredentials AuthenticateAzure
		(string domainName, string clientID, string clientSecret)
	{
		SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
 
		var clientCredential = new ClientCredential(clientID, clientSecret);
		return ApplicationTokenProvider.LoginSilentAsync(domainName, clientCredential).Result;
	}
}

Finally, before you execute anything be sure to grant the Azure app permissions to the respective Data Lake service. In the case of the Data Lake Store, from the portal you can use the Data Explorer blades to assign folder permissions.
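
If you prefer to script the permissions as well, something like the sketch below should work for the Data Lake Store side. It assumes the account and app names used earlier in this post and grants the service principal full access to the root folder; tighten the path and permissions to suit.

#Grant the registered app access to the Data Lake Store (sketch)
$svcprincipal = Get-AzureRmADServicePrincipal -SearchString "SomeNameThatYouWillRecogniseInThePortal"

Set-AzureRmDataLakeStoreItemAclEntry `
    -AccountName "myfirstdatalakestore01" `
    -Path "/" `
    -AceType User `
    -Id $svcprincipal.Id `
    -Permissions All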


I really hope this post has saved you some time in figuring out how to authorise Data Lake services from Data Factory. Especially when developing beyond what the ADF Copy Wizard gives you.

Many thanks for reading.


Creating Azure Data Factory Custom Activities

When creating an Azure Data Factory (ADF) solution you’ll quickly find that currently its connectors are pretty limited to just other Azure services, and the T within ETL (Extract, Transform, Load) is completely missing altogether. In these situations where other functionality is required we need to rely on the extensibility of Custom Activities. A Custom Activity allows the use of .Net programming within your ADF pipeline. However, getting such an activity set up can be tricky and requires a fair bit of messing about. In this post I hope to get you started with all the basic plumbing needed to use the ADF Custom Activity component.

Visual Studio

Firstly, we need to get the Azure Data Factory tools for Visual Studio, available via the link below. This makes the process of developing custom activities and ADF pipelines a little bit easier compared to doing all the development work in the Azure portal. But be warned: because this stuff is still fairly new there are some pain points/quirks to overcome, which I’ll point out.

https://visualstudiogallery.msdn.microsoft.com/371a4cf9-0093-40fa-b7dd-be3c74f49005

Once you have this extension available in Visual Studio, create yourself a new solution with two projects: a Data Factory project and a C# Class Library. You can of course use VB if you prefer.

Azure Services

Next, like the Visual Studio section above this is really a set of prerequisites for making the ADF custom activity work. Assuming you already have an ADF service running in your Azure subscription you’ll also need:

  • Azure Batch Service (ABS) – this acts as the compute for your C# called by the ADF custom activity. The ABS is a strange service, as you’ll find when you spin one up. Under the hood it’s basically a virtual machine requiring CPU, RAM and an operating system, which you have to choose when deploying it (Windows or Linux available). But none of the graphical interface is available to use in a typical way; there is no RDP access to the Windows server below. Instead you give the service a compute Pool, where you need to assign CPU cores. The pool in turn has Tasks created in it by the calling services. Sadly, because ADF is just for orchestration, we need this virtual machine style glue and compute layer to handle our compiled C#.
  • Azure Storage Account (ASC) – this is required to house your compiled C# in its binary .DLL form. As you’ll see further down, this actually gets zipped up as well with all its supporting packages. It would be nice if the ABS allowed access to the OS storage for this, but no such luck I’m afraid.

At this point, if you’re doing this for the first time you’ll probably be thinking the same as me… Why on earth do I need all this extra engineering? What are these additional services going to cost? And, why can I not simply inline my C# in the ADF JSON pipeline and get it to handle the execution?

Well, I have voiced these very questions to the Microsoft Azure Research team and the Microsoft Tiger Team. The only rational answer is to keep ADF as a dumb orchestrator that simply runs other services. Which would be fine if it didn’t need this extensibility to do such simple things. This then leads into the argument about ADF being designed for data transformation. Should it just be for E and L, not T?

Let’s bottle up these frustrations for another day before this blog post turns into a rant!

C# Class Library

Moving on. Now for those of you that have ever read my posts before you’ll know that I don’t claim to be a C# expert. Well today is no exception! Expect fluffy descriptions in the next bit 🙂

First, in your class project let’s add the NuGet packages and references you’ll need for the library project to work with ADF. Using the Package Manager Console (Visual Studio > Tools > NuGet Package Manager > Package Manager Console), run the following installation lines to add all your required references.

Install-Package Microsoft.Azure.Management.DataFactories
Install-Package Azure.Storage

Next the fun bit. Whatever class name you decide to use, it will need to implement IDotNetActivity, which is the interface used at runtime by ADF. Then within your new class you need to create an IDictionary method called Execute. It is this method that will be run by the ABS when called from ADF.

Within the Execute method, extended properties and details about the datasets and services on each side of the custom activity pipeline can be accessed. Here is the minimum you’ll need to connect the dots between ADF and your C#.

using System;
using System.Collections.Generic;
using System.Linq;
 
using Microsoft.Azure;
using Microsoft.Azure.Management.DataFactories.Models;
using Microsoft.Azure.Management.DataFactories.Runtime;
 
namespace ClassLibrary1
{
    public class Class1 : IDotNetActivity
    {
        public IDictionary<string, string> Execute(
                IEnumerable<LinkedService> linkedServices,
                IEnumerable<Dataset> datasets,
                Activity activity,
                IActivityLogger logger)
        {
            logger.Write("Start");
 
            //Get extended properties
            DotNetActivity dotNetActivityPipeline = (DotNetActivity)activity.TypeProperties;
 
            string sliceStartString = dotNetActivityPipeline.ExtendedProperties["SliceStart"];
 
            //Get linked service details
            Dataset inputDataset = datasets.Single(dataset => dataset.Name == activity.Inputs.Single().Name);
            Dataset outputDataset = datasets.Single(dataset => dataset.Name == activity.Outputs.Single().Name);
 
            /*
                DO STUFF
            */
 
            logger.Write("End");
 
            return new Dictionary<string, string>();
        }
    }
}

How you use the declared datasets will greatly depend on the linked services you have in and out of the pipeline. You’ll notice that I’ve also called the IActivityLogger using the Write method to make user log entries. I’ll show you later where this gets written to from the Azure portal.

I appreciate that the above code block isn’t actually doing anything and that it’s probably just raised another load of questions. Patience, more blog posts are coming! Depending on what other Azure services you want your C# class to use, next we’ll have to think about registering it as an Azure app so the compiled program can authenticate against other components. Sorry, but that’s for another time.

The last and most important thing to do here is add a reference to the C# class library in your ADF project. This is critical for a smooth deployment of the solution and compiled C#.

Data Factory

Within your new or existing ADF project you’ll need to add a couple of things, specifically for the custom activity. I’m going to assume you have some datasets/data tables defined for the pipeline input and output.

Linked services first, corresponding to the above and what you should now have deployed in the Azure portal:

  • Azure Batch Linked Service – I would like to say that when presented with the JSON template for the ABS, filling in the gaps is pretty intuitive for even the most non-technical peeps amongst us. However, the names and descriptions are wrong within the typeProperties component! Here’s my version below with corrections and elaborations on the standard Visual Studio template. Please extend your sympathies for the pain it took me to figure out where the values don’t match the attribute tags!
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/
              Microsoft.DataFactory.LinkedService.json",
    "name": "AzureBatchLinkedService1",
    "properties": {
        "type": "AzureBatch",
      "typeProperties": {
        "accountName": "<Azure Batch account name>",
        //Fine - get it from the portal, under service properties.
 
        "accessKey": "<Azure Batch account key>",
        //Fine -  get it from the portal, under service properties.
 
        "poolName": "<Azure Batch pool name>",
        //WRONG - this actually needs to be the pool ID
        //that you defined when you deployed the service.
        //Using the Pool Name will error during deployment.
 
        "batchUri": "<Azure Batch uri>",
        //PARTLY WRONG - this does need to be the full URI that you
        //get from the portal. You need to exclude the batch
        //account name. So just something like https://northeurope.batch.azure.com
        //depending on your region.
        //With the full URI you'll get a message that the service can't be found!
 
        "linkedServiceName": "<Specify associated storage linked service reference here>"
        //Fine - as defined in your Data Factory. Not the storage
        //account name from the portal.
      }
    }
}
  • Azure Storage Linked Service – the JSON template here is OK to trust. It only requires the connection string for your blob store, which can be retrieved from the Azure portal and inserted in full. Nice simple authentication. A sketch is shown below.
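
For reference, a minimal storage linked service looks roughly like this. The name, account and key are placeholders.

{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.LinkedService.json",
  "name": "StorageLinkedService1",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your storage key>"
    }
  }
}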

Once we have the linked services in place, let’s add the pipeline. It’s worth noting that by pipeline I mean the ADF component that houses our activities. A pipeline is not the entire ADF end-to-end solution in this context; many people incorrectly use it as a broad term for all things ADF.

  • Dot Net Activity – here we need to give ADF all the bits it needs to go away and execute our C#, which is again defined in the typeProperties. Below is a JSON snippet of just the typeProperties block that I’ve commented on to go into more detail about each attribute.
"typeProperties": {
  "assemblyName": "<Name of the output DLL to be used by the activity. e.g: MyDotNetActivity.dll>",
  //Once your C# class library has been built the DLL name will come from the name of the
  //project in Visual Studio by default. You can also change this in the project properties
  //if you wish.
 
  "entryPoint": "<Namespace and name of the class that implements the IDotNetActivity interface e.g: MyDotNetActivityNS.MyDotNetActivity>",
  //This needs to include the namespace as well as the class. Which is what the default is
  //alluding to where the dot separation is used. Typically your namespace will be inherited
  //from the project default. You might override this to be the CS filename though so be careful.
 
  "packageLinkedService": "<Name of the linked service that refers to the blob that contains the zip file>",
  //Just to the clear. Your storage account linked service name.
 
  "packageFile": "<Location and name of the zip file that was uploaded to the Azure blob storage e.g: customactivitycontainer/MyDotNetActivity.zip>"
  //Here's the ZIP file. If you haven't already you'll need to create a container in your
  //storage account under blobs. Reference that here. The ZIP filename will be the same
  //as the DLL file name. Don't worry about where the ZIP file gets created just yet.
}

By now you should have a solution that looks something like the Solution Explorer panel on the right. In mine I’ve kept all the default naming conventions for ease of understanding.

Deployment Time

If you have all the glue in place you can now right-click on your ADF project and select Publish. This launches a wizard which takes you through the deployment process. Again, I’ve made an assumption here that you are logged into Visual Studio with the correct credentials for your Azure subscription. The wizard will guide you through where the ADF project is going to be deployed; it will also validate the JSON content before sending it up, and it will detect whether files in the target ADF service can be deleted.

With the reference in place to the C# class library the deployment wizard will detect the project dependency and zip up the compiled DLLs from your bin folder and upload them into the blob storage linked service referenced in the activity pipeline.

Sadly there is no local testing available for this lot and we just have to develop by trial/deploy/run and error.


Runtime

To help with debugging from the portal, if you go to the ADF Monitor & Manage area you should have your pipeline displayed. Clicking on the custom activity block will reveal the log files in the right-hand panel. The first is the default system stack trace and the other is anything written out by the C# logger.Write call(s). These will become your new best friend when trying to figure out what isn’t working.

Of course you don’t need to perform a full publish of the ADF project every time if you’re only developing the C# code. Simply build the solution and upload a new ZIP file to your blob storage account using something like Microsoft Azure Storage Explorer, or script the upload as sketched below. Then rerun the time slice for the output dataset.
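
A rough PowerShell sketch of that upload, assuming the container and package names used in the typeProperties example above and placeholder storage credentials and file path:

#Push a rebuilt custom activity package to blob storage
$ctx = New-AzureStorageContext `
    -StorageAccountName "<your storage account>" `
    -StorageAccountKey "<your storage key>"

Set-AzureStorageBlobContent `
    -File "C:\Source\MyDotNetActivity\bin\Debug\MyDotNetActivity.zip" `
    -Container "customactivitycontainer" `
    -Blob "MyDotNetActivity.zip" `
    -Context $ctx `
    -Force
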
If nothing appears to be happening you may also want to check on your ABS to ensure tasks are being created from ADF. If you haven’t assigned the compute pool any CPU cores it will just sit there and your ADF pipeline activity will time out with no errors and no clues as to what might have gone wrong. Trust me, I’ve been there too.
I hope this post was helpful and gave you a steer as to the requirements for extending your existing ADF solutions with a .Net custom activity.

Many thanks for reading.

Changing the Start Up App on Windows 10 IoT Core

Changing the start-up application on your IoT device running Windows IoT Core seems to be a common requirement once the device is out in the field, so I hope this post exploring a few different ways of doing this will be of value to people.

To clarify, the behaviour we will achieve below is this: from device power on, our Windows 10 IoT Core operating system will boot up, but instead of starting the default application (originally named ‘IoTCoreDefaultApp’) it will load our own custom-created UWP app executable file. I will not talk about deploying your UWP app to the device; the assumption here is that this has already been done and a compiled .EXE file is living on the device’s local storage.

Application & Package Name

Before diving into any changes it’s worth taking a moment to figure out what our UWP application is called. This might seem obvious, but depending on how you make this change (via the web portal or PowerShell) it’s referred to by different names. Both are configured in Visual Studio under the Package.AppxManifest and do not have to be the same. See below with screenshots from a new vanilla UWP project.

  • Firstly, the application name, which is the friendly display name given to the solution on creation and to the EXE file presented in the settings web portal.


 

  • Secondly, the package name, which is the not-so-friendly value (defaulted to a GUID) used as a prefix for the package family name. This is what PowerShell uses and it needs to be exact.


Assuming you’ve navigated that minefield and you know what your UWP app is called depending on the context where it’s used, let’s move on.

Headed vs Headless

Sorry, a tiny bit more detail to gain an appreciation of before we actually do this. Start-up apps here can take two different forms: headed or headless. The difference between them is best thought of as running in the foreground or running in the background, with foreground apps being visible via the device’s display output.

  • Headed = foreground. Visible via display.
  • Headless = background. Not visible, like a Windows service. Also you have no head 🙂

Choose as required and be mindful that you can only have one of each running.

Web Portal

OK, the easy way to make this change and have your app start up post OS boot: navigate to your device’s IP address and authenticate against the web portal. Here, in the Apps section, you’ll find anything currently deployed to your Windows IoT Core system. Under the Startup field in the Apps table, simply click the application you now want to start after the operating system has booted up, using the Set as Default App link.


At the point of making this selection the table content will refresh to reflect the change and the application will also start running.

You can now restart your device and your app will load without any further intervention.

PowerShell

Given the ease of the above, why would you want to use PowerShell, Paul? Well, if you’ve got to do this en masse for 30 devices you’ll want to script things. Or, as I found out when Windows 10 IoT Core was still in preview for the Raspberry Pi 3, the Apps panel in the web portal above had a bug and just didn’t display anything!

Moving on, create yourself a remote management session to your IoT device. If you’re unfamiliar with doing this, the IoT Dashboard can help. Check out my previous blog post Controlling Your Windows 10 IoT Core Device for guidance.

Within the remote management session you’ll need the command IoTStartUp. When using a new command for the first time I always like to add the help switch /? to check out what it can do.

Next work through the following commands in your PowerShell session.

## 1.
## Review the list of apps currently available on your device.
IoTStartUp List
## If your expected app isn't there then it hasn't been deployed to the device.
 
## 2.
## Copy the package family name from the list or from Visual Studio if you still have it open.
 
## 3.
## Set the app as headed using the following command.
IoTStartUp Add Headed YOURAPPFAMILYNAME
 
## 4.
## To confirm which apps are now set for startup use:
IoTStartUp Startup

Here’s the same thing in a PowerShell window just to be clear.


Again to prove this you can now restart the device. From PowerShell use shutdown -r.

Restoring the Start Up Default

To change this back to the default application, either repeat the above PowerShell steps for the package family name IoTCoreDefaultApp_1w720vyc4ccym!App, or in the web portal set the app name IoTCoreDefaultApp as the default.
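
In the remote PowerShell session that is simply:

## Restore the default start up app using its package family name.
IoTStartUp Add Headed IoTCoreDefaultApp_1w720vyc4ccym!App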

Many thanks for reading


Paul’s Frog Blog

Paul is a Microsoft Data Platform MVP with 10+ years’ experience working with the complete on-premises SQL Server stack in a variety of roles and industries. Now, as the Business Intelligence Consultant at Purple Frog Systems, he has turned his keyboard to big data solutions in the Microsoft cloud, specialising in Azure Data Lake Analytics, Azure Data Factory, Azure Stream Analytics, Event Hubs and IoT. Paul is also a STEM Ambassador for the networking education in schools’ programme, PASS chapter leader for the Microsoft Data Platform Group – Birmingham, and a SQL Bits, SQL Relay and SQL Saturday speaker and helper. He is currently the Stack Overflow top user for Azure Data Factory, as well as a very active member of the technical community.
Thanks for visiting.
@mrpaulandrew