
Azure SSIS – How to Setup, Deploy, Execute & Schedule Packages

Welcome back to work in 2018! 🙂

Let’s get stuck in with a hot topic. How do we actually use our beloved SQL Server Integration Services (SSIS) packages in Azure with all this new platform as a service (PaaS) stuff? Well, in this post I’m going to go through it end to end.


First, some caveats:

  1. Several of the Azure components required for this are still in public preview and can be considered as ‘not finished’. Meaning this is going to seem a little painful.
  2. The ADFv2 developer UI is still in private preview. But I’ve cheated and used it to generate the JSON to help you guys. Hopefully it’ll be available publicly soon.
  3. I’ve casually used my Microsoft sponsored Azure subscription and not had to worry about the cost of these services. I advise you check with the bill payer.
  4. Everything below has been done in a deliberate order. Especially the service setup.
  5. Everything below has been deployed in the same Azure region to avoid any cross data centre authentication unpleasantness. I suggest doing the same. I used EastUS for this post.

Ok, moving on…

Azure Services Setup

Now, let's set some expectations. To get our SSIS packages in Azure we need a collection of services. When working on premises this gets neatly wrapped up with a pretty bow into something called SQL Server. Sadly in Azure there is no wrapping, no pretty bow and nothing that neat. Yet!

Azure Data Factory Version 2 (ADFv2)

First up, my friend Azure Data Factory. As you'll probably already know, now in version 2 it has the ability to create recurring schedules and to house the thing we need to execute our SSIS packages, called the Integration Runtime (IR). Without ADF we don't get the IR and can't execute the SSIS packages. My hope would be that the IR would be a standalone service, but for now it's contained within ADF.

To deploy the service we can simply use the Azure portal blades. Whatever location you choose here make sure you use the same location for everything that follows. Just for ease. Also, it might be worth looking ahead to ensure everything you want is actually available in your preferred Azure region.

Let's park that service and move on.

Azure SQL Server Instance

Next, we need a logical SQL Server instance to house the SSIS database. Typically you deploy one of these when you create a normal Azure SQLDB (without realising), but they can be created on their own without any databases attached. To be clear, this is not an Azure SQL Server Managed Instance. It does not have a SQL Agent and is just the endpoint we connect to and authenticate against with some SQL credentials.

Again, to deploy the service we can simply use the Azure portal blades. On this one, make sure the 'Allow azure services to access server' box is checked (highlighted with the orange arrow below) and of course make a note of the user name and password. If you don't check the box ADF will not be able to create the SSISDB in the logical instance later on.
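
If you'd rather script this step too, the logical server can be created with the AzureRM.Sql cmdlets. A rough sketch, where the resource group, server name and location are placeholders and you'll be prompted for the admin credentials:

# Sketch: create the logical SQL Server instance with PowerShell instead of the portal.
# Resource group, server name and location below are placeholders.
$sqlAdminCreds = Get-Credential -Message "SQL admin user name and password"

New-AzureRmSqlServer `
    -ResourceGroupName "AzureSSIS" `
    -ServerName "myssisdbserver" `
    -Location "East US" `
    -SqlAdministratorCredentials $sqlAdminCreds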

Once the SQL instance service is deployed, go into the service blades and update the firewall rules to allow access from your current external IP address. This isn't anything specifically required for SSIS; you need to do it for any SQLDB connection. It's something I always forget, so I'm telling you to help me remember! Thanks.
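
This can also be scripted if you prefer. A sketch using the AzureRM.Sql cmdlets with placeholder names; the second rule is the scripted equivalent of ticking the 'Allow azure services to access server' box mentioned above:

# Sketch: open the logical server firewall to your current external IP and to Azure services.
# Resource group and server names are placeholders.
$myIp = Invoke-RestMethod -Uri "https://api.ipify.org"   # one simple way to find your current external IP

New-AzureRmSqlServerFirewallRule `
    -ResourceGroupName "AzureSSIS" `
    -ServerName "myssisdbserver" `
    -FirewallRuleName "ClientIP" `
    -StartIpAddress $myIp `
    -EndIpAddress $myIp

# Scripted equivalent of the 'Allow azure services to access server' checkbox.
New-AzureRmSqlServerFirewallRule `
    -ResourceGroupName "AzureSSIS" `
    -ServerName "myssisdbserver" `
    -AllowAllAzureIPs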


Azure SSIS IR

Next on the list we need the shiny new thing, the SSIS IR, which needs creating and then starting up. In my opinion this is a copy of the SQL Server MsDtsSrvr.exe taken from the on premises product and run in the cloud on a VM that we don't get access to… Under the covers it probably is, but I'm guessing.

Sadly we don't have any nice Azure portal user interface for this yet. It's going to need some PowerShell. Make sure you have your Azure modules up to date, then run the main script below with the top set of variables assigned as required.

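If your modules do need bringing up to date first, something like this from an elevated session should cover it (assuming you take the AzureRM module from the PowerShell Gallery). The main SSIS IR script then follows.

# Sketch: check and update the AzureRM module before running the SSIS IR script below.
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version

Install-Module -Name AzureRM -AllowClobber   # first time install
Update-Module -Name AzureRM                  # or refresh an existing install
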
# Azure Data Factory version 2 information:
$SubscriptionId = ""
$ResourceGroupName = ""
$DataFactoryName = "" 
$DataFactoryLocation = ""
 
# Azure-SSIS integration runtime information:
$AzureSSISName = ""
$AzureSSISDescription = ""
 
$AzureSSISNodeSize = "Standard_A4_v2"
$AzureSSISNodeNumber = 2 
$AzureSSISMaxParallelExecutionsPerNode = 2 
$SSISDBPricingTier = "S1" 
 
# Azure Logical SQL instance information:
$SSISDBServerEndpoint = ".database.windows.net"
$SSISDBServerAdminUserName = ""
$SSISDBServerAdminPassword = ""
 
 
<# LEAVE EVERYTHING ELSE BELOW UNCHANGED #>
 
$SSISDBConnectionString = "Data Source=" + $SSISDBServerEndpoint + ";User ID="+ $SSISDBServerAdminUserName +";Password="+ $SSISDBServerAdminPassword
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection $SSISDBConnectionString;
Try
{
    $sqlConnection.Open();
}
Catch [System.Data.SqlClient.SqlException]
{
    Write-Warning "Cannot connect to your Azure SQL DB logical server/Azure SQL MI server, exception: $_"  ;
    Write-Warning "Please make sure the server you specified has already been created. Do you want to proceed? [Y/N]"
    $yn = Read-Host
    if(!($yn -ieq "Y"))
    {
        Return;
    } 
}
 
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $SubscriptionId
 
Set-AzureRmDataFactoryV2 -ResourceGroupName $ResourceGroupName `
                        -Location $DataFactoryLocation `
                        -Name $DataFactoryName
 
$secpasswd = ConvertTo-SecureString $SSISDBServerAdminPassword -AsPlainText -Force
$serverCreds = New-Object System.Management.Automation.PSCredential($SSISDBServerAdminUserName, $secpasswd)
Set-AzureRmDataFactoryV2IntegrationRuntime  -ResourceGroupName $ResourceGroupName `
                                            -DataFactoryName $DataFactoryName `
                                            -Name $AzureSSISName `
                                            -Type Managed `
                                            -CatalogServerEndpoint $SSISDBServerEndpoint `
                                            -CatalogAdminCredential $serverCreds `
                                            -CatalogPricingTier $SSISDBPricingTier `
                                            -Description $AzureSSISDescription `
                                            -Location $DataFactoryLocation `
                                            -NodeSize $AzureSSISNodeSize `
                                            -NodeCount $AzureSSISNodeNumber `
                                            -MaxParallelExecutionsPerNode $AzureSSISMaxParallelExecutionsPerNode
 
write-host("##### Starting your Azure-SSIS integration runtime. This takes 20 to 30 minutes to complete. #####")
Start-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
                                             -DataFactoryName $DataFactoryName `
                                             -Name $AzureSSISName `
                                             -Force
 
write-host("##### Completed #####")
write-host("If any cmdlet is unsuccessful, please consider using -Debug option for diagnostics.")

I confess I've stolen this from Microsoft in their documentation here and tweaked it slightly to use the more precise subscription ID parameter, as well as a couple of other things that I felt made life easier. While this is running you should get a progress bar from the PowerShell ISE for the SSIS IR service starting, which really does take around 30 minutes. Be patient.

If you’d prefer to do this through the ADF PowerShell deployment cmdlets here is the JSON to use. Again assign values to the attributes as required. The JSON will create the SSIS IR, but it won’t start it.

{
"name": "",
"properties": {
	"type": "Managed",
	"description": "",
	"typeProperties": {
		"computeProperties": {
			"location": "EastUS",
			"nodeSize": "Standard_A4_v2",
			"numberOfNodes": 2,
			"maxParallelExecutionsPerNode": 2
		},
		"ssisProperties": {
			"catalogInfo": {
				"catalogServerEndpoint": "Your Instance.database.windows.net",
				"catalogAdminUserName": "user",
				"catalogAdminPassword": {
					"type": "SecureString",
					"value": "password"
				},
				"catalogPricingTier": "S1"
}}}}}

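If you go the JSON route, remember it only creates the runtime; starting it and checking its state is still a couple of cmdlets away. A sketch, with placeholder names:

# Sketch: start the SSIS IR created from the JSON definition and check its state.
# Resource group, data factory and runtime names are placeholders.
Start-AzureRmDataFactoryV2IntegrationRuntime `
    -ResourceGroupName "AzureSSIS" `
    -DataFactoryName "PaulsADFv2" `
    -Name "MySSISIR" `
    -Force

Get-AzureRmDataFactoryV2IntegrationRuntime `
    -ResourceGroupName "AzureSSIS" `
    -DataFactoryName "PaulsADFv2" `
    -Name "MySSISIR" `
    -Status
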
For info, the new developer UI gives you a wizard to go through the steps and a nice screen to see that the IR now exists. Until you get public access to this you'll just have to assume it's there.

Anyway, moving on. Once the thing has deployed and started you’ll now have an SSIS IR and also in your logical SQL instance the SSISDB. Exciting!

Open SSMS, making sure you are using version 17.2 or later. In the connection dialogue box, as well as the usual bits, go to options and explicitly set which database you're connecting to (SSISDB). If you don't, the Integration Services branch won't appear in SSMS Object Explorer. You'll see the database tables, views, stored procs, but won't have any of the SSIS options to control packages.

If all goes well you should get a very familiar sight…

Creating & Deploying an SSIS Package

As this is a 'how to' guide I've done something very simple in my package. It basically copies and pastes a CSV file from one Azure Data Lake Storage (ADLS) folder to another. I'm going to assume we are all familiar with more complex SSIS packages. Plus, the point of this post was getting the services working, not to do any data transformations.

SSIS Azure Feature Pack

What is probably worth pointing out is that if you want to work with Azure services in SSIS from SQL Server Data Tools (SSDT) you need to install the Azure Feature Pack. Download and install it from the below link:
https://docs.microsoft.com/en-us/sql/integration-services/azure-feature-pack-for-integration-services-ssis

Once installed, you'll have Azure services available in your SSIS Toolbox (Control Flow/Data Flow) and Connection Manager.

For info, the Azure Data Lake Storage connection manager now offers the option to use a service principal to authenticate.


Package Deployment

Now I’m not going to teach a granny to suck eggs (or whatever the phrase is). To deploy the package you don’t need to do anything special. I simply created the ISPAC file in SSDT and used the project deployment wizard in SSMS. The deployment wizard from the project didn’t work in my version of SSDT running in Visual Studio 2015. Not sure why at this point so I used SSMS.

Package Execution

Similarly I'm going to assume we all know how to execute an SSIS package from Management Studio. It's basically the same menu on the right where the deployment wizard gets launched. Granny, eggs, etc.

Or, we can execute a couple of stored procedures using some good old fashioned T-SQL (remember that?). See below.
 

DECLARE @execution_id bigint;  
 
EXEC [SSISDB].[catalog].[create_execution] 
	@package_name=N'DataLakeCopy.dtsx', 
	@execution_id=@execution_id OUTPUT,
	@folder_name=N'Testing',
	@project_name=N'AzureSSIS',
	@use32bitruntime=False; 
 
EXEC [SSISDB].[catalog].[start_execution] 
	@execution_id;

I mention this because we’ll need it when we schedule the package in ADF later.
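
If you'd rather check on an execution from PowerShell than from SSMS, querying the catalog works too. A sketch using Invoke-Sqlcmd from the SqlServer module, with placeholder server name and credentials:

# Sketch: check recent SSIS package executions in the SSISDB catalog.
# Server name, user and password are placeholders. Status 7 = succeeded, 4 = failed.
$query = @"
SELECT TOP 10 execution_id, package_name, status, start_time, end_time
FROM   SSISDB.catalog.executions
ORDER BY execution_id DESC;
"@

Invoke-Sqlcmd `
    -ServerInstance "myssisdbserver.database.windows.net" `
    -Database "SSISDB" `
    -Username "sqladmin" `
    -Password "YourStrongPassword" `
    -Query $query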

Scheduling with ADFv2

Ok, now the fun part. Scheduling the package. Currently we don't have a SQL Agent on our logical instance and we don't have Elastic DTU Jobs (coming soon), meaning we need to use ADF.

Thankfully in ADFv2 this does not involve provisioning time slices! Can I get a hallelujah? 🙂

This is the part where I cheated and used the new developer UI, but I’ll share all the JSON in case you don’t have a template for these bits in ADFv2 yet.

Linked Service to SQLDB

To allow ADF to access and authenticate against our logical SQL instance we need a linked service. We did of course already provide this information when creating the SSIS IR, but ADF needs it again to store and use for activity executions.

{
    "name": "SSISDB",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=;Initial Catalog=;User ID="
            }
        }
    }
}

The Pipeline

Nothing extra here, a very very simple pipeline similar to what you've previously seen in ADFv1, only without the time slice schedule values and other fluff.

{
    "name": "RunSSISPackage",
    "properties": {
        "activities": []
    }
}

Stored Procedure Activity

Next, the main bit of the instruction set, the activity. You’ll know from the T-SQL above that in the SSISDB you need to first create an instance of the execution for the SSIS package. Then pass the execution ID to the start execution stored procedure. ADF still can’t handle this directly with one activity giving its output to the second, meaning we have to wrap up the T-SQL we want into a parameter for the sp_executesql stored procedure. Everything can be solved with more abstraction, right? 🙂

            {
                "name": "CreateExecution",
                "type": "SqlServerStoredProcedure",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 20
                },
                "typeProperties": {
                    "storedProcedureName": "sp_executesql",
                    "storedProcedureParameters": {
                        "stmt": {
                            "value": "
Declare @execution_id bigint;  
EXEC [SSISDB].[catalog].[create_execution] 
@package_name=N'DataLakeCopy.dtsx', 
@execution_id=@execution_id OUTPUT,
@folder_name=N'Testing',
@project_name=N'AzureSSIS',
@use32bitruntime=False; 
 
EXEC [SSISDB].[catalog].[start_execution] 
@execution_id;"
                        }
                    }
                },
                "linkedServiceName": {
                    "referenceName": "SSISDB",
                    "type": "LinkedServiceReference"
                }
            }

Scheduled Trigger

Last but not least our scheduled trigger. Very similar to what we get in the SQL Agent, but now called ADF! For this post I went for 1:30pm daily as a test.

{
    "name": "Daily",
    "properties": {
        "runtimeState": "Stopped", //change to Started
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "RunSSISPackage",
                    "type": "PipelineReference"
                },
                "parameters": {}
            }
        ],
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2018-01-05T13:23:16.395Z",
                "timeZone": "UTC",
                "schedule": {
                    "minutes": [
                        30
                    ],
                    "hours": [
                        13
                    ]
                }
            }
        }
    }
}

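If you don't have the developer UI yet, the V2 PowerShell cmdlets will happily push these three JSON files into the factory for you. A sketch, assuming you've saved the JSON above into local files and using placeholder resource names:

# Sketch: deploy the linked service, pipeline and trigger JSON, then start the trigger.
# Resource group, data factory and file paths are placeholders.
$rg  = "AzureSSIS"
$adf = "PaulsADFv2"

Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName $rg -DataFactoryName $adf `
    -Name "SSISDB" -DefinitionFile ".\SSISDB.json"

Set-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $adf `
    -Name "RunSSISPackage" -DefinitionFile ".\RunSSISPackage.json"

Set-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf `
    -Name "Daily" -DefinitionFile ".\Daily.json"

# The trigger above deploys in a stopped state, so start it when you're ready.
Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf -Name "Daily"
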
The new UI gives you a nice agent style screen to create more complex schedules, even allowing triggers every minute if you wish. Here’s a teaser screen shot:

I hope this gave you an end to end look at how to get your SSIS packages running in Azure and saved you looking through 10 different bits of Microsoft documentation.

Many thanks for reading



 

What’s New in Azure Data Factory Version 2 (ADFv2)

I’m sure for most cloud data wranglers the release of Azure Data Factory Version 2 has been long overdue. Well good news friends. It’s here! So, what new features does the service now offer for handling our Azure data solutions?… In short, loads!

In this post, I’ll try and give you an overview of what’s new and what to expect from ADFv2. However, I’m sure more questions than answers will be raised here. As developers we must ask why and how when presented with anything. But let’s start somewhere.

Note: the order of the sub headings below was intentional.

Before diving into the new and shiny I think we need to deal with a couple of concepts to understand why ADFv2 is a completely new service and not just an extension of what version 1 offered.

Let’s compare Azure Data Factory Version 1 and Version 2 at a high level.

  • ADFv1 – is a service designed for the batch data processing of time series data.
  • ADFv2 – is a very general-purpose hybrid data integration service with very flexible execution patterns.

This makes ADFv2 a very different animal and something that can now handle scale out control flow and data flow patterns for all our ETL needs. Microsoft seems to have got the message here, following lots of feedback from the community, that this is the framework we want for developing our data flows. Plus, it's how we've been working for a long time with the very mature SQL Server Integration Services (SSIS).
 
 
 

Concepts:

Integration Runtime (IR)

Everything done in Azure Data Factory v2 will use the Integration Runtime engine. The IR is the core service component for ADFv2. It is to the ADFv2 JSON framework of instructions what the Common Language Runtime (CLR) is to the .Net framework.

Currently the IR can be virtualised to live in Azure, or it can be used on premises as a local emulator/endpoint. To give each of these instances their proper JSON label the IR can be 'SelfHosted' or 'Managed'. To try and put that into context, consider the ADFv1 Data Management Gateway as a self-hosted IR endpoint (for now). This distinction between self-hosted and managed IRs will also be reflected in the data movement costs on your subscription bill, but let's not get distracted with pricing yet.

The new IR is designed to perform three operations:

  1. Move data.
  2. Execute ADF activities.
  3. Execute SSIS packages.

Of course, points 1 and 2 here aren't really anything new as we could already do this in ADFv1, but point 3 is what should spark the excitement. It is this ability to transform our data, which we've badly needed, that has been missing from Azure.

With the IR in ADFv2 this means we can now lift and shift our existing on premises SSIS packages into the cloud or start with a blank canvas and create cloud based scale out control flow and data flow pipelines, facilitated by the new capabilities in ADFv2.

Without crossing any lines, the IR will become the way you start using SSIS in Azure, regardless of whether you decide to wrap it in ADFv2 or not.

Branching

This next concept I assume for anyone that’s used SSIS won’t be new. But it’s great to learn that we now have it available in the ADFv2 control flow (at an activity level).

Post execution our downstream activities can now be dependent on four possible outcomes as standard.

  • On success
  • On failure
  • On completion
  • On skip

Also, custom 'if' conditions will be available for branching based on expressions (more on expressions later).


That's the high-level concepts dealt with. Now, for ease of reading let's break the new features down into two main sections: the service level changes and then the additions to our toolkit of ADF activities.

Service Features:

Web Based Developer UI

This won’t be available for use until later in the year but having a web based development tool to build our ADF pipelines is very exciting!… No more hand crafting the JSON. I’ll leave this point just with a sneaky picture. I’m sure this explains more than I can in words.

It will include an interface to GitHub for source control and the ability to execute the activities directly in the development environment.

For field mappings between source and destination the new UI will also support a drag and drop panel, like SSIS.

Better quality screen shots to follow as soon as it's available.

Expressions & Parameters

Like most other Microsoft data tools, expressions give us that valuable bit of inline extensibility to achieve things more dynamically when developing. Within our ADFv2 JSON we can now influence the values of our attributes in a similar way using a rich new set of custom inner syntax, secondary to the ADF JSON. To support the expressions factory-wide, parameters will become first class citizens in the service.

As a basic example, before we might do something like this:

"name": "value"

Now we can have an expression and return the value from elsewhere, maybe using a parameter like this:

"name": "@parameters('StartingDatasetName')"

The @ symbol becomes important here as the start of the inline expression. The expression syntax is rich and offers a host of inline functions to call and manipulate our service. These include:

  • String functions – concat, substring, replace, indexof etc.
  • Collection functions – length, union, first, last etc.
  • Logic functions – equals, less than, greater than, and, or, not etc.
  • Conversion functions – coalesce, xpath, array, int, string, json etc.
  • Math functions – add, sub, div, mod, min, max etc.
  • Date functions – utcnow, addminutes, addhours, format etc.

System Variables

As a good follow on from the new expressions/parameters available we now also have a handful of system variables to support our JSON. These are scoped at two levels with ADFv2.

  1. Pipeline scoped.
  2. Trigger scoped (more on triggers later).

The system variables extend the parameter syntax allowing us to return values like the data factory name, the pipeline name and a specific run ID. Variables can be called in the following way using the new @ symbol prefix to reference the dynamic content:

"attribute": "@pipeline().RunId"

Inline Pipelines

For me this is a deployment convenience thing. Until now our linked services, datasets and pipelines have been separate JSON files within our Visual Studio solution. Now an inline pipeline can house all its required parts within its own properties. Personally, I like having a single reusable linked service for various datasets in one place that only needs updating with new credentials once. Why would you duplicate these settings as part of several pipelines? Maybe if you want some complex expressions to influence your data handling and you are limited by the scope of a system variable, an inline pipeline may then be required.

Anyway, this is what the JSON looks like:

{
    "name": "SomePipeline",
    "properties": {
		"activities": [], 		//before
		"linkedServices": [], 		//now available
		"datasets": [],			//now available
		"parameters": []		//now available
		}
}

Beware: if you use the ADF copy wizard via the Azure portal, an inline pipeline is what you'll now get back.

Activity Retry & Pipeline Concurrency

In ADFv2 our activities will be categorised as control and non-control types. This is mainly to support the use of our new activities like 'ForEach' (more on the activity itself later). A 'ForEach' activity sits within the category of a control type, meaning it will not have retry, long retry and concurrency options available within its JSON policy block. I think it's logical that a sequential loop can't run concurrently, so just be aware that such JSON attributes will now be validated depending on the category of the activity.

Our familiar and existing activities like ‘Copy’, ‘Hive’ and ‘U-SQL’ will therefore be categorised as non-control types with policy attributes remaining the same.

Event Triggers

Like our close friend Azure Logic Apps, ADFv2 can perform actions based on triggered events. So far, the only working example of this requires an Azure Blob Storage account that will output a file arrival event. It will be great to use this event based approach to replace those time series polling activities that needed to keep retrying until the file appeared.

Scheduled Triggers

You guessed it. We can now finally schedule our ADF executions using a defined recurrence pattern (with enough JSON). This schedule will sit above our pipelines as a separate component within ADFv2.

  • A trigger will be able to start multiple pipelines.
  • A pipeline can be started by multiple scheduled triggers.

Let’s look at some JSON to help with the understanding.

{
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "<Minute, Hour, Day, Week or Year>",
        "interval": <integer>,  // optional, how often to fire (defaults to 1)
        "startTime": "<datetime>",
        "endTime": "<datetime>",
        "timeZone": "UTC",
        "schedule": {  // optional (advanced scheduling specifics)
          "hours": [ <0-23> ],
          "minutes": [ <0-59> ],
          "weekDays": [ <"Monday" to "Sunday"> ],
          "monthDays": [ <1-31> ],
          "monthlyOccurrences": [
            {
              "day": <"Monday" to "Sunday">,
              "occurrence": <1-5>
            }
          ]
        }
      }
    },
    "pipelines": [  // pipeline(s) to start
      {
        "pipelineReference": {
          "type": "PipelineReference",
          "referenceName": "<pipeline name>"
        },
        "parameters": {
          "<parameter name>": {
            "type": "Expression",
            "value": "<expression>"
          },
          "<parameter name>": "<value>"
        }
      }
    ]
  }
}

Tumbling Window Triggers

For me, ADFv1 time slices simply have a new name. A tumbling window is a time slice in ADFv2. Enough said on that I think.

Depends On

We know that ADF is a dependency driven tool in terms of datasets, but now activities are also dependency driven, with the execution of one providing the necessary information for the execution of the next. A new 'DependsOn' attribute/clause can be used within an activity to drive this behaviour.

The ‘DependsOn’ clause will also provide the branching behaviour mentioned above. Quick example:

"dependsOn": [ { "dependencyConditions": [ "Succeeded" ], "activity": "DownstreamActivity" } ]

More to come with this explanation later when we talk about the new ‘LookUp’ activity.

Azure Monitor & OMS Integration

Diagnostic logs for various other Azure services have been available for a while in Azure Monitor and OMS. Now with a little bit of setup ADFv2 will be able to output much richer logs with various metrics available across the data factory service. These metrics will include:

  • Successful pipeline runs.
  • Failed pipeline runs.
  • Successful activity runs.
  • Failed activity runs.
  • Successful trigger runs.
  • Failed trigger runs.

This will be a great improvement on the current PowerShell or .Net work required with version 1 just to monitor issues at a high level.
If you want to know more about Azure Monitor go here: https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-azure-monitor
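
The 'little bit of setup' is essentially a diagnostic setting on the data factory resource pointing at a Log Analytics (OMS) workspace. A sketch of what that might look like with the AzureRM.Insights cmdlet; the resource IDs are placeholders:

# Sketch: route data factory diagnostic logs and metrics to a Log Analytics (OMS) workspace.
# Both resource IDs below are placeholders.
$factoryId   = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>"
$workspaceId = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

Set-AzureRmDiagnosticSetting `
    -ResourceId $factoryId `
    -WorkspaceId $workspaceId `
    -Enabled $true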

PowerShell

It’s worth being aware that to support ADFv2 there will be a new set of PowerShell cmdlets available within the Azure module. Basically, all named the same as the cmdlets used for version 1 of the service, but now including ‘V2’ somewhere in the cmdlet name and accepting parameters specific to the new features.

Let’s start with the obvious one:

New-AzureRmDataFactoryV2 `
	-ResourceGroupName "ADFv2" `
	-Name "PaulsFunFactoryV2" `
	-Location "NorthEurope"

Or, a splatting friendly version for the PowerShell geeks 🙂

$parameters = @{
    Name = "PaulsFunFactoryV2"
    Location = "NorthEurope"
    ResourceGroupName = "ADFv2"
}
New-AzureRmDataFactoryV2  @parameters
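
To see the full set once your modules are up to date, something like this should list them (assuming the V2 cmdlets ship in the AzureRM.DataFactoryV2 module):

# Sketch: list the version 2 data factory cmdlets available in the Azure PowerShell module.
Get-Command -Module AzureRM.DataFactoryV2 | Sort-Object Name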

Pricing

This isn’t a new feature as such, but probably worth mentioning that with all the new components and functionality in ADFv2 there is a new pricing model that you’ll need to do battle with. More details here: https://azure.microsoft.com/en-gb/pricing/details/data-factory/v2

Note: the new pricing tables for SSIS as a service with variations on CPU, RAM and Storage!


Activities:

Lookup

This is not an SSIS data transformation lookup! For ADFv2 we can look up a list of datasets to be used in another downstream activity, like a Copy. I mentioned earlier that we now have a 'DependsOn' clause in our JSON; lookup is a good example of why we might use it.

Scenario: we have a pipeline containing two activities. The first looks up some list of datasets (maybe some tables in a SQLDB). The second performs the data movement using the results of the lookup so it knows what to copy. This is very much a dataset level handling operation and not a row level data join. I think a picture is required:

Here’s a JSON snippet, which will probably be a familiar structure for those of you that have ever created an ARM Template.

{
"name": "SomePipeline",
"properties": {
    "activities": [
        {
            "name": "LookupActivity", //First
            "type": "Lookup"
        },
        {
            "name": "CopyActivity", //Second
            "type": "Copy",              
            "dependsOn": [  //Dependancy
                {
                    "activity": "LookupActivity"
                }
            ],
            "inputs": [],  //From Lookup
            "outputs": []
        }
    ]        
}}

Currently the following sources can be used as lookups, all of which need to return a JSON dataset.

  • Azure Storage (Blob and Table)
  • On Premises Files
  • Azure SQL DB

HTTP

With the HTTP activity, we can call out to any web service directly from our pipelines. The call itself is a little more involved than a typical web hook and requires an XML job request to be created within a workspace. Like other activities ADF doesn’t handle the work itself. It passes off the instructions to some other service. In this case it uses the Azure Queue Service. The queue service is the compute for this activity that handles the request and HTTP response, if successful this get thrown back up to ADF.

There’s something about needing XML inside JSON for this activity that just seems perverse. So much so that I’m not going to give you a code snippet 🙂

Web (REST)

Our new web activity type is simply a REST API caller, which I assume doesn't require much more explanation. In ADFv1 if we wanted to make a REST call a custom activity was required and we needed C# for the interface interaction. Now we can do it directly from the JSON with child attributes to cover all the usual suspects for REST APIs:

  • URL
  • Method (GET, POST, PUT)
  • Headers
  • Body
  • Authentication

ForEach

The ForEach activity is probably self-explanatory for anyone with an ounce of programming experience. ADFv2 brings some enhancements to this. You can use a ForEach activity to simply iterate over a collection of defined items one at a time as you would expect. This is done by setting the IsSequential attribute of the activity to True. But you also have the ability to perform the activity in parallel, speeding up the processing time and using the scaling power of Azure.

For example: if you had a 'ForEach' activity iterating over a 'Copy' operation with 10 different items and the attribute "isSequential" set to false, all copies will execute at once. ForEach then offers a new maximum of 20 concurrent iterations, compared to a single non-control activity with its concurrency supporting only a maximum of 10.

To try and clarify, the ForEach activity accepts items and is developed as a recursive thing. But on execution you can choose to process them sequentially or in parallel (up to a maximum of 20). Maybe a picture will help:

Going even deeper, the ‘ForEach’ activity is not confined to only processing a single activity, it can also iterate over a collection of other activities, meaning we can nest activities in a workflow where ‘ForEach’ is the parent/master activity. The items clause for the looping still needs to be provided as a JSON array, maybe by an expression and parameter within your pipeline. But those items can reference another inner block of activities.

There will definitely be a follow up blog post on this one with some more detail and a better explanation, come back soon 🙂

Meta Data

Let's start by defining what metadata is within the context of ADFv2. Metadata includes the structure, size and last modified date information about a dataset. A metadata activity will take a dataset as an input and output information about what it finds. This output could then be used as a point of validation for some downstream operation. Or, for some dynamic data transformation task that needs to be told what dataset structure to expect.

The input JSON for this dataset type needs to know the basic file format and location. Then the structure will be worked out based on what it finds.

{
    "name": "MyDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": {
            "referenceName": "StorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "container/folder",
            "fileName": "file.json",
            "format": {
                "type": "JsonFormat",
                "nestedSeparator": ","
            }
        }
    }
}

Currently, only datasets within Azure blob storage are supported.

I'm hoping you are beginning to see how branching, dependency conditions, expressions and parameters are bringing you new options when working with ADFv2, where one new feature builds on another.


The next couple as you’ll know aren’t new activities, but do have some new options available when creating them.

Custom

Previously in our .Net custom activity code we could only pass static extended properties from the ADF JSON down to the C# class. Now we have a new ‘referenceObjects’ attribute that can be used to access information about linked services and datasets. Example JSON snippet below for an ADFv2 custom activity:

{
  "name": "SomePipeline",
  "properties": {
    "activities": [{
      "type": "DotNetActivity",
      "linkedServiceName": {
        "referenceName": "AzureBatchLinkedService",
        "type": "LinkedServiceReference"
      },
		"referenceObjects": { //new bits
          "linkedServices": [],
		  "datasets": []
        },
        "extendedProperties": {}
}]}}

This completes the configuration data for our C# methods giving us access to things like the connection credentials used in our linked services. Within the IDotNetActivity class we need the following methods to get these values.

static void Main(string[] args)
{
    CustomActivity customActivity = 
        SafeJsonConvert.DeserializeObject<Activity>(File.ReadAllText("activity.json"), 
        DeserializationSettings) as CustomActivity;
    List<LinkedService> linkedServices = 
        SafeJsonConvert.DeserializeObject<List<LinkedService>>(File.ReadAllText("linkedServices.json"), 
        DeserializationSettings);
    List<Dataset> datasets = 
        SafeJsonConvert.DeserializeObject<List<Dataset>>(File.ReadAllText("datasets.json"), 
        DeserializationSettings);
}
 
static JsonSerializerSettings DeserializationSettings
{
    get
    {
        var DeserializationSettings = new JsonSerializerSettings
        {
            DateFormatHandling = Newtonsoft.Json.DateFormatHandling.IsoDateFormat,
            DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Utc,
            NullValueHandling = Newtonsoft.Json.NullValueHandling.Ignore,
            ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Serialize
        };
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<Activity>("type"));
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<LinkedService>("type"));
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<Dataset>("type"));
        DeserializationSettings.Converters.Add(new TransformationJsonConverter());
 
        return DeserializationSettings;
    }
}

Copy

This can be a short one as we know what copy does. The activity now supports the following new data sources and destinations:

  • Dynamics CRM
  • Dynamics 365
  • Salesforce (with Azure Key Vault credentials)

Also, as standard, 'copy' will be able to return the number of rows processed as a parameter. This could then be used with a branching 'if' condition when the expected number of rows isn't available, for example.


Hopefully that's everything and you're now fully up to date with ADFv2 and all the new and exciting things it has to offer. Stay tuned for more in depth posts soon.

For more information check out the Microsoft documentation on ADF here: https://docs.microsoft.com/en-gb/azure/data-factory/introduction

Many thanks for reading.

 

Special thanks to Rob Sewell for reviewing and contributing towards the post.


Recursive U-SQL With PowerShell (U-SQL Looping)

In its natural form U-SQL does not support recursive operations and for good reason. This is a big data, scale out, declarative language where the inclusion of procedural, iterative code would be very unnatural. That said, if you must pervert things PowerShell can assist with the looping and dare I say the possibility for dynamic U-SQL.

A couple of caveats…

  • From the outset, I accept this abstraction with PowerShell to achieve some iterative process in U-SQL is a bit of a hack and very inefficient, certainly in the below example.
  • The scenario here is not perfect and created using a simple situation for the purposes of explanation only. I know the same data results could be achieved just by extending the aggregate grouping!

Hopefully that sets the scene. As I’m writing, I’m wondering if this blog post will be hated by the purists out there. Or loved by the abstraction hackers. I think I’m both 🙂

Scenario

As with most of my U-SQL blog posts I’m going to start with the Visual Studio project available as part of the data lake tools extension called ‘U-SQL Sample Application’. This gives us all the basic start up code and sample data to get going.

Input: within the solution (Samples > Data > Ambulance Data) we have some CSV files for vehicles. These are separated into 16 source datasets covering 4 different vehicle IDs across 4 days.

Output: let's say we have a requirement to find out the average speed of each vehicle per day. Easy enough with a U-SQL wildcard on the extractor. But we also want to output a single file for each day of data. Not so easy, unless we write 1 query for each day of data. Fine with samples only covering 4 days, not so fine with 2 years of records split by vehicle.

Scenario set, let's look at how we might do this.

The U-SQL

To produce the required daily outputs I’m going to use a U-SQL query in isolation to return a distinct list of dates across the 16 input datasets, plus a parameterised U-SQL stored procedure to do the aggregation and output a single day of data.

First getting the dates. The below simply returns a text file containing a distinct list of all the dates in our source data.

DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
 
@DATA =
    EXTRACT 
        [vehicle_id] INT,
        [entry_id] long,
        [event_date] DateTime,
        [latitude] FLOAT,
        [longitude] FLOAT,
        [speed] INT,
        [direction] string,
        [trip_id] INT?,
        [filename] string
    FROM 
        @InputPath
    USING 
        Extractors.Csv();
 
@DateList =
    SELECT DISTINCT 
        [event_date].ToString("yyyyMMdd") AS EventDateList
    FROM 
        @DATA;
 
OUTPUT @DateList
TO "/output/AmbulanceDataDateList.txt"
USING Outputters.Csv(quoting : FALSE, outputHeader : FALSE);

Next, the below stored procedure. This uses the same input files, but does the required aggregation and outputs a daily file matching the parameter passed, giving us a single output per day.

CREATE PROCEDURE IF NOT EXISTS [dbo].[usp_OutputDailyAvgSpeed]
    (
    @OutputDate string
    )
AS
BEGIN
 
    //DECLARE @OutputDate string = "20140914"; //FOR dev
    DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
    DECLARE @OutputPath string = "/output/DailyRecords/VehicleAvgSpeed_" + @OutputDate + ".csv";
 
    @DATA =
        EXTRACT 
            [vehicle_id] INT,
            [entry_id] long,
            [event_date] DateTime,
            [latitude] FLOAT,
            [longitude] FLOAT,
            [speed] INT,
            [direction] string,
            [trip_id] INT?,
            [filename] string
        FROM 
            @InputPath
        USING 
            Extractors.Csv();
 
    @VAvgSpeed =
        SELECT DISTINCT 
            [vehicle_id],
            AVG([speed]) AS AverageSpeed
        FROM 
            @DATA
        WHERE
            [event_date].ToString("yyyyMMdd") == @OutputDate
        GROUP BY
            [vehicle_id];
 
    OUTPUT @VAvgSpeed
    TO @OutputPath
    USING Outputters.Csv(quoting : TRUE, outputHeader : TRUE);
 
END;

At this point, we could just execute the stored procedure for each required date, manually crafted from the text file. Like this:

[dbo].[usp_OutputDailyAvgSpeed]("20140914");
[dbo].[usp_OutputDailyAvgSpeed]("20140915");
[dbo].[usp_OutputDailyAvgSpeed]("20140916");
[dbo].[usp_OutputDailyAvgSpeed]("20140917");

Fine for small amounts of data, but we can do better for larger datasets.

Enter PowerShell and some looping.

The PowerShell

As with all things Microsoft, PowerShell is our friend and the supporting cmdlets for the Azure Data Lake services are no exception. I recommend these links if you haven't yet written some PowerShell to control ADL Analytics jobs or upload files to ADL Storage.

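For completeness, getting the sample data up into Data Lake Storage can be scripted too. A sketch of a single file upload; the local path and file name are placeholders and the account name matches the script further down:

# Sketch: upload a local sample data file into Azure Data Lake Storage before running the jobs.
# The local path and file name are placeholders; the account name matches the script below.
Import-AzureRmDataLakeStoreItem `
    -AccountName "myfirstdatalakestore01" `
    -Path "C:\SampleData\vehicle1_09142014.csv" `
    -Destination "/Samples/Data/AmbulanceData/vehicle1_09142014.csv"
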
Moving on. How can PowerShell help us script our data output requirements? Well, here's the answer. In my PowerShell script below I've done the following:

  1. Authenticate against my Azure subscription (optionally create yourself a PSCredential to do this).
  2. Submit the first U-SQL query as a file to return the distinct list of dates.
  3. Wait for the ADL Analytics job to complete.
  4. Download the output text file from ADL storage.
  5. Read the contents of the text file.
  6. Iterate over each date listed in the text file.
  7. Submit a U-SQL job for each stored procedure with the date passed from the list.
#Params...
$WhereAmI = $MyInvocation.MyCommand.Path.Replace($MyInvocation.MyCommand.Name,"")
 
$DLAnalyticsName = "myfirstdatalakeanalysis" 
$DLAnalyticsDoP = 10
$DLStoreName = "myfirstdatalakestore01"
 
 
#Create Azure Connection
Login-AzureRmAccount | Out-Null
 
$USQLFile = $WhereAmI + "RecursiveOutputPrep.usql"
$PrepOutput = $WhereAmI + "AmbulanceDataDateList.txt"
 
#Submit Job
$job = Submit-AzureRmDataLakeAnalyticsJob `
    -Name "GetDateList" `
    -AccountName $DLAnalyticsName `
    -ScriptPath $USQLFile `
    -DegreeOfParallelism $DLAnalyticsDoP
 
Write-Host "Submitted USQL prep job."
 
#Wait for job to complete
Wait-AdlJob -Account $DLAnalyticsName -JobId $job.JobId | Out-Null
 
Write-Host "Downloading USQL output file."
 
#Download date list
Export-AzureRmDataLakeStoreItem `
    -AccountName $DLStoreName `
    -Path "/output/AmbulanceDataDateList.txt" `
    -Destination $PrepOutput | Out-Null
 
Write-Host "Downloaded USQL output file."
 
#Read dates
$Dates = Get-Content $PrepOutput
 
Write-Host "Read date list."
 
#Loop over dates with proc call for each
ForEach ($Date in $Dates)
    {
    $USQLProcCall = '[dbo].[usp_OutputDailyAvgSpeed]("' + $Date + '");'
    $JobName = 'Output daily avg dataset for ' + $Date
 
    Write-Host $USQLProcCall
 
    $job = Submit-AzureRmDataLakeAnalyticsJob `
        -Name $JobName `
        -AccountName $DLAnalyticsName `
        -Script $USQLProcCall `
        -DegreeOfParallelism $DLAnalyticsDoP
 
    Write-Host "Job submitted for " $Date
    }
 
Write-Host "Script complete. USQL jobs running."

At this point I think it's worth reminding you of my caveats above 🙂

I would like to point out the flexibility in the PowerShell cmdlet Submit-AzureRmDataLakeAnalyticsJob, allowing us to pass a U-SQL file (step 2) or build up a U-SQL string dynamically within the PowerShell script and pass that as the execution code (step 7). Very handy. Parameters: -Script or -ScriptPath.

If all goes well you should have jobs being prepared and shortly after running to produce the daily output files.

I used 10 AUs for my jobs because I wanted to burn up some old Azure credits, but you can change this in the PowerShell variable $DLAnalyticsDoP.

Conclusion

It's possible to achieve looping behaviour with U-SQL when we want to produce multiple output files, but only when we abstract the iterative behaviour away to our friend PowerShell.

Comments welcome on this approach.

Many thanks for reading.

 

PS: To make life a little easier I've stuck all of the above code and sample data into a GitHub repository to save you copying and pasting things from the code windows above.

https://github.com/mrpaulandrew/RecursiveU-SQLWithPowerShell

 

 


Azure Data Lake Authentication from Azure Data Factory

To set the scene for the title of this blog post let's firstly think about other services within Azure. You'll probably already know that most services deployed require authentication via some form of connection string and generated key. These keys can be granted various levels of access and also recycled as required, for example an IoT Event Hub seen below (my favourite service to play with).


Then we have other services like SQLDB that require user credentials to authenticate as we would expect from the on premises version of the product. Finally we have a few other Azure services that handle authentication in a very different way altogether requiring both user credentials initially and then giving us session and token keys to be used by callers. These session and token keys are a lot more fragile than connection strings and can expire or become invalid if the calling service gets rebuilt or redeployed.

In this blog post I'll explore and demonstrate specifically how we handle session and token based authentication for Azure Data Lake (ADL), firstly when calling it as a Linked Service from Azure Data Factory (ADF), then secondly within ADF custom activities. The latter of these two ADF based operations becomes a little more difficult because the .Net code created and compiled is unfortunately treated as a distant relative to ADF, requiring its own authentication to ADL storage as an Azure Application. To further clarify, a Custom Activity in ADF does not inherit its authorising credentials from the parent Linked Service; it is responsible for its own session/token. Why? Because, as you may know from reading my previous blog post, Custom Activities get compiled and executed by an Azure Batch service, meaning the compute for the .Net code is very much detached from ADF.

At this point I should mention that this applies to Data Lake Analytics and Data Lake Storage. Both require the same approach to authentication.

Data Lake as a Service Within Data Factory

The easy one first, adding an Azure Data Lake service to your Data Factory pipeline. From the Azure portal within the ADF Author and Deploy blade you simply add a new Data Lake Linked Service which returns a JSON template for the operation into the right hand panel. Then we kindly get provided with an Authorize button (spelt wrong) at the top of the code block.

Clicking this will pop up with the standard Microsoft login screen requesting work or personal user details etc. Upon completion of a successful authentication your Azure subscription will be inspected. If more than one applicable service exists, you'll of course need to select which you require authorisation for. But once done you'll return to the JSON template now with a completed Authorization value and SessionId.

Just for information, and to give you some idea of the differences in this type of authorisation compared to other Azure services: when I performed this task for the purpose of creating screen shots in this post the resulting Authorization URL was 1219 characters long and the returned SessionId was 1100! Or half a page of a standard Word document each. By comparison an IoT Hub key is only 44 characters. Furthermore, the two values are customised to the service that requested them and can only be used within the context where they were created.

For completeness, because we can also now develop ADF pipelines from Visual Studio it’s worth knowing that a similar operation is now available as part of the Data Factory extension. In Visual Studio within your ADF project on the Linked Service branch you are able to Right Click > Add > New Item and choose Data Lake Store or Analytics. You’ll then be taken through a wizard (similar in look to that of the ADF deployment wizard) which requests user details, the ADF context and returns the same JSON template with populated authorising values.


A couple of follow up tips and lessons learnt here:

  • If you tell Visual Studio to reverse engineer your ADF pipeline from a currently deployed Azure factory where an existing ADL token and session ID are available, these will not be brought into Visual Studio and you'll need to authorise the service again.
  • If you copy an ADL JSON template from the Azure portal 'Author and Deploy' area Visual Studio will not popup the wizard to authorise the service and you'll need to do it again.
  • If you delete the ADL Linked Service within the portal 'Author and Deploy' area, the same Linked Service tokens in Visual Studio will become invalid and you'll need to authorise the service again.
  • If you sneeze too loudly while Visual Studio is open you'll need to authorise the service again.

Do you get the idea when I said earlier that the authorisation method is fragile? Very sophisticated, but fragile when chopping and changing things during development.

What you may find yourself doing fairly frequently is:

  1. Deploying an ADF project from Visual Studio.
  2. The deployment wizard failing telling you the ADL tokens have expired or are no longer authorised.
  3. Adding a new Linked Service to the project just to get the user authentication wizard popup.
  4. Then copying the new token and session values into the existing ADL Linked Service JSON file.
  5. Then excluding the new services you created just to re-authorise from the Visual Studio project.

Fun! Moving on.

Update: you can use an Azure AD service principal to authenticate both Azure Data Lake Store and Azure Data Lake Analytics services from ADF. Details are included in this post: https://docs.microsoft.com/en-gb/azure/data-factory/v1/data-factory-azure-datalake-connector#azure-data-lake-store-linked-service-properties

Data Factory Custom Activity Call Data Lake

Next the slightly more difficult way to authenticate against ADL, using an ADF .Net Custom Activity. As mentioned previously the .Net code once sent to Azure as a DLL is treated as a third party application requiring its own credentials.

The easiest way I've found to get this working is firstly to use PowerShell to register the application in Azure, which, using the correct cmdlets, returns an application GUID and password that combined give the .Net code its credentials. Here's the PowerShell you'll need below. Be sure you run this with elevated permissions locally.

# Sign in to Azure.
Add-AzureRmAccount
 
#Set this variables
$appName = "SomeNameThatYouWillRecogniseInThePortal"
$uri = "AValidURIAlthoughNotApplicableForThis"
$secret = "SomePasswordForTheApplication"
 
# Create a AAD app
$azureAdApplication = New-AzureRmADApplication `
    -DisplayName $appName `
    -HomePage $Uri `
    -IdentifierUris $Uri `
    -Password $secret
 
# Create a Service Principal for the app
$svcprincipal = New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
 
# To avoid a PrincipalNotFound error, I pause here for 15 seconds.
Start-Sleep -s 15
 
# If you still get a PrincipalNotFound error, then rerun the following until successful. 
$roleassignment = New-AzureRmRoleAssignment `
    -RoleDefinitionName Contributor `
    -ServicePrincipalName $azureAdApplication.ApplicationId.Guid
 
# The stuff you want:
 
Write-Output "Copy these values into the C# sample app"
 
Write-Output "_subscriptionId:" (Get-AzureRmContext).Subscription.SubscriptionId
Write-Output "_tenantId:" (Get-AzureRmContext).Tenant.TenantId
Write-Output "_applicationId:" $azureAdApplication.ApplicationId.Guid
Write-Output "_applicationSecret:" $secret
Write-Output "_environmentName:" (Get-AzureRmContext).Environment.Name

My recommendation here is to take the returned values and store them in something like the Class Library settings, available from the Visual Studio project properties. Don't store them as constants at the top of your Class as it's highly likely you'll need them multiple times.

Next, what to do with the application GUID etc. Well, your Custom Activity C# will need something like the following. Apologies for dumping massive code blocks into this post, but you will need all of this in your Class if you want to use details from your ADF service and work with ADL files.

class SomeCustomActivity : IDotNetActivity
{
	//Get credentials for app
	string domainName = Settings.Default.AzureDomainName;
	string appId = Settings.Default.ExcelExtractorAppId; //From PowerShell <<<<<
	string appPass = Settings.Default.ExceExtractorAppPass; //From PowerShell <<<<<
	string appName = Settings.Default.ExceExtractorAppName; //From PowerShell <<<<<
 
	private static DataLakeStoreFileSystemManagementClient adlsFileSystemClient;
	//and or:
	private static DataLakeStoreAccountManagementClient adlsAccountManagerClient;
 
	public IDictionary<string, string> Execute(
		IEnumerable<LinkedService> linkedServices,
		IEnumerable<Dataset> datasets,
		Activity activity,
		IActivityLogger logger)
	{
		//Get linked service details from Data Factory
		Dataset inputDataset = new Dataset();
		inputDataset = datasets.Single(dataset => 
			dataset.Name == activity.Inputs.Single().Name);
 
		AzureDataLakeStoreLinkedService inputLinkedService;
 
		inputLinkedService = linkedServices.First(
			linkedService =>
			linkedService.Name ==
			inputDataset.Properties.LinkedServiceName).Properties.TypeProperties
			as AzureDataLakeStoreLinkedService;
 
		//Get account name for data lake and create credentials for app
		var creds = AuthenticateAzure(domainName, appId, appPass);
		string accountName = inputLinkedService.AccountName;
 
		//Authorise new instance of Data Lake Store
		adlsFileSystemClient = new DataLakeStoreFileSystemManagementClient(creds);
 
		/*
			DO STUFF...
 
			using (Stream input = adlsFileSystemClient.FileSystem.Open
				(accountName, completeInputPath)
				)
		*/	
 
 
		return new Dictionary<string, string>();
	}
 
 
	private static ServiceClientCredentials AuthenticateAzure
		(string domainName, string clientID, string clientSecret)
	{
		SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
 
		var clientCredential = new ClientCredential(clientID, clientSecret);
		return ApplicationTokenProvider.LoginSilentAsync(domainName, clientCredential).Result;
	}
}

Finally, before you execute anything be sure to grant the Azure app permissions to the respective Data Lake service. In the case of the Data Lake Store, from the portal you can use the Data Explorer blades to assign folder permissions.

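If you'd rather script the permissions than click through Data Explorer, the ACL cmdlets can do the same job. A sketch, where the account name and path are placeholders and -Id is the object ID of the service principal created by the script above:

# Sketch: grant the registered application access to a Data Lake Store folder via its ACL.
# Account name and path are placeholders; -Id is the service principal's object ID.
Set-AzureRmDataLakeStoreItemAclEntry `
    -AccountName "myadlstore" `
    -Path "/" `
    -AceType User `
    -Id $svcprincipal.Id `
    -Permissions All

Remember the app will also need Execute permission on the parent folders of any path it has to traverse.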

I really hope this post has saved you some time in figuring out how to authorise Data Lake services from Data Factory. Especially when developing beyond what the ADF Copy Wizard gives you.

Many thanks for reading.


Changing the Start Up App on Windows 10 IoT Core

Changing the start up application on your IoT device running Windows 10 IoT Core seems to be a common requirement once the device is out in the field, so I hope this post, exploring a few different ways of doing it, will be of value to people.

To clarify, the behaviour we will achieve below is: from device power on, our Windows 10 IoT Core operating system will boot up, but instead of starting the default application (originally named ‘IoTCoreDefaultApp’) it will load our own custom created UWP app executable. I will not talk about deploying your UWP app to the device; the assumption here is that this has already been done and a compiled .EXE file is living on the device’s local storage.

Application & Package Name

Before diving into any changes it’s worth taking a moment to figure out what our UWP application is called. This might seem obvious, but depending on how you make this change (via the web portal or PowerShell) it’s referred to by different names. Both are configured in Visual Studio under the Package.appxmanifest and do not have to be the same. See below, with screenshots from a new vanilla UWP project.

  • Firstly, the application name, which is the friendly display name given to the solution on creation and the EXE file presented in the settings web portal.

UWPAppName

 

  • Secondly, the package name, which is the not-so-friendly value (defaulted to a GUID) used as a prefix for the package family name. This is what PowerShell uses and it needs to be exact.

UWPPackName

Assuming you’ve navigated that minefield and know what your UWP app is called in each context where it’s used, let’s move on.
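
As a quick sanity check, if you already have a PowerShell session open to the device (covered in the PowerShell section below) something like the following should list the exact package family name. This is only a sketch: it assumes the Appx cmdlets are available on your IoT Core build and ‘MyUwpApp’ is a placeholder for your own app name.

#List deployed packages and their family names, filtered on a placeholder app name
Get-AppxPackage |
    Where-Object { $_.Name -like "*MyUwpApp*" } |
    Select-Object Name, PackageFamilyName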

Headed vs Headless

Sorry, a tiny bit more detail to appreciate before we actually do this. Start up apps here can take 2x different forms: headed or headless. The difference between them is best thought of as running in the foreground or running in the background, with foreground apps being visible via the device’s display output.

  • Headed = foreground. Visible via display.
  • Headless = background. Not visible, like a Windows service. Also you have no head 🙂

Choose as required and be mindful that you can only have one of each running.

Web Portal

Ok, the easy way to make this change and have your app start up post OS boot: navigate to your device’s IP address and authenticate against the web portal. Here, in the Apps section, you’ll find anything currently deployed to your Windows IoT Core system. Under the Startup field in the Apps table, simply click the Set as Default App link for the application you now want to start after the operating system has booted up.

UWPPortalStartupAPP

At the point of making this selection the table content will refresh to reflect the change and the application will also start running.

You can now restart your device and your app will load without any further intervention.

PowerShell

Given the ease of the above, why would you want to use PowerShell, Paul? Well, if you’ve got to do this en masse for 30x devices you’ll want to script things. Or, as I found out when Windows 10 IoT Core was still in preview for the Raspberry Pi 3, the Apps panel in the web portal above had a bug and just didn’t display anything!

Moving on, create yourself a remote management session to your IoT device. If you’re unfamiliar with doing this the IoT Dashboard can help; check out my previous blog post Controlling Your Windows 10 IoT Core Device for guidance.

Within the remote management session you’ll need the command IoTStartUp. When using a new command for the first time I always like to add the help switch /? to check out what it can do.

Next work through the following commands in your PowerShell session.

## 1.
## Review the list of apps currently available on your device.
IoTStartUp List
## If your expected app isn't there then it hasn't been deployed to the device.
 
## 2.
## Copy the package family name from the list or from Visual Studio if you still have it open.
 
## 3.
## Set the app as headed using the following command.
IoTStartUp Add Headed YOURAPPFAMILYNAME
 
## 4.
## To confirm which apps are now set for startup use:
IoTStartUp Startup

Here’s the same thing in a PowerShell window just to be clear.

UWPPowerShellStartupApp

Again to prove this you can now restart the device. From PowerShell use shutdown -r.

Restoring the Start Up Default

To change this back to the default application, either repeat the above PowerShell steps using the package family name IoTCoreDefaultApp_1w720vyc4ccym!App, or in the web portal set the app named IoTCoreDefaultApp as the default.
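
For example, back in the remote PowerShell session, something like this should put the default app back as the headed startup app. The package family name below is the one from my device; check yours with IoTStartUp List first.

## Restore the default app as the headed startup app
IoTStartUp Add Headed IoTCoreDefaultApp_1w720vyc4ccym!App
 
## Confirm which apps are now set for startup
IoTStartUp Startup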

Many thanks for reading.


Controlling Your Windows 10 IoT Core Device

If, like me, you have several Raspberry Pis doing different things and running different operating systems, controlling them can sometimes be a bit of a challenge, especially with a wealth of remote control protocols and command line tools to choose from. In this post I’m specifically exploring the different ways to control your Raspberry Pi running Windows 10 IoT Core.

The thing that surprised me about Microsoft’s “core” flavour of operating systems is just how much they appear on the network like a normal workstation. This might sound silly and very obvious, but for the longest time when installing servers in Hyper-V I would shy away from the minimalist core install because I very wrongly assumed it meant you only got a black CMD prompt and nothing more. With Windows 10 IoT Core I can assure you that this is certainly not the case. Apart from the lack of a pretty tiled Start Menu the operating system (OS) is feature rich and very easy to work with.

Let’s look at what we can use to control our Pi and Core OS.

The Old Fashioned Physical Method

PiShot1

Just for completeness it’s probably best to start with the traditional approach of using physical things like an HDMI cable plugged into the back of your monitor, plus a keyboard and mouse connected via USB ports. This is obviously very basic and probably not why you’ve been drawn to this blog post. However, even via this method you do get a graphical interface on screen with the ability to alter things like the WiFi settings and actually shut down the device correctly without pulling the plug.

Windows Core Browser Console

Next, and by far my favourite feature in Windows 10 IoT Core, is the browser based console that can be connected to using the IP address of the device followed by port 8080. E.g. http://192.168.1.2:8080.

If you navigate to the equivalent address of the device on your network you will be prompted for a set of admin credentials and then taken to a lovely bunch of pages which include a wealth of configuration options. When developing this satisfies almost all of my immediate needs for controlling the device and starting up deployed applications.


IoTSetup


The browser console even includes a Task Manager style page of running processes with some pretty real time performance graphs for CPU, RAM, I/O and Network usage. See slide 5 of 16 in the GIF above.

Let’s move on before I turn this blog post into a page of screenshots!

Windows IoT Remote Client

IoTClient

Next we have the IoT Remote Client app, which as I type is officially still in preview. However, this is another really helpful way to control your device. The desktop application, available from the Microsoft Store, is basically an RDP-style client for the IoT Core operating system, so no VNC server licencing between Raspbian and Windows to worry about.

IoTRemoteClient

Store link: https://www.microsoft.com/en-gb/store/apps/windows-iot-remote-client/9nblggh5mnxz

To get this running, in the browser based console I mentioned above first go to the Remote page and tick the box to Enable Windows IoT Remote Server (AKA Terminal Services). Next, install the app on your workstation and start it up. If all is well it will detect the IoT device on your network and allow you to connect, or you can just enter the IP address in the box provided. Post authentication you’ll then have a view of your IoT device matching exactly what the HDMI cable can offer, plus the ability to interact with the device with a keyboard and mouse through the remote session. When starting apps that include something graphical it’s really useful to see the thing. Another use case might be when performing cloud to device messaging, where having the received content on screen is nice to see.

This method of remote control is where I actually pulled the first screen shot above from, rather than taking a picture of the monitor displaying the HDMI output.

IoT Dashboard

Another really useful bit of software in the IoT device toolbox is the IoT Dashboard.

Download link: http://go.microsoft.com/fwlink/?LinkID=708576

This walks you through setting up a new device SD card, plus gives you a My Devices window for easily launching the next 2x features detailed below.
IoTDashboard

PowerShell

What can’t you do with PowerShell in the Microsoft world?

Before the IoT Dashboard, connecting to the IoT Core device required starting the PowerShell remoting service, then setting trusted hosts, etc. Like this:

$MyDeviceIP = "192.168.1.2" #Example
 
#Start the Windows Remote Management (WinRM) service on your workstation
Net start WinRM
 
#Add the device to your trusted hosts list before connecting
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $MyDeviceIP
 
#Open the remote session and authenticate as the device administrator
Enter-PSSession -ComputerName $MyDeviceIP -Credential "localhost\Administrator"

Now, if using the IoT Dashboard above, from My Devices you have the option to simply right click on the auto detected device and select Connect using PowerShell. This handles all the above prerequisites for connecting and only prompts for credentials. Lovely!

PStoPi

Note: both PowerShell methods of connecting to the device will require elevated permissions.

Admin File Share

Lastly, and again really for completeness, we have the traditional UNC path. From the IoT Dashboard this is made easy in My Devices, this time by selecting Open Network Share. This brings up a File Explorer window to the C$ admin share for browsing the file system on the SD card.

PiShot5

For anybody used to seeing the contents of the C drive on a Windows machine this folder structure will look very familiar: Program Files, etc.
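
If you’d rather skip the dashboard, the share can be reached directly. A minimal sketch, reusing the example IP address from earlier and the device’s local Administrator account (you’ll be prompted for the password).

#Authenticate against the device's admin share (example IP address)
net use \\192.168.1.2\c$ /user:Administrator
 
#Open the share in File Explorer
Start-Process "\\192.168.1.2\c$"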

Other

Sadly, despite a few attempts, things like Remote Registry access, remote connection to Services using your local snap-in console, and Computer Management are not currently supported. The service to enable Remote Registry simply isn’t there on Windows 10 IoT Core, along with other remote management services. If this changes I’ll be sure to update this post.

Many thanks for reading.


Using Hyper-V and PowerShell to Create the Perfect Developer Workstation

So the second challenge I faced after unboxing and plugging in my new workstation at Purple Frog Systems was software. Having customers using all versions of SQL Server from 2005 onwards I didn’t want to tie myself to a particular version locally and I also didn’t want the hassle of running lots of different SQL Server services all on the same host operating system. Especially if I wanted to use Windows 10 as my host, which as we know would have compatibility issues with SQL Server 2005 and our old friend BIDS.

WindowsFeaturesHyperV

Enter Microsoft Hyper-V

From Windows 8 onwards Hyper-V is available out of the box as a feature which can simply be switched on from Control Panel > Programs and Features > Turn Windows Features On or Off. Apologies in advance if the home editions don’t support this.

Note: be sure to select the Hyper-V Module for Windows PowerShell.
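
If you prefer to switch the feature on from PowerShell instead of the Control Panel, something like the following from an elevated prompt should do the same job (a restart is required afterwards). Consider it a sketch rather than the exact command I ran.

#Enable Hyper-V, including the management tools and PowerShell module
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All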

With my Windows 10 host now running Hyper-V management services I set about creating a bunch of virtual machines to serve up every version of SQL Server required.

Whenever starting the Hyper-V manager always do so as an administrator. The user interface throws some odd errors without the elevated permissions which could lead you down a rabbit hole of Google searches. Trust me!

Creating a Virtual Switch

For those of you new to Hyper-V, creating a virtual switch for your guest virtual machines is one of the most important things to get right and sort out first. It’s very simple to do, but without it your network admins and DHCP police might start questioning what on earth your physical network connection is doing. Again, trust me on this!

HyperVCreateSwitch

From your Hyper-V Manager toolbar go to Action > Virtual Switch Manager. In the dialogue window that appears, in the right hand panel choose the switch type of External and click Create Virtual Switch. Give the switch a name and select the External Network: that the new switch will basically impersonate. This will be a short list if you only have 1x physical network connection.

 

With an external virtual switch in place, any guest machine set up to use it as a network resource will appear on your network in the same way your physical host does: its own MAC address, IP address, hostname etc. Also be mindful that Hyper-V will take over how your network connections appear in your host operating system, and a CMD ipconfig /all will start to look a little more complex, but this is nothing to worry about and perfectly normal.
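
The same switch can also be created with the Hyper-V PowerShell module. A minimal sketch, assuming your physical adapter is called "Ethernet" (check with Get-NetAdapter) and "External Switch" is just a placeholder name.

#Find the name of the physical adapter to bind the new switch to
Get-NetAdapter
 
#Create an external virtual switch on that adapter, keeping host connectivity
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true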

Creating the Guest Virtual Machines

To offer some repeatability when creating my guest virtual machines I broke out the PowerShell, hence including the module at install time. With a few parameterised cmdlets I was able to set up several empty virtual machines ready for my guest operating systems. Example below.

#Create Hyper-V machine with its OS disk and network switch
New-VM `
    -Name $VMName `
    -Path $VMLocation `
    -MemoryStartupBytes $RAM `
    -NewVHDPath $VHDOSFile `
    -NewVHDSizeBytes $VHDSizes `
    -SwitchName $NetworkSwitch `
    -Generation $Generation
 
#Set number of CPUs
Set-VMProcessor `
    -VMName $VMName `
    -Count $CPUs
 
#Create the data disk
New-VHD `
    -Path $VHDDataFile -Dynamic `
    -SizeBytes $VHDSizes

Download the full script I used here.
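
Note the snippet above creates the data disk but doesn’t attach it. A rough sketch of the follow-up steps, reusing the same variables, would be to attach the VHD to the guest’s SCSI controller and then start the machine.

#Attach the data disk to the guest's SCSI controller
Add-VMHardDiskDrive `
    -VMName $VMName `
    -ControllerType SCSI `
    -Path $VHDDataFile
 
#Start the new virtual machine
Start-VM -Name $VMName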

Note: I’m not a PowerShell expert, just a big fan of its uses.

You’ll see in the full script that I give my virtual machines 2x hard drive files, each on different controllers (IDE and SCSI), which in turn are created on different host hard drives, mainly because my host has 3x physical disks. My original intention was to have all operating systems on mechanical disks and all data on solid state disks, so for guest SQL Server installs the virtual machine data disk would be used to house databases attached to the instance. In practice this didn’t really pay off, but then this is just another advantage of using Hyper-V: the underlying resources can be moved around as required.

If you have time I would recommend having your ISOs on a separate physical disk to the guest hard drive files. This greatly speeds up installation. You could even run the guest OS hard drive files on your solid state drive just for installation, then move them to mechanical disks afterwards.

The Guests

Assuming you now have a bunch of virtual machine shells in place, I’d recommend the following operating system for each version of SQL Server, mainly to avoid any compatibility issues.

  • SQL Server 2005 – Windows Server 2003
  • SQL Server 2008 – Windows Server 2008
  • SQL Server 2012 – Windows Server 2012
  • SQL Server 2014 – Windows Server 2012
  • SQL Server 2016 – Windows Server 2016 (assuming the OS is released in time)

I’m not going to talk about installing Windows and SQL Server on each guest. I’m assuming you’re familiar with that Next, Next, Next process (if you want the defaults!). However, I would say that for SQL Server 2012 onwards I installed Analysis Services twice to give me both Tabular and Multidimensional instances in my guest virtual machine.

The Goal

If you’ve worked through this post and set up your host in the same way I have, you’ll now be able to enjoy the advantages and flexibility of running all versions of SQL Server at the same time, with all SQL services in guest virtual machines on your developer workstation.

HyperVVMList

My virtual machines are also set up with dynamic RAM:

  • Initial value: 2048 MB
  • Min value: 512 MB
  • Max value: 8192 MB

The Memory Weight slider is great if you want to keep all your guest virtual machines running at the same time like I do. If doing some development work on SQL Server 2005, I simply increase that guest’s RAM priority, which dynamically adjusts the less used virtual machines so the host doesn’t get overwhelmed. Plus you’ll only be developing on one version of SQL Server at once, right?!

HyperVRAMWeight
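
These memory settings can also be scripted if you’re building several guests. A minimal sketch, reusing the hypothetical $VMName variable from earlier and the values listed above; the -Priority value corresponds to the Memory Weight slider.

#Configure dynamic memory to match the values above and raise the memory weight
Set-VMMemory `
    -VMName $VMName `
    -DynamicMemoryEnabled $true `
    -StartupBytes 2048MB `
    -MinimumBytes 512MB `
    -MaximumBytes 8192MB `
    -Priority 80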

The last thing you’ll probably want to do is install some SQL Server client tools on your Windows 10 host. I went for SQL Server 2012, plus Visual Studio 2012. Then SQL Server 2014, plus Visual Studio 2015. Installed in that order.

Final Words

Please be aware that the above was only possible with a Microsoft MSDN subscription which provided licencing for the guest operating systems and older versions of SQL Server.

During this setup I also had the advantage of being a domain admin, which meant creating things like service accounts for all my virtual machines was easy, and adding my virtual machines to the domain and accessing SQL Server services from my host using AD authentication were made easy too. If you’re not a DA, SQL authentication and local workgroups are OK, but have their limits for SSAS.

RDCMAddMachine

You may want to try using Remote Desktop Connection Manager to access your guest operating systems. In the latest version a VM ID can be included, giving console style access without needing to connect to each guest from the Hyper-V Manager.

VM IDs can be found with the following bit of PowerShell.

Get-VMMemory -VMName "PF*" | Select VMName, ID | Sort-Object VMName

Very last thing: I mentioned my host machine had 3x physical hard drives, the third of which is a huge 4 TB block. To keep my guest operating systems fairly customer independent, you’ll have seen I only gave the standard virtual hard drives in the PowerShell 100GB of space each. What I then do is give a guest an additional, much larger virtual disk which resides on the host’s big data volume. Work is done there. Then, post project completion, this customer specific virtual data disk can just be dropped/archived/moved away and the guest machine is ready for the next work item. It’s of course a management overhead, but it helps keep things clean.
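
A rough sketch of that drop-away step, assuming a hypothetical customer disk called Customer1_Data.vhdx attached to the guest, placeholder folder paths, and the guest shut down first.

#Detach the customer specific data disk from the guest (the VHDX file stays on the host)
Get-VMHardDiskDrive -VMName $VMName |
    Where-Object { $_.Path -like "*Customer1_Data.vhdx" } |
    Remove-VMHardDiskDrive
 
#Archive the VHDX file ready for the next work item (paths are placeholders)
Move-Item -Path "E:\VMData\Customer1_Data.vhdx" -Destination "E:\Archive\"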

Many thanks for reading.


Paul’s Frog Blog

Paul is a Microsoft Data Platform MVP with 10+ years’ experience working with the complete on premises SQL Server stack in a variety of roles and industries. Now, as the Business Intelligence Consultant at Purple Frog Systems, he has turned his keyboard to big data solutions in the Microsoft cloud, specialising in Azure Data Lake Analytics, Azure Data Factory, Azure Stream Analytics, Event Hubs and IoT. Paul is also a STEM Ambassador for the networking education in schools’ programme, PASS chapter leader for the Microsoft Data Platform Group – Birmingham, a SQL Bits, SQL Relay and SQL Saturday speaker and helper, and currently the Stack Overflow top user for Azure Data Factory, as well as a very active member of the technical community.
Thanks for visiting.
@mrpaulandrew