
Azure SSIS – How to Setup, Deploy, Execute & Schedule Packages

Welcome back to work in 2018! 🙂

Let’s get stuck in with a hot topic. How do we actually use our beloved SQL Server Integration Services (SSIS) packages in Azure with all this new platform as a service (PaaS) stuff? Well, in this post I’m going to go through it end to end.


First, some caveats:

  1. Several of the Azure components required for this are still in public preview and can be considered as ‘not finished’. Meaning this is going to seem a little painful.
  2. The ADFv2 developer UI is still in private preview. But I’ve cheated and used it to generate the JSON to help you guys. Hopefully it’ll be available publicly soon.
  3. I’ve casually used my Microsoft sponsored Azure subscription and not had to worry about the cost of these services. I advise you check with the bill payer.
  4. Everything below has been done in a deliberate order. Especially the service setup.
  5. Everything below has been deployed in the same Azure region to avoid any cross data centre authentication unpleasantness. I suggest doing the same. I used EastUS for this post.

Ok, moving on…

Azure Services Setup

Now, let’s set some expectations. To get our SSIS packages running in Azure we need a collection of services. When working on premises this gets neatly wrapped up with a pretty bow into something called SQL Server. Sadly in Azure there is no wrapping, no pretty bow and nothing that neat. Yet!

Azure Data Factory Version 2 (ADFv2)

First up, my friend Azure Data Factory. As you’ll probably already know, now in version 2 it has the ability to create recurring schedules and to house the thing we need to execute our SSIS packages, called the Integration Runtime (IR). Without ADF we don’t get the IR and can’t execute SSIS packages. My hope is that the IR will eventually become a standalone service, but for now it’s contained within ADF.

To deploy the service we can simply use the Azure portal blades. Whatever location you choose here make sure you use the same location for everything that follows. Just for ease. Also, it might be worth looking ahead to ensure everything you want is actually available in your preferred Azure region.

Let’s park that service and move on.

Azure SQL Server Instance

Next, we need a logical SQL Server instance to house the SSIS database. Typically you deploy one of these when you create a normal Azure SQLDB (without realising), but they can be created on their own without any databases attached. To be clear, this is not an Azure SQL Server Managed Instance. It does not have a SQL Agent and is just the endpoint we connect to and authenticate against with some SQL credentials.

Again, to deploy the service we can simply use the Azure portal blades. On this one make sure the box is checked to ‘Allow Azure services to access server’ and of course make a note of the user name and password. If you don’t check the box ADF will not be able to create the SSISDB in the logical instance later on.
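
If you’d rather script the logical server than click through the portal, here’s a minimal sketch using the AzureRM.Sql cmdlets. The resource group, server name and location are placeholders for your own values, and the 0.0.0.0 firewall rule is the scripted equivalent of the ‘Allow Azure services to access server’ checkbox.

# Assumed placeholder values - change to suit your environment.
$resourceGroupName = "ADFv2"
$serverName = "myssislogicalserver"
$location = "East US"

# Create the logical SQL Server instance (you'll be prompted for the admin credentials).
New-AzureRmSqlServer -ResourceGroupName $resourceGroupName `
                     -ServerName $serverName `
                     -Location $location `
                     -SqlAdministratorCredentials (Get-Credential)

# The 0.0.0.0 rule is how the 'Allow Azure services to access server' checkbox is represented.
New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourceGroupName `
                                 -ServerName $serverName `
                                 -FirewallRuleName "AllowAllWindowsAzureIps" `
                                 -StartIpAddress "0.0.0.0" `
                                 -EndIpAddress "0.0.0.0"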

Once the SQL instance is deployed, go into the service blades and update the firewall rules to allow access from your current external IP address. This isn’t anything specifically required for SSIS; you need to do it for any SQLDB connection. Which is something that I always forget, so I’m telling you to help me remember! Thanks.
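
If you prefer, the same client IP firewall rule can be added with PowerShell, reusing the variables from the sketch above. The rule name and IP address shown are placeholders you’d swap for your own.

# Add your current external IP (placeholder shown) as a firewall rule on the logical server.
New-AzureRmSqlServerFirewallRule -ResourceGroupName $resourceGroupName `
                                 -ServerName $serverName `
                                 -FirewallRuleName "ClientIPAddress" `
                                 -StartIpAddress "203.0.113.10" `
                                 -EndIpAddress "203.0.113.10"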


Azure SSIS IR

Next on the list we need the shiny new thing, the SSIS IR, which needs creating and then starting up. In my opinion this is a copy of the SQL Server MsDtsSrvr.exe taken from the on premises product and run in the cloud on a VM that we don’t get access to… Under the covers it probably is, but I’m guessing.

Sadly we don’t have a nice Azure portal user interface for this yet. It’s going to need some PowerShell. Make sure you have your Azure modules up to date and run the following with the top set of variables assigned as required.

# Azure Data Factory version 2 information:
$SubscriptionId = ""
$ResourceGroupName = ""
$DataFactoryName = "" 
$DataFactoryLocation = ""
 
# Azure-SSIS integration runtime information:
$AzureSSISName = ""
$AzureSSISDescription = ""
 
$AzureSSISNodeSize = "Standard_A4_v2"
$AzureSSISNodeNumber = 2 
$AzureSSISMaxParallelExecutionsPerNode = 2 
$SSISDBPricingTier = "S1" 
 
# Azure Logical SQL instance information:
$SSISDBServerEndpoint = ".database.windows.net"
$SSISDBServerAdminUserName = ""
$SSISDBServerAdminPassword = ""
 
 
<# LEAVE EVERYTHING ELSE BELOW UNCHANGED #>
 
$SSISDBConnectionString = "Data Source=" + $SSISDBServerEndpoint + ";User ID="+ $SSISDBServerAdminUserName +";Password="+ $SSISDBServerAdminPassword
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection $SSISDBConnectionString;
Try
{
    $sqlConnection.Open();
}
Catch [System.Data.SqlClient.SqlException]
{
    Write-Warning "Cannot connect to your Azure SQL DB logical server/Azure SQL MI server, exception: $_"  ;
    Write-Warning "Please make sure the server you specified has already been created. Do you want to proceed? [Y/N]"
    $yn = Read-Host
    if(!($yn -ieq "Y"))
    {
        Return;
    } 
}
 
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $SubscriptionId
 
Set-AzureRmDataFactoryV2 -ResourceGroupName $ResourceGroupName `
                        -Location $DataFactoryLocation `
                        -Name $DataFactoryName
 
$secpasswd = ConvertTo-SecureString $SSISDBServerAdminPassword -AsPlainText -Force
$serverCreds = New-Object System.Management.Automation.PSCredential($SSISDBServerAdminUserName, $secpasswd)
Set-AzureRmDataFactoryV2IntegrationRuntime  -ResourceGroupName $ResourceGroupName `
                                            -DataFactoryName $DataFactoryName `
                                            -Name $AzureSSISName `
                                            -Type Managed `
                                            -CatalogServerEndpoint $SSISDBServerEndpoint `
                                            -CatalogAdminCredential $serverCreds `
                                            -CatalogPricingTier $SSISDBPricingTier `
                                            -Description $AzureSSISDescription `
                                            -Location $DataFactoryLocation `
                                            -NodeSize $AzureSSISNodeSize `
                                            -NodeCount $AzureSSISNodeNumber `
                                            -MaxParallelExecutionsPerNode $AzureSSISMaxParallelExecutionsPerNode
 
write-host("##### Starting your Azure-SSIS integration runtime. This takes 20 to 30 minutes to complete. #####")
Start-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
                                             -DataFactoryName $DataFactoryName `
                                             -Name $AzureSSISName `
                                             -Force
 
write-host("##### Completed #####")
write-host("If any cmdlet is unsuccessful, please consider using -Debug option for diagnostics.")

I confess I’ve stolen this from Microsoft’s documentation here and tweaked it slightly to use the more precise subscription ID parameter, as well as a couple of other things that I felt made life easier. While this is running you should get a progress bar from the PowerShell ISE for the SSIS IR service starting, which really does take around 30 minutes. Be patient.

If you’d prefer to do this through the ADF PowerShell deployment cmdlets here is the JSON to use. Again assign values to the attributes as required. The JSON will create the SSIS IR, but it won’t start it.

{
"name": "",
"properties": {
	"type": "Managed",
	"description": "",
	"typeProperties": {
		"computeProperties": {
			"location": "EastUS",
			"nodeSize": "Standard_A4_v2",
			"numberOfNodes": 2,
			"maxParallelExecutionsPerNode": 2
		},
		"ssisProperties": {
			"catalogInfo": {
				"catalogServerEndpoint": "Your Instance.database.windows.net",
				"catalogAdminUserName": "user",
				"catalogAdminPassword": {
					"type": "SecureString",
					"value": "password"
				},
				"catalogPricingTier": "S1"
}}}}}
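
Whichever way you deploy that JSON definition, remember the IR still has to be started separately. As a small sketch using the same cmdlet from the script above (the resource group, data factory and IR names are placeholders):

Start-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName "ADFv2" `
                                             -DataFactoryName "MyDataFactory" `
                                             -Name "MySsisIr" `
                                             -Force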

For info, the new developer UI gives you a wizard to go through the steps and a nice screen to see that the IR now exists. Until you get public access to this you’ll just have to assume it’s there.

Anyway, moving on. Once the IR has deployed and started you’ll have an SSIS IR and, in your logical SQL instance, the SSISDB. Exciting!

Open SSMS, making sure you are using version 17.2 or later. In the connection dialogue box, as well as the usual bits, go to Options and explicitly set that you’re connecting to the SSISDB database. If you don’t, the Integration Services branch won’t appear in SSMS Object Explorer. You’ll see the database tables, views and stored procs, but won’t have any of the SSIS options to control packages.

If all goes well you should get a very familiar sight…

Creating & Deploying an SSIS Package

As this is a ‘how to’ guide I’ve done something very simple in my package. It basically copies and pastes a CSV file from one Azure Data Lake Storage (ADLS) folder to another. I’m going to assume we are all familiar with more complex SSIS packages. Plus, the point of this post is getting the services working, not doing any data transformations.

SSIS Azure Feature Pack

What is probably worth pointing out is that if you want to work with Azure services in SSIS from SQL Server Data Tools (SSDT) you need to install the Azure Feature Pack. Download and install it from the link below:
https://docs.microsoft.com/en-us/sql/integration-services/azure-feature-pack-for-integration-services-ssis

Once installed, you’ll have the Azure services available in your SSIS Toolbox (Control Flow/Data Flow) and Connection Managers.

For info, the Azure Data Lake Storage connection manager now offers the option to use a service principal to authenticate.


Package Deployment

Now I’m not going to teach a granny to suck eggs (or whatever the phrase is). To deploy the package you don’t need to do anything special. I simply created the ISPAC file in SSDT and used the project deployment wizard in SSMS. The deployment wizard launched from the project didn’t work in my version of SSDT running in Visual Studio 2015. Not sure why at this point, so I used SSMS.

Package Execution

Similarly I’m going to assume we all know how to execute an SSIS package from Management Studio. It’s basically the same right-click menu where the deployment wizard gets launched. Granny, eggs, etc.

Or, we can execute a couple of stored procedures using some good old fashioned T-SQL (remember that?). See below.
 

DECLARE @execution_id bigint;  
 
EXEC [SSISDB].[catalog].[create_execution] 
	@package_name=N'DataLakeCopy.dtsx', 
	@execution_id=@execution_id OUTPUT,
	@folder_name=N'Testing',
	@project_name=N'AzureSSIS',
	@use32bitruntime=False; 
 
EXEC [SSISDB].[catalog].[start_execution] 
	@execution_id;
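
If you want to check what happened after kicking off a run this way, the SSISDB catalog views can tell you. A minimal sketch (the status column is a code, where 7 means succeeded):

SELECT	execution_id,
	status,		-- 1 created, 2 running, 3 cancelled, 4 failed, 7 succeeded
	start_time,
	end_time
FROM	[SSISDB].[catalog].[executions]
ORDER BY execution_id DESC;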

I mention this because we’ll need it when we schedule the package in ADF later.

Scheduling with ADFv2

Ok, now the fun part: scheduling the package. Currently we don’t have a SQL Agent on our logical instance and we don’t have Elastic Database Jobs (coming soon), meaning we need to use ADF.

Thankfully in ADFv2 this does not involve provisioning time slices! Can I get a hallelujah? 🙂

This is the part where I cheated and used the new developer UI, but I’ll share all the JSON in case you don’t have a template for these bits in ADFv2 yet.

Linked Service to SQLDB

To allow ADF to access and authenticate against our logical SQL instance we need a linked service. We did of course already provide this information when creating the SSIS IR, but ADF needs it again to store and use for activity executions.

{
    "name": "SSISDB",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "
Integrated Security=False;
Encrypt=True;
Connection Timeout=30;
Data Source=;
Initial Catalog=;
User ID="
            }
        }
    }
}

The Pipeline

Nothing extra here, a very, very simple pipeline similar to what you’ve previously seen in ADFv1, only without the time slice schedule values and other fluff.

{
    "name": "RunSSISPackage",
    "properties": {
        "activities": []
    }
}

Stored Procedure Activity

Next, the main bit of the instruction set, the activity. You’ll know from the T-SQL above that in the SSISDB you need to first create an instance of the execution for the SSIS package, then pass the execution ID to the start execution stored procedure. ADF still can’t handle this directly with one activity giving its output to the second, meaning we have to wrap up the T-SQL we want into a parameter for the sp_executesql stored procedure. Everything can be solved with more abstraction, right? 🙂

            {
                "name": "CreateExecution",
                "type": "SqlServerStoredProcedure",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 20
                },
                "typeProperties": {
                    "storedProcedureName": "sp_executesql",
                    "storedProcedureParameters": {
                        "stmt": {
                            "value": "
Declare @execution_id bigint;  
EXEC [SSISDB].[catalog].[create_execution] 
@package_name=N'DataLakeCopy.dtsx', 
@execution_id=@execution_id OUTPUT,
@folder_name=N'Testing',
@project_name=N'AzureSSIS',
@use32bitruntime=False; 
 
EXEC [SSISDB].[catalog].[start_execution] 
@execution_id;"
                        }
                    }
                },
                "linkedServiceName": {
                    "referenceName": "SSISDB",
                    "type": "LinkedServiceReference"
                }
            }

Scheduled Trigger

Last but not least, our scheduled trigger. Very similar to what we get in the SQL Agent, but now living in ADF! For this post I went for 1:30pm daily as a test.

{
    "name": "Daily",
    "properties": {
        "runtimeState": "Stopped", //change to Started
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "RunSSISPackage",
                    "type": "PipelineReference"
                },
                "parameters": {}
            }
        ],
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2018-01-05T13:23:16.395Z",
                "timeZone": "UTC",
                "schedule": {
                    "minutes": [
                        30
                    ],
                    "hours": [
                        13
                    ]
                }
            }
        }
    }
}
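
Until the developer UI is public you’ll probably deploy and enable the trigger with PowerShell. Here’s a hedged sketch using the AzureRM.DataFactoryV2 cmdlets; the file path, resource group and factory names are placeholders, and remember the trigger only fires once it has been started.

# Deploy the trigger definition from the JSON file above (placeholder path shown).
Set-AzureRmDataFactoryV2Trigger -ResourceGroupName "ADFv2" `
                                -DataFactoryName "MyDataFactory" `
                                -Name "Daily" `
                                -DefinitionFile ".\Daily.json"

# Flip the runtimeState from Stopped to Started.
Start-AzureRmDataFactoryV2Trigger -ResourceGroupName "ADFv2" `
                                  -DataFactoryName "MyDataFactory" `
                                  -Name "Daily"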

The new UI gives you a nice agent style screen to create more complex schedules, even allowing triggers every minute if you wish. Here’s a teaser screen shot:

I hope this gave you an end to end look at how to get your SSIS packages running in Azure and saved you looking through 10 different bits of Microsoft documentation.

Many thanks for reading



 

Business Intelligence in Azure – SQLBits 2018 Precon

What can you expect from my SQLBits pre-conference training day in February 2018 at the London Olympia?

Well my friends, in short, we are going to take a whirlwind tour of the entire business intelligence stack of services in Azure. No stone will be left unturned. No service will be left without scalability. We’ll cover them all and we certainly aren’t going to check with the Azure bill payer before turning up the compute on our data transforms.



What will we actually cover?

With new cloud services and advancements in locally hosted platforms developing a lambda architecture is becoming the new normal. In this full day of high level training we’ll learn how to architect hybrid business intelligence solutions using Microsoft Azure offerings. We’ll explore the roles of these cloud data services and how to make them work for you in this complete overview of business intelligence on the Microsoft cloud data platform.

Here’s how we’ll break that down during the day…

Module 1 – Getting Started with Azure

Using platform as a service products is great, but let’s take a step back. To kick off we’ll cover the basics for deploying and managing your Azure services. Navigating the Azure portal and building dashboards isn’t always as intuitive as we’d like. What’s a resource group? And why is it important to understand your Azure Active Directory tenant?

Module 2 – An Overview of BI in Azure

What’s available for the business intelligence architect in the cloud and how might these services relate to traditional on premises ETL and cube data flows. Is ETL enough for our big unstructured data sources or do we need to mix things up and add some more letters to the acronym in the cloud?

Module 3 – Databases in Azure (SQL DB, SQL DW, Cosmos DB, SQL MI)

It’s SQL Server Jim, but not as we know it. Check out the PaaS flavours of our long term on premises friends. Can we trade the agent and an operating system for that sliding bar of scalable compute? DTU and DWU are here to stay, with new SLAs relating to throughput. Who’s on ACID, and as BI people do we care?

Module 4 – The Azure Machines are here to Learn

Data scientist or developer? Azure Machine Learning was designed for applied machine learning. Use best-in-class algorithms in a simple drag-and-drop interface. We’ll go from idea to deployment in a matter of clicks. Without a terminator in sight!

Module 5 – Swimming in the Data Lake with U-SQL

Let’s understand the role of this hyper-scale two tier big data technology and how to harness its power with U-SQL, the offspring of T-SQL and C#. We’ll cover everything you need to know to get started developing solutions with Azure Data Lake.

Module 6 – IoT, Event Hubs and Azure Stream Analytics

Real-time data is everywhere. We need to use it and unlock it as a rich source of information that can be channelled to react to events, produce alerts from sensor values or in 9000 other scenarios. In this module, we’ll learn how, using Azure messaging hubs and Azure Stream Analytics.

Module 7 – Power BI, our Semantic Layer, is it All Things to All People?

Combining all our data sources in one place with rich visuals and a flexible data modelling tool. Power BI takes it all, small data, big data, streaming data, website content and more. But we really need a Venn diagram to decide when/where it’s needed.

Module 8 – Data Integration with Azure Data Factory and SSIS

The new integration runtime is here. But how do we unlock the scale out potential of our control flow and data flow? Let’s learn to create the perfect dependency driven pipeline for our data flows. Plus, how to work with the Azure Batch Service should you need that extensibility.

 

Finally we’ll wrap up the day by playing the Azure icon game, which you’ll all now be familiar with and able to complete with a perfect score having completed this training day 🙂

Many thanks for reading and I hope to see you in February, it’s going to be magic 😉

Register now: https://www.regonline.com/registration/Checkin.aspx?EventID=2023328

All training day content is subject to change, dependent on timings and the will of the demo gods!


 

RDP to Azure Batch Service Compute Nodes

Did you know it’s now possible to RDP to your Azure Batch Service compute nodes?

I’ve used the Batch Service to handle the compute for my Azure Data Factory custom activities for a while now. Which I’ve basically been doing blindly, because the code execution and logging is provided to ADF with no visibility of the underlying pool of VMs doing the work. Well, no more is this the case!

In the Azure portal go to your Batch Service > Pools > Select Pool > Nodes > Select Node > Connect.

The Connect button then presents you with the option to add a new user before giving you the external IP address and an RDP file.

Once you’ve connected you’ll find a virtual machine, but with a few slight differences.

The directory on the VM used for any ADF custom activities will be something like the following path:

C:\user\tasks\workitems\adf-{guid}\job-0000000001\{guid}-{activityname}-\wd\

I hope this is helpful when you go beyond the basics of Creating Azure Data Factory Custom Activities.

Many thanks


Azure Data Lake – The Services. The U-SQL. The C# (Reference Guide)

This post is a reference guide to support an event talk or webinar. The content is intended to assist the audience only. Thank you.

Abstract

How do we implement Azure Data Lake? How does a lake fit into our data platform architecture? Is Data Lake going to run in isolation or be part of a larger pipeline? How do we use and work with U-SQL? Does size matter?! The answers to all these questions and more in this session as we immerse ourselves in the lake, that’s in a cloud. We’ll take an end to end look at the components and understand why the compute and storage are separate services. For the developers, what tools should we be using and where should we deploy our U-SQL scripts? Also, what options are available for handling our C# code behind and supporting assemblies? We’ll cover everything you need to know to get started developing data solutions with Azure Data Lake. Finally, let’s extend the U-SQL capabilities with the Microsoft Cognitive Services!

Links

What is Azure Data Lake? The Microsoft version.
https://azure.microsoft.com/en-gb/solutions/data-lake/

Understanding the ADL Analytics Unit
https://blogs.msdn.microsoft.com/azuredatalake/2016/10/12/understanding-adl-analytics-unit/

Why use Azure Data Lake? The Microsoft version.
https://azure.microsoft.com/en-gb/solutions/data-lake/

Consuming Data Lake with Power BI – Cross tenant data refreshes.
https://www.purplefrogsystems.com/paul/2017/06/connecting-power-bi-to-azure-data-lake-store-across-tenants/

U-SQL String Data Type 128KB Limit
https://feedback.azure.com/forums/327234-data-lake/suggestions/13416093-usql-string-data-type-has-a-size-limit-of-128kb

Creating a U-SQL Merge Statement
https://www.purplefrogsystems.com/paul/2016/12/writing-a-u-sql-merge-statement/

U-SQL Looping
https://www.purplefrogsystems.com/paul/2017/05/recursive-u-sql-with-powershell-u-sql-looping/

U-SQL Date Dimension
https://www.purplefrogsystems.com/paul/2017/02/creating-a-u-sql-date-dimension-numbers-table-in-azure-data-lake/

Further Reading

Microsoft Blog – An Introduction to U-SQL in Azure Data Lake
https://blogs.msdn.microsoft.com/robinlester/2016/01/04/an-introduction-to-u-sql-in-azure-data-lake/

Microsoft Documentation – U-SQL Programmability Guide
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide

Microsoft MSDN – U-SQL Language Reference
https://msdn.microsoft.com/en-US/library/azure/mt591959(Azure.100).aspx

SQL Server Central – Stairway to U-SQL
http://www.sqlservercentral.com/stairway/142480/

Stack Overflow – U-SQL Tag
http://stackoverflow.com/questions/tagged/u-sql

 

Cognitive services with U-SQL in Azure Data Lake

https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-cognitive


What’s New in Azure Data Factory Version 2 (ADFv2)

I’m sure for most cloud data wranglers the release of Azure Data Factory Version 2 has been long overdue. Well good news friends. It’s here! So, what new features does the service now offer for handling our Azure data solutions?… In short, loads!

In this post, I’ll try and give you an overview of what’s new and what to expect from ADFv2. However, I’m sure more questions than answers will be raised here. As developers we must ask why and how when presented with anything. But let’s start somewhere.

Note: the order of the sub headings below was intentional.

Before diving into the new and shiny I think we need to deal with a couple of concepts to understand why ADFv2 is a completely new service and not just an extension of what version 1 offered.

Let’s compare Azure Data Factory Version 1 and Version 2 at a high level.

  • ADFv1 – is a service designed for the batch data processing of time series data.
  • ADFv2 – is a very general-purpose hybrid data integration service with very flexible execution patterns.

This makes ADFv2 a very different animal and something that can now handle scale out control flow and data flow patterns for all our ETL needs. Microsoft seem to have got the message here, following lots of feedback from the community, that this is the framework we want for developing our data flows. Plus, it’s how we’ve been working for a long time with the very mature SQL Server Integration Services (SSIS).
 
 
 

Concepts:

Integration Runtime (IR)

Everything done in Azure Data Factory v2 will use the Integration Runtime engine. The IR is the core service component for ADFv2. It is to the ADFv2 JSON framework of instructions what the Common Language Runtime (CLR) is to the .Net framework.

Currently the IR can be virtualised to live in Azure, or it can be used on premises as a local emulator/endpoint. To give each of these instances their proper JSON label, the IR can be ‘SelfHosted’ or ‘Managed’. To try and put that into context, consider the ADFv1 Data Management Gateway as a self-hosted IR endpoint (for now). This distinction between self-hosted and managed IRs will also be reflected in the data movement costs on your subscription bill, but let’s not get distracted with pricing yet.

The new IR is designed to perform three operations:

  1. Move data.
  2. Execute ADF activities.
  3. Execute SSIS packages.

Of course, points 1 and 2 here aren’t really anything new as we could already do this in ADFv1, but point 3 is what should spark the excitement. It’s this ability to transform our data, which has been missing from Azure, that we’ve badly needed.

With the IR in ADFv2 this means we can now lift and shift our existing on premises SSIS packages into the cloud or start with a blank canvas and create cloud based scale out control flow and data flow pipelines, facilitated by the new capabilities in ADFv2.

Without crossing any lines, the IR will become the way you start using SSIS in Azure, regardless of whether you decide to wrap it in ADFv2 or not.

Branching

This next concept won’t be new, I assume, for anyone that’s used SSIS. But it’s great to learn that we now have it available in the ADFv2 control flow (at an activity level).

Post execution our downstream activities can now be dependent on four possible outcomes as standard.

  • On success
  • On failure
  • On completion
  • On skip

Also, custom ‘if’ conditions will be available for branching based on expressions (more on expressions later).


That’s the high-level concepts dealt with. Now, for ease of reading let’s break the new features down into two main sections. The service level changes and then the additions to our toolkit of ADF activities.

Service Features:

Web Based Developer UI

This won’t be available for use until later in the year but having a web based development tool to build our ADF pipelines is very exciting!… No more hand crafting the JSON. I’ll leave this point just with a sneaky picture. I’m sure this explains more than I can in words.

It will include an interface to GitHub for source control and the ability to execute the activities directly in the development environment.

For field mappings between source and destination the new UI will also support a drag and drop panel, like SSIS.

Better quality screen shots to follow as soon as it’s available.

Expressions & Parameters

Like most other Microsoft data tools, expressions give us that valuable bit of inline extensibility to achieve things more dynamically when developing. Within our ADFv2 JSON we can now influence the values of our attributes in a similar way using a rich new set of custom inner syntax, secondary to the ADF JSON. To support the expressions factory-wide, parameters will become first class citizens in the service.

As a basic example, before we might do something like this:

1
"name": "value"

Now we can have an expression and return the value from elsewhere, maybe using a parameter like this:

1
"name": "@parameters('StartingDatasetName')"

With the @ symbol becoming important here as the start of the inline expression. The expression syntax is rich and offers a host of inline functions to call and manipulate our service (a quick combined example follows the list). These include:

  • String functions – concat, substring, replace, indexof etc.
  • Collection functions – length, union, first, last etc.
  • Logic functions – equals, less than, greater than, and, or, not etc.
  • Conversion functions – coalesce, xpath, array, int, string, json etc.
  • Math functions – add, sub, div, mod, min, max etc.
  • Date functions – utcnow, addminutes, addhours, format etc.
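
As a hedged illustration only, these functions can be composed within a single expression. For example, combining concat with the pipeline run ID system variable to build a folder path (the attribute name is just a placeholder):

"folderPath": "@concat('output/', pipeline().RunId)"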

System Variables

As a good follow on from the new expressions/parameters available we now also have a handful of system variables to support our JSON. These are scoped at two levels with ADFv2.

  1. Pipeline scoped.
  2. Trigger scoped (more on triggers later).

The system variables extend the parameter syntax allowing us to return values like the data factory name, the pipeline name and a specific run ID. Variables can be called in the following way using the new @ symbol prefix to reference the dynamic content:

"attribute": "@pipeline().RunId"

Inline Pipelines

For me this is a deployment convenience thing. Previously our linked services, datasets and pipelines were separate JSON files within our Visual Studio solution. Now an inline pipeline can house all its required parts within its own properties. Personally, I like having a single reusable linked service for various datasets in one place that only needs updating with new credentials once. Why would you duplicate these settings as part of several pipelines? Maybe if you want some complex expressions to influence your data handling and you are limited by the scope of a system variable, an inline pipeline may then be required.

Anyway, this is what the JSON looks like:

{
    "name": "SomePipeline",
    "properties": {
		"activities": [], 		//before
		"linkedServices": [], 		//now available
		"datasets": [],			//now available
		"parameters": []		//now available
		}
}

Beware: if you use the ADF copy wizard via the Azure portal, an inline pipeline is what you’ll now get back.

Activity Retry & Pipeline Concurrency

In ADFv2 our activities will be categorised as control and non-control types. This is mainly to support the use of our new activities like ‘ForEach’ (more on the activity itself later). A ‘ForEach’ activity sits within the category of a control type, meaning it will not have retry, long retry and concurrency options available within its JSON policy block. I think it’s logical that something like a sequential loop can’t run concurrently, so just be aware that such JSON attributes will now be validated depending on the category of the activity.

Our familiar and existing activities like ‘Copy’, ‘Hive’ and ‘U-SQL’ will therefore be categorised as non-control types with policy attributes remaining the same.

Event Triggers

Like our close friend Azure Logic Apps, ADFv2 can perform actions based on triggered events. So far, the only working example of this requires an Azure Blob Storage account that will output a file arrival event. It will be great to replace, with this event based approach, those time series polling activities that had to keep retrying until a file appeared.

Scheduled Triggers

You guessed it. We can now finally schedule our ADF executions using a defined recurring pattern (with enough JSON). This schedule will sit above our pipelines as a separate component within ADFv2.

  • A trigger will be able to start multiple pipelines.
  • A pipeline can be started by multiple scheduled triggers.

Let’s look at some JSON to help with the understanding.

{
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "<Minute, Hour, Day, Week or Year>",
        "interval": 1,                // optional, how often to fire (defaults to 1)
        "startTime": "",
        "endTime": "",
        "timeZone": "UTC",
        "schedule": {                 // optional (advanced scheduling specifics)
          "minutes": [ 0 ],           // 0-59
          "hours": [ 0 ],             // 0-23
          "weekDays": [ "" ],
          "monthDays": [ 1 ],         // 1-31
          "monthlyOccurrences": [
            {
              "day": "",
              "occurrence": 1         // 1-5
            }
          ]
        }
      }
    },
    "pipelines": [ // pipeline(s) to start
      {
        "pipelineReference": {
          "type": "PipelineReference",
          "referenceName": ""
        },
        "parameters": {
          "": {
            "type": "Expression",
            "value": ""
          }
        }
      }
    ]
  }
}

Tumbling Window Triggers

For me, ADFv1 time slices simply have a new name. A tumbling window is a time slice in ADFv2. Enough said on that I think.

Depends On

We know that ADF is a dependency driven tool in terms of datasets. But now activities are also dependency driven, with the execution of one providing the necessary information for the execution of the next. The introduction of a new ‘DependsOn’ attribute/clause can be used within an activity to drive this behaviour.

The ‘DependsOn’ clause will also provide the branching behaviour mentioned above. Quick example:

"dependsOn": [ { "dependencyConditions": [ "Succeeded" ], "activity": "DownstreamActivity" } ]

More to come with this explanation later when we talk about the new ‘LookUp’ activity.

Azure Monitor & OMS Integration

Diagnostic logs for various other Azure services have been available for a while in Azure Monitor and OMS. Now, with a little bit of setup, ADFv2 will be able to output much richer logs with various metrics available across the data factory service. These metrics will include:

  • Successful pipeline runs.
  • Failed pipeline runs.
  • Successful activity runs.
  • Failed activity runs.
  • Successful trigger runs.
  • Failed trigger runs.

This will be a great improvement on the current PowerShell or .Net work required with version 1 just to monitor issues at a high level.
If you want to know more about Azure Monitor go here: https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-azure-monitor

PowerShell

It’s worth being aware that to support ADFv2 there will be a new set of PowerShell cmdlets available within the Azure module. Basically, all named the same as the cmdlets used for version 1 of the service, but now including ‘V2’ somewhere in the cmdlet name and accepting parameters specific to the new features.

Let’s start with the obvious one:

New-AzureRmDataFactoryV2 `
	-ResourceGroupName "ADFv2" `
	-Name "PaulsFunFactoryV2" `
	-Location "NorthEurope"

Or, a splatting friendly version for the PowerShell geeks 🙂

$parameters = @{
    Name = "PaulsFunFactoryV2"
    Location = "NorthEurope"
    ResourceGroupName = "ADFv2"
}
New-AzureRmDataFactoryV2  @parameters

Pricing

This isn’t a new feature as such, but probably worth mentioning that with all the new components and functionality in ADFv2 there is a new pricing model that you’ll need to do battle with. More details here: https://azure.microsoft.com/en-gb/pricing/details/data-factory/v2

Note: the new pricing tables for SSIS as a service with variations on CPU, RAM and Storage!


Activities:

Lookup

This is not an SSIS data transformation lookup! For ADFv2 we can look up a list of datasets to be used in another downstream activity, like a Copy. I mentioned earlier that we now have a ‘DependsOn’ clause in our JSON; Lookup is a good example of why we might use it.

Scenario: we have a pipeline containing two activities. The first looks up some list of datasets (maybe some tables in a SQLDB). The second performs the data movement using the results of the lookup, so it knows what to copy. This is very much a dataset level handling operation and not a row level data join. I think a picture is required:

Here’s a JSON snippet, which will probably be a familiar structure for those of you that have ever created an ARM Template.

{
"name": "SomePipeline",
"properties": {
    "activities": [
        {
            "name": "LookupActivity", //First
            "type": "Lookup"
        },
        {
            "name": "CopyActivity", //Second
            "type": "Copy",              
            "dependsOn": [  //Dependancy
                {
                    "activity": "LookupActivity"
                }
            ],
            "inputs": [],  //From Lookup
            "outputs": []
        }
    ]        
}}

Currently the following sources can be used as lookups, all of which need to return a JSON dataset (a hedged sketch of the activity’s type properties follows the list).

  • Azure Storage (Blob and Table)
  • On Premises Files
  • Azure SQL DB
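
For completeness, here’s a rough sketch of what the Lookup activity’s type properties can look like when reading from an Azure SQL DB. Treat the attribute names as my best understanding of the preview schema, and the dataset, table and query names as hypothetical placeholders.

{
    "name": "LookupActivity",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "SqlSource",
            "sqlReaderQuery": "SELECT TableName FROM dbo.ControlTable"
        },
        "dataset": {
            "referenceName": "ControlTableDataset",
            "type": "DatasetReference"
        },
        "firstRowOnly": false
    }
}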

HTTP

With the HTTP activity we can call out to any web service directly from our pipelines. The call itself is a little more involved than a typical web hook and requires an XML job request to be created within a workspace. Like other activities, ADF doesn’t handle the work itself; it passes off the instructions to another service. In this case it uses the Azure Queue Service. The queue service is the compute for this activity that handles the request and HTTP response, and if successful this gets thrown back up to ADF.

There’s something about needing XML inside JSON for this activity that just seems perverse. So much so that I’m not going to give you a code snippet 🙂

Web (REST)

Our new web activity type is simply a REST API caller, which I assume doesn’t require much more explanation. In ADFv1, if we wanted to make a REST call, a custom activity was required and we needed C# for the interface interaction. Now we can do it directly from the JSON with child attributes to cover all the usual suspects for REST APIs (a hedged sketch follows the list):

  • URL
  • Method (GET, POST, PUT)
  • Headers
  • Body
  • Authentication
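
A rough sketch of such an activity is below. The URL, header, body and credential values are placeholders, and the attribute names reflect my current understanding of the preview schema rather than anything official.

{
    "name": "CallSomeApi",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://example.com/api/resource",
        "method": "POST",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "{ \"message\": \"hello\" }",
        "authentication": {
            "type": "Basic",
            "username": "user",
            "password": "password"
        }
    }
}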

ForEach

The ForEach activity is probably self-explanatory for anyone with an ounce of programming experience. ADFv2 brings some enhancements to this. You can use a ForEach activity to simply iterate over a collection of defined items one at a time as you would expect. This is done by setting the IsSequential attribute of the activity to True. But you also have the ability to perform the activity in parallel, speeding up the processing time and using the scaling power of Azure.

For example: if you had a ‘ForEach’ activity iterating over a ‘Copy’ operation, with 10 different items and the attribute “isSequential” set to false, all copies will execute at once. ForEach then offers a new maximum of 20 concurrent iterations, compared to a single non-control activity with its concurrency supporting only a maximum of 10.

To try and clarify, the ForEach activity accepts items and is developed as a recursive thing. But on execution you can choose to process them sequentially or in parallel (up to a maximum of 20). Maybe a picture will help:

Going even deeper, the ‘ForEach’ activity is not confined to only processing a single activity, it can also iterate over a collection of other activities, meaning we can nest activities in a workflow where ‘ForEach’ is the parent/master activity. The items clause for the looping still needs to be provided as a JSON array, maybe by an expression and parameter within your pipeline. But those items can reference another inner block of activities.
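
To ground that, here’s a hedged sketch of the shape I’d expect a parallel ForEach to take. The ‘tableList’ parameter and inner ‘Copy’ activity are hypothetical placeholders; the isSequential, batchCount and items attributes are my understanding of the preview schema.

{
    "name": "CopyEachTable",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": false,
        "batchCount": 10,
        "items": {
            "value": "@pipeline().parameters.tableList",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "CopyOneTable",
                "type": "Copy"
            }
        ]
    }
}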

There will definitely be a follow up blog post on this one with some more detail and a better explanation, come back soon 🙂

Meta Data

Let’s start by defining what metadata is within the context of ADFv2. Metadata includes the structure, size and last modified date information about a dataset. A metadata activity will take a dataset as an input and output information about what it finds. This output could then be used as a point of validation for some downstream operation, or for some dynamic data transformation task that needs to be told what dataset structure to expect.

The input JSON for this dataset type needs to know the basic file format and location. Then the structure will be worked out based on what it finds.

{
"name": "MyDataset",
"properties": {
"type": "AzureBlob",
	"linkedService": {
		"referenceName": "StorageLinkedService",
		"type": "LinkedServiceReference"
	},
	"typeProperties": {
		"folderPath":"container/folder",
		"Filename": "file.json",
		"format":{
			"type":"JsonFormat"
			"nestedSeperator": ","
		}
	}
}}

Currently, only datasets within Azure blob storage are supported.
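
For reference, the activity that consumes a dataset like this is the ‘GetMetadata’ activity. A hedged sketch of how I’d expect it to be wired up is below, reusing the dataset defined above; the entries in fieldList are my assumptions about what can be requested.

{
    "name": "GetFileMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "MyDataset",
            "type": "DatasetReference"
        },
        "fieldList": [
            "size",
            "lastModified",
            "structure"
        ]
    }
}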

I’m hoping you are beginning to see how branching, dependency conditions, expressions and parameters bring you new options when working with ADFv2, where one new feature uses another.


The next couple as you’ll know aren’t new activities, but do have some new options available when creating them.

Custom

Previously in our .Net custom activity code we could only pass static extended properties from the ADF JSON down to the C# class. Now we have a new ‘referenceObjects’ attribute that can be used to access information about linked services and datasets. Example JSON snippet below for an ADFv2 custom activity:

{
  "name": "SomePipeline",
  "properties": {
    "activities": [{
      "type": "DotNetActivity",
      "linkedServiceName": {
        "referenceName": "AzureBatchLinkedService",
        "type": "LinkedServiceReference"
      },
		"referenceObjects": { //new bits
          "linkedServices": [],
		  "datasets": []
        },
        "extendedProperties": {}
}}}

This completes the configuration data for our C# methods, giving us access to things like the connection credentials used in our linked services. Within our custom activity code (no longer the ADFv1 IDotNetActivity interface) we need something like the following methods to get these values.

static void Main(string[] args)
{
    // Note: the generic type arguments below were stripped by the original page formatting;
    // they have been restored here as an assumption based on the referenced JSON files.
    CustomActivity customActivity =
        SafeJsonConvert.DeserializeObject<CustomActivity>(File.ReadAllText("activity.json"),
        DeserializationSettings);
    List<LinkedService> linkedServices =
        SafeJsonConvert.DeserializeObject<List<LinkedService>>(File.ReadAllText("linkedServices.json"),
        DeserializationSettings);
    List<Dataset> datasets =
        SafeJsonConvert.DeserializeObject<List<Dataset>>(File.ReadAllText("datasets.json"),
        DeserializationSettings);
}
 
static JsonSerializerSettings DeserializationSettings
{
    get
    {
        var DeserializationSettings = new JsonSerializerSettings
        {
            DateFormatHandling = Newtonsoft.Json.DateFormatHandling.IsoDateFormat,
            DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Utc,
            NullValueHandling = Newtonsoft.Json.NullValueHandling.Ignore,
            ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Serialize
        };
        // The generic type arguments were also lost here; Activity, LinkedService and Dataset are assumed.
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<Activity>("type"));
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<LinkedService>("type"));
        DeserializationSettings.Converters.Add(new PolymorphicDeserializeJsonConverter<Dataset>("type"));
        DeserializationSettings.Converters.Add(new TransformationJsonConverter());
 
        return DeserializationSettings;
    }
}

Copy

This can be a short one as we know what copy does. The activity now supports the following new data sources and destinations:

  • Dynamics CRM
  • Dynamics 365
  • Salesforce (with Azure Key Vault credentials)

Also, as standard, ‘Copy’ will be able to return the number of rows processed as part of its output. This could then be used with a branching ‘if’ condition when the number of expected rows isn’t available, for example (hedged sketch below).
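
Something along these lines is what I have in mind; the activity names are hypothetical, and the ‘rowsCopied’ output property and ‘IfCondition’ attributes reflect my reading of the preview documentation rather than a tested pattern.

{
    "name": "CheckRowCount",
    "type": "IfCondition",
    "dependsOn": [
        {
            "activity": "CopyActivity",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "expression": {
            "value": "@greater(activity('CopyActivity').output.rowsCopied, 0)",
            "type": "Expression"
        },
        "ifTrueActivities": [],
        "ifFalseActivities": []
    }
}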


Hopefully that’s everything and you’re now fully up to date with ADFv2 and all the new and exciting things it has to offer. Stay tuned for more in depth posts soon.

For more information check out the Microsoft documentation on ADF here: https://docs.microsoft.com/en-gb/azure/data-factory/introduction

Many thanks for reading.

 

Special thanks to Rob Sewell for reviewing and contributing towards the post.


Chaining Azure Data Factory Activities and Datasets

As I work with Azure Data Factory (ADF) and help others in the community more and more I encounter some confusion that seems to exist surrounding how to construct a complete dependency driven ADF solution. One that chains multiple executions and handles all of your requirements. In this post I hope to address some of that confusion and will allude to some emerging best practices for Azure Data Factory usage.

First a few simple questions:

  • Why is there confusion? In my opinion this is because the ADF copy wizard available via the Azure portal doesn’t help you architect a complete solution. It can be handy to reverse engineer certain things, but really the wizard tells you nothing about the choices you make and what the JSON behind it is doing. Like most wizards, it just leads to bad practices!
  • Do I need several data factory services for different business functions? No, you don’t have to. Pipelines within a single data factory service can be disconnected for different processes and often having all your linked services in one place is easier to manage. Plus a single factory offers reusability and means a single set of source code etc.
  • Do I need one pipeline per activity? No, you can house many activities in a single pipeline. Pipelines are just logic containers to assist you when managing data orchestration tasks. If you want an SSIS comparison, think of them as sequence containers. In a factory I may group all my on premises gateway uploads into a single pipeline. This means I can pause that stream of uploads on demand. Maybe when the gateway key needs to be refreshed etc.
  • Is the whole data factory a pipeline? Yes, in concept. But for technical terminology a pipeline is a specific ADF component. The marketing people do love to confuse us!
  • Can an activity support multiple inputs and multiple outputs? Generally yes. But there are exceptions depending on the activity type. U-SQL calls to Azure Data Lake can have multiples of both. ADF doesn’t care as long as you know what the called service is doing. On the other hand a copy activity needs to be one to one (so Microsoft can charge more for data movements).
  • Does an activity have to have an input dataset? No. For example, you can create a custom activity that executes your code for a defined time slice without an input dataset, just the output.

Datasets

Moving on, let’s go a little deeper and think about a scenario that I use in my community talks. We have an on premises CSV file. We want to upload it, clean it and aggregate the output. For each stage of this process we need to define a dataset for Azure Data Factory to use.

To be clear, a dataset in this context is not the actual data. It is just a set of JSON instructions that defines where and how our data is stored. For example, its file path, its extension, its structure, its relationship to the executing time slice.

Let’s define each of the datasets we need in ADF to complete the above scenario for just 1 file:

  1. The on premises version of the file. Linked to information about the data management gateway to be used, with local credentials and file server/path where it can be accessed.
  2. A raw Azure version of the file. Linked to information about the data lake storage folder to be used for landing the uploaded file.
  3. A clean version of the file. Linked to information about the output directory of the cleaning process.
  4. The aggregated output file. Linked to information about the output directory of the query being used to do the aggregation.

All of the linked information to these datasets should come from your ADF linked services.

So, we have 1 file to process, but in ADF we now need 4 datasets defined for each stage of the data flow. These datasets don’t need to be complex, something as simple as the following bit of JSON will do.

{
  "name": "LkpsCurrencyDataLakeOut",
  "properties": {
    "type": "AzureDataLakeStore",
    "linkedServiceName": "DataLakeStore",
    "structure": [ ],
    "typeProperties": {
      "folderPath": "Out",
      "fileName": "Dim.Currency.csv"
    },
    "availability": {
      "frequency": "Day",
      "interval": 1
    }
  }
}

Activities

Next, our activities. Now the datasets are defined above we need ADF to invoke the services that are going to do the work for each stage. As follows:

  Activity (JSON value)    Task description                                        Input dataset  Output dataset
  Copy                     Upload file from local storage to Data Lake storage.    1              2
  DotNetActivity           Perform transformation/cleaning on raw source file.     2              3
  DataLakeAnalyticsU-SQL   Aggregate the datasets to produce a reporting output.   3              4

From the above table we can clearly see the output dataset of the first activity becomes the input of the second, and the output dataset of the second activity becomes the input of the third. Apologies if this seems obvious, but I have known it to confuse people.

Pipelines

For our ADF pipeline(s) we can now make some decisions about how we want to manage the data flow.

  1. Add all the activities to a single pipeline meaning we can stop/start everything for this 1 dataset end to end.
  2. Add each activity to a different pipeline dependant on its type. This is my starting preference.
  3. Have the on premises upload in one pipeline and everything else in a second pipeline.
  4. Maybe separate your pipelines and data flows depending on the type of data. Eg. Fact/dimension. Finance and HR.

The point here, is that it doesn’t matter to ADF, it’s just down to how you want to control it. When I created the pipelines for my talk demo I went with option 2. Meaning I get the following pretty diagram, arranged to fit the width of my blog 🙂

Here we can clearly see at the top level each dataset flowing into a pipeline and its child activity. If I’d constructed this using option 1 above I would simply see the first dataset and the fourth with one pipeline box. I could then drill into the pipeline to see the chained activities within. I repeat, this doesn’t matter to ADF.

I hope you found the above useful and a good starting point for constructing your ADF data flows.

Best Practices

As our understanding of Azure Data Factory matures I’m sure some of the following points will need to be re-written, but for now I’m happy to go first and start laying the groundwork of what I consider to be best for ADF usage. Comments very welcome.

  1. Resist using the wizard, please.
  2. Keep everything within a single ADF service if you can. Meaning linked services can be reused.
  3. Disconnect your on premises uploads using a single pipeline. For ease of management.
  4. Group your activities into natural pipeline containers for the operation type or data category.
  5. Layout your ADF diagram carefully. Left to right. It makes understanding it much easier for others.
  6. Use Visual Studio configuration files to deploy ADF projects between Dev/Test/Live. Ease of source control and development.
  7. Monitor activity concurrency and time outs carefully. ADF will kill called service executions if breached.
  8. Be mindful of activity cost and group inputs/outputs for data compute where possible.
  9. Use time slices to control your data volumes. Eg. Pass the time slice as a parameter to the called compute service (see the snippet after this list).
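
As a small, hedged illustration of point 9, in ADFv1 the executing slice can be passed down to something like a U-SQL activity using the Text.Format macro. The parameter names and paths here are hypothetical:

"parameters": {
    "inputPath": "$$Text.Format('/Raw/{0:yyyy}/{0:MM}/{0:dd}/', SliceStart)",
    "outputPath": "$$Text.Format('/Clean/{0:yyyy}/{0:MM}/{0:dd}/', SliceStart)"
}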

What next? Well, I’m currently working on this beast…

  • 127x datasets.
  • 71x activities.
  • 9x pipelines.

… and I’ve got about another third left to build!

Many thanks for reading.


Azure Business Intelligence – The Icon Game!

As Azure becomes the new normal for many organisations our architecture diagrams become ever more complicated. Articulating our designs/data flows to management or technical audiences therefore requires a new group of cloud service icons in our pretty pictures. Especially for hybrid solutions. Sadly those icons aren’t yet that familiar for most. So, here’s a very simple blog post to help you recognise what’s in the Azure stack from a Purple Frog business intelligence perspective, in no particular order.

All of the following have been snipped from the Azure portal dashboard, so there shouldn’t be any surprises once you start working with these services.

  • Data Catalogue
  • Data Factory
  • Batch Service
  • Data Lake Storage
  • Data Lake Analytics
  • Power BI
  • Cosmos DB
  • IoT Hub
  • Event Hub
  • Stream Analytics
  • Machine Learning
  • SQL DB
  • SQL DW
  • Logical SQL Server
  • Data Management Gateway
  • Analysis Services
  • Resources
  • Virtual Machine
  • Azure Active Directory
  • Blob Storage

Happy drawing!


Connecting PowerBI.com to Azure Data Lake Store – Across Tenants

Welcome readers, this is a post to define a problem that shouldn’t exist. But sadly it does exist and, given its relative complexity, I think it warrants some explanation. Plus, I’ve included details of what you can currently do if you encounter it.

First some background…

Power BI Desktop

With the recent update to the Power BI desktop application we now find the Azure Data Lake Store connector has finally relinquished its ‘(Beta)’ status and is considered GA. This is good news, but doesn’t make any difference to those of us that have already been storing our outputs as files in Data Lake Store.

The connector as before can be supplied with any storage ADL:// URL and set of credentials using the desktop Power BI application. Our local machines are of course external to the concept and context of a Microsoft Cloud tenant and directory. To reiterate, this means any Data Lake Store anywhere can be queried and datasets refreshed using local tools. It doesn’t even matter about Personal vs Work/School accounts.

This hopefully sets the scene for this post and starts to allude to the problem you’re likely to encounter if you want to use your developed visuals beyond your own computer.

PowerBI.com

In this scenario, we’ve developed our Power BI workbook in the desktop product and hit publish. Armed with a valid Office 365/Power BI account the visuals, initial working data, model, measures and connection details for the data source get transferred to the web service version of Power BI, known as PowerBI.com. So far so good.

Next, you want to share and automatically refresh the data, meaning your audience have the latest data at the point of viewing, given a reasonable schedule.

Sharing, no problem at all, assuming you understand the latest Power BI Premium/Free apps, packs, workspace licencing stuff!… A post for another time. Maybe.

Automatic dataset refreshes, not so simple. This expects several ducks to all be lined up exactly. By ducks I mean your Azure Subscription and Office 365 tenant. If they aren’t, and one little ducky has strayed from the pack/group/herd (what’s a collection of ducks?), this is what you’ll encounter.

Failed to update data source credentials: The credentials provided for the DataLake source are invalid.

Now this error is also misleading because the problem is not invalid credentials on the face of it. A better error message would say invalid credentials for the tenant of the target data source.

Problem Overview

As most systems and environments evolve it’s common (given the experience of several customers) to accidentally create a disconnection between your Azure Subscription and your Office 365 environments. This may result in each group of services residing in different directory services or tenants.


In this case the disconnection means you will not be able to authenticate your PowerBI.com datasets against your Azure Data Lake Store allowing for that very important scheduled data refresh.
Coming back to the title of this blog post:

You cannot currently authenticate against an Azure Data Lake Store from PowerBI.com across tenants.

What To Do

Once you've finished cursing, considering everything you've developed over the last 6 months in your Azure Subscription, take a breath. Unfortunately, the only long term thing you can do is set up a new Azure Subscription and make damn sure that it's linked to your Office 365 organisation and therefore resides in the same tenant. Then migrate your Data Lake Store to the new subscription.
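A quick word on that migration: you can't move a Data Lake Store resource between tenants, so the content itself has to be copied. As a very rough sketch only (AzureRM module; the account names and file path below are placeholders, and for any serious volume of data a proper copy tool or Data Factory pipeline is a far better fit), something like this shuffles a file across via your local machine.

#Rough sketch only: stage a file locally, then push it into the store in the new subscription
$oldStore   = "myoldadlstore"      #store in the old subscription/tenant
$newStore   = "mynewadlstore"      #store in the new subscription/tenant
$localStage = "C:\Temp\SomeOutput.csv"

#Sign in against the old tenant and download the file
Login-AzureRmAccount | Out-Null
Export-AzureRmDataLakeStoreItem -AccountName $oldStore -Path "/output/SomeOutput.csv" -Destination $localStage

#Sign in again against the new tenant and upload it
Login-AzureRmAccount | Out-Null
Import-AzureRmDataLakeStoreItem -AccountName $newStore -Path $localStage -Destination "/output/SomeOutput.csv"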


Once these ducks are in line the credentials you supply to PowerBI.com for the dataset refresh will be accepted. I promise. I’ve done it.

A short-term workaround is to refresh your datasets in the desktop app every day and republish new versions. Very manual. Sorry to be the bearer of bad news.

What Next

Well my friends, I recommend that we strongly petition Microsoft to lift this restriction. I say restriction because it seems like madness. After all, the PowerBI.com connector to Azure Data Lake uses OAuth2, so what's the problem? Furthermore, back in Power BI Desktop land we can connect to any storage with any credentials. We can even create a Power BI workbook joining 2 Data Lake Stores with 2 different sets of credentials (handy if you have a partial data model in production and new outputs in a test environment).

Here is my attempt to get things changed and I’d appreciate your votes.

https://office365.uservoice.com/forums/264636-general/suggestions/17841250-allow-powerbi-com-to-connect-directly-to-azure-dat

To conclude, I really want this blog post to get an update soon with some better news given the above. But for now, I hope it helped you understand the potential problem you're facing. Or, raises your awareness of a future problem you are likely to encounter.

Many thanks for reading.


Cognitive Services with U-SQL (Reference Guide)

This post is a reference guide to support an event talk or webinar. The content is intended to assist the audience only. Thank you.

Abstract

Microsoft's Cognitive Services are basically the best thing since sliced bread, especially for anybody working with data. Artificial intelligence just got packaged and made available for the masses to download. In this short talk, I'll take you on a whirlwind tour of how to use these massively powerful libraries directly in Azure Data Lake with that offspring of T-SQL and C# … U-SQL. How do you get hold of the DLLs and how can you wire them up for yourself?… Everything will be revealed as well as the chance to see what the machines make of the audience!

Links

Helpful Bits

Why U-SQL?

  • U for unified. Unifying T-SQL and C#.
  • U is the next letter after T. T-SQL > U-SQL.
  • U for U-Boat, because Mike Rys dives into his Data Lake with a U-Boat 🙂

Installing the U-SQL samples and extension files in your Data Lake Storage.
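If you want a quick sanity check that the install worked, the extension installer copies its assemblies into your default Data Lake Store. A small PowerShell sketch (AzureRM module; the store name is a placeholder and the /usqlext/assembly path is an assumption based on the default install location):

#Sketch: list the assemblies copied in by 'Install U-SQL Extensions'
Login-AzureRmAccount | Out-Null

$DLStoreName = "myfirstdatalakestore01"   #placeholder store name

Get-AzureRmDataLakeStoreChildItem -AccountName $DLStoreName -Path "/usqlext/assembly"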

The executed code.

USE [CognitiveServices];
 
REFERENCE ASSEMBLY ImageCommon;
REFERENCE ASSEMBLY FaceSdk;
REFERENCE ASSEMBLY ImageEmotion;
REFERENCE ASSEMBLY ImageTagging;
REFERENCE ASSEMBLY ImageOcr;
 
--Extract the number of objects on each image and tag them 
@imgs =
    EXTRACT 
        FileName string, 
        [ImgData] byte[]
    FROM 
        @"/Images/{FileName}.jpg"
    USING 
        new Cognition.Vision.ImageExtractor();
 
--Tag each extracted image and count the objects found in it
@imgTags =
    PROCESS 
        @imgs 
    PRODUCE 
        [FileName],
        [NumObjects] int,
        [Tags] string
    READONLY 
        [FileName]
    USING 
        new Cognition.Vision.ImageTagger();
 
--Write the tagging results out to a CSV file
OUTPUT @imgTags
TO "/output/ImageTags.csv"
USING Outputters.Csv(quoting : true, outputHeader : true);

 

Recursive U-SQL With PowerShell (U-SQL Looping)

In its natural form U-SQL does not support recursive operations, and for good reason. This is a big data, scale out, declarative language where the inclusion of procedural, iterative code would be very unnatural. That said, if you must pervert things, PowerShell can assist with the looping and, dare I say, the possibility of dynamic U-SQL.

A couple of caveats…

  • From the outset, I accept this abstraction with PowerShell to achieve some iterative process in U-SQL is a bit of a hack and very inefficient, certainly in the below example.
  • The scenario here is not perfect and created using a simple situation for the purposes of explanation only. I know the same data results could be achieved just by extending the aggregate grouping!

Hopefully that sets the scene. As I’m writing, I’m wondering if this blog post will be hated by the purists out there. Or loved by the abstraction hackers. I think I’m both 🙂

Scenario

As with most of my U-SQL blog posts I’m going to start with the Visual Studio project available as part of the data lake tools extension called ‘U-SQL Sample Application’. This gives us all the basic start up code and sample data to get going.

Input: within the solution (Samples > Data > Ambulance Data) we have some CSV files for vehicles. These are separated into 16 source datasets covering 4 different vehicle IDs across 4 days.

Output: let's say we have a requirement to find out the average speed of each vehicle per day. Easy enough with a U-SQL wildcard on the extractor. But we also want to output a single file for each day of data. Not so easy, unless we write 1 query for each day of data. Fine with samples only covering 4 days, not so fine with 2 years of records split by vehicle.

Scenario set, let's look at how we might do this.

The U-SQL

To produce the required daily outputs I’m going to use a U-SQL query in isolation to return a distinct list of dates across the 16 input datasets, plus a parameterised U-SQL stored procedure to do the aggregation and output a single day of data.

First getting the dates. The below simply returns a text file containing a distinct list of all the dates in our source data.

DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
 
//Wildcard extract across all of the ambulance source files
@DATA =
    EXTRACT 
        [vehicle_id] int,
        [entry_id] long,
        [event_date] DateTime,
        [latitude] float,
        [longitude] float,
        [speed] int,
        [direction] string,
        [trip_id] int?,
        [filename] string
    FROM 
        @InputPath
    USING 
        Extractors.Csv();
 
//Distinct list of dates found in the source data
@DateList =
    SELECT DISTINCT 
        [event_date].ToString("yyyyMMdd") AS EventDateList
    FROM 
        @DATA;
 
OUTPUT @DateList
TO "/output/AmbulanceDataDateList.txt"
USING Outputters.Csv(quoting : false, outputHeader : false);

Next, the below stored procedure. This uses the same input files, but does the required aggregation and outputs a daily file matching the parameter passed, giving us a single output per day.

CREATE PROCEDURE IF NOT EXISTS [dbo].[usp_OutputDailyAvgSpeed]
    (
    @OutputDate string
    )
AS
BEGIN
 
    //DECLARE @OutputDate string = "20140914"; //FOR dev
    DECLARE @InputPath string = "/Samples/Data/AmbulanceData/{filename}";
    DECLARE @OutputPath string = "/output/DailyRecords/VehicleAvgSpeed_" + @OutputDate + ".csv";
 
    @DATA =
        EXTRACT 
            [vehicle_id] int,
            [entry_id] long,
            [event_date] DateTime,
            [latitude] float,
            [longitude] float,
            [speed] int,
            [direction] string,
            [trip_id] int?,
            [filename] string
        FROM 
            @InputPath
        USING 
            Extractors.Csv();
 
    //Average speed per vehicle, filtered to the requested day only
    @VAvgSpeed =
        SELECT DISTINCT 
            [vehicle_id],
            AVG([speed]) AS AverageSpeed
        FROM 
            @DATA
        WHERE
            [event_date].ToString("yyyyMMdd") == @OutputDate
        GROUP BY
            [vehicle_id];
 
    OUTPUT @VAvgSpeed
    TO @OutputPath
    USING Outputters.Csv(quoting : true, outputHeader : true);
 
END;

At this point, we could just execute the stored procedure for each required date, manually crafted from the text file. Like this:

[dbo].[usp_OutputDailyAvgSpeed]("20140914");
[dbo].[usp_OutputDailyAvgSpeed]("20140915");
[dbo].[usp_OutputDailyAvgSpeed]("20140916");
[dbo].[usp_OutputDailyAvgSpeed]("20140917");

Fine, for small amounts of data, but we can do better for larger datasets.

Enter PowerShell and some looping.

The PowerShell

As with all things Microsoft, PowerShell is our friend and the supporting cmdlets for the Azure Data Lake services are no exception. I recommend these links if you haven't yet written some PowerShell to control ADL Analytics jobs or upload files to ADL Storage.
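If those cmdlets are new to you, here's a minimal sketch of the two building blocks used in the full script further down: uploading a file into ADL Storage and submitting a U-SQL script to ADL Analytics (AzureRM module; the account names and local paths are placeholders).

#Minimal sketch: upload a file and submit a U-SQL script (placeholder names/paths)
Login-AzureRmAccount | Out-Null

$DLStoreName     = "myfirstdatalakestore01"
$DLAnalyticsName = "myfirstdatalakeanalysis"

#Upload a local file into Data Lake Storage
Import-AzureRmDataLakeStoreItem `
    -AccountName $DLStoreName `
    -Path "C:\Temp\SomeSourceFile.csv" `
    -Destination "/Samples/Data/AmbulanceData/SomeSourceFile.csv"

#Submit a U-SQL script file to Data Lake Analytics and wait for it to finish
$job = Submit-AzureRmDataLakeAnalyticsJob `
    -Name "HelloDataLake" `
    -AccountName $DLAnalyticsName `
    -ScriptPath "C:\Temp\HelloDataLake.usql" `
    -DegreeOfParallelism 5

Wait-AdlJob -Account $DLAnalyticsName -JobId $job.JobId | Out-Null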

Moving on. How can PowerShell help us script our data output requirements? Well, here's the answer. In my PowerShell script below I've done the following:

  1. Authenticate against my Azure subscription (optionally create yourself a PSCredential to do this; see the sketch after this list).
  2. Submit the first U-SQL query as a file to return the distinct list of dates.
  3. Wait for the ADL Analytics job to complete.
  4. Download the output text file from ADL storage.
  5. Read the contents of the text file.
  6. Iterate over each date listed in the text file.
  7. Submit a U-SQL job for each stored procedure with the date passed from the list.
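Before the full script, a quick note on step 1: if you want to run this unattended you can pass a PSCredential into Login-AzureRmAccount rather than signing in interactively. A minimal sketch (the account below is a placeholder and needs to be an organisational account without multi-factor authentication):

#Sketch of a non-interactive login for scheduled runs (placeholder account, no MFA)
$user     = "automation@mytenant.onmicrosoft.com"
$password = Read-Host "Password" -AsSecureString   #or pull this from a secure store
$cred     = New-Object System.Management.Automation.PSCredential ($user, $password)

Login-AzureRmAccount -Credential $cred | Out-Null

With that covered, here's the full script.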
#Params...
$WhereAmI = $MyInvocation.MyCommand.Path.Replace($MyInvocation.MyCommand.Name,"")
 
$DLAnalyticsName = "myfirstdatalakeanalysis" 
$DLAnalyticsDoP = 10
$DLStoreName = "myfirstdatalakestore01"
 
 
#Create Azure Connection
Login-AzureRmAccount | Out-Null
 
$USQLFile = $WhereAmI + "RecursiveOutputPrep.usql"
$PrepOutput = $WhereAmI + "AmbulanceDataDateList.txt"
 
#Submit Job
$job = Submit-AzureRmDataLakeAnalyticsJob `
    -Name "GetDateList" `
    -AccountName $DLAnalyticsName `
    -ScriptPath $USQLFile `
    -DegreeOfParallelism $DLAnalyticsDoP
 
Write-Host "Submitted USQL prep job."
 
#Wait for job to complete
Wait-AdlJob -Account $DLAnalyticsName -JobId $job.JobId | Out-Null
 
Write-Host "Downloading USQL output file."
 
#Download date list
Export-AzureRmDataLakeStoreItem `
    -AccountName $DLStoreName `
    -Path "/output/AmbulanceDataDateList.txt" `
    -Destination $PrepOutput | Out-Null
 
Write-Host "Downloaded USQL output file."
 
#Read dates
$Dates = Get-Content $PrepOutput
 
Write-Host "Read date list."
 
#Loop over dates with proc call for each
ForEach ($Date in $Dates)
    {
    $USQLProcCall = '[dbo].[usp_OutputDailyAvgSpeed]("' + $Date + '");'
    $JobName = 'Output daily avg dataset for ' + $Date
 
    Write-Host $USQLProcCall
 
    $job = Submit-AzureRmDataLakeAnalyticsJob `
        -Name $JobName `
        -AccountName $DLAnalyticsName `
        -Script $USQLProcCall `
        -DegreeOfParallelism $DLAnalyticsDoP
 
    Write-Host "Job submitted for " $Date
    }
 
Write-Host "Script complete. USQL jobs running."

At this point I think it's worth reminding you of my caveats above 🙂

I would like to point out the flexibility of the PowerShell cmdlet Submit-AzureRmDataLakeAnalyticsJob, which allows us to pass a U-SQL file (step 2, using the -ScriptPath switch) or build up a U-SQL string dynamically within the PowerShell script and pass that as the execution code (step 7, using the -Script switch). Very handy.

If all goes well you should have jobs being prepared and, shortly after, running to produce the daily output files.

I used 10 AUs for my jobs because I wanted to burn up some old Azure credits, but you can change this in the PowerShell variable $DLAnalyticsDoP.

Conclusion

It's possible to achieve looping behaviour with U-SQL when we want to produce multiple output files, but only when we abstract the iterative behaviour away to our friend PowerShell.

Comments welcome on this approach.

Many thanks for reading.

 

Ps. To make life a little easier, I've stuck all of the above code and sample data into a GitHub repository to save you copying and pasting things from the code windows above.

https://github.com/mrpaulandrew/RecursiveU-SQLWithPowerShell

 

 


Paul’s Frog Blog

Paul is a Microsoft Data Platform MVP with 10+ years' experience working with the complete on premises SQL Server stack in a variety of roles and industries. Now, as the Business Intelligence Consultant at Purple Frog Systems, he has turned his keyboard to big data solutions in the Microsoft cloud, specialising in Azure Data Lake Analytics, Azure Data Factory, Azure Stream Analytics, Event Hubs and IoT. Paul is also a STEM Ambassador for the networking education in schools' programme, PASS chapter leader for the Microsoft Data Platform Group – Birmingham, and a SQL Bits, SQL Relay and SQL Saturday speaker and helper. He is currently the Stack Overflow top user for Azure Data Factory, as well as a very active member of the technical community.
Thanks for visiting.
@mrpaulandrew