
Nick Edwards

How to Trigger a Power BI Dataset Refresh via Azure Synapse Analytics in 3 Simple Steps.

In this blog post we’ll explore how to trigger a Power BI dataset refresh via Azure Synapse Analytics in 3 simple steps:

Step 1:

Create a new security group called “AzureSynapsePowerBIIntegration” in your Azure Active Directory.

Next, add your Azure Synapse workspace to the new security group as a member (searching for the workspace name will find its system-assigned managed identity). This managed identity will act as your service principal.

Step 2:

Log in to powerbi.com, and in the top right-hand corner locate “Settings” and then “Admin portal”.

Under “Tenant settings” locate “Developer settings” and then “Allow service principals to use Power BI APIs”.

Set this setting to “Enabled” using the toggle. Then, under the heading “Apply to:”, select “Specific security groups (Recommended)”, add the newly created security group “AzureSynapsePowerBIIntegration” and click apply.

Now all that remains in the Power BI service is to grant the security group “Member” access to the workspace containing the dataset you want to refresh.

Step 3:

Log in to your Azure Synapse workspace.

Create a new pipeline called “Power BI Dataset Refresh” and then create a new web activity called “Refresh Dataset”.

Create two new pipeline parameters, “WorkspaceGuidId” and “DatasetGuidId”, both of type String.

Locate the WorkspaceGuidId (also known as the GroupId) and the DatasetGuidId from the URL of the dataset you want to refresh via powerbi.com

The URL will look like this below:

https://app.powerbi.com/groups/{groupId}/datasets/{datasetId}/details

Assign the Workspace/GroupId guid and the DatasetId guid to your newly created parameters.

Next within the settings of the “Web” activity paste the following dynamic content into the URL section.

@concat('https://api.powerbi.com/v1.0/myorg/groups/',pipeline().parameters.WorkspaceGuidId,'/datasets/',pipeline().parameters.DatasetGuidId,'/refreshes')

And set the other options as per below:

  • Method: POST
  • Body: {}
  • Authentication: System Assigned Managed Identity
  • Resource: https://analysis.windows.net/powerbi/api

And that’s it…

All that is left to do now is debug / trigger the pipeline and see the dataset refresh in powerbi.com!

Some final points:

  • If the dataset resides on a shared (non-Premium) capacity you can only schedule up to 8 refreshes per day.
  • If the dataset resides on a Premium capacity you can schedule up to 48 refreshes per day.
  • Datasets on a Premium capacity have no limit on the number of API refreshes.
  • The refresh API request only works when the published dataset is in import mode rather than DirectQuery.
  • You can switch the method to a “GET” request instead to retrieve the refresh history of the dataset (see the sketch below). The output could be saved to a variable and then written back to a database via a stored procedure.
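
For reference, a GET request for the refresh history can reuse the same two pipeline parameters; only the method changes, and an optional $top filter can be appended (the $top=1 here, returning just the latest refresh, is only an example):

@concat('https://api.powerbi.com/v1.0/myorg/groups/',pipeline().parameters.WorkspaceGuidId,'/datasets/',pipeline().parameters.DatasetGuidId,'/refreshes?$top=1')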

How to parameterise the Execute Pipeline activity in Azure Synapse Analytics.

Unfortunately, as of April 2022 there is not an option to parameterise or add dynamic content to an “Execute Pipeline” activity to invoke a pipeline run.

However, with the use of a Microsoft API there is a method we can use to overcome this.

In this blog post we’ll use the “Pipeline – Create Pipeline Run” API documented below:

https://docs.microsoft.com/en-us/rest/api/synapse/data-plane/pipeline/create-pipeline-run

Using this API will allow us to dynamically pass in a pipeline name and trigger it. It will also allow us to dynamically pass in parameters to the called pipelines.

For this example, we’ll create 3 simple pipelines:

  1. “Wait 5 Seconds”
  2. “Wait 10 Seconds”
  3. “Wait 15 Seconds”

Plus a “Control Pipeline” that we’ll get onto later.

Each of the “Wait” pipelines contains a single “Wait” activity and a parameter called “WaitDuration” with a “Type” of “Int”. The idea is that we will dynamically pass in a wait duration from an array.

The “wait time in seconds” option in the settings tab of the “Wait” activity is then parameterised to refer to the parameter as shown below.

Next, we’ll build the “Control Pipeline” based on a “ForEach” activity which will loop around the array shown below. This array references the pipeline name that we want to dynamically trigger plus the wait duration we want it to wait for.
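
Based on the property names used later in the URL and body (PipelineName and WaitDuration), the array looks something like this:

[
    {"PipelineName": "Wait 5 Seconds", "WaitDuration": 5},
    {"PipelineName": "Wait 10 Seconds", "WaitDuration": 10},
    {"PipelineName": "Wait 15 Seconds", "WaitDuration": 15}
]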

To use this array we’ll create a “ParameterArray” parameter with “Type” equal to “Array” in the “Control Pipeline”. We’ll set the default value equal to the array from above.

Next, within the settings tab of the “ForEach” activity we have the option of ticking the sequential option and listing the items we want to loop over.

Here we’ve got the “Sequential” option ticked. This means the ForEach will work through the items in our array one at a time, in order, starting with the very first item, rather than running them in parallel. For the “Items” option we’ll add the dynamic content which refers to our “ParameterArray” parameter.

Next, within the “ForEach” activity we’ll create a “Web” activity with the following options.

URL:

https://{Your Synapse Workspace Name}.dev.azuresynapse.net/pipelines/@{item().PipelineName}/createRun?api-version=2020-12-01

For example:

https://mytestsynapseworkspace.dev.azuresynapse.net/pipelines/@{item().PipelineName}/createRun?api-version=2020-12-01

*Notice here that I’ve added @{item().PipelineName} into the string which is our pipeline name from our “ParameterArray” parameter. This is what will make our Web API POST request dynamic.

  • Method: POST
  • Headers: Name: Content-Type, Value: application/json
  • Body: {"WaitDuration":@{item().WaitDuration}}
  • Integration runtime: AutoResolveIntegrationRuntime
  • Authentication: System Assigned Managed Identity
  • Resource: https://dev.azuresynapse.net/

*Also notice here that I’ve added @{item().WaitDuration} into the body, which is the wait duration from our “ParameterArray” parameter for each item. This is what feeds each of the called pipelines its wait duration.

Once we’ve entered all of these details we should have a “Web” activity which looks similar to the image below:

Now let’s trigger our “Control Pipeline”:

As we can see, each of our 3 pipelines has been triggered just by triggering the “Control Pipeline”. We can also see that the respective parameters have been passed successfully to each of the triggered pipelines.

Wait 5 Seconds:

Wait 10 Seconds:

Wait 15 Seconds:

Note, if you’re using Azure DevOps Git CI/CD integration you’ll need to make sure that the pipelines you want to execute already exist in live (published) mode, otherwise the API request will return an error saying the pipeline doesn’t exist.

Also note that there may be some Azure permissions that you’ll need to configure with your Azure Administrator in the Azure portal to allow this to work.

In summary, this is a trigger-and-forget approach: the ForEach activity makes no attempt to wait for each called pipeline to succeed, fail or time out. However, with the use of the “Pipeline Runs – Get” API and an “Until” activity you can easily extend it to do so.

How to resolve SSIS “Row yielded no match during lookup” error.

Have you ever been faced with the SSIS error “Row yielded no match during lookup”? If so, this blog is for you!

A customer of ours recently faced the same issue in one of their SSIS packages and asked us to investigate this for them. Initial investigations on their side highlighted that when they replicated the lookup component using a standard join in T-SQL (similar to the image below) it returned the expected results.

So why was SSIS reporting an error and ultimately causing the package to fail?

To demonstrate why they faced this problem, I will provide a worked example. It should be noted that the customer example we were presented with was many thousands of rows, so the problem was much less obvious than the one I have created today.

To begin with, I have created 3 tables in my demo database using a subset of Adventure Works data.

These are:

“example.Product” which contains 10 rows and 4 columns. Essentially this could be thought of as my Product Dimension.

“example.Sales” which contains 40 rows and 4 columns. Essentially this could be thought of as my Staging Fact Table.

And finally, “example.FactSales” which contains 40 rows and 4 columns and is the result of the lookup component and ultimately would form the basis of my Sales Fact Table.

Here you will notice that “ProductNumber” has been replaced with “ProductKey”. Behind the scenes in my example SSIS package I am looking up “ProductNumber” from the “example.Product” table to return “ProductKey” in my Sales Fact Table.

*Note: I have configured the lookup component to fail if it finds any non-matching rows.

Below is the data flow task within my example SSIS package.

This package executes without failure and populates my “example.FactSales” table. However, I will now replicate the issue our customer faced.

To begin I executed the below T-SQL update statement against my “example.Product” table.
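
The statement was along these lines (a minimal sketch based on the description that follows):

UPDATE example.Product
SET ProductNumber = 'ar-5381'  -- the lower-case version of 'AR-5381'
WHERE ProductKey = 1;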

This changed the “ProductNumber” from “AR-5381” to “ar-5381” where “ProductKey” was equal to 1. When performing a simple “SELECT TOP 100 * FROM example.Product” we can now see that “ProductNumber” is lower case where “ProductKey” = 1 and the rest of the Product Numbers remain unchanged.

Now when I replicate the existing SSIS package using a standard T-SQL join as before there appears to be no issue – 40 rows are still returned using an inner join.

However, if I now execute the SSIS package the error “Row yielded no match during lookup” is subsequently returned.

So why is this happening?

Quite simply, SSIS string comparisons are case sensitive, accent sensitive and kana sensitive (at least in full cache mode), whereas SQL Server with its default case-insensitive collation is not.

Now this is a fairly obvious example, but with more complex lookups which may involve multiple columns, it becomes a little more tricky to spot!

To solve this customer example, I used the EXACT function in MS Excel to spot the exact row in the lookup table which was returning the error. The EXACT function compares two text strings and returns “TRUE” if they are exactly the same and “FALSE” if they are not.

Alternatively, you could use SQL Server itself to replicate the lookup behaviour of SSIS. Taking the T-SQL script from before, you can alter the existing inner join to a left join and collate using “Latin1_General_BIN” to identify the rows causing the lookup failure.
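
As a rough sketch, and assuming the staging table carries the same “ProductNumber” column, the check looks something like this:

SELECT s.*
FROM example.Sales AS s
LEFT JOIN example.Product AS p
    ON s.ProductNumber = p.ProductNumber COLLATE Latin1_General_BIN  -- binary collation mimics the case-sensitive SSIS lookup
WHERE p.ProductKey IS NULL;  -- any rows returned here are the ones failing the SSIS lookup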

This highlights the importance of cleaning and standardising your lookup columns before using an SSIS lookup data flow component. Here at Purple Frog we tend to use a scalar function to create clean derived columns which are subsequently used for lookup components.

I hope this helps you solve any “Row yielded no match during lookup” errors with your SSIS solution!

Replicating SQL windowing functions in Power Query.

I’m sure you’ve all heard of windowing functions in SQL Server. But what if you need to replicate these windowing functions in Power Query?

Here I have a sample CarSales.csv dataset of car sales per brand, per year, per month and the number of units sold. I will use this dataset to demonstrate how we can replicate the windowing LAG function in Power Query.

In my example I’d like to calculate the month-on-month sales growth per car brand, per year. Using SQL this would be relatively simple: use the LAG function inside a CTE to access the previous row’s value, then subtract the output of the LAG function from the units_sold column.
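
Something along these lines would do the job (a rough sketch, assuming the data sits in a CarSales table):

WITH cte AS
(
    SELECT
        brand,
        [year],
        [month],
        units_sold,
        LAG(units_sold) OVER (PARTITION BY brand, [year] ORDER BY [month]) AS units_sold_previous_month
    FROM dbo.CarSales
)
SELECT
    brand,
    [year],
    [month],
    units_sold,
    units_sold - units_sold_previous_month AS sales_growth
FROM cte;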

Running this would then generate the output below.

However, what if you found yourself in the situation where you couldn’t use SQL? Perhaps you’ve connected to an Analysis Cube or perhaps just a .CSV file in an Azure Storage Account. What do you do then…?

I’ll now show you how you use Power Query to help you solve your problem!

To start open Power BI Desktop and click “Home” > “Get Data” > “Text/CSV” and then select and open your .csv file. In my case it was CarSales.csv.

Now instead of just opening the data, you’ll want to click the option to “Transform Data” which will then open Power Query.

Once Power Query is open you’ll then need to sort the columns you want to partition by and order by. In my case it was “brand”, “year” and “month” which I sorted in ascending order. To do this click on the small down arrow to the right of each of the columns and click “Sort Ascending”.

This then produces the output/preview shown below.

The next step is to add an index column. To do this click on “Add Column” in the ribbon, click “Index Column” and then click “From 0”. This will then create a new column called “Index” which will increment by 1 starting from 0.

Once done you’ll then need to create a new column which will add 1 to this “Index” column. To do this click on the “Index” column to select it and then click on “Add Column” from the ribbon, then click “Standard” and finally click “Add”.

This will then launch a new window where you’ll be able to specify the value to add. For this example, as we just want to access the previous row to get the previous month’s sales data, we’ll enter the value 1. This step essentially replicates the [offset] argument of the LAG function. If instead you wanted to replicate the LEAD function you could use the “Subtract” option to subtract 1 from this index column.

Once done, click “OK” and this will create a new column called “Addition”. To make things clearer for the purpose of this blog we’ll rename this to “Offset_Index”.

Next using the newly created “Index” and “Offset_Index” columns we want to left join this table back to itself. To do this click “Home” in the ribbon and then click “Merge Queries”.

This will then open a new window. You now need to select the 2 columns you want to partition by plus the index column. This will form the basis of your join criteria. To do this control + click the columns you want to join on. In my example these are “brand”, “year” and “index”.

Next in the empty drop down select the same table as the table you want to merge to. It will have “(Current)” tagged onto the end of the name. Now ensure that the join kind is set to “Left Outer”.

Next, control + click the 2 columns you want to partition in the new table plus the “Offset_Index” column we created earlier. In my case these were “brand”, “year” and “Offset_Index”.

You’ll now see the following join logic all joined by an “and” condition:

  • “brand” from “CarSales” left joins to “brand” from “CarSales (Current)”, indicated by the “1” next to each column heading.
  • “year” from “CarSales” left joins to “year” from “CarSales (Current)”, indicated by the “2” next to each column heading.
  • “Index” from “CarSales” left joins to “Offset_Index” from “CarSales (Current)”, indicated by the “3” next to each column heading.

Next click OK and this will create a new column called “Renamed Columns” (which is just the name of the previous Power Query step) but which essentially contains all of the columns from the newly joined (or merged) table.

Next, expand the “Renamed Columns” column by clicking the double-sided arrow and select the column you want to access from the previous row. In my case it was the “units_sold” column from the “CarSales (Current)” table.

Now you’ll want to rename this column to something that’s more meaningful to the context of your data. In my example I renamed this column to “units_sold_previous_month”.

Finally, to solve the original problem, we just need to calculate the sales growth per month, per brand, per year. To do this click “Add Column” in the ribbon and then click “Custom Column”. When the new window opens, call the new column something meaningful to your data and enter a custom formula to solve your problem. In my example I called this column “sales_growth” and entered a custom formula of [units_sold] - [units_sold_previous_month].

This then created a new column called “sales_growth”.

Finally, the 3 helper columns we created in the earlier steps can now be deleted. In my example these were “Index”, “Offset_Index” and “units_sold_previous_month”. To do this control + click the columns you want to remove, right click and then left click “Remove Columns”.
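
For reference, the M code Power Query generates across these steps looks roughly like this (step names, the file path and the merged column name will differ in your own query):

let
    // Load and type the CSV (the file path here is just an example)
    Source = Csv.Document(File.Contents("C:\CarSales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    ChangedTypes = Table.TransformColumnTypes(PromotedHeaders, {{"brand", type text}, {"year", Int64.Type}, {"month", Int64.Type}, {"units_sold", Int64.Type}}),
    // Sort by the "partition by" and "order by" columns
    SortedRows = Table.Sort(ChangedTypes, {{"brand", Order.Ascending}, {"year", Order.Ascending}, {"month", Order.Ascending}}),
    // Index columns used to join each row back to its previous row
    AddedIndex = Table.AddIndexColumn(SortedRows, "Index", 0, 1, Int64.Type),
    AddedOffsetIndex = Table.AddColumn(AddedIndex, "Offset_Index", each [Index] + 1, Int64.Type),
    // Left join the table to itself on brand, year and Index = Offset_Index
    MergedQueries = Table.NestedJoin(AddedOffsetIndex, {"brand", "year", "Index"}, AddedOffsetIndex, {"brand", "year", "Offset_Index"}, "Previous", JoinKind.LeftOuter),
    ExpandedPrevious = Table.ExpandTableColumn(MergedQueries, "Previous", {"units_sold"}, {"units_sold_previous_month"}),
    // Month-on-month growth, then tidy up the helper columns
    AddedSalesGrowth = Table.AddColumn(ExpandedPrevious, "sales_growth", each [units_sold] - [units_sold_previous_month], type number),
    RemovedHelperColumns = Table.RemoveColumns(AddedSalesGrowth, {"Index", "Offset_Index", "units_sold_previous_month"})
in
    RemovedHelperColumns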

And that’s it! These steps have successfully replicated the LAG function in Power Query!

I hope this step-by-step guide helps you solve your windowing function needs in Power Query.

How to change the summarization of multiple columns in Power BI.

Have you ever been faced with the issue where Power BI automatically adds a sigma symbol to the front of your column if it detects it’s a data type of integer or decimal?

Now this isn’t too much of an issue if you’ve imported only a few columns but what if you’ve imported lots? Going through this column by column can become tiresome!

Luckily there is a hack, not only to turn summarization to “None” on multiple columns at once, but also to quickly change the formatting too.

As an example, I have imported a table from Excel into Power BI with 27 columns: “Column A” through to “Column Z” plus a “Key” column. Power BI detected that all of my columns contain whole numbers (integers) and automatically applied a summarization of “Sum” to each column.

So how do we quickly change this? Click the “Model” tab on the left-hand side. Expand the table to expose all the columns you want to change. Next select the initial column and then either select a range of fields using Shift+Click or select multiple fields using Control+Click.

Once you’ve selected the group of columns you want to change, find the “Advanced” drop-down in the Properties pane and change the “Summarize by” option to “None”.

That’s it! There are other formatting properties in here too so feel free to have a play and change multiple columns at once!

How to export more than 30,000 rows of data from Power BI Desktop.

Have you ever wanted to export a table from Power BI Desktop into Excel just to make sure the DAX you’ve written is performing as expected but ran into this error message: “Data exceeds the limit – Your data is too large. Some data sampling may occur”?

Most probably this has occurred because you’ve got more than 30,000 rows of data in your table that you’re trying to export. In the example shown below I’ve actually got 30,001 rows of data – a row containing column headers plus 30,000 rows of actual data.

If I now change the row count in my “Table To Export” table to 29,999 rows of actual data using a simple filter I have no issues exporting the data. This brings the grand total to 30,000 rows (29,999 rows of actual data plus a row containing columns headers).

But what if you really need to export more than 30,000 rows of data? Well there is a way…

Firstly make sure you’ve got DAX Studio installed on your machine.

Now within Power BI Desktop click “View” and then “Performance analyzer”.

This will open a new window pane called “Performance analyzer”.

Now click “Start Recording” and then click “Refresh Visuals”. Next within the Performance analyzer window locate the name of the visualisation which in my case is “Table To Export” and click the small “+” symbol next to it.

Next go ahead and grab the DAX query which has been used to construct the table. Click on the link “Copy query” which will copy the DAX query to your clipboard.

Next you’ll need to launch DAX Studio from within Power BI Desktop.

To do this on the top ribbon in Power BI Desktop click “External Tools” and then click “DAX Studio”.

Now within DAX Studio paste the copied query from your clipboard into the query window.

Next remove the variable which limits the number of rows returned (the TOPN) and change the “EVALUATE” statement to return “_DS0Core” instead, as per the snip below.
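
The query Power BI generates will vary by visual, but the edit looks roughly like the below (the column and measure names here are purely illustrative; use whatever appears in your copied query):

// Shape of the query Power BI generates (abridged) - the TOPN variable caps the rows returned
DEFINE
    VAR _DS0Core =
        SUMMARIZECOLUMNS ( 'Table To Export'[SomeColumn], "SomeMeasure", [Some Measure] )
    VAR _DS0PrimaryWindowed =
        TOPN ( 502, _DS0Core, 'Table To Export'[SomeColumn], 1 )
EVALUATE
    _DS0PrimaryWindowed

// After editing: delete the TOPN variable and evaluate _DS0Core directly
DEFINE
    VAR _DS0Core =
        SUMMARIZECOLUMNS ( 'Table To Export'[SomeColumn], "SomeMeasure", [Some Measure] )
EVALUATE
    _DS0Core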

Next change the output of the query to be executed from “Grid” to “File”.

Finally click the “Run” button and let DAX Studio export your data, including the rows beyond the 30,000 limit, to a CSV file. Simple!

I hope this helps with any debugging you might need to perform in Power BI Desktop!

How to make your matrix column widths all equal to each other in Power BI using DAX.

Have you ever come across an issue where your Power BI matrix column widths just aren’t the same width and visually just don’t look right?

Unfortunately (as of April 2021) there is no easy way to make all column widths equal in the format pane of a matrix visual.

However there is a hack to set the width of all of your columns in a matrix so that they are all equal and pixel perfect, using DAX!

How do you do this I hear you ask?

Firstly create a new measure called ‘Set Column Width’ and enter a string value equal to the length of your longest column title. In my case my longest title is “Front Derailleur Cage”, which has a length of 21 characters including spaces. Therefore I need to set my DAX expression to be a string which is 21 characters long. In my example I’ve just created a string of 21 asterisks wrapped in quotation marks – but this can be any combination of characters you like!
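
As a quick sketch, the measure is nothing more than a hard-coded string:

Set Column Width = "*********************"  // 21 characters, the same length as "Front Derailleur Cage"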

The next job is to go to the format pane of your matrix and set the “Show on rows” toggle equal to ‘On’.

Next make sure the ‘Auto-size column width’ is set equal to ‘On’.

Now drag your newly created DAX expression (in my case ‘Set Column Width’) on to the values field of your matrix.

You’ll then notice that your matrix will look similar to the snip below – a little bit of a mess! But not to worry, this is all part of the plan!

Next go back to the format pane of your matrix and set ‘Auto-size column width’ to ‘Off’.

Now remove your ‘Set Column Width’ measure from the visual by clicking the ‘X’ symbol on the field pane.

Finally increase the width and height of your matrix visual to accommodate the increased column widths.

You now have pixel perfect column widths which are all equal to each other!

A huge thanks to the brilliant MVP Ruth Pozuelo Martinez (@ruthpozuelo) from curbal.com for this hack! It’s been a massive help for my Power BI reports here at Purple Frog! Hopefully the Power BI team will release a proper solution in the matrix format pane soon!

ADF Switch Activity – A neat solution for nested IFs.

Whilst carrying out some work for a client using Azure Data Factory I was presented with the challenge of triggering different activities depending on the result of a stored procedure.

In fact the challenge posed was to…

  • Execute ‘Copy A’ activity if the result of a stored procedure returned (A),
  • Execute ‘Copy B’ activity if the result of a stored procedure returned (B),
  • Execute ‘Copy C’ activity if the result of a stored procedure returned (C)
  • Finally execute an error stored procedure if the result of the stored procedure returned (D).

Prior to the ‘Switch’ activity, I could have achieved this using 4 ‘If Condition’ activities connected to a lookup activity, as shown in the snip below using my ‘Wait’ example pipeline.

However a neater solution is to use the ‘Switch’ activity to do this work instead. I’ll now jump straight into a worked example to show you how I achieved this.

I created a demo.RandomNumberGenerator stored procedure in our Purple Frog Demo database which declares a variable and sets it equal to a random number between 1 and 4. The stored procedure then uses this variable in a case statement to return a string, so if the @randomnumber variable returns 1, the case statement returns ‘Wait 1 Minute’.
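
The stored procedure looks something like this (a rough sketch; the ADFAction column name comes from the lookup output used later on):

CREATE PROCEDURE demo.RandomNumberGenerator
AS
BEGIN
    -- Random integer between 1 and 4
    DECLARE @randomnumber INT = FLOOR(RAND() * 4) + 1;

    SELECT CASE @randomnumber
               WHEN 1 THEN 'Wait 1 Minute'
               WHEN 2 THEN 'Wait 2 Minutes'
               WHEN 3 THEN 'Wait 3 Minutes'
               WHEN 4 THEN 'Wait 4 Minutes'
           END AS ADFAction;
END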

I then used a lookup activity in ADF to return the result of this stored procedure which I then used to ‘feed’ my ‘Switch’ activity.

When I preview this lookup activity it just returns the result of the stored procedure. Run it time and time again and it returns a random ‘ADFAction’ (as named in my case statement), generated by the rand() SQL function. So in the scenario above it returned an action of ‘Wait 1 Minute’. If I were to preview the stored procedure again it might return ‘Wait 4 Minutes’ instead.

I can then connect my switch activity to this lookup activity using the ‘success’ green output connector.

I now need to add some dynamic content to refer to the output of my lookup activity. I don’t need to create dynamic nested IF content to say if my stored procedure equals ‘Wait 1 Minute’ then do this, if my stored procedure equals ‘Wait 2 Minutes’ then do this… all I need to do is refer to the output of the connected lookup activity ‘Random Number Lookup Activity’.

So in my example this is simply @activity('Random Number Lookup Activity').output.firstrow.adfaction. ADF will then match the output of the stored procedure to the case statements I provide. The default activity is just a set of activities that are executed when the expression evaluation isn’t satisfied, i.e. the output of my stored procedure doesn’t match any of the case statements I provide.

In the snip below I have provided 4 case statements which match all of the possible outputs from my stored procedure. Essentially there isn’t any need for the default activity, as my stored procedure will always return 1 of 4 results, but it’s useful to know about for future reference.

Within each case statement there is an activity for which I would like ADF to perform if it matches the result of my stored procedure. In my example these are just ‘Wait’ activities. So if the stored procedure returns ‘Wait 1 Minute’ I’ve put a ‘Wait’ activity within the case statement to wait 1 minute (or 60 seconds).

When I first ran this pipeline in debug mode I could see that the output of the stored procedure was ‘Wait 2 Minutes’. The switch activity then matched this to the case statements I provided and performed the activity within the ‘Wait 2 Minutes’ case statement and triggered my ‘Wait 2 Minutes’ activity.

When I debugged another run my stored procedure returned ‘Wait 4 Minutes’. The switch activity then executed my ‘Wait 4 Minutes’ activity. Perfect!

So it’s pretty much as simple as that!

I hope this helps all you ADF’ers out there!

How to block the creation of a classic workspace in powerbi.com when a new MS Team is created!

There is now a new tenant setting for Power BI admins to prevent users from creating classic workspaces in Power BI! This includes from places like MS Teams! Hurrah!

If you’re a Power BI Admin you can now enable ‘Block classic workspace creation’.

To do this go to the powerbi.com portal and click the ‘settings’ icon in the top right hand corner of the screen.

Click on ‘Admin portal’ and then locate and click ‘Tenant settings’.

Next under the ‘Workspace settings’ click ‘Block classic workspace creation’.

Now change the toggle from ‘Disabled’ to ‘Enabled’. This will now block the creation of new classic workspaces in your Power BI portal from places like MS Teams.

This will also automatically remove classic workspaces from your Power BI workspace list, but only if they were created by Teams, and only if they’ve never been accessed. If they have been accessed then they’ll be left there for you to deal with manually.

Now you’re probably thinking, well, I’ll just delete any old remaining workspaces which have been accessed but are no longer being used! Whoa, hold on… if you do this you’ll end up deleting the underlying Office 365 modern group (and the associated Team!) which may still be in use.

A better way of doing this is to upgrade the workspace to the new experience and then delete it.

To do this, click on the ellipsis next to the filters button within the workspace you want to delete and click ‘Edit workspace’. Next go to the advanced section and under ‘Upgrade this workspace (preview)’ select ‘Upgrade now’.

A small window will appear in the middle of your screen.

Check the ‘I’m ready to upgrade this workspace’ option and then click ‘Upgrade’.

Now, once the workspace has been upgraded to the new experience you can click on the ‘Settings’ icon and click ‘Delete workspace’. This leaves the Office 365 modern group untouched but removes the workspace from your Power BI workspace list. Simple!

As always, a big thanks to the guys at Guy In A Cube (Twitter: https://twitter.com/GuyInACube) on YouTube for sharing this knowledge!

Check out their video here: https://www.youtube.com/watch?v=T2PAL4D2SvU.

Here’s the link to the Microsoft Power BI team’s announcement on this new feature: https://powerbi.microsoft.com/en-us/blog/announcing-you-can-now-block-classic-workspace-creation/

Using ConcatenateX in Power BI to return multiple values.

In this blog post we’ll take a quick look at using the ConcatenateX function to view a concatenated string of dates where the max daily sales occurred for a given month.

I came across this function whilst going through the excellent “Mastering DAX 2nd Edition Video Course” by the guys from SQLBI.com. So credit to Marco and Alberto for sharing this.

So how does it work? If we had a list of dates ranging from 01/01/2020 to 31/12/2020 and we wanted to see which days we achieved maximum sales for each given month in a year we could use the ConcatenateX function to return these dates in a single row per month.

As we can see in the screenshot below, the left-hand table shows that in the month of June we achieved maximum daily sales of 99 on both 18/06/2020 and 25/06/2020. In the table to the right we can see those two dates presented on a single row for the month of June in the column “What were the max days?”. This column was created using the ConcatenateX function!

So let us first look at what the maximum daily sales were per month. To do this we’ll use the MAXX function to create a “Max Daily Sales” measure. This returns the maximum daily sales achieved in each given month as a single value, so for the month of June this would be 99. The problem is that we can’t tell which days these max sales were achieved on without drilling down into the data. Was it just one day or was it multiple days? All we can see is a figure of 99.
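
As a rough sketch, assuming a Sales_2020 table with a Date column and an existing [Total Quantity Sold] measure (the names used later in this post):

Max Daily Sales =
MAXX ( VALUES ( Sales_2020[Date] ), [Total Quantity Sold] )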

So let us create a new measure to work out on which days this figure of 99 occurred on.

The variable at point (a) returns a table with a single column which lists all of the unique dates in our Sales_2020 table.

The variable at point (b) returns the max sales for the given filter context in this case month.

The variable at point (c) uses the FILTER function to keep only the days where the max sales were achieved, by filtering the list of days to those where the total quantity sold measure equals the max daily sales variable. For example, in June we achieved max daily sales of 99 on 18/06/2020 and 25/06/2020, so the variable at point (c) would filter the ListOfDays table variable down to just 18/06/2020 and 25/06/2020.

If we just had one max sales day per month we could simply return MaxDaysOnly. However we may have multiple days per month where max sales were achieved. Hence we use the ConcatenateX function to create a string of dates.

The variable at point (d) creates a string of concatenated dates separated by the delimiter “,” which can be used against a single row in a table.
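
Putting points (a) to (d) together, the measure looks something like this (using the same assumed table, column and measure names as above):

What were the max days? =
VAR ListOfDays = VALUES ( Sales_2020[Date] )                                    // (a) unique dates in the current filter context
VAR MaxDailySales = [Max Daily Sales]                                           // (b) max daily sales for the month in context
VAR MaxDaysOnly = FILTER ( ListOfDays, [Total Quantity Sold] = MaxDailySales )  // (c) keep only the days that hit that maximum
RETURN
    CONCATENATEX ( MaxDaysOnly, Sales_2020[Date], ", " )                        // (d) concatenate those dates into one string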

Wrapping it all up returns us this table below, which shows us which days max sales were achieved per given month! Pretty cool eh?

Check out https://www.sqlbi.com/articles/mastering-dax-video-course-2nd-edition/ for more information regarding the online DAX course as well as https://twitter.com/marcorus and https://twitter.com/ferrarialberto on Twitter!
