
How the UNICHAR() DAX Function Enhances Power BI Reports

The UNICHAR() DAX function is a text function that takes a numerical Unicode value and displays its associated character. For example, UNICHAR(128515) will display as 😃.

90% of the information the human brain processes is visual, and we process images up to 60,000 times faster than text, so it makes perfect sense to use icons where possible to enhance reports. This scarcely used DAX function opens up that option.

The below stacked column chart uses Unicode emoticons to enhance the readability of the ‘Genre’ axis labels.

So, how do we achieve this?

To produce this, you will need to edit the query. In the ‘Data’ view, right-click the relevant table and select “Edit Query”.

First, duplicate the existing column you want Unicode characters for (genre in this case). Then use the ‘Replace Values’ option to substitute in the relevant Unicode numbers for each genre.

(This new column can be hidden from the report view, as it contains nothing meaningful on its own.)

Next, create a second new column, this time a calculated column with a simple DAX expression:

IconColumn = UNICHAR([UnicodeNumberColumn])

This new ‘Icon’ column can now be used in reports the same way as any other text column.

Note how in the stacked column chart above the original names have been included. This is good practice for two main reasons. The first is clarity: a clown denotes comedy to most users but could indicate horror to others, so including the label removes the ambiguity.

The other reason is possible compatibility issues: a Unicode character will only display if it exists in the chosen font. In most cases this will be fine, especially for emoji characters, but just in case there are display issues it is worth including the full label.

Staying with the movie theme, the below chart shows movie ratings both numerically and visually, the latter created by a custom measure:

Stars = REPT(UNICHAR(11088), AVERAGE('IMDB 1000'[10 Star Rating]))

A measure that uses the UNICHAR() function will always be a text field and, as such, normal text formatting applies. In the example above we can set the colours to be gold on a black background.

The previous examples do help readability but don’t really add anything meaningful to the report. The below table shows that, combined with conditional formatting, the UNICHAR() function can add worthwhile content such as customisable KPIs.

There are 143,859 Unicode characters available, everything from emojis, symbols, shapes and braille patterns to dice and playing cards. Whether you want to offer further insight into your data, enhance the user experience or simply create something sublimely ridiculous, with so many icons at your fingertips, the possibilities are only limited by your imagination.

Further information on the UNICHAR() function can be found here: UNICHAR function (DAX) – DAX | Microsoft Docs
A list of Unicode characters and their respective numerical values can be found here: Huge List of Unicode Characters

Join 2 Python lists together using nested loops

In this blog post I will show you how to join two 2D Python lists together.

The code is in the screenshot below.
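
Here is a minimal sketch of that code using hypothetical sample data (the capitals, countries and key values are illustrative assumptions), laid out so the line numbers referenced in the walkthrough below still roughly apply:

capitals = [[1, 'London'], [2, 'Paris'], [3, 'Berlin']]
countries = [[1, 'United Kingdom'], [2, 'France'], [3, 'Germany']]
joined = []
for i in capitals:
    for j in countries:
        if i[0] == j[0]:
            # keys match: append the Key, Capital and Country to the output list
            joined.append([i[0], i[1], j[1]])
for record in joined:
    print(record)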

Lines 1 – 2 are the two lists that are going to be joined; line 3 is an empty list that the output will be appended to.

Lines 4 – 5 are two loops (one nested inside the other) which cycle through the records in both lists. Line 6 checks whether the first items (index 0) of the two records currently held in i and j match. If they do, the Key, Capital and Country are appended to our new list.

Lines 9 – 10 print the output record by record, showing the join has worked successfully.

This code can be expanded to 3 lists: the code would have a 3rd for loop and an extra check in the if statement to find the correct record in the 3rd list to join, as sketched below.
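
Continuing with the same hypothetical capitals and countries lists from the sketch above, a three-list version might look something like this (the populations list is another made-up example keyed on the same values):

populations = [[1, 67000000], [2, 68000000], [3, 83000000]]
joined3 = []
for i in capitals:
    for j in countries:
        for k in populations:
            # all three keys must match before the combined record is appended
            if i[0] == j[0] and i[0] == k[0]:
                joined3.append([i[0], i[1], j[1], k[1]])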

Joining lists together in Python is useful when there is data in different lists and it would be beneficial if it were combined.

Dynamic Date Formats in Power BI

Which date format styles should we use if we are building a report that is being consumed internationally?

Remember, 01/12/2021 is December 1st or January 12th depending on which part of the world it is being read in.

The decision may be taken out of our hands if there is a company policy in place. If the company is based in the USA, for example, it may choose US-formatted date fields as the standard for reporting across the entire business. However, if the field needs to be truly dynamic depending on the consumer’s location, the answer lies in this tooltip:

Explanation of dynamic date formats

There are 2 formats in the selection that are prefixed with an asterisk:

Selection of dynamic date formats
* We shall use ‘General Date’ in the examples throughout this post for reasons explained later

There are 2 variables that are checked when loading reports in the Power BI Service.

First it will check the language setting of the user account in the service. This is set under ‘Settings >> General >> Language’. There is a dropdown option that acts as both a language and regional setting, and this drives how dates are formatted when dynamic date formats are used.

Power BI service language settings

If this is set to ‘Default (browser language)’ the second variable, the browser’s default language setting, will take effect.

In Edge this is set under ‘Settings >> Language’; when multiple languages are set, the topmost one is considered the default.

Language settings in Edge

In Chrome it is set under ‘Settings >> Advanced >> Language’; this uses the same system as Edge, where the topmost language is used as the default.

Language settings in Chrome

Here is an example of a table loaded in a browser using both English UK and English US:

English UK
English US

This example shows that not only does the format of the date itself change (day and month have switched), but there are also visual knock-on effects to account for. The US format uses a 12-hour clock by default, and the addition of the AM/PM suffix changes the column width, which alters the readability of the table and potentially the entire report. It is these effects we need to be aware of when developing reports for international consumption.

This issue can easily be avoided by using the ‘Auto-size column width’ setting under ‘Column Headers’ on the formatting tab of the visual, or by allowing for the growth when setting manual column widths. (For a great guide on manually setting equal column widths, please read this helpful post by my colleague, Nick Edwards)

Unfortunately, this post comes with a caveat: at the time of writing there would seem to be a bug in Power BI. Remember this from earlier?

Explanation of dynamic date formats
Selection of dynamic date formats

As you can see below, both fields use the UK format of DD/MM/YYYY when the browser language is set to English UK.

Settings set to UK
UK dates

However, when the browser settings are changed to English US, only the *‘General Date’ format changes; the *‘DD/MM/YYYY’ format is still showing in the UK format, even though there is an asterisk next to it in the selection list.

Settings set to US
Erroneous mix of US and UK dates

Hopefully once this issue is addressed, the use of regionally dynamic date formats will be available for both long and short formats.

Power BI – Enable Load

In Power BI’s Power Query there is an option to control whether a table is loaded into the report. The ‘Enable load’ option can be found by right-clicking on the table. By default, the load is enabled.

There is also an option, ‘Include in report refresh’, which lets a user stop a table from refreshing when they refresh the entire report. This may be useful for static tables, or for large tables that take a long time to refresh when a user wants to concentrate on how other tables are refreshing.

Once a user disables the ‘Enable load’ option, the table name turns italic, which is an easy way to see whether a table will be loaded or not.

After applying these changes, no data has been loaded into the report.

To re-enable the load, jump back into Power Query, right-click on the table and select ‘Enable load’.

Finally, some scenarios where it might be useful to disable loading a table:
– Disabling the load for tables in Power Query that were only ever stepping stones to create other tables
– Seeing how removing a table affects your report before deleting it
– Removing a table that might be required again in the future, without deleting it

How to make your matrix column widths all equal to each other in Power BI using DAX

Have you ever come across an issue where your Power BI matrix columns just aren’t the same width and visually don’t look right?

Unfortunately (as of April 2021) there is no easy way to make all column widths equal in the format pane of a matrix visual.

However, there is a hack using DAX to set the width of all the columns in a matrix so that they are equal and pixel perfect!

How do you do this I hear you ask?

Firstly, create a new measure called ‘Set Column Width’ and enter a string value equal in length to your longest column title. In my case my longest title is “Front Derailleur Cage”, which has a length of 21 characters including spaces, so I need to set my DAX expression to be a string which is 21 characters long. In my example I’ve just created a string of 21 asterisks wrapped in speech marks – but this can be any combination of characters you like!

The next job is to go to the format pane of your matrix and set the “Show on rows” toggle equal to ‘On’.

Next make sure the ‘Auto-size column width’ is set equal to ‘On’.

Now drag your newly created DAX expression (in my case ‘Set Column Width’) on to the values field of your matrix.

You’ll then notice that your matrix will look similar to the snip below – a little bit of a mess! But not to worry, this is all part of the plan!

Next, go back to the format pane of your matrix and set ‘Auto-size column width’ to ‘Off’.

Now remove your ‘Set Column Width’ measure from the visual by clicking the ‘X’ symbol on the field pane.

Finally increase the width and height of your matrix visual to accommodate the increased column widths.

You now have pixel perfect column widths which are all equal to each other!

A huge thanks to the brilliant MVP Ruth Pozuelo Martinez (@ruthpozuelo) from curbal.com for this hack! It’s been a massive help for my Power BI reports here at Purple Frog! Hopefully the Power BI team will release a proper solution in the matrix format pane soon!

Tabular Cube Processing Report

I have created a Power BI report which provides detail on the state of processing in a Tabular Cube.
The report uses the cube’s dynamic management views to provide information about the cube’s partitions.

To use the tabular cube processing report, you need to insert the following information:

  • Server
  • Database (Cube Name)

Once entered, and assuming the connection is fine, you need to accept some native queries. These statements are select statements and will not alter the cube. That should give you a report similar to the one below; I have used an Adventure Works tabular model as an example.

Click here to download the report

This report is inspired by one I found for general tabular cube documentation by Data Savvy, which I highly recommend. That report includes information on tables, columns, relationships, measures and security:

Click here to view tabular cube documentation blog post

My Experience of SQLBits 2020

A few weeks ago, Purple Frog attended SQLBits 2020 (the largest data conference in Europe). This year the event had a change of pace as the whole conference was hosted online due to the current global pandemic. We converted the office into our own small venue with almost every screen being used over the week to display sessions. With the help of plenty of social distancing, plenty of pizza and the odd bacon sandwich… it worked quite well!

Here is what the frogs had to say about the event:

SQLBits 2020 had literally hundreds of sessions to choose from. It was a tough call which session to watch next in most cases! Sessions ranged from hardcore data related sessions, to softer creative/design/career sessions too. I enjoyed seeing sessions from big names such as Christian Wade, Chris Webb, Brent Ozar and not forgetting the Italian DAX experts Marco & Alberto! I also gained lots of insight from Paul Andrew’s (a Purple Frog alumni) training day session on Azure Data Factory and gained lots of hints and tips that we could apply to live projects that I’m working on.

As some of the sessions were pre-recorded, the presenter was on hand to answer any questions along the way using the interactive chat window which was super useful and interactive. Finally, the organisers hosted a pub quiz Thursday night which was a great laugh as well as hosting lunch time entertainment on Thursday & Friday from the guys from the ‘Festival of the Spoken Nerd‘! All in all a fantastic week of remote training and insight! Be sure to get yourself booked on for next year! – Nick Edwards (BI Developer)


I had the pleasure of attending SQLBits 2019 last year, so to see such a large event be totally migrated online was rather impressive. While the lunchtime entertainment and the pub quiz were a lot of fun, with Steve Mould having a comedy routine on Venn diagrams, my highlight was attending Andy Leonard’s “From Zero to Azure Data Factory” training day. Andy is a great presenter and very knowledgeable about this area; there were many versatile examples he provided during the day that I found very useful. While I would have preferred to attend a live event, given the circumstances I feel SQLBits 2020 was a success and has really set the mark for what a virtual conference can achieve. – Liam McGrath (Support Engineer)


SQLBits was very different this year! The atmosphere of a virtual room, compared to an in person event is never going to compete, but that doesn’t mean that the quality or content was any lesser for it! In fact the ability to ask questions directly to the speakers via the chat windows or Q&A during the pre-recorded presentation, allowed them to fully answer your question in depth and have a good conversation around it. Sitting in on sessions with Industry giants, Itzik Ben-Gan, Brent Ozar, Pinal Dave and Simon Whitely and Terry McCann from Advancing Analytics was incredibly valuable as expected.

There were loads of incredibly thought provoking sessions, ranging from Machine Learning in Power BI, to Databricks vs Synapse. Unlocking your LinkedIn profile, to T-SQL Performance Tuning. If you were any way interested in Data, there was something for you to learn! Overall I had a great time, and I can’t wait to see what the wonderful team of volunteers and speakers come up with next year! Bring on SQLBits 2021! – Reiss McSporran (BI Developer)


I found the conference really useful and quite a fun way of learning. I got some really useful knowledge out of the sessions and got to see different ways of doing things. The range of sessions is really good and there are so many options, you get a good mix of topics and therefore aren’t overwhelmed with information. The lunchtime entertainment and quiz were a really fun addition, so it’s not just all about the learning. – Jeet Kainth (BI Developer)


SQLBits 2020 was the first SQLBits event that I have attended. I enjoyed lots of different sessions on numerous topics such as: DAX, Power BI, Azure Machine Learning, SQL Server and even one about brand building.
The sessions were very informative and I learnt a lot. I would definitely attend again even if the event was to be virtual. – Jon Fletcher (BI Developer)


Having attended many SQLbits conferences over the years, I was excited to be accepted to deliver a talk this year on LinkedIn. As the conference was virtual, we had to pre-record our sessions, but this meant that whilst the recordings were playing we were able to directly interact with the delegates in the chat to answer questions. It was great to see a soft skills topic as part of the SQLBits agenda and, given the positive response, I hope that we start to see more of these sessions scheduled at future events. Also, it was funny that I had slightly more people in my session than Alex Whittles! – Hollie Whittles (Speaker)

How to delay a Python loop

In this blog post, I will run through 3 different ways to execute a delayed Python loop. In these examples, we will aim to run the loop once every minute.
To show the delay, we will print out the current datetime using the datetime module.

1 – Sleep
The sleep function from Python’s time module pauses the Python execution by the number of seconds inputted. The example below pauses the script for 60 seconds.
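
As a rough sketch of that script (the print format and the choice of ten iterations are just for illustration, matching the "ten in our case" mentioned later in the post):

from datetime import datetime
import time

for _ in range(10):
    # print the current datetime, then pause for 60 seconds before the next iteration
    print(datetime.now())
    time.sleep(60)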

The above script has a 60 second delay between the end of one iteration and the start of the next. However, what if we wanted to execute the next iteration 60 seconds after the start of the previous one?
In other words, how do we start the loop at the same time every minute?

2 – Calculate Sleep
To do this we calculate how long the loop should sleep for.
We will print the datetime every time the minute changes. At the start of the loop we look up the number of seconds that have passed so far this minute.

The number of seconds that have passed this minute is calculated from date_time[-2:]. Subtracting this from 60 gives the number of seconds the loop should sleep for, so that it executes when the next minute starts.

Once the loop has slept for the required number of seconds, we look up the datetime again and print it out.
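
A sketch of this approach, assuming the datetime is formatted as a string whose last two characters are the seconds (the exact format string and the ten iterations are assumptions):

from datetime import datetime
import time

for _ in range(10):
    # how many seconds of the current minute have already passed
    date_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    seconds_passed = int(date_time[-2:])
    # sleep for the remainder of the minute, waking up as the next minute starts
    time.sleep(60 - seconds_passed)
    # look up the datetime again and print it
    print(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))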

3 – Task Scheduler
The previous two options are good for executing a loop a few times – ten in our case. If we wanted to execute a Python script continuously without expiring, we could use the above examples with an infinite loop.
However, if one iteration errors, the whole script will stop. Therefore, we want to execute the entire Python script once a minute using an external trigger. This is where we can use Task Scheduler.

Task Scheduler can execute a Python script directly, but it is often easier to use a batch file. The batch file includes the location of the Python application (python.exe) and the location of the Python script (.py). For more detail on using Task Scheduler and batch files to run Python scripts, please see the following datatofish post – https://datatofish.com/python-script-windows-scheduler

Our batch file is:

To demonstrate Task Scheduler, I’m going to run the following Python code every minute.
This code uses pandas to produce a blank CSV file, with the name of the CSV file being the datetime the script was run.
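
A minimal sketch of such a script (the exact file name format is an assumption – colons are not allowed in Windows file names, hence the hyphens in the time portion):

from datetime import datetime
import pandas as pd

# name the file after the datetime the script was run
file_name = datetime.now().strftime('%Y-%m-%d %H-%M-%S') + '.csv'

# write an empty DataFrame out as a blank CSV file
pd.DataFrame().to_csv(file_name, index=False)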

The following screenshots show the triggers and actions used.

This produced the following CSV files; we can see that each file takes 1 – 4 seconds to create.

In summary, we have seen three different ways to delay a Python loop: two using loops inside Python and one using Task Scheduler. Which one to use depends on what kind of delay is best.

Sorting a Power BI table by multiple columns

A common request that is raised by clients is how to sort a table in Power BI by multiple columns, in the same way you can in Excel.
For a while, there was no way (at least no easy way) to do this until the Power BI March 2020 update.

I learnt this tip from the following YouTube video:
https://www.youtube.com/watch?v=ik0K1H9j2Uc
Full credit to Dhruvin Shah – check his video out.

Below I have a Power BI table displaying fruit sales, currently unsorted.

To sort the table by Fruit, click on the column header Fruit.

The table is now sorted by Fruit in alphabetical order.
To add a secondary sort on Sales, hold the Shift key and click on the column header Sales.

The table is now sorted by:
– Fruit name in alphabetical order
– Sales in descending order

Some extras to note:
– There is no limit on the number of columns that can be used to sort a table. Just hold the shift key and keep choosing columns.
– This feature is not available for matrices.
– To switch the sorting from ascending to descending or vice versa, continue to hold Shift and click on the column header again.

Capturing Insert and Update Counts from Merge

This post shows how you can capture and store the number of records inserted, updated or deleted by a T-SQL Merge statement.

This is in response to a question on an earlier post about using Merge to load SCDs in a Data Warehouse.

You can achieve this by using the OUTPUT clause of a merge statement, including the $Action column that OUTPUT returns.

The basic syntax is:

 
INSERT INTO XXXX
SELECT [Action]
 FROM
 (
 MERGE      XXXX AS Target
      USING XXXX AS Source
         ON XXXX=XXXX
      WHEN MATCHED
         AND XXXX <> XXXX
     THEN UPDATE SET
         XXXX=XXXX
     WHEN NOT MATCHED THEN
           INSERT (
              XXXX
           ) VALUES (
              XXXX
           )
     OUTPUT $action AS [Action]
     ) MergeOutput

You wrap the Merge statement up as a sub-query, adding the OUTPUT clause to return details about what happened in the merge. Note that you can’t just select from this sub-query; there has to be an INSERT INTO statement.

One row will be returned for each row touched by the merge process.

The $action column will contain either INSERT, UPDATE or DELETE, to indicate what happened to that row.

You can also include Source.* in order to include the source column values in the output dataset.

You can also include DELETED.*, which returns the values of any updated records before they were updated, and INSERTED.* to show the values after the update. In reality the records are not deleted or inserted, just updated, but DELETED/INSERTED is used as the terminology for old/new values either side of the update. When inserting a new record, DELETED values will be NULL.

 ... OUTPUT $action AS [Action], Source.*, DELETED.*, INSERTED.*
     ) MergeOutput 

You can then refer to this ‘MergeOutput’ result set at the top of the query by selecting from this sub-query.

There is a limitation though: you can’t aggregate the output directly. So if we want to summarise the actions into a single row of insert, update and delete counts, we have to use a temporary table, as in the sample code below.

 
CREATE TABLE #MergeActions ([Action] VARCHAR(10)) 

 INSERT INTO #MergeActions ([Action])
 SELECT [Action]
 FROM
 (
 MERGE      [dim].[Date] AS Target
      USING [etl].[DimDate] AS Source
         ON ISNULL(Target.[DateKey],0) = ISNULL(Source.[DateKey],0)
      WHEN MATCHED
         AND (Target.[Date] <> Source.[Date]
          OR Target.[Month] <> Source.[Month]
          OR Target.[Quarter] <> Source.[Quarter]
          OR Target.[Year] <> Source.[Year]
         )
     THEN UPDATE SET
         [Date] = Source.[Date]
        ,[Month] = Source.[Month]
        ,[Quarter] = Source.[Quarter]
        ,[Year] = Source.[Year]
        ,LastUpdated = GetDate()
     WHEN NOT MATCHED THEN
           INSERT (
              [DateKey]
             ,[Date]
             ,[Month]
             ,[Quarter]
             ,[Year]
             ,LastUpdated
           ) VALUES (
              Source.[DateKey]
             ,Source.[Date]
             ,Source.[Month]
             ,Source.[Quarter]
             ,Source.[Year]
             ,GetDate()
           )
     OUTPUT $action AS [Action]
     ) MergeOutput
 ;

 SELECT
      SUM(CASE WHEN [Action]='INSERT' THEN 1 ELSE 0 END) AS InsertCount
     ,SUM(CASE WHEN [Action]='UPDATE' THEN 1 ELSE 0 END) AS UpdateCount
     ,SUM(CASE WHEN [Action]='DELETE' THEN 1 ELSE 0 END) AS DeleteCount
 FROM #MergeActions

 DROP TABLE #MergeActions 

</Frog-Blog Out>
