Monthly Archives: May 2020
A common request from clients is how to sort a table in Power BI by multiple columns, in the same way you can in Excel.
For a while, there was no way (at least no easy way) to do this until the Power BI March 2020 update.
I learnt this tip from the following YouTube video:
Full credit to Dhruvin Shah; check his video out.
Below I have a Power BI table displaying fruit sales, currently unsorted.
To sort the table by Fruit, click on the Fruit column header.
The table is now sorted by Fruit in alphabetical order.
To add a secondary sort on Sales, hold the Shift key and click on the column header Sales.
The table is now sorted by:
– Fruit name in alphabetical order
– Sales in descending order
Some extras to note:
– There is no limit on the number of columns that can be used to sort a table. Just hold the Shift key and keep choosing columns.
– This feature is not available for matrices.
– To switch the sorting from ascending to descending, or vice versa, continue to hold Shift and click on the column header again.
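The ordering you end up with (primary key ascending, secondary key descending) is a standard multi-level sort. Purely as an illustration, here is the same ordering expressed in plain Python; the fruit and sales values are made up:

```python
# Illustrative only: the primary/secondary ordering described above,
# expressed as a plain Python sort. The sample rows are made up.
rows = [
    ("Banana", 30),
    ("Apple", 10),
    ("Banana", 50),
    ("Apple", 25),
]

# Sort by fruit name ascending, then by sales descending
# (negating the number reverses its order within each fruit)
ordered = sorted(rows, key=lambda r: (r[0], -r[1]))
print(ordered)
# -> [('Apple', 25), ('Apple', 10), ('Banana', 50), ('Banana', 30)]
```

Each Shift-click in Power BI effectively appends one more key to that sort tuple.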
When querying data from Azure Databricks (Spark) into Power BI you may encounter the error: “ODBC:ERROR [HY000] [Microsoft][Hardy] (100) The host and port specified for the connection do not seem to belong to a Spark server. Please check your configuration.”
This is usually caused by trying to connect to a ‘Standard’ tier Databricks instance; Power BI (and ODBC in general) can only connect to Databricks on the ‘Premium’ pricing tier. You will need to upgrade to be able to access Databricks/Spark from Power BI.
Another common and similar error when Power BI is querying data from Azure Databricks (Spark) is: “ODBC:ERROR [HY000] [Microsoft][DriverSupport] (1170) Unexpected response received from the server. Please ensure the server host and port specified for the connection are correct.”
The most likely cause of this error is an invalid server path, as you have to modify the path that Databricks gives you before using it.
In Databricks, open the cluster, go to Advanced Options, click on the JDBC/ODBC tab, and copy the JDBC URL. It will look something like this:
All of the sections in red need removing or changing, so it ends up like this:
The username needs to be “token”, and the password needs to be a personal access token generated from the Databricks portal: click the user/silhouette icon at the top right of the screen, then User Settings, then Generate New Token.
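To illustrate the reshaping of the URL, here is a short Python sketch. The workspace host, organisation ID and cluster ID in this URL are made up, and the exact properties in your copied JDBC URL may differ, but the idea is the same: keep the host and port, keep the httpPath, and serve it as an HTTPS address:

```python
# Sketch: turn the JDBC URL copied from the cluster's JDBC/ODBC tab into
# the HTTPS server path Power BI's Spark connector expects.
# All workspace/cluster identifiers below are invented for illustration.
jdbc_url = (
    "jdbc:spark://adb-1234567890123456.7.azuredatabricks.net:443/default;"
    "transportMode=http;ssl=1;"
    "httpPath=sql/protocolv1/o/1234567890123456/0123-456789-abc123;"
    "AuthMech=3;UID=token;PWD=<personal-access-token>"
)

# Host and port sit between "//" and the first "/"
hostport = jdbc_url.split("//", 1)[1].split("/", 1)[0]

# The remaining options are semicolon-separated key=value pairs
props = dict(p.split("=", 1) for p in jdbc_url.split(";") if "=" in p)

server = f"https://{hostport}/{props['httpPath']}"
print(server)
# -> https://adb-1234567890123456.7.azuredatabricks.net:443/sql/protocolv1/o/1234567890123456/0123-456789-abc123
```

That final `https://…` string is what goes in the Server box of the Power BI Spark connector (with the HTTP protocol option).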
Also, when using Databricks, watch out for the type of connection you choose in Power BI: it should be ‘Spark’, not ‘Azure HDInsight Spark’.
Hope this helps!
I recently came across the following error while deploying a tabular cube using the Analysis Services Deployment Wizard:
My updates for this deployment included removing a table from the cube. In the cube deployment options I had chosen “Retain roles and members”. However, the issue was that an existing role still referenced the table I had deleted; for example, the image below shows the Role Manager in Visual Studio, where the Cube_Finance role references the “Transaction Type” table.
To resolve this error I simply had to deploy the cube using the option “Deploy roles and retain members”:
This resulted in a successful deployment with the role updated to exclude any references to the deleted table:
Note this error can also occur if a table has been renamed, for the same reason explained above where a role references the “old” table name. The solution is the same: deploy roles and retain members.
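For reference, the same choice can also be baked into the model’s .deploymentoptions file, which the Deployment Wizard reads alongside the .asdatabase file. Below is a trimmed sketch of the relevant setting; I have omitted the namespace declaration and the other options (transactional deployment, partition handling, etc.), so verify the exact element names against a wizard-generated file:

```xml
<!-- Trimmed sketch of a .deploymentoptions file: only the role-related
     setting is shown; other options are omitted for brevity. -->
<DeploymentOptions>
  <!-- DeployRolesRetainMembers redeploys the role definitions (removing
       stale table references) while keeping the membership lists that
       already exist on the server -->
  <RolesDeployment>DeployRolesRetainMembers</RolesDeployment>
</DeploymentOptions>
```

This is handy if you run the Deployment Wizard silently from a build pipeline, where there is no UI in which to pick the option.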
Azure Data Factory (ADF) Parent Activities don’t trigger ON ERROR OUTPUT when Child Activities Error… What happens?
That title is a big question that a client of ours came across, and it required some additional investigation… What does happen, and how do we get around it?
In the below example we’ll use a simple IF Activity as our Parent Activity. To explain briefly how it functions: you provide it with a query that evaluates to either True or False, and control then passes down to one set of Activities if True and a different set of Activities if False. The IF Activity itself has three Outputs: On Success, On Failure and On Completion. This looks like:
The IF Activity therefore has, logically, four different things it can do given the query criteria:
1. If the query evaluates correctly, then:
– If True, pass to the If True Activities
– If False, pass to the If False Activities
2. If the query doesn’t evaluate to True or False, error.
Following that, we have the below options:
– When 1. resolves to True and the If True Activities complete with Success, trigger the On Success and On Completion Outputs
– When 1. resolves to False and the If False Activities complete with Success, trigger the On Success and On Completion Outputs
– When 2. occurs and the evaluation errors, trigger the On Error and On Completion Outputs
The issue occurs with the below options:
What happens when the If True Activities complete with Error?
What happens when the If False Activities complete with Error?
Logically you would expect that to cause the IF Activity to error as well. Right? WRONG!
Although it shows as if it has an error in the pipeline view:
The details show “Activity failed because an inner activity failed.”:
This means that the IF Activity itself has still started down its success pathway and is now just surfacing the error from further down. This causes the IF Activity to complete with neither Success nor Error! (You can see this in Image 2 above.) You can also see that the On Completion Output has still triggered successfully.
This means that the Error Output of the IF Activity can only be used to handle errors in the evaluation of the If query itself (when it can’t resolve to True or False, or can’t execute for another reason).
If you want to monitor the steps inside the IF Activity, you’ll need to put those output pathways inside the True and/or False activities themselves and handle this at a deeper level.
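As a sketch of what that deeper-level handling looks like, here is a trimmed pipeline JSON fragment. The activity names and the expression are invented, and non-essential properties are omitted; the point is that the failure handler hangs off the inner activity’s Failed dependency condition, inside ifTrueActivities, rather than off the IF Activity itself:

```json
{
  "name": "If Load Type",
  "type": "IfCondition",
  "typeProperties": {
    "expression": {
      "value": "@equals(pipeline().parameters.FullLoad, true)",
      "type": "Expression"
    },
    "ifTrueActivities": [
      {
        "name": "Copy Full Load",
        "type": "Copy"
      },
      {
        "name": "Handle Copy Failure",
        "type": "Wait",
        "dependsOn": [
          {
            "activity": "Copy Full Load",
            "dependencyConditions": [ "Failed" ]
          }
        ],
        "typeProperties": { "waitTimeInSeconds": 1 }
      }
    ]
  }
}
```

Whatever your real failure handling is (a logging stored procedure, a web hook, etc.) goes where the placeholder Wait activity sits; the same pattern applies inside ifFalseActivities.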
I have launched a new Twitter account, http://www.twitter.com/PurpleFrogReiss, so please follow me there for updates when I post new blog posts!