

Dynamic MDX in Reporting Services

After a couple of days off work last week with SQL Bits III and SQL 2008 Unleashed, it’s back to the grindstone this week; however before I get onto the serious stuff I’d like to say thank you to the organisers of both events. Watching you on stage headbanging to Rockstar – legendary! (You can see for yourself here and here…).

Anyway, back to MDX…

This post explains how you can build a dynamic MDX query in Reporting Services, customised to the user's requirements. This can often bring some quite major performance benefits.

Let's say, for example, that you want a sales report grouped dynamically by either product, sales person, department or customer. Normally you would use a single static MDX query, and then add a dynamic grouping to the table in the report. This is fine until you try it on a large dataset. If you only have 50 products, 2 sales people, 5 departments and 100 customers, your MDX needs to return 50,000 records, and the report then has to summarise all of this into the level of grouping you want. This renders the pre-calculated aggregations in OLAP pretty much worthless.

To get around this, you can generate your MDX dynamically, so that the query returns the data already grouped into the correct level. You can also use this to add extra filters to the query, but only when they are required.

To start with, let's see how we would do this normally with SQL, assuming we're working from a denormalised table such as this:

Dynamic MDX Table

Dynamic SQL is pretty simple. Instead of having your dataset query as

  SELECT SalesPerson,
      Sum(Sales) AS Sales
   FROM tblData
   GROUP BY SalesPerson

you can add a report parameter called GroupBy,

Dynamic MDX SQL

and then use an expression as your dataset

  ="SELECT "
      + Parameters!GroupBy.Value + " AS GroupName,
      Sum(Sales) AS Sales
   FROM tblData
   GROUP BY " + Parameters!GroupBy.Value
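The same dynamic grouping idea can be sketched outside SSRS. Below is a minimal, hypothetical Python example (using the built-in sqlite3 module rather than SQL Server, with made-up table and column names) that builds the GROUP BY clause from a parameter, with a whitelist so the dynamic column name can't be abused for SQL injection:

```python
import sqlite3

# Whitelist the column names a user may group by - never splice raw
# user input into dynamic SQL.
ALLOWED_GROUPINGS = {"SalesPerson", "Product"}

def grouped_sales(conn, group_by):
    if group_by not in ALLOWED_GROUPINGS:
        raise ValueError(f"Unsupported grouping: {group_by}")
    # Equivalent of the SSRS expression: the grouping column is spliced
    # into both the SELECT list and the GROUP BY clause.
    sql = (f"SELECT {group_by} AS GroupName, SUM(Sales) AS Sales "
           f"FROM tblData GROUP BY {group_by}")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblData (SalesPerson TEXT, Product TEXT, Sales INT)")
conn.executemany("INSERT INTO tblData VALUES (?,?,?)", [
    ("Alice", "Widget", 100), ("Alice", "Gadget", 50), ("Bob", "Widget", 70)])

print(sorted(grouped_sales(conn, "SalesPerson")))  # [('Alice', 150), ('Bob', 70)]
```

The whitelist check is worth carrying over to the SSRS version too: because the parameter value ends up inside the query text, restricting it to known values (for example via the report parameter's available-values list) is essential.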

However, MDX datasets don't let you use an expression as the query, so we have to work around that quite major limitation. To do this we make use of the OpenRowset command. You need to enable it in the Surface Area Configuration tool, but once it's enabled you can fire off an OpenRowset command to SQL Server, which will then pass the inner query on to the cube. As the data source connection is to SQL Server, not Analysis Services, it allows you to use an expression in the dataset.

  ="SELECT * FROM OpenRowset('MSOLAP',
    'DATA SOURCE=localhost; Initial Catalog=SalesTest;',
    'SELECT
      {[Measures].[Sales]} ON 0,
      NON EMPTY {[Product].[Product].[Product].Members} ON 1
      FROM Sales') "

You can then expand this to make it dynamic depending on the value of a parameter. Before we do this, though, there are a couple of things I should point out:
1) As the expression can get quite large, I find it much easier to build the query in a custom code function
2) As SSRS can't interpret the expression at design time, you need to define the fields in your dataset manually (more on this later)

To use a custom code function, just change the dataset expression to

  =Code.CreateMDX(Parameters)

We pass in the parameters collection so that we can use the parameters to determine the query. Create a function called CreateMDX() in the code block

Dynamic MDX Code

You can then construct your MDX query within the code block.

  Public Function CreateMDX(ByVal params As Parameters) As String

   Dim mdx As String

   mdx = "SELECT * FROM OpenRowset("
   mdx += " 'MSOLAP', "
   mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
   mdx += " ' SELECT {[Measures].[Sales]} ON 0, "
   mdx += "    NON EMPTY {[Product].[Product].[Product].Members} ON 1 "
   mdx += "   FROM Sales ' "
   mdx += ")"

   Return mdx

End Function

We’re almost there…
The next problem is that the field names returned by the query are less than helpful. To fix this we just need to alias the fields in the query. I usually take the opportunity of casting the numerical fields so that the report treats them as numbers rather than strings.

  Public Function CreateMDX(ByVal params As Parameters) As String

   Dim mdx As String

   mdx = "SELECT "
   mdx += "  ""[Product].[Product].[Product].[MEMBER_CAPTION]"" AS GroupName, "
   mdx += "   Cast(""[Measures].[Sales]"" AS int) AS Sales "
   mdx += " FROM OpenRowset("
   mdx += " 'MSOLAP', "
   mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
   mdx += " ' SELECT {[Measures].[Sales]} ON 0, "
   mdx += "    NON EMPTY {[Product].[Product].[Product].Members} ON 1 "
   mdx += "   FROM Sales ' "
   mdx += ")"

   Return mdx

End Function

(Please do watch out for the quotes, double quotes and doubled double quotes; it can get a little confusing!)
We then need to tell the dataset which fields to expect from the query.

Dynamic MDX Fields

You can now use the dataset in your report.
However, the original point of this was to make the query dynamic… All we need to do to achieve this is expand the VB.Net code accordingly.

  Public Function CreateMDX(ByVal params As Parameters) As String

   Dim mdx As String

   mdx = "SELECT "

   If params("GroupBy").Value.ToString() = "Product" Then
      mdx += "  ""[Product].[Product].[Product]"
   ElseIf params("GroupBy").Value.ToString() = "SalesPerson" Then
      mdx += "  ""[Sales Person].[Sales Person].[Sales Person]"
   ElseIf params("GroupBy").Value.ToString() = "Customer" Then
      mdx += "  ""[Customer].[Customer].[Customer]"
   End If

   mdx += ".[MEMBER_CAPTION]"" AS GroupName, "

   mdx += "   Cast(""[Measures].[Sales]"" AS int) AS Sales "
   mdx += " FROM OpenRowset("
   mdx += " 'MSOLAP', "
   mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
   mdx += " ' SELECT {[Measures].[Sales]} ON 0, "

   If params("GroupBy").Value.ToString() = "Product" Then
      mdx += "  NON EMPTY {[Product].[Product].[Product]"
   ElseIf params("GroupBy").Value.ToString() = "SalesPerson" Then
      mdx += "  NON EMPTY {[Sales Person].[Sales Person].[Sales Person]"
   ElseIf params("GroupBy").Value.ToString() = "Customer" Then
      mdx += "  NON EMPTY {[Customer].[Customer].[Customer]"
   End If

   mdx += ".Members} ON 1 "

   mdx += "   FROM Sales ' "
   mdx += ")"

   Return mdx

End Function

It’s certainly not that simple, and debugging can cause a few headaches, but you can gain a massive performance benefit in complex reports if you’re prepared to put the work in.

Dynamic MDX Results

You can download the project files here

As always, please let me know how you get on with it, and shout if you have any queries…

Alex

SQLBits III and more…

Next week we’re lucky enough to have a double-whammy of SQL events in the UK. Has Christmas come early?

On Wednesday 10th September, Microsoft are running a live TechNet event, SQL Server 2008 Unleashed, at their Reading campus. It looks like it should be a day well spent, with Prem Mehra delivering a keynote speech on large scale SQL Server systems, and other sessions including data warehouse improvements, upgrade paths, and a session focusing on BI. The event is now full, so I hope you’ve already booked your place…

Then, on Saturday 13th September, we’ve got the long awaited SQLBits III event in Hatfield. There are far too many sessions to list here, but with dedicated tracks for BI, Admin, Development and Katmai, there is certainly something for everyone. You can safely assume that I’ll be on the BI track all day!

I’ll be going to both events; I know a few of you regular frog-blogees are also going, so I look forward to meeting you at one or both.

Alex

Use Stored Procedure Output Parameters in SSRS

It’s commonly known that there are a few frustrations when using stored procedures as the source for SSRS 2005 datasets. The inability to cope with temporary tables is always a key one; another is not being able to get at output parameters or return values.

This post explains the easiest way to get at the output parameters and return value.

Let's assume you are using the following stored proc:


IF OBJECT_ID('myProc') IS NOT NULL
  DROP PROCEDURE myProc
GO
CREATE PROCEDURE dbo.myProc
  @OutTest int OUTPUT
AS
  SET @OutTest = 42

  --Random test query
  SELECT 1 as x, 'rec1' as y, 'rec1' as z
  UNION
  SELECT 2, 'rec2', 'rec2'

  RETURN 1
GO

When we set the dataset to use myProc, we will get the three fields (x, y and z) returned, but not the @OutTest parameter, or the return value of 1.

To fix this all you need to do is create a wrapper procedure that converts the values into extra fields.


IF OBJECT_ID('myProcWrapper') IS NOT NULL
  DROP PROCEDURE myProcWrapper
GO
CREATE PROCEDURE dbo.myProcWrapper
AS

  CREATE TABLE #myTmp(
    x int,
    y varchar(20),
    z varchar(20)
  )

  DECLARE @ReturnValue int, @OutTest int

  INSERT INTO #myTmp
    EXEC @ReturnValue=dbo.myProc @OutTest OUTPUT

  SELECT x, y, z,
      @ReturnValue as RetVal,
      @OutTest as [Output]
    FROM #myTmp

  Drop Table #myTmp

  RETURN 1
GO

This runs the original procedure and stores the results in a temporary table (yes, I know, I know…!). It then appends the return value and output parameter as extra fields and returns the whole lot. These values are repeated for every record returned, but all you have to do is take the value from the first record: =First(Fields!Output.Value).
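The wrapper pattern is simple enough to sketch in plain Python (hypothetical values, standing in for the stored procedures): run the inner procedure, capture both the rowset and the scalars, then repeat the scalars on every row so a client that only understands rowsets can read them.

```python
def my_proc():
    """Stand-in for dbo.myProc: returns a rowset plus two scalars."""
    rows = [(1, "rec1", "rec1"), (2, "rec2", "rec2")]
    out_test = 42      # the @OutTest OUTPUT parameter
    return_value = 1   # the RETURN value
    return rows, out_test, return_value

def my_proc_wrapper():
    """Stand-in for dbo.myProcWrapper: append the scalars to every row."""
    rows, out_test, return_value = my_proc()
    # Same shape as SELECT x, y, z, @ReturnValue AS RetVal, @OutTest AS [Output]
    return [row + (return_value, out_test) for row in rows]

print(my_proc_wrapper()[0])  # (1, 'rec1', 'rec1', 1, 42)
```

As in the SQL version, the scalar values are duplicated on every record; the consumer just reads them from the first row.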

If you wanted to get rid of the temporary table, you could always replace it with a permanent table, but you’d have to be careful to keep each client instance separate so results for different users didn’t interfere with each other. You could do this by adding a working column to the myTmp table containing the username and/or workstation ID. Something like this…


IF OBJECT_ID('myTmp') IS NOT NULL
  DROP TABLE myTmp
GO

CREATE TABLE myTmp(
  x int,
  y varchar(20),
  z varchar(20),
  UserID varchar(50),
  Workstation varchar(50)
)
GO
ALTER TABLE dbo.myTmp ADD CONSTRAINT
  DF_myTmp_UserID DEFAULT SYSTEM_USER FOR UserID
GO
ALTER TABLE dbo.myTmp ADD CONSTRAINT
  DF_myTmp_Workstation DEFAULT Host_Name() FOR Workstation
GO

IF OBJECT_ID('myProcWrapper') IS NOT NULL
  DROP PROCEDURE myProcWrapper
GO
CREATE PROCEDURE dbo.myProcWrapper
AS

  DECLARE @ReturnValue int, @OutTest int

  DELETE FROM myTmp
    WHERE UserID = SYSTEM_USER
    AND Workstation = Host_Name()

  INSERT INTO myTmp (x,y,z)
    EXEC @ReturnValue=dbo.myProc @OutTest OUTPUT

  SELECT x, y, z,
      @ReturnValue as RetVal,
      @OutTest as [Output]
    FROM myTmp
    WHERE UserID = SYSTEM_USER
    AND Workstation = Host_Name()

  DELETE FROM myTmp
    WHERE UserID = SYSTEM_USER
    AND Workstation = Host_Name()

  RETURN 1
GO

This still has the benefit of not having to modify the original stored procedure, but now allows you to see the available fields in Reporting Services.

SSRS Report Performance monitoring

Today’s Frog-Blog top tips are quite simple, but ones that I always find very useful: SSRS report performance monitoring.

Once you start to build up a few Reporting Services reports, you need to find out how they’re performing, whether any particular reports are causing the server problems and even which users are running which reports. The following set of queries should point you in the right direction. They should all be run against the ReportServer database.

The 20 most recently run reports

 SELECT TOP 20 C.Path, C.Name, EL.UserName, EL.Status, EL.TimeStart, 
     EL.[RowCount], EL.ByteCount, 
    (EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)/1000 AS TotalSeconds, 
     EL.TimeDataRetrieval, EL.TimeProcessing, EL.TimeRendering 
   FROM ExecutionLog EL 
    INNER JOIN Catalog C ON EL.ReportID = C.ItemID 
   ORDER BY TimeStart DESC 

The slowest reports (in the last 28 days)

 SELECT TOP 10 C.Path, C.Name, Count(*) AS ReportsRun, 
   AVG((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS AverageProcessingTime, 
   Max((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS MaximumProcessingTime, 
   Min((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS MinimumProcessingTime 
 FROM ExecutionLog EL 
   INNER JOIN Catalog C ON EL.ReportID = C.ItemID 
 WHERE EL.TimeStart > DateAdd(d, -28, GetDate()) 
 GROUP BY C.Path,C.Name 
 ORDER BY AVG((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) DESC 

The most active users

 SELECT TOP 10 EL.UserName, Count(*) AS ReportsRun, 
   Count(DISTINCT C.[Path]) AS DistinctReportsRun 
  FROM ExecutionLog EL 
    INNER JOIN Catalog C ON EL.ReportID = C.ItemID 
  WHERE EL.TimeStart > DateAdd(d, -28, GetDate()) 
  GROUP BY EL.UserName 
  ORDER BY Count(*) DESC 

The most popular reports

 SELECT TOP 10 C.Path, C.Name, Count(*) AS ReportsRun, 
   AVG((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS AverageProcessingTime, 
   Max((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS MaximumProcessingTime, 
   Min((EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)) 
                                            AS MinimumProcessingTime 
  FROM ExecutionLog EL 
   INNER JOIN Catalog C ON EL.ReportID = C.ItemID 
  WHERE EL.TimeStart > DateAdd(d, -28, GetDate()) 
  GROUP BY C.Path, C.Name 
  ORDER BY Count(*) DESC 

Failed Reports

 SELECT TOP 20 C.Path, C.Name, EL.UserName, EL.Status, EL.TimeStart, 
   EL.[RowCount], EL.ByteCount, 
   (EL.TimeDataRetrieval + EL.TimeProcessing + EL.TimeRendering)/1000 AS TotalSeconds, 
   EL.TimeDataRetrieval, EL.TimeProcessing, EL.TimeRendering 
  FROM ExecutionLog EL 
   INNER JOIN Catalog C ON EL.ReportID = C.ItemID 
  WHERE EL.Status <> 'rsSuccess' 
  ORDER BY TimeStart DESC 

There are countless other variations, but this should be enough to get you going.

[Update: 05/09/2011 – Thanks to Jonathan [Twitter|Website] for pointing out a typo!]

Semi Additive Measures using SQL Server Standard

One of the most frustrating limitations of SQL Server 2005 Standard edition is that it doesn’t support semi additive measures in SSAS Analysis Services cubes. This post explains a work around that provides similar functionality without having to shell out for the Enterprise Edition.

What Are Semi Additive Measures?

Semi Additive measures are values that you can summarise across any related dimension except time.

For example, sales and costs are fully additive; if you sell 100 yesterday and 50 today then you’ve sold 150 in total. You can add them up over time.

Stock levels, however, are semi additive; if you had 100 in stock yesterday and 50 in stock today, your total stock is 50, not 150. It doesn’t make sense to add up the measures over time; you need to find the most recent value.

Why are they important?

Whether they are important to you or not depends entirely on what you are trying to do with your cube. If all of your required measures are fully additive then you really don’t need to worry about anything. However, as soon as you want to include measures such as stock levels, salaries, share prices or test results, they become pretty much essential.

Why are they not available in SQL Standard edition?

Microsoft has to have some way of persuading us to pay for the Enterprise edition!

How can I get this functionality within SQL Standard?

Firstly we need to understand what semi additive measures do. By far the most common aggregation used is the LastNonEmpty function, so we’ll stick with that as an example. This basically says that whatever time frame you are looking at, find the most recent value for each tuple. It really is a fantastically powerful function, which only becomes apparent when you don’t have it!

Let's say that you perform a stock take of different products on different days of the week. You might have a stock entry for product A on a Thursday and product B on a Friday. The LastNonEmpty function takes care of this for you: if you look at the stock level on Saturday it will give you the correct values for both A and B, even though you didn’t perform a physical stock take on the Saturday.

If you then add the time dimension into the query, SSAS will perform this function for each and every time attribute shown, and then aggregate the results up to any other dimensions used. i.e. Each month will then display the sum of all LastNonEmpty values for all products within that month, essentially the closing stock level for each and every month.

To replicate this in Standard Edition, we need to split the work up into two stages.
1) Create daily values in the data warehouse
2) Use MDX to select a single value from the time dimension.

Think of this as splitting up the LastNonEmpty function into two, ‘Last’ and ‘Non Empty’. The ‘Non Empty’ bit essentially fills in the blanks for us. If a value doesn’t exist for that particular day, it looks at the previous day’s value. The ‘Last’ bit says that if we are looking at months in our query, find the value for the last day in that month. The same goes for years, or indeed any other time attribute.

To code up a full LastNonEmpty function ourselves in MDX would be too slow to query as soon as you get a cube of any reasonable size. One of the key benefits of a cube is speed of querying data and we don’t want to impact this too much, therefore we move some of the donkey work into the ETL process populating the datawarehouse. This leaves the cube to perform a simple enough calculation so as to not cause any problems.
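The two-stage split can be sketched in a few lines of Python (hypothetical dates and stock levels): the 'Non Empty' stage forward-fills the sparse stock takes into one value per day, and the 'Last' stage then just picks the final day of whatever period is being viewed.

```python
from datetime import date, timedelta

def fill_daily(stock_takes, start, end):
    """'Non Empty' stage: forward-fill sparse stock takes to one value per day."""
    daily, level = {}, 0
    d = start
    while d <= end:
        level = stock_takes.get(d, level)  # no take today? keep yesterday's value
        daily[d] = level
        d += timedelta(days=1)
    return daily

# Two physical stock takes during January
stock_takes = {date(2008, 1, 3): 100, date(2008, 1, 20): 60}
daily = fill_daily(stock_takes, date(2008, 1, 1), date(2008, 1, 31))

# 'Last' stage: the month's value is the level on its final day
closing_for_january = daily[date(2008, 1, 31)]
print(closing_for_january)  # 60
```

In the real solution the forward-fill happens once, in the ETL that populates the fact table, and the 'Last' step is the cheap MDX calculation left for the cube.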

1) The ‘Non Empty’ bit

Let's say that we have a table called tblStock, containing the following data.

We need to expand this into a new fact table that contains one record per day per product.

There are a number of ways of doing this. I’ll describe one here that should suit most situations, although you may need to customise it to your own situation, and limit it to only updating changed/new records rather than re-populating the entire table; you get the idea. I should point out that you would be much better off populating this as part of your ETL process, but I’m showing this method as it’s more generic.

You need a list of all available dates relevant to your data warehouse or cube. If you already have a time dimension table then use this, otherwise create a SQL function that returns you a list of dates, such as this one:


   CREATE FUNCTION [dbo].[FN_ReturnAllDates](
         @DateFrom DateTime, @DateTo DateTime)
         RETURNS @List TABLE (Date DateTime)
     BEGIN
     DECLARE @tmpDate DateTime
     SET @tmpDate = @DateFrom
     WHILE @tmpDate<=@DateTo
       BEGIN
         INSERT INTO @List
           SELECT Convert(datetime,
                 Convert(Nvarchar,@tmpDate, 102), 102)
         SET @tmpDate = Dateadd(d,1,@tmpDate)
       END
     RETURN
   END

We need to cross join the date list with any other relevant dimensions, in this case product (the query below does this with a FULL OUTER JOIN ON 1=1, which has the same effect). This generates one record per product per date. We can then use a subquery for each combination to find the stock level appropriate for that day. (Yes, this will be a slow query to run – I did say you should do it in your ETL process!)


     INSERT INTO FactStock
        (StockTakeDate, ProductID, StockLevel)
     SELECT D.Date, P.ProductID,
           ISNULL((SELECT TOP 1 StockLevel
              FROM tblStock
              WHERE ProductID = P.ProductID
                 AND StockTakeDate<=D.Date
              ORDER BY StockTakeDate DESC),0)
        FROM FN_ReturnAllDates((SELECT Min(StockTakeDate)
                      FROM tblStock),GetDate()) D
           FULL OUTER JOIN
                    (SELECT ProductID FROM tblProduct) P ON 1=1

2) The ‘Last’ bit

Now that we have a large fact table consisting of one record per product/date, we can load this into the cube.

If you just add the StockLevel field as a measure and browse the results, you’ll quickly see that if you view it by month, each day’s stock level is added together, giving you a nonsensical value. To fix this we need to tell Analysis Services to only show one day’s value.

To do this we first need to find all descendants of the current time member at the day level, using something like this:

     DESCENDANTS([Time].[Year Month Day].CurrentMember,
       [Time].[Year Month Day].[Day])
       --Please modify to suit your own date hierarchy! 

We can then find the last member (giving us the closing stock level) by using TAIL():

     TAIL(DESCENDANTS([Time].[Year Month Day].CurrentMember,
          [Time].[Year Month Day].[Day]))

You could also use HEAD() if you wanted to find the opening stock instead of closing.

You should hide the actual StockLevel measure to prevent users from selecting it, I usually alias these with an underscore, as well as making them invisible, just for clarity. You can then add a calculated member with the following MDX:


     CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Close]
      AS SUM(TAIL(DESCENDANTS([Time].[Year Month Day].currentmember,
                    [Time].[Year Month Day].[Day])),
                    [Measures].[_Stock Level]),
     FORMAT_STRING = "#,#",
     VISIBLE = 1  ;

Or you can calculate the average stock over the selected period


     CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Avg]
      AS AVG(DESCENDANTS([Time].[Year Month Day].currentmember,
                   [Time].[Year Month Day].[Day]),
                   [Measures].[_Stock Level]),
     FORMAT_STRING = "#,#",
     VISIBLE = 1  ;

Or the maximum value


     CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Max]
      AS MAX(DESCENDANTS([Time].[Year Month Day].currentmember,
                   [Time].[Year Month Day].[Day]),
                   [Measures].[_Stock Level]),
     FORMAT_STRING = "#,#",
     VISIBLE = 1  ;

Or the minimum value


     CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Min]
      AS MIN(DESCENDANTS([Time].[Year Month Day].currentmember,
                  [Time].[Year Month Day].[Day]),
                  [Measures].[_Stock Level]),
     FORMAT_STRING = "#,#",
     VISIBLE = 1  ;

And there you have it, semi additive measures in SQL Server 2005 Standard Edition!

Even though this method works well, it is still not as good as having the Enterprise edition. The built-in functions of Enterprise will perform significantly better than this method, and you save having to create the large (potentially huge) fact table. This process will also only work on a single date hierarchy. If you have multiple hierarchies (e.g. fiscal and calendar) you will need to enhance this somewhat.

Extract Datasource and Query from Excel Pivot

Have you ever tried to reverse engineer an Excel pivot table? It’s not as easy as you would think! Whether you just want to find out the datasource details, or identify the query that was used, there is just no simple way of getting Excel to tell you.

The macro below will loop through every sheet in a workbook, and will document the datasources, SQL or MDX queries as well as the page, row, column and data fields.

To use it, add it into your macros, then select a starting cell where you want the report to be placed and run the macro. It’s pretty raw, and may need some tweaks to suit your requirements, but it should give you a good starting point. I use it on Excel 2003 MDX pivots from SQL Server Analysis Services 2005, but I presume it will work on other versions of Excel as well.


Public Sub PivotDetails()
   Dim ws As Worksheet
   Dim qt As QueryTable
   Dim pt As PivotTable
   Dim pc As PivotCache
   Dim pf As PivotField

   For Each ws In ActiveWorkbook.Sheets

      For Each qt In ws.QueryTables
        ActiveCell.Value = "Sheet"
        ActiveCell.Offset(0, 1).Value = ws.Name

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "Data Source"
        ActiveCell.Offset(0, 1).Value = qt.Connection

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "Query"
        ActiveCell.Offset(0, 1).Value = qt.CommandText
      Next qt

      ActiveCell.Offset(2, 0).Select

      For Each pt In ws.PivotTables

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "Pivot Table"
        ActiveCell.Offset(0, 1).Value = pt.Name

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "Connection"
        ActiveCell.Offset(0, 1).Value = pt.PivotCache.Connection

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "SQL"
        ActiveCell.Offset(0, 1).Value = pt.PivotCache.CommandText

        ActiveCell.Offset(1, 0).Select
        ActiveCell.Value = "MDX"
        ActiveCell.Offset(0, 1).Value = pt.MDX

        For Each pf In pt.PageFields
            ActiveCell.Offset(1, 0).Select
            ActiveCell.Value = "Page"
            ActiveCell.Offset(0, 1).Value = pf.Name
            ActiveCell.Offset(0, 2).Value = pf.CurrentPageName
        Next pf

        For Each pf In pt.ColumnFields
            ActiveCell.Offset(1, 0).Select
            ActiveCell.Value = "Column"
            ActiveCell.Offset(0, 1).Value = pf.Name
        Next pf

        For Each pf In pt.RowFields
            ActiveCell.Offset(1, 0).Select
            ActiveCell.Value = "Row"
            ActiveCell.Offset(0, 1).Value = pf.Name
        Next pf

        For Each pf In pt.DataFields
            ActiveCell.Offset(1, 0).Select
            ActiveCell.Value = "Data"
            ActiveCell.Offset(0, 1).Value = pf.Name
        Next pf

      Next pt
   Next ws
End Sub

Report Parameter Selection Memory

This post explains a method of enhancing Reporting Services reports to make them more user friendly. I have been quizzed many times by end users who want reports to remember the parameter selections so that next time they run the report they haven’t got to re-select the same parameters. This is especially useful if there are a lot of long parameter lists, and each user tends to run the same selection frequently.

This process is made possible by the User!UserID global variable within SSRS. It contains the domain and username of the user running the report (DOMAIN\USER).

There are a number of stages required to get this to work:
– New SQL table to store users’ selections
– New stored procedures to save and load selections
– Two new datasets within the report

Firstly create a database table to store the selections

   CREATE TABLE tblUserSetting(
      ID int IDENTITY(1,1) NOT NULL,
      UserID varchar(100) NULL,
      Report varchar(100) NULL,
      Setting varchar(100) NULL,
      Val varchar(max) NULL
   )

Then add a couple of stored procedures, one to save the default and one to retrieve it.

   CREATE PROC spu_SetUserSetting
      (
         @UserID varchar(100),
         @Report varchar(100),
         @Setting varchar(100),
         @Value varchar(max)
      )
   AS
   BEGIN
      --Firstly delete any old settings for this parameter
      DELETE FROM tblUserSetting
            WHERE UserID=@UserID
            AND Report=@Report
            AND Setting=@Setting

     --Now add the new setting
      INSERT INTO tblUserSetting
         (UserID, Report, Setting, Val)
      VALUES
         (@UserID, @Report, @Setting, @Value)

     --Return the new setting
      SELECT @Value AS Val
  END

  CREATE PROC spr_GetUserSetting
      (
         @UserID varchar(100),
         @Report varchar(100),
         @Setting varchar(100),
         @Default varchar(max)=NULL
      )
   AS
   BEGIN
      DECLARE @Val varchar(max)
      SET @Val = ISNULL((SELECT Val FROM tblUserSetting
            WHERE UserID=@UserID
            AND Report=@Report
            AND Setting=@Setting
        ),ISNULL(@Default,''))

     SELECT @Val AS Val
   END
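The two stored procedures follow a simple delete-then-insert / read-with-fallback pattern. Here's a hypothetical Python sketch of the same logic using the built-in sqlite3 module (table and names mirror the SQL above, but this is an illustration, not the report's actual plumbing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tblUserSetting (
    UserID TEXT, Report TEXT, Setting TEXT, Val TEXT)""")

def set_user_setting(user_id, report, setting, value):
    """Mirror of spu_SetUserSetting: delete any old value, insert the new one."""
    conn.execute(
        "DELETE FROM tblUserSetting WHERE UserID=? AND Report=? AND Setting=?",
        (user_id, report, setting))
    conn.execute("INSERT INTO tblUserSetting VALUES (?,?,?,?)",
                 (user_id, report, setting, value))

def get_user_setting(user_id, report, setting, default=""):
    """Mirror of spr_GetUserSetting: return the stored value or the default."""
    row = conn.execute(
        "SELECT Val FROM tblUserSetting WHERE UserID=? AND Report=? AND Setting=?",
        (user_id, report, setting)).fetchone()
    return row[0] if row else default

set_user_setting("DOMAIN\\alex", "SalesReport", "TestParam", "Widgets")
print(get_user_setting("DOMAIN\\alex", "SalesReport", "TestParam"))  # Widgets
```

The delete-then-insert keeps exactly one row per (user, report, setting), so repeated runs simply overwrite the previous selection.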

Now on to the report.

Let's say that we have a parameter called ‘TestParam’. The first thing we need to do is create a new dataset that will be used to retrieve the default value if one exists.

Name the new dataset dsTestParamDefault, the CommandType should be StoredProcedure, the query string should be spr_GetUserSetting.

Edit the Dataset and select the parameters tab. SSRS has already created report parameters for each of the proc’s parameters. We need to replace these with our own values.

   @UserID    =User!UserID
   @Report    =Globals!ReportName
   @Setting   ="TestParam"
   @Default   =Nothing

(if you want to you can specify a default value here for when the user runs the report for the first time)

Set the default value of your TestParam parameter to be the Val field of the dsTestParamDefault dataset.
The parameter will now load the default from the appropriate database record for that user.

Now we need to add the ability to save the users selection for next time.

Create a new dataset named dsTestParamSave, the CommandType should be StoredProcedure, the query string should be spu_SetUserSetting.

Edit the Dataset and select the parameters tab. SSRS has already created report parameters for each of the proc’s parameters. Again, we need to replace these with our own values.

   @UserID    =User!UserID
   @Report    =Globals!ReportName
   @Setting   ="TestParam"
   @Value     =Parameters!TestParam.Value

You will then need to delete the four report parameters SSRS created for you (UserID, Report, Setting and Default).

And that’s it! When the user first executes the report, it will ask them for the parameter value (or use the default if you specified one). However, the next time they run it, it will automatically remember what value they last used. As it is stored by username, it doesn’t matter whether they use the same PC or not.

The example above is for a simple single-valued parameter, however this technique can also be used for more complex parameters. The parameter could have a list of available values, and can even be multi-valued. If you want to use a multi-valued parameter, then you only need to make a couple of minor changes…

The spr_GetUserSetting will need changing to

   BEGIN
      DECLARE @Val varchar(max)
      SET @Val = ISNULL((SELECT Val FROM tblUserSetting
            WHERE UserID=@UserID
            AND Report=@Report
            AND Setting=@Setting
         ),ISNULL(@Default,''))

     SELECT Val FROM dbo.Split(',', @Val)
   END

For this to work you will need to find yourself a suitable Split function. (Search Google for ‘SQL Server Split’ to find hundreds that are suitable, or I’ve put one in the linked sample file.) Then change the @Value parameter of the dsTestParamSave dataset to

   =Join(Parameters!TestParam.Value, ",")

This will store a comma delimited value containing all values selected. If you use commas in your data then choose a suitable alternative as a delimiter.
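The multi-value round trip is just a join on save and a split on load. A small Python sketch (hypothetical values) of what the =Join() expression and the SQL Split function each do:

```python
def join_values(values, delimiter=","):
    """What =Join(Parameters!TestParam.Value, ",") does on save."""
    return delimiter.join(values)

def split_values(stored, delimiter=","):
    """What the SQL Split function does on load: one row per value."""
    return stored.split(delimiter) if stored else []

stored = join_values(["Widget", "Gadget", "Sprocket"])
print(split_values(stored))  # ['Widget', 'Gadget', 'Sprocket']
```

The round trip only works if the delimiter never appears inside a value, which is exactly why the text above suggests picking an alternative delimiter when your data contains commas.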

I have put some working examples of all of this in the link here along with copies of all relevant SQL script…
https://www.purplefrogsystems.com/Download/blog/ParameterMemory.zip

Hope this is of some use!

Alex

Open SSRS report in a new Window

There isn’t yet a built-in facility within Reporting Services to open a report link (i.e. a drillthrough report) in a new browser window or tab; however, it isn’t too tricky to get this to work using a bit of JavaScript.

javascript:void(window.open(Destination,'_blank'))

My preferred method is to wrap this up into a custom code function such as

Function OpenURL(ByVal URL As String) As String
   Return "javascript:void(window.open('" & URL & "','_blank'))"
End Function

In your report, you can then just set the navigation property to jump to a URL of a website:

=Code.OpenURL("https://www.purplefrogsystems.com")

Or to the path of the drillthrough report:

=Code.OpenURL("http://[Your Server]/ReportServer?/Folder/ReportName&rs:Command=Render")

If you have parameters that you need to pass in, you can add these into the URL either fixed or dynamic:

=Code.OpenURL("http://[Your Server]/ReportServer?/Folder/ReportName&rs:Command=Render&MyFirstParam=Value1&MySecondParam=" & Fields!MyField.Value)

Please note that this will not work in the BIDS development environment, it will only work once you have deployed the report to the Report Server.
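The helper is just string concatenation; a hypothetical Python sketch of the same construction makes the output easy to see:

```python
def open_url(url):
    """Build the javascript: navigation URL, as the VB.NET OpenURL does."""
    return "javascript:void(window.open('" + url + "','_blank'))"

print(open_url("https://www.purplefrogsystems.com"))
```

Note that the URL is dropped between single quotes inside the JavaScript call, so a URL containing a single quote would break the expression; report URLs normally won't.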

Excel Addin for Analysis Services

For any users of Analysis Services: if you haven’t already downloaded the Excel (2002/2003) add-in, you’re missing out.

It’s a free download from Microsoft which significantly expands Excel’s cube querying ability. Well recommended!

Get it here…

Convert MDX fields to SQL

A number of our customers have reporting systems that use both MDX and SQL, retrieving data from both OLAP and SQL Server databases. This creates the problem of converting an MDX field ([Dimension].[Hierarchy].&[Attribute]) into a SQL Server field value (Attribute). The following code is a Reporting Services custom code function that will strip off the MDX and leave you with the value.

    Public Function MDXParamToSQL(Parameter As String, All As String) As String

        Dim Val As String
        Val = Parameter

        If Val.Contains("[") Then
            If Val.ToLower().Contains("].[all]") Then
                Return All
            Else
                Val = Val.Substring(1, Val.LastIndexOf("]") - 1)
                Val = Val.Substring(Val.LastIndexOf("[") + 1)
                Return Val
            End If
        Else
            Return Val
        End If

    End Function

Let's say that you have a report using an MDX dataset. If you want to call a drillthrough report based on SQL Server, you will need to pass at least one attribute through as a parameter to filter the second report. If you add the code above to the custom code section, you can set the parameter value of the second report to

    =Code.MDXParamToSQL(Fields!MyField.Value, "%")

The second report will then just receive the member name, not the full MDX unique identifier.
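For experimenting with the parsing logic outside SSRS, here is a Python translation of the function above (the VB.NET version is the one to put in the report):

```python
def mdx_param_to_sql(parameter, all_value):
    """Strip an MDX unique name down to its member caption.

    Returns all_value (e.g. '%') for the [All] member, and passes
    plain non-MDX strings through unchanged.
    """
    val = parameter
    if "[" not in val:
        return val
    if "].[all]" in val.lower():
        return all_value
    val = val[1:val.rfind("]")]        # drop the outer [ ... ]
    return val[val.rfind("[") + 1:]    # keep the text after the last '['

print(mdx_param_to_sql("[Product].[Product].&[Widget]", "%"))  # Widget
```

The '%' default plays nicely with a SQL LIKE filter in the drillthrough report: when the [All] member is selected, the filter matches everything.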
