
# MDX Calculated Member Spanning Multiple Date Dimensions

Most cubes contain a number of different date dimensions, whether role playing, distinct, or a combination of both: Entry Date, Posting Date and Accounting Period, for example. There may also be numerous hierarchies in each date dimension, such as calendar and fiscal calendar, leading to a relatively complicated array of dates to worry about when calculating semi-additive measures.

If we create a date related calculation (i.e. total to date) how do we ensure that this calculation works across all date dimensions?

Let’s assume we have a stock movement measure, where each record in the fact table is the change in stock (plus or minus). The current stock level is found by using a calculation totalling every record to date.

```
CREATE MEMBER CURRENTCUBE.[Measures].[Stock Level]
AS
SUM({NULL:[Date].[Calendar].CurrentMember}
, [Measures].[Stock Movement]
);
```

[Note that {NULL:xxx} just creates a set of every member up to and including the xxx member, i.e. everything to date]
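The logic is simply a running total: the stock level on any date is the cumulative sum of all movements up to and including that date. As a language-neutral sketch (Python, with made-up movement figures):

```python
# Running-total sketch of the Stock Level calculation: each movement is a
# signed change, and the level at day N is the cumulative sum of the
# movements for days 0..N (the {NULL:member} set in the MDX above).
movements = [100, -30, 20, -10]  # hypothetical daily stock movements

def stock_level(movements, day_index):
    """Sum every movement up to and including day_index."""
    return sum(movements[: day_index + 1])

levels = [stock_level(movements, i) for i in range(len(movements))]
print(levels)  # [100, 70, 90, 80]
```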

This works just fine if the user selects the [Date].[Calendar] hierarchy. But what if the user selects the [Date].[Fiscal] hierarchy, or the [Period] dimension? The calculation won’t work, as the MDX expression is only aware of the [Date].[Calendar] hierarchy.

The simple solution is to use the Aggregate function over all of the dimensions that the calculation needs to be aware of:

```
CREATE MEMBER CURRENTCUBE.[Measures].[Stock Level]
AS
AGGREGATE(
{NULL:[Date].[Fiscal].CurrentMember}
* {NULL:[Date].[Calendar].CurrentMember}
* {NULL:[Period].[Period].CurrentMember}
, [Measures].[Stock Movement]
);
```

The calculation will then use whichever date or time hierarchy is selected. It will even cope if multiple date dimensions are selected, say the calendar on axis 0 and the periods on axis 1; both axes will honor the aggregation as expected.

Frog-Blog out.

# MDX Sub select Vs WHERE clause

I’ve just read an interesting thread on the SQL Server Developer Center forum, regarding how to filter results. Specifically the difference in MDX between using a subselect

```
SELECT x on COLUMNS, y on ROWS
FROM ( SELECT z on COLUMNS FROM cube )
```

or using a where clause

```
SELECT x on COLUMNS, y on ROWS
FROM cube
WHERE z
```

In a simple query they produce the same results, but what is the actual difference? You can read the full thread here, but to summarise Darren Gosbell’s response…

Using the WHERE clause sets the query context and consequently the CurrentMember. This then enables functions such as YTD and PeriodsToDate to work.

Using a subselect can provide improved performance, but does not set the context.

Simples..!

# Scope Problems with MDX Calculated Members

We were recently investigating a problem for a client regarding the use of Scope within MDX calculated members. The code in question was similar to this:

```
CREATE MEMBER
CURRENTCUBE.[Measures].[Test Measure To Date]
AS "NA", VISIBLE = 1;
Scope([Date].[Calendar].MEMBERS);
[Measures].[Test Measure To Date] =
SUM(NULL:[Date].[Calendar].CurrentMember,
[Measures].[Test Measure]);
End Scope;
Scope([Date].[Fiscal].MEMBERS);
[Measures].[Test Measure To Date] =
SUM(NULL:[Date].[Fiscal].CurrentMember,
[Measures].[Test Measure]);
End Scope;
```

Essentially the warehouse was providing a transaction table with credits and debits; this calculated measure was supposed to provide the current balance by summing all transactions to date (the entire history, not just the current year or period). Scope is used to enable the calculation to work across two different date hierarchies, calendar and fiscal.

The problem was that even when the [Date].[Calendar] hierarchy was selected, the code still used the fiscal hierarchy to calculate the value.

This is caused by the fact that [Date].[Fiscal].MEMBERS includes the member [Date].[Fiscal].[All]. Consequently, even when the Fiscal hierarchy was not included in the query, its [All] member was effectively still within the scope. Thus the fiscal calculation was overriding the calendar calculation no matter what was selected.

The solution to this is to exclude [All] from the scope, which can be done by changing the code to the following:

```
CREATE MEMBER
CURRENTCUBE.[Measures].[Test Measure To Date]
AS "NA", VISIBLE = 1;
Scope(DESCENDANTS([Date].[Calendar],,AFTER));
[Measures].[Test Measure To Date] =
SUM(NULL:[Date].[Calendar].CurrentMember,
[Measures].[Test Measure]);
End Scope;
Scope(DESCENDANTS([Date].[Fiscal],,AFTER));
[Measures].[Test Measure To Date] =
SUM(NULL:[Date].[Fiscal].CurrentMember,
[Measures].[Test Measure]);
End Scope;
```

DESCENDANTS(xxx,,AFTER) is a simple way of identifying every descendant of a member below the member itself; when no member is specified it defaults to [All], giving us every member of the hierarchy except [All].

Problem solved, Frog-blog out.

# Excel 2007 and SSAS 2008 Error

I was working on a new SSAS 2008 cube today, and came across an error which Google was unable to help with. I thought I’d post the solution here to help anyone else who may encounter it.

The cube in question will primarily be accessed using Excel 2007, so I’d been dutifully testing it along the way to ensure all was well. And then, after a number of changes, the following error appeared when connecting to the cube from Excel to create a pivot table:

> Excel was unable to get necessary information about this cube. The cube might have been reorganized or changed on the server.
>
> Contact the OLAP cube administrator and, if necessary, set up a new data source to connect to the cube.

Connecting and querying the cube via SSMS or BIDS worked without error (hence I didn’t spot the error sooner!).

A quick Google revealed a number of posts regarding this error, but they all related to attributes containing invalid characters when accessed from Excel 2000, or problems with translations and locale settings in the .oqy file. Neither of these was the cause here, so I had to go back and recreate every change I had made, step by step, to track down the problem.

Well, I’m pleased to report that in the end it was nothing more than a simple spelling mistake in a named set. One of the dynamic named sets in the cube calculations referred to a specific member of a dimension, which was spelled slightly incorrectly. (Simplified example..)

```
CREATE DYNAMIC SET CURRENTCUBE.[Set1]
AS {[Dimension].[Attribute].[Value1],
[Dimension].[Attribute].[Value2WithTypo]
};
```

When querying calculated measures through MDX in SSMS, the MDX parser just ignores the problem and only uses the valid members; however, it appears that Excel 2007 is slightly more picky with its cubes.

Useful to know, and even more useful when used as a tool to double check for any errors in the MDX calculations.

# Dynamic MDX in Reporting Services

After a couple of days off work last week with SQL Bits III and SQL 2008 Unleashed, it’s back to the grindstone this week; however before I get onto the serious stuff I’d like to say thank you to the organisers of both events. Watching you on stage headbanging to Rockstar – legendary! (You can see for yourself here and here…).

Anyway, back to MDX…

This post explains how you can build a dynamic MDX query in Reporting Services, customised to the user’s requirements. This can often bring some quite major performance benefits.

Let’s say for example that you want to have a sales report grouped dynamically by either product, sales person, department or customer. Normally you would use a single static MDX query, and then add a dynamic grouping to the table in the report. This is fine until you try it on a large dataset. If you have just 50 products, 2 salesmen, 5 departments and 100 customers, your MDX needs to return 50,000 records, and the report then has to summarise all of this into the required level of grouping. This renders the pre-calculated aggregations in OLAP pretty much worthless.
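The row-count arithmetic is easy to verify: a static query returns the full cross product of all the attribute combinations, whereas grouping in the query itself returns only one row per member of the chosen attribute. A quick sketch (Python, using the figures from the example):

```python
# Rows returned by the static query: the cross product of all four attributes.
products, salesmen, departments, customers = 50, 2, 5, 100
static_rows = products * salesmen * departments * customers
print(static_rows)  # 50000

# Rows returned when the MDX groups by one attribute instead.
grouped_rows = {"Product": 50, "SalesPerson": 2, "Department": 5, "Customer": 100}
print(grouped_rows["Product"])  # 50
```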

To get around this, you can generate your MDX dynamically, so that the query returns the data already grouped into the correct level. You can also use this to add extra filters to the query, but only when they are required.

To start with, let’s see how we would do this normally with SQL, assuming we’re working from a denormalised table such as this:

Dynamic SQL is pretty simple, instead of having your dataset query as

```
SELECT SalesPerson,
Sum(Sales) AS Sales
FROM tblData
GROUP BY SalesPerson
```

you can add a report parameter called GroupBy,

and then use an expression as your dataset

```
="SELECT "
+ Parameters!GroupBy.Value + " AS GroupName,
Sum(Sales) AS Sales
FROM tblData
GROUP BY " + Parameters!GroupBy.Value
```

However MDX queries don’t let you use an expression in the dataset, so we have to work around that quite major limitation. To do this we make use of the OpenRowset command. You need to enable it in the surface area config tool, but once it’s enabled you can fire off an OpenRowset command to SQL Server, which will then pass it on to the cube. As the datasource connection is to SQL Server, not Analysis Services, it allows you to use an expression in the dataset.

```
="SELECT * FROM OpenRowset('MSOLAP',
'DATA SOURCE=localhost; Initial Catalog=SalesTest;',
'SELECT
{[Measures].[Sales]} ON 0,
NON EMPTY {[Product].[Product].[Product].Members} ON 1
FROM Sales') "
```

You can then expand this to make it dynamic depending on the value of a parameter. Before we do this though, there are a couple of items I should point out.
1) As the expression can get quite large, I find it much easier to create the query from a custom code function
2) As SSRS can’t interpret the expression at runtime, you need to define the fields in your dataset manually (more on this later)

To use a custom code function, just change the dataset expression to

`  =Code.CreateMDX(Parameters)`

We pass in the parameters collection so that we can use the parameters to determine the query. Create a function called CreateMDX() in the code block

You can then construct your MDX query within the code block.

```
Public Function CreateMDX(ByVal params as Parameters) as string

Dim mdx as string

mdx = "SELECT * FROM OpenRowset("
mdx += " 'MSOLAP', "
mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
mdx += " ' SELECT {[Measures].[Sales]} ON 0, "
mdx += "    NON EMPTY {[Product].[Product].[Product].Members} ON 1 "
mdx += "   FROM Sales ' "
mdx += ")"

return mdx

End Function
```

We’re almost there…
The next problem is that the field names returned by the query are less than helpful. To fix this we just need to alias the fields in the query. I usually take the opportunity of casting the numerical fields so that the report treats them as such, rather than as a string.

```
Public Function CreateMDX(ByVal params as Parameters) as string

Dim mdx as string

mdx = "SELECT "
mdx += "  ""[Product].[Product].[Product].[MEMBER_CAPTION]"" AS GroupName, "
mdx += "   Cast(""[Measures].[Sales]"" AS int) AS Sales "
mdx += " FROM OpenRowset("
mdx += " 'MSOLAP', "
mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
mdx += " ' SELECT {[Measures].[Sales]} ON 0, "
mdx += "    NON EMPTY {[Product].[Product].[Product].Members} ON 1 "
mdx += "   FROM Sales ' "
mdx += ")"

return mdx

End Function
```

(please do watch out for the quotes, double quotes and double double quotes, it can get a little confusing!)
We then need to tell the dataset which fields to expect from the query.

You can now use the dataset in your report.
However, the original point of this was to make the query dynamic… All we need to do to achieve this is expand the VB.Net code accordingly.

```
Public Function CreateMDX(ByVal params as Parameters) as string

Dim mdx as string

mdx = "SELECT "

If params("GroupBy").Value.ToString()="Product" Then
mdx += "  ""[Product].[Product].[Product]"
ElseIf params("GroupBy").Value.ToString()="SalesPerson" Then
mdx += "  ""[Sales Person].[Sales Person].[Sales Person]"
ElseIf params("GroupBy").Value.ToString()="Customer" Then
mdx += "  ""[Customer].[Customer].[Customer]"
End If

mdx += ".[MEMBER_CAPTION]"" AS GroupName, "

mdx += "   Cast(""[Measures].[Sales]"" AS int) AS Sales "
mdx += " FROM OpenRowset("
mdx += " 'MSOLAP', "
mdx += " 'DATA SOURCE=localhost; Initial Catalog=SalesTest;', "
mdx += " ' SELECT {[Measures].[Sales]} ON 0, "

If params("GroupBy").Value.ToString()="Product" Then
mdx += "  NON EMPTY {[Product].[Product].[Product]"
ElseIf params("GroupBy").Value.ToString()="SalesPerson" Then
mdx += "  NON EMPTY {[Sales Person].[Sales Person].[Sales Person]"
ElseIf params("GroupBy").Value.ToString()="Customer" Then
mdx += "  NON EMPTY {[Customer].[Customer].[Customer]"
End If

mdx += ".Members} ON 1 "

mdx += "   FROM Sales ' "
mdx += ")"

return mdx

End Function
```

It’s certainly not that simple, and debugging can cause a few headaches, but you can benefit from a massive performance gain in complex reports if you’re prepared to put the work in.

As always, please let me know how you get on with it, and shout if you have any queries…

Alex

# Mosha's MDX Studio

I almost feel embarrassed… I’ve been writing this blog for over 9 months now, and I have yet to mention Mosha; although in my defence, there is a link to his blog in the links section to the right.

As many/most of you may know, Mosha Pasumansky is one of the key brains behind designing the MDX language and Analysis Services – nuff said?

Over the last year he has been working on a pet project, MDX Studio. It’s an MDX query tool which any self-respecting OLAP developer should now be using on a regular basis. He has just released v0.4.6, which adds some really nifty features such as the dependency view.

If you’re just starting out with MDX, then the intellisense will be of massive benefit to you; even if you’re a seasoned pro, the performance monitoring is an essential tool on its own.

If you haven’t already tried it, have a look at Mosha’s blog, and get a copy – you won’t regret it.

And thanks for all your hard work Mosha – It’s much appreciated.

Alex

# Ranking results from MDX queries

This post explains how you can create a ranking of data from an OLAP MDX query. This will take the results from the query, and assign a ranking to each row. i.e. 1st, 2nd, 3rd best rows etc.

The first thing to do is to decide two things:
1) What measure do you want to rank by?
2) What data set are you returning?

Let’s assume we want to rank all stores by sales value. The basic non-ranked MDX query would be something like this

```
SELECT
{[Measures].[Sales Value]} ON 0,
{[Store].[Store Name].members} ON 1
FROM
[SalesCube]
```

So our measure is Sales Value, and our data set (granularity) is Store Name. We now want to create an ordered set of this data, ordered by Sales Value. We do this with the ORDER() function, which takes a set, a measure and either ascending or descending as its parameters. Note that by specifying the attribute twice we remove the [All] member from the set.

```
WITH SET [OrderedSet] AS
ORDER([Store].[Store Name].[Store Name].MEMBERS,
[Measures].[Sales Value],
BDESC)
SELECT
{[Measures].[Sales Value]} ON 0,
{[OrderedSet]} ON 1
FROM
[SalesCube]
```

The next stage is to apply a ranking to this ordered set. Helpfully, MDX provides us with a Rank() function, which takes a member and a set as its parameters. All it does is locate the member within the set and return its position. Because we have ordered the set, it will give us the ranking.

```
WITH SET [OrderedSet] AS
ORDER([Store].[Store Name].[Store Name].MEMBERS,
[Measures].[Sales Value],
BDESC)
MEMBER [Measures].[Rank] AS
RANK([Store].[Store Name].CurrentMember,
[OrderedSet])
SELECT
{[Measures].[Rank], [Measures].[Sales Value]} ON 0,
{[OrderedSet]} ON 1
FROM
[SalesCube]
```
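Conceptually the Order/Rank pair is just “sort, then find each member’s position”. A minimal sketch of the same pattern (Python, with made-up store figures):

```python
# The Order/Rank pattern: sort the set, then report each member's position.
# Hypothetical store sales values:
sales = {"Leeds": 500, "York": 900, "Hull": 700}

# ORDER(..., BDESC): order the stores by sales value, descending.
ordered = sorted(sales, key=lambda s: sales[s], reverse=True)

# RANK(member, set): the 1-based position of the member in the ordered set.
ranks = {store: ordered.index(store) + 1 for store in sales}
print(ranks)  # {'Leeds': 3, 'York': 1, 'Hull': 2}
```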

You can now easily expand this to only show you the top x records, by using the Head() function on the ordered set. In this example we’re only showing the top 10. You could also use Tail() to find the bottom x records. Other functions you can use include TopPercent(), TopSum(), BottomPercent() and BottomSum().

```
WITH SET [OrderedSet] AS
ORDER([Store].[Store Name].[Store Name].MEMBERS,
[Measures].[Sales Value],
BDESC)
MEMBER [Measures].[Rank] AS
RANK([Store].[Store Name].CurrentMember,
[OrderedSet])
SELECT
{[Measures].[Rank], [Measures].[Sales Value]} ON 0,
{HEAD([OrderedSet], 10)} ON 1
FROM
[SalesCube]
```

# Semi Additive Measures using SQL Server Standard

One of the most frustrating limitations of SQL Server 2005 Standard Edition is that it doesn’t support semi-additive measures in Analysis Services (SSAS) cubes. This post explains a workaround that provides similar functionality without having to shell out for the Enterprise Edition.

Semi Additive measures are values that you can summarise across any related dimension except time.

For example, Sales and costs are fully additive; if you sell 100 yesterday and 50 today then you’ve sold 150 in total. You can add them up over time.

Stock levels however are semi additive; if you had 100 in stock yesterday, and 50 in stock today, your total stock is 50, not 150. It doesn’t make sense to add up the measures over time; you need to find the most recent value.
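The difference between the two aggregation types can be sketched in a few lines (Python, using the hypothetical figures above): a fully additive measure sums over time, while a semi-additive one takes the last value.

```python
# Fully additive (sales) vs semi-additive (stock) over two days.
daily_sales = [100, 50]   # yesterday, today
daily_stock = [100, 50]   # yesterday, today

total_sales = sum(daily_sales)    # adding over time is correct for sales
current_stock = daily_stock[-1]   # the last value is correct for stock

print(total_sales, current_stock)  # 150 50
```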

Why are they important?

Whether they are important to you or not depends entirely on what you are trying to do with your cube. If all of your required measures are fully additive then you really don’t need to worry about anything. However, as soon as you want to include measures such as stock levels, salaries, share prices or test results, they become pretty much essential.

Why are they not available in SQL Standard edition?

Microsoft has to have some way of persuading us to pay for the Enterprise edition!

How can I get this functionality within SQL Standard?

Firstly we need to understand what semi additive measures do. By far the most common aggregation used is the LastNonEmpty function, so we’ll stick with that as an example. This basically says: whatever time frame you are looking at, find the most recent value for each tuple. It really is a fantastically powerful function, the value of which only becomes apparent when you don’t have it!

Let’s say that you perform a stock take of different products on different days of the week. You will have a stock entry for product A on a Thursday and product B on a Friday. The LastNonEmpty function takes care of this for you; if you look at the stock level on Saturday it will give you the correct values for both A and B, even though you didn’t perform a physical stock take on the Saturday.

If you then add the time dimension into the query, SSAS will perform this function for each and every time attribute shown, and then aggregate the results up to any other dimensions used. i.e. Each month will then display the sum of all LastNonEmpty values for all products within that month, essentially the closing stock level for each and every month.

To replicate this in Standard Edition, we need to split the work up into two stages.
1) Create daily values in the data warehouse
2) Use MDX to select a single value from the time dimension.

Think of this as splitting up the LastNonEmpty function into two, ‘Last’ and ‘Non Empty’. The ‘Non Empty’ bit essentially fills in the blanks for us. If a value doesn’t exist for that particular day, it looks at the previous day’s value. The ‘Last’ bit says that if we are looking at months in our query, find the value for the last day in that month. The same goes for years, or indeed any other time attribute.
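The two halves can be sketched as follows (Python, with hypothetical stock takes keyed by day number): the ‘Non Empty’ step forward-fills the missing days with the previous known value, and the ‘Last’ step picks the final day of whatever period is being viewed.

```python
# 'Non Empty': forward-fill a sparse series of stock takes so that
# every day of the period has a value.
stock_takes = {1: 100, 4: 80, 6: 90}   # day -> counted stock (sparse)
days = range(1, 8)                      # days 1..7 of the period

filled = {}
last_known = 0
for d in days:
    if d in stock_takes:
        last_known = stock_takes[d]     # a real stock take happened today
    filled[d] = last_known              # otherwise carry yesterday's value

# 'Last': the closing level for the whole period is the final day's value.
closing = filled[7]
print(filled)   # {1: 100, 2: 100, 3: 100, 4: 80, 5: 80, 6: 90, 7: 90}
print(closing)  # 90
```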

To code up a full LastNonEmpty function ourselves in MDX would be too slow to query as soon as you get a cube of any reasonable size. One of the key benefits of a cube is speed of querying data and we don’t want to impact this too much, therefore we move some of the donkey work into the ETL process populating the datawarehouse. This leaves the cube to perform a simple enough calculation so as to not cause any problems.

1) The ‘Non Empty’ bit

Let’s say we have a table called tblStock, containing the following data:

We need to expand this into a new fact table that contains one record per day per product.

There are a number of ways of doing this. I’ll describe one here that should suit most situations, although you may need to customise it to your own environment, and limit it to only updating changed/new records rather than re-populating the entire table, but you get the idea. I should point out that you would be much better off populating this as part of your ETL process, but I’m showing this method as it’s more generic.

You need a list of all available dates relevant to your data warehouse or cube. If you already have a time dimension table then use this, otherwise create a SQL function that returns you a list of dates, such as this one:

```
CREATE FUNCTION [dbo].[FN_ReturnAllDates](
@DateFrom DateTime, @DateTo DateTime)
RETURNS @List TABLE (Date DateTime)
AS
BEGIN
DECLARE @tmpDate DateTime
SET @tmpDate = @DateFrom
WHILE @tmpDate<=@DateTo
BEGIN
INSERT INTO @List
SELECT Convert(datetime,
Convert(Nvarchar,@tmpDate, 102), 102)
-- Move on to the next day, otherwise the loop never ends
SET @tmpDate = DateAdd(d, 1, @tmpDate)
END
RETURN
END
```

We need to cross join the date list with any other relevant dimensions, in this case product. This will generate one record per product per date. We can then perform a sub query for each combination to find the stock level appropriate for that day. (Yes, this will be a slow query to run; I did say you should do it in your ETL process!)

```
-- T-SQL won't accept a subquery as a function argument,
-- so fetch the earliest stock take date into a variable first
DECLARE @DateFrom DateTime
SELECT @DateFrom = Min(StockTakeDate) FROM tblStock

INSERT INTO FactStock
(StockTakeDate, ProductID, StockLevel)
SELECT D.Date, P.ProductID,
ISNULL((SELECT TOP 1 StockLevel
FROM tblStock
WHERE ProductID = P.ProductID
AND StockTakeDate<=D.Date
ORDER BY StockTakeDate DESC),0)
FROM dbo.FN_ReturnAllDates(@DateFrom, GetDate()) D
CROSS JOIN tblProduct P
```

2) The ‘Last’ bit

Now that we have a large fact table consisting of one record per product/date, we can load this into the cube.

If you just add the StockLevel field as a measure and browse the results, you’ll quickly see that if you view it by month, each day’s stock level is added together, giving you a nonsensical value. To fix this we need to tell Analysis Services to only show one day’s value.

To do this we first need to find all descendants of the current time member at the day level, using something like this:

```
DESCENDANTS([Time].[Year Month Day].CurrentMember,
[Time].[Year Month Day].[Day])
```

We can then find the last member (giving us the closing stock level) by using TAIL():

```
TAIL(DESCENDANTS([Time].[Year Month Day].CurrentMember,
[Time].[Year Month Day].[Day]))
```

You could also use HEAD() if you wanted to find the opening stock instead of closing.

You should hide the actual StockLevel measure to prevent users from selecting it, I usually alias these with an underscore, as well as making them invisible, just for clarity. You can then add a calculated member with the following MDX:

```
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Close]
AS SUM(TAIL(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day])),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1  ;
```

Or you can calculate the average stock over the selected period

```
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Avg]
AS AVG(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1  ;
```

Or the maximum value

```
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Max]
AS MAX(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1  ;
```

Or the minimum value

```
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Min]
AS MIN(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1  ;
```

And there you have it, semi additive measures in SQL Server 2005 Standard Edition!

Even though this method does work well, it is still not as good as having the Enterprise edition. The built in functions of Enterprise will perform significantly better than this method, and it saves having to create the large (potentially huge) fact table. This process will also only work on a single date hierarchy. If you have multiple hierarchies (i.e. fiscal and calendar) you will need to enhance this somewhat.

# Extract Datasource and Query from Excel Pivot

Have you ever tried to reverse engineer an Excel pivot table? It’s not as easy as you would think! Whether you just want to find out the datasource details, or identify the query that was used, there is just no simple way of getting Excel to tell you.

The macro below will loop through every sheet in a workbook, and will document the datasources, SQL or MDX queries as well as the page, row, column and data fields.

To use it, add it into your macros, then select a starting cell where you want the report to be placed and run the macro. It’s pretty raw, and may need some tweaks to suit your requirements, but it should give you a good starting point. I use it on Excel 2003 MDX pivots from SQL Server Analysis Services 2005, but I presume it will work on other versions of Excel as well.

```
Public Sub PivotDetails()
Dim ws As Worksheet
Dim qt As QueryTable
Dim pt As PivotTable
Dim pc As PivotCache
Dim pf As PivotField

For Each ws In ActiveWorkbook.Sheets

For Each qt In ws.QueryTables
ActiveCell.Value = "Sheet"
ActiveCell.Offset(0, 1).Value = ws.Name

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Data Source"
ActiveCell.Offset(0, 1).Value = qt.Connection

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Query"
ActiveCell.Offset(0, 1).Value = qt.CommandText
Next qt

ActiveCell.Offset(2, 0).Select

For Each pt In ws.PivotTables

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Pivot Table"
ActiveCell.Offset(0, 1).Value = pt.Name

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Connection"
ActiveCell.Offset(0, 1).Value = pt.PivotCache.Connection

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "SQL"
ActiveCell.Offset(0, 1).Value = pt.PivotCache.CommandText

ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "MDX"
ActiveCell.Offset(0, 1).Value = pt.MDX

For Each pf In pt.PageFields
ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Page"
ActiveCell.Offset(0, 1).Value = pf.Name
ActiveCell.Offset(0, 2).Value = pf.CurrentPageName
Next pf

For Each pf In pt.ColumnFields
ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Column"
ActiveCell.Offset(0, 1).Value = pf.Name
Next pf

For Each pf In pt.RowFields
ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Row"
ActiveCell.Offset(0, 1).Value = pf.Name
Next pf

For Each pf In pt.DataFields
ActiveCell.Offset(1, 0).Select
ActiveCell.Value = "Data"
ActiveCell.Offset(0, 1).Value = pf.Name
Next pf

Next pt
Next ws
End Sub
```

# Convert MDX fields to SQL

A number of our customers have reporting systems that use both MDX and SQL, retrieving data from both OLAP and SQL Server databases. This creates the problem of converting an MDX field ([Dimension].[Hierarchy].&[Attribute]) into a SQL Server field value (Attribute). The following code is a Reporting Services custom code function that will strip off the MDX and leave you with the value.

```
Public Function MDXParamToSQL(Parameter As String, All As String) As String

Dim Val As String
Val = Parameter

If Val.Contains("[") Then
If Val.ToLower().Contains("].[all]") Then
Return All
Else
Val = Val.Substring(1, Val.LastIndexOf("]") - 1)
Val = Val.Substring(Val.LastIndexOf("[") + 1)
Return Val
End If
Else
Return Val
End If

End Function
```

Let’s say that you have a report using an MDX dataset. If you want to call a drillthrough report based on SQL Server, you will need to pass at least one attribute through as a parameter to filter the second report. If you add the code above to the custom code section, you can set the parameter value of the second report to

`=Code.MDXParamToSQL(Fields!MyField.Value, "%")`

The second report will then just receive the member name, not the full MDX unique identifier.
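The string logic is easy to test in isolation. Here is a sketch of the equivalent parsing in Python (hypothetical member names; the rules mirror the VB function above):

```python
def mdx_param_to_sql(parameter: str, all_value: str) -> str:
    """Strip an MDX unique name down to the bare member name,
    mirroring the MDXParamToSQL custom code above."""
    if "[" not in parameter:
        return parameter                  # already a plain value
    if "].[all]" in parameter.lower():
        return all_value                  # [All] member -> wildcard value
    # Drop everything up to the last '[' and the trailing ']'.
    val = parameter[: parameter.rindex("]")]
    return val[val.rindex("[") + 1 :]

print(mdx_param_to_sql("[Product].[Category].&[Bikes]", "%"))  # Bikes
print(mdx_param_to_sql("[Product].[Category].[All]", "%"))     # %
```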
