To understand how a filter can affect performance, first we need to understand what a filter is.
A filter is a temporary table that is created on demand, held in memory, and then used as a reference by the function being filtered. The rows of this temporary filter table are all DISTINCT, because for the purposes of filtering the engine doesn’t care whether a value occurs 1, 2, 3 or 250 times in the table, only whether it exists at all.
Let’s take a look at some DAX:
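A minimal sketch of the kind of measure being described, assuming the FactInternetSales and DimProduct tables referenced throughout this post (the measure name Sales Over 100 is just illustrative):

Sales Over 100 =
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),     // add up the sales values
    FILTER (
        DimProduct,                             // iterate the whole DimProduct table
        DimProduct[ListPrice] > 100             // keep rows where the list price is over 100
    )
)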

This measure is adding together all the values of Sales Amount, but only where the List Price of the product being sold is greater than 100.
What we are asking the DAX engine to do is (see the sketch after this list):
- Create a temporary table of DimProduct rows where the list price is > 100.
- Iterate through the FactInternetSales table and grab all the SalesAmount values from records whose related product appears in that temporary table.
- Add all the SalesAmount values together.
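To make that first step visible, the same logic can be sketched with the temporary table pulled into a variable (ExpensiveProducts is an illustrative name; passing the variable to CALCULATE applies it as a filter):

Sales Over 100 Spelled Out =
VAR ExpensiveProducts =
    FILTER ( DimProduct, DimProduct[ListPrice] > 100 )    // step 1: the temporary filter table
RETURN
    CALCULATE (
        SUM ( FactInternetSales[SalesAmount] ),            // steps 2 and 3: restrict the fact rows and sum them
        ExpensiveProducts
    )

This behaves the same as the measure above; it just makes the in-memory table explicit.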
While this measure will work and produce the expected results, it isn’t the most efficient way of doing things.
This FILTER function populates the temporary table with every column of every DimProduct row that meets the criterion DimProduct[ListPrice] > 100.
For our predicate of DimProduct[ListPrice] > 100 to work, we only need to check one column, List Price, yet we are pulling every column into memory unnecessarily. And because every column is included, ProductKey among them, every row will be distinct regardless of whether a specific list price has already appeared on another record.
This means the table will contain more columns and rows than we need. The wider and longer this table, the more memory we are taking up with data we don’t need to perform the filter.
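If you want to see the shape of this table for yourself, the FILTER expression can be run on its own as a query, for example from DAX Studio (the tooling choice is just a suggestion):

EVALUATE
FILTER ( DimProduct, DimProduct[ListPrice] > 100 )    // returns every DimProduct column for each matching row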
So, is there a better approach?
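A sketch of the improved measure, keeping the same assumed model and an illustrative measure name; the only change is that FILTER now iterates ALL of a single column instead of the whole DimProduct table:

Sales Over 100 Improved =
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),
    FILTER (
        ALL ( DimProduct[ListPrice] ),          // a one-column table of distinct list prices
        DimProduct[ListPrice] > 100
    )
)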

The use of ALL inside the FILTER above means we can specify the column(s) we want to filter by rather than the entire table, so the temporary table held in memory is now only one column wide.
The predicate still has everything it needs to function: a temporary table of distinct list prices with which to cross-reference DimProduct.
Remember, a temporary filter table always contains distinct rows. Now that we only have one column, duplicate list prices collapse into a single row each. This shorter, narrower table consumes far less memory than the wider, longer one created by the previous version of the measure.
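Running the narrower filter table as a standalone query (again, any tool that accepts DAX queries will do) makes the difference easy to see:

EVALUATE
FILTER ( ALL ( DimProduct[ListPrice] ), DimProduct[ListPrice] > 100 )    // one column, one row per distinct list price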
Both measures produce the same results.
A smaller temporary table means less memory usage and less to scan through, which in turn equates to more speed, as seen in the Performance Analyzer.
What if there are global filters in place?
The use of ALL means we are removing any existing filters on the column(s) it references from the measure’s filter context, so page filters or slicers on those columns will be ignored. If you want them to remain in effect, simply wrap the filter in a KEEPFILTERS function; this keeps the global filters in play while still pulling only one column into memory.
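A sketch of that variant, with the same assumed names; KEEPFILTERS simply wraps the FILTER argument:

Sales Over 100 Keep Filters =
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),
    KEEPFILTERS (
        FILTER (
            ALL ( DimProduct[ListPrice] ),      // still only one column held in memory
            DimProduct[ListPrice] > 100
        )
    )                                           // KEEPFILTERS intersects this with any existing page filters or slicers
)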

While this may seem trivial at first glance, the Performance Analyzer shows the speed increase between the two queries to be more than double. If your dataset contains very wide tables with millions of rows and your report pages contain a lot of visuals with lots of measures (CALCULATE being one of the most commonly used functions), this speed increase will scale to something meaningful and noticeable to your end users.
Very clear and easy to consume. Thank you for sharing this.
Hello, I’ve been wondering: is there a way to rewrite this DAX using KEEPFILTERS and get the same results? I can’t figure it out. Thanks.
CALCULATE(
    [source_wordcount_withoutdup],
    ALLSELECTED(),
    VALUES(dump[TM matchs %])
)
Hi Nicolas
Without an example it’s difficult to advise. The VALUES function you’re using pulls every DISTINCT value (plus a blank, if one exists) from the “dump[TM matchs %]” column into the filter, so every field in your table will show the same value.
I’m guessing you only want to see a result for each selected value in the correct row? If so, try:
CALCULATE(
    [source_wordcount_withoutdup],
    KEEPFILTERS(
        VALUES(dump[TM matchs %])
    )
)