Cube perspectives and the story of the calculated measure returning null


To make a long story short

If you reference another measure in a calculated measure, you better remember to include it in your perspective as well. Otherwise you will always get a big fat nothing as a result.

The long story

At a certain client of ours we had implemented a unique count measure by counting the number of corresponding attribute members in a related dimension.

The fact table would be structured this way (PatientExpandedFact):

DateKey   PatientKey  Admitted  Waiting  …
20120101  10101010    1         0
20120102  10101010    1         1
20120103  10101010    1         0

And PatientKey would point to a dimension (PatientDim):

PatientKey  SocialSecId
10101010    0987362891
10101011    0987362891

Note that the same SocialSecId can appear under more than one PatientKey, which is exactly why the unique count is based on SocialSecId rather than on PatientKey.

The MDX for the calculated measure would be something like this:


Unique Patients :=
Iif(
    Count(
        NonEmpty(
            Existing( [PatientDim].[SocialSecId].Members ),
            [Measures].[Count PatientExpandedFact]
        )
    ) > 0,
    Count(
        NonEmpty(
            Existing( [PatientDim].[SocialSecId].Members ),
            [Measures].[Count PatientExpandedFact]
        )
    ) - 1,
    NULL
)

This approach for calculating unique counts has pretty good performance compared to the “normal” DistinctCount in SSAS and it worked perfectly – UNTIL!

UNTIL something like spring cleaning hit the cube. [Measures].[Count PatientExpandedFact] was visible in the cube and available in the perspective the users connected to, which it was not supposed to be. So we made the measure hidden and removed the check mark that made it available in the perspective. Oh boy, we should never have done that. Mistake #1 :-)

Our second mistake was not checking the measure right away. It was only after the deployment, a week or so later, that the client noticed that the measure returned nothing in Excel.

Only after several hours of tiresome experiments, bug tracking, googling and running plain MDX in SSMS did it occur to us that the problem was related to the perspective.

Data Explorer Preview – notes on importing csv-files


It is always worth keeping an eye on new releases in Microsoft’s Self-Service BI stack, and the Data Explorer Preview is no exception. So I downloaded and installed it and went on to perform some initial, simple tests. What better example than importing a csv-file? To my surprise, it turned out to be a not so trivial task.

Step 1: The csv-file

For this example I have prepared a small csv-file which lists the number of unique visits to a website, as tracked in Google Analytics.

# ----------------------------------------
# All website data
# Target group overview
# 20130212-20130314
# ----------------------------------------

Date,Visits
12/02/13,0
13/02/13,0
14/02/13,0

As one can see, it is a fairly trivial csv-file: 2 columns, 6 rows of header information and 1 row with column names, followed by the data rows.

Step 2: Import

I start the import procedure and let the GUI direct me. I mean, this has got to be a simple case to import :-)

Step 3: The error

After having supplied the wizard with the filename, the next step unfortunately presents the following screen:

What strikes me is that all my data values are represented by an ‘Error’. If I click on one of them (which is the same as right-clicking and choosing Drill Down), I get the following error:

Step 4: The solution

I then tried several steps to overcome this error:

1. Was it a semicolon/comma issue? I use a Danish regional setup on my laptop, which configures the default list separator to ; (semicolon). But no luck there; changing my setup to US/English made no difference in behaviour.

2. Was it necessary to strip the header info (the first 6 lines) from the file first? Perhaps, but why should I even have to? From all the examples I have seen on other blogs, I would expect that to be an operation I could perform in a later step, and hence it ought to be unnecessary.

3. Instead I needed to dig a little deeper, and the answer was hidden in the following link: Data Explorer Library Specification. This document describes in some detail the options you have for tweaking the functions you see as you build/edit the query. Basically I needed to supply the initial import (the Csv.Document function) with information about the csv-file, primarily how many columns to expect and which field separator to use:
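
In the formula language that boils down to a call along these lines. This is only a minimal sketch: the file path is a placeholder, and the 2 columns and comma delimiter simply match the sample file above.

let
    // Tell Csv.Document up front that the file has 2 columns and uses "," as the
    // field separator, instead of letting it guess from the first (comment) rows
    Source = Csv.Document(File.Contents("C:\Data\visits.csv"), 2, ",")
in
    Source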

The observant reader will notice that there are already several other steps listed on the right side. I have listed them below, and they do exactly what they are named after (a sketch of the full query follows the list):

RemoveFirstRows: removes the first rows of the file, in this case the comment lines at the top.

FirstRowAsHeader: promotes the first remaining row (Date,Visits) to column headers.

ChangedType: changes the columns to appropriate data types.
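
Put together, the whole query corresponds to an M expression roughly like the one below. Again this is only a sketch under the assumptions above (placeholder path, 6 header lines, 2 columns, comma delimiter); the step names mirror the ones shown in the query editor.

let
    Source = Csv.Document(File.Contents("C:\Data\visits.csv"), 2, ","),
    // RemoveFirstRows: skip the comment lines at the top of the file
    RemoveFirstRows = Table.Skip(Source, 6),
    // FirstRowAsHeader: promote the "Date,Visits" row to column names
    FirstRowAsHeader = Table.PromoteHeaders(RemoveFirstRows),
    // ChangedType: give the columns proper data types
    // (a culture such as "en-GB" could be passed as a third argument if the dates parse oddly)
    ChangedType = Table.TransformColumnTypes(FirstRowAsHeader, {{"Date", type date}, {"Visits", Int64.Type}})
in
    ChangedType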

Conclusion

So the Data Explorer wasn’t as intuitive (yet) as I would have expected. But when I look into the language specification and see what the language has to offer, I say: “Nice GUI, but I’ll click the button that opens the query editor directly.”

How to specify the schema for a writeback table

In SSAS it is possible to enable writeback for a partition, and with Office 2010+ it is in fact also a very useful feature. I use it quite a lot, and my clients are very happy with it.

BUT, have you ever found yourself being challenged by the DBA over the schema for the writeback table? The GUI in BIDS/SSDT does not allow you to specify the schema:

Writeback table options

One possible solution is to adopt the approach suggested by Nikki Amie-Fong in this post. The downside to this approach is that you have to fiddle with the XML definition of the cube, which by my standards should almost never be considered.

But I found another solution, which is so simple one can hardly believe it. Since the writeback table is created by the user processing the cube, you can simply give that user a default schema other than dbo. It is really simple: you just edit the properties for the relevant login or database user in SSMS:

SSMS user properties

The end result is that the writeback table is created in the user’s default schema:

The writeback table in SSMS