Monday, August 31, 2015

MDX Monday - Running MDX in Smart View

I've been a bit rough on Oracle for their MDX output -- I think with good reason. Trying to use it in a production batch environment is challenging. But there is some good news. Most of us have a tool at our disposal that gives beautiful MDX output: Smart View!

Simply right click on a connected database in the Connections Panel and select Execute MDX.

Then paste your query into the dialog box and click Execute. There's no need to include the database name specification or the semi-colon.
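As a minimal sketch, a complete query there can be as short as this -- note there's no FROM [Sample.Basic] specification and no trailing semi-colon:

SELECT
{[Year].Children} ON COLUMNS,
{[Market].Children} ON ROWS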

Here are the results. I got a neatly formatted Excel sheet and Smart View even put the where clause members into my POV! Very nice.

What happens when I use the calculated columns query from this post?

It works! Unfortunately you lose the calculated fields when you refresh the sheet.

It would be great if we could print member properties too. If we could do that, then we could do things like export dimensions in parent-child format directly into Excel. Let's try it. (This is the first query in the post with a PROPERTY_EXPR addition that should print each Market member's parent.)
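The query looked something like this sketch (the PROPERTY_EXPR argument order here is from the Tech Ref as best I recall it, so double-check it there before relying on it):

SELECT
{[Sales]} ON COLUMNS,
[Market].Levels(0).Members
DIMENSION PROPERTIES
PROPERTY_EXPR([Market], MEMBER_NAME, [Market].CurrentMember.Parent, "Parent Name")
ON ROWS
FROM Sample.Basic;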

<sad face> Smart View appears to ignore the properties returned in an MDX query. We don't get an error but we also don't get what we want. Hopefully there will be a future enhancement to Smart View that will add this functionality.

Friday, August 28, 2015

Essbase Crashes

Welcome to laid back Friday. I want to talk a little bit about Essbase crashes today. Since I started working with Essbase in 1998 I've experienced many Essbase agent and application crashes along with a fair share of agent hangs. I've spent countless days troubleshooting the problems, searching logs, trying to figure out which user was the last to perform an action in an application with the hope that I could call them and find out what they were doing just before the problem occurred. The burden of proof with support, whether it be Arbor, Hyperion or Oracle, has always been on the customer. I'd have to figure out exactly what to do to replicate the problem -- only then could I feel confident that they'd call the issue a bug and address the problem. Better yet, if I could reproduce the problem in Sample.Basic, then I wouldn't have to send them my outline or data.

Flash forward a decade and I still see seemingly random crashes and hangs. I still search through the logs, but mostly there's nothing I can do -- maybe play with some thread settings or increase a cache here or there. I feel a bit powerless, and I'm sure many of you are in the same boat. I don't have much advice to share other than to keep doing your due diligence: check the logs, talk to users when applicable and open tickets with Oracle. I will say that the crashes and hangs are few and far between compared to the Essbase 6 and prior days.

Troubleshooting Guide

This brings me to a recently released Oracle document that guides you through what to do when you do experience a crash. It's available HERE on the Oracle Support website, which does require a login.

One section in the document is about the rda.sh/rda.cmd script (Oracle Remote Diagnostics Agent) which will zip up a bunch of your server information for easy sending to Oracle Support.

Process Monitor

In most of my Essbase server environments I've had a script that runs regularly throughout the day which will check if the Essbase server is responsive. If not, it will send me an email (or page in the old days). I'd recommend having some type of process monitor running as it's better to find out about an issue and start working on it before the users start calling.
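The probe itself can be as simple as a trivial MaxL script kicked off by the scheduler -- a sketch (the user, password and host come in as essmsh arguments; the wrapper alerts if essmsh fails or the spooled log shows errors):

/* ping.msh -- invoked as: essmsh ping.msh user password host */
login $1 identified by $2 on $3;
display database Sample.Basic;
logout;
exit;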

Thursday, August 27, 2015

Exporting data using MDX Part 3 -- More Totals

A question came up on Network54 (the greatest forum in the history of forums) the other day about how to create an MDX report where the column value is from the parent of a row member. This isn’t easy to do in the Excel Add-In (we’d probably have to create a calculated Measure in the outline) but MDX makes it relatively simple. Once again I’ll use our beloved Sample Basic database.

Using MDX we can create a member that crosses dimensions. In this case we will create a Measure whose value is Sales for a relative of a Market member. I’m going to create a report that shows Sales for the States in one column and the Sales for that State’s Market (i.e. East, Central, etc...) in the column next to it. Just for fun I’ll add a calculated column showing what percentage of Sales each State accounts for. The key here is to create a tuple with the member from the Measures dimension you want along with the reference to the row member.

1:  WITH  
2:  SET [_States] AS '[Market].Levels(0).Members'  
3:  MEMBER [Measures].[_MarketSales] AS '([Sales],[Market].CurrentMember.Parent)'  
4:  MEMBER [Measures].[_MarketSales%] AS '([Sales],[Market].CurrentMember)/([Sales],[Market].CurrentMember.Parent)'  
5:  SELECT  
6:  {[Sales],[_MarketSales],[_MarketSales%]} on columns,  
7:  NON EMPTY  
8:  [_States]  
9:  on rows  
10:  from [Sample.Basic]  
11:  where ([Year].[Qtr1],[Scenario].[Actual],[Product]);  

Line 2 creates the _States SET which will give us the list of states.

Line 3 creates the calculated member called _MarketSales which is defined by the tuple crossing Sales and whatever the Parent is of the Market member on the row -- in this case it will be a state.

Line 4 creates our calculated member that calculates the percentage of sales a state contributes to its market.

Line 6 defines our columns which are made up of the members we just created along with Sales.

Line 8 references the SET we created on Line 2 which will make up our row selection.


And there we have our report. New York has sales of 7,705. New York's Market (East) has sales of 20,621. New York makes up 37.36% of the sales for the East.


Wednesday, August 26, 2015

Overwriting Data with #MISSING in ASO Calcs

The way ASO Custom Calculations and Allocations update data in an ASO database is through the loading of slices. This is something to keep in mind if you perform a lot of them -- eventually you should merge the slices together if retrieval performance becomes an issue (merging is a one-line MaxL statement; see the note at the end of this post). The other thing to keep in mind is that you can't use a Custom Calculation or Allocation to copy a #MISSING cell over a cell with data. Why is this important?

Suppose we have a database where we copy Actual data over Forecast data after each month is closed. Suppose further that our company exits the Sasparilla business in Massachusetts in May, so we end up having no sales going forward. In this case, if I copy May Actual data to Forecast, it won't do what I want, which is to clear out the 15 in the May Forecast cell.


1: Execute Allocation Process on Database SampA.Basic with
2: POV "{([Sales],[May],[Massachusetts],[Sasparilla])}"
3: AMOUNT "([Actual])"
4: TARGET ""
5: RANGE "{[Forecast]}"
6: SPREAD;

The workaround I came up with is to clear out the POV region before the Calculation/Allocation is run.

1:  Alter Database SampA.Basic clear data in region "{[May],[Forecast]}";  

So if you have a Custom Calculation or Allocation where there is a potential that you’ll need to copy #MISSING values over data, make sure you add a clear statement ahead of it.

Note: I did log a Service Request with Oracle on this subject and they acknowledged this as a bug (#19025751).
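As for merging the slices mentioned at the top of this post, that part at least is a single MaxL statement:

alter database SampA.Basic merge all data;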

Tuesday, August 25, 2015

Exporting data using MDX Part 2 -- Totals

I feel like adoption of MDX within Essbase is limited because 1) the output format is messy and 2) the documentation is challenging to follow and understand. I couldn't find any examples in the Tech Ref for adding column totals to a query (let me know if you find any) so I'll try to address that here.

The following query selects the children of Qtr1 in Sample Basic and totals them in the final column. The difference here is that we're using MDX to do the aggregation instead of Essbase. It might not make sense to do that when you already have Essbase giving you the total but there could be a case where you need a custom rollup not available in your cube.

SET

1:  WITH  
2:  SET [_TimeSet] as '[Qtr1].CHILDREN'  
3:  MEMBER [Year].[_TimeTotal] AS 'AGGREGATE([_TimeSet])'  
4:  SELECT  
5:  {[_TimeSet],[Year].[_TimeTotal]} on COLUMNS,  
6:  {[100].Children} ON ROWS  
7:  FROM Sample.Basic;  

Line 2 creates a SET called _TimeSet. I start it with an underscore so I can easily tell that it's not a member within the Sample.Basic outline.

Line 3 then creates a calculated member called _TimeTotal which is an aggregate of _TimeSet.

Line 5 places the contents of _TimeSet and _TimeTotal onto the columns.

The beauty of the SET command is that if I ever need to change the columns in this extract, all I need to do is change line 2. The other lines automatically inherit the changes. It might not seem like a big deal on a seven line MDX statement, but just imagine one that's 50 lines long.


SUM()

While reading the documentation of the Sum() function I came upon what I think is an awful example. It looks like the developer wanted to print a report with the major markets and a sum of them at the bottom. They use the UDA to pick them out in the report, yet take the time to write out the individual states that are major markets. Why use UDAs if you're going to type out all of the members with that UDA? You'd have to update your report if any of the UDAs changed. <wagging finger> Also, the total doesn't add up all of the lines, only the lines that are states. Let's see if we can improve on this report.

We need the Level 0 members that are major markets. We can create that set with the following statement:

Intersect(UDA([Market], "Major Market"),[Market].Levels(0).Members)

The Intersect() function selects the, ahem, intersection of two lists -- what they have in common. In this case the list of members with a "Major Market" UDA and the list of members that are at Level 0.

We can either choose to keep the Market rollups:

1:  WITH MEMBER [Market].[Sum Expense for Main States]  
2:  AS  
3:  'Sum (Intersect(UDA([Market], "Major Market"),[Market].Levels(0).Members))'  
4:  SELECT  
5:  {[Measures].[Total Expenses]}  
6:  ON COLUMNS,  
7:  {UDA([Market], "Major Market"),  
8:  [Market].[Sum Expense for Main States]}  
9:  ON ROWS  
10:  FROM  
11:  Sample.Basic  
12:  WHERE ([Scenario].[Actual])  

Lines 1-3 define a member that will sum the intersection of those members with a Major Market UDA and those members at Level 0.

Line 7 lists the members with a Major Market UDA on the rows and line 8 prints the Sum that we created in lines 1-3.

Or we can print the report with only the states:

1:  WITH MEMBER [Market].[Sum Expense for Main States]  
2:  AS  
3:  'Sum (Intersect(UDA([Market], "Major Market"),[Market].Levels(0).Members))'  
4:  SELECT  
5:  {[Measures].[Total Expenses]}  
6:  ON COLUMNS,  
7:  {Intersect(UDA([Market], "Major Market"),[Market].Levels(0).Members),  
8:  [Market].[Sum Expense for Main States]}  
9:  ON ROWS  
10:  FROM  
11:  Sample.Basic  
12:  WHERE ([Scenario].[Actual])  

The difference between this report and the last is that we now omit the members with the Major Market UDA that are above Level 0. Now the column of figures actually adds up to the sum at the bottom of the report.


If we need to use the same set multiple times within the same query we can write it using the SET command:

1:  WITH  
2:  SET [_MajorMarketSet] as 'Intersect(UDA([Market], "Major Market"),[Market].Levels(0).Members)'  
3:  MEMBER [Market].[Sum Expense for Main States]  
4:  AS  
5:  'Sum ([_MajorMarketSet])'  
6:  SELECT  
7:  {[Measures].[Total Expenses]}  
8:  ON COLUMNS,  
9:  {[_MajorMarketSet],  
10:  [Market].[Sum Expense for Main States]}  
11:  ON ROWS  
12:  FROM  
13:  Sample.Basic  
14:  WHERE ([Scenario].[Actual])  

This is essentially the same report as the prior one but now we've employed the SET command. If we ever need to update the list of members in this extract, we just update line 2. Line 5 inherits the change as does line 9. In my opinion, a much cleaner way of writing the MDX.

Monday, August 24, 2015

ASO Procedural Calcs Part 1

I've shown before that using ASO Allocations is generally preferable to using ASO Procedural Calcs. But when are you forced into using a Procedural Calc?

Writing a BSO calc script to subtract Forecasted Cost of Goods Sold from Actual Sales is relatively simple.


  • "Margin" = "Actual"->"Sales" - "Forecast"->"COGS";


But this is where we hit a snag with ASO Allocations.

Within an ASO Allocation statement we can use the Amount and AmountContext fields. We can use those to set the source of the data we're going to allocate. The Amount field allows us to do things like this:


  • Amount "([Sales] - [COGS])"


as well as some simple arithmetic. We can use the AmountContext field to fine-tune the Amount field. We can add something like this:


  • AmountContext "([Actual])"


or like this:


  • AmountContext "([Actual],[Jan])"


What we cannot do is cross dimensions within the Amount field like this:


  • Amount "([Sales],[Actual] - [COGS],[Forecast])"

>>ERROR - 1200679 - The amount expression is invalid. Only arithmetic operations and members from a single dimension are allowed.

This is the point at which we need to switch over to using a Procedural Calc. Here's an example:

1:  Execute Calculation on Database Sample.Basic With  
2:  LOCAL SCRIPT FILE "simple_calc.csc"  
3:  POV "CrossJoin({[Jan]:[Dec]},  
4:  CrossJoin({[Forecast]},  
5:  CrossJoin({Descendants([Total Markets],Levels([Market],0))},  
6:  {Descendants([Total Products],Levels([Product],0))})))"  
7:  SOURCEREGION "CrossJoin({[System Margin],[Sales],[COGS]},  
8:  CrossJoin({[Jan]:[Dec]}, {[Actual],[Forecast]}))";  
9:    
10:  /* Contents of simple_calc.csc */  
11:  ([System Margin]) := (([Actual],[Sales]) - ([Forecast],[COGS]));  
12:    

If you're translating many BSO calc scripts to ASO calcs then chances are good that you'll run into one that forces you to use a Procedural Calc. Why the developers chose to implement Procedural Calcs and Allocations in such different ways is beyond me, but my hope is that they will extend the functionality of both.

Friday, August 21, 2015

Call For Ideas

My goal for this blog is to post at least once a week. My stretch goal is to post every business day. Generating that much content won't always be easy, so I'm asking for your help. I'd love to hear ideas along the following lines:

I have the choice of doing something two different ways with Essbase -- which is faster?

How does x functionality work in the latest release of Essbase? This could be something brand new or something that's existed for a while.

Any other Essbase related questions.

Thursday, August 20, 2015

The Plunge Into Hybrid Part 2

Yesterday I thought I had good results from my new Hybrid cube. I was running a MaxL script comparing ASO, BSO and Hybrid. Hybrid won in a landslide. What I neglected to do was connect via Smart View and do some ad-hoc analysis. I tried that today. The results... well, um, not so good.

My expectations of an Essbase cube are extremely high. If I'm not getting sub-second response on smallish queries I go back to the drawing board. My initial query on the Hybrid cube was 2.5 seconds. The same query in BSO registers 0 seconds in the log. While 2.5 seconds might not seem like a long time, it is to someone who expects instantaneous response. Also, it will spell disaster for larger queries. So it's back to the drawing board.

First I tried messing with some cache settings. That didn't help at all. Next I tried some unorthodox dense/sparse settings -- also didn't help. So what's next? There are a few possibilities:

  1. This is one of those *fringe* cases that requires ASO or pure BSO.
  2. I need to buy better hardware.
  3. We just need to accept slightly slower retrievals from Hybrid (at least for the time being).
  4. I'm missing some tuning trick.


The sum total of Hybrid documentation available from Oracle is about two pages in length. That's not much to go on. So what have people seen? Is Hybrid ready for prime time and if so, what are the tuning tricks?

Wednesday, August 19, 2015

The Plunge Into Hybrid Part 1

Hybrid Essbase has been around for over a year now but from what I can tell it hasn't yet been widely adopted. It is my hope to test Hybrid extensively over the next several months to see how promising (or not) this new architecture is. As a brief refresher, Hybrid technology combines the calculation functionality of BSO with the fast on-the-fly aggregations of ASO.

My first task was to create essentially the same database in ASO, BSO and Hybrid. I had a generic financial type model I could use in both ASO and BSO. In order to create the Hybrid model, I simply downloaded Celvin Kattokaran's utility that changes upper-level sparse members to dynamic calc. I made a copy of the BSO cube, ran it through the utility, added the following line to my essbase.cfg file (with a restart of Essbase just for good measure)

ASODYNAMICAGGINBSO HY_APP FULL

and I was off to the races. Thanks, Celvin!

For my first test, I created an MDX query that pulled from several different levels and ran it on all three databases. Very simple, just a smoke test to see if there's fire. The results in a moment.
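The query itself ran against my financial model, but against Sample.Basic a query of the same multi-level shape would look something like this sketch:

SELECT
{[Year].Children} ON COLUMNS,
NON EMPTY
CrossJoin([Market].Members, [Product].Children) ON ROWS
FROM Sample.Basic
WHERE ([Actual]);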

At Kscope15 I felt like Hybrid was shoved down my throat -- to the point that I felt bad that I hadn't already converted all of my production cubes, and I became afraid that my precious ASO would be going away in the next two weeks. I was reassured by Essbase product management that ASO is not going away, at least not soon, and that there will likely always be some *fringe* cases for ASO as well as for pure BSO. Phew...

Now the results (sorted from slowest to fastest).

Test 1: BSO -- 63.08 seconds
Test 2: ASO (no aggs) -- 21.43 seconds
Test 3: ASO (recommended aggs) -- 19.06 seconds
Test 4: Hybrid -- 11.86 seconds

Well you could knock me over with a feather. I think I need to do some more investigation. Has anyone else seen similar results?

Notes: BSO DB Size: pag 34,507,358,289 bytes; ind 557,080,576 bytes
ASO DB Size: input 768,832 KB; agg 607,616 KB
Hybrid DB Size: pag 918,061,137 bytes; ind 32,792,576 bytes
MDX Output Row Count: 5,899
Most difficult part of this test: Waiting for the silly old BSO calc to finish.

Tuesday, August 18, 2015

Exporting Data using MDX

Essbase has had the ability to export data using MDX for over 10 years. Some people still don't like it, and they have good reason: you still can't get a clean MDX extract from Essbase after all of these years. Go ahead and export Sample.Basic level 0 data and what do you get?

MDX Script (this will export all level 0 data suppressing missing records, shared members and formula members):
set column_width 80;
alter session set dml_output alias off;
spool on to "out.txt";

SELECT {[Jan]:[Dec]} on columns,
NON EMPTY
Crossjoin(Filter(Descendants([Measures], Measures.Levels(0)), [Measures].CurrentMember.Member_Type <> 2 AND NOT Measures.CurrentMember.Shared_Flag),
Crossjoin(Filter(Descendants([Product], Product.Levels(0)), [Product].CurrentMember.Member_Type <> 2 AND NOT Product.CurrentMember.Shared_Flag),
Crossjoin(Filter(Descendants([Market], Market.Levels(0)), [Market].CurrentMember.Member_Type <> 2 AND NOT Market.CurrentMember.Shared_Flag),
Filter(Descendants([Scenario], Scenario.Levels(0)), [Scenario].CurrentMember.Member_Type <> 2 AND NOT Scenario.CurrentMember.Shared_Flag))))
on rows
from [Sample.Basic];

spool off;
exit;

So what does the output in my out.txt file look like?

MAXL> SELECT {[Jan]:[Dec]} on columns,
   2>
   3> NON EMPTY
   4>
   5> Crossjoin(Filter(Descendants([Measures], Measures.Levels(0)), [Measures].CurrentMember.Member_Type <> 2 AND NOT Measures.CurrentMember.Shared_Flag),
   6
 
and so on...

Hey, that looks familiar. I'll just add the setting that removes the MaxL statement from the output file. Oh, wait, there's no such thing. Nice...

And then the output is lovely. Aghh!!!


 Axis-1                                                                          (Jan)                                                                           (Feb)                                                                           (Mar)                                                                           (Apr)                                                                           (May)                                                                           (Jun)                                                                           (Jul)                                                                           (Aug)                                                                           (Sep)                                                                           (Oct)                                                                           (Nov)                                                                           (Dec)                                           
+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------+-------------------------------------------------------------------------------
 (Sales, 100-10, New York, Actual)                                                                                                                           678                                                                             645                                                                             675                                                                             712                                                                             756                                                                             890                                                                             912                                                                             910                                                                             790                                                                             650                                                                             623                                                                             699

The Swiss Army Chainsaw, Perl, to the rescue! If you're on Unix or Linux then you're in luck because they all come with Perl and this script is so simple that it should run with no modifications. If you're on Windows, you'll have to install Perl. There are two output options -- regular and classic. Supply the input file name and the output is sent to STDOUT; add the word "classic" after the input file name and the output will look just like an old-fashioned Essbase extract.

formatMaxl.pl:

#!/usr/bin/perl

if (scalar (@ARGV) < 1){
  die "usage: formatMaxl.pl input [classic]";
}

# if there is a second argument, the file will be output in the classic Essbase export format: no commas, quotes around member names
if (exists($ARGV[1]) && $ARGV[1] eq "classic"){
   $classic = 1;
}else{
   $classic = 0;
}

my $tabfile = shift @ARGV;
open INFILE, $tabfile;

my $counter = 0;   # initialize the counter to zero
my @prevColumns;   # initialize the prevColumns array
my @printColumns;  # initialize the printColumns array
my @columns;       # initialize the columns array

# read through each line of the file
while (<INFILE>){
    ###################
    # handle header row
    ###################
    if ($_ =~ /Axis-1/){
        while(/\((.+?)\)/g ) {   # read all of the column member names in parentheses
            if($counter == 0){       # check if it's the first member name
                if ($classic == 0){  # if it is a regular conversion
                     print $1;       # just print the member name
                }else{               # otherwise
                     print "\"$1\""; # put quotes around the member
                }
                $counter++;          # increment the counter so we know we need commas in front of the rest of the members
            }else{
                if ($classic == 0){
                     print ",$1";    # same as above except the member needs to be preceded by a comma
                }else{
                     print " \"$1\"";
                }
            }
        }
    print "\n"; # new line after the header

    # now handle the data rows
    } elsif($_ =~ /^ \(/) {    # a line beginning with a space and an open parenthesis is a data row
                               # the members should be in a format like so: (Sales, 100-10, New York, PY Actual)
         # split the items in parentheses by commas and the rest by space
         $_ =~ s/#Missing/#Mi/g;   # change #Missing to #Mi
         $_ =~ /\(([^\)]+)\)(\s+)([^\)]+)/;    # this regexp puts everything within parentheses into $1; the data all goes into $3

         @columns = split /\,/, $1;    # split the members in parentheses ($1) by commas

         if ($classic == 0){
             push ( @columns, split /\s+/, $3);  # split the rest of it, the data ($3), by space and push it onto the end of the columns array
             print join(", ", @columns), "\n";   # now print each row with a comma separator
         }else{
             undef @data;                        # start with a clean array
             push ( @data, split /\s+/, $3);     # split the data by space
             s{^\s+|\s+$}{}g foreach @columns;   # take the spaces off the beginnings and ends of the member names

             #########################################
             # handle repeated members in this section
             #########################################
             undef @printColumns;                # start with a clean array
             $memberCount = 0;                   # reset the member count
             foreach $member (@columns){
                 if($columns[$memberCount] ne $prevColumns[$memberCount]){  # if the members in the arrays don't match up, put the member into the printColumns array
                     push @printColumns, $member;                           # so it will get printed
                 }
                 $memberCount++;
             }
             @prevColumns = @columns;            # keep a copy of the member names to compare against the next row
             ###############################
             # end handling repeated members
             ###############################

             $members = join ' ', map { qq/"$_"/ } @printColumns;   # put quotes around each member name, space-separate them, and store in $members
             print $members, " ";                # print the list of member names followed by a space
             print join(" ", @data), "\n";       # print the data array with a space separator followed by a new line
         }
    }
}
close INFILE;
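Usage looks like this (the file names are just examples):

perl formatMaxl.pl out.txt > out.csv
perl formatMaxl.pl out.txt classic > out.exp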
 
 
An example of regular output:
Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
Sales,  100-10,  New York,  Actual, 678, 645, 675, 712, 756, 890, 912, 910, 790, 650, 623, 699
Sales,  100-10,  New York,  Budget, 640, 610, 640, 670, 710, 840, 860, 860, 750, 540, 560, 620
Sales,  100-10,  Massachusetts,  Actual, 494, 470, 492, 519, 551, 649, 665, 664, 576, 474, 454, 510
Sales,  100-10,  Massachusetts,  Budget, 460, 440, 460, 490, 520, 610, 630, 630, 540, 400, 400, 450

An example of classic output:
"Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
"Sales" "100-10" "New York" "Actual" 678 645 675 712 756 890 912 910 790 650 623 699
"Budget" 640 610 640 670 710 840 860 860 750 540 560 620
"Massachusetts" "Actual" 494 470 492 519 551 649 665 664 576 474 454 510
"Budget" 460 440 460 490 520 610 630 630 540 400 400 450

This script gives you the flexibility to export via MDX in a clean format that you can use to load into another cube or send off to another system. Using MDX also allows you to change the output order of the dimensions, which is not easily done using the standard Essbase export function.

Monday, August 17, 2015

Moving data from ASO to BSO

Moving data from ASO to BSO seems like taking a step backward to me. But as a part of testing Hybrid Essbase I want to compare it with ASO. To do this I need to create a BSO cube and move data from the ASO cube with which I'm comparing it. This seems really simple -- just export out of the ASO cube and import into the BSO cube. Unfortunately there's more to the story.

I'd love to just set up a replicated partition from ASO to BSO but that's not possible. The tests:

1. Baseline -- Load data into BSO using BSO generated export
First I loaded data into my BSO cube and exported the file. Then I cleared and loaded that file into the cube. Theoretically this should be a completely fragmentation-free cube and should be the fastest load possible.
Elapsed Time: 16.19 seconds
Page file size: 25,215,057 bytes
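In MaxL terms, that baseline round trip is something like this sketch (using Sample.Basic and a placeholder file name in place of my test cube):

export database Sample.Basic level0 data to data_file 'level0.txt';
alter database Sample.Basic reset data;
import database Sample.Basic data from data_file 'level0.txt' on error abort;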

2. Load data into BSO using ASO generated export (no compression dim)
Next I exported the data from ASO and imported it into the BSO cube. This is dog slow. It took almost 15x longer and 5x the disk space. This is probably the least optimal way of loading data into BSO since it was loading each block into memory and writing to disk for just about every number that got loaded.
Elapsed Time: 238.84 seconds
Page file size: 127,402,065 bytes

3. Load data into BSO using ASO generated export (compression dim is Time)
Next I set the compression dimension for the ASO cube to Time. This changes the way the data export is written to disk -- the compression dimension members will be in columns. This helped a bit because now at least the Time members are loaded and written to a block at the same time since Time is dense in the BSO cube.
Elapsed Time: 58.72 seconds
Page file size: 66,355,281 bytes

4. Load data into BSO using ASO MDX export
Finally, I exported the ASO data using an MDX script. This allows you to get the columns in the right order (largest sparse to smallest sparse then dense) so that each block will be written only once in the target BSO cube. This test is a bit more complex. First of all, the export didn't work in version 11.1.2.2 -- the size of the CrossJoin() was too large.

ERROR - 1200712 - Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits.

The good news is that it worked like a champ in 11.1.2.4.002. This is interesting because the MDX limits listed in the 11.1.2.2, 11.1.2.3 and 11.1.2.4 Tech Refs all appear to be the same. The bad news is that MDX output is still formatted in a very messy way. I wrote a Perl script to format the data -- more about that here. Now we're back to the same results as the pure BSO export.
Note: the MDX export was about 8 seconds slower than the ASO export. Also, the file conversion in Perl took about 9 seconds.
Elapsed Time: 16.38 seconds (plus 8 seconds for the export and 9 seconds for conversion of the export file)
Page file size: 25,215,057 bytes

Conclusion:
Don't blindly export an ASO cube and import the file into a BSO cube. At least try to set your compression dimension to be the same as the most dense dimension in the BSO cube. If you can, export using MDX.
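The column-ordering idea from test 4 looks roughly like this against Sample.Basic (a sketch that assumes Market and Product are the sparse dimensions, largest first, with the dense Year dimension in the columns):

SELECT {[Jan]:[Dec]} ON COLUMNS,
NON EMPTY
CrossJoin([Market].Levels(0).Members,
[Product].Levels(0).Members) ON ROWS
FROM [Sample.Basic];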

Thursday, August 13, 2015

Limit Changes in 11.1.2.4

This probably won't impact most databases but it's worth noting that there are a few changes in the 11.1.2.4 limits documentation.

1. The limit on the number of stored dimension level combinations in an aggregate storage outline is now gone.

2. The limit on the number of dimensions in an aggregate storage or block storage outline is 255. This limit was not mentioned in the 11.1.2.3 limits documentation. I'd like to see someone running a production database with 200+ dimensions. On second thought, no I wouldn't.

3. The limit on the number of possible sparse blocks in a block storage database on 64-bit systems goes from 2^128 down to 2^104. This is the same value as it was for 32-bit systems in 11.1.2.3.

4. The UDA limit goes from 80 bytes (non-Unicode) or 80 characters (Unicode) to 1024 bytes or characters.

5. It looks like a bug in the documentation was fixed for the BSO limits (or maybe one was introduced?). "Number of non-zero values in a dense data block in a block storage database" went away, and "Number of possible cells per block in a block storage outline" changed from 2^52 to (2^28-1). Seems strange to me.

Wednesday, August 12, 2015

Using Filter() in ASO Allocations and Procedural Calcs


The following statement worked in 11.1.2.2.100. Pay special attention to the Filter() statement in the POV. It serves to pull all level 0 members from the Measures dimension and remove any that are dynamic (MEMBER_TYPE of 2). Since dynamic members aren't allowed in a POV, this is a very useful command.


execute allocation process on database ASOSamp.Sample with
POV "Crossjoin( {FILTER([Measures].LEVELS(0).Members,[Measures].CurrentMember.MEMBER_TYPE <> 2)},
Crossjoin( {[Curr Year]},
Crossjoin( {[Jan]},
Crossjoin( {[Sale]},
Crossjoin( {[Cash]},
Crossjoin( {[Coupon]},
Crossjoin( {[1 to 13 Years]},
Crossjoin( {[Under 20,000]},
Crossjoin( {[004118]},
{[Digital Cameras]})))))))))"
AMOUNT "100"
TARGET ""
RANGE "{[80101]}"
SPREAD;

The previous statement throws an error in 11.1.2.4.002.

ERROR - 1300040 - Argument [Target] is missing dimension [Measures].
ERROR - 1241192 - Allocation terminated with Essbase error 1300040 in TARGET.

It's acting like I never bothered mentioning the Measures dimension.

Now there is some debate as to whether you should be able to use Filter() in a POV -- the POV is an MDX Set, after all. So maybe we were getting a free ride for the past couple of years? I'm using Filter() all over the place in my ASO calcs and allocations, and it would be a shame to lose the functionality. Hopefully I'm just doing something wrong. Has anyone else tried this in 11.1.2.4?

Update on 9/9/2015: This issue has been raised with Oracle and defect 21798739 has been assigned.

Update on 12/14/2015: This issue has been fixed in 11.1.2.4.006.

Tuesday, August 11, 2015

Which is faster, a calc or an allocation? Part 1

There are two or more ways to do most things in Essbase, and calculations within an ASO cube are no exception: you can write an allocation or a procedural calc. I tend to prefer allocations because then I'm not forced to maintain a second file, which is necessary when writing procedural calcs. So I thought I'd test which is faster at doing essentially the same operation. In this case I'm copying a scenario in a very large ASO cube. The cube has approximately 25GB of input data.

First the allocation:
execute allocation process on database TEST.TEST with
pov "CrossJoin({[Version1]},
CrossJoin({Descendants([Time],Levels([FY],0))},
CrossJoin({Descendants([ALL_ACCOUNTS],Levels([ACCOUNT],0))},
CrossJoin({Descendants([ALL_PRODUCTS],Levels([PRODUCT],0))},
CrossJoin({Descendants([ALL_STORES],Levels([STORE],0))},
{Descendants([ALL_DIVISIONS],Levels([DIVISION],0))})))))"
amount "([2014 ACTUAL])"
target ""      
range "{[TEMP]}"
spread;

OK/INFO - 1300006 - Essbase generated [3.07368e+08] cells.
OK/INFO - 1013374 - The elapsed time of the allocation is [4230.69] seconds.

Now the calc:
execute calculation on database TEST.TEST with
local script_file "test.csc"
pov "CrossJoin({[Version1]},
CrossJoin({Descendants([Time],Levels([FY],0))},
CrossJoin({Descendants([ALL_ACCOUNTS],Levels([ACCOUNT],0))},
CrossJoin({Descendants([ALL_PRODUCTS],Levels([PRODUCT],0))},
CrossJoin({Descendants([ALL_STORES],Levels([STORE],0))},
{Descendants([ALL_DIVISIONS],Levels([DIVISION],0))})))))"
sourceregion "{[2014 ACTUAL]}";

OK/INFO - 1300006 - Essbase generated [3.07368e+08] cells.
OK/INFO - 1013372 - The elapsed time of the custom calculation is [4477.39] seconds.

Contents of test.csc:
([TEMP]) := ([2014 ACTUAL]);


So the allocation ran in 70:30 and the calc ran in 74:37, making the allocation almost 6% faster (4230.69 vs. 4477.39 seconds). Not an enormous difference with this particular operation, but it's another reason to use allocations over calcs when possible.

Sunday, August 2, 2015

Resurrecting This Blog

I'm officially resurrecting this blog which I barely touched back in 2007 and haven't touched since.

Essbase development seems to be filled with trade-offs: design decisions that give up functionality for performance, and a number of different ways to do the same thing. Very rarely do these different ways perform in the same way. My goal for this blog will be to test out various ways of doing similar things. If you have test cases for me to try, please feel free to send them my way and I'll do my best to test them out. If you see me doing things one way and you think you know a better way, let me know and I'll try to re-test. I want this blog to be a toolbox filled with options. When you know all of your options and how each behaves, you can be a better developer.

In most cases I'll be testing out using the very latest release of Essbase. Until further notice, that's 11.1.2.4.002. I'll try to make a post whenever I do an upgrade as well. Enjoy!