Tuesday, December 15, 2020

New Essbase On-Prem Version Is Now Available!




For the first time in almost six years, there's a new on-prem release of Essbase available. I'm calling it on-prem, but Essbase 21c is the same version whether you run it in Oracle's cloud, somebody else's cloud, or your own Linux box. (Click here to see which distribution versions are supported.) This is an exciting day! I've already had a chance to kick the tires, and I'm impressed by the performance improvements I've been seeing.

As always, Applied OLAP is ready for Dodeca to attach to the latest versions of Essbase. We're happy to set up a demo of Dodeca and Essbase 21c for those who are curious -- just shoot us a message.

Release notes are available here and the installation file is available on Oracle's eDelivery website.

Enjoy!


Monday, October 26, 2020

Essbase MDX Exports

I delivered a presentation last week at the most excellent Hyperion Solutions 2020 on the topic of Essbase MDX. In it I walked the audience through the evolution of Essbase MDX exports. This is a summary of that section. Be sure to read until the very end as there is an update since the presentation.


Generation 1: The Ugly

In the first generation, if you wanted to export data in a batch process using MaxL, the output was nearly unusable. I had to write a parser in Perl just to get the output I needed.

Here's the code:

spool on to 'gen1.txt';

select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

spool off;

Below is my actual output file. I mean, seriously, what am I supposed to do with this?

MAXL> select {[Mar],[Apr]} on columns,
   2>
   3> {[300],[400]} dimension properties
   4>
   5> level_number, member_unique_name on rows,
   6>
   7> crossjoin({[Actual],[Budget]},
   8>
   9> {[Opening Inventory],[Ending Inventory]})
  10>
  11> dimension properties level_number, member_unique_name on pages
  12>
  13> from [Sample].[Basic];

 Axis-2              Axis-2.properties   Axis-1              Axis-1.properties   (Mar)               (Apr)
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------
 (Actual, Opening In (LEVEL_NUMBER = 0,  (Cream Soda)        (LEVEL_NUMBER = 1,                29095               30334
 (Actual, Opening In (LEVEL_NUMBER = 0,  (Fruit Soda)        (LEVEL_NUMBER = 1,                26409               27588
 (Actual, Ending Inv (LEVEL_NUMBER = 0,  (Cream Soda)        (LEVEL_NUMBER = 1,                30334               32266
 (Actual, Ending Inv (LEVEL_NUMBER = 0,  (Fruit Soda)        (LEVEL_NUMBER = 1,                27588               29550
 (Budget, Opening In (LEVEL_NUMBER = 0,  (Cream Soda)        (LEVEL_NUMBER = 1,                27380               28460
 (Budget, Opening In (LEVEL_NUMBER = 0,  (Fruit Soda)        (LEVEL_NUMBER = 1,                27230               29030
 (Budget, Ending Inv (LEVEL_NUMBER = 0,  (Cream Soda)        (LEVEL_NUMBER = 1,                28460               30190
 (Budget, Ending Inv (LEVEL_NUMBER = 0,  (Fruit Soda)        (LEVEL_NUMBER = 1,                29030               31520

 OK/INFO - 1241150 - MDX Query execution completed.

MAXL>

Generation 2: The Bad

With Essbase 11.1.2.4.010, I could specify an output delimiter, which made things far better. The column_separator command was supposed to solve all of my problems.

Here's the code:

set column_separator "|";
spool on to 'gen2.txt';

select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

spool off;

The output was better but not perfect. I still had to deal with the statements being echoed to the file.

MAXL> select {[Mar],[Apr]} on columns,
   2> {[300],[400]} dimension properties
   3> level_number, member_unique_name on rows,
   4> crossjoin({[Actual],[Budget]},
   5> {[Opening Inventory],[Ending Inventory]})
   6> dimension properties level_number, member_unique_name on pages
   7> from [Sample].[Basic];

Scenario|Measures|Scenario.LEVEL_NUMBE|Product|Product.LEVEL_NUMBER|Mar|Apr
Actual|Opening Inve|0|Actual|0|Opening |Cream Soda|1|300|29095|30334
Actual|Opening Inve|0|Actual|0|Opening |Fruit Soda|1|400|26409|27588
Actual|Ending Inven|0|Actual|0|Ending I|Cream Soda|1|300|30334|32266
Actual|Ending Inven|0|Actual|0|Ending I|Fruit Soda|1|400|27588|29550
Budget|Opening Inve|0|Budget|0|Opening |Cream Soda|1|300|27380|28460
Budget|Opening Inve|0|Budget|0|Opening |Fruit Soda|1|400|27230|29030
Budget|Ending Inven|0|Budget|0|Ending I|Cream Soda|1|300|28460|30190
Budget|Ending Inven|0|Budget|0|Ending I|Fruit Soda|1|400|29030|31520

 OK/INFO - 1241150 - MDX Query execution completed.

MAXL> spool off;


Generation 3: The Good

The next iteration, only available in Essbase 19c or higher, is what should have been available in the first place.

Here's the new code. Notice the first line where I can now specify the export file name and the delimiter. Nirvana...

EXPORT INTO FILE "Gen3" OVERWRITE USING COLUMNDELIMITER "|"
select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

Here's the file output:

Scenario|Measures|Scenario.LEVEL_NUMBER|Scenario.MEMBER_UNIQUE_NAME|Measures.LEVEL_NUMBER|Measures.MEMBER_UNIQUE_NAME|Product|Product.LEVEL_NUMBER|Product.MEMBER_UNIQUE_NAME|Mar|Apr
Actual|Opening Inventory|0|Actual|0|Opening Inventory|Cream Soda|1|300|29095|30334
Actual|Opening Inventory|0|Actual|0|Opening Inventory|Fruit Soda|1|400|26409|27588
Actual|Ending Inventory|0|Actual|0|Ending Inventory|Cream Soda|1|300|30334|32266
Actual|Ending Inventory|0|Actual|0|Ending Inventory|Fruit Soda|1|400|27588|29550
Budget|Opening Inventory|0|Budget|0|Opening Inventory|Cream Soda|1|300|27380|28460
Budget|Opening Inventory|0|Budget|0|Opening Inventory|Fruit Soda|1|400|27230|29030
Budget|Ending Inventory|0|Budget|0|Ending Inventory|Cream Soda|1|300|28460|30190
Budget|Ending Inventory|0|Budget|0|Ending Inventory|Fruit Soda|1|400|29030|31520

Generation 2, Part 2: The Plot Twist

This is the plot twist and the part I did not cover in my presentation. In reading through the release notes, I noticed there's a new command, echo_mode, which became available in 11.1.2.4.018. It allows you to turn off the echo of statements in the spooled file, so it does appear you can get a clean MDX export in 11.1.2.4 (as long as you're on patch 18 or higher). The other thing I noticed is that I only got clean output in my export file when running the script in non-interactive mode; if I manually logged in to the MaxL command line interface and ran the same statements, the unwanted statements still showed up in my output file. (There's a quick sketch of the non-interactive invocation after the output below.)

Here's the code:

set message level warning;
set column_separator "|";
set echo_mode off;
spool on to 'Gen2p2.txt';

select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

spool off;


Here's the file output:

Scenario|Measures|Scenario.LEVEL_NUMBER|Scenario.MEMBER_UNIQUE_NAME|Measures.LEVEL_NUMBER|Measures.MEMBER_UNIQUE_NAME|Pr
oduct|Product.LEVEL_NUMBER|Product.MEMBER_UNIQUE_NAME|Mar|Apr
Actual|Opening Inventory|0|Actual|0|Opening Inventory|Cream Soda|1|300|29095|30334
Actual|Opening Inventory|0|Actual|0|Opening Inventory|Fruit Soda|1|400|26409|27588
Actual|Ending Inventory|0|Actual|0|Ending Inventory|Cream Soda|1|300|30334|32266
Actual|Ending Inventory|0|Actual|0|Ending Inventory|Fruit Soda|1|400|27588|29550
Budget|Opening Inventory|0|Budget|0|Opening Inventory|Cream Soda|1|300|27380|28460
Budget|Opening Inventory|0|Budget|0|Opening Inventory|Fruit Soda|1|400|27230|29030
Budget|Ending Inventory|0|Budget|0|Ending Inventory|Cream Soda|1|300|28460|30190
Budget|Ending Inventory|0|Budget|0|Ending Inventory|Fruit Soda|1|400|29030|31520
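
One more note on the non-interactive piece: I run the script by passing it straight to the MaxL shell rather than typing the statements at the prompt. Here's a rough sketch of the invocation; the server name, credentials and script file name (gen2p2.msh) are placeholders for whatever you use in your environment:

essmsh -l admin password -s myserver gen2p2.msh

Run that way, the spool file contains only the query output shown above.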

Wednesday, May 13, 2020

Federated Partitions in Essbase

In this post I'm going to try to explain what a Federated Partition is in the Essbase world, how to create one, and then throw in some speculation about them just for fun.

So what exactly is a Federated Partition? I can find no documentation on them (with the exception of the RealtimeCSV_Updates.xlsx file in the Cloud Gallery). They were briefly presented by Oracle at Kscope19. My only understanding comes from trying to set them up and speaking directly with Oracle product management.

Very simply put, they are transparent partitions to either relational databases or flat files. The data is not stored in Essbase; it remains in the source. "Realtime Partitions" is how they were presented at Kscope19.

I'm sure this architecture has a number of ramifications. The word "fast" does not come to mind, but the jury is still out.

Now I'll walk through creating one in Essbase 19c. I tried getting this to work in OAC back when OAC meant Essbase Cloud but ran into many issues and was not successful. This is going to be a very simple example using a flat file source. When the data in that flat file changes, the data in Essbase will change -- like magic. Yeah, mind blown, I know, let's get started...


Create the Cube and Partition 

Essbase 19c comes with a gallery of applications well beyond Sample Basic. The one designed to demonstrate Federated Partitions is called RealtimeCSV.


Import the cube from the Gallery.

Select the RealtimeCSV_Updates.xlsx file which contains everything Essbase needs to build the cube.

Now I can see the cube in my list of applications.

Under Sources, I selected Create Connection and then File.

I named my connection RealTimeCSV_Conn and selected the Realtime_DS.csv file from the shared folder on the cloud. Since you cannot update the files in the Gallery, I made a copy of the Realtime_DS.csv file so that I could update it later.

Next I selected Datasources and Create Datasource. Then I chose the Connection I just created.

I named the Datasource RealtimeCSV_Conn and clicked Next.

I changed the Units and Price columns to the Double type, then clicked Next.

Finally I was prompted with a Preview of my data and I clicked Create.

Next I clicked on the Areas tab. From there I clicked the Add Area button and entered my Target Area using Essbase functions and member names.
I then clicked the Validate button followed by the Save and Close button.
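
In case you haven't seen one before, an area is just a comma-separated list of member names and member set functions that defines the slice of the cube covered by the partition. As a rough, hypothetical sketch (these member names are illustrative, not necessarily the ones in the RealtimeCSV outline), an area might look something like this:

@LEVMBRS("Product", 0), @LEVMBRS("Market", 0), "Units"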



I then pulled in some data to test the cube. The data is coming into the cube in real time.

Finally, I went back and updated the csv file with some different numbers. I refreshed my data and the changes were reflected.


Just to prove what's going on here, I'll show you the database storage statistics. You can see that there are no blocks and no .pag or .ind files.
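
If you'd rather check from a script than from the UI, MaxL can report the same thing. This is a sketch; I'm assuming the application and cube are both named RealtimeCSV, so adjust the names to match your environment:

query database RealtimeCSV.RealtimeCSV get dbstats data_block;
query database RealtimeCSV.RealtimeCSV list file information;

The first statement returns block counts and the second lists the index and page files, both of which should come back empty here since there are no blocks or files to report.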


Speculating Wildly

So what does this all mean? Why would Oracle be spending time on a feature like this? All of this is speculation influenced by comments I've heard from product managers over the past year...

Oracle is a database company. They make a really powerful database that runs a significant portion of the world. Wouldn't it be really cool if they could scrap page and index files altogether and just have their relational database store all of the level 0 Essbase data? Well that's what this is. Right now. Slap hybrid calculations on top and you have real-time updates to your cube. No loads, no calcs.

It always comes down to performance. This cube has three dimensions. It's got training wheels on and makes Sample Basic look complicated. I'll remain skeptical, outside of small use cases, until I see large cubes using Federated Partitions with sizable data sources. In the meantime, it's another tool in the toolbox. Drop me a line if you start using it in a production environment.


Disclaimer

I urge extreme caution in trying to use Federated Partitions. Keep in mind that Oracle has probably not documented them for a reason. For now I would recommend using traditional techniques for loading data into a cube.

Thursday, January 9, 2020

AUTOMERGE

I started writing this post four and a half years ago. I'm finally getting around to finishing it because I had to put it to use in real life this week.

I ran into what I thought was strange behavior in ASO the other day. I'd submit data to a cell, then submit a zero. I'd see a zero. Then I'd submit data to a cell, then submit a zero. I'd see a #MISSING. I'd repeat this over and over again. The first time I'd get a zero, the second time I'd get a #MISSING. At least that was consistent!

I had a hunch it had to do with ASO data slices and the merge process. In 11.1.2.4 the default ASO behavior is for slices to be automatically merged after some number of slices have been created. The two essbase.cfg settings that control this are AUTOMERGE and AUTOMERGEMAXSLICENUMBER.
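
For reference, here's a minimal sketch of what those settings look like in essbase.cfg if you want something other than the defaults (the values below are illustrative, and essbase.cfg changes require an Essbase restart to take effect):

AUTOMERGE SELECTIVE
AUTOMERGEMAXSLICENUMBER 10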

So the first send of a number created slice 1, the send of the zero created slice 2, the send of the next number created slice 3, and the send of the next zero created slice 4. Four happens to be the default value for the AUTOMERGEMAXSLICENUMBER setting, and ALWAYS happens to be the default value for the AUTOMERGE setting. So when that fourth slice was created, it triggered Essbase to automatically merge the slices back together.
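
If you want to watch this happen, you can check the slice count between sends. Here's a sketch using MaxL (assuming the cube is ASOsamp.Sample); the statement returns, among other statistics, the number of incremental data slices:

query database ASOsamp.Sample get cube_size_info;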

The documentation states that the merge happens after the slice number is exceeded, so it should happen on the fifth load of data. What I observed was that it happened at exactly that number, so on the fourth load of data.

The one thing you can't control here is that when the automatic merge happens, it turns your zeros into #MISSINGs. In most cases this is fine, but in circumstances where you need an actual zero, it can be a problem. Merging the data slices yourself using MaxL lets you control whether or not the zeros are removed.

alter database ASOsamp.Sample merge all data remove_zero_cells;
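
If you want to keep the zeros instead, leave off the remove_zero_cells grammar and the merge will retain them:

alter database ASOsamp.Sample merge all data;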
I'm curious to know if anyone has run into performance problems caused by this setting. Depending upon how many users are sending data to the cube, I would imagine running so many merges could be an issue. Then again, it might help, since merging slices reduces query cost.