Tuesday, September 22, 2015

Using Ramdisk with ASO

I wanted to see what Essbase ASO performance would look like when it isn't bound by disk. So I thought I'd start by creating a ramdisk and storing my ess*.dat files on it. First of all, a ramdisk is just a file system that you create in RAM rather than on a hard drive, so it's fast -- turbo-chicken-fast. The catch is that RAM is volatile: when the power is turned off, everything on the ramdisk is gone. Luckily, machines and power and such are much more reliable these days, so it's not a huge deal to use. I wouldn't, however, recommend creating a whole bunch of ramdisk in production and putting all of your cubes on it. You've got to have a good design for that, and since I'm not in that line of business I won't get into it here.

The Disclaimer

I ran a few very basic tests using ASOsamp. I understand that not all hardware is created equal so my results might not match your results. This test serves as one point of data that will hopefully further our knowledge of how Essbase behaves in different environments. This test was run on 11.1.2.4.002.

The Digression

I run Essbase on Linux. It's very easy to create a ramdisk on Linux. I used the following command:

mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk
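The mount point has to exist before you run that (mkdir -p /mnt/ramdisk takes care of it), and since this is tmpfs, everything on it disappears at shutdown. If you want the ramdisk recreated automatically at boot, an /etc/fstab entry along these lines should do it -- the 1024m is just the size I used above:

tmpfs /mnt/ramdisk tmpfs size=1024m 0 0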

In order to have the database use the ramdisk, I changed the ASOsamp.Sample storage settings in EAS to point the default and temp tablespaces over to the /mnt/ramdisk directory.
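If you'd rather script that change than click through EAS, MaxL has an alter tablespace statement for aggregate storage applications that should do the same thing. Treat this as a sketch rather than a tested script -- verify the syntax against the Tech Ref, and remember you'd also want to drop the original file locations once the new ones are in place:

alter tablespace ASOsamp.'default' add file_location '/mnt/ramdisk';
alter tablespace ASOsamp.'temp' add file_location '/mnt/ramdisk';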

The Test

My test data was generated using filldb. The data I loaded filled the ess00001.dat file to 285MB, and I captured the load times. I then ran four queries in Excel: a default retrieve, a zoom-in to bottom level on Products, a pivot of Products to columns, and a zoom-in to bottom level on Stores. I then created aggregate views based upon the Essbase recommendations and ran the same queries again. Note: I did not use a compression dimension in the outline.

The Results


The Analysis

These results make a lot of sense to me and illustrate just how cool Aggregate Views are. Here are my thoughts on some of the results:

The Data Load isn't much different because data loading seems to involve heavy CPU use. Anywhere that Essbase is CPU bound, the ramdisk isn't going to help very much. I assume that when Essbase is writing the data it's also doing a lot of checking that members have the correct names, exist in the outline, and so on.

The buffer commit is much faster under ramdisk because this seems to be a disk-heavy operation. The data is initially written to the /temp directory and then moved over to the /default directory. There probably isn't a lot of data validation going on during this step.

The queries without aggregations are virtually identical. When I watch server utilization during a long running Essbase query it's almost always CPU bound -- the ESSSVR process pegs a CPU at 100% most times. So this makes a lot of sense to me. Having the data file in memory doesn't help Essbase much since it's still got to do a lot of calculating.

The default agg is quicker under ramdisk as it must rely fairly heavily upon reads and writes. Again, it needs to write the Aggregate Views to the /temp directory and then move them over to the /default directory.

The last four queries are where the magic happens. Our query times with the views created effectively drop to instantaneous -- and you don't need super fast disk for this to happen. Aggregate Views are so cool! It only takes 12.55 seconds to create 13 views but that actually saves over 24 seconds on the Store dimension zoom-in. This shows how powerful Aggregate Views can be. We will really need the ability to create these in Hybrid as soon as possible.

The Conclusions

Super fast disk is great to have but it really only makes sense when you have disk bound operations. Costs and benefits need to be weighed as in most design and purchase decisions. The results might be different as the ASO size scales up so I'd encourage others to try to replicate this test with a larger data set. I need to get more RAM on my home server before I can do that.

The Question of the Day

Smart View or Excel Add-In?

Thursday, September 3, 2015

ASO Test Data or: How I Learned to Stop Worrying and Create Hybrid

I wanted to fill ASOsamp.Sample with some data so I could run a few tests. I wanted to use the completely undocumented "filldb" command within ESSCMDQ to do so. Unfortunately, it only works on BSO, so I needed to turn ASOsamp into a BSO database first. There's no way to do that, right? Wrong. A very wise Essbase developer points out in this OTN post that one need only run the Aggregate Store Partition Wizard available within EAS.

Let's Recap:

  1. Run the Aggregate Store Partition Wizard to make a BSO copy of your ASO cube. It may give you some warnings, so I suggest first making a copy of the ASO cube and running the wizard on the copy. If you get warnings, go into the ASO outline and fix the issues. You'll need to remove things like solve order and other ASO-specific items.
  2. If you only need to turn the BSO cube into a Hybrid cube, you can use Celvin Kattookaran's utility available here, which changes all upper-level sparse members to dynamic calc. Stop here if you don't need a test bed of data.
    1. Don't forget to set ASODYNAMICAGGINBSO in your essbase.cfg file (there's an example line in the sketch after this list).
  3. Download and unzip ESSCMDQ. It's probably easiest to put this file in the same place where ESSCMD or essmsh already run so you don't need to fiddle with paths, libraries and such.
  4. Run ESSCMDQ. Log in as you would with ESSCMD (no semi-colons!); there's a sample session sketched after this list:
    1. login hostname user password
  5. Select the database you need to fill with data:
    1. select appname dbname
  6. Run the filldb command. See the link for more information on this command. I would suggest starting out with a small size. I had a very small block size in my cube and it went way past the 1 GB mark even though that's what I set it to; with a normal block size it was close (within 10%) to 1 GB.
    1. filldb [size density distance]
  7. Export the data from the BSO cube.
  8. Import the data into the ASO cube.
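To put steps 4 through 8 together, here's roughly what a session looks like. Everything below is illustrative: the host, the credentials, the app name for the BSO copy (I'm calling it ASOsampB), the filldb parameters, and the file name are all mine, and the export/import statements are MaxL sketches you should verify against the Tech Ref before running.

ESSCMDQ
login localhost admin password
select ASOsampB Sample
filldb 100 10 50
exit

Moving the data back into the ASO cube would then look something like this in MaxL:

export database ASOsampB.Sample level0 data to data_file 'filldb_export.txt';
import database ASOsamp.Sample data from server text data_file 'filldb_export.txt' on error abort;

And if you took the Hybrid route in step 2, the essbase.cfg setting from step 2.1 looks something like this (the values I'm aware of are NONE, PARTIAL, and FULL):

ASODYNAMICAGGINBSO ASOsampB FULL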
Now I can easily create a test bed of data for ASOsamp at whatever density I require. I'll be using this in future posts.