If you were to go by the DBA Guidelines for cache (32MB of cache for each 2GB of input data), you would need 64MB of cache to use the default number of threads when aggregating a 2GB ASO database.
Let's see how this translates to the real world.
I have an ASO cube with 9,999,840 KB of input level data. That should require, well, let's do the math:
First, figure out how many times larger my input data is than 2GB: 9,999,840 / 2,097,152 = 4.768...
Now take the square root of that: √4.768 = 2.183...
Now multiply by the 32MB default: 2.183 × 32MB = 69.876...MB of cache.
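If you'd rather let Python do that math, here it is as a quick sanity check (just a sketch of the square-root scaling described above; the function name is mine, not anything from the docs):

```python
from math import sqrt

# Scale the 32MB default cache by the square root of how many times the
# input data exceeds the 2GB (2,097,152 KB) baseline.
def sqrt_scaled_cache_mb(input_kb, baseline_kb=2 * 1024 * 1024, default_cache_mb=32):
    return default_cache_mb * sqrt(input_kb / baseline_kb)

print(f"{sqrt_scaled_cache_mb(9_999_840):.3f} MB")  # ~69.877 MB, same as the math above
```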
Hang on, if you read further down the page you see:
To improve the performance of building aggregates, take the following steps.
- Increase the size of the aggregate storage cache to at least 512 MB or 20% of the input data size, whichever is smaller.
All that math for nothing. So I should set it to 512MB or roughly 1,953MB (20% of my input size), whichever is smaller. That's easy: 512MB < 1,953MB.
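Or, as a quick Python check (again just a sketch, using the input size in KB from above):

```python
input_kb = 9_999_840
twenty_pct_mb = 0.20 * input_kb / 1024   # ~1,953 MB
print(min(512, twenty_pct_mb))           # 512 -- the smaller of the two wins
```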
<scratching head> So do I need to allocate 512MB per thread, or is that in total? My guess is that it's in total.
Once again, let's see how this translates to the real world.
Let's just test it out and compare performance across a range of cache sizes. My CALCPARALLEL is set to 16 in this case.
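(For reference, CALCPARALLEL is typically set in essbase.cfg; I'm assuming a line like the one below, scoped to the TEST application and database.)

```
CALCPARALLEL TEST TEST 16
```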
Here's the MaxL script I'll use:
alter database TEST.TEST clear aggregates;
alter application TEST set cache_size 32MB;
alter system unload application TEST;
execute aggregate process on database TEST.TEST stopping when total_size exceeds 1.1;
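To run that script at a bunch of different cache sizes without editing it by hand, a wrapper along these lines works. This is only a sketch: the login, host, and credentials are placeholders, essmsh is assumed to be on the PATH, and timing the whole essmsh call is just a rough proxy for the aggregation time Essbase reports in the application log.

```python
import os
import subprocess
import tempfile
import time

# Same statements as the MaxL script above, with the cache size parameterized
# and a placeholder login/logout added so essmsh can run it non-interactively.
MAXL_TEMPLATE = """\
login 'admin' 'password' on 'localhost';
alter database TEST.TEST clear aggregates;
alter application TEST set cache_size {cache_mb}MB;
alter system unload application TEST;
execute aggregate process on database TEST.TEST stopping when total_size exceeds 1.1;
logout;
"""

cache_sizes_mb = [32, 64, 70, 100, 128, 164, 200, 256, 512, 1024, 2000]

for cache_mb in cache_sizes_mb:
    # Write a one-off MaxL script with this run's cache size plugged in.
    with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as f:
        f.write(MAXL_TEMPLATE.format(cache_mb=cache_mb))
        script_path = f.name

    start = time.perf_counter()
    subprocess.run(["essmsh", script_path], check=True)
    elapsed = time.perf_counter() - start

    os.remove(script_path)
    print(f"{cache_mb} MB cache: {elapsed:.2f} s total script time")
```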
The Results
The table below shows each cache size and the corresponding aggregation process time; each run created 13 views. As you can see on the graph, the returns diminish after about 164 MB, which is well below the recommended 512MB / 20% guideline.

| Cache Size (MB) | Aggregation Process Time (s) |
| --- | --- |
| 32 | 753.78 |
| 64 | 383.53 |
| 70 | 334.61 |
| 100 | 283.98 |
| 128 | 257.95 |
| 164 | 241.38 |
| 200 | 236.79 |
| 256 | 233.31 |
| 512 | 234.80 |
| 1024 | 235.01 |
| 2000 | 228.21 |
Graph: Time in seconds is on the y-axis, cache size in MB is on the x-axis.
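If you want to recreate that graph from the table, a few lines of matplotlib will do it (a sketch; the original chart was presumably made in a spreadsheet):

```python
import matplotlib.pyplot as plt

# Data straight from the table above.
cache_mb = [32, 64, 70, 100, 128, 164, 200, 256, 512, 1024, 2000]
seconds  = [753.78, 383.53, 334.61, 283.98, 257.95, 241.38,
            236.79, 233.31, 234.80, 235.01, 228.21]

plt.plot(cache_mb, seconds, marker="o")
plt.xlabel("Cache size (MB)")
plt.ylabel("Aggregation process time (s)")
plt.title("ASO aggregation time vs. aggregate storage cache size")
plt.show()
```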