Solving a TM1 performance problem

Introduction

One of our major customers was facing severe performance problems. The problems concerned 2 cubes in which we wanted to execute a rather tricky elimination. Please follow along as we walk through the cube architecture.

Cube architecture

We have an allocation cube with the following dimensions:

  1. Financial code (10 n-level elements)
  2. Year (2008, 2009, 2010, 2011, 2012)
  3. Month (12 months plus YTD consolidations)
  4. Scenario (actuals, budget rounds)
  5. Allocation phase (7 phases, populated by TurboIntegrator)
  6. Entity (combined with country in 1 dimension)
  7. Cost center (several hundred elements)
  8. Product (several thousand elements)
  9. Relation_Entity (the counterpart dimension of Entity, for eliminations in a Reporting cube)
  10. Relation_cost center (the counterpart dimension of Cost center, for eliminations in a Reporting cube)
  11. Account (several thousand elements)

As you can tell, this is a heavy cube. It contains actuals and budget data across very detailed analysis dimensions. Several million data points are loaded using TI in a "base" phase; after that, other processes populate the remaining allocation phases. No rules are used for the allocations: lookup cubes containing percentages aid us in carrying them out.
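
To make this concrete, here is a minimal sketch of what one such allocation step could look like in a TurboIntegrator data tab. All cube, variable and element names below are invented for illustration, and the real processes contain far more logic:

  # Hypothetical TI data tab fragment: the source view delivers
  # base-phase records, vValue holds the cell value.

  # Percentage from a lookup cube (invented name and dimensions)
  nPct = CellGetN('Alloc_Percentages', vCostCenter, vTargetCostCenter, vYear);

  # Write the allocated amount into the next allocation phase
  IF(nPct <> 0);
     CellPutN(vValue * nPct, 'Allocations',
              vFinCode, vYear, vMonth, vScenario, 'Phase 2',
              vEntity, vTargetCostCenter, vProduct,
              vRelEntity, vRelCostCenter, vAccount);
  ENDIF;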

The reporting cube is similar, and goes as follows:

  1. Product type (a Text attribute on the Product dimension)
  2. Financial code
  3. Year
  4. Month
  5. Scenario
  6. Allocation phase
  7. Entity
  8. Cost center
  9. Product
  10. Account

The reporting cube is rules-based. The rules differ by allocation phase. For certain phases, we want to carry over the respective data from the allocations cube. For other phases, however, we want to eliminate certain activity within the (multinational) group, both at the Cost center level and at the Entity level. Therefore, we added the counterpart dimensions for Cost center and Entity (see above): they contain the same elements, but with a prefix to ease the eliminations.
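
To give an invented example: if the Entity dimension contains the n-level entities BE01 and NL01 under a consolidation Group_West, then Relation_Entity contains R_BE01, R_NL01 and R_Group_West with the same rollup. Thanks to the fixed prefix, a rule can derive the counterpart of the Entity element it is evaluating by simple string concatenation.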

Reporting logic of eliminations

This is the logic to be followed (Cost center = CC, Entity = E):

  1. For an n-level CC and E, we show the respective values from the other cube;
  2. For an n-level CC and a consolidated E, we show the total activity in that CC and E, EXCLUDING the activity in that CC between the n-level E’s of that Group E;
  3. For an n-level E and a consolidated CC, we show the total activity in that CC and E, EXCLUDING the activity in that E between the n-level CC’s of that Group CC;
  4. For consolidated E’s and CC’s, we have a double elimination. We show the total activity in that E and CC, EXCLUDING all activity in the n-level E’s and CC’s among themselves.
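
A quick invented example for case 2: say that Group entity GE consolidates the n-level entities E1 and E2, and that cost center CC1 records 100 of activity under GE, of which 30 is traffic between E1 and E2. The report then shows 100 - 30 = 70 at the intersection of CC1 and GE.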

I hope you are still with me; the most difficult part of the article is now behind us. Given the above elimination logic, we need to use TM1 calculation rules. As far as I know, there is no way to achieve the same result with consolidation structures and TI processes alone. The selection chosen for Cost center and Entity determines which eliminations are carried out. If the structure of the Cost centers or Group entities changes, the eliminations will change as well.

Prefixes to eliminate data

Judicious use of the 2 counterpart dimensions in the allocations cube (and a prefix for element names) allows us to use generic and relatively simple rules. We carry over a number from the allocations cube and subtract either 0, 1 or 2 consolidated values: no subtraction for case 1 above, 1 subtraction for cases 2 and 3, and 2 subtractions for case 4.
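
As a sketch only, such a rule could look as follows. All names are hypothetical, the cube is trimmed to 5 dimensions for readability, and 'R_' stands for the counterpart prefix; in the real model there is a statement per allocation phase, with 0, 1 or 2 subtractions as described above:

  # Hypothetical, simplified Reporting cube rule
  ['Elimination phase'] = C:
       # gross activity: counterpart dimensions at their top consolidations
       DB('Allocations', !Entity, !CostCenter,
          'R_Total_Entity', 'R_Total_CostCenter', !Account)
       # minus activity between the n-level Entities of the selected Entity
     - DB('Allocations', !Entity, !CostCenter,
          'R_' | !Entity, 'R_Total_CostCenter', !Account)
       # minus activity between the n-level Cost centers of the selected CC
     - DB('Allocations', !Entity, !CostCenter,
          'R_Total_Entity', 'R_' | !CostCenter, !Account);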

Feeding…?

But then feeding these rules… This poses a big challenge for Cost centers and Entities: every single number in an allocation phase would feed a large number of consolidated Cost center AND consolidated Entity elements. The Cartesian product kicks in, and the cube explodes as the data volume of the allocations cube grows. An 8 GB RAM server machine quickly became insufficient: the model ran out of memory whenever top-level consolidated values were retrieved in the Reporting cube. No surprise there.
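
To see why, consider the shape such a feeder would take (invented names again). TM1 expands a feeder that targets a consolidated cell into feeders for all the n-level cells underneath it, so a statement like this fans out over the full Cartesian product of both rollups for every base record:

  FEEDERS;

  # Hypothetical feeder in the Allocations cube: every base-phase number
  # would feed the rule-calculated consolidations in the Reporting cube,
  # once for every Entity rollup times every Cost center rollup.
  ['Base'] => DB('Reporting', 'Total_Entity', 'Total_CostCenter', !Account);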

The solution!

What exactly did the trick to feed the cube rules? Since data loads in the allocations cube are monthly, without manual data entry, it was not a big problem to copy the base data from the allocations cube to the corresponding n-level cells in the Reporting cube. The existence of data at n-level then serves the purpose of feeding the rules at C-level. We even skip the need for a rule at n-level Cost center and Entity, because the copy process takes care of this. We needed to "feed" the consolidated values so that they show up in Executive Viewer (Advisor in the IBM Cognos Express suite) and so that zero suppression can be used. The model went from an estimated 10 GB of RAM and a server load time of more than an hour to about 800 MB of RAM and a load time of about 1 minute.
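
The copy itself is plain TI. Below is a minimal sketch of the data tab, with invented variable names, assuming a zero-suppressed view on the base phase of the Allocations cube as the data source; the Product type element is derived from the text attribute on the Product dimension:

  # Hypothetical TI data tab: copy base-phase cells from the
  # Allocations cube to the matching n-level cells in Reporting.
  sProdType = ATTRS('Product', vProduct, 'Product type');

  CellPutN(vValue, 'Reporting',
           sProdType, vFinCode, vYear, vMonth, vScenario, vPhase,
           vEntity, vCostCenter, vProduct, vAccount);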

Flowers to the Netherlands!

Kudos to my former colleague Luc Cusiel for coming up with this nice solution.



