Thursday, 1 April 2021

Using Groovy to check and fix cube to cube validations

Why would you do this?

I don't profess to be skilled at coding with Groovy - without Google I would be lost - but what I have found is that Groovy is great and fast at running what you want, and also good at helping determine what should and shouldn't run.

So it's month end - all the planning is done and we are going to take a snapshot of our budget and copy it to forecast - BUT we should check a few things first to ensure data integrity in the application.

In this instance we have a planning cube and a reporting cube - we want to make sure they are aligned and that the data maps that have been run are matching. I could simply have a task list and do these checks manually to make sure everything is OK - or we could use Groovy to run a few checks and, if the cubes are not aligned correctly, try to fix them up for us.

What is it going to do?

The Pseudo code:

  • Run a rule on our BSO planning cube with an @XREF to the ASO cube checking values match
  • If they match do nothing
  • If there is a mismatch run a data map to re-push the data from BSO to ASO
  • Run the rule again to check to see if the problem is fixed
  • If data is still out try running an aggregation on the BSO cube
  • Run the check rule again to see if it is fixed
  • If fixed all good - if still an error we simply give an error message (but we could expand the logic to run individual Entity checks and report the Entity that has the mismatch - see the sketch after the code below)

Now show me the code

Here is an example of the code - the scripts are cut down and this could be extended to run other checks, but hopefully it gives you an idea of the theory behind the process...

/* First set the application connection for running the scripts */
Cube cubeFin = operation.application.getCube("Finance")

/* Define the check script. The script does not calculate anything; it returns
   a message depending on whether the cubes match or there is a mismatch */
def script = """
FIX("Month" "FY21" "Plan" "Final" "All_Accounts" "Plan Element" "ENTITY")
  JAN(
    IF(YearTotal == @XREF(_ASO1Cube_, YearTotal))
      /* Totals match, so return a "No Errors" message and the value on the cubes */
      @RETURN(@CONCATENATE("No Errors ", @HspNumToString(YearTotal)), ERROR);
    ELSE
      /* There is a mismatch, so panic and return a mismatch error message */
      @RETURN("Error: ASO BSO mismatch", ERROR);
    ENDIF
  )
ENDFIX"""

/* This is our consolidation script, to be used if all else fails - hopefully this will never get used */
def consolScript = """
FIX("Month" "FY21" "Plan" "Final" "All_Accounts" "Plan Element")
  AGG("ENTITY");
ENDFIX"""

/* The logic starts here */
try {
  /* Run the check script */
  cubeFin.executeCalcScript(script)
}
/* We know there will be an exception, so we catch it */
catch (CalcScriptException e) {
  /* Print the return message so we can see what is going on */
  println "Return Message " + e.atReturnMessage
  /* Check whether the return message is good or bad, and if bad run more code */
  if (e.atReturnMessage == "Error: ASO BSO mismatch") {
    /* If we get here we try to fix the mismatch by running a data push */
    println "Running Data Push"
    operation.application.getDataMap("Finance_to_Rpt").execute(true)
    try {
      /* The push has run, so re-run our initial rule to see if it has fixed the problem */
      cubeFin.executeCalcScript(script)
    }
    catch (CalcScriptException e2) {
      /* Again we know the script will "fail", so we see if the message is good or bad */
      println "Return Message " + e2.atReturnMessage
      if (e2.atReturnMessage == "Error: ASO BSO mismatch") {
        /* If we get here there is still an issue, so run an aggregation rule */
        println "Running Aggregation"
        cubeFin.executeCalcScript(consolScript)
        try {
          /* Run our final check to see if it has fixed the mismatch */
          cubeFin.executeCalcScript(script)
        }
        catch (CalcScriptException e3) {
          println "Return Message " + e3.atReturnMessage
          if (e3.atReturnMessage == "Error: ASO BSO mismatch") {
            /* If still no good then give up and do manual investigations */
            println "More Investigation Required"
          }
          else {
            /* The aggregation solved the issue - we should see why it needed to be run in the first place */
            println "Aggregation Resolved Issue"
          }
        }
      }
      else {
        /* If we get here then the push resolved the issue, so return that message */
        println "Push resolved issue"
      }
    }
  }
  else {
    /* This is what we want to begin with - no errors, and no need to run any data pushes or aggregations */
    println "No Errors"
  }
}
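And as flagged in the pseudo code, the logic could be expanded to check Entity by Entity. A minimal sketch of that idea - the entity names here are illustrative, and it assumes the "ENTITY" member in the check script's FIX is a placeholder token we can swap out:

/* Sketch only: run the same check script once per entity and collect the failures */
def mismatches = []
["E100", "E200", "E300"].each { String entity ->   /* illustrative entity list */
  try {
    /* Swap the placeholder in the FIX for a real entity member */
    cubeFin.executeCalcScript(script.replace('"ENTITY"', '"' + entity + '"'))
  }
  catch (CalcScriptException ex) {
    if (ex.atReturnMessage == "Error: ASO BSO mismatch") {
      mismatches << entity
    }
  }
}
println mismatches ? "Entities with mismatches: " + mismatches : "No Errors"

Because it reuses the same check script, any extra validation added there gets picked up by the per-Entity loop for free.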
 

 Done!

 

Saturday, 26 January 2019

Why the @SPLINE function is worth a look

The @SPLINE function according to the Oracle documentation is defined as:

@SPLINE (YmbrName [, s [, XmbrName]])

"Applies a smoothing spline to a set of data points."
On the face of it, I'm not sure I would ever use it - all it does is take a data set and attempt to flatten the line to remove the outlier data points. So what makes this useful? We can copy data points using @XRANGE: we can take one data point and assign it to multiple cells, and in doing so create blocks in the target data set. As it creates blocks in the target range we don't have to calculate these members independently, making the rule run quickly.

So, before I start looking at the @SPLINE function, I will spend a bit of time looking at another function that makes it even more useful: @CalcMgrDateToString. I am going to assume we have a Period dimension with months and a separate Year dimension, as is the case in 90% of standard PBCS implementations.


So if we pass a date value of 20190320 to the @CalcMgrDateToString function we can get it to return a value in any Java SimpleDateFormat pattern. The two Java date formats I constantly use are MMM and yy: MMM will return Jan, Feb etc., and yy will return the last two digits of the year. This means we can write a function using a date value to return a member, using the following syntax:

@MEMBER(@CalcMgrDateToString(20190320,"MMM")) = "Mar"


and

 
@MEMBER(@CONCATENATE("FY",@CalcMgrDateToString(20190320,"yy"))) = "FY19"
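As a quick sanity check outside Essbase, the same two patterns in plain Groovy (this is just standard Java SimpleDateFormat behaviour) produce exactly those strings:

// Check what the MMM and yy SimpleDateFormat patterns return for 20190320
import java.text.SimpleDateFormat

Date d = new SimpleDateFormat("yyyyMMdd").parse("20190320")
assert new SimpleDateFormat("MMM", Locale.ENGLISH).format(d) == "Mar"   // the Period member
assert "FY" + new SimpleDateFormat("yy").format(d) == "FY19"            // the Year member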



Now we have that, let's have a look at how we can use the @SPLINE Function:

First we are going to look at how we can take a single cell FTE and populate a range according to a start and end date.

Here is our starting point


Using a "No Year" - or base Year as we have called it, we enter our data variables; FTE, start date and end date.

To populate all years going forward, including creating blocks in those periods, we can use a script along the following lines:
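This is a sketch only - the cube name, the member and date-variable names, and the exact shape of the @XRANGE/@SPLINE assignment are illustrative assumptions here (the GitHub link below has the full working version):

Cube cubePln = operation.application.getCube("Plan1")   /* assumed cube name */

def fteScript = """
FIX("Plan", "Working", "Employee1", "FY10")
  "FTE"(
    /* Build the target range from the stored start and end dates (assumed to be
       held as yyyymmdd values in the Start_Date and End_Date accounts) and let
       the @SPLINE assignment create the target blocks as it copies the value */
    @XRANGE(@MEMBER(@CONCATENATE("FY", @CalcMgrDateToString("Start_Date", "yy")))
              ->@MEMBER(@CalcMgrDateToString("Start_Date", "MMM")),
            @MEMBER(@CONCATENATE("FY", @CalcMgrDateToString("End_Date", "yy")))
              ->@MEMBER(@CalcMgrDateToString("End_Date", "MMM")))
      = @SPLINE("FTE");
  )
ENDFIX"""

cubePln.executeCalcScript(fteScript)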



The results are as follows:


As you can see, the FTE is populated for all required periods: just by fixing on FY10 we populate all years/months (in this model the Years dimension is sparse and, as mentioned, we don't have any block creation problems with this script). Just because we can, I have included a section of code to calculate the proportion of FTE in the first and last periods.
NB. The extra code is also useful because the @SPLINE function requires two unique cells in the target, so if you have a start and end date in the same month it will not work.
     
     

https://github.com/DavidAmblerBlog/Spline/blob/master/FTE 


 This will give the following results:


Another useful feature of the @SPLINE function is that we can "copy" data within an @XRANGE from a source to a target data set.

I use this when modeling employee leave, as we don't calculate a salary for employees on leave: we copy the FTE for the date range that the person is on leave to a member I call "FTE_Before_Leave". The following code runs this process.

Our starting point is as follows:


Entering a start and end date as shown:


Then on save in our data entry form we run the following code.
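In outline the rule does something like the sketch below - the member names, date variables and the exact @SPLINE form are illustrative assumptions, the year scoping is simplified, and the part-month proportion logic is omitted. The numbered list that follows describes the full process.

Cube cubePln = operation.application.getCube("Plan1")   /* assumed cube name */

def leaveScript = """
FIX("Plan", "Working", "Employee1")
  "FTE"(
    /* 1. Restore any previously saved FTE_Before_Leave back to FTE */
    IF (NOT (@ISMISSING("FTE_Before_Leave")))
      "FTE" = "FTE_Before_Leave";
    ENDIF
    /* 2. Clear the saved copy so cancelled leave gives a clean slate */
    "FTE_Before_Leave" = #Missing;
    /* 3. Copy the FTE for the leave months into FTE_Before_Leave, using the
       same @XRANGE = @SPLINE pattern as the FTE population script */
    @XRANGE(@MEMBER(@CONCATENATE("FY", @CalcMgrDateToString("Leave_Start", "yy")))
              ->@MEMBER(@CalcMgrDateToString("Leave_Start", "MMM"))->"FTE_Before_Leave",
            @MEMBER(@CONCATENATE("FY", @CalcMgrDateToString("Leave_End", "yy")))
              ->@MEMBER(@CalcMgrDateToString("Leave_End", "MMM"))->"FTE_Before_Leave")
      = @SPLINE("FTE");
    /* Steps 4 and 5 (part-month FTE reduction and stashing the dates) are omitted */
  )
ENDFIX"""

cubePln.executeCalcScript(leaveScript)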



The code runs the following processes:
  1. Copies any previously entered FTE_Before_Leave back to FTE; this assumes there is no leave and the FTE is restored to a full FTE
  2. Clears out the FTE_Before_Leave entered so it is a clean slate; this is done to ensure that if leave is cancelled the FTE returns to a normal FTE profile
  3. Using the start and end dates of leave, populates FTE_Before_Leave by taking a copy of the FTE for the months the person is on leave and assigning it to the FTE_Before_Leave member
  4. Calculates the FTE for periods when the person is on leave, taking the portion of the month that the person is away off the FTE; so if they are away for the entire month the FTE for that month is zero
  5. Copies the leave dates to another period so next time the calc is run it can run step 1
The results are as follows:


So as you can see the @SPLINE function is handy, flexible and - provided you keep the data set to a minimum - a very fast function. If you don't currently include it as part of your logic, give it a go.

Sunday, 16 December 2018

New PBCS v18.12 ASO range functions in Calculation Manager

Just a very quick entry: with the recent release of 18.12 there are some additional functions available when limiting a range or POV to apply a formula to in Calculation Manager for ASO rules.

These are additional functions that work in the same fashion as the equivalent standard MDX statements:

@COUSIN
@DISTINCT
@EXCEPT
@FILTER
@FIRSTCHILD
@GEN
@GENMBRS
@INTERSECT
@LASTCHILD
@NEXTMBR
@PREVMBR
@ISIBLINGS

The Oracle documentation covers how each of these functions works.

The first function I wanted to test was the @EXCEPT function, as we have had a requirement for that for some time.

To test, I tried to remove one account code from the level zero descendants of my profit and loss accounts - syntax I thought would be simple enough:


@EXCEPT(@Level0Descendants("PL"), "A12345")

Where  A12345 is a level zero account.

However the rule failed to validate with the following error message:

Error Cause: Syntax error in input MDX query on line 1 at token ')' Rule  xxxxx

As the functions were not listed in the release notes of this version, I initially figured that they had somehow slipped through the migration cracks and shouldn't have been included in this release, but I decided to give it a bit more of a go and tried the following syntax:
 

 @EXCEPT(@Level0Descendants("PL"), @Level0Descendants("A12345"))
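That behaviour makes sense if you keep in mind that these wrappers generate MDX: the underlying Except function takes two sets as its arguments, so a lone member such as "A12345" is not a valid second argument, whereas @Level0Descendants("A12345") returns a set (containing just that member).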

As these functions are not documented and this is the first release, Oracle may fix this issue - but if not, this is how to use the @EXCEPT function.

Tuesday, 4 December 2018

Loading multiple UDA's via Data Management

In a recent project to migrate an existing application from on premise to PBCS, we were tasked with converting a straightforward ODI metadata load to a Data Management load - it seemed simple enough and was going smoothly until we had to load more than one UDA.

I'm not going to go through the process of setting up a metadata load - there are already a few blogs around for that; my pick would be John Goodwin's blog.

Our load file in this case is tab delimited, with the two UDA's in double quotes and a tab separating them.
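For illustration (the member and UDA names here are assumed), a line of the load file looks something like this, where every separator - including the one between UDA1 and UDA2 inside the quotes - is a tab:

Entity	E100	E100 Alias	"UDA1	UDA2"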



We have set the data mapping as a simple "like" * to * - the data validates without any issues and looks OK.




While the data load validates, it fails on Export with the following error stating that the number of fields is wrong.


Looking at the error message, it seemed that we had to change the delimiter between the UDA's. ODI is comma separated, so we tried that and it passed validation on Export - but looking at the member properties we found that only a single UDA had loaded.




Not really what we were after, so we reverted to the original format, but adjusted the target mapping from a straight * to "*".


This time the import looks like this:


So we are on the right track, and looking at the final result we have success:


So there you have it - multiple UDA's via Data Management. The solution is simple enough: just ensure that there are double quotes around the string of UDA's, and map the target to "*", quotes included.