
Thursday, 1 April 2021

Using Groovy to check and fix cube to cube validations

 Why would you do this?

I don't profess to be skilled at coding with Groovy - without Google I would be lost - but what I have found is that Groovy is great and fast at running what you want, and just as good at helping determine what should and shouldn't run.

So it's month end - all the planning is done and we are going to take a snapshot of our budget and copy it to forecast - BUT we should check a few things first to maintain data integrity in the application.

In this instance we have a planning cube and a reporting cube - we want to make sure they are aligned and that the data maps that have been run are matching. I could simply have a task list and do these checks manually to make sure everything is OK - or we could use Groovy to run a few checks and, if the cubes aren't aligned, try to fix them up for us.

What is it going to do?

The Pseudo code:

  • Run a rule on our BSO planning cube with an @XREF to the ASO cube checking values match
  • If they match do nothing
  • If there is a mismatch run a data map to re-push the data from BSO to ASO
  • Run the rule again to check to see if the problem is fixed
  • If data is still out try running an aggregation on the BSO cube
  • Run the check rule again to see if it is fixed
  • If fixed, all good - if there is still an error we simply give an error message (but we could expand the logic to run individual Entity checks and report the Entity that has the mismatch)

 Now show me the code

Here is an example of the code - the scripts are cut down and this could be extended to run other checks, but hopefully it gives you an idea of the theory behind the process...

/* First set the application connection for running the scripts */
Cube cubeFin = operation.application.getCube("Finance")

/* Define the check script - the script is not calculating anything and will
   return an error message depending on whether the cubes match or there is a mismatch */
def script = """
FIX("Month" "FY21" "Plan" "Final" "All_Accounts" "Plan Element" "ENTITY")
 JAN(
 IF(YearTotal == @XREF(_ASO1Cube_, YearTotal))
   /* Totals match, so return a "No Errors" message with the value from the cube */
   @RETURN(@CONCATENATE("No Errors ", @HspNumToString(YearTotal)), ERROR);
 ELSE
   /* There is a mismatch, so panic and return a mismatch error message */
   @RETURN("Error: ASO BSO mismatch", ERROR);
 ENDIF
 )
ENDFIX"""

/* This is our consolidation script to be used if all else fails -
   hopefully this will never get used */
def consolScript = """
FIX("Month" "FY21" "Plan" "Final" "All_Accounts" "Plan Element")
 AGG("ENTITY");
ENDFIX"""

/* The logic starts here */
try {
    /* Run the check script */
    cubeFin.executeCalcScript(script)
}
/* We know there will be an exception, so we catch it */
catch (CalcScriptException e) {
    /* Print the return message so we can see what is going on */
    println "Return Message " + e.atReturnMessage
    /* Check whether the return message is good or bad, and if bad run more code */
    if (e.atReturnMessage == "Error: ASO BSO mismatch") {
        /* If we get here we try to fix the mismatch by running a data push */
        println "Running Data Push"
        operation.application.getDataMap("Finance_to_Rpt").execute(true)
        try {
            /* The push has run, so now we re-run our initial rule to see if it
               has fixed the problem */
            cubeFin.executeCalcScript(script)
        }
        catch (CalcScriptException e2) {
            /* Again we know the script will "fail", so we see if it is good or bad */
            println "Return Message " + e2.atReturnMessage
            if (e2.atReturnMessage == "Error: ASO BSO mismatch") {
                /* If we get here there is still an issue, so run the aggregation rule */
                println "Running Aggregation"
                cubeFin.executeCalcScript(consolScript)
                try {
                    /* Run our final check to see if it has fixed the mismatch */
                    cubeFin.executeCalcScript(script)
                }
                catch (CalcScriptException e3) {
                    println "Return Message " + e3.atReturnMessage
                    if (e3.atReturnMessage == "Error: ASO BSO mismatch") {
                        /* If still no good then give up and do manual investigations */
                        println "More Investigation Required"
                    }
                    else {
                        /* The aggregation solved the issue - we should also look at
                           why it needed to be run in the first place */
                        println "Aggregation Resolved Issue"
                    }
                }
            }
            else {
                /* If we get here then the push resolved the issue, so return that message */
                println "Push resolved issue"
            }
        }
    }
    else {
        /* This is what we want to begin with - no errors, and we didn't need to
           run any data pushes or aggregations */
        println "No Errors"
    }
}
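The per-Entity extension mentioned in the pseudo code could be sketched along these lines. This is a hypothetical sketch only - it reuses the `script` and `cubeFin` variables from the rule above, and the entity list is made up for illustration:

```groovy
/* Hypothetical sketch: loop over a hard-coded list of entities, run the same
   check script fixed on one entity at a time, and collect the mismatches */
List mismatches = []
["E100", "E200", "E300"].each { String entity ->
    /* Swap the top "ENTITY" member in the FIX for the individual entity */
    String entityScript = script.replace('"ENTITY"', '"' + entity + '"')
    try {
        cubeFin.executeCalcScript(entityScript)
    } catch (CalcScriptException ex) {
        if (ex.atReturnMessage == "Error: ASO BSO mismatch") {
            mismatches << entity
        }
    }
}
println mismatches ? "Entities with mismatches: " + mismatches : "No Errors"
```

That way the output names the offending Entity rather than just reporting that a mismatch exists somewhere.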
 

 Done!

 

Tuesday, 4 December 2018

Loading multiple UDAs via Data Management

In a recent project to migrate an existing application from on-premise to PBCS we were tasked with converting a straightforward ODI metadata load to a Data Management load - it seemed simple enough and was going smoothly until we had to load more than one UDA.

I'm not going to go through the process of setting up a metadata load - there are already a few blogs around for that; my pick would be John Goodwin's blog.

Our load file in this case is tab delimited, with the two UDAs in double quotes and a tab separating them.
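As an illustration (the column layout and member names here are made up - only the UDA string format matters, with <TAB> marking a real tab character), a row of the load file would look something like:

```
Member<TAB>Parent<TAB>"S1803<TAB>JS1803Prep"
```

The key point is the double quotes wrapping the tab-separated pair of UDAs.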



We have set the data mapping as a simple like mapping, * to * - the data validates without any issues and looks OK.




While the data load validates, it fails on Export with the following error stating the number of fields is wrong.


Looking at the error message it seems we had to change the delimiter between the UDAs. ODI is comma separated, so we tried that - it passed validation and the Export, but when we look at the member properties we find only a single UDA has loaded.




Not really what we were after, so we revert to the original format but adjust the mapping from a straight * to "*".


This time the import looks like this:


So we are on the right track - and looking at the final result, we have success.


So there you have it, multiple UDAs via Data Management - the solution is simple enough: just ensure that there are double quotes around the string of UDAs.

 

** Slight Edit for this - I've found that regardless of the file's delimiter, the UDA string itself must be tab-delimited — for example: "S1803<TAB>JS1803Prep". If you're generating this string from a SQL database, use CHR(9) in Oracle or CHAR(9) in SQL Server to insert the tab character correctly.
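The same string can also be built in a Groovy rule or script rather than in SQL - a minimal sketch, using the UDA values from the example above:

```groovy
/* Build the UDA field for the load file: the UDAs joined by a tab
   character, wrapped in double quotes */
List udas = ["S1803", "JS1803Prep"]
String udaField = '"' + udas.join('\t') + '"'
println udaField   // prints "S1803<TAB>JS1803Prep" with a real tab between them
```

Whichever way you generate it, the tab between the UDA values is what Data Management expects.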