
Welcome to the Beyond Blog

As you'd expect from the winners of the Specialized Partner of the Year: Business Analytics award at the Oracle UKI Specialized Partner Awards 2014, Beyond work with leading-edge BI Applications, primarily within the UK Public Sector. We intend to share some of our ideas and discoveries via this blog and hopefully enrich the wider discussion surrounding Oracle Business Intelligence and driving improved insight for customers.

Blog posts tagged in BI Applications

This issue arose at a customer today, and I've seen it happen in the past, so I thought it worthwhile making a quick note.
A situation had arisen which had caused the overnight ETL execution to be delayed. Once it had completed, the users complained that the dashboard was missing some data. This was tracked down to a shared filter which contained a restriction using the variable LAST_REFRESH_DT. When we checked the value of this variable we found it was two days behind.

This had happened because of the way repository variables are refreshed in OBIEE: they are not refreshed as part of the ETL, but according to the schedule defined on their initialization block.

Refresh

So in this case, at midnight every night. That is clearly no good if our ETL runs any later than midnight, as we miss the update on w_day_d.
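
For reference, the variable is populated by an initialization block query along these lines - a minimal sketch only, as the exact table and column used for LAST_REFRESH_DT vary between BI Applications versions (w_update_dt is an assumption here; check the init block in your own repository):

Select Max(w_update_dt) -- assumed column name; verify against your RPD
  From w_day_d;
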
We need to set it to something that is a function of the latest expected finish time of the ETL and the latest time the end users are willing to wait for the refresh. As the latter always has to be after the expected finish time, we can use that. Let's say it's 8am. We should therefore change the time on the schedule to 8am to ensure the refresh of the variable (not just this one, but in theory any repository variable) is done after the warehouse refresh has completed. If there are any exceptional circumstances we need to be aware of these and deal with them accordingly.

Changed Refresh Time

There is of course the option of changing this to an hourly refresh instead; however, this similarly needs a change from the default. It's more about being aware of the issue and knowing that there needs to be some planning in place that is a function of your ETL schedule and end user expectations.

That's all - short and sweet! :)

Tagged in: BI Applications | Posted in Business Intelligence

Session variables are used in the OBIEE repository for (amongst other things) setting user context information for use in analytics and dashboards. These can either be standard variables, or row-wise variables, the difference being that a row-wise variable acts as an array, i.e. it can hold multiple values.
A standard variable may be used to hold a single value, populated by an initialization block - such as the example below which populates the user_id value from E-Business Suite.

Single Initialization Block
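
The query behind a single-value init block like this typically takes the form below, with ':USER' substituted by OBIEE at login - the same pattern appears inside the row-wise example later in this post:

Select user_id
  From fnd_user
 Where user_name = ':USER';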

A typical use for a row-wise variable is to hold a list of accessible elements for use in security data filters, as can be seen in the following example from Oracle BI Applications (11.1.1.8.1), where the variable BUSINESS_GROUP holds the list of business groups that a user has access to according to their responsibility set-up in E-Business Suite.

Row-wise Initialization Block

Data Filter

In this example the variable BUSINESS_GROUP can take on multiple values, and the data filter then expands this to an In clause in a fashion similar to the following:

Select Sum(headcount)
  From wc_headcount_f whf
 Where -- Some predicates
   And whf.business_group_id In (1,2,...);

I.e. the value of the variable is explicitly inserted into the SQL statement. As a side note, this can lead to performance issues (no bind variables used, in-lists with many elements, etc.), however that is a topic I will probably cover in a separate blog.
Anyway, onto what I actually want to show in this blog: the manipulation of row-wise variables to give a set of single-element variables which can be used as if they were defined individually. One might ask "Why would you want to do this?". Well, for a start I might not know how many elements a data set might return. Let's say I have a manager > employee relationship and I wish to populate a variable for each level in the management hierarchy when a user logs in. That user could sit anywhere within the hierarchy, so I would have to define a fixed "upper" number of variables up-front. Not very scalable, and it might also look confusing to someone picking up your work in the future (why have you got a hundred variables defined when only 3 get populated?).

So let's take a look at that SQL statement again which is used to populate the business group row-wise variable:

SELECT DISTINCT
       'BUSINESS_GROUP',
       TO_CHAR(FND_PROFILE.VALUE_SPECIFIC('PER_BUSINESS_GROUP_ID', USER_ID, RESPONSIBILITY_ID, RESPONSIBILITY_APPLICATION_ID))
  FROM (SELECT USER_ID,
               RESPONSIBILITY_ID,
               RESPONSIBILITY_APPLICATION_ID
          FROM FND_USER_RESP_GROUPS
         WHERE START_DATE < SYSDATE
           AND (CASE WHEN END_DATE IS NULL THEN SYSDATE ELSE TO_DATE(END_DATE) END) >= SYSDATE
           AND USER_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                               THEN VALUEOF(NQ_SESSION.OLTP_EBS_USER_ID)
                               ELSE (SELECT USER_ID FROM FND_USER WHERE USER_NAME = ':USER') END)
           AND RESPONSIBILITY_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                                         THEN VALUEOF(NQ_SESSION.OLTP_EBS_RESP_ID)
                                         ELSE RESPONSIBILITY_ID END)
           AND RESPONSIBILITY_APPLICATION_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                                                     THEN VALUEOF(NQ_SESSION.OLTP_EBS_RESP_APPL_ID)
                                                     ELSE RESPONSIBILITY_APPLICATION_ID END))

See what it's doing? The first column, BUSINESS_GROUP, is the name of the variable and the second is the value. So what if we were to generate the name dynamically? I.e.:

Select 'XXMY_VARIABLE' || To_Char(Level) var, 'Value ' || To_Char(Level) val
  From dual
Connect By Level <= 5;

What the above gives us isn't one variable with a set of values, but a set of variables named XXMY_VARIABLE1 to XXMY_VARIABLE5 with a single value each.
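
Each generated variable can then be referenced like any other session variable, for example in a column formula or filter (variable name taken from the sketch above):

VALUEOF(NQ_SESSION.XXMY_VARIABLE3)
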
So why might this be useful? Well, in the first instance, for the situation I've already described where we have an unknown number of hierarchical levels, this technique provides a scalable solution with no need to change the RPD if that number increases in the future. Secondly, it allows for a mechanism for storing fixed-value constants outside of the RPD, whose values can be maintained independently of an RPD deployment. Let's assume a simple structure like this, created in the E-Business Suite ERP source system and maintained using a custom form:

Create Table xxobiee_parameters (
  param_name  Varchar2(30) Not Null,
  param_value Varchar2(240)
  -- Usual audit columns etc.
);
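
As an illustration, a couple of hypothetical rows (the parameter names are made up for this example):

Insert Into xxobiee_parameters (param_name, param_value) Values ('FISCAL_START_MONTH', '4');
Insert Into xxobiee_parameters (param_name, param_value) Values ('DEFAULT_CURRENCY', 'GBP');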

Now a simple initblock as follows will load any parameter entered into that table as a single-value variable.

Select 'XXGENPARAM_'||param_name param_name, param_value val From xxobiee_parameters;

I've prefixed with XXGENPARAM_ purely so that we can be sure we don't also create a variable within the RPD directly with a name that conflicts with one created here. You could of course still do this in theory, however it simply requires an agreed standard to not use that prefix for variable names.
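
Taking the hypothetical FISCAL_START_MONTH row from above, the value becomes available anywhere a session variable can be used:

VALUEOF(NQ_SESSION.XXGENPARAM_FISCAL_START_MONTH)
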
One may also wish to take this a little further and allow for user-specific values. If that is the case, my suggestion would be to make use of E-Business Suite functionality that already exists - profile values. This would provide a generic mechanism for surfacing user profile values without having to make a single change in the RPD after the initial setup. Ideally we would flag the profiles to extract so as to avoid loading unnecessary data; however, due to the lack of a descriptive flexfield on profile options, an alternative solution must be used. This can be anything you wish: an application lookup, a value set, a custom table, etc. For the purpose of this example I will use a common lookup:

Select 'XXPROFILE_' || fpo.profile_option_name, fnd_profile.value(fpo.profile_option_name) profile_val
  From fnd_profile_options fpo, fnd_lookup_values flv
 Where flv.lookup_code = fpo.profile_option_name
   And flv.lookup_type = 'XXOBIEE_PROFILE_VALUES'
   -- View application, security group etc. 
   ;

Remember to ensure you set the execution precedence of this initblock to run after the E-Business Suite authentication initblocks.

You'll need to decide for yourself how useful this may or may not be to your organization. It is of course true that quite often profile values need further processing to be useful (such as the extrapolation of ledgers from a data access set), however as you can see this is very quick to set up and even easier to maintain for those odd occasions where such functionality is needed.

Posted in Techniques

Just as you're getting used to 11.1.1.8.1, here comes 11.1.1.9.1 with additional functionality! One of the most interesting enhancements we're about to check out is the new Essbase GL Balances model.

Note that the documentation states that the Windows version is not available as of initial release.

The release notes are available here: http://docs.oracle.com/cd/E53672_01/doc.111191/e53678/chapter.htm#BIARL113

 

Posted in Installation and Patches

I spend a lot of time talking to customers who have installed (or had installed for them) Oracle BI Applications about the lessons they have learnt from that process. If I couple those lessons with some of the items that Beyond deliver as part of a typical BI Applications installation, then here's my rather succinct list of some key deliverables that I think are important but are often not all delivered. This is applicable to both 7.9.6.4 and 11.1.1.8.1 customers.

See how many you score compared to your implementation! This isn't intended to be exhaustive and is purely to get you thinking. Please also leave some comments, as I'm always keen for a dialogue on what people consider to be important and what isn't.

 

(1) Pre-Installation Assessment Document

Are you actually ready to commence? You need to know that the environment you're considering installing on is fit for purpose. You should be able to work out an optimal blueprint for the architecture and how the software is to be installed. By this I mean that someone has taken the time to consider the licenses purchased, or to allow for that to happen, by devising the architecture that will underpin the implementation.

 

E.g.:

• Where are we to install the application? Across how many machines, and are they physical tin or virtualised? Which O/S and which version, and for how many environments (Dev, Test, Pre-production/UAT, Prod)?
• What's the storage situation - dedicated or on a shared SAN? What's the performance of that?
• Focus in on questions such as: are you going to split the Warehouse and the metadata database, and if so is that on different DBs or different instances in one DB? Which versions of the Database are you going to use?
• You need a diagram showing the machines that are going to support the apps and database tiers, and the core factors/NUPs for each component.
• Cross-check with Oracle's latest certification documentation to make sure that your browsers (and yes, that's ALL the browsers that will be in play when you go live), your O/S, your DB versions, BI EE versions, Java versions, etc. are all compatible and certified together. This needs to be detailed, right down to stuff like "open files" limits on Linux.

 

(2) Patch Analysis Guide

The base BI Applications can be downloaded and installed from Oracle E-Delivery. However, there are important quarterly BI Apps bundle patches (assuming you're using the 11.1.1.8.X version, which was current at the time of writing, as the older 7.9.6.X stream was handled differently) as well as frequent BI EE rollup patches that provide fixes and capability (e.g. the ability to use, say, IE11).

 

The patching that will be applied to the base downloaded BI Applications should be discussed, agreed and documented in a patch analysis guide: exactly which rollup patches will be applied to the initial installation and propagated throughout Test/UAT/etc.

 

One thing to also keep in mind is that if you're running a longer project (a topic in itself) then you should monitor at least the quarterly patches and BI Enterprise Edition tech patches, as the versions you start the project with may not be the versions you finally go live with; so make some provision upfront to assess throughout the project lifecycle.

 

(3) Your Installation Guide

It’s the “YOUR” bit that is key here.  I rarely see this and yet what it gives you, I believe, is invaluable.  What we do when installing for a customer is to fully screenshot and document the whole process.  It’s a relatively painless process to do whilst the installation is being performed.  

 

• It allows the customer to see all the steps that were followed, and in which sequence, to install BI Applications in THEIR development environment. This provides documented proof that everything was successful and nothing was missed (including patches agreed earlier ….).
• This document can be used by the customer as a basis for doing their own installation, perhaps in a test environment, knowing that the steps they are following have been successful in development and that the subsequent environments are all being created exactly the same. This is a pretty important point, as what you certainly don't want are any "differences" between environments. This really helps with "self-sufficiency" for customers who want to be very closely involved with the project and be able to manage and support it going forwards.

 

(4) Configuration Document/Guide

 

A successful BI Applications installation requires user input into the configuration. A really successful project, I believe, works best with the users actually doing some of the configuration. So, whilst we typically ask the questions regarding "What do you want your balance sheet to look like?" and we map the group accounts with the customer, we find that having the customer review their own key domain values and perform some of the mappings has a number of benefits. It certainly provides the customer with the opportunity to understand how the configuration tool works and how they can perform tasks such as adding new warehouse domain members to manage the system moving forwards.

 

That joint approach aside (and here the new Configuration tool in the 11.1.1.X version facilitates it by allowing us to create task owners and monitor their completion), I really do think that some supporting documentation for the decision-making process is key. I would expect to provide customers with a document that explains the following:

 

• A detailed list of ALL the tasks that constitute the functional areas being implemented, with a description of their purpose.
• What decisions were made regarding setting any values associated with the task. E.g. the Initial Extract Date …. what was this set to?
• Who decided the value above, and why?
• Links to any supporting documentation.

 

Having this in place means everyone can see the decision-making process from a single document, which is quite important if questions are raised later in the project when perhaps personnel have changed and memories are dimmed by time.

 

 

(5) Security Matrix

 

I'm a firm believer in security being built in from the beginning of a project and not retrofitted later on. There's a multitude of different security stripes that can be applied. The customer needs to be guided by a clear matrix that allows them to make decisions about items such as:

 

• Which E-Business responsibilities need creating to access the application (assume the same for other source systems).
• The dashboard access that is granted/secured.
• The analytic access that is granted/secured.
• The capabilities to create/amend objects.
• The "security" stripes that are applied to the data (restrict by Operating Unit, by Ledger, by Value Set, etc.).

 

A matrix that allows for easy comprehension, discussion and amendment upfront really helps here, and establishes any security ground rules that will need to be enforced, whether "out of the box" or anything custom that may need applying.

 

 

(6) Cutover Guide

A successful BI system is one that is undergoing constant enhancement as it helps to drive forward business transformation and improvement. In order to achieve this, a sound process is required to dictate how to propagate the various components that constitute a BI system (catalog, repository, custom ETL processes, etc.) from development through to production. If the customer truly wants to gain a degree of self-sufficiency, then having a guide that takes them through this process on their environment is a key deliverable.

     

(7) Master Brand and Build Standards

With BI systems, consistency is a key factor. Interactions with dashboard components need to look and feel consistent, not only to convey a professional aesthetic, but to minimise any confusion and allow the users to quickly and easily adopt new analytics with minimal training or room for error. In this respect you need a "master brand" and build standards.

Unless you have this, there is a danger of parallel developments not hitting that target and requiring refactoring. For example:

 

• Consistency in colours (not just "a green" in a RAG status but A SPECIFIC ONE).
• Analytics all have the date and time executed on them, and a standard title and set of links underneath them.
• Consistency across all events (e.g. with action links, if only one link is available then drill straight to it rather than prompting the user).
• Company logos are all consistent.

 

You’ll soon see that having a guide to the form and function of dashboard components is absolutely vital and getting assistance in developing one prior to the commencement of any development is an important step.

 

(8) Enablement

There are plenty of courses available from a number of sources that can cover the core competencies of ETL (whether you're using Informatica or ODI), modelling, Presentation Services development, etc., and these need sequencing. It's my view that the customer benefits most from having some training in developing analytics/dashboards very early in the project, as that then gives them the capability to "explore" the out-of-the-box deliverables within the first few weeks of the project, once the installation is done and the first warehouse load is performed.

 

What I think is a really important part of enablement, though, is someone walking the customer through their own data in the out-of-the-box subject areas. This is a watershed moment in the project, as within the first weeks the customers can see their own data surfaced through BI Applications. If they are then given an introduction to all of the subject areas that they have purchased, with a description of each one and some example analytics using the customers' own data, this helps to reinforce their understanding of the capabilities of the product. Now, this can be done prior to too much configuration taking place, and can actually assist the users when doing configuration and witnessing the changes that their decisions make. I firmly believe this helps to cement engagement with the project.

 

(9) Performance Guide

For whichever version is being implemented, there is an Oracle guide that provides recommendations for increasing performance. It is certainly reassuring to have some affirmation that it has all been checked, and that advice on memory management, indexes, etc. has been taken into account. The indexes on E-Business certainly help throughput for incremental runs!

 

That's pretty much a whistle-stop tour of some of the items that I think are important for the customer to know about and be involved with (if they want to, of course). It's not everything, and I'd be happy to elaborate on some other items if you'd care to contact me, but if you are striving for either complete or a degree of "self-sufficiency" then I hope it's given you some food for thought.

Posted in Strategic Views

Prune days... often misunderstood, often just left "as default". Possibly not helped by the fact that entering the search term into Google yields the following as the top hit:

The LAST_UPD column in Siebel transactional database tables is used for incremental change capture. This timestamp reflects the actual event time. It is therefore possible for a data row to be committed to the transactional database with a LAST_UPD date that is older than the date on which the last refresh was executed. This will cause the data row to be missed in the subsequent extract (if based purely on LAST_UPD date). However, the LAST_UPD date column still provides an opportunity to improve the change capture process by overlapping the extraction date window by the number of days set in this parameter. The records extracted in the overlapped window are filtered by comparing this information with information in the Image table. The Prune Days setting ensures that the rows that had values in LAST_UPD older than values in LAST_REFRESH_DATE are not missed. This is a parameter that can be set based on experience with processes, such as remote sync, that potentially can cause records to be missed. This parameter cannot be less than 1. For example: Assume the table W_PERSON_D was refreshed on January 15th by querying the table S_CONTACT. And, the Prune Days setting was set to 5. The next time S_CONTACT is queried to load W_PERSON_D, the records that have a LAST_UPD value since January 10 are compared with the ROW_ID of the Image table to cover for any missing records between January 15 and January 10 (the overlap period).

Source: http://docs.oracle.com/cd/E12102_01/books/AnyInstAdm/AnyInstAdmWinInstall43.html

First it's probably worth explaining what this parameter actually does. PRUNE_DAYS is a number of days which is subtracted from the LAST_EXTRACT_DATE parameter before it is passed by the DAC to Informatica. So if we have a LAST_EXTRACT_DATE of 02-JAN-2015 21:00:00 and a PRUNE_DAYS of 1, the value actually used will be 01-JAN-2015 21:00:00.

So do you need to use this on an E-Business Suite system? The answer is... yes. But why?

Consider a scenario where you have an incremental ETL process scheduled to begin at 9pm each evening to extract records from table X. At 20:55 an automated process starts (or even a user manually inputting records) which inserts a batch of records into that table. The process takes 20 minutes to complete and (correctly) commits at the end of processing. It uses a value of SYSDATE for the last_update_date field, which is calculated at the point of insert.

In the meantime, however, the ETL process has completed its load of data by 21:10, and has reset the LAST_EXTRACT_DATE parameter to 21:00 on the current evening in preparation for the next incremental ETL run. However, it has no visibility of the records inserted by the batch process, as these were not committed at the point of select.

Without prune days, some or all of the data inserted by your batch process will be lost, as the last_update_date field is prior to the current LAST_EXTRACT_DATE value for those records. What the PRUNE_DAYS parameter does is move the extraction window back by a day (or whatever value you have set) to pick up such events.
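
In SQL terms the effect on the extract is simply the following (table and column names are illustrative; the real incremental filters are generated within the Informatica mappings):

-- With LAST_EXTRACT_DATE = 02-JAN-2015 21:00:00 and PRUNE_DAYS = 1,
-- the incremental extract effectively becomes:
Select *
  From my_source_table
 Where last_update_date >= To_Date('01-JAN-2015 21:00:00', 'DD-MON-YYYY HH24:MI:SS');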

Prune Days

There is of course a trade-off in using a higher value for prune days, in that the incremental ETL process will reconsider more data that it has previously already processed (and will therefore disregard).

Posted in Techniques