Welcome to the Beyond Blog

As you'd expect from the winners of the Specialized Partner of the Year: Business Analytics award at the Oracle UKI Specialized Partner Awards 2014, Beyond works with leading-edge BI Applications, primarily within the UK public sector. We intend to share some of our ideas and discoveries via this blog, and hopefully enrich the wider discussion around Oracle Business Intelligence and driving improved insight for customers.


Current, accurate and secure data are essential ingredients for a customer to understand their business.  Couple this with a cloud-based information visualization toolset and a specialized partner to guide you along the way, and you have the perfect recipe for up-to-the-minute information discovery.  If you've been unsure whether you're ready for the BI Cloud revolution and would like to "test the water" first, now is the perfect time to act with Oracle's 30-day free trial of BI Cloud Service.


http://www.oracle.com/webfolder/technetwork/tutorials/infographics/bics_trial_quickview/index.html

BICS

1.  Overview of Trial Setup Process – an overview of the trial setup process, covering setting up users and administrators and configuring settings to optimize your experience.


2.  A “Start Trial” button where you actually fill in the information to get going with BICS.

If you would like some assistance with setting up your account and using the BI tools to analyze your data then please get in touch with us via our contact page.

Session variables are used in the OBIEE repository for (amongst other things) setting user context information for use in analytics and dashboards. These can either be standard variables or row-wise variables, the difference being that a row-wise variable acts as an array, i.e. it can hold multiple values.
A standard variable may be used to hold a single value, populated by an initialization block - such as the example below, which populates the user id value from E-Business Suite.

Single Initialization Block
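For reference, the query behind such an initialization block is typically along these lines (a sketch only - the exact query depends on your authentication setup), using the ':USER' placeholder that OBIEE substitutes with the logged-in user name:

-- Sketch: populates a single-value user id variable from E-Business Suite
Select user_id
  From fnd_user
 Where user_name = ':USER';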

A typical use for a row-wise variable is to hold a list of accessible elements for use in security data filters, as can be seen in the following example from Oracle BI Applications (11.1.1.8.1), where the variable BUSINESS_GROUP holds the list of business groups that a user has access to according to their responsibility setup in E-Business Suite.

Initialization Block
Row-wise Initialization Block

Data Filter
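For reference, the data filter itself references the row-wise variable through a VALUEOF expression along these lines (the logical column shown here is illustrative rather than the exact BI Applications definition):

"Dim - Business Group"."Business Group Id" = VALUEOF(NQ_SESSION."BUSINESS_GROUP")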

In this example the variable BUSINESS_GROUP can take on multiple values, and the data filter then expands this to an In clause in a fashion similar to the following:

Select Sum(headcount)
  From wc_headcount_f whf
 Where -- Some predicates
   And whf.business_group_id In (1,2,...);

I.e. the value of the variable is explicitly inserted into the SQL statement. As a side note, this can lead to performance issues (no bind variables used, an in-list with many elements, etc.), however that is a topic I will probably cover in a separate blog.
Anyway, onto what I actually want to show in this blog - the manipulation of row-wise variables to give a set of single-element variables which can be used as if they were defined individually. One might ask "Why would you want to do this?". Well, for a start I might not know how many elements a data set might return. Let's say I have a manager > employee relationship and I wish to populate a variable for each level in the management hierarchy when a user logs in. That user could sit anywhere within the hierarchy, so I would have to define a fixed "upper" number of variables up front. Not very scalable, and it might also look confusing to someone picking up your work in the future (why have you got a hundred variables defined when only 3 get populated?). So let's take a look at that SQL statement again which is used to populate the business group row-wise variable:

SELECT DISTINCT
       'BUSINESS_GROUP',
       TO_CHAR(FND_PROFILE.VALUE_SPECIFIC('PER_BUSINESS_GROUP_ID', USER_ID, RESPONSIBILITY_ID, RESPONSIBILITY_APPLICATION_ID))
  FROM (SELECT USER_ID,
               RESPONSIBILITY_ID,
               RESPONSIBILITY_APPLICATION_ID
          FROM FND_USER_RESP_GROUPS
         WHERE START_DATE < SYSDATE
           AND (CASE WHEN END_DATE IS NULL THEN SYSDATE ELSE TO_DATE(END_DATE) END) >= SYSDATE
           AND USER_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                               THEN VALUEOF(NQ_SESSION.OLTP_EBS_USER_ID)
                               ELSE (SELECT USER_ID FROM FND_USER WHERE USER_NAME = ':USER') END)
           AND RESPONSIBILITY_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                                         THEN VALUEOF(NQ_SESSION.OLTP_EBS_RESP_ID)
                                         ELSE RESPONSIBILITY_ID END)
           AND RESPONSIBILITY_APPLICATION_ID = (CASE WHEN 'VALUEOF(NQ_SESSION.EBS_SSO_INTEGRATION_MODE)' = 'Integrated'
                                                     THEN VALUEOF(NQ_SESSION.OLTP_EBS_RESP_APPL_ID)
                                                     ELSE RESPONSIBILITY_APPLICATION_ID END))

See what it's doing? The first column, BUSINESS_GROUP, is the name of the variable and the second is the value. So what if we were to generate the name dynamically, i.e.:

Select 'XXMY_VARIABLE' || To_Char(Level) var, 'Value ' || To_Char(Level) val
  From dual
Connect By Level <=5;

What the above gives us isn't one variable with a set of values, but a set of variables named XXMY_VARIABLE1 to XXMY_VARIABLE5 with a single value each.
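Applied to the management hierarchy scenario mentioned earlier, a sketch along the following lines (the xx_emp_hierarchy table and its columns are purely hypothetical) would produce one variable per level above the logged-in user - XXMGR_LEVEL1, XXMGR_LEVEL2 and so on - however many levels there happen to be:

-- Hypothetical source: xx_emp_hierarchy (employee_id, supervisor_id, user_name)
Select 'XXMGR_LEVEL' || To_Char(Level) var_name,
       To_Char(supervisor_id)          var_value
  From xx_emp_hierarchy
 Start With user_name = ':USER'
Connect By Prior supervisor_id = employee_id;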
So why might this be useful? Well, in the first instance, for the situation I've already described where we have an unknown number of hierarchical levels, this technique provides a scalable solution with no need to change the RPD if that number increases in the future. Secondly, it allows a mechanism to be created for storing fixed-value constants outside of the RPD, whose values can be maintained independently of an RPD deployment. Let's assume a simple structure like this, created in the E-Business Suite ERP source system and maintained using a custom form:

Create Table xxobiee_parameters (
  param_name  Varchar2(30) Not Null,
  param_value Varchar2(240),
  -- Usual audit columns etc
);

Now a simple initblock as follows will load any parameter entered into that table as a single-value variable.

Select 'XXGENPARAM_'||param_name param_name, param_value val From xxobiee_parameters;

I've prefixed the names with XXGENPARAM_ purely so that we can be sure we don't also create a variable directly within the RPD whose name conflicts with one created here. You could of course still do that in theory; it simply requires an agreed standard of not using that prefix for RPD variable names.
One may also wish to take this a little further and allow for user-specific values. If that is the case, my suggestion would be to make use of E-Business Suite functionality that already exists - profile values. This would provide a generic mechanism for surfacing user profile values without having to make a single change in the RPD after the initial setup. Ideally we would flag the profiles to extract so as to avoid loading unnecessary data; however, due to the lack of a descriptive flexfield on profile options, an alternative solution must be used. This can be anything you wish - an application lookup, a value set, a custom table, etc. For the purpose of this example I will use a common lookup:

Select 'XXPROFILE_' || fpo.profile_option_name, fnd_profile.value(fpo.profile_option_name) profile_val
  From fnd_profile_options fpo, fnd_lookup_values flv
 Where flv.lookup_code = fpo.profile_option_name
   And flv.lookup_type = 'XXOBIEE_PROFILE_VALUES'
   -- View application, security group etc. 
   ;

Remember to ensure you set the execution precedence of this initblock to run after the E-Business Suite authentication initblocks.

You'll need to decide for yourself how useful this may or may not be to your organization. It is of course true that quite often profile values need further processing to be useful (such as the extrapolation of ledgers from a data access set), however as you can see this is very quick to set up and even easier to maintain for those odd occasions where such functionality is needed.

This blog entry is based on some feedback from a quick live demo I did at the UKOUG joint BI and Finance SIG in Birmingham last week.  Here I show how easy it is to throw a spreadsheet into the cloud using Oracle BICS (BI Cloud Service) and use Visual Analyzer to perform data discovery.  The original presentation had its grounding in some work we are doing for a large UK council with regard to classifying and analyzing their spend data.  I've already given some insight on how to perform spend classification, so let's take it a step further and consider that we have a subset of some classified spend data we wish to analyze.  N.B. the data I have provided here is all "fake" and for demonstration purposes only; it is not the actual spend of any UK council!

Here I start with a spreadsheet of spend data, showing the amount, the category, the cost centre that authorised the spend, the supplier, etc.  Using Visual Analyzer I can simply point to the sheet and upload it.

 

It successfully loads and adds the data to my project.

This gives me the "palette" of data elements.

 

Now I can select 3 of them, right-click, and let Visual Analyzer select the best visualization for me.  I have selected two attributes (the Supplier and the Year) and one measure (the Amount).

 

Let's see what it creates to allow us to see the spend by supplier per year.  I can then hover on a bar to see some values.

Pretty good.  Let's just quickly sort it by amount ...

 That's better

 

Well, that took about a minute and now I can clearly see what I've spent the most on and with whom, when and how much it was.  So, let's throw up some other visualisations to get a feel for some of the other capabilities and how easy it is for us to perform some more discovery.

Clicking add visualisation gives me a range of available options, so let's choose .... "Tag Cloud".

 

Drag and drop the Amount and Spend Classification

 

This should create a tag cloud with the relative size of the Spend Category being determined by the amount we have spent.  It becomes instantly obvious that we're spending a lot on Cleaning and Security.

 

Now, wouldn't it be good if we could see who's been spending the money?  Let's select the amount and the cost centre then

 

Let's create another visualization, a Treemap, to see what we can determine from that.  We just set the visualisation to Treemap, drag the fields across, and we can see this visualization, where the size of each tile is determined by the relative spend, highlighting the biggest spenders.

 

Now we can see all three analyses together on one screen to give us different perspectives on spend, and we can hover the mouse over anything to see how much has been spent.  Here we look at what Office Management Services have been up to - they have been spending considerably more than everyone else.

 

Finally, let's create a classic stacked bar chart to see the amount by spend category by supplier.  That should be interesting.

 

I can easily just expand the new barchart visualisation to focus on what that's telling me. 

 

Hovering over again, I can see I've spent 6k with SecureCam

 

 

and in comparison I can see 100k going to Securex

 

 

I can now actually focus all four of these visualizations just on that security data, so they are all restricted by it.

I just highlight, then right-click and choose "keep selected"

 

And that shows me this

 

All the analyses have synchronized and I'm looking at them all in the context of the data I have asked to remain in focus.  I selected "Security" and that's what they are all showing me, based on the perspectives I set up.  Running from the top left, I can see the breakdown of relative security spend across all the suppliers that have spend categories like that; the tag cloud will clearly just show me "Security"; the treemap shows all the departments that have spent in that area; and the stacked bar is focused on the selection I've ring-fenced.

I can of course clear that filter and ring-fence, say, Security, IT and Software, and that shows me this

 

 

So the actual operation is very straightforward.  I hope that's given you a flavour of what's possible VERY QUICKLY, but we've clearly not investigated all the capabilities of Visual Analyzer in this blog.  For example, what if we just wanted to look at the smaller spend elements, say all the things where invoices had been raised for values between 0 and 1000 GBP?

I can just go and drop the amount fact into the filters at the top and tweak to my range

 

All the analyses are "listening" out for this, so they all oblige

 

So we can now just focus on the smaller amounts that have been spent.

Let's clear that amount filter out and try something else interesting.  How about we drill into the invoice detail to learn more about the individual invoice spend - and let's do it for something specific: IT Equipment.

To start, we highlight the IT Equipment on the bar chart

 

Now right click and select "Drill"

 

We can select "Invoice Description" and see everything broken down by invoice description.  Here I can now see the individual invoices that made up the spend on IT Equipment.  Mice and keyboards - and we're spending a lot on mice...

 

I hope that's whetted your appetite to learn more about Visual Analyzer, and please do get in touch if you have any questions or would like any advice on how we can help you explore your data.

Spend Classification is a topic that's coming into sharper focus with our clients as they look to better understand and analyze their spend and make savings across the organization.  Oracle provide a Spend Classification complementary component for Oracle BI Applications to assist with this analysis and over the next series of posts we will look at the reasons for considering such a solution, how to implement it and how to actually make use of it.

Products such as Oracle Purchasing allow the buyer to select a categorization when entering PO lines; however, there are some issues here.  Firstly, it's not normally "sense checked" at this point, so as long as the buyer picks a valid value from whatever categorization has been set up, all seems well.  When the product being procured is a "computer mouse" and the buyer enters "Hardware.Mouse" to categorize it, that's a good start.  However, they could just enter something generic like "Misc.Misc" or perhaps (even worse?) something misleading such as "Environmental Services.Mouse".  This categorization has normally been set up by the Finance department, but what if you actually wanted to analyze spend in a different way, not aligned to this categorization but to a different taxonomy?  And what if you had a number of ERP systems in the organization, each with completely separate categorization taxonomies?  Then it becomes increasingly difficult to categorize spend.

What Oracle provide is a "rules engine" solution that can analyse loaded spend data and compare it to a central knowledge base in order to assign a classification to the spend data, allowing the organization to slice and dice it however they like.

The following is an outline of the process (an illustrative sketch of the training data follows the list):

• Create training data for spend classification.  Consider the "training data" as a cleansed dataset (a spreadsheet template is provided) of all the categories that you desire, plus a number of examples of real data with the fields which need to be assessed in order to make that judgement.  Spend Classification will find patterns in the training data set to describe the spend for each category, and the more relevant information it has, the greater the degree of accuracy.

• Create a taxonomy.  Basically, this is the hierarchical "Hardware.Mouse" breakdown of all the categorisations to be used.

• Create a knowledge base based on the taxonomy and the training data.  This knowledge base will be used as the master repository against which to compare any batches of spend data that need to be analyzed.  If people can buy computer mice from different suppliers, with different descriptions, etc., then the more complete the knowledge base, the more accurate the classification process will be.

• Perform the classification of a real spend dataset.  This utilises the knowledge base as a "rules engine" to assign categorisations to each element of spend.

• Check, amend, approve.
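As a purely illustrative sketch of the idea (the real spreadsheet template has its own layout, and these suppliers and descriptions are invented), training rows simply pair a target category with example attributes of the spend:

Category            Supplier        Item Description
Hardware.Mouse      AnyTech Ltd     USB optical mouse
Hardware.Mouse      AnyTech Ltd     Wireless keyboard and mouse set
Cleaning.Services   CleanCo         Monthly office clean - March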

This sounds straightforward, but there's a lot of background detail to this process; for example one can employ a standard knowledge base using APIs or an advanced model that utilizes Oracle Data Mining techniques.    

I was initially a bit cautious when performing the installation of the product, as the current version is "7.9.6", which is a nod to the older version of BI Applications.  Fortunately, it is indeed certified with the 11.1.1.8.1 release of BI Applications - and there's a script specifically to change some of the product views to align them to the new data structures provided with the 11.1.1.x versions of BI Applications.

Once the installation is complete, the application is accessed in a rather unique way that makes access quite seamless for users as some of the Spend Classification dashboards are actually containers for embedded applications as can be seen below.

The tool actually does integrate with iProcurement, so that when any off-contract spend is performed the buyer can be presented with an assisted pop-up that suggests an appropriate categorisation, to at least help out in that respect.  This requires the profile "POR: Enable Category Classification on Non Catalog Request page" to be set.

The next planned blog entries for Spend Classification will look at the key documents that Oracle provide for this product, notes on the implementation and how to actually use it to perform analysis.  Stay Tuned!

As is common with stateless web-based applications, Oracle Application Express (APEX) utilizes an optimistic locking mechanism when updating data via the built-in Automatic Row Processing functionality. When a row is read from the database an MD5 checksum is generated. When the record is written back, the checksum is regenerated prior to writing - if the result now differs, we know that the record on the database must have changed. However, developers often find that the automatic processing is not powerful or flexible enough for their needs and instead write custom PL/SQL processes to perform their updates. The problem with this, however, is that such procedures very often look like this:

Procedure update_row(p_id In xxMyTable.id%Type, p_value In xxMyTable.value%Type)
Is
Begin
  Update xxMyTable
     Set value=p_value,
         last_update_date=Sysdate
   Where id=p_id
     And value!=p_value;
End update_row;

The clear problem here is that there is no protection against a situation like the following, whereby two users simultaneously modify a record and the user who saves first ends up losing their changes.

Locking Issue

As an aside, non web-based tools typically use a pessimistic locking mechanism, whereby a row is locked when a user begins modifying it (via a Select For Update Nowait statement), and the lock is then released at the point of commit. Any other user then attempting to modify the record (and thus obtain a lock) is refused.

So to implement optimistic locking, we need to do so manually, and the purpose of this blog is to illustrate one mechanism for doing so, which builds upon the method used by Oracle E-Business Suite self-service screens. This method stores a value against each row - object_version_number (OVN) - which is read at the point the record is read. The user changes the record, and then at the point of saving, the value is re-read from the table. If this is different from what was read originally then we know the record has changed. If it is the same, we are OK to make the update, and we then increment the OVN.

OVN Mechanism
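As a minimal sketch (the :P1_* bind items are hypothetical APEX page items), the initial read when the record is loaded into the page might look like this:

-- Read the current value and OVN into the page items at load time
Select value, object_version_number
  Into :P1_VALUE, :P1_OVN
  From xxMyTable
 Where id = :P1_ID;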

What is critically important here, however, is to realise that even now the update routine can be called multiple times, and we need to recognise that each session maintains a consistent view of the data until the other session commits. So we could still be in a situation where two users read a value of X from the table and both then call our update routine. If both read the value from the table before either commits, then the value read will be the same for each (i.e. the same value that was read when the record was loaded into the form) and thus both transactions will continue - we are in the same situation as before. So what we need to do here is obtain a lock on the records when we read the OVN, marking our intention to update them. When the second transaction then attempts to obtain a lock on the same row, it is blocked from doing so. We do this in the same way as described previously in the pessimistic example.

Procedure update_row(p_id In xxMyTable.id%Type, p_value In xxMyTable.value%Type, p_ovn In xxMyTable.object_version_number%Type)
Is
  e_record_changed  Exception;
  ln_ovn            xxMyTable.object_version_number%Type;
Begin
  -- Lock the row and read the current OVN
  Select object_version_number
    Into ln_ovn
    From xxMyTable
   Where id=p_id
   For Update Nowait;

  If ln_ovn != p_ovn Then
    Raise e_record_changed;
  End If;

  Update xxMyTable
     Set value=p_value,
         object_version_number=object_version_number+1,
         last_update_date=Sysdate
   Where id=p_id
     And value!=p_value;
End update_row;

We can now be sure that a user cannot retrieve a record from the database, modify it, and be affected by changes made by another user at the same time. If such a situation does occur then one of two things will happen.

  1. An ORA-00054 exception will be raised alerting the user that an attempt to obtain a lock could not be satisfied. This occurs when the second session attempts to obtain the lock prior to the first session committing its changes.
  2. Our custom e_record_changed exception will be raised. This occurs when the first session has committed its changes prior to the second session attempting to obtain a lock.

With regard to (1), we may actually want to catch that exception and re-raise it as something a bit more user-friendly (such as "Warning - another user is editing this record"). It should go without saying that this rule applies equally to Delete operations as well as Updates; both procedures should take the OVN that was read at the point of record loading as a parameter.
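As a minimal sketch of that (the error message text and the :P1_* page items are purely illustrative), the calling process could trap ORA-00054 and re-raise it:

Declare
  e_row_locked Exception;
  Pragma Exception_Init(e_row_locked, -54);  -- ORA-00054: resource busy, NOWAIT specified
Begin
  update_row(p_id => :P1_ID, p_value => :P1_VALUE, p_ovn => :P1_OVN);
Exception
  When e_row_locked Then
    Raise_Application_Error(-20001, 'Warning - another user is editing this record');
End;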
So if you are writing custom table handler routines using a web-based tool such as Oracle APEX in a multi-user environment (although it is good practice regardless), it is imperative that you consider and cater for such situations. Users become very frustrated when data they know they saved suddenly gets overwritten or vanishes completely, and they will soon lose faith in your system if this is allowed to happen.
