Friday, December 7, 2018

Adding 3D Charts to Oracle Analytics

There was a time, not too long ago, when pretty much all business graphics consisted of poorly labelled, hideously colored line charts and bar graphs. There was a dearth of visualization types that could help people read and analyze data effectively. Then people discovered the ability to create "3D" charts, and the once unattractive but essentially harmless business chart turned into a visually pleasing graphic that wows an audience in a good way. Though the use of 3D charts has its critics, they remain among the most popular visualizations in modern reporting.


Oracle Analytics lets you consume 3D charts, for example a 3D bar chart, in the form of a custom visualization plugin with the ability to zoom in/out and rotate. Unlike a static 3D chart, this plugin helps you understand the context of the 3 dimensions in much greater detail, with ease.

It requires 3 inputs:
- 2 attributes that define each bar's position in the 3D chart
- 1 measure that determines the height of the corresponding bar
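To make the shape of these inputs concrete, here is a minimal standalone sketch in Python with matplotlib (this is not the plugin's code; the regions, products, and sales values are made up). Two attributes fix each bar's position on the chart floor, and one measure gives its height; the resulting window can be rotated and zoomed much like the plugin:

```python
import numpy as np
import matplotlib.pyplot as plt

regions = ["East", "West", "North", "South"]   # attribute 1: position along x
products = ["A", "B", "C"]                     # attribute 2: position along y
rng = np.random.default_rng(7)
sales = rng.uniform(10, 100, size=(len(regions), len(products)))  # measure: bar height

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs, ys = np.meshgrid(np.arange(len(regions)), np.arange(len(products)), indexing="ij")
ax.bar3d(xs.ravel(), ys.ravel(), np.zeros(sales.size),
         0.6, 0.6, sales.ravel(), shade=True)
ax.set_xticks(np.arange(len(regions)))
ax.set_xticklabels(regions)
ax.set_yticks(np.arange(len(products)))
ax.set_yticklabels(products)
ax.set_zlabel("Sales")
plt.show()  # drag to rotate, scroll to zoom
```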




3D charts seem to make data more exciting, drawing people's attention to it, and a popular maxim holds that adding "depth" to a graphic adds depth to the data analysis. However, 3D charts can be misleading and can work against you if not read correctly.

Are you an Oracle Analytics customer or user?

We want to hear your story!

Please voice your experience and provide feedback with a quick product review for Oracle Analytics Cloud!
 

Monday, November 19, 2018

Comparative Tiled Growth Rate: immediate deep insight on your data using Oracle DV Templates

Oracle Analytics Cloud offers the ability to apply an existing analysis to any other dataset, different from the one it was built on. This feature is called Replace Data Set. By leveraging this capability, one can re-use any existing project to gain insight on virtually any dataset. Pre-defined projects can be used as templates for anyone to gain deep insights on their own datasets.

In this post, let's see how to leverage the DV Growth Binning example available on Oracle Analytics Library (https://www.oracle.com/analytics-library).

What is the DV Growth Binning example project?

The Growth Binning example originally uses a dataset about countries' ecological footprints. The data shows 50 years of history for an aggregated carbon footprint metric across 176 countries around the world. The DV calculations group these countries into 5 equal groups (quintiles) according to each country's growth rate on that metric over the 50 years. The first group shows the slowest-growing countries (in this case, countries whose carbon footprint decreased over the years), and the last group shows the fastest-growing countries.
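The binning logic is easy to express outside DV as well. Here is a hedged pandas sketch of the same idea (the project itself uses run-time DV calculations; the country labels and metric values below are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "country": list("ABCDEFGHIJ"),
    "footprint_t0": [10, 20, 15, 30, 25, 12, 18, 22, 28, 16],  # metric 50 years ago
    "footprint_t1": [8, 35, 15, 90, 30, 10, 36, 26, 84, 20],   # latest metric value
})
df["growth_rate"] = (df["footprint_t1"] - df["footprint_t0"]) / df["footprint_t0"]
# Split into 5 equal-sized groups (quintiles) by growth rate;
# bin 1 = slowest growing (here, shrinking), bin 5 = fastest growing.
df["growth_bin"] = pd.qcut(df["growth_rate"], q=5, labels=[1, 2, 3, 4, 5])
print(df.sort_values("growth_rate"))
```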


The DV project then shows more detail on which individuals make up each group, in two different canvases: Details and List. List, in particular, is a page showing all the individuals sorted by their growth rates while also indicating the gross value of the metric. That makes it possible to immediately recognize individuals that contribute significantly in value and have a high or low growth rate.



The project delivers this insight for year-based, quarter-based or month-based analysis, to accommodate datasets with different time spans or grains. All of the calculations are run-time project calculations; no Data Flow is needed.

How to apply this insight to my own dataset?

This project can be used as a template on any other dataset, as long as a date and a metric exist in the dataset. This is done using the Replace Data Set feature; the process is very simple and takes just a few seconds. This brief video shows a recording of applying the project to various datasets:

By leveraging the Replace Data Set feature, every project can become a template that anyone can re-use on various datasets within seconds, as in this example. This empowers Oracle DV users to get to deep insights extremely quickly on any dataset.

Are you an Oracle Analytics customer or user?

We want to hear your story!

Please voice your experience and provide feedback with a quick product review for Oracle Analytics Cloud!
 

Tuesday, November 6, 2018

Undocumented Beta OAC 18.3.3 Feature: Configuring Synonyms for the ASK Interface

The ASK interface in OAC is a very simple and direct way to gain insight about your data in seconds. Just type (or speak) a phrasal question; OAC will interpret it, match it with the most likely measures and attributes existing in any indexed dataset you have access to, and return the visualizations that best answer your question. Simple and efficient, this works both in the web-based OAC UI and via the mobile interface in the Day by Day application.

OAC 18.3.3 (Sept 2018) introduced several enhancements to ASK, like better interpretation of question constructs, understanding of Top/Bottom question syntax, etc. But one feature only made it in as Beta in 18.3.3, because a robust, finalized UI was still lacking: configuring synonyms for column name indexes. This feature did not make it into the documentation, but it is already operational on 18.3.3 pods.

The configuration of synonyms using the beta feature is simple and quick, but quite manual and fragile. The file syntax and dataset names must be strictly respected. This short video briefly describes it.


The feature will be greatly enhanced in upcoming releases of OAC, in terms of experience, UI, and capability. The current configuration may not migrate automatically when the feature is fully released.
But in the meantime it can help address some immediate needs today, for example a rudimentary translation of system-defined column names into different languages, so users can ask simple questions in their native tongue and get answers from a single source in ASK.

Are you an Oracle Analytics customer or user?

We want to hear your story!

Please voice your experience and provide feedback with a quick product review for Oracle Analytics Cloud!
 


Wednesday, October 10, 2018

Dataflows get better and smarter

Data flows in the latest version of Oracle Analytics Cloud are more powerful and useful than ever. Let's take a look.

Data flows in Oracle Analytics let users take one or more data sources, join them, and transform the data to produce a curated dataset that users can then visualize and analyze. Data flows have various built-in functions/nodes, such as adding new columns/calculations, removing columns, grouping, binning, training different machine learning models, forecasting, sentiment analysis, etc., to transform and enrich the data.

The latest version of Oracle Analytics adds several new features to dataflows, including Dataset Prompts, Branching, Incremental Data Processing, and output column metadata management. We will go through each of these new features in detail in this blog.

Dataset Prompts: 

The Dataset Prompts feature lets users choose the input or output datasets of a dataflow on the fly, at the time the dataflow is run. The Prompt option is useful when a user would like to reuse a complex dataflow with another dataset, or to write the output dataset under a different name, without having to open and edit the flow. The Prompt option effectively parametrizes the input and output datasets of a dataflow. By default this option is disabled; users enable it by clicking the Prompt check box. Here are a few snapshots that show how to enable Prompts for dataflows:


                               
The Name field takes the default dataset name as input; this should be the name of an actual dataset present in the instance.
The Prompt field takes the prompt text to be shown when running the dataflow.
The Prompt option is available for both input and output datasets of the dataflow. If the Prompt option is not selected, the dataflow runs with the dataset selected during the dataflow creation/edit phase.

This is how the prompt window looks when a dataflow with prompts enabled is run:


To summarize, the Prompt option adds a great deal of flexibility by letting you specify a dataflow's input and output datasets on the fly, without having to edit the dataflow.
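Conceptually, a prompt-enabled dataflow behaves like a function whose input and output dataset names become parameters supplied at run time instead of being fixed at design time. Here is a hedged Python analogy (not an OAC API; the file and column names are illustrative):

```python
import pandas as pd

def run_dataflow(input_name: str = "sample_order_lines",
                 output_name: str = "sales_summary") -> None:
    """One fixed transformation, parameterized by dataset names chosen at run time."""
    df = pd.read_csv(f"{input_name}.csv")             # stand-in for the input dataset
    result = df.groupby("Customer Segment", as_index=False)["Sales"].sum()
    result.to_csv(f"{output_name}.csv", index=False)  # stand-in for the Save Data node

run_dataflow()                                      # prompts off: design-time defaults
run_dataflow("q4_order_lines", "q4_sales_summary")  # prompts on: names given at run time
```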

Here is a video that demonstrates the Dataset Prompts feature in dataflows:



Branching:
The Branching option allows users to branch the output of a node (except the Train ML Model node) in the dataflow into 2 or more branches. Users can apply different transformations on different branches and save the outputs of these branches to different datasets. The end node of each branch is always Save Data. To add a branch node, click + and select the Branch node. This is how the branch node and its options look:

                   
The number of branches can be incremented or decremented by entering a value or using the UI controls. Each branch can process a disjoint subset of the data and return a distinct output. For example, in the snapshot below, 3 branches are added to the Sample Order Lines dataset.


The 1st branch computes Sales by Customer Segment and saves it in a dataset.
The 2nd branch computes Sales by Product Category and saves it in a different dataset.
The 3rd branch adds a month column and saves the entire result in yet another dataset.

On running this dataflow, three different datasets are created. Here is a snapshot of the output of these 3 branches:

                                                 

To summarize, the Branch option is useful when a user wants to apply different transformations to different subsets of the data and save the results separately.
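As a hedged analogy (pandas, not OAC code), the three-branch example above amounts to one input feeding three independent transformations, each saved separately; the "Order Date" column below is assumed for illustration:

```python
import pandas as pd

orders = pd.read_csv("sample_order_lines.csv")  # hypothetical export of the dataset

# Branch 1: Sales by Customer Segment.
(orders.groupby("Customer Segment", as_index=False)["Sales"].sum()
       .to_csv("sales_by_segment.csv", index=False))

# Branch 2: Sales by Product Category.
(orders.groupby("Product Category", as_index=False)["Sales"].sum()
       .to_csv("sales_by_category.csv", index=False))

# Branch 3: add a Month column and save the entire result.
(orders.assign(Month=pd.to_datetime(orders["Order Date"]).dt.month)
       .to_csv("orders_with_month.csv", index=False))
```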

Here is a quick video tutorial that shows how the Branch option can be used:


Output Controls:
The Output Controls feature lets users decide how an output column of a dataflow should be treated and saved (as an Attribute or a Measure) in the output dataset. For measure columns, a default aggregation can also be chosen. On adding the Save Data node to the dataflow, users are given the option to decide how each output column should be treated: they can select Attribute or Measure from the drop-down list, and for measure columns they can set the default aggregation rule. Here is a quick snapshot that shows how users can change the column treatment and default aggregation rule:



To summarize, this feature gives users more control over the data type and aggregation of each output column.
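Conceptually (an illustration, not an OAC API), the Save Data node effectively attaches per-column metadata like the following, which consuming projects then honor:

```python
# Hypothetical per-column metadata; names and aggregations are illustrative.
output_columns = {
    "Customer Segment": {"treat_as": "Attribute"},
    "Order Date":       {"treat_as": "Attribute"},
    "Sales":            {"treat_as": "Measure", "default_aggregation": "Sum"},
    "Discount":         {"treat_as": "Measure", "default_aggregation": "Average"},
}
```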
Here is a quick video tutorial that demonstrates this feature:


Incremental Data Processing:
The Incremental Data Processing feature allows users to run a dataflow only on the incremental rows that become available between batch runs. This makes efficient use of resources by processing only new data rather than re-running on data that has already been processed. This option is available only for datasets created from database connections, and it can be enabled for only a single input dataset within a dataflow.

Enabling incremental processing for a dataset is a two step process:
1) The first step is to set the New Data Indicator column while creating the dataset from a database. To enable incremental processing, set the New Data Indicator field to one of the columns from the dataset on the configuration page. New data added to the database will be identified based on this indicator column. Here is a quick snapshot which shows how to configure the new data indicator column:



In this case, the New Data Indicator column is set to the TIME_BILL_DT (date) column.

2) After adding the newly created dataset as an input to the dataflow, select the "Add New Data Only" field to enable incremental processing for this dataset. Here is a quick snapshot that shows how this should be done:

                                             

Now the dataset is enabled for incremental processing: when the dataflow is run after changes are made to the underlying dataset/table, only the updates are processed.

The output of the dataflow for the incremental data can either be appended to the existing output or replace it. Here is a quick snapshot which shows where to choose this option while saving the output dataset:

                                     
Now all the required parameters are set for incremental processing. When we save and run this dataflow for the first time, it runs on the entire dataset; subsequent runs process only the changed (added or removed) data in the configured dataset.
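The underlying pattern is a classic high-water-mark load. Here is a hedged Python sketch of that logic (not OAC internals; the watermark file and output names are hypothetical), using the TIME_BILL_DT indicator column from the example above:

```python
import os
import pandas as pd

WATERMARK_FILE = "last_seen.txt"  # hypothetical store for the high-water mark
INDICATOR = "TIME_BILL_DT"        # the configured New Data Indicator column

def run_incremental(source: pd.DataFrame) -> None:
    if os.path.exists(WATERMARK_FILE):
        # Subsequent runs: keep only rows newer than the last-seen indicator
        # value, and append them to the existing output.
        last_seen = pd.Timestamp(open(WATERMARK_FILE).read().strip())
        new_rows = source[pd.to_datetime(source[INDICATOR]) > last_seen]
        new_rows.to_csv("output.csv", mode="a", header=False, index=False)
    else:
        # First run: process the entire dataset.
        new_rows = source
        new_rows.to_csv("output.csv", mode="w", header=True, index=False)
    if not new_rows.empty:
        with open(WATERMARK_FILE, "w") as f:
            f.write(str(pd.to_datetime(new_rows[INDICATOR]).max()))
```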

Here is a quick video tutorial that demonstrates the incremental data processing capabilities of dataflows:


Are you an Oracle Analytics customer or user?

We want to hear your story!

Please voice your experience and provide feedback with a quick product review for Oracle Analytics Cloud!
 

Sunday, September 23, 2018

Replace Dataset Feature

In this blog we will talk about a new feature in the latest version of Oracle Analytics Cloud called Replace Data Set. This feature allows users to intelligently apply (or map) data visualization analyses created on one dataset onto another dataset, with a few clicks and intuitive column mapping.

Oracle DV allows users to create in-depth and insightful analyses, add sophisticated calculations, create nice-looking infographics, and more. Sometimes, after spending quite some time and energy on a project, they realize they want to do a "similar" analysis on a different dataset, or substitute certain columns within those analyses. The Replace Data Set feature lets users capitalize on the effort they have already put into creating a project by offering an intelligent tool to swap a dataset/column with its corresponding new dataset/column. This feature also comes in handy when you do analysis on one batch of data and later want to perform the same analyses on a different batch of the same dataset (perhaps for a different date range, region, or product).

For demonstration purposes, in this blog we will use a sample project that analyzes Quarter Value growth, Quarter Indexed growth, and Quarter Ago growth for sample sales data. Here are some snapshots to give you a quick overview of this project:


This analysis looks good, and as an end business user I would like to do the exact same analysis but on different data that I have: Expenses data. To achieve this goal I will use the Replace Data Set feature, and in the process I will walk you through the feature in detail.

The Replace Data Set feature is quite easy and intuitive to use. To replace a dataset in an existing project with another dataset, simply right-click the dataset and select the "Replace Data Set" option. Note that the Replace Data Set option is available only for non-joined datasets. Here is a quick snapshot that shows how to invoke this option:


This opens a prompt window that lets users select or map the columns of the existing dataset to columns from the new dataset. Oracle Analytics identifies the columns used in the project (visualizations and custom calculations) and prompts the user to map only those columns to relevant columns in the new dataset. Here is a quick snapshot of the prompt window:






Once the user clicks the Replace button, all the existing visualizations are updated with metric and dimension columns from the new dataset, without any manual intervention. In this case we are replacing the Sample Order Lines dataset with the Expenses dataset. This is how the canvases look with the Expenses dataset (with a filter that removes data for 2016, as we do not have data for the entire year):



Another important capability of this feature: if the user would like to replace a column in a dataset with another column from the same dataset, they can use the Replace Data Set option, select the same dataset, and then map the column to be replaced to the new column. In this example, to replace Business Unit with Expense Type, the user maps the Business Unit column to the Expense Type column.
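Conceptually (an illustration, not an OAC API), Replace Data Set amounts to applying a column mapping to every column reference in the project's visualizations and calculations. Apart from Business Unit and Expense Type, the names below are made up:

```python
column_mapping = {
    "Sales":         "Expense Amount",  # hypothetical metric in the new dataset
    "Order Date":    "Expense Date",    # hypothetical date column mapping
    "Business Unit": "Expense Type",    # the same-dataset swap described above
}

def remap(expression, mapping):
    """Rewrite a calculation expression using the new column names."""
    for old, new in mapping.items():
        expression = expression.replace(old, new)
    return expression

print(remap("SUM(Sales) BY Business Unit", column_mapping))
# -> SUM(Expense Amount) BY Expense Type
```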

To summarize, the Replace Data Set feature lets users treat a project as a template and apply the same analysis to other datasets with a few clicks, without having to manually re-create the entire project, which is tedious and time-consuming.

Here is a quick video tutorial that demonstrates the Replace Data Set feature:
 

Are you an Oracle Analytics customer or user?

We want to hear your story!

Please voice your experience and provide feedback with a quick product review for Oracle Analytics Cloud!