If you are considering leaving SAP BO.... automation is possible!

 


 

 


SAP had previously announced the end of life of SAP BusinessObjects (BO).

The roadmap has since shifted somewhat: the path forward is now SAP BusinessObjects BI 2025 (currently a code name).

 

However, a number of components will not be carried over to BI 2025, including the Universe Design Tool (used to build .unv universes), multi-source universes, etc. These components will remain in the BI 4.3 release, but their maintenance support will end in 2025.
The rest of the picture is not entirely clear, even though SAP has announced that it wants to focus on its most widely adopted tools.

 

In addition, self-service BI is in high demand among business teams, and IT departments are equally pleased to give those teams more autonomy.

 

That being said, the project looming ahead is monumental, and it rightly worries teams: some of our customers have more than 100,000 BO dashboards in production. The migration could be very long and extremely expensive.

 

With this in mind, Ellipsys has created a migration solution for the dataviz layer, based on the data collected by the {openAudit} solution.

This feature is currently at work for a major customer, in conjunction with our SAP BO layer cleanup/simplification module.

 

In this case, the goal is to migrate thousands of SAP BO dashboards to Looker and Power BI, automating the entire process 😊.

 

Below are some highlights of our methodology for this project:  

 

Automated analysis of the BO platform 

 

 

To analyze the complexity of the reports, we rely on the various tables generated directly by {openAudit} during its daily parsing of the SAP BO platform.

 

We generally collect the following information (a short scoring sketch follows these lists):

 

At the dashboard intelligence level

  • The list of expressions and variables used, and their nesting level;
  • The list of basic BO functions used in these expressions and variables;
  • The sourcing (lineage) of each published element (whether in blocks, filters, input parameters, merged dimensions, alerters, etc.).

 

At the layout level 

  • The number of tabs, nested blocks, sections;
  • The number and type of tables (HTable, VTable, XTable) and charts in the layout;
  • The list of filters in the elements.

 

At the source level

  • The number of data providers and related dimensions;
  • The number of unpublished data provider objects (not used in the published output);
  • The contexts used;
  • The list of prompts used in the reports;
  • Links between data providers or between statements for multi-statement data providers;
  • The definition of custom SQL (overload or freehand SQL). 
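
To give a feel for how this metadata can be exploited, here is a minimal scoring sketch in Python. The field names and weights are purely illustrative assumptions, not {openAudit}'s actual schema:

```python
# Hypothetical sketch: scoring a BO dashboard's migration complexity from
# the kind of metadata listed above. All names and weights are illustrative.

WEIGHTS = {
    "variables": 2,          # expressions/variables (nesting weighted below)
    "merged_dimensions": 3,
    "data_providers": 2,
    "custom_sql": 5,         # overloaded or freehand SQL is the costliest to port
    "tabs": 1,
    "blocks": 1,
}

def complexity_score(meta: dict) -> int:
    """Return a rough migration-effort score for one dashboard."""
    score = sum(WEIGHTS[k] * meta.get(k, 0) for k in WEIGHTS)
    # Deeply nested variables are disproportionately hard to translate.
    score += meta.get("max_nesting_level", 0) ** 2
    return score

dashboards = [
    {"name": "Sales KPI", "variables": 12, "merged_dimensions": 2,
     "data_providers": 3, "custom_sql": 0, "tabs": 4, "blocks": 9,
     "max_nesting_level": 2},
    {"name": "Finance P&L", "variables": 40, "merged_dimensions": 6,
     "data_providers": 7, "custom_sql": 2, "tabs": 12, "blocks": 30,
     "max_nesting_level": 5},
]

for d in sorted(dashboards, key=complexity_score, reverse=True):
    print(f"{d['name']}: score {complexity_score(d)}")
```

Sorting the portfolio by such a score helps decide which dashboards to migrate first and which deserve manual review.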

 

{openAudit} will initially identify the dashboards to migrate and, from them, the list of objects that will necessarily have to be migrated.

 

The idea is to build an optimized semantic layer in the target technology. 

 

From this list, we determine: 

  • The list of fields and therefore useful tables;
  • The list of joins to link these tables;
  • The list of BO contexts to reproduce.

 

This first list then allows us to gauge the complexity of the semantic layer to reproduce in the target (a pruning sketch follows this list):

  • List of impacted derived tables;
  • Nested or linked objects;
  • Aggregated objects (aggregate_aware);
  • The views and source SQL that must be adapted to the target.
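
As an illustration of the pruning step, here is a minimal sketch, assuming a hypothetical field-to-table lineage map and a universe join graph (none of these names come from {openAudit}):

```python
# Hypothetical sketch: pruning a universe down to the objects actually used
# by the dashboards selected for migration. All names are illustrative.

# field -> physical table, as extracted from the (hypothetical) lineage data
FIELD_TABLE = {
    "revenue": "fact_sales",
    "customer_name": "dim_customer",
    "order_date": "dim_date",
}

# join graph of the universe: (left_table, right_table, condition)
JOINS = [
    ("fact_sales", "dim_customer", "fact_sales.cust_id = dim_customer.id"),
    ("fact_sales", "dim_date", "fact_sales.date_id = dim_date.id"),
    ("fact_sales", "dim_product", "fact_sales.prod_id = dim_product.id"),
]

def minimal_model(used_fields: set[str]):
    """Keep only the tables and joins reachable from the used fields."""
    tables = {FIELD_TABLE[f] for f in used_fields}
    joins = [j for j in JOINS if j[0] in tables and j[1] in tables]
    return tables, joins

tables, joins = minimal_model({"revenue", "customer_name"})
print(tables)  # {'fact_sales', 'dim_customer'} — dim_date and dim_product are dropped
print(joins)
```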

 

 

Migration (e.g. to Looker / Power BI) 

Migrating universes 

BO > Looker : {openAudit} reproduces the universe as Looker views and Explores, with all the intermediate steps needed to delegate the merging of BO data provider results to the target database (via Explore overloads). BO objects are redefined as Looker view fields, and the classification of these objects is reproduced in the target. The semantic intelligence is also converted into LookML (nested objects, aggregate awareness, specific SQL, etc.).
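
As a rough illustration of the BO-object-to-LookML mapping described above (this is not {openAudit}'s actual output), a converter might emit something like this:

```python
# Hypothetical sketch: emitting a LookML field block for one BO universe
# object. Object structure and naming are illustrative assumptions.

def bo_object_to_lookml(obj: dict) -> str:
    """Map a BO object (dimension or measure) to a LookML field block."""
    kind = "measure" if obj["type"] == "measure" else "dimension"
    lines = [f"  {kind}: {obj['name']} {{"]
    if kind == "measure":
        lines.append(f"    type: {obj.get('aggregation', 'sum')}")
    lines.append(f"    sql: {obj['select']} ;;")
    lines.append(f"    group_label: \"{obj['class']}\"  # BO class -> group")
    lines.append("  }")
    return "\n".join(lines)

obj = {"name": "total_revenue", "type": "measure", "aggregation": "sum",
       "select": "${TABLE}.revenue", "class": "Sales"}
print("view: sales {")
print("  sql_table_name: fact_sales ;;")
print(bo_object_to_lookml(obj))
print("}")
```

The BO class hierarchy maps naturally onto LookML's group_label, which preserves the universe's folder organization in the Looker field picker.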

 

BO > Power BI : {openAudit} reproduces the universe in an Analysis Services cube that the dashboards can query (a partially centralized solution). For customers working on Azure SQL, it is possible to rebuild the semantic layer in a centralized Tabular model.
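
For the Analysis Services path, a converter could emit a TMSL script to materialize the model. The sketch below is a minimal, hypothetical example; the model, data source, and table names are invented for illustration:

```python
# Hypothetical sketch: building a minimal TMSL createOrReplace script for
# an Analysis Services Tabular model mirroring a BO universe.

import json

def universe_to_tmsl(name: str, tables: list[dict]) -> str:
    """Build a TMSL script defining a bare Tabular model."""
    model_tables = [{
        "name": t["name"],
        "partitions": [{
            "name": f"{t['name']}_part",
            "source": {"type": "query",
                       "query": f"SELECT * FROM {t['source']}",
                       "dataSource": "dwh"},   # assumed data source name
        }],
        "columns": [{"name": c, "dataType": "string",
                     "sourceColumn": c} for c in t["columns"]],
    } for t in tables]
    script = {"createOrReplace": {
        "object": {"database": name},
        "database": {"name": name, "compatibilityLevel": 1500,
                     "model": {"tables": model_tables}},
    }}
    return json.dumps(script, indent=2)

print(universe_to_tmsl("SalesModel",
      [{"name": "Customer", "source": "dim_customer",
        "columns": ["id", "customer_name"]}]))
```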

Migration of dashboards and intelligence

BO > Looker : {openAudit} translates all the useful intelligence of the dashboards into LookML and recreates the links between Looks via smart filters. Each tab of a BO document becomes a Looker dashboard, and the tabs' consistency is preserved by linking the dashboards together in Looker.
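
A minimal sketch of the tab-to-dashboard split, assuming an invented naming scheme and a plain text tile for navigation (LookML dashboard syntax generated from Python):

```python
# Hypothetical sketch: turning the tabs of one BO document into a set of
# LookML dashboards with a navigation menu. Naming scheme is illustrative.

def tabs_to_dashboards(doc: str, tabs: list[str]) -> str:
    out = []
    for tab in tabs:
        menu = " | ".join(
            f"[{t}](/dashboards/{doc}_{t})" for t in tabs if t != tab
        )
        out.append(f"""- dashboard: {doc}_{tab}
  title: "{doc} - {tab}"
  elements:
  - name: navigation
    type: text
    body_text: "Go to: {menu}"
    row: 0
    col: 0
    width: 24
    height: 2
""")
    return "\n".join(out)

print(tabs_to_dashboards("sales_report", ["overview", "detail"]))
```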

 

BO > Power BI : {openAudit} converts the objects into the data model tables of each dashboard and organizes the data prep so as to reproduce the intelligence of the BO documents in DAX and/or M code.

Some optimizations are also implemented to lighten the data prep.
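
To show the flavour of the formula translation (a real converter needs a full BO expression parser), here is a deliberately tiny sketch that maps one BO If() pattern to DAX; the regex and table name are assumptions:

```python
# Hypothetical sketch: translating one simple BO formula pattern into DAX.
# Only illustrates the mapping; real BO expressions need a proper parser.

import re

def bo_if_to_dax(expr: str, table: str) -> str:
    """Translate =If([X] op n;"a";"b") into a DAX IF(...) expression."""
    m = re.match(r'=If\(\[(\w+)\]\s*([<>=]+)\s*(\d+);"(.*?)";"(.*?)"\)', expr)
    if not m:
        raise ValueError("unsupported expression")
    col, op, n, then, alt = m.groups()
    # BO uses ';' as the argument separator, DAX uses ','; BO objects
    # become fully qualified 'Table'[Column] references.
    return f"IF('{table}'[{col}] {op} {n}, \"{then}\", \"{alt}\")"

print(bo_if_to_dax('=If([Revenue]>0;"Positive";"Negative")', "Sales"))
# IF('Sales'[Revenue] > 0, "Positive", "Negative")
```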

Layout migration

BO > Looker : {openAudit} translates the BO layout into LookML, respecting the organization of the components, then reproduces the various tables, sections, and charts using Looker's existing components.
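
As an illustration, the component mapping could look like the following sketch; the BO block names and the chosen Looker visualization types are our assumptions, not a published mapping:

```python
# Hypothetical sketch: mapping BO layout block types to Looker's built-in
# visualization types. Mapping choices are illustrative.

BO_TO_LOOKER_VIZ = {
    "VTable": "looker_grid",    # vertical table -> data grid
    "HTable": "looker_grid",    # horizontal table -> grid (pivoted upstream)
    "XTable": "looker_grid",    # cross table -> grid with pivots
    "VBar":   "looker_column",  # vertical bar chart
    "HBar":   "looker_bar",     # horizontal bar chart
    "Line":   "looker_line",
    "Pie":    "looker_pie",
}

def block_to_element(block: dict) -> dict:
    """Turn one BO layout block into a Looker dashboard element stub."""
    return {
        "name": block["name"],
        "type": BO_TO_LOOKER_VIZ.get(block["type"], "looker_grid"),
        # position translated from the BO absolute layout (illustrative)
        "row": block["y"], "col": block["x"],
        "width": block["w"], "height": block["h"],
    }

print(block_to_element({"name": "sales_by_region", "type": "XTable",
                        "x": 0, "y": 0, "w": 12, "h": 6}))
```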

 

BO > Power BI : {openAudit} converts the BO layout directly into a PBIX file, which is then exported to Azure. The organization of the components is likewise respected, and users benefit from Power BI's native features (dynamic filters, slicers, drill-down, etc.), which enrich the user experience.

Output specifics 

  • Dimensions and measures of BO universes will be organized in the same way in Looker.
  • BO documents will be transformed into several dashboards in Looker: one per BO report (tab), with a menu to navigate between them.
  • Aggregate Aware, dashboard variables and derived SQL are reproduced as-is in Looker.
  • The syntax of the source database is translated into the native syntax of the target DB (usually BigQuery).
  • Since the BO semantic layer cannot be completely centralized in Power BI, its information is pushed down into standalone Power BI dashboards (PBIX files). It is also possible to reproduce the universe in an Analysis Services cube that the Power BI dashboards can query live.
  • The data access strategy in Power BI can be adapted to the desired level of performance and autonomy in the report, by choosing between importing data and querying it directly, and by adjusting how the data is merged (a mode-selection sketch follows this list).
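
As sketched below, such a mode choice can be driven by simple heuristics; the thresholds and table names are illustrative only:

```python
# Hypothetical sketch: picking a Power BI storage mode per table from
# simple heuristics (volume and freshness needs). Thresholds are invented.

def storage_mode(row_count: int, needs_realtime: bool) -> str:
    if needs_realtime:
        return "DirectQuery"  # always read live from the source
    if row_count < 10_000_000:
        return "Import"       # small enough to cache in the model
    return "Dual"             # let Power BI choose per query

for table, rows, rt in [("dim_date", 3_650, False),
                        ("fact_sales", 250_000_000, False),
                        ("stock_ticks", 1_000_000, True)]:
    print(table, "->", storage_mode(rows, rt))
```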
