Massive and automated migrations of dataviz tools

Migrating a dataviz platform: a high-risk project!
 
 
How do you decommission an outdated dataviz technology that has become too static, too expensive, or incompatible with the target architecture, especially in the Cloud?

How do you move serenely toward tomorrow's tools, the ones business lines are calling for, often in the Cloud?

{openAudit} enables near-automated migrations between dataviz technologies, saving enormous amounts of time and avoiding harmful regressions, particularly in the business rules.
 
Strong similarities between platforms
Most dataviz tools have two things in common:
  1. A semantic layer that interfaces between IT and the business,
  2. A dashboard editor.
 
These theoretical similarities allowed us to define an overall process for moving quickly to the target.
 
Three major steps stand out:
 
1 - Dynamically retrieve source platform metadata with probes and parsers
 

  • {openAudit} directly parses the files to retrieve the embedded intelligence: the structure of the dashboards and the semantic layer, if there is one;
  • {openAudit} also accesses the repository of the source solution to keep IDs consistent across the different "objects" (semantic layer, queries, dashboards, instances, others);
  • an {openAudit} probe retrieves the logs of the various dataviz tools to identify usage: which dashboards are opened, how often, and by whom (see the sketch below).
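
As a rough illustration of this first step, here is a minimal sketch of such a parser and probe in Python. The XML export layout and the log-line format are invented for the example; each real dataviz technology needs its own dedicated parser, and this is not {openAudit}'s actual code.

    # Hypothetical dashboard export and access log, for illustration only.
    import re
    import xml.etree.ElementTree as ET
    from collections import Counter

    def parse_dashboard(xml_text: str) -> dict:
        """Extract the dashboard's structure: its ID, its queries, and the
        semantic-layer objects each query references."""
        root = ET.fromstring(xml_text)
        return {
            "id": root.get("id"),
            "name": root.get("name"),
            "queries": [
                {"id": q.get("id"),
                 "objects": [o.get("ref") for o in q.findall("object")]}
                for q in root.findall("query")
            ],
        }

    # Assumed log line: "2024-03-12 10:41:07 user=jdoe dashboard=D42"
    LOG_LINE = re.compile(r"user=(?P<user>\S+) dashboard=(?P<dash>\S+)")

    def usage_stats(log_lines):
        """Count opens per dashboard and the distinct users behind them."""
        opens, users = Counter(), {}
        for line in log_lines:
            m = LOG_LINE.search(line)
            if m:
                opens[m["dash"]] += 1
                users.setdefault(m["dash"], set()).add(m["user"])
        return opens, users

    sample = ('<dashboard id="D42" name="Sales"><query id="Q1">'
              '<object ref="Revenue"/><object ref="Region"/></query></dashboard>')
    print(parse_dashboard(sample))
    opens, users = usage_stats(["2024-03-12 10:41:07 user=jdoe dashboard=D42"])
    print(opens, {d: len(u) for d, u in users.items()})

Keeping IDs consistent across the parsed objects is what later makes it possible to rebuild the lineage end to end.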


Output example: data lineage in the dataviz layer (1')
 
2 - Harmonize the source platform
 
 
{openAudit} surfaces a number of facts about the platform in place:
  • How many queries are there?
  • Which queries are used most?
  • Which data is really useful?
  • Which dashboards are large and complex?
  • How many dashboards are replicated?
  • How many folders are there, with how many associated documents?
  • How many instances are there?
  • Which heavy dashboards slow the platform down?
  • How many data sources are there, and are they all used?
  • etc.

This inventory makes it possible to "clean" the source platform beforehand, optimizing it so that the hypertrophy of the source system is not needlessly propagated to the target system.
It also allows prioritization, keeping the target platform down to the essentials (a minimal triage sketch follows).
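
To make the triage concrete, here is a minimal sketch assuming the inventory has already been reduced to per-dashboard usage figures. The thresholds, field names and classification are invented for the example, not {openAudit} output:

    # Hypothetical pre-migration triage over the inventory, for illustration.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Dashboard:
        name: str
        opens_last_year: int
        last_opened: date
        query_count: int

    def triage(dash: Dashboard, today: date) -> str:
        """Drop dead weight, flag heavy dashboards for a redesign review,
        migrate the rest as-is."""
        if dash.opens_last_year == 0 or today - dash.last_opened > timedelta(days=365):
            return "drop"        # unused for a year: do not propagate it
        if dash.query_count > 20:
            return "review"      # large and complex: simplify before migrating
        return "migrate"

    inventory = [
        Dashboard("Sales KPI", 1200, date(2024, 3, 1), 4),
        Dashboard("Legacy P&L", 0, date(2021, 6, 2), 35),
        Dashboard("Ops cockpit", 80, date(2024, 2, 15), 28),
    ]
    for d in inventory:
        print(d.name, "->", triage(d, today=date(2024, 3, 12)))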
 
Output example: cleaning a dataviz platform (1')
 
3 - Migrate: go through a pivot model

1. {openAudit} creates an additional, technology-agnostic abstraction layer (semantic layer, dashboard templates, etc.) that serves as a pivot and can be shared across several technologies.

2. Starting from this pivot layer, {openAudit} dynamically rebuilds the templates, queries, semantic layer, etc., in the target technology (see the sketch below).
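
Here is a minimal sketch of the pivot idea: one reader per source lifts the parsed export into an agnostic model, and one writer per target emits that model in the target tool's format. The pivot schema and the target JSON layout below are invented for illustration; a real migration covers far more object types (variables, filters, formatting, ...):

    # Hypothetical pivot model and target generator, for illustration only.
    import json
    from dataclasses import dataclass, field

    @dataclass
    class PivotQuery:
        name: str
        sql: str                  # the business rule, carried over verbatim

    @dataclass
    class PivotDashboard:
        name: str
        queries: list = field(default_factory=list)

    def from_source(source: dict) -> PivotDashboard:
        """Source-specific reader: lift a parsed export into the pivot model."""
        return PivotDashboard(
            name=source["name"],
            queries=[PivotQuery(q["name"], q["sql"]) for q in source["queries"]],
        )

    def to_target(pivot: PivotDashboard) -> str:
        """Target-specific writer: emit the pivot in the target tool's format
        (a made-up JSON spec standing in for the real target here)."""
        return json.dumps({
            "title": pivot.name,
            "datasets": [{"id": q.name, "query": q.sql} for q in pivot.queries],
        }, indent=2)

    source_export = {"name": "Sales", "queries": [
        {"name": "rev_by_region",
         "sql": "SELECT region, SUM(rev) FROM sales GROUP BY region"},
    ]}
    print(to_target(from_source(source_export)))

Because every source and target plugs into the same pivot, adding a new technology means writing one reader or one writer, not one converter per pair of tools.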

Conclusion
 
90% of the migration can be carried out automatically, letting engineering teams focus on what ultimately matters: the intelligence contained in the dashboards, i.e. the business rules.
 
