Reverse engineering, for true data observability

Reverse engineering is "the process of deconstructing a product or system to understand and improve how it works." It is the essential prerequisite for true observability. 

 

Reverse engineering can be applied to various fields. 

 

We are particularly interested in everything related to Information Systems, and in data especially, along with everything that allows data to be used: storage, processing, exposure.

 

Why?

 

Because data is the fastest-growing and most heterogeneous part of information systems, and therefore the most complex.

 

Moves to the Cloud have forced rapid modernization, but they have not slowed this continuous march toward hyper-complexity.

 

Modernization projects, IT-debt reduction efforts, or simply knowledge sharing should all rest on solid data observability, enabled by technical reverse engineering of the systems.


For objective observability, exhaustive reverse engineering

 

You cannot claim to master something that you only half observe.

Reverse engineering of the data system must be exhaustive!

 

This approach should create overall coherence and make the reading of the system technology-agnostic.

It seems essential to us to analyze at least the following five technical stacks:

  • Data inventory: physical data, persisted or in memory, views, reports...
  • Log analysis: to understand data consumption and injection.
  • Schedule parsing: to understand job orchestration.
  • Code reverse engineering: to generate data lineage (a minimal sketch follows this list).
  • Introspection of the data visualization layer: to understand the articulation between technical and business information, and to gather intelligence (business rules).
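
To make the lineage step concrete, here is a deliberately minimal Python sketch of table-level lineage extraction from SQL scripts. The regular expressions and table names are illustrative assumptions, not a production parser: real code must handle CTEs, subqueries, dynamic SQL, and dialect differences.

```python
import re

# Toy table-level lineage extraction (illustrative only): map each INSERT
# target to the tables read in the same statement.
INSERT_RE = re.compile(r"INSERT\s+INTO\s+([\w.]+)", re.IGNORECASE)
SOURCE_RE = re.compile(r"(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)

def extract_lineage(sql_script: str) -> list[tuple[str, str]]:
    """Return (source_table, target_table) edges found in a SQL script."""
    edges = []
    for statement in sql_script.split(";"):
        targets = INSERT_RE.findall(statement)
        sources = SOURCE_RE.findall(statement)
        edges += [(s, t) for t in targets for s in sources if s != t]
    return edges

demo = """
INSERT INTO dwh.sales_agg
SELECT s.store_id, SUM(s.amount)
FROM staging.sales s
JOIN ref.stores r ON s.store_id = r.id
GROUP BY s.store_id;
"""
print(extract_lineage(demo))
# [('staging.sales', 'dwh.sales_agg'), ('ref.stores', 'dwh.sales_agg')]
```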

 

For objective observability, continuous reverse engineering

 

These analyses should be conducted continuously to ensure an objective view of things: what I "observe" must be an exact reflection of reality. 

 

The volumes to be analyzed are often so large that the scope must be narrowed to make these daily analyses feasible, for example by using delta analysis, or CDC ("Change Data Capture").
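
As an illustration, here is a minimal watermark-based delta sketch in Python. It assumes each analyzed object exposes a last_modified timestamp, which is our assumption for the example; log-based CDC would instead capture changes from the database's transaction log rather than scanning timestamps.

```python
from datetime import datetime, timedelta

def extract_delta(rows: list[dict], watermark: datetime) -> tuple[list[dict], datetime]:
    """Return the rows modified since `watermark`, plus the new watermark."""
    changed = [r for r in rows if r["last_modified"] > watermark]
    new_watermark = max((r["last_modified"] for r in changed), default=watermark)
    return changed, new_watermark

# Example: only objects touched since yesterday's run get re-analyzed.
now = datetime.now()
catalog = [
    {"name": "dwh.orders", "last_modified": now},
    {"name": "dwh.stores", "last_modified": now - timedelta(days=30)},
]
delta, new_wm = extract_delta(catalog, now - timedelta(days=1))
print([r["name"] for r in delta])  # ['dwh.orders']
```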

 

The virtues of reverse engineering in a data observability framework

 

Observability and Compliance

With the advent of the General Data Protection Regulation (GDPR), precise mapping and enhanced controls over processing involving personal data are now essential. From this perspective, observability based on continuous process analysis is a facilitator. It allows for the mapping of various data production processes and the identification of processing errors within chains. This is especially true since investigations can be initiated very simply by teams without technical expertise.

 

Observability and Security

Reverse engineering helps identify and correct weaknesses and vulnerabilities in the design or security of data systems. 

If a sensitive technical table with an opaque name propagates its data through the system, and that data ends up being queried by an unauthorized person, no one will ever know.

Having a precise map of flows, for sourcing or impact analysis, helps in this DLP (Data Loss Prevention) context, as sketched below.
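
To illustrate, the impact analysis behind such a DLP check boils down to a traversal of the lineage graph. The edges and table names below are hypothetical.

```python
from collections import deque

# Hypothetical lineage edges (source -> targets), e.g. produced by code
# reverse engineering. Impact analysis = everything reachable downstream.
EDGES = {
    "core.t_cust_42": ["staging.customers"],
    "staging.customers": ["dwh.customer_dim", "export.crm_feed"],
    "dwh.customer_dim": ["bi.churn_report"],
}

def downstream(table: str) -> set[str]:
    """Return every table or report fed, directly or not, by `table`."""
    seen: set[str] = set()
    queue = deque([table])
    while queue:
        for target in EDGES.get(queue.popleft(), []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Where could data from this obscurely named sensitive table leak to?
print(downstream("core.t_cust_42"))
# {'staging.customers', 'dwh.customer_dim', 'export.crm_feed', 'bi.churn_report'}
```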

 

Observability and Governance

Solid reverse engineering provides the entire company with a detailed repository describing data flows and a shared vision of the system's architecture, thus promoting the development of effective data governance and a data quality strategy. This in-depth knowledge of the system is also extremely useful for optimally architecting projects: 

  • By highlighting strategic flows and all their dependencies. 
  • By offering a valuable tool for designing an optimal architecture through the simplification of information systems: mapping the actual uses of information, coupled with the analysis of processing (data lineage), makes it possible to identify unnecessary data flows, a prerequisite for large-scale decommissioning operations (see the sketch after this list).
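
As a sketch of that last point, unnecessary flows can be flagged by crossing lineage edges with observed usage from query logs. The edge list, table names, and 90-day usage window are all illustrative assumptions.

```python
# Cross lineage with observed usage to spot decommissioning candidates.
EDGES = [
    ("staging.orders", "dwh.orders"),
    ("dwh.orders", "bi.orders_report"),
    ("staging.legacy_feed", "dwh.legacy_table"),
]
QUERIED_LAST_90_DAYS = {"bi.orders_report"}  # from log analysis

sources = {s for s, _ in EDGES}
targets = {t for _, t in EDGES}
terminal_outputs = targets - sources          # produced but feeding nothing
dead_ends = terminal_outputs - QUERIED_LAST_90_DAYS
print(dead_ends)  # {'dwh.legacy_table'} -> flow is a decommissioning candidate
```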
 

Observability and Technological Obsolescence

Reverse engineering allows us to bring lower technical layers back to light by unearthing their specifications and sharing them. 

This can help extend the lifecycle of certain tools and reduce costs. For example, if you have a widely shared, intelligible readout of the entire COBOL layer, there may be less urgency to migrate away from it.

 

Observability and Creativity

By letting us draw inspiration from the best practices and innovations of others, observability encourages good practices to be carried forward. Because we only invent what we have forgotten!

By analyzing historical processes, we can draw on old (good) ideas to generate new methods.

 

Observability and Maintenance

Reverse engineering analyzes failures by revealing the root causes of malfunctions. This analysis helps prevent future failures and improve system quality. Addressing the root cause of failure is greatly facilitated by making a map of the information system available to as many people as possible.

 

Observability and Migrations

Solid reverse engineering, based on real processes, provides an understanding of what each of the technologies involved actually manages: ETL, procedural code, schedulers, data visualization tools, etc. This opens the door to re-platforming onto third-party solutions, typically in the Cloud, whether databases or data visualization tools, while integrating the thorny issues of dependencies and security.

 

Example: Migrating SAP BO to Power BI on a flat-rate basis
 

Conclusion 

 

Reverse engineering is proving to be an essential tool for ensuring optimal data observability. 

Ellipsys is a tech company that specializes in reverse engineering complex systems to share understanding, simplify them, and automate technical migrations. 
