Succeed in your Cloud migration by introspecting the source system

Migrating data to the Cloud is a risky undertaking that requires careful organization. And perhaps a little more than that.

 

We believe that the key to the success of these migrations is to truly know the data and processes on the source side. In short, to take a genuine inventory and fix the countless problems before the "move"!

 

A lack of knowledge and understanding of the source platform can seriously compromise a migration project, or even cause it to fail outright. Here is why.

 

All in the Cloud!

Gartner recently reported that the public cloud market has exploded: 41.4% growth in 2022, followed by more than 20% growth in 2023, reaching almost $600 billion!

 

The Cloud plays a role in business growth: to keep up with current business trends, 46% of companies planned to intensify their digital transformation efforts between 2022 and 2023. And when we talk about transformation, we are mostly talking about cloud migrations.

 

Companies no longer have a choice, but the obstacles are numerous. 

 
 
 

 

The winding path to a successful migration

For businesses, the prospect of data migration can be quite daunting:

  • They are told that the process will affect every aspect of their operations, starting with the business itself. A distressing prospect.
  • The people in charge of the migration are told in no uncertain terms that they must also stick to the budget and the schedule, which they know to be largely hypothetical.

 

These demands are so hard to reconcile that many migrations never get off the ground. And with good reason: many migration projects fail.

 

Only 16% of migrations were completed on time and under budget (Scientific Research OpenAccess).

 

Over 80% of migrations fail to be delivered on time and/or exceed budget (ItToday).

 

 

But why do cloud migrations fail?

There are a number of factors that contribute to migration failures. 

Each migration is obviously unique: every company's datasets, architecture and resources are different, so the reasons for failure differ too. But some pitfalls can be generalized.

 

No real introspection of the source system

 

The first common denominator of all migration failures is a lack of understanding of the data and flows within the source architecture.

 

Without granular information about the entire estate that actually needs to be migrated, a number of problems arise:

  • Teams can be significantly disrupted after migration by missing or incomplete data, or worse, by inaccurate data.
  • There may be regulatory violations if certain types of data are moved inappropriately.
  • Transferring irrelevant data and processes can lengthen the migration window, increasing costs and delays.
  • Problems that should have been solved at the source end up being solved in the new environment, taking time and resources away from projects with real added value.

 

Our approach

  • Data usage: analyze how the data is actually used, in order to know which business departments will be affected and, where applicable, through which tools.
  • Define the useful data pipelines: by coupling the data that is actually used to its sources via data lineage, we can separate the wheat from the chaff across the entire source system, making large-scale decommissioning possible upstream of the migration. Often, 50% of the system serves no purpose. The sketch after this list illustrates the idea.
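
To make the idea concrete, here is a minimal sketch of lineage-based decommissioning. The asset names and the edge-list representation of the lineage are purely illustrative assumptions; the principle is simply a backward walk from the endpoints the business actually uses, with anything never reached flagged as a candidate for decommissioning.

```python
from collections import defaultdict

def decommissioning_candidates(lineage_edges, used_endpoints):
    """Return assets that never feed any endpoint the business actually uses.

    lineage_edges  -- iterable of (upstream, downstream) pairs, e.g.
                      ("raw.orders", "dwh.fact_sales")
    used_endpoints -- assets observed in use (dashboards, exports, APIs)
    """
    # Reverse adjacency: for each asset, which assets feed it?
    feeds_into = defaultdict(set)
    all_assets = set()
    for upstream, downstream in lineage_edges:
        feeds_into[downstream].add(upstream)
        all_assets.update((upstream, downstream))

    # Walk backwards from the used endpoints to collect everything they depend on.
    useful, stack = set(), list(used_endpoints)
    while stack:
        asset = stack.pop()
        if asset in useful:
            continue
        useful.add(asset)
        stack.extend(feeds_into[asset])

    # Whatever is never reached is a candidate for decommissioning at source.
    return all_assets - useful

edges = [
    ("raw.orders", "dwh.fact_sales"),
    ("dwh.fact_sales", "bi.sales_dashboard"),
    ("raw.legacy_export", "dwh.tmp_stage_old"),   # feeds nothing that is used
]
print(decommissioning_candidates(edges, {"bi.sales_dashboard"}))
# -> {'raw.legacy_export', 'dwh.tmp_stage_old'} (set order may vary)
```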

 

Once the data and processes that are actually useful in the source system are known, it becomes possible to scope the migration effort and define a realistic timeline.

 

 

Incomplete migrations

 

To migrate, you will need to choose a tool for transferring data to the Cloud that moves the data and processes to the target. The market offers a plethora of them.

 

Simply rewriting the code in the target language will generate countless, highly damaging regressions.

Indeed, proprietary data transformation tools (ETL/ELT) often handle a large share of the transformations, schedulers orchestrate the flows, and so on.

If the migration is to be "As Is", it must therefore cover not only the code but also the ETL jobs and the scheduling, and take care of the dependencies, the integration with the dataviz tools, and so on. It is infinitely more complex, and it is never handled exhaustively.

Our approach

Translating the code is therefore a very limited proportion of the effort!

Our mission is to automatically address the completeness of the source technical stack in order to carry out an "As Is" migration. Well, not entirely "As Is", since we will have taken care to clean and optimize the source system first!

 

To do this, we place probes and parsers on all storage, processing, scheduling and data exposure technologies, with analyses replayed continuously.
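
As a purely illustrative sketch of what such parsers do, here is a toy, regex-based extraction of table-level dependencies from a SQL job definition. The table names are hypothetical, and real probes cover ETL tools, schedulers, storage and dataviz layers far more robustly; the point is only to show how lineage edges can be harvested automatically from the source stack.

```python
import re

# Illustrative only: a toy parser that extracts table-level dependencies
# from SQL job definitions. Real probes span the whole stack and are
# replayed continuously.
TABLE_RE = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)
TARGET_RE = re.compile(r"\bINSERT\s+INTO\s+([\w.]+)", re.IGNORECASE)

def parse_sql_job(sql_text):
    """Return lineage edges (upstream, downstream) found in one SQL job."""
    sources = set(TABLE_RE.findall(sql_text))
    targets = set(TARGET_RE.findall(sql_text))
    return [(src, tgt) for tgt in sorted(targets) for src in sorted(sources - targets)]

job_sql = """
INSERT INTO dwh.fact_sales
SELECT o.id, o.amount
FROM raw.orders o
JOIN raw.customers c ON c.id = o.customer_id
"""
print(parse_sql_job(job_sql))
# -> [('raw.customers', 'dwh.fact_sales'), ('raw.orders', 'dwh.fact_sales')]
```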

Based on the information collected, the migration must then be carried out extremely quickly, by automating all the processes so that the business can use the target platform with confidence as soon as possible.

 

Then we compare the source system (once "cleaned/optimized") with the target system to ensure iso-functionality, i.e. functional equivalence.
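
As an illustration of what such a comparison can look like at table level, here is a minimal sketch. The in-memory rows stand in for real source and target extracts (an assumption made for readability): row counts and an order-insensitive fingerprint must match on both sides before cut-over.

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint of a table: row count plus a digest
    built from the sorted, serialized rows."""
    serialized = sorted(repr(tuple(row)) for row in rows)
    digest = hashlib.sha256("\n".join(serialized).encode()).hexdigest()
    return len(serialized), digest

def compare_tables(source_rows, target_rows):
    """Return True when source and target hold the same data."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)

source = [(1, "Alice", 120.0), (2, "Bob", 75.5)]
target = [(2, "Bob", 75.5), (1, "Alice", 120.0)]   # same data, different order
print(compare_tables(source, target))   # True
```

In practice a check of this kind is run table by table, and any mismatch points to a regression to investigate before the business switches over.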

 

Conclusion 

 

Cloud migrations often fail because they are conducted in a largely manual and empirical manner.

As a result, they drag on over time and rarely achieve the expected results. We propose a two-step methodology (introspect and clean the source system, then automate the migration itself) that lets you reach the target calmly, within tight deadlines and within the announced budget.

To do this, our strong capacity for introspecting legacy systems, built up with many major clients, has allowed us to create a genuine Cloud migration catalyst.

