| Field | Value |
|---|---|
| Status | |
| Owner | The person responsible for driving this decision and documenting it. Type @ to mention people by name. |
| Stakeholders | The business stakeholders involved in making, reviewing, and endorsing this decision. Type @ to mention people by name. |
This document outlines the Syensqo-wide approach to data migration and readiness to move to SAP S/4HANA. It establishes a strategic and operational framework to ensure data is clean, reliable, structured and available at go-live.
The objectives are:
To plan, govern and control data migration activities from legacy systems to the SAP S/4HANA platform.
To define scope, dependencies, roles, risks and timelines aligned with cutover planning.
To ensure business engagement and ownership in all data quality and validation activities.
To meet global regulatory, operational and integration requirements with third-party systems.
To institutionalize data quality and governance practices that extend beyond go-live.
The migration to SAP S/4HANA is a core enabler of business transformation and digital integration across Syensqo operations. Accurate and high-quality data is critical for the success of this initiative as it directly impacts core business processes, user adoption, reporting accuracy and legal compliance.
This data approach takes into consideration the need to:
Harmonize disparate legacy system data models into a unified global standard.
Ensure operational continuity during cutover by preloading critical data and validating business readiness.
Enable phased go-lives while managing cross-system data dependencies.
Use repeatable and scalable tools and methods that support global harmonization efforts.
Key steps include:
Inventory and classification of all data objects
Definition of transformation rules and mappings
Execution of data profiling, cleansing and enrichment
Mock load cycles for reconciliation and process testing
Final cutover execution with validation and audit trails
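The key steps above can be pictured as a simple ordered pipeline. The sketch below is purely illustrative; the step and object names are hypothetical and the real work is carried out in Syniti Migrate, not in code like this:

```python
# Illustrative sketch of the migration lifecycle steps (all names hypothetical).
MIGRATION_STEPS = [
    "inventory_and_classify",
    "define_mappings",
    "profile_cleanse_enrich",
    "mock_load_cycles",
    "final_cutover",
]

def run_pipeline(data_objects):
    """Run each lifecycle step for every in-scope data object, in order."""
    results = {}
    for step in MIGRATION_STEPS:
        # Each step is applied to the full inventory before the next begins.
        results[step] = [f"{step}:{obj}" for obj in data_objects]
    return results

summary = run_pipeline(["materials", "customers", "open_orders"])
```

The point of the sketch is the strict ordering: later steps (mock loads, cutover) only make sense once the earlier inventory and mapping steps have covered every in-scope object.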
The scope of data migration for this global ERP transformation encompasses all essential master data, open transactional data and selected historical records required to ensure business continuity, legal compliance and readiness at the point of cutover. Data will be migrated from multiple SAP ECC source systems, legacy third-party applications and local tools into a harmonized SAP S/4HANA environment.
Provide the sources of the data to be migrated
Describe the target application for the data to be migrated
The data dependencies diagram will be used to ensure Syensqo's data migration timeline remains achievable. Dependent object loads may begin only once the corresponding predecessor objects have been technically loaded with 100% success and have passed initial data verification. This approach supports timely execution across all load waves and is designed to balance efficiency with data integrity.
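The gating rule above can be sketched as a simple readiness check. This is an illustrative model only: the dependency map, object names and status fields are hypothetical, and the actual dependencies are defined in the Data Dependencies Diagram:

```python
# Sketch of the dependency-gating rule: a dependent object may load only when
# every predecessor has loaded with 100% technical success and passed initial
# verification. Object names and the dependency map are hypothetical.
DEPENDENCIES = {
    "customers": [],
    "materials": [],
    "open_sales_orders": ["customers", "materials"],
}

def ready_to_load(obj, status):
    """status maps object -> {'loaded_pct': float, 'verified': bool}."""
    return all(
        status[p]["loaded_pct"] == 100.0 and status[p]["verified"]
        for p in DEPENDENCIES[obj]
    )

status = {
    "customers": {"loaded_pct": 100.0, "verified": True},
    "materials": {"loaded_pct": 98.5, "verified": True},
}
# open_sales_orders stays blocked until materials reaches 100% and is verified.
```

An object with no predecessors (e.g. the hypothetical "customers") is always eligible, which is what allows independent objects to load in parallel at the start of a wave.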
Link - Data Dependencies Diagram
The data migration process follows a structured and repeatable approach to extract, transform and load data into the SAP S/4HANA system. The process is primarily driven by Syniti Migrate, with the SAP S/4HANA Migration Cockpit used as a supporting tool when required.
Data is extracted from legacy systems, transformed to match the S/4HANA target structure using automated mapping rules and loaded following a controlled sequence. Each data object is guided by a Conversion Specification and aligned with the Data Dependency Diagram to ensure correct load order and integration.
The approach emphasizes repeatability and early, frequent mock loads, with cross-functional validation at each stage to support business readiness and cutover precision. This end-to-end process enables a clean, auditable migration into the S/4HANA environment, supporting Syensqo’s global deployment strategy and long-term data governance objectives.
Data extraction from Syensqo legacy systems is performed using Syniti Migrate, a tool designed specifically to streamline and automate the end-to-end data extraction process. This tool has been selected to align with Syensqo’s objective of reducing manual effort, increasing traceability, and accelerating migration readiness.
The adoption of Syniti Migrate offers the following advantages:
Minimizes dependency on legacy system owners by automating data pulls, reducing manual handovers and time-intensive coordination.
Enables both full and delta extractions, supporting iterative load cycles and ensuring data accuracy across multiple mock migrations.
Supports scheduled or on-demand extraction, allowing flexibility to align with validation cycles, mock cutovers and business availability.
Requires minimal to no custom development within the legacy systems, preserving system stability and reducing the risk of disruptions during the extraction phase.
This tool-driven approach has been integral in maintaining data integrity, auditability and repeatability across Syensqo’s global migration waves.
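The full-versus-delta extraction capability mentioned above can be illustrated with a minimal sketch. This is not how Syniti Migrate is implemented; the snapshot shape (primary key mapped to a row fingerprint) is an assumption made purely for illustration:

```python
# Sketch of full vs. delta extraction between mock cycles. Hypothetical record
# shape: a dict keyed by primary key, with values acting as row fingerprints.
def full_extract(source):
    """A full extraction is a complete snapshot of the source data set."""
    return dict(source)

def delta_extract(previous, current):
    """Return only records that are new or changed since the last snapshot."""
    return {
        key: row
        for key, row in current.items()
        if previous.get(key) != row
    }

snap1 = {"MAT-001": "a1", "MAT-002": "b2"}
snap2 = {"MAT-001": "a1", "MAT-002": "b9", "MAT-003": "c3"}
delta = delta_extract(snap1, snap2)  # {"MAT-002": "b9", "MAT-003": "c3"}
```

Delta extraction is what makes iterative mock-load cycles affordable: only records changed since the previous cycle need to be re-pulled and re-validated.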
Data transformation is centrally managed through the Syniti Migrate platform, using its integrated Map and Transform tools to ensure data from legacy systems is accurately and consistently prepared for the SAP S/4HANA environment.
All transformation logic is fully automated within the Syniti Migrate platform, in accordance with the defined conversion approach and documented within the respective conversion functional specifications. Transformation execution is sequenced immediately prior to pre-load validations to ensure consistency with the latest configuration and to maintain data integrity throughout the load cycle.
Data prepared using Data Collection Templates (DCTs) generally does not require structural transformation as the templates are purpose-built to match the target S/4HANA design. However, when reference values such as material numbers, asset IDs or cost centers differ between legacy systems and the target configuration, cross-reference tables are applied to ensure accurate translation and alignment of these identifiers within the S/4HANA environment.
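The cross-reference mechanism described above amounts to a lookup-and-replace on specific identifier fields, with unmapped values surfaced as errors before load rather than failing inside S/4HANA. The sketch below is illustrative only; the field names, legacy values and target values are hypothetical:

```python
# Sketch of cross-reference translation: a legacy identifier in a DCT row is
# remapped to its target S/4HANA value via a lookup table (values hypothetical).
XREF_COST_CENTER = {"CC-LEGACY-100": "1000100", "CC-LEGACY-200": "1000200"}

def apply_xref(row, field, xref):
    """Replace row[field] with its target value; raise if unmapped so the
    gap is caught during preparation rather than at load time."""
    value = row[field]
    if value not in xref:
        raise KeyError(f"No cross-reference for {field}={value!r}")
    return {**row, field: xref[value]}

row = {"material": "MAT-001", "cost_center": "CC-LEGACY-100"}
mapped = apply_xref(row, "cost_center", XREF_COST_CENTER)
```

Raising on an unmapped value, rather than passing the legacy identifier through, is the design choice that keeps cross-reference gaps visible in pre-load validation.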
Data Load
The data load phase marks the final step of the migration lifecycle, where validated, transformed and approved data is transferred into the SAP S/4HANA environment. Data loads are executed and controlled through the Syniti Migrate platform, supported by the SAP S/4HANA Migration Cockpit. Together, these tools enable the end-to-end load process, including execution, monitoring and error handling, to ensure accuracy, traceability and control.
The data loading process is designed to be streamlined and controlled with minimal transformation applied at this stage. The load execution strictly adheres to predefined sequencing, validation and approvals to ensure a clean and auditable migration into the SAP S/4HANA environments.
The standard load process follows these core steps for each data object and business unit:
Pre-load validation checks are run by the data team to confirm data completeness and structural readiness.
Pre-load validation files are generated and distributed per object, per business unit.
Approval requests are sent to designated business data owners.
Load files are generated and formally released.
Data is loaded into S/4HANA using the Migration Cockpit.
Load logs are reviewed to assess technical completion and identify any immediate issues.
Post-load validation checks are executed to confirm accuracy, completeness and integrity in the target system.
Final approval requests are triggered with validation responsibilities assigned to the appropriate data owners and governance teams.
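The standard per-object load workflow above can be summarized as a fixed, audited sequence. The sketch below is a hypothetical model of that sequence; each step is a placeholder for work actually performed in Syniti Migrate or the Migration Cockpit, and the step names are ours, not the tools':

```python
# Sketch of the standard per-object, per-business-unit load workflow.
# Each step name is a hypothetical placeholder; the audit trail records
# every step executed, in order.
LOAD_STEPS = [
    "pre_load_validation",
    "distribute_validation_files",
    "request_owner_approval",
    "release_load_files",
    "load_via_migration_cockpit",
    "review_load_logs",
    "post_load_validation",
    "final_approval",
]

def run_load(obj, business_unit, audit_log):
    """Execute every load step in sequence, appending each to the audit log."""
    for step in LOAD_STEPS:
        audit_log.append((obj, business_unit, step))
    return audit_log

audit = run_load("materials", "BU-EMEA", [])
```

Keeping the steps in a single ordered list makes the "no step skipped, no step reordered" property trivial to verify from the audit trail alone.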
While the preferred approach is to load each object in a single run, exceptions may be approved under specific conditions:
High data volumes requiring split loads or parallel processing to meet technical runtime windows
Cutover sequencing that requires objects to be loaded in multiple phases or site-specific batches
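When a split load is approved for volume reasons, the object is simply partitioned into fixed-size batches that fit the technical runtime window. A minimal sketch, where the batch size is a hypothetical parameter chosen per object:

```python
# Sketch of splitting a high-volume object into batches to fit a technical
# runtime window. The batch size is a hypothetical, per-object parameter.
def split_load(records, batch_size):
    """Yield successive fixed-size batches, preserving record order."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

records = [f"REC-{i:04d}" for i in range(10)]
batches = list(split_load(records, 4))  # 3 batches: 4 + 4 + 2 records
```

Preserving record order across batches matters because split loads must still respect the same dependency sequencing as a single-run load.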
The Conversion Specification for every data object outlines its upstream dependencies, reflecting both functional logic and technical requirements to ensure proper sequencing during the load process. These inter-object relationships are illustrated in the Data Dependency Diagram to support accurate execution and end-to-end traceability.
As a general principle, delta loads are avoided unless warranted by high volumes of business-critical changes between mock load cycles and final cutover. In standard scenarios, once a data object has been loaded and signed off, any subsequent changes in legacy systems must be manually replicated.
Manual loading is only permitted in strictly defined, low-impact scenarios where automation is not feasible or cost-effective:
Retrofit activities where mass changes can be executed via standard SAP transactions
Business-as-usual (BAU) data entry where volume is minimal and aligned with operational timelines
Very low-volume loads requiring less than 30 minutes of effort and not justifying custom tooling
Any manual load scenario must be documented, reviewed and approved as part of the cutover plan to ensure traceability and alignment with data governance standards.
If errors occur at any point in the process, a defect must be logged in ???. Defects must be investigated, resolved and formally closed before proceeding to the next load step. Error handling is determined by the nature of the object, the load tool in use and the dependencies between records.
SAP S/4HANA Migration Cockpit Loads:
When loading via the Migration Cockpit, any failed records are automatically flagged during the simulation or execution phase. These records must be corrected either at source or within the transformation logic and reloaded through a new load cycle using the same tool.
Load File-Based Errors:
For interdependent records (e.g. transactional data referencing master data), the load will halt upon encountering an error. A new file must be generated containing all impacted records and reloaded once corrected.
For independent records, the load can proceed and a follow-up file containing only the failed records will be created and processed separately.
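The two behaviors above, halt-on-error for interdependent records versus continue-and-collect for independent records, can be sketched as follows. The validator and record shapes are hypothetical; the real checks run inside the load tooling:

```python
# Sketch of the two error-handling modes for file-based loads:
#  - interdependent records: halt at the first failure, since later records
#    may reference the record that failed;
#  - independent records: keep loading and collect failures for a follow-up file.
def load_records(records, is_valid, interdependent):
    loaded, failed = [], []
    for rec in records:
        if is_valid(rec):
            loaded.append(rec)
        elif interdependent:
            # Halt: everything not yet loaded goes into the regenerated file.
            return loaded, records[len(loaded):]
        else:
            failed.append(rec)  # continue; failures are reprocessed separately
    return loaded, failed

records = ["ok-1", "bad-2", "ok-3"]
valid = lambda rec: rec.startswith("ok")
```

In the halt case the second return value contains the failed record plus every record after it, which is exactly the content of the regenerated file; in the independent case it contains only the failed records.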
In all scenarios, data corrections must be made at the source either within legacy systems or within Syniti Migrate. Manual editing of load files is strictly prohibited unless formally requested through a defect and approved by the business.
Specify any special requirements or considerations that may impact the data migration process based on specific locations, regulatory compliance or system limitations. Clearly outline any regional or localization requirements such as country-specific data formats, legal reporting obligations or industry standards that must be adhered to (e.g., localization rules for countries like China).
If the data migration involves third-party systems or external data sources, such as Icertis, describe any additional requirements related to data extraction, transformation logic, validation rules or security measures that must be followed.
Identify the data team responsible for executing the migration and outline their key deliverables.
Identify the functional team involved in the project, detailing their deliverables and associated roles to ensure clarity in responsibilities.
Define the roles of each team member within the functional team outlining their responsibilities in relation to the data migration process.
Identify the members of the data team, listing their deliverables and responsibilities associated with the data migration efforts.
Define the specific roles and responsibilities of each member within the data team to ensure accountability and clarity in the migration process.
Identify the business team members engaged in the project, outlining their deliverables and how they contribute to the overall migration effort.
Define the roles and responsibilities of each member within the business team to ensure that all aspects of the migration process are covered and managed effectively.
Outline the deliverables that will be produced by the third party (e.g. tracker, reports, etc.).
Outline the roles of the Third Party
Outline the assumptions that form the basis for this document and the development approach. Ensure the assumptions are realistic, specific and relevant to avoid potential misunderstandings or misalignment.
Describe any risks and issues identified for conversion and the corresponding actions to resolve or mitigate them.
| Risk/Issue | Mitigation Action |
|---|---|
Describe the detailed data validation process, including proof of completeness of validation, accuracy checks and data cleansing validation. Include the data migration process flow.
Clearly describe the measures and protocols necessary to protect sensitive data during the migration process.