| Status | Edited following Approval |
| Owner | |
| Stakeholders | |
Purpose
This document outlines the SyWay Program approach to data migration and readiness to move to new business processes as part of Release 2. It establishes an operational framework to ensure data is clean, reliable, structured and available at go-live.
The objectives are:
To plan, govern and control data migration activities from legacy systems to Ariba and Icertis applications.
To ensure business engagement and ownership in all data quality and validation activities.
To meet global regulatory, operational and integration requirements with third-party systems.
Background
This document outlines the data migration and cleansing approach for Release 2, which focuses on the implementation of the Source to Contract (S2C) process as part of the broader program roadmap. Release 2 is an intermediate deployment that introduces standardised procurement processes and integrates with existing ECC systems and third-party applications such as Convergence.
The primary objective of this release is to establish foundational S2C capabilities that can be reused and scaled during subsequent releases, particularly Release 4 (R4), which will involve a broader S/4HANA transformation. As such, the data migration strategy for Release 2 is designed with reusability and future scalability in mind.
Due to the continued reliance on legacy ECC systems in this phase, there are limitations in the extent of data cleansing that can be performed. Existing data in ECC will be migrated largely in its current state, with only targeted validations and enrichment applied where feasible. Cleansing activities will be focused on data sets required for the new S2C processes and integrations, ensuring functional readiness without major disruption to upstream or downstream systems.
Data Migration Scope
The scope of data migration encompasses all master data, open transactional data and selected historical records required to ensure business continuity, legal compliance and readiness at and after cutover. Data will be migrated from multiple SAP ECC source systems and legacy third-party applications into a standardized SyWay environment.
Data Sources
Data will be extracted from a range of legacy systems that currently support Syensqo’s global operations. These sources span SAP and non-SAP applications and include structured and semi-structured data repositories. The source systems are segmented across regions, functions, and business units and must be accessed in a secure, controlled manner to support data profiling, transformation, and validation activities.
Primary Data Sources include:
SAP ECC systems – multiple instances
Convergence
Contract repositories (other than Convergence)
Data Migration Process
The data migration process follows a structured and repeatable approach to extract, transform and load data into SAP S/4HANA and other non-SAP systems. The process is enabled by a specialized data cleansing and migration tool called Syniti Migrate, with the SAP S/4HANA Migration Cockpit used when required.
Cleansed data is extracted from legacy systems or used from the Data Collection templates, transformed to match the S/4HANA data target structure using automated mapping rules and loaded following the data dependency diagram sequence.
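The extract, transform and load sequence described above can be sketched as a minimal pipeline. All names, field mappings and records below are illustrative assumptions for clarity; Syniti Migrate exposes its own interfaces and this is not its API.

```python
# Illustrative sketch of the extract -> transform -> load sequence.
# Field names and mapping rules are hypothetical examples.

def extract(source_records):
    """Pull legacy records into the source data staging layer (as copies)."""
    return [dict(r) for r in source_records]

def transform(staged, mapping_rules):
    """Apply automated field-mapping rules to match the target structure."""
    return [{target: r.get(source) for target, source in mapping_rules.items()}
            for r in staged]

def load(transformed, target):
    """Append transformed records to the target store, returning the count."""
    target.extend(transformed)
    return len(transformed)

legacy = [{"MATNR": "100-01", "MAKTX": "Bolt M6"}]          # legacy-style record
mapping = {"material_number": "MATNR", "description": "MAKTX"}
target_system = []
loaded = load(transform(extract(legacy), mapping), target_system)
```

In practice each stage is executed and monitored inside Syniti Migrate; the sketch only shows the ordering and hand-offs between stages.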
The "Load Early, Load Often" approach will be used to ensure repeatability, early and frequent mock loads with cross-functional validation at each stage.
Data Extraction
Data extraction from Syensqo legacy systems will be executed using Syniti Migrate, a platform designed specifically to streamline and automate the end-to-end data extraction process.
During the extract phase, legacy data will be pulled from multiple source systems into a centralized source data staging environment. The data in this staging layer will be used for the next step in data migration, namely transformation.
In scenarios where legacy data is missing, incomplete or not system-managed, required data will be manually constructed or collected by Syensqo business users, using predefined Data Collection Templates aligned with the approved data standards.
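Manually collected data must still conform to the approved data standards before it enters the pipeline. A minimal completeness check over a Data Collection Template row might look like the following; the field names and mandatory-field list are illustrative assumptions, not the actual template definition.

```python
# Hypothetical completeness check for Data Collection Template (DCT) rows.
# MANDATORY_FIELDS is an illustrative stand-in for the approved data standard.

MANDATORY_FIELDS = ["supplier_id", "supplier_name", "country"]

def validate_dct_row(row):
    """Return the mandatory fields that are missing or blank in a DCT row."""
    return [f for f in MANDATORY_FIELDS if not str(row.get(f, "")).strip()]

rows = [
    {"supplier_id": "S001", "supplier_name": "Acme", "country": "BE"},
    {"supplier_id": "S002", "supplier_name": "", "country": "US"},
]
errors = {r["supplier_id"]: validate_dct_row(r) for r in rows}
```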
Data Transformation
Data transformation will be centrally managed through the Syniti Migrate platform, using its integrated mapping and transformation tools to ensure data from legacy systems is accurately and consistently prepared for load.
All transformation logic is fully automated within the Syniti Migrate platform, in accordance with the defined conversion approach and documented within the respective conversion functional specifications. Transformation execution is sequenced immediately prior to pre-load validations to ensure consistency with the latest configuration and to maintain data integrity throughout the load cycle.
Data prepared using Data Collection Templates (DCTs) generally does not require structural transformation as the templates are purpose-built to match the target data standard. However, when reference values such as material numbers, asset IDs or cost centers differ between legacy systems and the target configuration, cross-reference tables will be used to ensure accurate translation and alignment of these identifiers within the target system.
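The cross-reference translation mentioned above amounts to a lookup from legacy identifiers to target identifiers, with unmapped values flagged for follow-up rather than silently passed through. The sketch below is a hedged illustration; the cost-center values are invented and the real cross-reference tables live inside Syniti Migrate.

```python
# Sketch of cross-reference (xref) translation for identifiers that differ
# between legacy systems and the target configuration. Values are made up.

cost_center_xref = {"CC-LEGACY-100": "1000100", "CC-LEGACY-200": "1000200"}

def translate(record, xref, field="cost_center"):
    """Replace a legacy identifier with its target equivalent.
    Unmapped values are flagged so they surface as data issues."""
    legacy_value = record[field]
    if legacy_value not in xref:
        return {**record, "xref_error": True}
    return {**record, field: xref[legacy_value], "xref_error": False}

rec = translate({"cost_center": "CC-LEGACY-100"}, cost_center_xref)
```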
Data Load
The data load phase marks the final step of the migration lifecycle, where validated, transformed and business-approved data is transferred into the target systems. Data loads are executed and controlled using the Syniti Migrate platform, which enables the end-to-end load process, including execution, monitoring and error handling, to ensure accuracy, traceability and control.
Load Execution
The load execution will strictly adhere to predefined sequencing, validation and approvals to ensure a clean and auditable migration into the target systems.
The standard load process follows these core steps for each data object and business unit:
Pre-load validation checks executed by the Syniti/data team to confirm data completeness and structural readiness.
Pre-load validation files generated by the Syniti team and distributed per object to the data team and business data owners.
Pre-load approval / sign-off by the designated business data owners
Load files generated by the Syniti team.
Data loaded by the Syniti team into the target systems.
Load log reviews by the Syniti team to assess technical completion and identify any issues.
Post-load validation checks by the Syniti/data team to confirm accuracy, completeness and integrity in the target system.
Post-load validation files generated by the Syniti team and distributed per object to the data team and business data owners.
Post-load approval / sign-off by the designated business data owners
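The gated nature of this sequence, where each step must complete before the next may start, can be modelled as a simple ordered runner. This is a minimal sketch of the control logic only; the step names mirror the list above, and the runner itself is an illustrative assumption, not Syniti functionality.

```python
# Minimal sketch of the gated load sequence: each step must succeed
# before the next one runs, and a failure halts the whole load.

LOAD_STEPS = [
    "pre_load_validation",
    "pre_load_file_distribution",
    "pre_load_sign_off",
    "load_file_generation",
    "data_load",
    "load_log_review",
    "post_load_validation",
    "post_load_file_distribution",
    "post_load_sign_off",
]

def run_load(step_results):
    """Execute steps in order; return completed steps and the halting step."""
    completed = []
    for step in LOAD_STEPS:
        if not step_results.get(step, False):
            return completed, step  # halted: step failed or has not run
        completed.append(step)
    return completed, None

# Example: the first four steps have passed, so the run halts at data_load.
completed, halted_at = run_load({s: True for s in LOAD_STEPS[:4]})
```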
Load Execution Constraints and Exceptions
While the preferred approach is to load each object in a single run, exceptions may be approved under specific conditions:
High data volumes requiring split loads or parallel processing to meet technical runtime windows
Cutover sequencing that requires objects to be loaded in multiple phases or site-specific batches
Load Dependencies and Sequencing
The Conversion Specification for every data object outlines its upstream dependencies, reflecting both functional logic and technical requirements to ensure proper sequencing during the load process. These inter-object relationships are illustrated in the Data Dependency Diagram to support accurate execution and end-to-end traceability.
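Deriving a load order from the Data Dependency Diagram is, in effect, a topological sort: every object loads only after its upstream dependencies. The sketch below illustrates this with invented object names and assumes the dependency graph is acyclic, as a valid diagram must be.

```python
# Sketch of deriving a load sequence from object dependencies
# (a topological sort). Object names are illustrative examples only.

def load_order(dependencies):
    """Order objects so that every upstream dependency loads first.
    Assumes the dependency graph is acyclic."""
    order, seen = [], set()

    def visit(obj):
        if obj in seen:
            return
        seen.add(obj)
        for dep in dependencies.get(obj, []):
            visit(dep)          # load dependencies before the object itself
        order.append(obj)

    for obj in dependencies:
        visit(obj)
    return order

deps = {
    "purchase_contracts": ["suppliers", "materials"],
    "suppliers": [],
    "materials": [],
}
sequence = load_order(deps)
```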
Delta Load Strategy
As a general principle, delta loads will be avoided unless warranted by high volumes of business-critical changes between mock load cycles and final cutover. In standard scenarios, once a data object has been loaded and signed off, any subsequent changes in legacy systems must be manually replicated.
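Deciding whether a delta load is warranted starts with identifying what changed in the legacy system since the signed-off load. A minimal comparison of a signed-off snapshot against the current legacy state might look like this; the records and key field are illustrative assumptions.

```python
# Hedged sketch of detecting deltas between a signed-off snapshot and the
# current legacy state. Record shapes and values are invented examples.

def detect_deltas(snapshot, current, key="id"):
    """Return records that were added or changed since the snapshot."""
    baseline = {r[key]: r for r in snapshot}
    return [r for r in current if r != baseline.get(r[key])]

snapshot = [{"id": "V001", "name": "Vendor A"}]
current = [
    {"id": "V001", "name": "Vendor A (renamed)"},  # changed record
    {"id": "V002", "name": "Vendor B"},            # new record
]
deltas = detect_deltas(snapshot, current)
```

In the standard scenario these deltas would be replicated manually in the target; only high volumes of business-critical deltas would justify an automated delta load.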
Manual Load Exceptions
Manual loading will only be permitted in strictly defined, low-impact scenarios where automation is not feasible or cost-effective:
Retrofit activities where mass changes can be executed via standard SAP transactions
Business-as-usual (BAU) data entry where volume is minimal and aligned with operational timelines
Very low-volume loads requiring less than 30 minutes of effort and not justifying custom tooling
Any manual load scenario must be documented, reviewed and approved as part of the cutover plan to ensure traceability and alignment with data governance standards.
Error Handling and Defect Management
If errors occur at any point in the process, a defect must be logged in the test tool. Defects must be investigated, resolved and formally closed before proceeding to the next load step. Error handling will be determined by the nature of the object, the load tool in use and the dependencies between records.
SAP S/4HANA Migration Cockpit Loads
When loading via the Migration Cockpit, any failed records will be automatically flagged during the simulation or execution phase. These records must be corrected either at source or within the transformation logic and reloaded through a new load cycle using the same tool.
Load File-Based Errors
For interdependent records (e.g. transactional data referencing master data), the load will halt upon encountering an error. A new file must be generated containing all impacted records and reloaded once corrected.
For independent records, the load can proceed and a follow-up file containing only the failed records will be created and processed separately.
In all scenarios, data corrections must be made at the source either within legacy systems or within Syniti Migrate. Manual editing of load files is strictly prohibited unless formally requested through a defect and approved by the business.
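The two file-based error-handling paths above can be summarised as a single splitting rule: interdependent loads halt at the first failure and everything from that point onward goes into the regenerated file, while independent loads continue and only the failed records go into a follow-up file. The sketch below illustrates this rule with invented records; it is not the Syniti Migrate behaviour itself.

```python
# Sketch of file-based error handling. `fails` stands in for the set of
# record IDs rejected by the target system; records are invented examples.

def split_load(records, fails, interdependent):
    """Return (loaded, follow_up) according to the load's dependency type."""
    if interdependent:
        # Halt at the first error: everything from there on is regenerated.
        for i, record in enumerate(records):
            if record["id"] in fails:
                return records[:i], records[i:]
        return records, []
    # Independent records: continue loading, collect failures separately.
    loaded = [r for r in records if r["id"] not in fails]
    follow_up = [r for r in records if r["id"] in fails]
    return loaded, follow_up

recs = [{"id": 1}, {"id": 2}, {"id": 3}]
loaded, follow_up = split_load(recs, fails={2}, interdependent=False)
```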
Data Migration Cycles - Mocks
The “Load Early, Load Often” approach will be a core principle of the data migration approach. Mock Migrations are not simply technical exercises; they are critical validation cycles that enable teams to test, refine and build confidence in the end-to-end migration process. Powered by the automation and control offered through the Syniti Migrate platform, each mock cycle will help ensure that data is ready, processes are sound and business operations remain uninterrupted at go-live.
Accelerated Load Cycles and Cutover Readiness
By executing mock migrations early and frequently, SyWay will significantly reduce migration cycle times. Repeatable, proven processes will minimize rework and allow for effective scheduling of activities, resources and system availability. With each mock migration, critical dependencies will be tested, load durations refined and system performance under realistic data volumes will be evaluated. This will enable precise cutover planning, better load leveling and minimized disruption during go-live.
Risk Reduction Through Controlled Rehearsals
Mock migrations will allow the complete rehearsal of the load process, from transformation and validation to post-load checks and business sign-off. Practicing the full sequence will expose process gaps, integration issues and resource constraints early in the timeline. As a result, risks can be mitigated well in advance of production cutover, reducing uncertainty and improving confidence in delivery.
Continuous Improvement in Data Quality
Each mock cycle contributes to measurable improvements in data quality. As data is progressively cleansed, transformed and validated through mock migrations, stakeholders gain better visibility into the completeness, accuracy and usability of migrated content. Issues can be addressed, priorities adjusted and functional alignment strengthened with every cycle. This ensures that when Syensqo enters System Integration Testing (SIT) and User Acceptance Testing (UAT), high-quality, business-representative data is available to validate both the processes and system configuration.
In summary, "Load Early, Load Often" supported by structured Mock Migrations is key to de-risking the cutover and delivering trusted, high-quality data that is fully aligned with business needs from day one.
Migration Schedule
As part of Syensqo’s structured data migration approach, a series of Mock Migrations are planned to validate the end-to-end data conversion process, test system readiness, and support iterative improvement of data quality and load performance.
| Mock Migration Stage | Duration | Data Validation | Group |
|---|---|---|---|
| Mock Load 1 – SIT | 1.5 Months | Project | 1 & 2 |
| Mock Load 2 – UAT | 1 Month | Project & Business | 1 & 2 |
| Mock Load 3 – Parallel Run | 3 Weeks | Project & Business | 1 & 2 |
| Mock Load 4 - Cutover Rehearsal | 2 Weeks | Project & Business | 1 |
| Mock Load 5 - Cutover Rehearsal | 1 Week | Project & Business | 1 |
| Actual Cutover Load | 4 Days | Project & Business | 1 |
| Mock Load 6 - Cutover Rehearsal | 2 Weeks | Project & Business | 2 |
| Mock Load 7 - Cutover Rehearsal | 1 Week | Project & Business | 2 |
| Actual Cutover Load | 4 Days | Project & Business | 2 |
Following the same data migration approach, migration schedules will be defined for the other releases.
A Mock Closure Report will be prepared after each mock, including comparisons of data volumes and ETL durations to capture improvements across data migration cycles.
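A typical comparison in a Mock Closure Report is the percentage reduction in ETL duration versus the previous cycle. The calculation is trivial but worth fixing in one place; the volumes and durations below are made-up examples, not program figures.

```python
# Illustrative cycle-over-cycle improvement figure for a Mock Closure Report.
# All volumes and durations are invented example values.

def cycle_improvement(previous_hours, current_hours):
    """Percentage reduction in ETL duration versus the previous mock."""
    return round(100 * (previous_hours - current_hours) / previous_hours, 1)

mock_1 = {"records": 1_200_000, "etl_hours": 40.0}
mock_2 = {"records": 1_250_000, "etl_hours": 30.0}
improvement_pct = cycle_improvement(mock_1["etl_hours"], mock_2["etl_hours"])
```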
Special Requirements
The data migration approach must support the ability to extract, transform and load data into separate SAP S/4HANA instances for China and the United States, in alignment with Syensqo’s global deployment model. Migration rules, validation logic and load sequences must be configurable and repeatable across multiple migration waves to support phased go-lives and country-specific requirements.
Assumptions
The following assumptions have been made for the data migration approach and serve as the basis for planning, design and execution across all phases of the migration lifecycle.
Data ownership, along with cleansing accountability and responsibility, rests with the business, supported by the Data Team and functional leads.
Source systems will remain stable and accessible throughout all planned mock and cutover cycles.
Master data standards and Conversion functional specifications will be "checked in" ahead of each mock load and consistently applied across systems and regions.
Data Collection Templates (DCTs) are aligned to the target data model and generally do not require further structural transformation.
Mock migration timelines are non-negotiable rehearsal checkpoints to validate readiness.
Syniti Migrate will remain the primary platform for managing extraction, transformation, load and validation.
Custom load programs, where required, will be approved through the formal governance process and will follow the same validation and audit protocols as standard loads.
Security, access and privacy protocols are in place to ensure sensitive data is protected throughout the migration process.
Final load decisions will be made based on successful completion of technical validations and formal business approvals from designated data owners.
Change log
Workflow history