| Status | Pending Stakeholder Review |
|---|---|
| Owner | |
| Stakeholders | |
Purpose
This document outlines the SyWay Program approach to data migration and readiness to move to SAP S/4HANA. It establishes an operational framework to ensure data is clean, reliable, structured and available at go-live.
The objectives are:
To plan, govern and control data migration activities from legacy systems to the SAP S/4HANA platform.
To define scope, dependencies, roles, risks and timelines aligned with cutover planning.
To ensure business engagement and ownership in all data quality and validation activities.
To meet global regulatory, operational and integration requirements with third-party systems.
To create the foundation for data quality and governance practices that extend beyond go-live.
Background
The migration to SAP S/4HANA is a core enabler of business transformation and digital integration across Syensqo operations. Accurate and high-quality data is critical for the success of this initiative as it directly impacts the seamless execution of the core business processes, user adoption, reporting accuracy and legal compliance.
This data approach takes into consideration the need to:
Standardize disparate legacy system data models into a unified global standard.
Ensure operational continuity during cutover by preloading critical data and validating business readiness.
Enable phased go-lives while managing cross-system data dependencies.
Use repeatable and scalable tools and methods that support global standardization efforts.
Key steps include:
Inventory and classification of all data objects
Definition of transformation rules and mappings
Execution of data profiling, cleansing, de-duplication and enrichment
Mock load cycles for reconciliation and process testing
Final cutover execution including business validation before and after data loads and audit trails.
Data Migration Scope
The scope of data migration encompasses all master data, open transactional data and selected historical records required to ensure business continuity, legal compliance and readiness at the point of cutover and after. Data will be migrated from multiple SAP ECC source systems and legacy third-party applications into a standardized SyWay environment.
Data Sources
Data will be extracted from a range of legacy systems that currently support Syensqo’s global operations. These sources span SAP and non-SAP applications and include structured and semi-structured data repositories. The source systems are segmented across regions, functions, and business units and must be accessed in a secure, controlled manner to support data profiling, transformation, and validation activities.
Primary Data Sources include:
SAP ECC systems – multiple instances
Third-party legacy applications
Locally managed applications
Document management systems
Data Collection Templates – when data is unavailable in source systems.
Data will first be cleansed in these source systems, following the data cleansing approach detailed below.
Data Targets
The target environment for the migration is a consolidated SAP S/4HANA landscape, designed to serve as Syensqo’s global ERP platform. Data will be loaded into fully configured clients aligned to business scope and validation cycles. The data migration approach accommodates both shared global environments and separate instances for United States and China.
Primary Data Targets include:
SAP S/4HANA Production Rest of World (S4P) – a dedicated operational system for most business units.
SAP S/4HANA US Instance (S4A) – a dedicated system to meet localization, regulatory and compliance requirements specific to U.S. operations.
SAP S/4HANA China Instance (S4C) – a China-specific environment designed to comply with Chinese regulatory and data residency requirements, including support for local integrations (e.g. Golden Tax).
SAP S/4HANA Parallel Run (S4R) – a dedicated instance to support parallel run activities.
SAP S/4HANA Quality Assurance (S4U) – a dedicated instance for User Acceptance Testing (UAT).
SAP S/4HANA Integration (S4I) – a dedicated instance for System Integration Testing (SIT).
SAP S/4HANA Development (DEV) – a dedicated instance for initial load testing, transformation logic validation and mock migration rehearsals.
SAP S/4HANA Data / Training (S4T) – a dedicated instance supporting user training and business simulation activities.
Data Dependencies
The data dependencies diagram will be used to ensure the right data load sequence, such that dependent object loads may begin once the corresponding predecessor objects have been technically loaded with 100% success and have passed initial data verification. The diagram will also be used to define the Mock plans, balancing data integrity with load efficiency.
Link - Data Dependencies Diagram
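The sequencing rule above can be sketched in code. The following is a minimal illustration, not the program's tooling: the object names are hypothetical, and the real predecessor map would be derived from the Data Dependencies Diagram. A topological sort guarantees each object appears only after all of its predecessors, mirroring the rule that dependent loads begin once predecessor objects are fully loaded and verified.

```python
from graphlib import TopologicalSorter

# Hypothetical predecessor map: each object lists the objects that must
# be loaded and verified before it (real values come from the diagram).
dependencies = {
    "Material Master": set(),
    "Customer Master": set(),
    "Bill of Materials": {"Material Master"},
    "Open Sales Orders": {"Material Master", "Customer Master"},
}

# static_order() yields each object only after all of its predecessors,
# giving a valid end-to-end load sequence.
load_order = list(TopologicalSorter(dependencies).static_order())
print(load_order)
```

The same structure also surfaces circular dependencies early: `TopologicalSorter` raises a `CycleError` if the diagram contains a loop, which would otherwise only be discovered mid-load.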
Data Migration Process
The data migration process follows a structured and repeatable approach to extract, transform and load data into SAP S/4HANA and other non-SAP systems. The process is enabled by a specialized data cleansing and migration tool, Syniti Migrate, with the SAP S/4HANA Migration Cockpit used when required.
Cleansed data is extracted from legacy systems or taken from the Data Collection Templates, transformed to match the S/4HANA target structure using automated mapping rules, and loaded following the data dependency diagram sequence.
The "Load Early, Load Often" approach will be used, ensuring repeatability through early and frequent mock loads with cross-functional validation at each stage.
Data Extraction
Data extraction from Syensqo legacy systems will be executed using Syniti Migrate, a platform designed specifically to streamline and automate the end-to-end data extraction process.
During the extract phase legacy data will be pulled from multiple source systems into a centralized source data staging environment. This staging layer is consumed by Syniti Migrate, which performs both the extraction and transformation of the data in accordance with the rules and mappings defined during the design phase.
In scenarios where legacy data is missing, incomplete or not system-managed, required data will be manually constructed or collected by Syensqo business users, using predefined Data Collection Templates aligned with the approved data definitions.
This tool-driven approach has been integral in maintaining data integrity, auditability and repeatability across Syensqo’s global migration waves.
Data Transformation
Data transformation is centrally managed through the Syniti Migrate platform, using its integrated tools to Map and Transform to ensure data from legacy systems is accurately and consistently prepared for the SAP S/4HANA environment.
All transformation logic is fully automated within the Syniti Migrate platform, in accordance with the defined conversion approach and documented within the respective conversion functional specifications. Transformation execution is sequenced immediately prior to pre-load validations to ensure consistency with the latest configuration and to maintain data integrity throughout the load cycle.
Data prepared using Data Collection Templates (DCTs) generally does not require structural transformation as the templates are purpose-built to match the target S/4HANA design. However, when reference values such as material numbers, asset IDs or cost centers differ between legacy systems and the target configuration, cross-reference tables are applied to ensure accurate translation and alignment of these identifiers within the S/4HANA environment.
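The cross-reference step described above can be illustrated with a small sketch. The table contents and field names here are invented for illustration; in practice the cross-reference tables are maintained within Syniti Migrate. The key design point is to fail loudly on unmapped values, so gaps in the cross-reference tables surface during mock loads rather than at cutover.

```python
# Hypothetical cross-reference table: legacy material numbers mapped
# to their S/4HANA target numbers.
material_xref = {"LEG-000123": "MAT-900001", "LEG-000456": "MAT-900002"}

def apply_xref(record, xref, field):
    """Translate one legacy identifier to its target value; raise on
    unmapped values so cross-reference gaps are caught before the load."""
    value = record[field]
    if value not in xref:
        raise KeyError(f"No cross-reference entry for {field}={value!r}")
    # Return a new record rather than mutating the input row.
    return {**record, field: xref[value]}

dct_row = {"material": "LEG-000123", "plant": "FR01", "quantity": 10}
print(apply_xref(dct_row, material_xref, "material"))
```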
Data Load
The data load phase marks the final step of the migration lifecycle, where validated, transformed and approved data is transferred into the SAP S/4HANA environment. Data loads are executed and controlled through the Syniti Migrate platform, with the SAP S/4HANA Migration Cockpit. Together, these tools enable the end-to-end load process, including execution, monitoring and error handling to ensure accuracy, traceability and control.
While the majority of objects will be loaded using standard platform capabilities, custom load programs or direct methods may be required for specific objects or scenarios where standard tools do not provide sufficient flexibility or performance. These custom approaches will follow the same governance and validation controls to maintain consistency and auditability across all data loads.
Load Execution
The data loading process is designed to be streamlined and controlled with minimal transformation applied at this stage. The load execution strictly adheres to predefined sequencing, validation and approvals to ensure a clean and auditable migration into the SAP S/4HANA environments.
The standard load process follows these core steps for each data object and business unit:
Pre-load validation checks are run by the data team to confirm data completeness and structural readiness.
Pre-load validation files are generated and distributed per object, per business unit.
Pre-load approval / sign-off requests are sent to designated business data owners.
Load files are generated and formally released.
Data is loaded into S/4HANA using Migration Cockpit.
Load logs are reviewed to assess technical completion and identify any immediate issues.
Post-load validation checks are executed to confirm accuracy, completeness and integrity in the target system.
Final approval requests are triggered with validation responsibilities assigned to the appropriate data owners and governance teams.
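The gating behaviour implied by the steps above can be sketched as a simple pipeline: each step must succeed before the next runs, so a failed validation or missing approval stops the load for that object and business unit. The step names below are stubs for illustration only, not the actual tooling.

```python
def execute_load(data_object, business_unit, steps):
    """Run the load pipeline for one object and business unit, halting
    as soon as any step fails so later steps never act on bad data."""
    for step in steps:
        ok, detail = step(data_object, business_unit)
        if not ok:
            raise RuntimeError(f"{step.__name__} failed for "
                               f"{data_object}/{business_unit}: {detail}")
    return True

# Stub steps standing in for the real checks, approvals and load calls.
def pre_load_validation(obj, bu):
    return True, ""

def business_sign_off(obj, bu):
    return True, ""

def load_via_migration_cockpit(obj, bu):
    return True, ""

def post_load_validation(obj, bu):
    return True, ""

print(execute_load("Material Master", "FR01",
                   [pre_load_validation, business_sign_off,
                    load_via_migration_cockpit, post_load_validation]))
```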
Load Execution Constraints and Exceptions
While the preferred approach is to load each object in a single run, exceptions may be approved under specific conditions:
High data volumes requiring split loads or parallel processing to meet technical runtime windows
Cutover sequencing that requires objects to be loaded in multiple phases or site-specific batches
Load Dependencies and Sequencing
The Conversion Specification for every data object outlines its upstream dependencies, reflecting both functional logic and technical requirements to ensure proper sequencing during the load process. These inter-object relationships are illustrated in the Data Dependency Diagram to support accurate execution and end-to-end traceability.
Delta Load Strategy
As a general principle, delta loads are avoided unless warranted by high volumes of business-critical changes between mock load cycles and final cutover. In standard scenarios, once a data object has been loaded and signed off, any subsequent changes in legacy systems must be manually replicated.
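Identifying the records that changed in legacy after sign-off is the prerequisite for the manual replication described above. A minimal sketch, assuming each record has a stable key and can be compared as a snapshot (the data shown is invented):

```python
def delta_since_sign_off(signed_off, current):
    """Return legacy records created or changed after object sign-off.
    Under the standard (no delta load) policy, these must be manually
    replicated in the target system."""
    return {key: row for key, row in current.items()
            if signed_off.get(key) != row}

signed_off = {"M-1": {"desc": "Bolt"}, "M-2": {"desc": "Nut"}}
current = {"M-1": {"desc": "Bolt"},          # unchanged since sign-off
           "M-2": {"desc": "Nut, steel"},    # changed after sign-off
           "M-3": {"desc": "Washer"}}        # created after sign-off
print(delta_since_sign_off(signed_off, current))
```

If the volume of such deltas between a mock cycle and cutover is high, that is exactly the signal for requesting an approved delta load instead of manual replication.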
Manual Load Exceptions
Manual loading is only permitted in strictly defined, low-impact scenarios where automation is not feasible or cost-effective:
Retrofit activities where mass changes can be executed via standard SAP transactions
Business-as-usual (BAU) data entry where volume is minimal and aligned with operational timelines
Very low-volume loads requiring less than 30 minutes of effort and not justifying custom tooling
Any manual load scenario must be documented, reviewed and approved as part of the cutover plan to ensure traceability and alignment with data governance standards.
Error Handling and Defect Management
If errors occur at any point in the process, a defect must be logged in TBD. Defects must be investigated, resolved and formally closed before proceeding to the next load step. Error handling is determined by the nature of the object, the load tool in use and the dependencies between records.
SAP S/4HANA Migration Cockpit Loads:
When loading via the Migration Cockpit, any failed records are automatically flagged during the simulation or execution phase. These records must be corrected either at source or within the transformation logic and reloaded through a new load cycle using the same tool.
Load File-Based Errors:
For interdependent records (e.g. transactional data referencing master data), the load will halt upon encountering an error. A new file must be generated containing all impacted records and reloaded once corrected.
For independent records, the load can proceed and a follow-up file containing only the failed records will be created and processed separately.
In all scenarios, data corrections must be made at the source either within legacy systems or within Syniti Migrate. Manual editing of load files is strictly prohibited unless formally requested through a defect and approved by the business.
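The independent-record path can be sketched as follows: failures do not halt the run, but are collected with their error details into a follow-up set that becomes the separately processed file. The load function here is a stub for illustration, not the real load tool.

```python
def load_independent_records(rows, load_fn):
    """Load independent records one by one; failures do not halt the run
    and are collected for a follow-up file processed separately."""
    loaded, follow_up = [], []
    for row in rows:
        try:
            load_fn(row)
            loaded.append(row)
        except ValueError as err:
            # Carry the error detail alongside the record for defect logging.
            follow_up.append({**row, "error": str(err)})
    return loaded, follow_up

# Stub load function standing in for the real load call.
def fake_load(row):
    if not row.get("name"):
        raise ValueError("missing name")

rows = [{"id": 1, "name": "Acme"},
        {"id": 2, "name": ""},
        {"id": 3, "name": "Globex"}]
loaded, follow_up = load_independent_records(rows, fake_load)
print(len(loaded), len(follow_up))  # 2 1
```

For interdependent records the equivalent sketch would instead raise on the first failure, halting the run, since downstream records cannot be loaded safely.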
Data Migration Load Cycles
The “Load Early, Load Often” approach will be a core principle of the SAP S/4HANA data migration strategy. Mock Migrations are not simply technical exercises, they are critical validation cycles that enable teams to test, refine and build confidence in the end-to-end migration process. Powered by the automation and control offered through the Syniti Migrate platform, each mock cycle helps ensure that data is ready, processes are sound and business operations remain uninterrupted at go-live.
Accelerated Load Cycles and Cutover Readiness
By executing mock migrations early and frequently, Syensqo significantly reduces migration cycle times. Repeatable, proven processes minimize rework and allow for effective scheduling of activities, resources and system availability. With each mock migration, critical dependencies are tested, load durations are refined and system performance under realistic data volumes is evaluated. This enables precise cutover planning, better load leveling and minimized disruption during go-live.
Risk Reduction Through Controlled Rehearsals
Mock migrations allow Syensqo to rehearse the complete load process, from transformation and validation to post-load checks and business sign-off. Practicing the full sequence exposes process gaps, integration issues and resource constraints early in the timeline. As a result, risks can be mitigated well in advance of production cutover, reducing uncertainty and improving confidence in delivery.
Continuous Improvement in Data Quality
Each mock cycle contributes to measurable improvements in data quality. As data is progressively cleansed, transformed and validated through mock migrations, stakeholders gain better visibility into the completeness, accuracy and usability of migrated content. Issues can be addressed, priorities adjusted and functional alignment strengthened with every cycle. This ensures that when Syensqo enters User Acceptance Testing (UAT) and System Integration Testing (SIT), high-quality, business-representative data is available to validate both the system and its processes.
In summary, "Load Early, Load Often" supported by structured Mock Migrations is key to de-risking Syensqo’s S/4HANA cutover and delivering trusted, high-quality data that is fully aligned with business needs from day one.
Migration Schedule
As part of Syensqo’s structured SAP S/4HANA migration approach, a series of Mock Migrations are planned to validate the end-to-end data conversion process, test system readiness, and support iterative improvement of data quality and load performance.
| Mock Migration Stage | Environment | Duration |
|---|---|---|
| Mock Load 1 – SIT | S/4HANA Integration (S4I) | TBD |
| Mock Load 2 – UAT | S/4HANA Quality (S4U) | TBD |
| Mock Load 3 – Parallel Run | S/4HANA Parallel Run (S4R) | TBD |
Additional Load Requirements
Beyond the core mock migration cycles, additional targeted loads will be executed. Data cleansing loads will support Master Data Scenarios within a dedicated data client, ensuring that end-user enablement and simulation activities are performed with clean, relevant and business-ready data. This approach is critical to building user confidence and validating real-world usability of the SAP S/4HANA system prior to go-live.
Special Requirements
The data migration approach must support the ability to extract, transform and load data into separate SAP S/4HANA instances for China and the United States, in alignment with Syensqo’s global deployment model. Migration rules, validation logic and load sequences must be configurable and repeatable across multiple migration waves to support phased go-lives and country-specific requirements.
Additional requirements for China and the US will be further defined and validated during the detailed design phase.
Team and Deliverables
Functional Team
Deliverables
The Functional Team is responsible for ensuring that business process requirements are accurately reflected in the target data model. They define master data standards, validate mapping logic and ensure that transformation rules align with functional design.
Key Deliverables:
Master Data Standards
Target Data Models
Mapping Review and Approval
Functional Validation of Transformation Logic
Roles
Functional Consultant
Data Team
Deliverables
The Data Team is responsible for coordinating all data-related activities across the business, functional and technical workstreams. This includes overseeing data cleansing efforts to ensure legacy data meets the quality standards required for migration and driving data construction activities where new or restructured data sets must be created to align with the SAP S/4HANA target design. The team ensures that master data standards are applied consistently, conversion specifications are accurately developed, and validation activities are planned and executed.
Key Deliverables:
Data Conversion Specifications
Data Cleansing Oversight and Weekly Quality Reporting
Data Conversion Build Plan (in collaboration with Syniti)
Review of Mock Load Results
Data Validation Frameworks and Execution Support
Exception Tracking and Resolution Coordination
Roles
Data Lead/Specialist
Business Team
Deliverables
The Business Team is accountable for ensuring the data is accurate, complete and fit-for-purpose. They own the source data, validate mappings and confirm readiness at each load cycle. Their active participation in cleansing and approval activities is essential to achieving business readiness at go-live.
Key Deliverables:
Data Cleansing, Data Construction and Enrichment
Source-to-Target Mapping Review
Data Validation (Pre and Post Load)
Sign-off for Load Cycles
Roles
Business Data Lead, Business Data Owners, Business SMEs
Third Party Vendor
Deliverables
The data migration partner, Syniti, is responsible for delivering the full Extract, Transform, Load (ETL) capability across the SAP S/4HANA migration lifecycle. Leveraging the Syniti Migrate platform, they will lead the technical execution of all data conversion activities, ensure tool configuration and rule implementation and support the governance and traceability of data movement from source to target systems.
Key Deliverables:
End-to-end ETL design and execution via Syniti Migrate
Detailed Data Conversion Build Plan
Baseline Extracted Data Sets from Legacy Systems
Transformation Logic and Cross-Reference Tables
Reconciliation and Error Reporting Dashboards
Build and Deployment of Load Programs
Execution and Monitoring of Data Loads into SAP S/4HANA
Roles
TBD
Assumptions
The following assumptions define the framework for the data migration approach and serve as the basis for planning, design and execution across all phases of the migration lifecycle.
Data ownership and cleansing accountability sits with the business, supported by the Data Team and functional leads.
Source systems will remain stable and accessible throughout all planned mock and cutover cycles.
Master data standards and transformation rules are frozen ahead of each mock load and are consistently applied across systems and regions.
Data Collection Templates (DCTs) are aligned to the target S/4HANA data model and generally do not require further structural transformation.
Mock migration timelines are maintained and treated as non-negotiable rehearsal checkpoints to validate readiness.
Syniti Migrate will remain the primary platform for managing extraction, transformation and validation.
Custom load programs, where required, will be approved through the formal governance process and will follow the same validation and audit protocols as standard loads.
Security, access and privacy protocols are in place to ensure sensitive data is protected throughout the migration process.
Final load decisions will be made based on successful completion of technical validations and formal business approvals from designated data owners.
Data Migration Risks & Issues
| Risk/Issue | Mitigation Action |
|---|---|
| Access to Syensqo legacy systems | Ensure early connectivity is established between migration tools and Syensqo's legacy systems to support timely data extraction and validation activities. |
| Data quality gaps in legacy systems | Early profiling, weekly cleansing tracking, business engagement and mock migration feedback loops. |
| Late changes to mapping or transformation logic | Freeze rules per mock cycle and enforce change control governance. |
| System or tool limitations during high-volume loads | Parallel processing, load batching and pre-approved exceptions for large-volume objects. |
| Business validation delays | Defined validation windows, escalation process, and visible progress tracking via Syniti Migrate. |
| Failure to accommodate requirements for China or US instances. | Capture and address country-specific needs during detailed design and mock rehearsals. |
| Loading of data specifically for integrations is out of scope for the data migration workstream. Integrations will be executed post-migration | Ensure the integration architecture is designed to support this post-migration activation approach effectively. |
| Historical data will not be migrated, except where specific records are required to support application functionality, meet legislative obligations, or ensure regulatory compliance. | Clearly communicated across all applicable project and business stakeholders. |
Data Validation Process
Data validation will be a structured, tool-enabled process designed to confirm that all migrated data is accurate, complete and aligned with the SAP S/4HANA target design. Leveraging the Syniti Migrate platform, validation is executed across multiple checkpoints and supported by detailed, system-generated validation reports.
The validation process includes both technical and business-facing activities to ensure full traceability and accountability. These activities are coordinated across the data, functional and business teams and occur during both mock cycles and production cutover.
The intention is to start with a broader validation scope in the early mock loads and progressively adjust the scope of data entity validation towards the actual cutover. By that point the business should have greater confidence in the data through its involvement in repeated iterations.
The scope of data validation and sign-off may therefore differ between load cycles.
High Level Process
Pre-Load Validation
Pre-load data validation is the process of verifying and ensuring the accuracy, completeness and consistency of data before it is loaded into SAP.
The different types of pre-load validations are:
- Record Counts (Technical): a count of records and/or a field-by-field comparison of each converted file/table from the source system against the records to be loaded into SAP. This helps trace whether records are “lost” during the migration process.
- Verify Amounts (Technical): totals will be calculated for files/tables from the legacy systems and for the data to be loaded into SAP, and compared.
- Reports / Standard Transaction Codes (Business): certain SAP functional reports and ad hoc data queries will be run against the migrated data and verified by the Business SMEs. This may require custom-built reports or queries in both the legacy systems and SAP.
- Spot Checks / Sampling (Manual Validation): performed on small subsets of data.
Before data is loaded into SAP, preload sign-off is required to confirm that all transformation rules specified in the mapping have been successfully applied. This ensures that the data is correctly prepared for migration.
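The two technical checks, record counts and verify amounts, can be sketched in a few lines. The field names and data below are invented; in practice these comparisons are produced as system-generated validation reports within the migration tooling.

```python
def reconcile(source_rows, load_rows, amount_field):
    """Compare record counts and summed amounts between the source
    extract and the prepared load file (the 'Record Counts' and
    'Verify Amounts' pre-load checks)."""
    src_count, tgt_count = len(source_rows), len(load_rows)
    src_total = sum(r[amount_field] for r in source_rows)
    tgt_total = sum(r[amount_field] for r in load_rows)
    return {
        "record_count": {"source": src_count, "target": tgt_count,
                         "match": src_count == tgt_count},
        "amount_total": {"source": src_total, "target": tgt_total,
                         "match": src_total == tgt_total},
    }

source = [{"doc": "4500001", "amount": 120.0},
          {"doc": "4500002", "amount": 80.0}]
load_file = [{"doc": "4500001", "amount": 120.0},
             {"doc": "4500002", "amount": 80.0}]
print(reconcile(source, load_file, "amount"))
```

A count match with an amount mismatch points at a transformation error; a count mismatch points at records dropped or duplicated during extraction, which is exactly the "lost records" case the check is designed to trace.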
Post-Load Validation
Post-load validation is the process of verifying that data has been correctly loaded into SAP after migration. It ensures that the data remains complete, accurate and functional for business operations.
The final output of the Data Migration Process will be an agreement by the Business that the migrated data is complete and accurate. Data is recognized as acceptable and signed off based on the agreed Success Criteria. The nominated business representative must confirm that the load is complete and will support system functionality and business process. Errors identified must be corrected in the conversion tools to ensure the final cutover to production is tested and predictable.
There will be two stages of Data reconciliation and verification:
- Technical Verification: performed at the end of each load by the Project Team prior to the business verification. Following the load, the Functional Team will confirm that the conversion is successful and will provide a data validation summary to the nominated data approver.
- Business Verification: performed by the Business SMEs and /or data owner. The Business is responsible for data acceptance, which ensures that the business controls the data migrated into the target system.
The validation method may vary based on the data object – ranging from 100% (record-by-record) validation, to random sampling, to record counts and sum totals. The business must document any issues identified in the reconciliation as a defect – either as a program error or data error.
The data approver will confirm results via defined sign-off document for the load cycles.
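Where random sampling is the agreed validation method, a reproducible sample is preferable so that reviewers and auditors can re-derive exactly which records were checked. A minimal sketch, with invented record keys; the sampling rate and seed would be agreed per data object in the Success Criteria:

```python
import random

def sample_for_spot_check(record_keys, rate=0.05, seed=2024):
    """Draw a reproducible random sample of loaded records for business
    spot checks; a fixed seed lets reviewers re-derive the same sample."""
    rng = random.Random(seed)
    size = max(1, round(len(record_keys) * rate))
    return sorted(rng.sample(record_keys, size))

keys = [f"MAT-{n:05d}" for n in range(1, 1001)]
sample = sample_for_spot_check(keys, rate=0.02)
print(len(sample))  # 20
```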
Data Privacy
Data privacy is a critical part of Syensqo’s SAP S/4HANA data migration approach. All activities related to the extraction, transformation, storage, validation and loading of data must adhere to applicable data protection regulations and internal security policies. The migration process has been designed to ensure that personal, sensitive and confidential information is handled with the highest level of care and compliance.
Key Principles
Compliance with global and regional regulations and any applicable local data protection laws relevant to Syensqo’s operations.
Minimization of personal data within migration files and validation reports, limiting the exposure of Personally Identifiable Information (PII) to only what is essential for business continuity and legal compliance.
Data masking applied where required, particularly in non-production environments used for mock migrations, testing and training.
Controlled access to sensitive data, ensuring only authorized users involved in the migration process can view or handle personal data, based on role-based access controls (RBAC).
Secure transfer and storage of data between systems, staging areas and tools using encrypted channels and compliant storage infrastructure.
Auditability and traceability of all data movement and transformations via logging and reporting capabilities within the Syniti Migrate platform.
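The masking principle for non-production environments can be illustrated with a small sketch. This is an assumption-laden example, not the platform's masking feature: the record shape is invented, and a truncated hash is used purely to show deterministic pseudonymization, which preserves referential links across objects because the same input always yields the same token.

```python
import hashlib

def mask_pii(record, pii_fields):
    """Deterministically pseudonymize PII fields for non-production
    clients; identical inputs always produce identical tokens, so
    cross-object references to the same person remain consistent."""
    masked = dict(record)
    for field in pii_fields:
        digest = hashlib.sha256(str(record[field]).encode("utf-8")).hexdigest()[:10]
        masked[field] = f"MASKED-{digest}"
    return masked

employee = {"id": "E-1001", "name": "Jane Doe", "email": "jane.doe@example.com"}
print(mask_pii(employee, ["name", "email"]))
```

Note that truncated hashing is pseudonymization, not anonymization: whether it satisfies a given regulation depends on the legal assessment, not on the technique alone.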
