| Status | Pending Stakeholder Review |
| Owner | |
| Stakeholders | |
Purpose
This document outlines the SyWay Program approach to data migration and readiness to move to new business processes. It establishes an operational framework to ensure data is clean, reliable, structured and available at go-live.
The objectives are:
To plan, govern and control data migration activities from legacy systems to the SAP S/4HANA platform.
To define scope, dependencies, roles, risks and timelines aligned with cutover planning.
To ensure business engagement and ownership in all data quality and validation activities.
To meet global regulatory, operational and integration requirements with third-party systems.
To create the foundation for data quality and governance practices that extend beyond go-live.
Background
The migration to SAP S/4HANA is a core enabler of business transformation and digital integration across Syensqo operations. Accurate and high-quality data is critical for the success of this initiative as it directly impacts the seamless execution of the core business processes, user adoption, reporting accuracy and legal compliance.
This data approach takes into consideration the need to:
Standardize disparate legacy system data models into a unified global standard.
Ensure operational continuity during cutover by preloading critical data and validating business readiness.
Enable phased go-lives while managing cross-system data dependencies.
Use repeatable and scalable tools and methods that support global standardization efforts.
Key steps include:
Inventory and classification of all data objects
Definition of transformation rules and mappings
Execution of data profiling, cleansing, de-duplication and enrichment
Mock load cycles for reconciliation and process testing
Final cutover execution, including business validation before and after data loads, with full audit trails.
Data Migration Scope
The scope of data migration encompasses all master data, open transactional data and selected historical records required to ensure business continuity, legal compliance and readiness at and after the point of cutover. Data will be migrated from multiple SAP ECC source systems and legacy third-party applications into a standardized SyWay environment.
Data Sources
Data will be extracted from a range of legacy systems that currently support Syensqo’s global operations. These sources span SAP and non-SAP applications and include structured and semi-structured data repositories. The source systems are segmented across regions, functions, and business units and must be accessed in a secure, controlled manner to support data profiling, transformation, and validation activities.
Primary Data Sources include:
SAP ECC systems – multiple instances
Third-party legacy applications
Locally managed applications
Document management systems
Data Collection Templates – when data is unavailable in source systems.
Data will first be cleansed in these source systems, following the data cleansing approach detailed below.
Data Targets
The target environment for the migration is a consolidated SAP S/4HANA landscape, designed to serve as Syensqo’s global ERP platform. Data will be loaded into fully configured clients aligned to business scope and validation cycles. The data migration approach accommodates both shared global environments and separate instances for the United States and China.
The following provides a high-level overview of the planned target landscape.
Primary Target Systems:
SAP S/4HANA Production Rest of World (S4P) – a dedicated operational system for most business units.
SAP S/4HANA US Instance (S4A) – a dedicated system to meet localization, regulatory and compliance requirements specific to U.S. operations.
SAP S/4HANA China Instance (S4C) – a China-specific environment designed to comply with Chinese regulatory and data residency requirements, including support for local integrations (e.g. Golden Tax).
SAP S/4HANA Parallel Run (S4R) – a dedicated instance to support parallel run activities (e.g. comparing payroll, inventory balances or financial postings between legacy and S/4HANA systems).
SAP S/4HANA Quality Assurance (S4U) – a dedicated instance for User Acceptance Testing (UAT).
SAP S/4HANA Integration (S4I) – a dedicated instance for System Integration Testing (SIT).
SAP S/4HANA Development (DEV) – a dedicated instance for initial load testing, transformation logic validation and mock migration rehearsals.
SAP S/4HANA Data / Training (S4T) – a dedicated instance supporting user training and business simulation activities.
In addition to the SAP S/4HANA systems, there will be secondary target systems where data will also need to be migrated. Data loads into these systems will follow the same governance and validation approach to ensure consistency and readiness across all platforms.
Data Dependencies
The data dependencies diagram will be used to ensure the right data load sequence, such that dependent object loads may begin once the corresponding predecessor objects have been technically loaded with 100% success and have passed initial data verification. The diagram will also be used to define the Mock plans, balancing data integrity with load efficiency.
Link - Data Dependencies Diagram
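As an illustration of how the diagram drives sequencing, the sketch below derives a load order from a dependency map using a topological sort. The object names and dependencies are illustrative placeholders, not the actual SyWay diagram content.

```python
# A minimal sketch: deriving a load sequence from object dependencies
# with a topological sort. Names and dependencies are illustrative.
from graphlib import TopologicalSorter

# Each key is a data object; its set lists the predecessor objects that
# must be technically loaded and verified before it can start.
dependencies = {
    "Material Master": set(),
    "Business Partner": set(),
    "Bill of Materials": {"Material Master"},
    "Routing": {"Material Master"},
    "Production Version": {"Material Master", "Bill of Materials", "Routing"},
    "Purchasing Info Record": {"Material Master", "Business Partner"},
    "Open Purchase Orders": {"Material Master", "Business Partner"},
}

print(list(TopologicalSorter(dependencies).static_order()))
# In practice an object would only be released for load once every
# predecessor reports 100% technical success and passes verification.
```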
Data Cleansing Approach
A significant amount of data must be migrated to the target systems. Traditionally, data migration follows an object-by-object approach, where completeness and accuracy are assessed at the individual data object level. While this method ensures that each object meets its required standards, it fails to provide a holistic view of data readiness for critical business processes.
The following are some of the challenges with the traditional approach:
Siloed Data Validation
Data completeness and accuracy are measured only at the object level (e.g. materials, customers, vendors).
It does not account for all data dependencies required for a business process to execute successfully.
The organization lacks visibility into whether end-to-end business processes are data-ready.
Delayed Availability of Full Data for Testing
Only fully validated records are loaded into the system.
Since data cleansing takes time, the majority of data becomes available only towards the end of the project, just before Cutover Rehearsals.
Testing is performed on incomplete datasets, leading to issues only being identified late in the project.
When the system is finally tested with a full dataset, any defects uncovered are time-consuming to fix, increasing project risk.
To overcome these challenges, the new data approach ensures that data completeness and accuracy are assessed at the business process level rather than the object level.
This approach involves:
Defining critical business processes and identifying the data scenarios required for their execution.
Breaking down each data scenario into its data objects and their respective data views.
Measuring progress based on the readiness of data scenarios, rather than just individual data objects.
Another key improvement in this data approach is enabling early testing with full datasets, even if the data is not yet fully cleansed.
Instead of waiting for fully cleansed data, default values can be used for incomplete records in early test cycles.
Test environments have full data sets earlier in the project, enabling end-to-end process validation with realistic data volumes.
By the time cutover rehearsals begin, the system has already been tested under conditions closer to real-world operations, reducing late-stage surprises.
By shifting to a scenario-based approach and ensuring early data availability, this strategy provides the organization with early visibility into data completeness and accuracy while also enabling more effective system testing throughout the implementation lifecycle.
Definitions
This section explains the data object concepts, their definitions and relationships.
Master Data Objects
Master Data Objects hold a collection of information relevant to specific business areas. These objects are the foundation of key business transactions and operations in SAP.
Examples of Master Data Objects:
Materials (Products, raw materials, finished goods)
Customers (Buyers, retailers, business partners)
Vendors (Suppliers, third-party service providers)
Employees (HR and payroll-related records)
Each Master Data Object consists of various attributes that define its characteristics and behaviors in different business processes.
Master Data Views
Master Data Views represent specific functional characteristics of a Master Data Object, organized to support business functions.
For example, a Customer may have multiple views, such as:
Sales View (for order processing, pricing, and customer relationship management)
Accounting View (for billing, payments, and financial reconciliation)
Similarly, a Material may have:
Procurement View (for purchasing, supplier interactions, and cost tracking)
Logistics View (for storage, transportation, and inventory management)
Master Data Views help ensure that different departments access only the relevant data needed for their operations.
Master Data Scenarios
Master Data Scenarios describe the business contexts where multiple Master Data Objects are linked together. These scenarios capture the dependencies between different data elements needed to enable a complete business process.
For example, a Production Material Scenario may include:
Material Master Data
Bill of Materials (BOM) / Recipe
Routing (defines production steps and sequences)
Production Versions (variants of production processes)
Work Centers (locations where production activities take place)
Another example is a Procurement Scenario, which may include:
Supplier Master Data
Purchasing Contracts and Info Records
Material Pricing Conditions
Supplier Quotas and Procurement Rules
Data Completeness and Accuracy Approach
To ensure early visibility of data completeness and accuracy, data readiness will be tracked at multiple levels:
Data View Level: to determine if individual master data views are complete.
Data Object Level: to ensure all relevant views of a given data object are complete.
Business Scenario Level: to measure the readiness of data within a business process context.
Tracking mechanisms will focus on:
Completeness: to ensure all required data objects exist and are fully populated.
Accuracy: to validate whether the data is correct and can support critical business processes.
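As a minimal illustration of this multi-level tracking, the sketch below rolls completeness up from data views to objects to business scenarios. All object, view and scenario names, and the completeness flags, are invented for the example.

```python
# A minimal sketch: rolling completeness up from data views to objects
# to business scenarios. All names and completeness flags are invented.
views = {
    ("Material Master", "Basic Data"): True,
    ("Material Master", "Sales View"): True,
    ("Material Master", "Accounting View"): False,
    ("Business Partner", "Sales View"): True,
    ("Business Partner", "Accounting View"): True,
}

scenarios = {"To Sell": ["Material Master", "Business Partner"]}

def object_ready(obj: str) -> bool:
    # An object is complete only when every one of its views is complete.
    return all(done for (o, _), done in views.items() if o == obj)

def scenario_ready(name: str) -> bool:
    # A scenario is ready only when every object it depends on is ready.
    return all(object_ready(obj) for obj in scenarios[name])

for obj in sorted({o for o, _ in views}):
    print(f"{obj} ready: {object_ready(obj)}")
print("To Sell ready:", scenario_ready("To Sell"))  # False: Accounting View open
```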
Data Cleansing Loads
The data will be loaded in regular cycles (mini load cycles) well in advance of the mock phases, ensuring early availability for analysis, validation, and refinement. These incremental data loads will allow teams to identify and address potential data quality issues before the formal mock cycles.
As part of these cycles, structured reporting will be conducted to evaluate data completeness at three key levels:
Data View Level
Data Object Level
Data Scenario Level
In addition to completeness checks, mini validation cycles will be performed to confirm data accuracy and integrity. These validations will help detect inconsistencies, missing values, and misconfigurations early in the process, reducing risks before the major mock cycles.
A comprehensive list of all scenarios, completion criteria, and validation criteria is available in the subsequent section.
The guiding principles for the data load approach are:
Automation: wherever possible, data load tools will be automated to reduce manual effort and errors.
Dedicated Data Clients: new dedicated data clients will be created in the development system for data loads.
Incremental Load Cycles: data will be loaded in cycles based on a predefined frequency into the dedicated client.
Reporting & Monitoring: reports will be generated and published after each load cycle to track completeness and accuracy.
Mini Testing Cycles: small-scale testing cycles will be conducted based on loaded data (e.g., Costing scenarios).
Full Load Cycles: a few full load cycles will be conducted where missing, non-critical data fields will be replaced with default values to enable testing with complete datasets.
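The default-value principle from the last bullet can be sketched as follows; the field names, defaults and criticality rules are illustrative assumptions, not agreed SyWay standards.

```python
# A minimal sketch: substituting defaults for missing, non-critical fields
# so full-volume test loads can proceed. Field names, defaults and the
# criticality split are illustrative assumptions.
DEFAULTS = {"transport_group": "0001", "loading_group": "0002"}
CRITICAL = {"material_number", "base_unit"}

def apply_defaults(record: dict) -> dict:
    missing_critical = [f for f in CRITICAL if not record.get(f)]
    if missing_critical:
        # Critical gaps are never defaulted; the record is held back.
        raise ValueError(f"Record blocked, missing critical fields: {missing_critical}")
    filled = dict(record)
    for field, default in DEFAULTS.items():
        if not filled.get(field):
            filled[field] = default  # defaulted values remain flagged for cleansing
    return filled

print(apply_defaults({"material_number": "MAT-1", "base_unit": "KG"}))
```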
Data Scenarios
The data scenarios to which the data approach will be applied are listed below. The detailed analysis used to determine the data scenarios can be found here.
| Sno | End to End | Scenario | Scenario Description | Completion Criteria | Accuracy Criteria |
|---|---|---|---|---|---|
| 1 | S2P | To Buy | Purchase a material with the correct price | Material Master (relevant views) | Load purchase orders with the complete data set and compare with the last purchase order from BAU systems; the price in both systems should match |
| 2 | P2F | To Produce | Manufacture a product | Material Master (relevant views) | Execute a costing run and validate the production data based on the costing validation |
| 3 | P2F | To Transfer | Transfer materials from one company to another, including advanced intercompany transfers | Material Master (relevant views) | Validate intercompany stock transport orders across company codes. Ensure material documents, quantities and batch details match the legacy system. |
| 4 | L2C | To Sell | Sell finished goods or intermediate products to customers / another company code | Material Master (relevant views) | Load sales orders with a complete data set and cross-validate with the billing documents from BAU |
| 5 | R2R | To Cost | Cost a product | All data relevant for the scenarios | Execute a costing run and validate the inventory revaluation |
| 6 | R2R | To Pay (IHB & BCM) | Payment of migrated payables via IHB & BCM | Business Partner (relevant views) | Run IHB and BCM payment cycles using migrated payables. Ensure the payment process executes successfully and the payment status is correct. |
| 7 | R2R | To Collect | Collection of migrated receivables via IHB and without IHB involvement | Business Partner (relevant views) | Simulate and execute receivable collections (with and without IHB). Validate open item clearing, bank postings and subledger reconciliation. |
| 8 | I2M | To Invest | Planning, approval and capitalization of capital projects | Portfolio & Bucket Hierarchy | Create and update capital portfolio items with the mandatory attributes required for ranking, scoring, budget assignment and project approval. Validate execution and capitalization of project expenses via settlements. |
| 9 | A2D | To Maintain | Execute asset maintenance activities | Catalog Code Groups & Codes | Create a notification against the relevant equipment, ensuring all technical object attributes and applicable catalog codes are accurately assigned. Then initiate a work order, incorporate the appropriate task list and verify that the correct materials, labor, external services and associated costs are defined and allocated. |
| 10 | S2S | To Dispose (Waste) | Dispose of production waste and obsolete materials | Material Master (waste) | Post waste movement transactions and ensure correct assignment of waste streams, locations and partners. Validate disposal quantities and confirm correct account assignment. |
Data Migration Process
The data migration process follows a structured and repeatable approach to extract, transform and load data into SAP S/4HANA and other non-SAP systems. The process is enabled by a specialized data cleansing and migration tool called Syniti Migrate, with the SAP S/4HANA Migration Cockpit used when required.
Cleansed data is extracted from legacy systems or taken from the Data Collection Templates, transformed to match the S/4HANA target data structure using automated mapping rules and loaded following the data dependency diagram sequence.
The "Load Early, Load Often" approach will be used to ensure repeatability, early and frequent mock loads with cross-functional validation at each stage.
Data Extraction
Data extraction from Syensqo legacy systems will be executed using Syniti Migrate, a platform designed specifically to streamline and automate the end-to-end data extraction process.
During the extract phase, legacy data will be pulled from multiple source systems into a centralized source data staging environment. The data in this staging layer will be used for the next step in data migration, namely transformation.
In scenarios where legacy data is missing, incomplete or not system-managed, required data will be manually constructed or collected by Syensqo business users, using predefined Data Collection Templates aligned with the approved data standards.
Data Transformation
Data transformation will be centrally managed through the Syniti Migrate platform, using its integrated Map and Transform tools to ensure data from legacy systems is accurately and consistently prepared for load.
All transformation logic is fully automated within the Syniti Migrate platform, in accordance with the defined conversion approach and documented within the respective conversion functional specifications. Transformation execution is sequenced immediately prior to pre-load validations to ensure consistency with the latest configuration and to maintain data integrity throughout the load cycle.
Data prepared using Data Collection Templates (DCTs) generally does not require structural transformation as the templates are purpose-built to match the target data standard. However, when reference values such as material numbers, asset IDs or cost centers differ between legacy systems and the target configuration, cross-reference tables will be used to ensure accurate translation and alignment of these identifiers within the target system.
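A minimal sketch of such a cross-reference lookup is shown below; the table contents and identifier formats are invented for illustration.

```python
# A minimal sketch: translating legacy identifiers through a cross-reference
# table during transformation. Table contents and ID formats are invented.
xref_cost_center = {
    # (source system, legacy id) -> target id
    ("ECC_EU", "4711"): "EU01-4711",
    ("ECC_US", "4711"): "US01-4711",
}

def translate(source_system: str, legacy_id: str) -> str:
    try:
        return xref_cost_center[(source_system, legacy_id)]
    except KeyError:
        # Unmapped values are raised as defects, never guessed at load time.
        raise KeyError(f"No cross-reference for {source_system}/{legacy_id}")

print(translate("ECC_EU", "4711"))  # EU01-4711
```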
Data Load
The data load phase marks the final step of the migration lifecycle, where validated, transformed and business approved data is transferred into the target systems. Data loads are executed and controlled using the Syniti Migrate platform enabling the end-to-end load process, including execution, monitoring and error handling to ensure accuracy, traceability and control.
Load Execution
The load execution will strictly adhere to predefined sequencing, validation and approvals to ensure a clean and auditable migration into the target systems.
The standard load process follows these core steps for each data object and business unit:
Pre-load validation checks executed by the Syniti/data team to confirm data completeness and structural readiness.
Pre-load validation files generated by the Syniti team and distributed per object to the data team and business data owners.
Pre-load approval / sign-off by the designated business data owners.
Load files generated by the Syniti team.
Data loaded by the Syniti team into the target systems.
Load log reviews by the Syniti team to assess technical completion and identify any issues.
Post-load validation checks by the Syniti/data team to confirm accuracy, completeness and integrity in the target system.
Post-load validation files generated by the Syniti team and distributed per object to the data team and business data owners.
Post-load approval / sign-off by the designated business data owners.
Load Execution Constraints and Exceptions
While the preferred approach is to load each object in a single run, exceptions may be approved under specific conditions:
High data volumes requiring split loads or parallel processing to meet technical runtime windows
Cutover sequencing that requires objects to be loaded in multiple phases or site-specific batches
Load Dependencies and Sequencing
The Conversion Specification for every data object outlines its upstream dependencies, reflecting both functional logic and technical requirements to ensure proper sequencing during the load process. These inter-object relationships are illustrated in the Data Dependency Diagram to support accurate execution and end-to-end traceability.
Delta Load Strategy
As a general principle, delta loads will be avoided unless warranted by high volumes of business-critical changes between mock load cycles and final cutover. In standard scenarios, once a data object has been loaded and signed off, any subsequent changes in legacy systems must be manually replicated.
Manual Load Exceptions
Manual loading will only be permitted in strictly defined, low-impact scenarios where automation is not feasible or cost-effective:
Retrofit activities where mass changes can be executed via standard SAP transactions
Business-as-usual (BAU) data entry where volume is minimal and aligned with operational timelines
Very low-volume loads requiring less than 30 minutes of effort and not justifying custom tooling
Any manual load scenario must be documented, reviewed and approved as part of the cutover plan to ensure traceability and alignment with data governance standards.
Error Handling and Defect Management
If errors occur at any point in the process, a defect must be logged in the test tool. Defects must be investigated, resolved and formally closed before proceeding to the next load step. Error handling will be determined by the nature of the object, the load tool in use and the dependencies between records.
SAP S/4HANA Migration Cockpit Loads:
When loading via the Migration Cockpit, any failed records will be automatically flagged during the simulation or execution phase. These records must be corrected either at source or within the transformation logic and reloaded through a new load cycle using the same tool.
Load File-Based Errors:
For interdependent records (e.g. transactional data referencing master data), the load will halt upon encountering an error. A new file must be generated containing all impacted records and reloaded once corrected.
For independent records, the load can proceed and a follow-up file containing only the failed records will be created and processed separately.
In all scenarios, data corrections must be made at the source, either within the legacy systems or within Syniti Migrate. Manual editing of load files is strictly prohibited unless formally requested through a defect and approved by the business.
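The follow-up file handling for independent records might look like the sketch below; the CSV layout, key field and helper function are illustrative assumptions, not Syniti Migrate functionality.

```python
# A minimal sketch: after a load run with failed independent records, copy
# only the failures into a follow-up file for reload. Layout is illustrative.
import csv

def write_followup_file(load_file: str, failed_keys: set, followup_file: str) -> int:
    """Write a new load file containing only the records that failed."""
    count = 0
    with open(load_file, newline="") as src, \
         open(followup_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["record_key"] in failed_keys:
                writer.writerow(row)
                count += 1
    return count

# Example (hypothetical file names and keys):
# write_followup_file("vendors_load.csv", {"V100", "V203"}, "vendors_retry.csv")
```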
Data Migration Cycles - Mocks
The “Load Early, Load Often” principle is core to the data migration approach. Mock Migrations are not simply technical exercises; they are critical validation cycles that enable teams to test, refine and build confidence in the end-to-end migration process. Powered by the automation and control offered through the Syniti Migrate platform, each mock cycle will help ensure that data is ready, processes are sound and business operations remain uninterrupted at go-live.
Accelerated Load Cycles and Cutover Readiness
By executing mock migrations early and frequently, SyWay will significantly reduce migration cycle times. Repeatable, proven processes will minimize rework and allow for effective scheduling of activities, resources and system availability. With each mock migration, critical dependencies will be tested, load durations refined and system performance evaluated under realistic data volumes. This will enable precise cutover planning, better load leveling and minimized disruption during go-live.
Risk Reduction Through Controlled Rehearsals
Mock migrations will allow the complete rehearsal of the load process, from transformation and validation to post-load checks and business sign-off. Practicing the full sequence will expose process gaps, integration issues and resource constraints early in the timeline. As a result, risks can be mitigated well in advance of production cutover, reducing uncertainty and improving confidence in delivery.
Continuous Improvement in Data Quality
Each mock cycle contributes to measurable improvements in data quality. As data is progressively cleansed, transformed and validated through mock migrations, stakeholders gain better visibility into the completeness, accuracy and usability of migrated content. Issues can be addressed, priorities adjusted and functional alignment strengthened with every cycle. This ensures that when Syensqo enters System Integration Testing (SIT) and User Acceptance Testing (UAT), high-quality, business-representative data is available to validate both the processes and system configuration.
In summary, "Load Early, Load Often" supported by structured Mock Migrations is key to de-risking the cutover and delivering trusted, high-quality data that is fully aligned with business needs from day one.
Migration Schedule
As part of Syensqo’s structured data migration approach, a series of Mock Migrations are planned to validate the end-to-end data conversion process, test system readiness, and support iterative improvement of data quality and load performance.
| Mock Migration Stage | Duration | Data Validation | Group |
|---|---|---|---|
| Mock Load 1 – SIT | 1.5 Months | Project | 1 & 2 |
| Mock Load 2 – UAT | 1 Month | Project & Business | 1 & 2 |
| Mock Load 3 – Parallel Run | 3 Weeks | Project & Business | 1 & 2 |
| Mock Load 4 - Cutover Rehearsal | 2 Weeks | Project & Business | 1 |
| Mock Load 5 - Cutover Rehearsal | 1 Week | Project & Business | 1 |
| Actual Cutover Load | 4 Days | Project & Business | 1 |
| Mock Load 6 - Cutover Rehearsal | 2 Weeks | Project & Business | 2 |
| Mock Load 7 - Cutover Rehearsal | 1 Week | Project & Business | 2 |
| Actual Cutover Load | 4 Days | Project & Business | 2 |
Following the same data migration approach, migration schedules will be defined for the other releases.
A Mock Closure Report will be prepared after each mock, including comparisons of data volumes and ETL durations to capture data migration cycle improvements.
Special Requirements
The data migration approach must support the ability to extract, transform and load data into separate SAP S/4HANA instances for China and the United States, in alignment with Syensqo’s global deployment model. Migration rules, validation logic and load sequences must be configurable and repeatable across multiple migration waves to support phased go-lives and country-specific requirements, including restricted access to specific data objects.
Team and Deliverables
Functional Team
The Functional Team is responsible for ensuring that business process requirements are accurately reflected in the target data model. They define Master Data Standards, validate mapping logic and ensure that transformation rules align with functional design.
Key Deliverables and Activities:
Master Data Standards
Target Data Models
Mapping Review and Approval
Functional Validation of Transformation Logic
Business Process Scenarios
Completion Criteria based on MDS
Roles
Functional Consultant
Syniti Team
The data migration partner, Syniti, is responsible for delivering the full Extract, Transform, Load (ETL) capability across the migration lifecycle. Leveraging the Syniti Migrate platform, the team will lead the technical execution of all data conversion activities, ensure tool configuration and rule implementation and support the governance and traceability of data movement from source to target systems.
Key Deliverables and Activities:
End-to-end ETL design and execution via Syniti Migrate
Detailed Data Conversion Build Plan
Baseline Extracted Data Sets from Legacy Systems
Transformation Logic and Cross-Reference Tables
Reconciliation and Error Reporting Dashboards
Build and Deployment of Load Programs
Execution and Monitoring of Data Loads
Roles
Technical and Development Consultants
Data Team
The Data Team is responsible for coordinating all data-related activities across the business, functional and technical project teams.
Key Deliverables and Activities:
Data Conversion Specifications
Data Cleansing Plan and Tools
Data Cleansing Coordination and Weekly Quality Reporting
Data Conversion Build Plan (in collaboration with Syniti)
Planning and Execution of Mock Activities
Review of Mock Load Results
Exception Tracking and Resolution Coordination: close all cleansing rule defects (in collaboration with Syniti)
Roles
Data Lead/Specialist
Business Team
The Business Team is accountable for ensuring the data is accurate, complete and fit for purpose. They own the source data, validate mappings and confirm readiness at each load cycle. Their active participation in cleansing and in pre-load and post-load approval activities is essential to achieving business readiness at go-live.
Key Deliverables and Activities:
Data Cleansing, Data Construction and Enrichment
Accuracy Criteria for the Business Process Scenarios
Data Validation (Mini Loads)
Source-to-Target Mapping Review
Data Validation (Pre and Post Load)
Signoff for Load Cycles
Roles
Business Data Lead, Business Data Owners, Business SMEs
Assumptions
The following assumptions have been made for the data migration approach and serve as the basis for planning, design and execution across all phases of the migration lifecycle.
Data ownership and cleansing accountability and responsibility rest with the business, supported by the Data Team and functional leads.
Source systems will remain stable and accessible throughout all planned mock and cutover cycles.
Master data standards and conversion functional specifications will be "checked in" ahead of each mock load and consistently applied across systems and regions.
Data Collection Templates (DCTs) are aligned to the target data model and generally do not require further structural transformation.
Mock migration timelines are non-negotiable rehearsal checkpoints to validate readiness.
Syniti Migrate will remain the primary platform for managing extraction, transformation, load and validation.
Custom load programs, where required, will be approved through the formal governance process and will follow the same validation and audit protocols as standard loads.
Security, access and privacy protocols are in place to ensure sensitive data is protected throughout the migration process.
Final load decisions will be made based on successful completion of technical validations and formal business approvals from designated data owners.
Data Migration Risks & Issues
| Risk/Issue | Mitigation Action |
|---|---|
| Access to Syensqo legacy systems | Ensure early connectivity is established between migration tools and Syensqo's legacy systems to support timely data extraction and validation activities. |
| Data quality gaps in legacy systems | Early profiling, weekly cleansing tracking, business engagement and mock migration feedback loops. |
| Late changes to mapping or transformation logic | Freeze rules per mock cycle and enforce change control governance. |
| System or tool limitations during high-volume loads | Parallel processing, load batching and pre-approved exceptions for large-volume objects. |
| Business validation delays | Defined validation windows, an escalation process and visible progress tracking via Syniti Migrate, plus Mock, Cutover Rehearsal and ACO progress reports. |
| Failure to accommodate requirements for China or US instances. | Capture and address country-specific needs during detailed design and mock rehearsals. |
| Loading of data specifically for integrations is out of scope for the data migration workstream; integrations will be executed post-migration. | Ensure the integration architecture is designed to effectively support the post-migration activation approach. |
| Historical data will not be migrated, except where specific records are required to support application functionality, meet legislative obligations or ensure regulatory compliance. | Communicate this clearly across all applicable project and business stakeholders. |
Data Validation Process
Data validation will be a structured, tool-enabled process designed to confirm that all migrated data is accurate, complete and aligned with the target design. Leveraging the Syniti Migrate platform, validation is executed across multiple checkpoints and supported by detailed, system-generated validation reports.
The validation process includes both business-facing and technical activities to ensure full traceability and accountability. These activities are coordinated across the data, functional and business teams and occur during both mock cycles and production cutover.
High Level Process
Pre-Load Validation
Pre-load data validation is the process of verifying the accuracy, completeness and consistency of data before it is loaded into the target system.
The different types of pre-load validations are:
Record Counts (Technical): a count of records and/or a field-by-field comparison between the converted files/tables from the source system and the records to be loaded into the target system. This helps trace whether any records are “lost” during the migration process.
Verify Amounts (Technical): totals will be calculated for the files/tables from legacy systems and for the data to be loaded into the target system, and compared.
Reports / Standard Transaction Codes (Business): certain SAP functional reports and ad hoc data queries will be run against the migrated data and verified by the Business SMEs. This may require custom-built reports or queries in both the legacy and target systems.
Spot Checks / Sampling (Manual Validation): for small subsets of data.
Before data is loaded into the target system, pre-load sign-off is required to confirm that all transformation rules specified in the mapping have been successfully applied. This ensures that the data is correctly prepared for migration.
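A minimal sketch of the record-count and sum-total checks described above is shown below; the source and target record sets are stubbed, whereas in practice they would come from the staging area and the generated load files.

```python
# A minimal sketch of the record-count and sum-total checks described above.
# Source/target record sets are stubbed for illustration.
def reconcile(source_rows, target_rows, amount_field="amount"):
    checks = {
        "record_count": (len(source_rows), len(target_rows)),
        "sum_total": (
            round(sum(r[amount_field] for r in source_rows), 2),
            round(sum(r[amount_field] for r in target_rows), 2),
        ),
    }
    return {name: {"source": s, "target": t, "match": s == t}
            for name, (s, t) in checks.items()}

source = [{"amount": 100.00}, {"amount": 250.50}]
target = [{"amount": 100.00}, {"amount": 250.50}]
print(reconcile(source, target))  # both checks report match: True
```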
Post-Load Validation
Post-Load Validation is the process of verifying that data has been correctly loaded into the target system after migration. It ensures that the data remains complete, accurate and functional for business operations.
The final output of the Data Migration Process will be a formal approval by the Business that the migrated data is complete and accurate. Data is recognized as acceptable and signed off based on the agreed Success Criteria. The nominated business representative must confirm that the load is complete and will support system functionality and business processes. Errors identified must be corrected in the conversion tools to ensure the final cutover to production is tested and predictable.
There will be two stages of Data reconciliation and verification:
Technical Verification: performed at the end of each load by the Project Team, prior to business verification. Following the load, the Functional Team will confirm that the conversion is successful and provide a data validation summary to the nominated data approver.
Business Verification: performed by the Business SMEs and/or data owner. The Business is responsible for data acceptance, which ensures that the business controls the data migrated into the target system.
The validation method may vary based on the data object – ranging from 100% (record-by-record) validation, to random sampling, to record counts and sum totals. The business must document any issues identified in the reconciliation as a defect – either as a program error or data error.
The data approver will confirm results via a defined sign-off document for each load cycle.
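For the sampling-based method mentioned above, a reproducible random sample can be drawn for record-by-record SME review, as in the sketch below; the sample size and fixed seed are illustrative choices.

```python
# A minimal sketch: drawing a reproducible random sample for record-by-record
# business review. Sample size and the fixed seed are illustrative choices.
import random

def sample_for_review(records: list, sample_size: int = 25, seed: int = 42) -> list:
    rng = random.Random(seed)  # fixed seed so the same sample can be re-drawn
    return rng.sample(records, min(sample_size, len(records)))

records = [{"id": f"CUST-{i}"} for i in range(1000)]
for rec in sample_for_review(records, sample_size=5):
    print(rec["id"])  # these records go to the Business SMEs for sign-off
```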
Data Privacy
Data privacy must be considered as part of the SyWay data migration approach. All activities related to the extraction, transformation, storage, validation and loading of data must adhere to applicable data protection regulations and internal security policies. The migration process has been designed to ensure that personal, sensitive and confidential information is handled with the highest level of care and compliance.
Key Principles
Compliance with global and regional regulations and any applicable local data protection laws relevant to Syensqo’s operations.
Minimization of personal data within migration files and validation reports, limiting the exposure of Personally Identifiable Information (PII) to only what is essential for business continuity and legal compliance.
Data masking applied where required, particularly in non-production environments used for mock migrations, testing and training (see the sketch after this list).
Controlled access to sensitive data, ensuring only authorized users involved in the migration process can view or handle personal data, based on role-based access controls (RBAC).
Secure transfer and storage of data between systems, staging areas and tools using encrypted channels and compliant storage infrastructure.
Auditability and traceability of all data movement and transformations via logging and reporting capabilities within the Syniti Migrate platform.
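As an illustration of the masking principle, the sketch below applies deterministic hashing to PII fields so that masked values stay consistent across tables; the field list and hashing scheme are assumptions, not the approved Syensqo masking standard.

```python
# A minimal sketch: deterministic masking of PII fields for non-production
# clients. The field list and hashing scheme are illustrative assumptions.
import hashlib

PII_FIELDS = ("contact_name", "email", "phone")

def mask_value(value: str) -> str:
    # A deterministic hash keeps values consistent across tables while
    # making the original unrecoverable in test and training clients.
    return "MASKED_" + hashlib.sha256(value.encode("utf-8")).hexdigest()[:10]

def mask_record(record: dict) -> dict:
    masked = dict(record)
    for field in PII_FIELDS:
        if masked.get(field):
            masked[field] = mask_value(masked[field])
    return masked

print(mask_record({"partner_id": "BP100", "email": "jane@example.com"}))
```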