Please see the SAP Analytics Approach document, section 'Documentation' for more information about the context of this document.

Status

Functional Specification Owner
Stakeholders
Jira Request ID

Jira Development (Build) ID: Enter the Jira development card URL here (use the Jira macro to search and add)

High-Level Specification

| Parameter | Value |
|---|---|
| Application System (Delivery Tool) | Icertis |
| Business Process Reference (L4) | NA |


Functional Overview

The reconciliation mechanism ensures reliability and traceability for custom integrations (e.g., Icertis to CPI) by logging failed messages, managing scheduled retries, tracking retry attempts, and notifying stakeholders of persistent failures. 

Scope and Objectives

Scope

The scope of this development covers the design, implementation, and monitoring of a reconciliation mechanism for custom integrations, such as the Icertis to CPI interface. The solution encompasses:

  • Logging of all failed integration messages in a centralized master data table.
  • Automated, scheduled retry of failed messages, with tracking of each attempt.
  • Escalation via automated notifications to stakeholders if failures persist beyond a defined threshold.
  • Providing visibility to authorized users for monitoring and manual intervention.
  • Ensuring compliance with Syensqo’s security, data privacy, and audit requirements.
  • Supporting reporting and analytics needs as defined in related Jira requests.

The scope includes integration with both internal and external systems, covering all technical, functional, and compliance aspects necessary for reliable and transparent message processing.


Objectives

  • Reliability: Ensure that all failed integration messages are captured, retried, and escalated as needed to minimize data loss and integration gaps.
  • Traceability: Maintain a comprehensive audit trail for each integration attempt, including message content, timestamps, error details, and a record of retries.
  • Timely Escalation: Automatically notify relevant stakeholders when persistent failures occur, enabling prompt manual intervention.
  • User Transparency: Provide authorized users with real-time visibility into the status of integration messages for proactive monitoring and troubleshooting.
  • Compliance: Adhere to Syensqo’s IT security, data privacy, and regulatory requirements throughout the process.
  • Reporting: Enable robust reporting and analytics on integration performance, failure rates, and resolution times to support continuous improvement.

Assumptions

  • Source systems (e.g., Icertis) are responsible for initial message transmission and may perform immediate retries, but scheduled reconciliation is handled separately.
  • The master data table is the single source of truth for failed integration attempts.
  • Scheduled jobs (e.g., every 12 or 24 hours) are configured and maintained by IT operations.
  • Email notification infrastructure is available and configured to alert stakeholders after repeated failures.
  • Users have appropriate access to view failed message statuses via the Icertis UI or other authorized interfaces.
  • All integrations adhere to Syensqo’s security and compliance policies.

Dependencies

  • Reliable connectivity between source systems (Icertis) and the target system.
  • Scheduled batch jobs must run after source systems have completed their integration attempts.
  • The email notification system must be operational for timely stakeholder alerts.
  • Data retention and archiving policies must be defined and adhered to.

Special Requirements

  • Integration with third-party systems (Icertis, CPI) requires secure authentication (e.g., OAuth, certificates).
  • Audit trails must be maintained for all message processing and status changes.
  • System must support optimistic concurrency control (e.g., via ETag).
  • All error messages and notifications must be clear, actionable, and compliant with internal communication standards.
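The ETag-based optimistic concurrency requirement can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual implementation; the row shape and function names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MasterDataRow:
    """Minimal stand-in for a master data record (hypothetical shape)."""
    id: int
    status: str
    etag: datetime  # version stamp used for optimistic concurrency

class ConcurrentUpdateError(Exception):
    """Raised when the row changed between read and write."""

def update_status(row: MasterDataRow, expected_etag: datetime,
                  new_status: str) -> MasterDataRow:
    """Apply an update only if the caller still holds the latest ETag."""
    if row.etag != expected_etag:
        # Someone else modified the row since we read it: reject the write.
        raise ConcurrentUpdateError(f"row {row.id} was modified concurrently")
    row.status = new_status
    row.etag = datetime.utcnow()  # issue a new version stamp
    return row
```

A writer that lost the race must re-read the row and retry its update, which is what keeps concurrent retry jobs and manual interventions from silently overwriting each other.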


Component Diagram


Key Features

  1. Failure Logging

    • When an integration call (e.g., Icertis to CPI) fails, the failed message is logged in a master data table.
    • The log includes details such as message content, timestamp, error details, and a retry count.
  2. Master Data Table

    • Acts as the single source of truth for all failed integration attempts.
  3. Scheduled Retry Job

    • Runs once or twice daily.
    • Picks up failed messages from the master data table and attempts to resend them to the target endpoint.
    • Updates the retry count and status after each attempt.
  4. Retry Count & Failure Notification

    • Each message’s retry count is incremented with every attempt.
    • If a message fails more than 3 times, an automated failure notification email is sent to relevant stakeholders.
    • This ensures timely awareness and manual intervention if needed.
  5. User Visibility

    • Users with access (e.g., via Icertis UI) can view the status of failed integrations directly from the master data.
    • This promotes transparency and allows for proactive monitoring.
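The retry-count and notification rules above can be expressed as a small decision function. A minimal sketch, assuming the 3-attempt threshold from the spec; all names here are illustrative, not the delivered implementation:

```python
MAX_RETRIES = 3  # threshold from the spec: escalate after more than 3 failures

def record_attempt(try_count: int, succeeded: bool) -> tuple[int, str, bool]:
    """Increment the retry counter and decide the new status.

    Returns (new_try_count, new_status, notify_stakeholders).
    """
    new_count = (try_count or 0) + 1
    if succeeded:
        return new_count, "Success", False
    # Escalate once the count exceeds the configured threshold.
    return new_count, "Failed", new_count > MAX_RETRIES
```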

Sequence Diagram

Below is a step-by-step flow of how the reconciliation mechanism operates in real time:

  1. Integration Attempt
    • Icertis (or another system) sends a message to the CPI endpoint.
  2. Failure Detection
    • If the message fails (e.g., due to a network error or endpoint issue), the failure is immediately logged in the master data table with all relevant details.
  3. Immediate Retry (by Source System)
    • The source system (e.g., Icertis) may attempt a quick retry, but this does not replace the scheduled reconciliation process.
  4. Scheduled Reconciliation Job
    • At scheduled intervals (e.g., every 12 or 24 hours), the reconciliation job scans the master data for messages with status “Pending” or “Failed.”
    • The job attempts to resend these messages to the endpoint.
  5. Update Master Data
    • After each retry, the master data is updated:
      • Retry count is incremented.
      • Status is updated (e.g., “Retried,” “Success,” or “Failed”).
      • Last attempt timestamp is recorded.
  6. Failure Escalation
    • If a message’s retry count exceeds 3, the system triggers an automated failure notification email to stakeholders.
  7. User Monitoring
    • Users can access the master data (e.g., via Icertis UI) to view the status of all failed or retried messages.
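The scheduled reconciliation steps above can be sketched as a single job pass. This is an illustrative sketch only; the row shape and the `send`/`notify` callables are hypothetical stand-ins for the CPI call and the e-mail notification:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Optional

@dataclass
class FailedMessage:
    """In-memory stand-in for one master data row (hypothetical shape)."""
    id: int
    payload: str
    status: str = "Failed"
    try_count: int = 0
    last_attempt: Optional[datetime] = None

def run_reconciliation(
    rows: list[FailedMessage],
    send: Callable[[str], bool],              # resend to the target endpoint
    notify: Callable[[FailedMessage], None],  # stakeholder escalation
    max_retries: int = 3,
) -> None:
    """One pass of the scheduled job: retry Pending/Failed rows and escalate."""
    for row in rows:
        if row.status not in ("Pending", "Failed"):
            continue  # only reconcile unresolved messages
        row.try_count += 1
        row.last_attempt = datetime.utcnow()
        if send(row.payload):
            row.status = "Success"
        else:
            row.status = "Failed"
            if row.try_count > max_retries:
                notify(row)  # escalate after more than 3 failed attempts
```

In the real job the rows would be read from and written back to the master data table; the in-memory list simply makes the control flow visible.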

Master Data Attributes/Columns

| Column Name | Data Type | Nullable? | Description/Notes |
|---|---|---|---|
| ID | bigint | No (not null) | Likely the primary key, an automatically incrementing large integer identifier. |
| IntegrationType | nvarchar(150) | No (not null) | Stores the type of integration as a variable-length Unicode string, max 150 characters. |
| RequestMessage | nvarchar(max) | No (not null) | Stores the full request message (e.g., XML, JSON) as a variable-length Unicode string. |
| StartedAt | datetime | Yes (null) | Records the start time of the event. Can be null if not yet started or relevant. |
| CompletedAt | datetime | Yes (null) | Records the completion time of the event. Can be null if not yet completed. |
| AdditionalInfo | nvarchar(max) | Yes (null) | Stores extra information about the event. |
| ResponseMessage | nvarchar(max) | Yes (null) | Stores the response message received. |
| TryCount | int | Yes (null) | Counts how many attempts were made for the integration event. |
| IsSuccess | bit | Yes (null) | A Boolean flag (0 or 1) indicating if the operation succeeded. |
| ExternalIdentifier | nvarchar(400) | Yes (null) | An ID from an external system. |
| ICMIdentifier | nvarchar(400) | Yes (null) | An ID specific to "ICM" (likely an internal system). |
| EntityName | nvarchar(400) | Yes (null) | The current name of the entity being processed. |
| Status | nvarchar(400) | Yes (null) | The current status of the event (e.g., 'Pending', 'Processing', 'Failed'). |
| ETag | datetime | Yes (null) | A timestamp or version indicator often used for optimistic concurrency control. |
| IsActive | int | No (not null) | An integer flag (likely 0 or 1) indicating if the log entry is active. |
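For reference, the table above can be mirrored as a typed record. This is a sketch only, with Python types approximating the SQL Server column types (noted in the comments); the class name and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IntegrationLogRow:
    """Python mirror of the master data table (SQL types in comments)."""
    id: int                                    # ID bigint, not null (primary key)
    integration_type: str                      # IntegrationType nvarchar(150), not null
    request_message: str                       # RequestMessage nvarchar(max), not null
    is_active: int                             # IsActive int, not null (0/1 flag)
    started_at: Optional[datetime] = None      # StartedAt datetime, nullable
    completed_at: Optional[datetime] = None    # CompletedAt datetime, nullable
    additional_info: Optional[str] = None      # AdditionalInfo nvarchar(max), nullable
    response_message: Optional[str] = None     # ResponseMessage nvarchar(max), nullable
    try_count: Optional[int] = None            # TryCount int, nullable
    is_success: Optional[bool] = None          # IsSuccess bit, nullable
    external_identifier: Optional[str] = None  # ExternalIdentifier nvarchar(400), nullable
    icm_identifier: Optional[str] = None       # ICMIdentifier nvarchar(400), nullable
    entity_name: Optional[str] = None          # EntityName nvarchar(400), nullable
    status: Optional[str] = None               # Status nvarchar(400), nullable
    etag: Optional[datetime] = None            # ETag datetime, nullable (concurrency)
```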




Solution Overview

Dimensions & Measures: Requirements View

Dimensions (AKA Characteristics or Entities)

This section defines the level of detail and the business entities by which the key figures and measures will be analyzed. All relevant dimensions—whether used in rows, columns, or as free characteristics—are listed below. Each field is described to ensure clarity and completeness, except where marked as optional.

| Dimension Name | Description | Source Table/Field Name | Mandatory/Optional | Example Value |
|---|---|---|---|---|
| Integration Type | Type of integration event (e.g., Icertis to CPI) | MasterData.IntegrationType | Mandatory | IcertisToCPI |
| Entity Name | Name of the business entity being processed | MasterData.EntityName | Mandatory | Contract |
| External Identifier | Reference ID from an external system | MasterData.ExternalIdentifier | Mandatory | EXT12345 |
| ICM Identifier | Internal system reference ID | MasterData.ICMIdentifier | Mandatory | ICM67890 |
| Status | Current processing status of the integration event | MasterData.Status | Mandatory | Failed |
| Started At | Timestamp when the integration event started | MasterData.StartedAt | Mandatory | 2024-06-01 10:00 |
| Completed At | Timestamp when the integration event completed | MasterData.CompletedAt | Optional | 2024-06-01 10:05 |
| Is Active | Indicates if the log entry is currently active | MasterData.IsActive | Mandatory | 1 |
| Company Code | Organizational unit for accounting (if applicable) | [To be mapped if relevant] | Optional | CS09 |


Security & Authorization

Detail the security requirements for processing this object

Datasphere

- POD Space that this model belongs in

- Dimensions relevant for data access controls / row level security

S/4

- The following objects must have at least one authority check (either standard or custom): custom tables, enhancements, transaction codes, programs, Web Dynpro applications, Core Data Services (CDS) views, Open Data Protocol (OData) services, BI reports, and function modules. (Custom checks should be the exception.)

- All custom tables will be maintained in an appropriate Authorization Group (example: ZFIN, etc.)

- Preferably, the authorization check is done at the data source, such as in CDS views.

Variables

The variables defined in this section will apply to all reports built on this data model. They should therefore be linked to crucial elements such as security and performance (i.e., they should not impact the reusability of the model).


E.g., if a model is secured by company code, include a variable that allows the user to restrict the company code to their authorized values.


If a model contains large volumes of data, restrictions should be applied to reduce the dataset returned.


| Report | Field Name | Mandatory/Optional | Prompt Type (Single Value, Multiple Single Values, Interval, Selection Option, Hierarchy) | Default Value(s) or Restrictions (please provide default value) | Selection Logic |
|---|---|---|---|---|---|
|   |   |   |   |   |   |

Data Processing Logic Considerations

  • Failed messages are logged with all relevant details (request, response, error, timestamps, etc.).
  • Scheduled job queries master data for messages with status “Pending” or “Failed.”
  • Each retry increments TryCount and updates status/timestamps.
  • If TryCount > 3, triggers notification and flags record.
  • Users can query master data for monitoring and reporting.
  • Data joins may be required for enrichment (e.g., user info, integration config).
  • Filters applied for active records (IsActive=1).
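The selection rules above (active records only, unresolved statuses, retry-count threshold) can be captured in one predicate. A sketch with hypothetical names, not the actual query logic:

```python
from typing import Optional

def is_eligible_for_retry(is_active: int, status: str,
                          try_count: Optional[int],
                          max_retries: int = 3) -> bool:
    """Selection predicate applied by the scheduled job (sketch of the rules)."""
    if is_active != 1:
        return False  # only active records are processed
    if status not in ("Pending", "Failed"):
        return False  # resolved messages are skipped
    # Beyond the threshold the record is escalated rather than retried.
    return (try_count or 0) <= max_retries
```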

Detail the business process logic to be applied to the data. It is critical that all combination outcomes are covered and that there is no ambiguity as to how rules should be applied.

Example: Details of unions, table joins, filters, aggregations, parameterization, etc required to build the data model.

History, Plan and Targets

  • Data loads are incremental, capturing new and updated records.
  • Historical data retained per Syensqo policy (e.g., 12 months).
  • Mapping from legacy system identifiers to new system values handled by migration team.

Include details for loading / blending of data with history, plans and targets. N.B. Any mapping from legacy dimension values to new SyWay system values should be done by the data migration team.


Currency and/or Unit Conversions

  • Not applicable unless integration payloads include currency or units; if so, conversions must use standard Syensqo exchange rates and unit mappings.

Currency Conversions: Detail 'from' and 'to' currency source fields ('to' is usually a mandatory parameter), exchange rate type and key date definition.

Unit conversions: Detail 'from' and 'to' unit source fields ('to' is usually a mandatory parameter). For material unit conversions (E.g. PC to EA) detail basis for conversion. I.e. MARM or pack specs (EWM)


Volumetrics

  • Expected records per day: 100–500 (scalable).
  • Long-term growth: 10,000+ records/year.
  • Data retention: 12–24 months, subject to compliance.

Provide volumetrics details: Number of Records, Expected Frequency, Expected Long term Growth and Data Retention Period


Data Load Frequency

  • Scheduled job: Daily (default), can be configured to run twice daily.
  • Near-real-time not required; batch processing is sufficient.
  • Snapshots not required unless for audit purposes.

Should the data be near-real-time, daily, weekly, or monthly? Are snapshots of data required? If so, at what frequency? Consider all source tables.


Performance Considerations

  • Reports should handle up to 10,000 records efficiently.
  • Indexing on key fields (ID, Status, StartedAt) recommended.
  • Scheduled jobs must complete within 30 minutes.

Content Ownership: Reporting & Analytics Consultant

Specify any specific performance factors that need to be taken into consideration during development, e.g., the report must be able to display 10,000 records.


Testing

How to test

  • Use test data to simulate failed, retried, and successful messages.
  • Validate retry logic, status updates, and notification triggers.
  • Test both positive (successful retry) and negative (persistent failure) scenarios.
  • Use Data Analyzer or equivalent tool for data validation.
  • Provide test user accounts with varying access levels.

Please provide some guidance and/or test data to help the developer unit test the report. Please include both positive and negative testing (to validate handling of error situations).

List any considerations essential for application test planning (e.g., test this before ABC along with DEF separate from GHI).

List any standard reports in the source system that can be used to do high level reconciliation e.g. Fiori apps or CDS views. Data validation will be done using the Data Analyzer report that is delivered in conjunction with this data model.

The developer will need to test repeatedly, so where appropriate provide instructions to reverse the actions performed so the test may be run again, or explain how to create new input data for the test. In particular, the developer will need logons for test users representing the various roles within the approval process.

Test Conditions and Expected Results

| ID | Condition | Expected Results |
|---|---|---|
| 1 | Message fails and is logged | Entry created with status "Failed", TryCount=1 |
| 2 | Message retried successfully | Status updated to "Success", TryCount incremented |
| 3 | Message fails >3 times | Notification sent, status remains "Failed", TryCount >3 |
| 4 | User queries failed messages | Only authorized data is visible |
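Conditions 1-3 can be exercised with a minimal, self-contained simulation; the retry logic here is a hypothetical stand-in for illustration, not the delivered implementation:

```python
def simulate(outcomes: list, max_retries: int = 3) -> dict:
    """Replay a sequence of attempt outcomes (True=success) for one message."""
    state = {"status": None, "try_count": 0, "notified": False}
    for ok in outcomes:
        state["try_count"] += 1
        if ok:
            state["status"] = "Success"
            break
        state["status"] = "Failed"
        if state["try_count"] > max_retries:
            state["notified"] = True  # the escalation e-mail would fire here
    return state

# Condition 1: first failure is logged with status "Failed", TryCount=1
assert simulate([False]) == {"status": "Failed", "try_count": 1, "notified": False}
# Condition 2: a later successful retry flips status to "Success"
assert simulate([False, True])["status"] == "Success"
# Condition 3: more than 3 failures triggers the notification
assert simulate([False] * 4) == {"status": "Failed", "try_count": 4, "notified": True}
```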
Testing Considerations / Dependencies

  • Test after integration endpoints are available.
  • Validate email notifications with test recipients.
  • Test UI access for different user roles.

If the development encompasses a user interface, explain how to test it. List any insights as to how this component could be tested most efficiently.

Other Requirements

  • All logs and notifications must be auditable.
  • System must support future integration types with minimal changes.
  • Documentation must be maintained for all configuration and logic.
  • Ensure compliance with Syensqo IT security and data privacy policies.

Description of requirements not covered by topics above  

See also

Insert links and references to other documents which are relevant when trying to understand this decision and its implications. Other decisions are often impacted, so it's good to list them here with links. Attachments are also possible but dangerous as they are static documents and not updated by their authors.


Change log