
Status: Edited following Approval

Owner:

Stakeholders:

LeanIX Link: SAP Datasphere

Introduction

SAP Datasphere (DSP) is used by Syensqo to extract data from SAP systems. The data is consolidated for SAP reporting and distributed to MS Fabric for non-SAP reporting.

Reporting in DSP is performed using the tightly integrated SAP Analytics Cloud (SAC); see Application Architecture SAP Analytics Cloud.

Both DSP and SAC have recently been incorporated into the larger SAP Business Data Cloud offering, and SAP will probably try to migrate us to the new product when it is ready.

Purpose

The purpose of this document is to describe the architecture required to support the implementation.

Scope & Objectives

The SAP Analytics and Reporting Approach explains what will be implemented, and the SAP Analytics and Reporting Standards details how it will be implemented.

This document explains the landscape and integration of the solution.

Requirements

Requirement Identifier | Requirement Description


Terminology

  • Client: A self-contained, logically separated unit in an SAP system (technical instance based on ABAP Application Server) with separate master data, transactional data, and client-specific configurations. E.g., Client 100.
  • Component: Software modules or add-ons that are installed in the instance and enable a specific function. E.g., Fiori, GTS.
  • Instance: Refers to the entire system, including the software and all technical components (DB, application server, etc.). E.g., S/4HANA Production.
  • SID: A unique three-character identifier for an SAP instance.
  • Environment/Tier: Refers to the systems used for the different stages of the project lifecycle. Each environment serves a distinct purpose and has a dedicated instance to ensure stability and integrity. E.g., Development, QAS.
  • Landscape: Refers to all the environments for an application or the entire project. E.g., S/4HANA landscape, SyWay landscape.

Application Architecture

Architectural Decisions

The table below details the architectural decisions made and the rationale behind them.

Architectural Decision | Description | Rationale
SSL and SNC will be configured for DSP to encrypt web and RFC traffic | Based on the SyWay implementation approach, all data in transit must be encrypted. | Security is vital
Configure SSO for DSP | As part of the SyWay project, a common authentication mechanism (e.g., SAML) will be adopted. | Ease of access and a unified user experience
Seamless planning | To enable seamless planning, both DSP and SAC must be deployed in the same data centre and hosted by the same hyperscaler. | SAP limitation and Syensqo preference
SAC | DSP can only connect to a single SAC tenant. | Tight integration

Application Architecture Design


DSP Details

Customer Number: 3008440

Cloud Provider: MS Azure

Cloud Region: Netherlands

URL: https://syw-ds-dev-eu20.eu20.hcs.cloud.sap

Model: Consumption-based, meaning we can create as many tenants as we need

Application Architecture Components

Component | Description

Data Lake

A dedicated, schema-flexible storage area in SAP HANA Cloud that serves as a repository for raw and archived data.

Optimized for ingesting and storing large volumes of raw data; acts as the “landing” zone before any modelling or transformation takes place.

Data Store

Staging area for cleansed, modelled data with defined structures; holds intermediate results in a data flow, ready for analytics or further modelling.

A Data Builder artefact that captures the result of a transformation flow and writes it to a persistent table.

Premium Outbound Integration

Delivers a lean, high-performance data pipeline from SAP to external object stores without persisting data in SAP Datasphere. It emphasizes speed, cost-efficiency, and governance alignment.

Catalog

We will use the standard catalogue, not the Collibra option.

BW Bridge

No planned usage.

SAP Cloud Connector

The SAP Cloud Connector acts as a reverse-invocation proxy to establish a network connection between SAP RISE systems and SAP BTP services (Integration Suite, API Management, DSP, etc.). Because of its reverse-invoke capability, network traffic originates from the SAP Cloud Connector towards SAP BTP; once the link has been established, data can be exchanged between SAP RISE systems and BTP. HTTPS or RFC protocols are used between the SAP Cloud Connector and S/4HANA, and HTTPS is used between the Cloud Connector and SAP BTP.

To enable outbound internet traffic from SAP RISE, SAP has provisioned a customer gateway server (CGS) with a forward internet proxy installed on it. The CGS will be configured with a public IP, which will be used for the SAP Cloud Connector connection to SAP BTP; this public IP will be whitelisted in SAP BTP.

For the proposed landscape, see Application Architecture SAP RISE (Rest of the World) and the China/US instances.

A Replication Flow uses the Cloud Connector.

  • Replication Object – The dataset you want to replicate (e.g., a CDS view). One object = one flow. Maximum 500 objects per replication flow.
  • Replication Flow Jobs – Background workers (also known as worker graphs) that handle the actual data movement. Each job uses 5 replication threads by default.
  • Replication Threads – Distributed workers; think of these as the engines moving your data. Maximum 50 threads per tenant.
  • Delta Load Interval – How often changes are sent from source to target (0–24 hrs and 0–59 mins). Set it to 0h 0m for near real-time replication.
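As a minimal sketch, the limits above (500 objects per flow, 5 threads per job by default, 50 threads per tenant) can be sanity-checked before a new flow is created. The function and variable names here are illustrative only, not an SAP API:

```python
# Illustrative sanity check against the replication-flow limits described
# above. These constants restate the documented limits; nothing here calls
# a real SAP interface.
MAX_OBJECTS_PER_FLOW = 500
THREADS_PER_JOB = 5            # default threads per replication flow job
MAX_THREADS_PER_TENANT = 50

def check_plan(flows):
    """flows: list of (object_count, job_count) tuples, one per planned flow.
    Returns a list of human-readable limit violations (empty = plan is OK)."""
    problems = []
    total_threads = 0
    for i, (objects, jobs) in enumerate(flows):
        if objects > MAX_OBJECTS_PER_FLOW:
            problems.append(f"flow {i}: {objects} objects > {MAX_OBJECTS_PER_FLOW}")
        total_threads += jobs * THREADS_PER_JOB
    if total_threads > MAX_THREADS_PER_TENANT:
        problems.append(f"tenant: {total_threads} threads > {MAX_THREADS_PER_TENANT}")
    return problems
```

For example, `check_plan([(600, 8), (120, 4)])` reports two violations: flow 0 exceeds the object limit, and 12 jobs × 5 threads = 60 exceeds the 50-thread tenant cap.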


The SAP Analytics Cloud agent must be installed for some import data connections to work.



PaPM

SAP PaPM Cloud can integrate with SAP Datasphere by sharing an SAP HANA Cloud runtime database (BYOD) and exposing artefacts via the DPA.

Smart Data Access (SDA) and Smart Data Integration (SDI) enable DSP to consume PaPM Cloud database objects as remote sources. Tables, views, or calculation scenarios can be exposed within DSP without duplicating data, maintaining real-time consistency across both environments.

SAC

Data Provisioning Agent

The Data Provisioning Agent (DPA) is used for real-time and batch data replication from S/4HANA to SAP Datasphere. The network connection to SAP Datasphere is initiated by the DPA, and the CGS facilitates the internet connection to SAP Datasphere.

The DPA uses HTTPS or RFC to communicate with S/4HANA and HTTPS to communicate with SAP Datasphere.

A DPA agent is required per environment. There is only one active line for the target HANA server name in dpagentconfig.ini.
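A quick way to verify the single-active-line rule is to count the uncommented entries for the target property. This is a hedged sketch: the property name `hana.server` and the sample values below are assumptions for illustration only; check the actual key names used by your DPA version.

```python
# Hypothetical check: confirm an .ini-style config has exactly one active
# (uncommented) line for a given property. "hana.server" is an assumed
# key name, not confirmed against any DPA release.
def active_lines(config_text, key):
    """Return the uncommented lines in `config_text` that set `key`."""
    hits = []
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("#", ";")):
            continue  # commented-out entry, not active
        if stripped.startswith(key + "="):
            hits.append(stripped)
    return hits

sample = """\
# previous target, kept commented out for reference
#hana.server=old-tenant.example.hcs.cloud.sap
hana.server=syw-ds-dev-eu20.eu20.hcs.cloud.sap
"""
```

Here `active_lines(sample, "hana.server")` returns exactly one entry, as required.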

For the proposed landscape, see Application Architecture SAP RISE (Rest of the World) and the China/US instances.

Application Security

User Access

System | Users | Access Method
S/4HANA | Business users | Web
S/4HANA | Support users | Web and SAPGUI
HANA DB | N/A | Can be requested from SAP if required
SAP Cloud Connector | Admin | Web
Data Provisioning Agent | N/A | Raise a request to SAP to perform changes, as access is via the OS command line

Default SAP roles will be used for the Web Dispatcher and connectors.

Authentication

Single sign-on (SSO) will be enabled for the S/4HANA system. Since the other systems in the SAP RISE landscape are supporting systems that will not be accessed directly by business users, their authentication will be based on user ID and password.

Classification

Authentication

Authorisation

Communication Security

All data in transit will be encrypted.

  • SSL is used for all web traffic (Systems are configured to reject HTTP access or redirect to HTTPS). 
  • SNC is used for all RFC and SAPGUI communications. 

See DD-TEC-070 Network and Infrastructure Architecture for details on network security and internet connectivity.


Data Security

Data encryption is enabled for the SAP HANA DB as part of system provisioning.

Other Controls


System Landscape

Development Environment

Project Test Environment

Quality Environment

Production Environment


Operation Architecture

Change and Configuration Management

Transport Management

Cloud TMS is to be used.

Release Management

Monitoring

Application Monitoring

Cloud Connector:

There are two main jobs responsible for moving data from the source system to Datasphere:

  • Observer job (/1DH/OBSERVE_LOGTAB): when new data is posted in the base table, the Observer job pushes it from the master logging table to the subscriber logging table.
  • Transfer job (/1DH/PUSH_CDS_DELTA): the Transfer job then moves this data into the buffer table; from there, the replication flow picks it up and pushes it to the target system.

Buffer table

 • It splits large datasets into smaller, manageable data packages.
 • If a package fails, it can be resent, making replication more resilient and reliable.
 • Once a package is successfully written to the target, it’s committed and deleted from the buffer to free up space.
 • It also helps in analysing performance throughput and identifying potential bottlenecks.
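The buffer-table behaviour above can be sketched in a few lines: split the dataset into packages, resend a package on failure, and delete each package from the buffer once it is committed to the target. This is a conceptual illustration under stated assumptions (the `send` callback, package size, and retry count are invented), not the actual Datasphere implementation:

```python
# Conceptual sketch of buffered, package-based replication: large datasets
# are split into packages, failed packages are resent, and committed
# packages are removed from the buffer to free space.
def replicate(rows, send, package_size=3, max_retries=2):
    """Push `rows` to the target in packages via `send`, retrying on failure."""
    # The buffer holds all pending packages.
    buffer = [rows[i:i + package_size] for i in range(0, len(rows), package_size)]
    delivered = []
    while buffer:
        package = buffer[0]
        for attempt in range(max_retries + 1):
            try:
                send(package)        # write the package to the target
                break
            except IOError:
                if attempt == max_retries:
                    raise            # give up once retries are exhausted
        delivered.extend(package)
        buffer.pop(0)                # committed: delete the package from the buffer
    return delivered
```

A transient failure on one package only forces that package to be resent; already-committed packages are never re-sent, which is what makes the mechanism resilient.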

Transactions used

  • DHCDCMON → Monitor delta capture process
  • DHRDBMON → View buffer table properties and operations:
    • Maximum buffer records
    • Current number of records
    • Package size
    • Packages ready for transfer

Replication metadata:

From the $TEC schema, import REPLICATIONFLOW_RUN_DETAILS.

From the DWC_GLOBAL schema, get all task-related data from the view TASK_LOCKS_V_EXT.

By building a model on top of these two tables, you can view all the metadata related to your Replication Flows. This helps you track key details such as execution time, status, and any errors, so if something goes wrong you can quickly identify and understand the issue.
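The join behind such a monitoring model can be sketched as follows. The column names (`task_id`, `flow_name`, `status`, `owner`, …) are assumptions for illustration; match them to the actual view definitions in your tenant:

```python
# Hypothetical sketch of the monitoring model described above: join run
# details with task data to surface failed replication-flow runs.
# Column names are assumed, not taken from the real view definitions.
def failed_runs(run_details, task_locks):
    """Join the two metadata sets on task_id and return failed runs."""
    tasks = {t["task_id"]: t for t in task_locks}
    report = []
    for run in run_details:
        if run["status"] != "FAILED":
            continue  # keep only problem runs
        task = tasks.get(run["task_id"], {})
        report.append({
            "flow": run["flow_name"],
            "error": run.get("error", ""),
            "lock_holder": task.get("owner", "unknown"),
        })
    return report
```

In Datasphere itself this join would typically be built as a view in the Data Builder; the Python version just makes the logic explicit.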


System Monitoring

Sizing

High Availability

Disaster Recovery

Backup/Restore

Maintenance Plan


Service Introduction

Application Category

Support Team

Skill required

Checklist


Exceptions


See also



Change log

Version Published Changed By Comment
CURRENT (v. 16) Feb 04, 2026 14:54 SHEPHERD-ext, Robert
v. 128 Dec 05, 2025 11:35 WENNINGER-ext, Sascha added stakeholders
v. 127 Dec 05, 2025 09:20 SHEPHERD-ext, Robert
v. 126 Nov 26, 2025 09:19 BARROW-ext, ian
v. 125 Nov 13, 2025 16:38 WENNINGER-ext, Sascha
v. 124 Nov 13, 2025 15:25 SHEPHERD-ext, Robert
v. 123 Nov 13, 2025 15:19 SHEPHERD-ext, Robert
v. 122 Nov 13, 2025 15:09 SHEPHERD-ext, Robert
v. 121 Nov 13, 2025 15:04 SHEPHERD-ext, Robert
v. 120 Nov 13, 2025 15:01 SHEPHERD-ext, Robert
