
Status: Edited following Approval

Owner:
Stakeholders:
LeanIX Link: SAP Datasphere

Introduction

SAP Datasphere (DSP) is used by Syensqo to extract data from SAP systems. The data is consolidated for SAP reporting and distributed to MS Fabric for non-SAP reporting.

Reporting on DSP data is performed using the tightly integrated SAP Analytics Cloud (SAC). There is also tight integration with SAP Profitability and Performance Management (PaPM).

Both DSP and SAC have recently been incorporated into the larger SAP Business Data Cloud offering. SAP will likely seek to migrate customers to the new product once it is ready.

Purpose

The purpose of this document is to describe the architecture required to support the implementation.

The SAP Analytics and Reporting Approach explains what will be implemented, and the SAP Analytics and Reporting Standards details how it will be implemented.

This document explains the landscape and integration of the solution.

Requirements

Terminology

  • SAP Business Content (BCT): Predefined solution provided by SAP for a functional area.
  • Instance: Refers to the entire system, including the software and all technical components (DB, application server, etc.). E.g., S/4HANA Production.
  • Environment/Tier: Refers to systems that are used for the different stages of the project lifecycle. Each environment serves a distinct purpose and has a dedicated instance to ensure stability and integrity. E.g., Development, QAS.
  • Landscape: Refers to all the environments for an application or entire project. E.g., S/4HANA landscape, SyWay landscape.
  • CUI: Controlled Unclassified Information. CUI and export-controlled data are both highly sensitive.
  • CMMC 2: Second iteration of the Cybersecurity Maturity Model Certification.
  • SaaS Deployment Model: Subscription, where you pay for the service, vs. Consumption, where you pay for the usage.
  • Space: A virtual work environment with its own database. Spaces are decoupled but open for flexible access, enabling users to collaborate without being concerned about sharing their data.

Application Architecture

Architectural Decisions and requirements

The table below details the architectural decisions made and the rationale behind them.

Architectural Decision | Description | Rationale
Encryption in transit | SSL and SNC will be configured for DSP to encrypt web and RFC traffic. Based on the SyWay implementation approach, all data in transit must be encrypted. | Security is vital
SSO | Configure SSO for DSP. As part of the SyWay project, a common authentication mechanism (e.g., SAML) will be adopted. | Ease of access and unified user experience
Seamless planning | To enable seamless planning, both DSP and SAC must be deployed in the same data centre and hosted by the same hyperscaler. | SAP limitation and meeting Syensqo preferences
SAC | DSP can only connect to a single SAC tenant. | Tight integration
MS Fabric | All data fed to MS Fabric will be via DSP. | Licencing
Consolidation | Consolidate SAP S/4 data from the regional landscape. | To provide a unified dataset for further use, e.g. reporting on SAP data
S/4 Extractor | Extract data from S/4 to be used in other systems such as MS Fabric without breaching SAP data export licencing. | Licencing
SAP Business Content (BCT) | Start by leveraging the SAP BCT to deliver reports with less effort. | Faster implementation
CUI data | No CUI data will be loaded into DSP. | NextLabs does not work with DSP to provide CMMC 2
Landscape | DSP will mirror the SAC 3-tier landscape. | SAC is a subscription model, so we have to pay per instance
PaPM | PaPM will read data from DSP and write calculations back to DSP, and will mirror the DSP 3-tier landscape. | This is the approach used by the BYOD model

Application Architecture Design


DSP Details

Customer Number | 3008440
Cloud Provider | MS Azure
Cloud Region | Netherlands
URL | https://syw-ds-dev-eu20.eu20.hcs.cloud.sap
Service model | Software as a Service
Model | Consumption based, meaning we can create as many tenants as we desire
Deployment model | Public

Application Architecture Components



Component | Description
Data Lake | A dedicated, schema-on-read, flexible storage area in SAP HANA Cloud for raw and archived data. Optimized for ingesting and storing large volumes of raw data; acts as the “landing” zone before any modelling or transformation takes place.
Data Store | Staging area for cleansed, modelled data with defined structures; holds intermediate results in a dataflow, ready for analytics or further modelling. A Data Builder artefact that captures the result of a transformation flow and writes it to a persistent table.
Premium Outbound Integration | Delivers a lean, high-performance data pipeline from SAP to external object stores without persisting data in SAP Datasphere. It emphasizes speed, cost-efficiency, and governance alignment.
Catalog | We will use the standard catalogue, not the Collibra option.
BW Bridge | No planned usage.

SAP Cloud Connector

The SAP Cloud Connector acts as a reverse invocation proxy to establish a network connection between SAP RISE systems and SAP BTP services (Integration Suite, API Management, DSP, etc.). Due to its reverse-invoke capability, the network traffic originates from the SAP Cloud Connector towards SAP BTP; once the link has been established, data can be exchanged between SAP RISE systems and BTP. HTTPS or RFC protocols are used between the SAP Cloud Connector and S/4HANA, and HTTPS is used between the Cloud Connector and SAP BTP.

To enable outbound internet traffic from SAP RISE, SAP has provisioned a customer gateway server (CGS) with a forward internet proxy installed on it. CGS will be configured with a public IP which will be used for SAP Cloud Connector connection to SAP BTP and this public IP will be whitelisted in SAP BTP. 

For the proposed landscape see Application Architecture SAP RISE (Rest of the World) and China/US instances

A Replication Flow uses Cloud Connector.

  • Replication Object: The dataset you want to replicate (e.g., a CDS view). One object = one flow; max 500 objects per replication flow.
  • Replication Flow Jobs: Background workers (also known as worker graphs) that handle the actual data movement. Each job uses 5 replication threads by default.
  • Replication Threads: Distributed workers; think of these as the engines moving your data. Max 50 threads per tenant.
  • Delta Load Interval: How often changes are sent from source to target (0-24 hrs and 0-59 mins). Set it to 0h 0m for near real-time.
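The per-tenant limits above can be sanity-checked with a small script when planning flows. Only the numbers (5 threads per job by default, max 50 threads per tenant) come from the text; the flow names and the helper function are illustrative assumptions.

```python
# Sanity-check planned replication flow jobs against the tenant limits
# described above: 5 threads per job (default), max 50 threads per tenant.
# Illustrative sketch only, not a Datasphere API.

THREADS_PER_JOB = 5          # default threads per replication flow job
MAX_THREADS_PER_TENANT = 50  # hard limit per Datasphere tenant

def check_tenant_capacity(jobs_per_flow):
    """Given planned jobs per replication flow, report total thread usage."""
    total_threads = sum(jobs * THREADS_PER_JOB for jobs in jobs_per_flow.values())
    return {
        "total_threads": total_threads,
        "within_limit": total_threads <= MAX_THREADS_PER_TENANT,
        "headroom": MAX_THREADS_PER_TENANT - total_threads,
    }

# Hypothetical flows: 9 jobs in total -> 9 * 5 = 45 threads, within the limit.
result = check_tenant_capacity({"S4_FIN": 4, "S4_SALES": 3, "S4_MM": 2})
print(result)
```

A check like this is worth running before adding a new flow, since exceeding the tenant thread limit will queue or fail data movement rather than scale it.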


The SAP Analytics Cloud agent must be installed for some import data connections to work.

PaPM

SAP PaPM Cloud can integrate with SAP Datasphere by sharing an SAP HANA Cloud runtime database (BYOD), exposing artefacts via DPA

Smart Data Access (SDA) and Smart Data Integration (SDI) enable DSP to consume PaPM Cloud database objects as remote sources. You can expose tables, views, or calculation scenarios within DSP without duplicating data, maintaining real-time consistency across both environments

SAC

DSP can only connect to a single SAC tenant at a time. There is an option to switch tenants

Data Provisioning Agent

The Data Provisioning Agent (DPA) is used for real-time and batch data replication from S/4HANA to SAP Datasphere. The network connection to SAP Datasphere is initiated by the DPA, and the CGS is used to facilitate the internet connection to SAP Datasphere.

The DPA uses the HTTPS or RFC protocols to communicate with S/4HANA and the HTTPS protocol to communicate with SAP Datasphere.

A DPA is required per environment. There is only one active line for the target HANA server name in dpagentconfig.ini.
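Because only one target-server line may be active in dpagentconfig.ini at a time, a quick check can catch a misconfigured agent. The property name "hana.server" and the hostnames below are illustrative assumptions; substitute the actual key used in your agent's config file.

```python
# Minimal sketch: verify that exactly one uncommented target-server line
# is active in dpagentconfig.ini. The key name "hana.server" is an
# assumption for illustration; check your agent's real property names.

def active_target_lines(ini_text, key="hana.server"):
    active = []
    for line in ini_text.splitlines():
        stripped = line.strip()
        # '#' and ';' both commonly mark comments in ini-style files
        if not stripped or stripped.startswith(("#", ";")):
            continue
        if stripped.split("=")[0].strip() == key:
            active.append(stripped)
    return active

sample = """\
# dpagentconfig.ini (illustrative content)
#hana.server=old-dev-host.example.com
hana.server=syw-ds-dev-eu20.eu20.hcs.cloud.sap
"""

lines = active_target_lines(sample)
assert len(lines) == 1, f"expected exactly one active target line, got {lines}"
```

Running this against each environment's agent config before a cutover makes the "one active line" rule enforceable rather than a convention.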

For the proposed landscape see Application Architecture SAP RISE (Rest of the World) and China/US instances

MS Fabric

Data movement both to and from MS Fabric will also use the SAP Cloud Connector.

This will use the Premium Outbound Integration service, which provides:

  • Reliable Data Exchange: Ensures secure and reliable transmission of business data (such as orders, invoices, shipping notifications) from SAP to external systems.
  • Advanced Monitoring: Provides enhanced monitoring and error-handling for outbound messages.
  • Performance Optimization: Offers higher throughput and lower latency for outbound processes compared to standard services.
  • Support & SLAs: Comes with premium support, faster response times, and higher service level agreements (SLAs).
  • Integration Flexibility: Supports various integration scenarios (e.g., EDI, APIs, web services).


Application Security

User Access

System | Users | Access Method
Datasphere | Business users | Web (very limited usage)
Datasphere | Support users | Web and SAPGUI
S/4HANA | Admin | Web
HANA DB | N/A | Can be requested from SAP if required
SAP Cloud Connector | Admin | Web
Data Provisioning Agent | N/A | Raise a request to SAP to perform changes, as access is via the OS command line

Default SAP roles will be used for Web dispatcher and connectors. 

Authentication

Single Sign-on (SSO) will be enabled for Datasphere and the S/4HANA system. Since other systems in the SAP RISE landscape are supporting systems that will not be accessed directly by business users, their authentication will be based on user ID and password.

Authorisation

Authorisation values will be leveraged from S/4.

Database only users

  • Users can also be created for ‘database-only’ access.
  • Such users can read and/or write based on database-level privileges.
  • Each database user automatically gets an Open SQL schema.
  • Ideal for third-party tools that require only view-level access.
  • Such users cannot use Business Layer objects such as Analytical Models.
  • SAP Analytics Cloud users must connect to SAP Datasphere as regular application users (i.e., not database-only users).

Communication Security

All data in transit will be encrypted.

  • SSL is used for all web traffic (Systems are configured to reject HTTP access or redirect to HTTPS). 
  • SNC is used for all RFC and SAPGUI communications. 

See DD-TEC-070 Network and Infrastructure Architecture for details on network security and internet connectivity.


A restricted user may be created in S/4 to ensure that only permissible data is extracted.

Data Security

Data encryption is enabled for the SAP HANA DB as part of system provisioning.

Other Controls

SAP provides infrastructure and server logs via its "LOGSERV" service, which can be integrated into Syensqo's SIEM. This is under discussion with Syensqo IT as of July 2025, and the agreed design will be documented in a future revision of this document.


System Landscape

Mapping to S4

Diagram: SyWay Analytics Approach (/display/ER/SyWay+Analytics+Approach)

Development Environment

Application | Primary Role | SID | Instance | Hostname | Ports
Datasphere | Central Instance | | | |




Quality Environment

The environment is planned to be provisioned by SAP on 1 August 2026. This document will be updated after this date.

Production Environment

The environment is planned to be provisioned by SAP on 1 January 2028. This document will be updated after this date.


Operation Architecture

Shared Responsibility Model

Party | Service | Responsibility
Syensqo | Customization & Configuration | Customers must configure and customize the application per their business requirements.
Syensqo | Management of identity and access | Customers must manage the complete identity lifecycle, including onboarding and offboarding users, creating and assigning roles, forming user groups, granting and restricting privileged access, and similar functions for their application.
Syensqo | Data Integrity Requirements | Customers must define proper data classification, storage, and deletion requirements. Although SAP will execute processes on data, defining data requirements is a big part of the customer’s responsibility. Protection for data at rest will be assigned by SAP based on the data classification.
Syensqo | Application audit logs | Customers are responsible for capturing, monitoring, and analysing the application audit logs.
Syensqo | Application compliance | Customers are responsible for industry-specific certification and compliance for data used by or within the application.
SAP | Deploying and configuring resources | SAP is responsible for deploying and configuring VMs, databases, container images, and the VM operating system.
SAP | Securing VMs and images | SAP is responsible for securing and patching operating systems and container images, as well as hardening configurable items on servers and databases.
SAP | Logical separation | SAP is responsible for logically segregating applications and data within various environments and between various tenants and customers.
SAP | Protecting data | SAP is responsible for implementing data protection, backup, and restoration, based on the data classification. The data retention policy is defined by the customer but can be executed by SAP.
SAP | Monitoring and incident reporting | SAP logs all security and infrastructure events. Logs are aggregated in a security information and event management (SIEM) tool, and alerts are generated based on predetermined triggers. SAP will also monitor for incidents and will follow SAP’s incident response plan as and when needed.
SAP | Audit and compliance | SAP is responsible for maintaining and providing certification and compliance for the application and related infrastructure.
SAP | Change management | SAP is responsible for managing the maintenance window and other administrative tasks regarding change management.
SAP | Availability | SAP is responsible for deploying and maintaining availability and meeting the SLA.
SAP | IaaS | SAP maintains responsibility for the IaaS that the hyperscaler provides on SAP’s behalf, and for ensuring each hyperscaler performs as per the contractual agreement.
Hyperscaler | Physical security | The hyperscaler is responsible for the physical data centre and the safety and security of people in the data centre. This includes the responsibility for background checks of the people who work in the data centre and in connection with other hyperscaler-provided services.
Hyperscaler | Resiliency | The hyperscaler is responsible for providing the capability of a resilient network and infrastructure across multiple regions and availability zones.
Hyperscaler | Physical infrastructure | The hyperscaler is responsible for providing a secure network and infrastructure, including hypervisors.
Hyperscaler | Audit and compliance | The hyperscaler is responsible for IaaS compliance with industry standards.

Additional SAP responsibilities

Application security

Application security is the heart of the overall security strategy. Application development at SAP follows the secure development lifecycle. The process starts with planning and assessment, which includes a very important security measure: threat modelling. SAP uses the well-known STRIDE threat-modelling technique from Microsoft. Developers follow secure coding guidelines during the development process. The developed code is reviewed in the “secure code review” step of the process. Next, a static vulnerability scan is performed on any code developed in-house. Any vulnerability found during the review or scan is mitigated, or documented if not mitigated, before the release. Software is then scanned for open-source vulnerabilities, if any open-source libraries or components are used. Dynamic application security testing is performed after the software is fully developed and compiled. The last step in application security is unit testing of the security-related functionality to address issues like invalid input parameters.
Once the software is developed and the application is deployed in production, vulnerability scanning is performed at regular intervals and after each new release. Vulnerabilities found during the scanning are managed based on their Common Vulnerabilities and Exposures (CVE) score. SAP does not report or disclose vulnerabilities, but a Service Organization Control 2 (SOC 2) audit report lists any unmitigated vulnerabilities. The SOC 2 report can be obtained from SAP.
Data Security

The customer defines the data protection, retention, backup, and deletion requirements. SAP is responsible for making sure that tenant data is logically segregated. SAP also makes sure that data is segregated between nonproduction and production environments.
Encryption
As per the SAP security policy, data in transit and data at rest should always be encrypted. Any communication between the hyperscaler and client uses Transport Layer Security (TLS) with HTTPS. Data at rest is encrypted using disk encryption to prevent data exposure in case of a physical theft of the drive. Other encryption methods, such as volume, backup, or in-application encryption, are used based on the technical, functional, and business requirements of the application and customer.
Encryption Key Management
SAP does not utilize default keys provided by hyperscalers. SAP is responsible for creating, rotating, and deleting the encryption keys. SAP also manages access to the key.
One of the “key” differences between an application hosted by SAP versus third-party hyperscalers is the key storage. When an SAP application is hosted by a third-party hyperscaler, the key is stored with the hyperscaler using the hardware security module (HSM) or other secret management storage that the hyperscaler provides. This key storage or HSM is always FIPS 140-2 compliant.
Any access to this storage is logged and audited by SAP. The encryption key is always managed by SAP, regardless of where the key is stored.
Retention, Deletion, and Backup
Data retention with most SAP applications is automated and customer driven. Customers can create policies or rules in the application stating how long the data should be retained based on their requirements. Data will be deleted at the end of the retention period. Customers can also delete their data at any time they have access.
Data backup and deletion processes and schedules are not impacted by the migration to hyperscaler. These processes remain unchanged.
It is important to note that SAP and hyperscalers will maintain compliance with laws and requirements around personal data, such as EU access, the General Data Protection Regulation, and other industry and geographic regulations. 
Infrastructure and Network Security

SAP creates virtual resources using cloud APIs and is responsible for everything between and including virtual resources and the application. SAP will deploy and manage everything from the virtual machine up. This means that SAP has responsibility for managing infrastructure, creating and managing various virtual private clouds, and creating and managing security groups and firewalls. SAP is also responsible for managing and patching the operating system and middleware.
SAP will regularly scan the environment for operating system and middleware vulnerabilities. SAP will deploy patches to operating systems and middleware based on the vendors’ specifications.
SAP’s architecture blueprint dictates that database servers and application servers are isolated from each other and from the public-facing Web server. DB server and application servers are hosted within a private subnet, while Web servers are in the public subnets behind the Web application firewall (WAF) and security groups.
SAP’s strategy is to provide database clusters. High availability will not change as a result of migration to a hyperscaler.
Hyperscalers are responsible for providing overall network and infrastructure protection against DDoS and network- or infrastructure-based attacks to the data centres, but it is SAP’s responsibility to provide anti-DDoS, IPS/IDS, WAF, and network monitoring of the resources created by SAP.
It is SAP’s responsibility to perform regular penetration testing, and SAP will work with the hyperscaler for network penetration testing.
The physical security of the data centres and vetting of the workforce who are working in and around data centres are responsibilities of the hyperscaler.
Logging, Monitoring, and Incident Response

The customer has full access to application and audit logs. SAP is responsible for collecting, storing, and analysing infrastructure and security logs. SAP manages the threat triggers and generates alerts from the logs. SAP does not share infrastructure and security logs with customers.
SAP aggregates the logs into the SIEM tool and automates the process of analysing and generating alerts. Monitoring various logs and generating alerts when there is a deviation from the baseline is a very time-consuming but essential part of the security – and SAP handles that for you, so you can focus on your customers. The team of seasoned SAP professionals perform infrastructure monitoring, database monitoring, security incident management, secure admin access, regular backups, security scanning and remediation 24x7 to secure the environment for customers.
Hyperscaler landscapes pose unique challenges, and SAP’s security incident response team works closely together with GCS multi-cloud security operations to continuously improve security incident response process and automation for SAP’s multi-cloud landscape.
Although SAP does not notify customers of every incident, we will provide breach notification report and root cause analysis to customers for any incident that is classified as a personal data breach.
Identity and Access Management

The customer is responsible for identity and access management (IAM). SAP provides single sign-on and other IAM-related services as needed. SAP offers solutions that can manage the complete identity lifecycle, integrate on-premise and cloud solutions, work with multi-factor authentication, and simplify the access management process.
The customer has complete control over who can access the data and to what extent. Most important, the customer has the ability to provide admin or privileged access to the application. This access should be granted only as needed and must be monitored. SAP has access to cloud accounts as well as privileged access to the application and SAP environment within the hyperscaler environment. SAP employees or partners do not have any access to customer’s data or information.
Connectivity to Cloud

Azure ExpressRoute allows you to extend your corporate or personal network into the Microsoft cloud over a private connection. Azure ExpressRoute provides Layer 3 connectivity between your site and Microsoft cloud. Azure ExpressRoute provides redundancy for the network connection as well as a guaranteed uptime SLA for connectivity.

Transport Management

Cloud TMS is to be used.

Release Management

  • SAP Datasphere runs on continuous delivery in the background: small fixes and security updates can be deployed anytime.
  • Major functionality is bundled into Quarterly Release Cycle (QRC) updates.
  • Customers can choose if they want to adopt QRC updates immediately or delay them (to test changes first).
  • Updates include new features, fixes, and security patches, and they’re applied automatically by SAP in the background.
  • No customer-side installation or downtime planning is needed.

Monitoring

Application Monitoring

Data loads will be triggered using Task Chains in DSP and tasks in SAC. The roadmap promises that these SAC tasks will be integrated with DSP Task Chains.

We will need to ensure that the scheduling of jobs does not overload the system. The closer we get to real-time data, the more frequently jobs are scheduled.

As a replication flow works with a pull mechanism, it constantly works to find new data, like a continuously executed batch dataflow. In contrast, a push mechanism, which only acts when there is new data, would be more efficient. With this in mind, we will only request frequent data updates where really required.
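The cost of short delta-load intervals can be made concrete with back-of-the-envelope arithmetic: the number of polls per day grows quickly as the interval shrinks. This is a plain arithmetic sketch, not a Datasphere API.

```python
# Back-of-the-envelope: how many delta-load polls per day a replication
# flow performs for a given interval setting (hours, minutes).
# A 0h 0m interval means near real-time, i.e. effectively continuous.

def polls_per_day(hours, minutes):
    interval_min = hours * 60 + minutes
    if interval_min == 0:
        return None  # near real-time: continuous polling, not a finite count
    return 24 * 60 / interval_min

# Compare a few candidate intervals.
for h, m in [(0, 15), (1, 0), (6, 0), (24, 0)]:
    print(f"{h}h {m:02d}m -> {polls_per_day(h, m):.0f} polls/day")
```

Moving a flow from a 15-minute to a 6-hour interval cuts the pull workload from 96 to 4 polls per day, which is why frequent updates should be reserved for datasets that genuinely need them.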

There are SAP Datasphere monitoring views which help you monitor data integration tasks in a more flexible way. They are built on the V_EXT views, and are enriched with further information as preparation for consumption in an SAP Analytics Cloud story.

Besides the SAC stories above, there is also within DSP itself:

  • Data Integration Monitor - monitor data loads
  • System Monitor - data storage, out of memory errors, CPU capacities

Cloud Connector:

There are two main jobs responsible for moving data from the source system to Datasphere:

  • Observer job (/1DH/OBSERVE_LOGTAB) When new data is posted in the base table, the Observer job pushes it from the master logging table to the subscriber logging table.
  • Transfer job (/1DH/PUSH_CDS_DELTA) The Transfer job then moves this data into the buffer table, from there the replication flow picks it up and pushes it to the target system.

Buffer table

 • It splits large datasets into smaller, manageable data packages.
 • If a package fails, it can be resent, making replication more resilient and reliable.
 • Once a package is successfully written to the target, it’s committed and deleted from the buffer to free up space.
 • It also helps in analysing performance throughput and identifying potential bottlenecks.
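The buffer-table behaviour described above (split into packages, resend on failure, delete on commit) can be sketched in a few lines. Everything here is illustrative; the function names and the flaky target are assumptions, not Datasphere internals.

```python
# Sketch of the buffer-table mechanics described above: a dataset is
# split into packages; a failed package is resent; a committed package
# is deleted from the buffer. Purely illustrative, not Datasphere code.

def packages(rows, package_size):
    """Split a large dataset into smaller, manageable packages."""
    for i in range(0, len(rows), package_size):
        yield rows[i:i + package_size]

def replicate(rows, package_size, send, max_retries=3):
    """Send each package, resending on failure; return packages committed."""
    buffer = list(packages(rows, package_size))  # the "buffer table"
    committed = 0
    while buffer:
        pkg = buffer[0]
        for _attempt in range(max_retries):
            if send(pkg):       # success: commit and free the buffer slot
                buffer.pop(0)
                committed += 1
                break
        else:
            raise RuntimeError("package failed after retries")
    return committed

# Simulate a target that fails on the first call, then succeeds.
calls = {"n": 0}
def flaky_send(pkg):
    calls["n"] += 1
    return calls["n"] != 1  # first send fails, later sends succeed

print(replicate(list(range(10)), package_size=4, send=flaky_send))  # 3 packages committed
```

The resend loop is what makes packaging worthwhile: only the failed package is retried, not the whole dataset.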

Transactions used

  • DHCDCMON → Monitor delta capture process
  • DHRDBMON → View buffer table properties and operations
    • Maximum buffer records
    • Current number of records
    • Package size
    • Packages ready for transfer

Replication metadata:

From the $TEC schema, import REPLICATIONFLOW_RUN_DETAILS.

From the DWC_GLOBAL schema, get all task-related data from the view TASK_LOCKS_V_EXT.

By building a model on top of these two tables, you can view all the metadata related to your Replication Flows. This helps you track key details like execution time, status, and any errors. So if something goes wrong, you'll be able to quickly identify and understand the issue.
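The kind of model meant here is essentially a join of the two views on a task identifier. The sketch below mocks both views in plain Python to show the shape of that join; the column names are assumptions for illustration, not the views' actual schemas.

```python
# Illustrative join of replication-flow run details with task locks to
# track execution time, status, and errors. Column names are assumed;
# check REPLICATIONFLOW_RUN_DETAILS and TASK_LOCKS_V_EXT for the real
# field names before building the model in Datasphere.

run_details = [  # mocked rows from $TEC REPLICATIONFLOW_RUN_DETAILS
    {"task_id": 101, "flow": "S4_FIN", "status": "FAILED", "runtime_s": 310},
    {"task_id": 102, "flow": "S4_SALES", "status": "COMPLETED", "runtime_s": 95},
]
task_locks = [  # mocked rows from DWC_GLOBAL TASK_LOCKS_V_EXT
    {"task_id": 101, "space": "FINANCE"},
    {"task_id": 102, "space": "SALES"},
]

# Left-join run details with lock info on task_id.
locks_by_id = {row["task_id"]: row for row in task_locks}
model = [
    {**run, "space": locks_by_id.get(run["task_id"], {}).get("space")}
    for run in run_details
]

# Surface failed flows so the issue can be identified quickly.
failed = [r for r in model if r["status"] == "FAILED"]
print(failed)
```

In Datasphere itself this join would be a view or Analytical Model over the two imported tables, with the same filter on status to drive an alerting story in SAC.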

System Monitoring

The following can be obtained from the SAP for Me portal.

SAP will be monitoring from the infrastructure layer to the technical basis layer. In the event of an issue, users under Private Cloud Contacts will be notified. 

There is integration with SAP Cloud ALM (Application Lifecycle Management), where we can review system loads.

Sizing

The estimates in the original CD - SOL - 020 Reporting Approach, chapter 8, still hold.

In summary, it was suggested:

Compute blocks | 512 GB | 13,315
Storage | 1,344 GB | 245
Catalog Storage | 0.5 GB | 0
Data Integration | 7,200 | 5,488 | Trade-off with using DPA
Premium Outbound Integration | 40 GB | 1,000
BW Bridge | | Not considered
Data Lake | | Use MS Azure
DPA server | | 90 GB of data a year was suggested


High Availability

Disaster Recovery

Backup/Restore

Maintenance Plan


Service Introduction

Application Category

Support Team

Skill required

Checklist


Exceptions


See also

LeanIX fact sheet for Datasphere



Change log

Version Published Changed By Comment
CURRENT (v. 48) Feb 04, 2026 14:54 SHEPHERD-ext, Robert
v. 128 Dec 05, 2025 11:35 WENNINGER-ext, Sascha added stakeholders
v. 127 Dec 05, 2025 09:20 SHEPHERD-ext, Robert
v. 126 Nov 26, 2025 09:19 BARROW-ext, ian
v. 125 Nov 13, 2025 16:38 WENNINGER-ext, Sascha
v. 124 Nov 13, 2025 15:25 SHEPHERD-ext, Robert
v. 123 Nov 13, 2025 15:19 SHEPHERD-ext, Robert
v. 122 Nov 13, 2025 15:09 SHEPHERD-ext, Robert
v. 121 Nov 13, 2025 15:04 SHEPHERD-ext, Robert
v. 120 Nov 13, 2025 15:01 SHEPHERD-ext, Robert
