Introduction
This guide provides high-level development standards and best practices for SAP Cloud Platform Integration (SCPI) projects. It is intended for integration developers working on complex integrations between SAP and non-SAP systems. Following these guidelines will ensure consistency, reliability, and maintainability across integration flows.
The guiding principle for programming standards and guidelines at Syensqo is to use what is generally accepted by the industry as “best-practice”, rather than defining a bespoke set of rules. By adopting this approach, it is more likely that developers engaged by Syensqo are already familiar with the “best-practice” approach and can work effectively in the Syensqo environment immediately.
This document’s purpose is to provide developers with the standards and guidance required to develop in Syensqo’s landscape.
Assumptions
All tools required for best-practice-based development are available.
SAP development tools and approaches are continually evolving; this document will evolve with them.
The SAP Development Approach has been understood.
SAP Cloud Integration/SCPI/HCI
Design Principles and Modularization
Design integration flows with simplicity, modularity, and maintainability in mind. Key principles include:
- Modularize Flows: Avoid monolithic, overly complex iFlows. Break down complex processes into smaller, logical units. Use Local Integration Processes (sub-processes within an iFlow) to encapsulate reusable or distinct logic blocks. This makes the main integration process easier to read and maintain. If an iFlow is becoming lengthy or handling many tasks, it’s a sign to split it up.
- One Interface, One IFlow: Design each iFlow to handle a single integration interface or a specific sender-receiver pair. If an integration scenario involves multiple target systems, consider using one iFlow per receiver for clarity and fault isolation. Similarly, if multiple sources send similar messages, you can introduce a dispatcher iFlow that routes incoming messages to separate iFlows for each target/system. This separation improves transparency and monitoring since each iFlow represents a distinct interface. (Decoupling flows via asynchronous queues like JMS can further isolate failures and facilitate retries between parts of a process.)
- Reusable Subflows: For common sequences (e.g., data enrichment, calling a common API, or error handling routines), consider creating reusable integration processes or even separate template iFlows that can be invoked via the Process Direct adapter. This avoids duplicating logic across iFlows and centralizes updates to that logic.
- Layout and Readability: Maintain a clean and logical layout in the iFlow editor. Arrange processing steps left-to-right and top-to-bottom in sequence. Use straight connectors and alignment tools (Auto-Layout) to produce a tidy diagram. This visual clarity helps any developer quickly understand the flow. Additionally, make use of labels and annotations: for example, label branches in a router with the condition name, or add notes for complex logic. Clear design and layout will make troubleshooting and future enhancements much easier.
- Externalize Configurations: Design iFlows to be environment-agnostic. Do not hardcode environment-specific details (URLs, credentials, file paths, etc.) inside flows. Instead, externalize these parameters so they can be configured per environment (Dev/QA/Prod) without altering the flow’s logic. For example, define the endpoint URL or API keys as externally configurable parameters. This makes deployments to higher landscapes simpler and less error-prone and adheres to the principle of 12-factor app configuration. Use identified and agreed standards for external variable names, e.g. <SID>CLNT<CLIENT> for SAP systems.
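As an illustration of the externalized-parameter convention above, an environment-agnostic iFlow might expose parameters such as the following. All names and values here are hypothetical examples following the <SID>CLNT<CLIENT> pattern, not agreed Syensqo standards:

```
# Hypothetical externalized parameters for an iFlow calling SAP system ERP, client 100
ERPCLNT100_HOST        = erp-dev.internal.example.com
ERPCLNT100_CREDENTIAL  = ERPCLNT100_BasicAuth
SFTP_TARGET_DIRECTORY  = /inbound/orders
```

Only the configured values change per environment (Dev/QA/Prod); the iFlow logic itself is never touched during transport.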
By adhering to these design principles, you enhance the scalability and maintainability of integrations. The goal is to build flows that are easy to understand, modify, and extend, following solid software design practices adapted to integration scenarios.
Mapping and Transformation
Integrations often require transforming data between formats (XML, JSON, CSV, etc.) or structures. Here are best practices for message mappings and transformations:
- Choose the Right Tool for the Job: Use out-of-the-box transformation tools whenever possible instead of custom code. For structured XML-to-XML transformations, leverage Graphical Message Mapping or XSLT mapping – these are optimized for XML and provide a visual mapping interface. If you need to transform XML to JSON or vice versa, consider using the message mapping step (which supports JSON as target format) or an XML-to-JSON converter if appropriate. Use Content Modifier steps for simple value assignments or constructing small payloads (especially if you need to build a new message body from scratch). Reserve Groovy scripts or JavaScript mappings for cases that cannot be handled with standard mapping functions (for example, complex computations or dynamic logic that the graphical mapping cannot easily express). This approach ensures maintainability – mappings are easier for others to understand and adjust than large script code.
- Keep Mappings Manageable: In graphical mappings, maintain a clean structure. Map only the required fields – use the Filter or Remove contexts functions to drop any unnecessary data early. Keep an eye on mapping complexity: if a single mapping becomes too complicated (e.g., with many functions or if/else logic), evaluate if it should be broken into multiple mapping steps (such as a two-step mapping) or complemented by a script for the complex portion. Simplicity improves performance and clarity.
- Handle Value Mappings and Lookups: Often you'll need to map code values (e.g., country codes, status codes) between systems. Use the Value Mapping artifact for this purpose, which acts as a lookup table for cross-reference values. Populate value-mapping tables with source-to-target value pairs rather than encoding such logic in the mapping script. At runtime, the mapping step can call these by the valueMapping function. This separates configuration from logic. Ensure value mappings are named clearly (for example, using a naming pattern like VM_<SourceSystem>_to_<TargetSystem>_<ValueSet> – e.g. VM_SAPtoLegacy_CountryCodes) to indicate their content. Maintaining these artifacts centrally makes updates easier when values change.
- Test Transformations Thoroughly: Develop mappings with sample payloads from real systems. Use the built-in Mapping Simulator (for graphical mappings) or external tools for XSLT to validate that your transformations work as expected (especially for edge cases or optional fields). This will catch issues early. Ensure to handle exceptions in mapping – for instance, if an unexpected value appears, you might map it to a default or throw a controlled error that can be caught by error handling logic.
- Avoid Unnecessary Complexity in Scripts: If you do use a Groovy or JavaScript script for transformation, keep the script focused and as simple as possible. Do not replicate features available in mapping steps or adapters via scripting. For example, do not use a script to split messages or to perform simple field mappings that the mapping step can handle. Scripts should ideally be small utilities (e.g., custom date conversion, complex string parsing) within an integration flow. Overusing scripts can make maintenance harder and can introduce performance overhead if not carefully written.
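When a lookup genuinely has to happen inside a script rather than a graphical mapping, the Value Mapping artifact can still be consulted through the CPI scripting API rather than hardcoding the pairs. A minimal sketch, assuming a deployed Value Mapping artifact; the agency/identifier names ("SAP"/"Legacy", "CountryCode") and the SourceCountry property are illustrative and must match your artifact:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message
import com.sap.it.api.ITApiFactory
import com.sap.it.api.mapping.ValueMappingApi

def Message processData(Message message) {
    // Obtain the Value Mapping API provided by the CPI runtime
    def valueMapApi = ITApiFactory.getApi(ValueMappingApi.class, null)

    // Look up the target value for a source code; agency/identifier
    // names below are examples and must match the VM artifact
    def sourceCountry = message.getProperty("SourceCountry") as String
    def targetCountry = valueMapApi.getMappedValue(
        "SAP", "CountryCode",       // source agency / identifier
        sourceCountry,              // source value, e.g. "DE"
        "Legacy", "CountryCode")    // target agency / identifier

    message.setProperty("TargetCountry", targetCountry)
    return message
}
```

getMappedValue returns null when no pair is maintained, so the script should fall back to a default or raise a controlled error, in line with the testing guidance above.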
By following these guidelines, you ensure that data mappings are efficient, transparent, and easy to maintain. The key is to use SCPI's rich palette of transformation tools to their strengths and keep custom code to a minimum.
Error Handling/Alerting (open question: FEH or AIF?)
Pending business requirements/NFRs, standard alerting should be in place.
- Exceptions should be handled in all cases.
MPL attachments are an anti-pattern - DO NOT LOG ENTIRE PAYLOADS.
Async
Retry based on business requirements and data/message criticality
For IDoc scenarios, post ALEAUD (ALE audit) status messages back to the sender, including error details.
Catch exceptions for alerting (if not posting error to another application system).
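In an Exception Subprocess, the caught exception is available via the Camel exchange property CamelExceptionCaught, which can be surfaced for alerting. A hedged sketch: the property name is the Camel standard, while the custom header name "ErrorText" and the 250-character cap are illustrative choices:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The exception that routed the message into the Exception Subprocess
    def ex = message.getProperty("CamelExceptionCaught")
    def messageLog = messageLogFactory.getMessageLog(message)
    if (ex != null && messageLog != null) {
        // Surface a concise error text for monitoring/alerting -
        // never attach the full payload
        messageLog.addCustomHeaderProperty("ErrorText",
            (ex.getMessage() ?: ex.toString()).take(250))
    }
    return message
}
```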
Sync
Always return appropriate errors to the calling system - be as informative as possible.
Security
Authentication (endpoints)
Preference order: Certificates > OAuth > Basic Authentication
Data at Rest
Integrations should NOT maintain data at rest.
Credentials Storage
Credentials must be stored in the tenant secure store (Security Material); never hardcode credentials in iFlows or scripts.
Web-Based Tools and LLMs
DO NOT paste company data into public web-based tools (including online JSON/XML formatters) or into LLM-based coding agents (open question: is there a co-pilot sanctioned by Syensqo?).
AI tools need to be used as per current IT policy - https://thehub.syensqo.com/en/syensqoai/introducing-syensqos-ai-usage-policy-key-milestone-our-ai-journey?check_logged_in=1
Documentation
Versioning and Documentation Practices
Maintaining clear version history and documentation for your integration flows is crucial for team collaboration and lifecycle management:
- Versioning of IFlows: SCPI allows you to Save versions of an integration flow. Adopt a practice of saving a new version at every significant change or release. After development and testing of a change is complete, save the iFlow as a version and include a descriptive version comment. The comment should summarize what changed (e.g., “v1.1 – Added error subprocess for handling timeouts” or “v2.0 – Updated mapping for new field X”). This built-in version history helps in tracking changes and rolling back if necessary. It’s also beneficial when multiple developers are involved; everyone can see what the latest changes were. If your team uses an external source control or transport mechanism, ensure the version in CPI aligns with that external tracking.
- Documentation: Every integration artifact should be accompanied by clear documentation, either within the tool or in external documents (or both). Leverage the Description fields in integration flows and packages: the package description can outline the purpose of the group of iFlows, and each iFlow’s description can detail the interface’s scope (e.g., “This iFlow sends newly created Orders from SAP S/4 to Salesforce in real-time”). Provide key information like source/target systems, data format, frequency, and any special logic in that description. This is very helpful for onboarding new developers. For complex scenarios, you can attach documents in the package (SCPI allows adding text or attachments) or provide a link to an internal wiki/SharePoint where detailed design docs reside. For instance, a flow that does a complex mapping might have an attached mapping specification document.
- Maintain an Integration Catalog: As projects grow, maintain a high-level catalog of all integration flows (open question: spreadsheet or LeanIX?) with their versions, owners, and a brief description. This acts as an index for anyone needing to find or update an interface. It’s also useful for impact analysis (e.g., “which interfaces are affected if we change system X?”).
- Coding Standards Documentation: In addition to functional documentation, document any coding standards or guidelines (some of which are covered by this guide) in a central place. For example, have a checklist that every iFlow must have: proper naming, exception subprocess, no hardcoded credentials, etc. This serves as a QA reference before deploying flows. A peer review process can be instituted where one developer reviews another’s iFlow against the checklist.
- Comments in Artifacts: While SCPI’s graphical interface doesn’t have code comments like traditional code, you can insert Annotations (note elements) on the canvas to explain a section of the flow. Use these for any non-obvious logic. In scripts, comment your code generously – explain any algorithm or workaround in the Groovy/JS so that someone reading it later understands the intent.
- Attachment of Test Cases: It can be helpful to save sample input/output files or test cases for each iFlow (perhaps in an associated documentation set or repository, or via a tool such as FIGAF). That way, when modifications are made, developers can re-run those sample cases to ensure nothing broke. While not an out-of-the-box feature of SCPI, this practice of keeping example payloads and expected results is part of good documentation.
By rigorously versioning and documenting, you create a knowledge base that supports long-term maintenance. Junior developers onboarding to the project can refer to previous versions to understand how an iFlow evolved and read the documentation to grasp the design decisions. This reduces dependency on oral knowledge transfer and prevents loss of information when team members roll off.
| ALWAYS | Document inline with description labels - do not leave default names, e.g. "Content Modifier" |
|---|---|
Logging and Traceability
Implement logging thoughtfully to aid debugging and auditability, without compromising performance or security. Traceability ensures you can follow a message’s path and diagnose issues when they arise:
- Use Message Logging Wisely: SCPI offers a Message Log (accessible in monitoring) and a Log Message step (in integration flow). You should log key events or important data points, but avoid logging entire payloads in production flows. Large payload logging can consume resources and possibly expose sensitive data. Never use data stores or giant Groovy scripts to log full payloads for debugging – this is an anti-pattern. If you need to record payload content for troubleshooting, consider writing just a portion (e.g., an order ID, or first 100 characters of a message) or use the built-in Trace feature during development/testing. The trace mode in SCPI, when activated, captures detailed payloads and step-by-step details, which is useful for debugging; however, enable trace only temporarily in non-production environments or for short periods in production if absolutely needed. Remember to turn it off, as leaving trace on will impact performance and potentially write sensitive data to logs.
- Add Business Identifiers: To make tracking easier, incorporate business-specific IDs or correlating keys into your logs. A best practice is setting the SAP standard header SAP_ApplicationID or a similar property with a business identifier (like an Order Number, Employee ID, etc.) early in the flow. This ID will then appear in the message monitoring view, allowing you to search for messages by a functional key. It greatly aids operations teams in finding the exact message related to an incident. For example, if an order integration fails, having the Order ID in a searchable field means support can quickly locate that message in the CPI monitor.
- Correlation for Multi-Step Processes: If your scenario involves multiple iFlows (e.g., one iFlow splits messages and others process the parts), ensure you propagate a correlation ID across them. This could be done by maintaining a common property or using the message ID. SCPI’s message monitoring doesn’t automatically group related messages, so it’s up to the developer to pass along an ID (perhaps in a custom header like X-CorrelationID). That way, logs in different flows can be tied together when analyzing end-to-end processing.
- Use of External Logging/Monitoring: For enterprise projects, consider integrating SCPI with a central logging or monitoring solution. SCPI allows access to message logs via OData APIs – some projects export error logs to an external SIEM or monitoring tool. At minimum, ensure that failed message alerts (discussed in Error Handling) are sent out so someone is notified to check the logs.
- Non-Repudiation and Audit Trails: If required by your scenario, use SAP Cloud Integration’s logging capabilities to store an audit trail of messages (e.g., using the Write Variable step to store checkpoints or using persistent data store entries). For example, you might log an entry when a message is successfully processed (with timestamp, IDs, etc.) to an audit database. This can be useful for critical transactions. However, balance this with performance – writing to external systems for every message can slow throughput.
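The identifier and truncation advice above can be combined in a single small script step. A sketch, assuming the business key arrives in a header named OrderID; that header name, the custom property names, and the 100-character preview limit are all illustrative:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def orderId = (message.getHeaders()["OrderID"] ?: "UNKNOWN") as String

    // SAP_ApplicationID makes the message searchable by business key
    // in the CPI message monitor
    message.setHeader("SAP_ApplicationID", orderId)

    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        messageLog.addCustomHeaderProperty("OrderID", orderId)
        // Log a short preview only - never the entire payload
        def body = message.getBody(String) ?: ""
        messageLog.addCustomHeaderProperty("PayloadPreview", body.take(100))
    }
    return message
}
```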
In essence, log enough information to trace what happened for each message, but not so much that it floods the system or exposes sensitive data. Strive for clear, concise log entries that will make sense to someone troubleshooting the interface later. Good traceability means any message can be followed from start to finish, and issues can be pinpointed quickly.
| BAD | DO NOT LOG ENTIRE PAYLOADS |
|---|---|
| ALWAYS | Log important data points via messageLog.addCustomHeaderProperty |
Naming Standards
Consistent naming conventions are critical for clarity and team collaboration. All integration artifacts (such as packages, iFlows, mapping artifacts, etc.) and variables should have meaningful, self-explanatory names:
- Integration Packages and iFlows: Include context like project or domain, interface details, and direction in the name. For example, a structured format could be: [InterfaceID]_[Direction]_[Description]. This might include source/target system names or message type. E.g. ORD00100230_IN_Vendor_HTTP_SFTP_OrderRequest (meaning an Order interface, inbound from a Vendor system via HTTP to SFTP, handling order request messages). Such naming allows team members to immediately understand an iFlow’s purpose and involved systems.
- Internal IFlow Steps: Adopt clear prefixes for various steps to indicate their type. For instance:
- Message Mapping: Prefix with MM_ followed by source-to-target info. E.g. MM_XMLToJSON_Transform for a mapping converting XML to JSON.
- Content Modifier: Prefix with CM_ and describe the data added/changed. E.g. CM_AddCustomerDetails_BeforeSend to add customer info before sending.
- Groovy Script: Prefix with GS_ plus its function. E.g. GS_ProcessOrderValidation_AfterMapping for a script validating an order after mapping.
- Request-Reply: Prefix with RR_ if using a request-reply step. E.g. RR_GetCustomerInfo to denote a request-reply call to get customer data.
- Variables and Properties: Name message properties, headers, and variables with clear intent and scope. For example, use descriptive keys like OrderID, SourceSystem, TimestampUTC. If using environment-specific parameters, use consistent naming (and consider prefixing or suffixing them to indicate their purpose, such as URL_SAP_ECC for an endpoint URL). Avoid generic names like “Var1” or “temp” which don’t convey meaning.
Rationale: A well-defined naming scheme improves readability and acts as built-in documentation, reducing onboarding time and errors. It allows developers to quickly grasp an artifact’s purpose and eases maintenance/troubleshooting.
General Integration Principles
- Standard > custom
- Simple > complex
- Avoid timer-based polling; prefer event-driven triggers where possible
- Avoid caching data in the integration layer
- ????
API Management
Security Standards
Security is paramount in API development. All APIs should enforce strong authentication and protect sensitive data:
Authentication & Authorization: Leverage API Management’s built-in support for API keys and OAuth 2.0. API keys can be issued via the developer portal and verified on each call to ensure only registered apps access your API. OAuth 2.0 provides token-based security for user or service authentication; SAP API Management can validate OAuth 2.0 tokens (e.g. bearer tokens) via its policies. Whenever possible, use OAuth2 for enhanced security (e.g. user-specific access scopes) and API keys for server-to-server or trial access. If needed, implement Basic Authentication for simple internal use-cases or legacy support, but prefer more secure methods for public APIs.
Client Certificates (mTLS): For high-security integrations, consider mutual TLS. SAP API Management allows you to authenticate clients using X.509 client certificates. In this setup, clients must present a trusted certificate to connect, adding a layer of transport-level security beyond API keys or tokens. Use client-certificate authentication for partner or enterprise systems where certificate management is feasible, ensuring the certificate chain is trusted in API Management.
Data Protection: Never expose sensitive information unnecessarily. Mask or encrypt sensitive data in transit and at rest. All external API traffic must be over HTTPS to encrypt data over the wire. Avoid logging confidential details (like personal data or credentials) in plain text. If such data must be logged or included in responses, apply masking policies to redact values. For example, you might mask credit card numbers or redact personal identifiers in debug logs and responses. Ensure GDPR and privacy compliance by design – only return necessary data fields and anonymize or truncate where appropriate.
Policy Usage (Traffic & Security Policies)
SAP API Management policies provide powerful ways to control and modify API behavior. Apply policies prudently to enforce standards and protect your services:
Rate Limiting & Quotas: Use traffic management policies to prevent abuse and ensure fair usage. For example, apply a rate limit or quota to throttle requests – e.g. 1000 calls per minute or as defined by the API plan. This protects backend services from overload and ensures one client cannot monopolize the API. Quotas can be tied to commercial plans (free tier vs. paid tier) to enforce contractually allowed usage.
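As a sketch of such a traffic policy, a Quota policy in SAP API Management might look like the following. The 1000-calls-per-minute limit and the APIKey identifier header are illustrative, not a Syensqo standard:

```xml
<!-- Hypothetical Quota policy: at most 1000 calls per minute per API key -->
<Quota async="false" continueOnError="false" enabled="true"
       xmlns="http://www.sap.com/apimgmt">
  <Allow count="1000"/>
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <!-- Count usage per consuming application/key -->
  <Identifier ref="request.header.APIKey"/>
</Quota>
```

Limits tied to commercial plans (free vs. paid tier) would use different Allow counts per API product.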
IP Whitelisting / Access Control: Implement IP filtering policies to restrict access to trusted networks or clients. By whitelisting approved IP ranges (or blacklisting known malicious IPs), you add an extra security layer at the gateway. This ensures only calls from allowed sources reach your APIs. For example, an internal API might only accept calls from your company’s VPN IP range. Maintain these lists as your user base changes (e.g. when partners onboard or network ranges update).
Threat Protection: Enable content-level threat protection policies to guard against malicious inputs. JSON and XML Threat Protection policies should be used to enforce limits on payload size, depth, and structure. For instance, you can limit the maximum JSON nesting depth or array length to prevent resource exhaustion attacks. These policies will block or sanitize unexpectedly large or malformed payloads, thwarting attacks like JSON/XML bombs. Additionally, use Regular Expression Protection to detect and block patterns of harmful content (such as SQL injection attempts in inputs).
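A hedged example of a JSON Threat Protection policy enforcing such structural limits; all numeric limits below are illustrative defaults that should be tuned per API:

```xml
<!-- Hypothetical JSON Threat Protection policy limiting payload structure -->
<JSONThreatProtection async="false" continueOnError="false" enabled="true"
                      xmlns="http://www.sap.com/apimgmt">
  <ContainerDepth>10</ContainerDepth>         <!-- max nesting depth -->
  <ArrayElementCount>100</ArrayElementCount>  <!-- max array length -->
  <ObjectEntryCount>50</ObjectEntryCount>     <!-- max keys per object -->
  <StringValueLength>5000</StringValueLength> <!-- max string value length -->
  <Source>request</Source>
</JSONThreatProtection>
```

Requests exceeding any limit are rejected at the gateway before they can reach, and potentially exhaust, the backend.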
Other Policies: Consider other built-in policies to improve API performance and security. For example, use a caching policy to cache frequent responses and reduce load on backends (if the data can be safely cached). Employ message transformation policies (XML<->JSON conversions, mapping) to integrate with different backend formats as needed – though keep transformation minimal to avoid high latency. Use the Assign Message or scripting policies to inject standard headers (e.g. correlation IDs, CORS headers) across all responses for consistency. Keep policy flows as simple as possible; use only those needed for your use-case to maintain performance.
Naming Conventions
Adopt consistent naming conventions for all API Management entities to make them immediately understandable:
API Proxies: Use short, descriptive names that reflect the API’s purpose and (if applicable) its version. For example, an API proxy providing employee data version 1 could be named Employees-v1 (if its base path is /v1/employees). Avoid spaces or special characters; use hyphens or camelCase for readability as needed.
API Products: Name products by grouping or domain. A product that bundles customer-related APIs might be called CustomerAPIs or Customer-Services. The name should clue the consumer into the API domain or business area. Keep it concise and avoid ambiguous terms.
Resource URIs: Use lowercase nouns for endpoint paths, pluralized where appropriate (e.g. /customers/{customerId}/orders). Do not include verbs in resource paths – the HTTP method defines the action. Separate words with hyphens for readability instead of underscores. Good URI naming makes the API self-descriptive; for instance, GET /orders/123 is clearly fetching order 123, and POST /orders creates a new order. Consistent, human-readable resource names improve the developer experience.