
Purpose

The guiding principle for programming standards and guidelines at Syensqo is to use what is generally accepted by the industry as “best-practice”, rather than defining a bespoke set of rules. By adopting this approach, it is more likely that developers engaged by Syensqo are already familiar with the “best-practice” approach and can work effectively in the Syensqo environment immediately.

This document’s purpose is to provide developers with the standards and guidance required to develop in Syensqo’s landscape.


Assumptions

All tools required for best-practice-based development are available.

SAP development tools and approaches are continually evolving; this document will evolve with them.

The SAP Development Approach has been understood.



SAP Cloud Integration/SCPI/HCI

Design Principles and Modularization

Design integration flows with simplicity, modularity, and maintainability in mind. Key principles include:

  • Modularize Flows: Avoid monolithic, overly complex iFlows. Break down complex processes into smaller, logical units. Use Local Integration Processes (sub-processes within an iFlow) to encapsulate reusable or distinct logic blocks. This makes the main integration process easier to read and maintain. If an iFlow is becoming lengthy or handling many tasks, it’s a sign to split it up.
  • One Interface, One IFlow: Design each iFlow to handle a single integration interface or a specific sender-receiver pair. If an integration scenario involves multiple target systems, consider using one iFlow per receiver for clarity and fault isolation. Similarly, if multiple sources send similar messages, you can introduce a dispatcher iFlow that routes incoming messages to separate iFlows for each target/system. This separation improves transparency and monitoring since each iFlow represents a distinct interface. (Decoupling flows via asynchronous queues like JMS can further isolate failures and facilitate retries between parts of a process.)
  • Reusable Subflows: For common sequences (e.g., data enrichment, calling a common API, or error handling routines), consider creating reusable integration processes or even separate template iFlows that can be invoked via the Process Direct adapter. This avoids duplicating logic across iFlows and centralizes updates to that logic.
  • Layout and Readability: Maintain a clean and logical layout in the iFlow editor. Arrange processing steps left-to-right and top-to-bottom in sequence. Use straight connectors and alignment tools (Auto-Layout) to produce a tidy diagram. This visual clarity helps any developer quickly understand the flow. Additionally, make use of labels and annotations: for example, label branches in a router with the condition name, or add notes for complex logic. Clear design and layout will make troubleshooting and future enhancements much easier.
  • Externalize Configurations: Design iFlows to be environment-agnostic. Do not hardcode environment-specific details (URLs, credentials, file paths, etc.) inside flows. Instead, externalize these parameters so they can be configured per environment (Dev/QA/Prod) without altering the flow’s logic. For example, define the endpoint URL or API keys as externally configurable parameters. This makes deployments to higher landscapes simpler and less error-prone and adheres to the principle of 12-factor app configuration.

By adhering to these design principles, you enhance the scalability and maintainability of integrations. The goal is to build flows that are easy to understand, modify, and extend, following solid software design practices adapted to integration scenarios.


Mapping and Transformation

Integrations often require transforming data between formats (XML, JSON, CSV, etc.) or structures. Here are best practices for message mappings and transformations:

  • Choose the Right Tool for the Job: Use out-of-the-box transformation tools whenever possible instead of custom code. For structured XML-to-XML transformations, leverage Graphical Message Mapping or XSLT mapping – these are optimized for XML and provide a visual mapping interface. If you need to transform XML to JSON or vice versa, consider using the message mapping step (which supports JSON as target format) or an XML-to-JSON converter if appropriate. Use Content Modifier steps for simple value assignments or constructing small payloads (especially if you need to build a new message body from scratch). Reserve Groovy scripts or JavaScript mappings for cases that cannot be handled with standard mapping functions (for example, complex computations or dynamic logic that the graphical mapping cannot easily express). This approach ensures maintainability – mappings are easier for others to understand and adjust than large script code.
  • Keep Mappings Manageable: In graphical mappings, maintain a clean structure. Map only the required fields – use the Filter or Remove contexts functions to drop any unnecessary data early. Keep an eye on mapping complexity: if a single mapping becomes too complicated (e.g., with many functions or if/else logic), evaluate if it should be broken into multiple mapping steps (such as a two-step mapping) or complemented by a script for the complex portion. Simplicity improves performance and clarity.
  • Handle Value Mappings and Lookups: Often you'll need to map code values (e.g., country codes, status codes) between systems. Use the Value Mapping artifact for this purpose, which acts as a lookup table for cross-reference values. Populate value-mapping tables with source-to-target value pairs rather than encoding such logic in the mapping script. At runtime, the mapping step can call these by the valueMapping function. This separates configuration from logic. Ensure value mappings are named clearly (for example, using a naming pattern like VM_<SourceSystem>_to_<TargetSystem>_<ValueSet> – e.g. VM_SAPtoLegacy_CountryCodes) to indicate their content. Maintaining these artifacts centrally makes updates easier when values change.
  • Test Transformations Thoroughly: Develop mappings with sample payloads from real systems. Use the built-in Mapping Simulator (for graphical mappings) or external tools for XSLT to validate that your transformations work as expected (especially for edge cases or optional fields). This will catch issues early. Ensure to handle exceptions in mapping – for instance, if an unexpected value appears, you might map it to a default or throw a controlled error that can be caught by error handling logic.
  • Avoid Unnecessary Complexity in Scripts: If you do use Groovy or JavaScript for a transformation, keep the script focused and as simple as possible. Do not replicate features available in mapping steps or adapters via scripting. For example, do not use a script to split messages or to perform simple field mappings that the mapping step can handle. Scripts should ideally be small utilities (e.g., custom date conversion, complex string parsing) within an integration flow. Overusing scripts can make maintenance harder and can introduce performance overhead if not carefully written.

By following these guidelines, you ensure that data mappings are efficient, transparent, and easy to maintain. The key is to use SCPI's rich palette of transformation tools to their strengths and keep custom code to a minimum.
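As an illustration of the kind of small utility script recommended above, here is a hedged sketch of a custom date-conversion Groovy step. The `Message` import and `processData` entry point are the standard CPI script skeleton; the property names `OrderDate_Source` and `OrderDate_ISO` are hypothetical and would be set elsewhere in the flow:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message
import java.time.LocalDate
import java.time.format.DateTimeFormatter

def Message processData(Message message) {
    // Read the source-format date (e.g. "20240131") from an exchange property.
    // "OrderDate_Source" is a hypothetical property name for illustration.
    def raw = message.getProperty("OrderDate_Source") as String
    if (raw) {
        // Convert yyyyMMdd to ISO yyyy-MM-dd
        def date = LocalDate.parse(raw, DateTimeFormatter.ofPattern("yyyyMMdd"))
        message.setProperty("OrderDate_ISO", date.format(DateTimeFormatter.ISO_LOCAL_DATE))
    }
    return message
}
```

Keeping the script to a single, named concern like this makes it easy to review and to replace with a standard mapping function if one becomes available.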


Tools (pending Figaf)

  • Automated testing
  • IntelliJ integrated with SCPI / Eclipse integration
  • Inline iFlow editor


Error handling/Alerting (FEH? AIF?)

Pending business requirements/NFRs, standard alerting should be in place.

  • Exceptions should be handled in all cases.


MPL attachments are expensive - DO NOT LOG ENTIRE PAYLOADS.

Async

Retry based on business requirements and data/message criticality.

For IDocs, post an ALEAUDIT back to the sender carrying the error messages.

Catch exceptions for alerting (if not posting the error to another application system).

Sync

Always return appropriate errors to the calling system - be as informative as possible.
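For synchronous scenarios, one common pattern is a Groovy script in an Exception Subprocess that turns the caught exception into an informative response for the caller. This is a sketch only: `CamelExceptionCaught` and `CamelHttpResponseCode` are standard Camel/CPI names, but the JSON response shape is an assumption to adapt to your interface contract:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // CPI stores the causing exception in this exchange property
    def ex = message.getProperty("CamelExceptionCaught")
    def reason = ex ? ex.getMessage() : "Unknown error"
    // Camel header controlling the HTTP response code returned by the sender adapter
    message.setHeader("CamelHttpResponseCode", 500)
    message.setHeader("Content-Type", "application/json")
    // Hypothetical error payload shape; align with the consumer's contract
    message.setBody("""{"status":"error","reason":"${reason}"}""")
    return message
}
```

This keeps the caller from receiving an opaque failure while still allowing the iFlow's alerting to fire on the caught exception.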

Security

Authentication (endpoints)

Certificates > OAuth > Basic

Data at Rest

Integrations should NOT maintain data at rest.

Credentials Storage

Credentials must be stored in the platform's secure store (Security Material); never hardcode them in flows or scripts.
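Where a script genuinely needs a credential, it should be read from the secure store at runtime. A hedged sketch using the CPI Script API (the alias "MyTarget_Credential" and the property names are hypothetical; verify the `SecureStoreService` usage against the current SAP Cloud Integration Script API documentation):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message
import com.sap.it.api.ITApiFactory
import com.sap.it.api.securestore.SecureStoreService

def Message processData(Message message) {
    // Resolve the deployed User Credential by its alias (hypothetical alias)
    def service = ITApiFactory.getService(SecureStoreService.class, null)
    def credential = service.getUserCredential("MyTarget_Credential")
    message.setProperty("targetUser", credential.getUsername())
    // getPassword() returns char[]; convert only if the receiver requires a String
    message.setProperty("targetPassword", new String(credential.getPassword()))
    return message
}
```

Prefer adapter-level credential references over scripting wherever the adapter supports them; scripted retrieval is a fallback, not the default.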

Web-based tools and LLMs

DO NOT paste company data or code into public web-based tools, including JSON/XML data formatters, or into LLM-based coding agents (is there a co-pilot sanctioned by Syensqo?).

AI tools must be used as per the current IT policy - https://thehub.syensqo.com/en/syensqoai/introducing-syensqos-ai-usage-policy-key-milestone-our-ai-journey?check_logged_in=1







Documentation

Inline

ALWAYS document iFlow senders and receivers.


ALWAYS provide an iFlow description.


ALWAYS give flow steps descriptive names, including the sender and receiver systems and internal integration processes (do not leave the defaults).

Confluence

As-built documentation, with references to the MAPPING/FUNC SPEC/TECH OBJECT in SCPI.


Logging and Traceability

Implement logging thoughtfully to aid debugging and auditability, without compromising performance or security. Traceability ensures you can follow a message’s path and diagnose issues when they arise:

  • Use Message Logging Wisely: SCPI offers a Message Log (accessible in monitoring) and a Log Message step (in integration flow). You should log key events or important data points, but avoid logging entire payloads in production flows. Large payload logging can consume resources and possibly expose sensitive data. Never use data stores or giant Groovy scripts to log full payloads for debugging – this is an anti-pattern. If you need to record payload content for troubleshooting, consider writing just a portion (e.g., an order ID, or first 100 characters of a message) or use the built-in Trace feature during development/testing. The trace mode in SCPI, when activated, captures detailed payloads and step-by-step details, which is useful for debugging; however, enable trace only temporarily in non-production environments or for short periods in production if absolutely needed. Remember to turn it off, as leaving trace on will impact performance and potentially write sensitive data to logs.
  • Add Business Identifiers: To make tracking easier, incorporate business-specific IDs or correlating keys into your logs. A best practice is setting the SAP standard header SAP_ApplicationID or a similar property with a business identifier (like an Order Number, Employee ID, etc.) early in the flow. This ID will then appear in the message monitoring view, allowing you to search for messages by a functional key. It greatly aids operations teams in finding the exact message related to an incident. For example, if an order integration fails, having the Order ID in a searchable field means support can quickly locate that message in the CPI monitor.
  • Correlation for Multi-Step Processes: If your scenario involves multiple iFlows (e.g., one iFlow splits messages and others process the parts), ensure you propagate a correlation ID across them. This could be done by maintaining a common property or using the message ID. SCPI’s message monitoring doesn’t automatically group related messages, so it’s up to the developer to pass along an ID (perhaps in a custom header like X-CorrelationID). That way, logs in different flows can be tied together when analyzing end-to-end processing.
  • Use of External Logging/Monitoring: For enterprise projects, consider integrating SCPI with a central logging or monitoring solution. SCPI allows access to message logs via OData APIs – some projects export error logs to an external SIEM or monitoring tool. At minimum, ensure that failed message alerts (discussed in Error Handling) are sent out so someone is notified to check the logs.
  • Non-Repudiation and Audit Trails: If required by your scenario, use the SAP Cloud Integration's logging capabilities to store an audit trail of messages (e.g., using the Write Variable step to store checkpoints or using persistent data store entries). For example, you might log an entry when a message is successfully processed (with timestamp, IDs, etc.) to an audit database. This can be useful for critical transactions. However, balance this with performance – writing to external systems for every message can slow throughput.

In essence, log enough information to trace what happened for each message, but not so much that it floods the system or exposes sensitive data. Strive for clear, concise log entries that will make sense to someone troubleshooting the interface later. Good traceability means any message can be followed from start to finish, and issues can be pinpointed quickly.


MUST use custom header logging so messages are searchable by business key, e.g. messageLog.addCustomHeaderProperty("LABEL", "INFO like IDOC_NUM/MATNR/KUNNR");
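This rule can be sketched as a Log Message Groovy script that combines the custom header property, the SAP_ApplicationID header, and a truncated payload snippet. The `Message` import and the `messageLogFactory` binding are provided by the CPI script environment; the property name `OrderID` and the label names are assumptions for illustration:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)  // CPI-provided binding
    def orderId = (message.getProperty("OrderID") ?: "n/a") as String
    // Makes the message searchable by business key in the monitor
    message.setHeader("SAP_ApplicationID", orderId)
    if (messageLog != null) {
        messageLog.addCustomHeaderProperty("OrderID", orderId)
        // Log at most the first 100 characters of the payload - never the whole body
        def body = (message.getBody(String) ?: "") as String
        messageLog.addCustomHeaderProperty("PayloadSnippet",
            body.length() > 100 ? body.substring(0, 100) : body)
    }
    return message
}
```

Note the null check on `messageLog`: the factory can return null (e.g., in some simulation contexts), and the script must not fail because logging is unavailable.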


Naming Standards

Consistent naming conventions are critical for clarity and team collaboration. All integration artifacts (such as packages, iFlows, mapping artifacts, etc.) and variables should have meaningful, self-explanatory names:

  • Integration Packages and iFlows: Include context like project or domain, interface details, and direction in the name. For example, a structured format could be: [InterfaceID]_[Direction]_[Description]. This might include source/target system names or message type. E.g. ORD00100230_IN_Vendor_HTTP_SFTP_OrderRequest (meaning an Order interface, inbound from a Vendor system via HTTP to SFTP, handling order request messages). Such naming allows team members to immediately understand an iFlow’s purpose and involved systems.
  • Internal IFlow Steps: Adopt clear prefixes for various steps to indicate their type. For instance:
    • Message Mapping: Prefix with MM_ followed by source-to-target info. E.g. MM_XMLToJSON_Transform for a mapping converting XML to JSON.
    • Content Modifier: Prefix with CM_ and describe the data added/changed. E.g. CM_AddCustomerDetails_BeforeSend to add customer info before sending.
    • Groovy Script: Prefix with GS_ plus its function. E.g. GS_ProcessOrderValidation_AfterMapping for a script validating an order after mapping.
    • Request-Reply: Prefix with RR_ if using a request-reply step. E.g. RR_GetCustomerInfo to denote a request-reply call to get customer data.
  • Variables and Properties: Name message properties, headers, and variables with clear intent and scope. For example, use descriptive keys like OrderID, SourceSystem, TimestampUTC. If using environment-specific parameters, use consistent naming (and consider prefixing or suffixing them to indicate their purpose, such as URL_SAP_ECC for an endpoint URL). Avoid generic names like “Var1” or “temp” which don’t convey meaning.

Rationale: A well-defined naming scheme improves readability and acts as built-in documentation, reducing onboarding time and errors. It allows developers to quickly grasp an artifact’s purpose and eases maintenance/troubleshooting.


General Integration Principles

  • Standard > custom: prefer standard adapters, mappings, and content over custom development.
  • Simple > complex: choose the simplest design that meets the requirement.
  • Timers are bad: avoid scheduler/polling-based flows where an event-driven trigger is possible.
  • Data caches are bad: integrations should not maintain state or persist business data.




API Management

Policy Design/Minimum security standard

When authentication is available:

OAuth + API key


When OAuth or basic auth isn't available:

IP whitelist + API key
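A minimal sketch of the API-key check as an SAP API Management policy (Apigee-style). The namespace follows SAP's policy format; the key location (`request.header.apikey`) and the attachment point (proxy endpoint PreFlow) are assumptions to verify against your proxy design:

```xml
<!-- VerifyAPIKey policy sketch: rejects calls without a valid API key.
     Attach in the proxy endpoint PreFlow; header name "apikey" is an assumption. -->
<VerifyAPIKey async="false" continueOnError="false" enabled="true"
              xmlns="http://www.sap.com/apimgmt">
    <APIKey ref="request.header.apikey"/>
</VerifyAPIKey>
```

Combine this with an OAuth verification policy, or with an IP whitelist (Access Control policy) when OAuth is not available, per the minimum standard above.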



