Functional Specification:
Integrated margin was originally calculated only on the WBP side.
The goal is to calculate the integrated margin in CPX (as is done in WBP with program Z_M_INT) using data from WPX and WBP, and after this calculation, to send the result back to WBP.
So, instead of calculating the integrated margin only in WBP, we will use data from both WBP and CPX.
In CPX, programs ZBW_M_INT & ZBW_M_INT_ANALYSIS are a replication and adaptation (in table names, for example) of programs Z_M_INT & Z_M_INT_ANALYSIS in WBP.
Apart from differences in the names of source tables and internal tables, program Z_M_INT (WBP) calculates the key figures Integrated Duty (/bic/k_intdut) and Integrated Freight (/bic/k_intfrg), while program ZBW_M_INT (CPX) does not, because we use data from CPX in WBP to calculate these key figures.
Z_INT_COST is a transaction to display the result of the integrated margin (it is linked to program ZBW_M_INT_ANALYSIS).
In program ZBW_M_INT_ANALYSIS, the calculation of the integrated margin is done in function module ZBW_M_INT_READ:

Tool Leader + IT leader of the application: name of the project or main leader for the application + reporting coordinator
How many users? Which user category? Are there critical periods? Which geographical perimeter? Describe in one sentence, e.g. around 50 users worldwide, or 200 users on a European site.
Provide some history of the application: When was it created? What was the initial project? Who was the original requestor? If possible, provide a link to the project information.
List of application roles + menu roles, with an explanation if there are several application roles with special rules.
| Role Code | Role Description | Explanation |
|---|---|---|
List of authorization objects mandatory for the application.
| Authorization object | Explanation |
|---|---|
DSO correspondence WBP - CPX :
| CPX | WBP |
|---|---|
| ABCOMM01 | ODS_PCP1 |
| ABCOMM02 | ODS_PCP2 |
| ABCOMM03 | ODS_PCP3 |
| ABCOMM04 | ODS_PCP5 |
| ABCOMM05 | ODS_PCP6 |
| ABCOMM06 | ODS_PCP4 |
| ABCOPA01 | DBCOPA24 |
| ABMARM01 | ODS_MARM |
In CPX, program ZBW_M_INT is a replication and adaptation (in table names, for example) of program Z_M_INT in WBP.
We must pay attention to the TVARVC table entry Z_PERIOD_IM_HIST: if the LOW field is empty, the integrated margin is calculated for the current period and the current period - 1.
If the LOW field is filled (format MMMYYY), the chain uses this period to calculate the integrated margin (to be used in case of recalculation); the same rules apply in the WBP system.
In DTP ABCOMM06 (Vault) -> OH_COMM01 - Delta, there is a filter on source of data = WX, to load only data coming from the WPX system.
Dependencies with other applications (BW, QV, external tools like PVelocity or Dynasis, etc.).
See data flow.
Monthly, after the execution of process chain RSP_COPCP_MONTHLY in the WBP system (on the second workday of the month).
PC_CO_PC_MM_04 - MM: META - M - Integrated Margin
To calculate the integrated margin in CPX, we need some information from WBP. To do that, in WBP we split chain RSP_COPCP_MONTHLY in two:
One chain in WBP: RSP_COPCP_MONTHLY loads data in WBP and triggers the chain in CPX (PC_CO_PC_MM_04).
The chain PC_CO_PC_MM_04 loads data from WPX and WBP (calculated in chain RSP_COPCP_MONTHLY) and calculates the integrated margin for the Vault plant.
After that, the data is sent back to WBP with the help of webMethods, and the chain PC_INT_MARGIN (WBP) calculates the integrated margin with data from WBP and CPX.
Step 1: RSP_COPCP_MONTHLY (WBP)
Step 2: PC_CO_PC_MM_04 (CPX)
Step 3: PC_INT_MARGIN (WBP)
See the document describing the changes in the process chain on the WBP side.
PC_CO_PC_MM_04 - MM: META - M - Integrated Margin is composed of three sub-chains:
After these three sub-chains, program ZBW_TVARVC_ZPERIOD updates the TVARVC table entries Z_PERIODE, Z_PERIODE-1 and Z_PERIODE-2.
Then:
If the TVARVC LOW field for Z_PERIODE_IM_HIST is empty, we execute program ZBW_M_INT with the Z_PERIODE1 variant (Z_PERIODE-1 in TVARVC) and the Z_PERIODE_M variant (Z_PERIODE in TVARVC).
Otherwise, if the TVARVC LOW field for Z_PERIODE_IM_HIST is not empty, we execute program ZBW_M_INT for the period defined in the TVARVC LOW field.
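The branching above can be sketched as follows. This is a minimal Python illustration, not the actual ABAP logic, and the YYYYMMM fiscal-period formatting used here is an assumption for the example only:

```python
from datetime import date

def choose_periods(tvarvc_low: str, today: date) -> list:
    """Sketch of the period selection around ZBW_M_INT.

    Periods are rendered here as YYYYMMM strings (an assumption;
    the real TVARVC values may use a different format).
    """
    if tvarvc_low.strip():
        # Z_PERIODE_IM_HIST is filled: recalculate for that single period.
        return [tvarvc_low.strip()]
    # Empty: current period (Z_PERIODE) and previous period (Z_PERIODE-1).
    year, month = today.year, today.month
    prev_year, prev_month = (year - 1, 12) if month == 1 else (year, month - 1)
    return [f"{prev_year}{prev_month:03d}", f"{year}{month:03d}"]
```

For example, with an empty LOW field in January 2024, the sketch returns the two periods 2023012 and 2024001, mirroring the "current and current - 1" rule.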
Program ZBW_M_INT calculates the integrated margin and loads the result into DSO ABCOMM05.
Data from DSO ABCOMM05 is loaded into DSO ABCOMM06 and then into the open hub destination OH_COMM01.
When the open hub is finished, the chain executes program ZBW_REST_SERVICE_IM to send the information that gives webMethods the go-ahead to transfer the file OH_COMM01.CSV (in directory /exploit/BW/integrated_margin) from CPX to WBP.
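As a rough illustration of the kind of information ZBW_REST_SERVICE_IM hands over to webMethods, the sketch below builds such a message in Python. All field names are hypothetical; the real interface is defined by the webMethods flow, and the REST call itself goes to the host configured in RFC destination RFC_MFT_REST:

```python
import json

def build_transfer_message(target_host: str) -> str:
    """Sketch (assumption, not the real webMethods contract) of the
    message that tells webMethods which file to transfer and where.
    Only the directory and file name come from the specification."""
    message = {
        "directory": "/exploit/BW/integrated_margin",  # from the spec
        "filename": "OH_COMM01.CSV",                   # from the spec
        "source_system": "CPX",
        "target_host": target_host,  # hypothetical field
    }
    return json.dumps(message)
```

The point is simply that the program serializes the file location and target details; the actual payload structure must be taken from the webMethods documentation.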
PC_GLOBAL_ATTR_04 - Global: MD - D - Material unit of quantity (DSO)
This chain needs to be launched before the calculation of the integrated margin, to have the latest version of the material unit of quantity in DSO ABMARM01.
Normally, this chain is launched every day.
Program ZBW_REST_SERVICE_IM:
The goal of this program is to transfer some information (directory, hostnames of the source and target servers, ...) to webMethods to trigger the transfer of the flat file from CPX to WBP.
For that, it needs an RFC destination "RFC_MFT_REST" (transaction SM59) where the target host is defined:

Then, in program ZBW_REST_SERVICE_IM (inspired by the program in this documentation), we define the information to send to webMethods, such as:
The tests will be done with the WB1 system instead of WBQ.
To load data from WB1 instead of WBQ in process chain PC_CO_PC_MM_04, there is a decision process where the program reads the master data global filter:


If the LOW field = YES, the process chain uses the DTP with WB1 as source system instead of WBQ.
This variable from the master data global filter is also used in program ZBW_REST_SERVICE_IM to define the target host:

After the tests and the transport to production, it will be necessary to at least set the LOW field to NO. If possible, also remove the decision process and the WB1 sub-chain from process chain PC_CO_PC_MM_04 to make it clearer and avoid mistakes (and comment out the corresponding code in program ZBW_REST_SERVICE_IM).
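The decision described above can be sketched as follows. This is an illustrative Python rendering of the branch, assuming the global filter variable holds YES/NO; the host values are placeholders, not real server names:

```python
def resolve_targets(global_filter_low: str) -> dict:
    """Sketch of the decision process: when the master data global
    filter has LOW = 'YES', the chain uses the WB1 test source system
    instead of WBQ, and ZBW_REST_SERVICE_IM targets the test host.
    Host names below are placeholders."""
    use_test = global_filter_low.strip().upper() == "YES"
    return {
        "source_system": "WB1" if use_test else "WBQ",
        "target_host": "<test-host>" if use_test else "<production-host>",
    }
```

This makes explicit why the LOW value must be reset to NO after the tests: leaving it at YES would keep both the DTP source and the REST target pointed at the test landscape.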
If possible, give some information on average process chain duration, amount of data loaded and total data volume. Example: daily process chain loaded in 30 min, weekly chain loaded in 1h15, with around 2k to 10k lines in DELTA mode, for a total of 10M lines in the cube. The purpose is to give a general overview of the volume of data managed by the application.
| Key Figure | Estimation |
|---|---|
| ~ Average Process Chain Runtime | |
| ~ Average nb of rows loaded per load | |
| ~ Total nb of rows loaded (if full) | |
| ~ Average Runtime for 10k lines | |
Give details if any historization is done. Example: keep only data greater than the beginning of Y-3.
Query end-user documentation should be created in the public "Customer Support Wiki" space under the corresponding BW application page: BW - Application. Technical query documentation, if necessary, should be added as a sub-page of this documentation using the BW Technical Query Documentation template.
List only the most important and complex queries, with a link to the documentation.
Give details on all complex functionalities: list the most important and/or complex KPIs, query jumps, alerts.
Indicate if there are broadcasts and give some details on the broadcast settings.
Give the list of known, unsolved bugs, with an explanation for each.
List recurring procedures.
Detail planned major evolutions if already known. Example: complete decommissioning of the application is planned in 2017 / extension to the Solvay perimeter planned in the 2nd semester of 2016.