Describe in a few words the content of the application + link to functional or technical specifications (Google links) produced during the project or linked with PPMO.
Tool Leader + IT leader of the application: name of the project or main leader for the application + reporting coordinator.
How many users? Which user category? Is there a critical period? Which geographical perimeter? Describe in one sentence, e.g. around 50 users worldwide, or 200 users / European site.
Provide some history of the application: when was it created? What was the initial project? Who was the original requestor? If possible, provide a link to the project information.
List of application roles + menu roles, with an explanation if several application roles have special rules.
| Role Code | Role Description | Explanation |
|---|---|---|
| ZR_RCS_CA_M731 | CT - COSTA Transactions | Role giving access to the transaction below: ZBW_ADJ_PERIM_COSTA |
List of authorization objects mandatory for the application.
| Authorization object | Explanation |
|---|---|
Composite providers:
CPCOCT05 (uses calculation view CV_FMCO_CO_CT_UNION_COSTS_TRANSPARENCY_DEST).
Architecture & Data Flow Description for:
10639 - Costs Transparency BW - Data Flow & Calculations Views

Clusters and sub-clusters come from PF1_050, datasource DTS_BW_REPO_BU_CLU_SCLU. This datasource uses function module ZFM_BW_REPO_BU_CLU_SCLU.
We extract data into aDSO APCOCT14 and then load the attributes and texts into master data CPFCTR1_2, C_CLUSTER & C_SUBCLUS (loading done in process chain PC_CT_MD_01).
Goal:
Add in master data C_COSTCTR two attributes: group function and GBU function, extracted from hierarchy Z013ZCBS_GRP (WP1 system). The link between cost center and hierarchy is C_FUNCT_2 (attribute of the cost center, and level 4 in hierarchy Z013ZCBS_GRP).
Steps:
1) Extract the Z013ZCBS_GRP hierarchy into master data C_FUNCT_2 where: (done in process chain PC_CT_MD_01).
2) Load attributes C_GRPFUNC & C_GBUFUNC into master data C_COSTCTR (with a lookup on C_FUNCT_2).
In aDSO APCOCT04 we use data from master data C_MAT_GRP to determine the cost sub package (C_CTSBPKG).
This loading is done in process chain PC_CT_TR_01 (the process is skipped for the moment because values in master data C_MAT_GRP are not correct, so data is loaded by flat file).
In aDSO APCOCT05 we use data from master data C_SEGT to determine the cost sub package (C_CTSBPKG).
This loading is done in process chain PC_CT_TR_01.
In aDSO APCOCT06 we use data from aDSO APCOCT16 (only landscapes ERPSOLV and ERPRCS) to determine the cost sub package (C_CTSBPKG) associated with each cost element.
There is no process chain to load this aDSO for the moment; instead, another transformation and DTP load it from a flat file.
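The three loads above share the same pattern: a lookup of the cost sub package (C_CTSBPKG) in a mapping source keyed by material group, segment, or cost element. A minimal sketch of that pattern in Python; the mapping contents and key values below are hypothetical, only the field names come from the text:

```python
# Hedged sketch: determine C_CTSBPKG by lookup in a mapping table.
# Mapping keys/values are hypothetical placeholders.

def determine_ctsbpkg(records, mapping, key_field):
    """Fill C_CTSBPKG on each record from a lookup keyed by key_field."""
    out = []
    for rec in records:
        enriched = dict(rec)
        # Empty string when no mapping entry exists (record keeps no sub package)
        enriched["C_CTSBPKG"] = mapping.get(rec.get(key_field), "")
        out.append(enriched)
    return out

# Example: an APCOCT05-style load keyed by segment (C_SEGT)
segt_to_subpkg = {"SEG01": "SUBPKG_A", "SEG02": "SUBPKG_B"}  # hypothetical
rows = [{"C_SEGT": "SEG01"}, {"C_SEGT": "SEG99"}]
enriched = determine_ctsbpkg(rows, segt_to_subpkg, "C_SEGT")
```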
Goal: in master data C_COORDER, add three new attributes: C_GEN_REA, C_INV_REA & C_ORD_TYP.
C_GEN_REA & C_INV_REA come from C_WBS_EL2, and C_WBS_EL2 comes from DSOs ABCOCT01 & ABCOCT02.
Main rules:

Start routine:
End routine:
If C_COORDER___0ORDCATEG = 30, feed C_INV_REA, C_GEN_REA & C_ORD_TYP.
Else feed only C_ORD_TYP.
If nothing was found, no change: keep the current attributes in master data C_COORDER.
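The end-routine rule above can be sketched as follows. The lookup structure is illustrative (the real routine reads the attributes via C_WBS_EL2); only the field names come from the text:

```python
# Hedged sketch of the end-routine logic for master data C_COORDER.

def update_coorder(current, lookup):
    """Return updated C_COORDER attributes.
    current: dict of existing master-data attributes incl. '0ORDCATEG'.
    lookup: attributes found via C_WBS_EL2, or None if nothing was found."""
    if lookup is None:
        return dict(current)              # no change: keep current attributes
    updated = dict(current)
    updated["C_ORD_TYP"] = lookup["C_ORD_TYP"]
    if current.get("0ORDCATEG") == "30":
        # order category 30: also feed the investment/general reasons
        updated["C_INV_REA"] = lookup["C_INV_REA"]
        updated["C_GEN_REA"] = lookup["C_GEN_REA"]
    return updated
```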
Goal is to determine "General reason for investment" (C_GEN_REA) from mapping table APCOCT17 for the CO-OM WBS part flow and the Corr flow.
A program, ZBW_COSTA_INV, has been created to upload the file (link):
aDSO:
Process Chain:
Impacted HANA Views:
Example for HANA view CV_FMCO_CO_CT_WBS6_RHODIA: a left join is made on 0LOGSYS and C_INV_REA to get C_GEN_REA:

Within the scope of COSTA data, it is necessary to define which data belongs to COSTA, F2G or ANAPLAN. The idea is to exclude some records that are not in the scope of ANAPLAN or F2G.
https://drive.google.com/file/d/1gm7dEVDqkwav53t70qyF9hPcTdlgGQG1/view?usp=sharing
To do that, it is necessary to define the exclusion rules in a table: ZCOSTA_ANAPLAN.

This table can be maintained with SM30 or transaction ZMAINT_COSTA_ANAPLAN.
We use this table in calculation views CV_FMCO_CO_CT_UNION_COSTS_TRANSPARENCY & CV_FMCO_CO_CT_UNION_COSTS_TRANSPARENCY_DEST.
To apply the exclusion flag for ANAPLAN or F2G:
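A minimal sketch of the exclusion mechanism. The rule structure (one field/value pair per target) is an assumption; the actual columns of ZCOSTA_ANAPLAN are not shown in this document:

```python
# Hedged sketch: flag records out of scope for ANAPLAN or F2G using
# exclusion rules read from a ZCOSTA_ANAPLAN-style table.
# Rule fields and values below are hypothetical.

def flag_exclusions(record, rules):
    """Return the set of targets ('ANAPLAN', 'F2G') the record is excluded from."""
    excluded = set()
    for rule in rules:
        # A rule matches when the record's field holds the excluded value
        if record.get(rule["field"]) == rule["value"]:
            excluded.add(rule["target"])
    return excluded

rules = [{"field": "C_COSTELM", "value": "999999", "target": "ANAPLAN"},
         {"field": "C_COSTELM", "value": "999999", "target": "F2G"}]
flags = flag_exclusions({"C_COSTELM": "999999"}, rules)
```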





The Level 5 responsible cost center is retrieved in CV_FMCO_CO_CT_UNION_COSTS_TRANSPARENCY & CV_FMCO_CO_CT_UNION_COSTS_TRANSPARENCY_DEST.

1 - Projection of aDSO APCOCT18, where the L5 values are stored.
2 - Join with COSTA (CO_AREA, C_RESP_CC) to get CALDAY (calculated as the first day of the COSTA period) from the COSTA data on the same line as the L5.
3 - Projection of L5 with a filter on the L5 date-from and date-to, using CALDAY.
4 - Rank to take only one L5 per CO_AREA, C_RESP_CC and date.
5 - Join to add the L5 to the COSTA table, joining on CO_AREA, C_RESP_CC and CALDAY.
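The five steps above can be sketched as follows. Field names come from the text; the FISCPER format, the tie-break rule in the rank step, and the record shapes are assumptions:

```python
# Hedged sketch: attach the Level 5 responsible cost center to COSTA
# rows by CO_AREA, C_RESP_CC and CALDAY (first day of the COSTA period).
from datetime import date

def first_day(fiscper):
    """First day of a COSTA period, e.g. '2022004' -> date(2022, 4, 1)."""
    return date(int(fiscper[:4]), int(fiscper[-3:]), 1)

def add_l5(costa_rows, l5_rows):
    out = []
    for row in costa_rows:
        calday = first_day(row["FISCPER"])            # step 2
        matches = [l5 for l5 in l5_rows               # steps 1 + 3
                   if l5["CO_AREA"] == row["CO_AREA"]
                   and l5["C_RESP_CC"] == row["C_RESP_CC"]
                   and l5["DATE_FROM"] <= calday <= l5["DATE_TO"]]
        # step 4: rank, keep a single L5 per key (here: latest DATE_FROM)
        best = max(matches, key=lambda m: m["DATE_FROM"], default=None)
        out.append({**row, "L5": best["L5"] if best else ""})   # step 5
    return out
```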
In calculation views:
| CV_FMCO_CO_CT_COOM_DEST_RHODIA |
| CV_FMCO_CO_CT_COOM_DEST_SOLVAY |
| CV_FMCO_CO_CT_COMPOSITE_RHODIA |
| CV_FMCO_CO_CT_COMPOSITE_SOLVAY |
| CV_FMCO_CO_CT_AT_DEST_COR |
| CV_FMCO_CO_CT_AT_DEST_SAMPLE |
| CV_FMCO_CO_CT_COST_NON_ERP |
| CV_FMCO_CO_CT_COPA_COMPO_RHODIA |
| CV_FMCO_CO_CT_COPA_COMPO_SOLVAY |
The global GBU is calculated as a function of C_MAGNITU or C_RESP_CC:
if there is no GBU in the attributes of C_MAGNITU (destination GBU), we use the GBU from C_RESP_CC (origin GBU).
Rule in calculation view: if(isnull("C_MAGNITU__CPFCTR1_2") OR "C_MAGNITU__CPFCTR1_2" = '',"C_RESP_CC___CPFCTR1_2","C_MAGNITU__CPFCTR1_2")
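The same fallback rule, expressed in Python for clarity (a direct transcription of the calculation-view expression above):

```python
# Global GBU fallback: destination GBU from C_MAGNITU, else origin GBU
# from C_RESP_CC when the destination attribute is null or empty.

def global_gbu(gbu_magnitu, gbu_resp_cc):
    if gbu_magnitu is None or gbu_magnitu == "":
        return gbu_resp_cc       # origin GBU (C_RESP_CC attribute)
    return gbu_magnitu           # destination GBU (C_MAGNITU attribute)
```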
Get the ECC Sample COGS table via a specific datasource and replicate it on the BW side (Bex COSTA):


Needs column "S" of "Step 1 - Axis , Filters and authorization excel file":


The aDSO has been created:
Rules in the start routine:
SITER field determination, using DSO DPCOMPCD (DSO: Company Code Specific Attributes Purchasing) with the company code,
and 0PLANT master data with the plant.

In the target part, for master data C_SITER:
first get C_SITE data from 0PLANT (internal table IT_PLANT), and if no data is found get C_SITER from DSO DPCOMPCD (internal table IT_SITER).
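The two-step lookup above can be sketched as follows; the internal tables are modelled as plain dicts, and the key values are illustrative:

```python
# Hedged sketch of the SITER determination: try C_SITE from 0PLANT by
# plant first; if not found, fall back to C_SITER from DPCOMPCD by
# company code.

def determine_siter(plant, comp_code, it_plant, it_siter):
    """it_plant: plant -> C_SITE (0PLANT data);
    it_siter: company code -> C_SITER (DPCOMPCD data)."""
    site = it_plant.get(plant)              # first lookup: by plant
    if site:
        return site
    return it_siter.get(comp_code, "")      # fallback: by company code
```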

For the DTP: full loading.
The process chain PC_CT_TR_SAMPLE gets data from ECC and loads aDSO APCOCT08.
This process chain has been put into the COSTA global process chain PC_CT_META_01 and loads every day.
The aDSO is used in calculation view CV_FMCO_CO_CT_SAMPLE, which gets several fields via left joins.
There are calculated columns in projection P_FORMAT:


This calculation view is used in composite provider CPCOCT01.
Goal: load CYTEC Non-ERP data (Bex COSTA) for cost center and cost element to:
Needs column "Y" of the "Step 1 - Axis, Filters and authorization" Excel file:

Loading is done by program ZBW_CYTEC_CBS, which downloads a CSV file to an AL11 directory:
3 process chains are used to load NERP data into the cube and 2 master data objects:
PSA deletion is done by program within the process chain:
NERP BW Flow:

The DSO is used in calculation view CV_FMCO_CO_CT_NERP_COMPO, which gets several fields via left joins.
There are calculated columns in projection P_FORMAT:


This calculation view is used in composite provider CPCOCT01.
Goal is to upload budget COSTA data via flat file and import it into BW (Bex COSTA) with program ZBW_COSTA_BUDGET.
Needs column "AF" of the "Step 1 - Axis, Filters and authorization" Excel file:

Template of the Budget flat file:
An aDSO has been created to get Budget data from the AL11 folder:
An event has been created for data loading on the BW side:
The process chain is:
Budget BW flow:

aDSO APCOCT12 is used in calculation view CV_FMCO_CO_CT_BUDGET_FLOW, which gets several fields via left joins.
There are calculated columns in projection P_FORMAT:


This calculation view is used in composite provider CPCOCT01.
Goal is to get data from DSO DBFIPA10 (CAPEX Rhodia - FIAP Lease) and DSO DBFIPA11 (CAPEX Solvay - FIAP Lease).
There is no specific program, but there are calculation views with several left joins to get several fields.
Needs columns "V" and "W" of the "Step 1 - Axis, Filters and authorization" Excel file:

The DSOs are used in calculation views CV_FMCO_CO_CT_WCFI_LEASE_COMPO_RHODIA and CV_FMCO_CO_CT_WCFI_LEASE_COMPO_SOLVAY, which get several fields via left joins.


There are calculated columns in projection P_FORMAT:

This calculation view is used in composite provider CPCOCT01.
Goal is to apply actual and quantity corrections in the COSTA flow.
Needs column "AD" of the "Step 1 - Axis, Filters and authorization" Excel file:

Template of the file for program ZBW_CORR_FLOW (transaction ZBW_COR_COSTA):
https://docs.google.com/spreadsheets/d/14Yol6G9Uj0CoYze8LkHSB7fmwTloiyHb/edit?rtpof=true
Loading is done by program ZBW_CORR_FLOW, which downloads a CSV file to an AL11 directory:
An aDSO has been created:
No event has been created for data loading on the BW side.
The process chain is:
Corr Flow BW flow:

aDSO APCOCT07 is used in calculation view CV_FMCO_CO_CT_CORR_FLOW, which gets several fields via left joins.
There are calculated columns in projection P_FORMAT:


This calculation view is used in composite provider CPCOCT01.
Goal: identify with a flag all CO documents (at header level) linked to the PCARD flow (COOM flow, Solvay and Rhodia).
The PCARD GL account is put in table C_GLBFILT to determine the CO documents related to the PCARD GL account; the GL account can be changed:

There are 2 HANA views, CV_FMCO_CO_CT_COOM_SOLVAY and CV_FMCO_CO_CT_COOM_RHODIA, to determine the PCARD flag:

Goal: before the Origin/Destination process, replace the partner object for 0NWA postings with the WBS object.
Only for PF1 Solvay.
To do this, the following datasources have been used:
2 master data objects have been created:




DTPs have been created on ODP.
Transformations use direct mapping, without transformation rules.

Loading is done in process chain PC_CT_MD_01 (daily loading).
Impacted HANA views:


A left join is made between source table ODS6CCA9 and master data (MD) C_NETACT, joining on:
PIOVALUE (network value)
PIOBJSV (type of partner, here 0NWA)

A substitution is made (0NWA "Network" → 0POS "WBS") if, in the source (ODS6CCA9), PIOBJSV = "0NWA" and DB_CR_IND = "H".
Calculated columns for the J_NET_TO_WBS node of the view:



Take only PIOBJSV = 0NWA from the source table; if there is a match on LOGSYS and PIOVALUE (network value), get the WBS from C_NETACT and put 0POS in PIOBJSV.
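The substitution logic can be sketched as follows; C_NETACT is modelled as a dict keyed by (LOGSYS, network value), and the sample values are illustrative:

```python
# Hedged sketch of the 0NWA -> WBS partner-object substitution: for
# source rows (ODS6CCA9-style) with PIOBJSV = '0NWA' and DB_CR_IND = 'H',
# look up the WBS in C_NETACT by (LOGSYS, PIOVALUE), replace the partner
# value and switch PIOBJSV to '0POS'.

def substitute_nwa(rows, c_netact):
    """c_netact maps (logsys, network_value) -> WBS element."""
    out = []
    for row in rows:
        new = dict(row)
        if row["PIOBJSV"] == "0NWA" and row["DB_CR_IND"] == "H":
            wbs = c_netact.get((row["LOGSYS"], row["PIOVALUE"]))
            if wbs:                        # substitute only on a match
                new["PIOVALUE"] = wbs
                new["PIOBJSV"] = "0POS"
        out.append(new)
    return out
```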
Result after join with C_NETACT:

Result before join with C_NETACT:

Goal is to determine the Responsible Cost Center with a complex condition for the CO-PA and FI flows.
This complex condition has been translated into a technico-functional algorithm:

Datasource with AL11 file link:

The file has been created and downloaded with program ZBW_COSTA_RCC (template: https://docs.google.com/spreadsheets/d/1vW3BondHlMdE8slNNhHS8CnsfREMiBwN/edit#gid=2033384317), and the AL11 file data is loaded into aDSO APCOCT15 with process chain PC_CT_RCC_01.
Impacted HANA Views:
Example with HANA view CV_FMCO_CO_CT_COPA_COMPO_RHODIA (same development for the other HANA views):
To determine the "Responsible Cost Center" there are 6 steps:
Step 1:
Step 2:

Step 3:

Step 4:
Before aggregation:

After aggregation:

Step 5:

Step 6:
Granularity is the same between the source node and the semantics node.
Result:

Goal is to get the user fields of table PRPS on the ERP side:
Data sources are PF1 and WP1:
BW side:

There are 2 datasources: 1 for PF1 and 1 for WP1.
There are 2 transformations: 1 for PF1 and 1 for WP1.
There are 2 DTPs: 1 for PF1 and 1 for WP1.
There is 1 infosource (1 transformation) to process the 2 flows.
aDSO APCOCT19
Rules:
In the 2 transformations (PF1 and WP1) we populate:

Infosource:

In the infosource transformation, a process has been made to populate fields C_RCCBILL, C_GBUBILL, C_COABILL for WP1 and PF1 together.

3 selects are made in the start routine:
A read on table C_WBS_EL2 is made using the target field, and the result is used in the appropriate fields.
Loading is done in the daily PC_CT_MD_01 process chain.
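The per-row rule of the infosource transformation can be sketched as follows. The text only names the target fields C_RCCBILL, C_GBUBILL and C_COABILL; the attribute names read from C_WBS_EL2 below are assumptions:

```python
# Hedged sketch: read attributes for the target C_WBS_EL2 value from
# a preselected lookup table and feed the billing fields.
# Attribute names (C_RESP_CC, C_GBU, CO_AREA) are illustrative.

def fill_billing_fields(result_row, wbs_attrs):
    """wbs_attrs: C_WBS_EL2 value -> dict of its master-data attributes."""
    attrs = wbs_attrs.get(result_row["C_WBS_EL2"], {})
    row = dict(result_row)
    row["C_RCCBILL"] = attrs.get("C_RESP_CC", "")
    row["C_GBUBILL"] = attrs.get("C_GBU", "")
    row["C_COABILL"] = attrs.get("CO_AREA", "")
    return row
```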

Composite Provider CPCOCT08 allows users to compare current data with a weekly/on-demand snapshot saved in ADSO ABCOCT07.

ABCOCT07 is loaded from Composite Provider CPCOCT09, a copy of CPCOCT01 without Input Parameters. The Input Parameters "Fiscal Period From" and "Fiscal Period To" are restored in Calculation View CV_FMCO_CO_CT_ABCOCT07, to facilitate the union between Snapshot data and Current data.


Each Snapshot is timed and dated, and named following the pattern "Pyymmdd". Snapshots past a certain time (in months, stored in C_GLBFILT 001 for Stream COSTA and Rule SNAPSHOT) are deleted from history.
A snapshot is taken every Saturday through Process Chain PC_CO_CT_01, but one can also be taken manually through Transaction ZBW_COSTA_SNAP.
Users can access the data through query BW_QRY_CPCOCT08_0001 "COSTA - Snapshot vs Current Comparison (Core Query)".
Should snapshot data appear to be doubled/tripled, display C_TIMEP (Snapshot Time) to check that multiple snapshots were not taken the same day, which would explain that result in the query.
Note: Data should be loaded through DTP without "SAP HANA Extraction" checked. That check changes the way data is extracted from CPCOCT09 and generates erroneous totals for K_INTRAT.
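The retention rule (delete snapshots older than a number of months, with the threshold maintained in C_GLBFILT for Stream COSTA / Rule SNAPSHOT) can be sketched as follows; the comparison by calendar month is an assumption:

```python
# Hedged sketch of snapshot retention: snapshots are named 'Pyymmdd';
# those whose month is more than keep_months back are deleted.
from datetime import date

def snapshot_date(name):
    """Decode a snapshot name 'Pyymmdd', e.g. 'P220305' -> date(2022, 3, 5)."""
    return date(2000 + int(name[1:3]), int(name[3:5]), int(name[5:7]))

def snapshots_to_delete(names, today, keep_months):
    """Snapshot names whose calendar month is more than keep_months back."""
    cutoff = today.year * 12 + today.month - keep_months
    return [n for n in names
            if snapshot_date(n).year * 12 + snapshot_date(n).month < cutoff]
```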
Anaplan sends its Budget and Best Estimate data, both for Origin and Bridge Destination, to a folder on Application Server: /export/BW/Anaplan_inbound.
Origin data is collected through ADSOs APCOCT21 and ABCOCT08.

Bridge Destination data is collected through ADSOs APCOCT22 and ABCOCT09.

Both dataflows are loaded in parallel using the same Process Chains, one for Budget data and one for Best Estimate
| Process Chain | Process Chain Description | Period |
|---|---|---|
| PC_CO_CT_05 | COSTA - TD - Anaplan Best Estimate Destination | Monthly on 13th workday of the month in Calendar 50 |
| PC_CO_CT_06 | COSTA - TD - Anaplan Budget Destination | Yearly on 13th workday of July in Calendar 50 |
Users can request to have either chain running in addition to these default periods.
For Budget, initial load is done in July, with corrective/definitive values entered between September and November (daily updates required during that period).
Each source Flat File is assigned to a C_SRC_ANA (Source Anaplan) value, which is used to identify the dataflow, and offers a single KF for the Propagation ADSO associated.
For example, DTS_ANAPLAN_BESTIMATE_ADJ, DTS collecting adjustments for Origin Best Estimate, initiates KF K_ADJ_BESTIM in APCOCT21, with C_SRC_ANA = 'SRC_ADJ_BESTIM'.
The most important line of code at this level is the following; it is identical in all transformations to APCOCT21/22, except for the name of the KF and the values of C_SRC_ANA (see above) and C_VERSN2 (F01 for BE, 000 for Budget):


This code sets all existing entries from the same source to Flag Obsolete = 'X', while the transformation loads actual data with Flag Obsolete = ''. Thus, all data deleted from the source file is marked as obsolete in the Propagation ADSO once the load is complete. The transformation of the ADSO onto itself (APCOCT21 → APCOCT21 and APCOCT22 → APCOCT22) sets RECORDMODE = D on all obsolete entries, so they are removed from the Business level when the Delta load runs.
The quarters of the period to be flagged as obsolete in the Propagation layer (and then to be deleted) are manually maintained in the Global Filter: one for Budget (COSTA/PERIOD_000) and one for BE (COSTA/PERIOD_F01).
These values must be manually adapted when changing the loaded year.
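The obsolete-flag mechanism described above can be sketched as follows. The ADSO is modelled as a dict keyed by record key; the KEY and FLAG_OBSOLETE column names are illustrative:

```python
# Hedged sketch: flag all existing rows of a source (C_SRC_ANA) as
# obsolete, then overwrite with the freshly loaded rows (Flag
# Obsolete = ''). Rows missing from the new file stay flagged 'X' and
# are later sent with RECORDMODE = 'D' by the ADSO self-transformation.

def refresh_source(existing, incoming, src_ana):
    table = {}
    for row in existing:
        flag = "X" if row["C_SRC_ANA"] == src_ana else row["FLAG_OBSOLETE"]
        table[row["KEY"]] = {**row, "FLAG_OBSOLETE": flag}
    for row in incoming:                      # fresh data is never obsolete
        table[row["KEY"]] = {**row, "C_SRC_ANA": src_ana, "FLAG_OBSOLETE": ""}
    return list(table.values())
```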


For each monthly run, the Best Estimate history is saved by moving existing data from version F01 to F02. Whether this save is done depends on the value of the Global Filter COSTA/RELOAD_F02 above: Low Value should be 'Y' by default, changed to 'N' when manually reloading Best Estimate data.
Dependencies with other applications (BW, QV, external tools like PVelocity or Dynasis, etc.):
1) Update TVARV table
→ SM30 → TVARV
In the parameters, check that the current run equals 0 and the maximum number of runs equals 7, and set init mode to X.

In the selection option, if necessary change the date (by default it is the current month - 1):

2) Execute process chain PC_CT_DEST_SCHE_01 (with transaction SE37 → RSPC_API_CHAIN_START).
Detail of the process chains: list + links between them, or special events used for the loading.
Detail of frequency: monthly, weekly or other.
If possible, give some information on the average process chain duration, the amount of data loaded and the total data volume. Example: daily process chain loaded in 30 min, weekly chain loaded in 1h15, with around 2k to 10k lines in DELTA mode for a total of 10M lines in the cube. The purpose is to give a general overview of the volume of data managed by the application.
| Key Figure | Estimation |
|---|---|
| ~ Average Process Chain Runtime | |
| ~ Average nb of rows loaded per load | |
| ~ Total nb of rows loaded (if full) | |
| ~ Average Runtime for 10k lines | |
Give details if any historization is done, example: keep only data later than the beginning of Y-3.
Query end-user documentation should be created in the public "Customer Support Wiki" space under the corresponding BW application page: BW - Application. Technical query documentation, if necessary, should be added as a sub-page of this documentation using the BW Technical Query Documentation template.
List the most important and complex queries only with a link to the documentation
Give detail on all complex functionalities: list most important and/or complex KPI, query jump, alerts
Indicate if there are broadcasts and give some details on the broadcast settings.
Give the list of, and an explanation for, the known, unsolved bugs.
List recurring procedures
1) Process Chain Budget will likely be modified to run daily, with test process to either execute dummy program or complete PC run depending on a Global Filter to be set manually OR through event when new file is added to AL11.
2) The way Budget adjustments are handled, with Top-Down and Bottom-Up adjustments on multiple levels, will be updated and simplified later this year (2022).