General presentation

Objective of the application

Provide predictive information on Credit Management.

 

Tool Leader: David TONDA

IT leader of the application: Guillaume THEVENET

Name of project: PCM Predictive Credit Management

PMO Project: 6958 Big Data for Credit Management

Reporting Coordinator: David TONDA

Usage information

Number of users: tbd

Critical period: none

Geographical perimeter: worldwide

InfoArea:

Process chain Display Component:

History

Provide some history of the application: when was it created? What was the initial project? Who was the original requestor? If possible, provide a link to the project information.

Roles & Access

Aligned with FIAR Roles & Access

Dataflow overview

Summary

The dataflow can be divided into 3 main steps (see the sketch after this list):

  1. Check that no other run is in progress (a), then extract data from BW to the PCM SQL database
    1. Extract master data in full mode to csv files through openhub (b)
    2. Load FIAR and CAMS data into specific DSOs built for the PCM project (DBFIAR20 and DBFIAR21), then extract them to csv through openhub
    3. Launch the ksh script (c) to execute processing on the PCM SQL database side (through an OS Command (d))
  2. Processing on the PCM SQL database side
  3. Load data from the PCM SQL database back into BW
    1. When the OS command is finished (meaning that processing is complete), load the status table (e)
    2. If the code in the status table is 0, load the PCM data into 2 specific DSOs, DPFIAR10 and DPFIAR11; otherwise regenerate the file or set the process chain in error
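
As a purely illustrative outline, the ksh-style sketch below restates these three steps; the real orchestration is a BW process chain, and the function names and bodies are placeholders rather than the actual implementation.

  #!/bin/ksh
  # Hedged outline of one PCM run; function names and bodies are placeholders,
  # only the step descriptions come from the list above.

  step1_extract_from_bw() {
      # (a) check that no other run is in progress, then
      # (b) openhub extracts: master data in full mode, DBFIAR20/DBFIAR21 in delta mode
      # (c)(d) launch pacm-run.ksh on the SQL server through the ZPCM_RUN OS command
      :
  }

  step2_process_on_sql_side() {
      # processing inside the PCM SQL database, driven by pacm-run.ksh
      :
  }

  step3_load_back_to_bw() {
      # (e) load the status table (DSO DPFIAR12); if the code is 0, load DPFIAR10/DPFIAR11,
      # otherwise regenerate the file or set the process chain in error
      :
  }

  step1_extract_from_bw && step2_process_on_sql_side && step3_load_back_to_bw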

 

 

DBFIAR20 loading process

 

Architecture Overview

Reporting documentation drive folder:

https://drive.google.com/drive/folders/0B0qn89R0RGdqYkZZOFZyYXlXVkE

Functional and Technical rules on Workbench + Reporting

Rules & Explanations

The whole process can be summarized as follows:

Program (a)

(created in SE38)

2 programs update the TVARVC variable Z_BW_PCM_PC_STATUS.

The purpose of these programs is to avoid launching the process chain twice if the previous run is still active.

If the process chain is already running, the new run will fail and require deeper analysis (normal duration should be less than one hour; only the init run should exceed this duration).
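
The ABAP code of these programs is not reproduced here. As an analogy only, the ksh sketch below shows the same "fail if a run is already active" pattern with a flag file; the actual programs instead read and update the Z_BW_PCM_PC_STATUS entry in TVARVC.

  #!/bin/ksh
  # Analogy only: the concurrency guard implemented with a flag file instead of TVARVC.
  # The LOCK path is a placeholder.
  LOCK=/tmp/pcm_run_in_progress

  if [ -f "$LOCK" ]; then
      # a previous run is still flagged as active: fail, like the process chain does
      # when Z_BW_PCM_PC_STATUS still says a run is in progress
      print -u2 "PCM run already in progress, aborting"
      exit 1
  fi

  touch "$LOCK"        # first program: mark the run as active at the start of the chain
  # ... extraction and processing happen here ...
  rm -f "$LOCK"        # second program: clear the flag at the end of the chain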

Openhub (b)

7 openhubs have been created for master data and 2 for transactional data.

All master data openhubs are loaded in full mode. All of them extract directly from master data, except the TCURR one, which is connected to a datasource.

All transactional data openhubs are loaded in delta mode (note: delta is not possible with a multiprovider).

 

All openhubs use a "|" separator because some fields already contain ";" in their values and the openhub does not encapsulate (quote) data.

All openhubs use logical file names defined through the FILE transaction. All files are stored in the following folder:
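
Because the extracts are pipe-separated and the openhub does not quote field values, any downstream parsing must split strictly on "|". The one-line awk sketch below illustrates this; the file path and column numbers are placeholders.

  # Minimal sketch: print the first two columns of a pipe-separated openhub extract.
  # ";" characters inside a field are harmless because only "|" is used as separator.
  # File path and column numbers are placeholders.
  awk -F'|' '{ print $1, $2 }' /path/to/openhub_extract.csv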

KSH script (c)

The KSH scripts have been built by the D3S/adagio team.

They are stored in the following folder:

pacm-ping.ksh is the initial test for the PCM project (not used anymore).

pacm-reset_db.ksh allows resetting the SQL database from BW.

pacm-run.ksh is the script which launches processing on the SQL database side. It generates a log in the following folder: /exploit/TA/tmp/${STAMP}_log_run_$server.log

This script expects two parameters:
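
The two expected parameters are not listed on this page; in the hedged sketch below they are simply passed through as positional arguments, the script folder is a placeholder, and the timestamp format used for STAMP is an assumption. Only the script name and the log path pattern come from the description above.

  #!/bin/ksh
  # Hedged sketch of a pacm-run.ksh call; $1/$2 stand for the two parameters
  # (not documented here), and the STAMP format is an assumption.
  STAMP=$(date +%Y%m%d_%H%M%S)
  server=$(hostname)

  /path/to/scripts/pacm-run.ksh "$1" "$2"
  print "log expected at /exploit/TA/tmp/${STAMP}_log_run_${server}.log"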

OS Command (d)

(created in SM49)

The ZPCM_RUN command launches a .bat file which loads the extracted files into the D3S SQL database. It is included in the process chain through the following variant:

This variant has been modified directly in WBP to launch processing on the SQL production server.

The ZPCM_RESET_DB command resets the database. It is not included in the process chain.

Status table (e)

The status table is loaded into DSO DPFIAR12.

C_COMMAND should be used to run either "ZPCM_RUN" or "ZPCM_RESET" depending on the value sent by the SQL database. This functionality is not used: by default the process chain always launches the RUN. If the RESET is required, it needs to be launched manually.

C_CODSTAT can take 3 different values, which are checked in the process chain:

These values are checked in the process chain through an "Enhanced decision" step.
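
The Enhanced decision configuration itself is not shown here. The ksh sketch below only restates the branching described in the dataflow summary (code 0 means the PCM data can be loaded; any other value means regenerating the file or setting the chain in error); the status file path and its layout are assumptions.

  #!/bin/ksh
  # Restates the C_CODSTAT branching from the dataflow summary; the status file path,
  # its column layout and the non-zero handling are assumptions, not the real
  # Enhanced decision configuration.
  C_CODSTAT=$(awk -F'|' 'NR==1 { print $2 }' /path/to/status_extract.csv)

  if [ "$C_CODSTAT" -eq 0 ]; then
      print "OK: load PCM data into DPFIAR10 and DPFIAR11"
  else
      print -u2 "Status $C_CODSTAT: regenerate the file or set the process chain in error"
      exit 1
  fi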

Dependencies with other applications

PCM data are loaded on top of FIAR data.

Data loadings

Info providers and objects loaded

Detail the process chains: list them, the links between them, and any special events used for the loading.

Target folder

Loading frequency

Detail of the frequency: monthly, weekly, or other.

Average performance

If possible, give some information on average process chain duration, amount of data loaded, and total data volume. Example: daily process chain loaded in 30 min, weekly chain loaded in 1h15, with around 2k to 10k lines in DELTA mode for a total of 10M lines in the cube. The purpose is to give a general overview of the volume of data managed by the application.

 

Key Figure                          | Estimation
Average Process Chain Runtime       | ~
Average nb of rows loaded per load  | ~
Total nb of rows loaded (if full)   | ~
Average Runtime for 10k lines       | ~

Record Keeping

DSOs are loaded with full historical data, but only the last 3 years (today minus 3 years) are sent to the SQL server.
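
As an illustration of that cutoff only, the snippet below computes the "today minus 3 years" date; it assumes a GNU date binary and a YYYYMMDD format, neither of which is stated on this page.

  #!/bin/ksh
  # Hedged example: compute the "today - 3 years" cutoff used to filter what is sent
  # to the SQL server. Assumes GNU date; the date format is an assumption.
  CUTOFF=$(date -d "3 years ago" +%Y%m%d)
  print "send only records dated on or after $CUTOFF"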

Reporting

Queries End User Documentation

Query end user documentation should be created in the public "Customer Support Wiki" space under the corresponding BW application page: BW - Application. Technical query documentation, if necessary, should be added as a sub-page of this documentation using the BW Technical Query Documentation template.

 

Main queries

List only the most important and complex queries, with a link to their documentation.

Main functionalities

Give details on all complex functionalities: list the most important and/or complex KPIs, query jumps, and alerts.

Broadcast

All Credit Management broadcasts (and associated PCs) were deactivated at the Business's request.

Maintenance

Known bugs

Give the list of known, unsolved bugs and an explanation for each.

Recurring procedure

List recurring procedures

Planned Evolution

Detail planned major evolutions if already known. Example: complete decommissioning of the application is planned for 2017 / extension to the Solvay perimeter is planned for the 2nd semester of 2016.