This document provides a technical overview of the solution delivered by the BigData & Analytics team, inspired by the previous analytics project developed by D3S (https://wiki.solvay.com/display/BDA/PCM+-+Predictive+Credit+Management).
It explains the building blocks of the source code and the methodology followed to obtain:
The main business stake is to increase overdue coverage with the existing task force. As of today, dunning and pre-dunning actions focus on the largest outstanding amounts, leaving aside smaller accounts (below a threshold). Pre-dunning also includes additional rules applied by the cash collection teams through a time-consuming manual process.
Cash collection is steered with end-of-month (EOM) KPIs. Although not necessarily representative of the cost of working capital, EOM metrics are relevant because they are fully aligned with the other business steering indicators. Predictive analytics are a way forward, especially to better address the smaller accounts, on which the overdue rate is higher.
Figure 1: Objectives and core principles

Figure 1bis: Functional overview
The predictive solution leverages machine learning technology. A model is first trained on payment history to learn customer behavior from all available characteristics. For new customers, the model infers behavior from the data that is available (country, currency, sector, invoice characteristics, etc.).
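A minimal sketch of this training/inference idea, assuming a scikit-learn random forest (the library named later in this document); all column and file names below are hypothetical:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

history = pd.read_csv("payment_history.csv")  # hypothetical training extract

categorical = ["country", "currency", "sector"]    # known even for new customers
numeric = ["invoice_amount", "payment_term_days"]  # hypothetical invoice characteristics

pipeline = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Hypothetical target: whether the document ended up being paid late.
pipeline.fit(history[categorical + numeric], history["paid_late"])

# A new customer has no payment history, but the shared attributes are
# enough for the model to infer a likely behavior.
new_invoices = pd.read_csv("new_invoices.csv")
scores = pipeline.predict_proba(new_invoices[categorical + numeric])[:, 1]
```

Note that handle_unknown="ignore" lets the encoder cope with categories (e.g. a new country) never seen at training time.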

Figure 2: Machine learning description
Several Talend pipelines cover the whole project during each update iteration. They interact with the various components of the project (SAP BW, SFTP server, Google BigQuery, Google Cloud Storage, Dataiku Data Science Studio, Google Cloud Functions).

Figure 3: List of all Talend pipelines used for the project
Access control and administration of the pipelines are managed by the SBS BDA Analytics team with a limited number of Talend licences.
A simple webapp has been developed to monitor and prioritize cash collections. For UI details, see the documentation in Confluence. The source code is available in the version control tool (Bitbucket repository) and through the Google Cloud SDK with the appropriate user credentials.

Figure 4: Web interface with main features
There are several building blocks in or interacting with the solution:
These building blocks are linked through the functional steps described below.

Figure 5: Building blocks and interactions of the whole solution

Figure 6: Schematic description
Through Data Transfer Processes in SAP BW and SFTP connectors, all raw data retrieved from SAP for the project is moved from the SAP environment to GBQ datasets in the GCP projects corresponding to each environment:
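As an illustration of this transfer, here is a minimal sketch of the GBQ load step, assuming the SAP extracts land as CSV files on Cloud Storage; project, bucket, dataset and table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="pcm-dev")  # hypothetical: one GCP project per environment

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # raw layer replaced on each run
)

load_job = client.load_table_from_uri(
    "gs://pcm-dev-raw/sap_bw/open_items_*.csv",  # hypothetical bucket and path
    "pcm-dev.raw_sap.open_items",                # hypothetical GBQ dataset.table
    job_config=job_config,
)
load_job.result()  # wait for the load to complete
```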
A few details to note when handling and upgrading this workflow:

Figure 7: Schematic description
Several pieces of information are manually entered each month in several tabs of a collaborative spreadsheet. This information is transformed into KPIs for the forecast and strategy computations:
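A minimal sketch of reading one of those tabs, assuming a service account shared on the spreadsheet (via the gspread library); the spreadsheet, tab and column names are hypothetical:

```python
import gspread
import pandas as pd

# Authenticate with a service account shared on the spreadsheet
# (credentials read from the default gspread path).
gc = gspread.service_account()

sheet = gc.open("PCM monthly inputs")                    # hypothetical spreadsheet name
records = sheet.worksheet("targets").get_all_records()   # hypothetical tab name

inputs = pd.DataFrame(records)
# Hypothetical KPI derivation from the manual inputs:
inputs["coverage_ratio"] = inputs["covered_overdue"] / inputs["total_overdue"]
```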
A few details to note when handling and upgrading this workflow:
This step supports the daily and on-demand updates of the master data and transactional data stored in GBQ. Talend communicates with Google Cloud Storage to launch GBQ saved queries that organize the extract-transform-load process.
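A minimal sketch of that mechanism, assuming the SQL files listed in Figure 8 are stored on Cloud Storage; bucket, file and project names are hypothetical:

```python
from google.cloud import bigquery, storage

PROJECT = "pcm-dev"  # hypothetical project id, one per environment

storage_client = storage.Client(project=PROJECT)
bq_client = bigquery.Client(project=PROJECT)

# Download one transformation step from Cloud Storage...
sql = (
    storage_client.bucket("pcm-dev-sql")   # hypothetical bucket
    .blob("update_master_data.sql")        # hypothetical file
    .download_as_text()
)

# ...and run it against GBQ, blocking until the ELT step has finished.
bq_client.query(sql).result()
```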

Figure 8: List of SQL files for Talend transformations
The model is built on Dataiku's Data Science Studio (DSS) platform, a user-friendly interface coded in Python.
This SaaS tool provides several data connectors to external databases and storage, such as Google Cloud Storage and other GCP services.
The model object (a new random forest fitted on the train sample) is computed on demand by a data scientist with a dedicated Python library, scikit-learn. It is stored as a pickle object inside the platform and made available to other Dataiku projects depending on the access policy.
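A minimal sketch of that persistence step, reusing the scikit-learn pipeline from the earlier sketch; in DSS the pickle actually lives in the platform's own storage, so the local file name here is only a stand-in:

```python
import pickle

# Persist the trained estimator; "rf_model.pkl" is a hypothetical name,
# the real object is stored inside the Dataiku platform.
with open("rf_model.pkl", "wb") as f:
    pickle.dump(pipeline, f)

# Any other Dataiku project with read access can reload it later:
with open("rf_model.pkl", "rb") as f:
    model = pickle.load(f)
```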

Figure 9: Model details on Dataiku
Each version of the model is evaluated automatically by the data science platform. The split between the train and test samples is based on the "net due date" variable: the oldest documents are used for training (approximately M-48 to M-7), the newest for testing (M-6 to M-1).
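A minimal sketch of this temporal split, reusing the hypothetical history dataframe from the training sketch above:

```python
import pandas as pd

today = pd.Timestamp.today().normalize()
start = today - pd.DateOffset(months=48)   # oldest documents kept (M-48)
cutoff = today - pd.DateOffset(months=6)   # boundary between train and test (M-6)

docs = history[history["net_due_date"] >= start]
train = docs[docs["net_due_date"] < cutoff]    # oldest documents (~M-48 to M-7)
test = docs[docs["net_due_date"] >= cutoff]    # newest documents (M-6 to M-1)
```

Splitting on time rather than at random avoids leaking future payment behavior into the training sample.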

Figure 10: Accuracy computation on Dataiku
Each working day, at approximately 10:30 for the dev environment and 11:00 for the other environment, the prediction based on document data is released.
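A minimal sketch of what such a daily scoring job can look like, assuming the pickled model and hypothetical GBQ tables; the actual trigger schedule and table names belong to the pipeline configuration:

```python
import pickle

from google.cloud import bigquery

client = bigquery.Client(project="pcm-dev")  # hypothetical project id

# Fetch the open documents to score (hypothetical table).
open_docs = client.query(
    "SELECT * FROM `pcm-dev.reporting.open_documents`"
).to_dataframe()

# Reload the trained model from the pickle produced earlier.
with open("rf_model.pkl", "rb") as f:
    model = pickle.load(f)

features = ["country", "currency", "sector", "invoice_amount", "payment_term_days"]
open_docs["late_payment_score"] = model.predict_proba(open_docs[features])[:, 1]

# Publish the daily predictions back to GBQ for the webapp and dashboards.
client.load_table_from_dataframe(
    open_docs, "pcm-dev.reporting.daily_predictions"
).result()
```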

Figure 11: Daily prediction process
The strategy design process is described below:

Figure 13: Strategy specification
The link to the source spreadsheet:

Figure 14: Strategy daily computation
Here is the link to a useful document for the design:
Each user of the application has access to a specific document portfolio: for a standard user, it is split by region and tied to a dedicated group.
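A minimal sketch of that restriction, with hypothetical field names and a hypothetical admin role that sees everything:

```python
import pandas as pd

def user_portfolio(documents: pd.DataFrame, user: dict) -> pd.DataFrame:
    """Return only the documents a given user is allowed to work on."""
    if user.get("role") == "admin":
        return documents  # hypothetical unrestricted role
    # Standard user: portfolio split by region and tied to a dedicated group.
    mask = (documents["region"] == user["region"]) & (
        documents["group"] == user["group"]
    )
    return documents[mask]
```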

Figure 15: User roles specification
More information is available in the dedicated spreadsheet on the dev environment:

Figure 15bis: Daily update
For more details, please see the documentation: https://docs.google.com/document/d/1HZU0aQw4uoqxS4mkg5f6ybWZS4cYwCr2QJ_2t23fMpE/edit?usp=sharing

Figure 16: Actions archiving
Link to the dashboard: https://datastudio.google.com/u/0/reporting/1l7Utyq5GIaVdRCbMpkJrKIbMjc5OFsEo/page/SPkf

Figure 17: Dashboarding

Figure 18: Update of BW reporting

Figure 19: Maintenance rules
https://docs.google.com/presentation/d/1bKRxVjK6c34jx980FuduR_Hnlb7mZaCDPam5hWNLLvQ/edit?usp=sharing