Here is the link to the data architecture schema:
Link.
Schema showing the different steps of the application flow, with the data involved at each step.

Description of the data and process involved at each step
This project loads SAP data from the QM, material document, and vendor master data tables into the Industrial Data Ocean, covering all PF1 data, based on the create and modification dates. A view on top then filters the Soda Ash data only.
Note: Soda Ash data exists only in PF1; therefore, this project loads from PF1 only (WP1 is not included).
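As a sketch, the "Soda Ash only" view logic described above amounts to a simple predicate on each PF1 row. The material codes and the `MATNR`-based check below are purely hypothetical; the real selection criteria live in the view itself:

```python
# Hypothetical sketch of the "Soda Ash only" filter applied by the view.
# SODA_ASH_MATERIALS is an assumption -- the real criteria are defined in the view.
SODA_ASH_MATERIALS = {"SA-LIGHT-001", "SA-DENSE-002"}  # hypothetical codes

def is_soda_ash(row: dict) -> bool:
    """Return True when a PF1 row belongs to the Soda Ash perimeter."""
    return row.get("MATNR") in SODA_ASH_MATERIALS

rows = [
    {"MATNR": "SA-LIGHT-001", "WERKS": "PF1"},
    {"MATNR": "OTHER-123", "WERKS": "PF1"},
]
soda_ash_rows = [r for r in rows if is_soda_ash(r)]
```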
Talend
To access SAP, the Talend user RFC_TAL_PF1 is required.
| Table | Name of Table | Create Date | Modify Date |
|-------|---------------|-------------|-------------|
| MCH1 | Batches (if Batch Management Cross-Plant) | ERSDA | LAEDA |
| MCHA | Batches | ERSDA | LAEDA |
| QALS | Inspection lot record (header) | ERSTELDAT | AENDERDAT |
| QAMR | Characteristic results during inspection processing | ERSTELLDAT | AENDERDAT |
| QAMV | Characteristic specifications for inspection processing (average / summary result) | ERSTELLDAT | AENDERDAT |
| QAVE | Inspection processing: usage decision | VDATUM | VAEDATUM |
| MSEG | Document segment: material | CPUDT_MKPF | /BEV2/ED_AEDAT |
| LFA1 | Vendor Master (General Section) | | |
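The create/modify date fields listed above can drive the incremental (delta) selection per table. The field mapping below is taken directly from the table; the function name, the `OR` clause shape, and the assumption that LFA1 (which has no date fields) is loaded in full every run are illustrative, not confirmed by the source:

```python
from datetime import date

# Create/modify date fields per SAP table, taken from the table above.
# LFA1 has no date fields; full load is assumed here.
DATE_FIELDS = {
    "MCH1": ("ERSDA", "LAEDA"),
    "MCHA": ("ERSDA", "LAEDA"),
    "QALS": ("ERSTELDAT", "AENDERDAT"),
    "QAMR": ("ERSTELLDAT", "AENDERDAT"),
    "QAMV": ("ERSTELLDAT", "AENDERDAT"),
    "QAVE": ("VDATUM", "VAEDATUM"),
    "MSEG": ("CPUDT_MKPF", "/BEV2/ED_AEDAT"),
    "LFA1": None,  # no date fields -> full load
}

def delta_where(table: str, since: date) -> str:
    """Build an illustrative WHERE clause selecting rows created or
    modified since the last successful run; empty string means full load."""
    fields = DATE_FIELDS.get(table)
    if fields is None:
        return ""
    created, modified = fields
    d = since.strftime("%Y%m%d")  # SAP stores dates as YYYYMMDD
    return f"{created} >= '{d}' OR {modified} >= '{d}'"
```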
Data Ocean GCP = prj-data-dm-industrial-dev
Product GCP = prj-data-sad-ebatch-ppd
To save the data into the GCP Industrial Data Ocean, a service account is required.
Same as the source
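As a sketch, activating that service account before writing to the Data Ocean project could look like the following; the key-file path and the account itself are placeholders, not values from this project:

```shell
# Placeholder key file -- substitute the real service-account key.
gcloud auth activate-service-account --key-file=/path/to/key.json
gcloud config set project prj-data-dm-industrial-dev
```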
Expected data volume for:
- full process
- incremental process
How to validate that the generated output is valid
Is there an automatic schedule? At what frequency? What is the trigger?
The average time expected for:
- full process
- incremental process
High / Medium / Low
Logging location
Same questions as above.