...
The connection to SAP WP1 is already available in the context g_CNX_SAP_WP1.
Extract data
From ECC, we can only extract information from tables. To get the metadata, it is highly recommended to retrieve it from the SAP connection metadata:
1. Table (tSAPTableInput)
1.1 Retrieve metadata
Metadata > SAP Connection > (Server) > right click and select Retrieve SAP Table
...
1.2 Copy the reference job J020_ECC_Table_to_GCS to the new job
1.3 The only change is on the tSAPTableInput component (its attributes).
Drag and drop the table metadata object onto the new job, or change the schema on the existing component, and ensure that the schema, table name, filter, and number of records are correct.
A filter is required so that only the required data is extracted.
1.4 Click Sync columns to refresh the schema on tLogRow if you want to see the output during the job run; otherwise delete the tLogRow component.
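For illustration, the filter in tSAPTableInput is a WHERE-style condition on the SAP table's fields. A minimal sketch of building such a filter string (the field name ERDAT and the cutoff date are assumptions; SAP stores dates as YYYYMMDD character strings):

```python
from datetime import date

# Hypothetical example: restrict extraction to records created on or after a cutoff date.
# ERDAT (creation date) is an assumed field name, not necessarily present in your table.
cutoff = date(2024, 1, 1)
sap_filter = f"ERDAT >= '{cutoff.strftime('%Y%m%d')}'"
print(sap_filter)  # ERDAT >= '20240101'
```

Keeping the filter tight at this step is what prevents full-table pulls from ECC on every run.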
...
1.6 Enter the context parameters to connect to GCP
| Context variable | Meaning |
|---|---|
| l_VAR_GCS_CSV_TO_STAGING_BUCKETNAME | Name of the GCS bucket containing the extracted file |
| l_VAR_GCS_CSV_TO_STAGING_FOLDER | Local path (folder) to save the output from the extraction |
| l_VAR_FILE | Local filename |
| l_VAR_LIMITROW | Limit on the number of rows per file in the case of tSAPTableInput |
| l_PATHDIR_GCS_CSV_TO_STAGING_SERVICE_ACCOUNT_PATH | Location (path and filename) of the service account JSON file used to access GCS |
The rest of the job extracts the data from SAP, generates the file in the local folder, and then uploads it to the bucket entered in step 1.6.
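The upload step can be sketched outside Talend as well. A minimal Python sketch using the google-cloud-storage client, with the context variables from step 1.6 mapped to plain function parameters (the bucket name, folder, filename, and service account path below are placeholders, not the job's real values):

```python
import os

def gcs_uri(bucket_name: str, filename: str) -> str:
    """Build the gs:// URI the file will have after upload (object name = local filename)."""
    return f"gs://{bucket_name}/{filename}"

def upload_csv_to_gcs(bucket_name: str, local_folder: str, filename: str, sa_json_path: str) -> str:
    """Upload the extracted CSV to GCS and return its gs:// URI. Overwrites any existing object."""
    # Import inside the function so the pure helper above works without the library installed.
    from google.cloud import storage

    client = storage.Client.from_service_account_json(sa_json_path)
    blob = client.bucket(bucket_name).blob(filename)
    blob.upload_from_filename(os.path.join(local_folder, filename))
    return gcs_uri(bucket_name, filename)

if __name__ == "__main__":
    # Placeholder values standing in for the l_VAR_* context variables.
    print(upload_csv_to_gcs("my-staging-bucket", "/tmp/extract", "MARA.csv", "/secrets/sa.json"))
```

Note that upload_from_filename overwrites an existing object of the same name, which is exactly the behaviour the warning below is about.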
The job does not delete existing files in GCS; it overwrites them. Be careful: if a run produces fewer files than the previous one, the leftover files from the earlier run will remain in the bucket and may be wrongly loaded into staging. A better option is to include a timestamp in the filename so that files are never overwritten incorrectly.
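To make that timestamp idea concrete, a small sketch of a run-stamped filename (the naming pattern is an assumption, not the job's actual convention):

```python
from datetime import datetime, timezone

def timestamped_name(table: str, run_time: datetime) -> str:
    """Append a UTC run timestamp to the extract filename so each run writes a new object."""
    return f"{table}_{run_time.strftime('%Y%m%d_%H%M%S')}.csv"

# Each run produces a distinct name, so older extracts are never silently overwritten.
name = timestamped_name("MARA", datetime(2024, 5, 1, 13, 45, 7, tzinfo=timezone.utc))
print(name)  # MARA_20240501_134507.csv
```

The trade-off is that the staging load must then pick the latest file (or a whole date prefix) instead of a fixed name, and old extracts should be cleaned up by a lifecycle rule.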


