
Description

Tools: Talend

Detail job

  • F010_Helix_Incidents
  • F040_Helix_Measurement
  • F050_Helix_Request
  • F030_Helix_ServiceRequestStub
  • F021_Helix_WorkInfo
  • F011_Helix_Worklog
  • F020_Helix_WorkOrder_json

  1. Connect to the source system API, reading the context from the flow job.
  2. Set up a loop to fetch the data (a sketch of the full loop follows this list):
    1. tSetGlobalVar: set the maximum number of records to read per request ("limit") and initialize the variable "nb", which controls when to exit the loop (starting at 0).
    2. tLoop: set the exit condition so that the loop stops when "nb" < 0.
    3. tJava: set the record offset so that each iteration fetches new records.
  3. Fetch data from the source, using "nb" as the start row number and "limit" as the maximum row count. The schema is read from the source metadata.
  4. Generate the output file and save it to DATA\DEV\DATA_OCEAN_DOMAIN_DT\Tmp.
  5. Update the offset: "nb" = "nb" + "limit".
  6. Set "nb" = -1 when ((Integer)globalMap.get("tReplace_1_NB_LINE")) <= 0, in order to exit the loop.
  7. Upload all the files in the folder to the bucket (cs-ew1-prj-data-dm-dt-[dev]-staging).
  8. Delete all the files in the temporary folder from step 4.
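
The paging logic in steps 1-6 can be summarized in plain Java. This is an illustrative sketch only, not the job's actual code: fetchPage is a hypothetical stand-in for the Talend subjob that calls the source API, and the globalMap reads and writes mirror what tSetGlobalVar, tLoop, and tJava do above.

import java.util.HashMap;
import java.util.Map;

public class HelixPagingLoop {

    // Stands in for Talend's globalMap, shared across components.
    static final Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) {
        globalMap.put("limit", 1000); // max records per request (tSetGlobalVar)
        globalMap.put("nb", 0);       // start offset; nb < 0 ends the loop (tSetGlobalVar)

        // tLoop: keep iterating while nb >= 0
        while ((Integer) globalMap.get("nb") >= 0) {
            int nb = (Integer) globalMap.get("nb");
            int limit = (Integer) globalMap.get("limit");

            // Read one page (start row = nb, max rows = limit) and write the temp file.
            int rowsRead = fetchPage(nb, limit);

            if (rowsRead <= 0) {
                globalMap.put("nb", -1);         // step 6: no rows returned, exit the loop
            } else {
                globalMap.put("nb", nb + limit); // step 5: advance the offset
            }
        }
    }

    // Hypothetical stand-in for the subjob that calls the source API and
    // saves the page to DATA\DEV\DATA_OCEAN_DOMAIN_DT\Tmp; returns rows read.
    static int fetchPage(int offset, int limit) {
        return 0;
    }
}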

Flow job

Below is the list of plan names used to invoke the Talend jobs mentioned above.

  • PL_DT_F020_Helix_WorkOrder_json
  • PL_DT_F011_Helix_Worklog
  • PL_DT_F021_Helix_WorkInfo
  • PL_DT_F030_Helix_ServiceRequestStub
  • PL_DT_F050_Helix_Request
  • PL_DT_F040_Helix_Measurement
  • PL_DT_F010_Helix_Incidents


    • Set up meta_run_id and the filename of the output file.
    • Get the last load timestamp from the table STG.incremental_load, controlled by the variable l_VAR_eBatch_PF1_QAPP_INC_LOAD, and configure the incremental-load logic in tJava so that the date from incremental_load is applied to the create or change date field in the source (SAP); a sketch follows this list.
    • Call the detail job, passing parameters such as user/password and the query from the previous point, to perform the incremental load and save the file to GCS.
    • Call the standard job to upload the files from GCS to ODS.
    • If the load succeeds and the parameter l_VAR_heliux_[table_name]_reload = incremental, update the timestamp in the incremental_load table. Any other value means the run is a reload.
    • If everything is OK, update the log.
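
The incremental-versus-reload decision in tJava can be sketched as follows. This is a minimal illustration, assuming a hypothetical getLastLoad() lookup against STG.incremental_load; the qualification field name 'Last Modified Date' and the date format are assumptions, not the job's actual values.

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class IncrementalFilter {

    public static void main(String[] args) {
        // Mirrors l_VAR_heliux_[table_name]_reload; any value other than
        // "incremental" means a full reload with no date restriction.
        String reloadMode = "incremental";

        String qualification;
        if ("incremental".equals(reloadMode)) {
            // Restrict the extract to records created or changed since the last load.
            LocalDateTime lastLoad = getLastLoad();
            String since = lastLoad.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
            qualification = "'Last Modified Date' >= \"" + since + "\"";
        } else {
            qualification = ""; // reload: extract everything
        }
        System.out.println("qualification: " + qualification);
    }

    // Hypothetical stand-in for reading the last-load timestamp from STG.incremental_load.
    static LocalDateTime getLastLoad() {
        return LocalDateTime.of(2024, 1, 1, 0, 0);
    }
}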

Access rights

Source

Format

  • JSON

Destination

Location

  • Bucket = cs-ew1-prj-data-dm-dt-[dev]-staging/xxx
  • DataOcean GCP = prj-data-dm-dt-[env]
  • STG Table names =
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_HD_incidents
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_Measurement
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_ServiceRequest
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_ServiceRequestStub
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_Workinfo
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_worklog
    • prj-data-dm-dt-[env].STG.STG_HLX_0000_0000_F001_I_H_workorder_json
  • ODS Table names =
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_Incidents
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_Measurement
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_ServiceRequest
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_ServiceRequestStub
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_Workinfo
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_worklog
    • prj-data-dm-dt-dev.ODS.ODS_HLX_0000_F001_I_H_workorder_json


  • DPL View names =
    • prj-data-dm-dt-[env].DPL.V_FACT_hlx_work_order
    • prj-data-dm-dt-[env].DPL.V_FACT_hlx_Workinfo
    • prj-data-dm-dt-[env].DPL.V_FACT_hlx_service_request_stub
    • prj-data-dm-dt-[env].DPL.V_FACT_hlx_measurement
    • prj-data-dm-dt-[env].DPL.V_DIM_hlx_incident
    • prj-data-dm-dt-[env].DPL.V_FACT_hlx_ServiceRequest
    • prj-data-dm-dt-[env].DPL.V_DIM_hlx_status

Format

  • Columnar

Sizing


Assessment

How to validate the generated output:

Loading

1.1 Incremental Load

1.2 Full load

1.3 Reloading data

1.4 Plan to schedule

1.5 Timing

The expected average loading time:

Criticality

High/Medium/Low

Logging