This job extracts data from a Salesforce module and uploads the query result to a Google Cloud Storage bucket. You can pass a filter condition if you do not want to perform a full load.

JOB DESCRIPTION

1 - The job first connects to Salesforce and Google Cloud Storage. If one of the connections fails, the job stops and raises an error.

2 - The job extracts data from Salesforce using the specified module name and query condition, and stores the result in a CSV file.

3 - The extracted file is uploaded into the specified Google Cloud Storage bucket.

4 - The temporary file is deleted.
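The query built in step 2 can be sketched as follows. This is an illustrative Python sketch, not the job's actual code: the Talend job generates the SOQL internally in the tSalesforceInput component, and the function name and field list here are hypothetical. An empty condition corresponds to a full load.

```python
def build_soql(fields, module_name, condition=""):
    """Assemble the SOQL query for the given module.

    `module_name` and `condition` play the role of the
    l_LOCAL_MODULE_NAME and l_LOCAL_SF_CONDITION context
    variables described below (illustrative mapping).
    """
    query = "SELECT {} FROM {}".format(", ".join(fields), module_name)
    if condition:
        # A non-empty condition restricts the extraction.
        query += " WHERE " + condition
    return query

# Full load of the Account module:
print(build_soql(["Id", "Name"], "Account"))
# Incremental load, filtered on the last 7 days:
print(build_soql(["Id", "Name"], "Account",
                 "LastModifiedDate >= LAST_N_DAYS:7"))
```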

HOW TO USE THIS JOB?

The job is fully dynamic, so you can:

  • Copy the job from the DATA_OCEAN project and paste it into your project
  • Drag and drop the copied job into your Talend flow
  • Provide the necessary parameters to make the job work.

The parameters you have to provide are listed below.

CONTEXT VARIABLE                 DESCRIPTION
l_LOCAL_SF_USER                  Salesforce user
l_LOCAL_SF_PASSWORD              Password of the Salesforce user
l_LOCAL_SF_TOKEN                 Token associated with the Salesforce user
l_PATHDIR_GCP_SERVICE_ACCOUNT    Full path of the GCP JSON key file
l_LOCAL_PATHDIR                  Full path of the folder where the file will be stored
l_LOCAL_FILENAME                 Filename of the extracted file (it must follow the naming convention rules)
l_LOCAL_MODULE_NAME              Salesforce module name to extract
l_LOCAL_SF_CONDITION             Condition filter applied to the Salesforce extraction
l_LOCAL_CSV_SEPARATOR            Character used as separator in the CSV file
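To illustrate how the file-related variables fit together, the sketch below builds the temporary CSV path and joins one row with the configured separator. The values are hypothetical examples, not defaults shipped with the job.

```python
import os

# Hypothetical example values for some of the context variables above.
context = {
    "l_LOCAL_PATHDIR": "/tmp/extracts",
    "l_LOCAL_FILENAME": "account_extract.csv",
    "l_LOCAL_CSV_SEPARATOR": ";",
}

# Full path of the temporary CSV file the job writes before the upload.
local_file = os.path.join(context["l_LOCAL_PATHDIR"],
                          context["l_LOCAL_FILENAME"])

# One extracted record written with the configured separator.
row = ["001xx0000001", "Acme Corp"]
line = context["l_LOCAL_CSV_SEPARATOR"].join(row)

print(local_file)
print(line)
```

Remember that this file is temporary: step 4 of the job deletes it after the upload to the bucket succeeds.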


  • The default date format is yyyy-MM-dd HH:mm:ss. If you wish to change it (for instance to add milliseconds, or to keep only the date part), edit the schema of the tSalesforceInput component and modify the date pattern there.
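For reference, the Java-style patterns used in the Talend schema map to the following outputs. This Python sketch only demonstrates what each pattern produces for a sample timestamp; the actual change must be made in the tSalesforceInput schema editor, not in code.

```python
from datetime import datetime

# Sample timestamp with a milliseconds component.
ts = datetime(2024, 3, 1, 14, 30, 5, 123000)

# Default Talend pattern "yyyy-MM-dd HH:mm:ss":
default = ts.strftime("%Y-%m-%d %H:%M:%S")

# With milliseconds, i.e. "yyyy-MM-dd HH:mm:ss.SSS" in Java notation:
with_ms = ts.strftime("%Y-%m-%d %H:%M:%S") + f".{ts.microsecond // 1000:03d}"

# Date part only, i.e. "yyyy-MM-dd":
date_only = ts.strftime("%Y-%m-%d")

print(default)    # 2024-03-01 14:30:05
print(with_ms)    # 2024-03-01 14:30:05.123
print(date_only)  # 2024-03-01
```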

