The job allows you to extract data from a Salesforce module and upload the query result to a Google Cloud Storage bucket. You can pass a query condition if you do not want to perform a full load.
JOB DESCRIPTION
1 - The job first connects to Salesforce and Google Cloud Storage. If either connection fails, the job stops and raises an error.
2 - The job extracts data from Salesforce using the specified module name and query condition, and stores it in a CSV file.
3 - The extracted file is uploaded to the specified Google Cloud Storage bucket.
4 - The temporary local file is deleted.
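For orientation, steps 3 and 4 roughly correspond to the following plain-Java sketch using the google-cloud-storage client library. The Talend job implements these steps with its own components; the key file path, bucket name, and file path below are illustrative assumptions, not values from the job.

```java
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class GcsUploadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; in the Talend job these come from context variables.
        String keyFile = "/secrets/gcp/service-account.json";             // l_PATHDIR_GCP_SERVICE_ACCOUNT
        String bucket = "my-extract-bucket";                              // target GCS bucket
        Path csvFile = Paths.get("/tmp/extracts/account_extract.csv");    // l_LOCAL_PATHDIR + l_LOCAL_FILENAME

        // Authenticate against GCS with the JSON service account key file.
        Storage storage = StorageOptions.newBuilder()
                .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream(keyFile)))
                .build()
                .getService();

        // Step 3: upload the extracted CSV file to the bucket.
        BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(bucket, csvFile.getFileName().toString())).build();
        storage.create(blobInfo, Files.readAllBytes(csvFile));

        // Step 4: delete the temporary local file.
        Files.delete(csvFile);
    }
}
```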
HOW TO USE THIS JOB?
The job is fully dynamic, so you can:
- Copy the job from the DATA_OCEAN project and paste it into your project
- Drag and drop the copied job into your Talend flow
- Provide the necessary parameters to make the job work.
The parameters you must provide are listed below.
| CONTEXT VARIABLE | DESCRIPTION |
|---|---|
| l_LOCAL_SF_USER | Salesforce user |
| l_LOCAL_SF_PASSWORD | Password of the Salesforce user |
| l_LOCAL_SF_TOKEN | Token associated with the Salesforce user |
| l_PATHDIR_GCP_SERVICE_ACCOUNT | Full path of the GCP JSON key file |
| l_LOCAL_PATHDIR | Full path of the folder where the file will be stored |
| l_LOCAL_FILENAME | Filename of the extracted file (it must follow the naming convention rules) |
| l_LOCAL_MODULE_NAME | Salesforce module name to extract |
| l_LOCAL_SF_CONDITION | Condition filter applied to extract SF data |
| l_LOCAL_CSV_SEPARATOR | Character used as separator in the CSV file |
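For illustration, here is a hypothetical set of values for these context variables, written as Java constants for readability. In Talend they would be entered in the job's Contexts tab or passed through tRunJob; every value below is a made-up placeholder.

```java
// All values are placeholders; replace them with your own environment's settings.
public class ExampleContextValues {
    static final String l_LOCAL_SF_USER = "integration.user@example.com";
    static final String l_LOCAL_SF_PASSWORD = "********";
    static final String l_LOCAL_SF_TOKEN = "<security token>";
    static final String l_PATHDIR_GCP_SERVICE_ACCOUNT = "/secrets/gcp/service-account.json";
    static final String l_LOCAL_PATHDIR = "/tmp/extracts/";
    static final String l_LOCAL_FILENAME = "account_extract.csv";
    static final String l_LOCAL_MODULE_NAME = "Account";
    // SOQL WHERE-clause fragment; leave empty to perform a full load.
    static final String l_LOCAL_SF_CONDITION = "LastModifiedDate >= 2024-01-01T00:00:00Z";
    static final String l_LOCAL_CSV_SEPARATOR = ";";
}
```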
- The default date format is yyyy-MM-dd HH:mm:ss. If you wish to change it (for instance, adding milliseconds or keeping only the date part), modify this value by editing the schema of the tSalesforceInput component.
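These are standard Java date patterns. The sketch below simply prints the job's default pattern next to two common variants, so you can see what each one produces; it is a reference, not part of the job.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatDemo {
    public static void main(String[] args) {
        Date now = new Date();
        // Default pattern used by the job's schema.
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(now));
        // Variant with milliseconds.
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(now));
        // Date part only.
        System.out.println(new SimpleDateFormat("yyyy-MM-dd").format(now));
    }
}
```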
