Each operation can be configured with options such as when the operation will time out, what to log, and the timeframe for debug logging. Depending on the components used in an operation, it may also have options for whether a subsequent operation runs and whether to use chunking.
Accessing Operation Options
- Project Pane: In the Workflows or Components tab of the project pane, hover over an operation name and click the actions menu icon to open the actions menu. From the menu, select Settings to open the operation settings.
- Design Canvas: In the top right of an operation, click the actions menu icon to open the actions menu. From the menu, select Settings to open the operation settings.
Once the operation settings screen is open, select the Options tab:
Configuring Operation Options
Each option available within the Options tab of the operation settings is described below.
Operation Time Out: The operation timeout is the maximum amount of time the operation will run before being canceled. In the first field, enter a number, and in the second field use the dropdown to select the units: Seconds, Minutes, or Hours. By default, this is set to 2 hours.

TIP: You may want to increase the timeout value if the operation processes large datasets that take a long time to run, or decrease it if the operation is time-sensitive; that is, you do not want the operation to succeed if it cannot complete within a certain timeframe.
What to Log: Use the dropdown to select logging of Everything or Errors Only. By default, everything is logged, including success, canceled, pending, running, and error statuses. When only errors are logged, note that successful child operations are not displayed in operation logs. Parent (root-level) operations are always displayed in the logs because they require logging to function properly. The operation logs are available within the Cloud Studio operation log screen (see Operation Logs) and the Activities page of the Management Console.

TIP: You may want to limit the log to errors only if you are having operation latency issues. If you do not plan to use the non-error messages normally filtered out of the operation log, this prevents them from being generated in the first place.

WARNING: For security, logs from agents in a Cloud Agent Group are removed as soon as they are no longer required for an operation to execute properly. In addition, downloading of operation logs through the Management Console Activities page is disabled on agents in a Cloud Agent Group.
Enable Debug Mode Until: Debug logging allows you to log debug messages to files on a Private Agent. This option is used mainly for debugging problems during testing and should not be turned on in production.
To turn on debug mode, select the checkbox and specify a date on which debug mode will automatically be turned off. This date is limited to 2 weeks from the current date. Debugging will be turned off at the beginning of that date (that is, 12:00 a.m.) using the time zone of the Private Agent. The debug log files are available from the Management Console on the Agents > Agents and Activities pages.

TIP: Selective debug logging can help if you are having issues with a particular operation and do not need to turn on debug logging for the entire project, which can create very large files within the directory.

NOTE: While this option is used to enable debug mode for a single operation, debug logging can also be enabled for each API or for the entire Private Agent (see Enabling Debug Logging).

WARNING: For security, logs from agents in a Cloud Agent Group are removed as soon as they are no longer required for an operation to execute properly. In addition, downloading of operation logs through the Management Console Activities page is disabled on agents in a Cloud Agent Group.
Run Success Operation Even If There Are No Matching Source Files: This option is present only if the operation contains a file-based activity that is used as a source within the operation, and applies only when the operation has "on success" operation actions configured. By default, any "on success" operations run only if they have a matching source file to process. This option can be useful for setting up later parts of a project without requiring success of a dependent operation.
To force the previous operation to be treated as successful, select the checkbox. This effectively lets you kick off the "on success" operation even if there are no matching source files to process.
NOTE: A parameter in the [OperationEngine] section of the jitterbit.conf file overrides the Run Success Operation Even If There Are No Matching Source Files setting.
Enable Chunking: This option is present only if the operation contains a transformation or a database, NetSuite, Salesforce, or SOAP activity, and is used for processing data to the target system in chunks. This allows for faster processing of large datasets and is also used to address record limits imposed by various web-service-based systems when making a request.
Note if you are using a Salesforce endpoint:
If a Salesforce activity is added to an operation that does not have chunking enabled, chunking becomes enabled with default settings specifically for Salesforce as described below.
If a Salesforce activity is added to an operation that already has chunking enabled, the chunking settings are not changed. Likewise, if a Salesforce activity is removed from an operation, the chunking settings are not changed.
Chunk Size: Enter a number of source records (nodes) to process for each thread. When chunking is enabled for operations that do not contain any Salesforce activities, the default chunk size is 1. When a Salesforce activity is added to an operation that does not have chunking enabled, chunking automatically becomes enabled with a default chunk size of 200. If using a Salesforce bulk activity, you should change this default to a much larger number, such as 10,000.
Number of Records per File: Enter a requested number of records to be placed in the target file. The default is 0, meaning there is no limit on the number of records per file.
Max Number of Threads: Enter the number of concurrent threads to process. When chunking is enabled for operations that do not contain any Salesforce activities, the default number of threads is 1. When a Salesforce activity is added to an operation that does not have chunking enabled, chunking automatically becomes enabled with a Salesforce-specific default number of threads.
Additional information and best practices for chunking are provided in the next section, Chunking.
- Save Changes: Click to save and close the operation settings.
- Discard Changes: After making changes to the operation settings, click to close the settings without saving.
Chunking
Chunking is used to split the source data into multiple chunks based on the configured chunk size. The chunk size is the number of source records (nodes) for each chunk. The transformation is then performed on each chunk separately, with each source chunk producing one target chunk. The resulting target chunks combine to produce the final target.
Chunking can be used only if records are independent and from a non-LDAP source. We recommend using as large a chunk size as possible, making sure that the data for one chunk fits into available memory. For additional methods to limit the amount of memory a transformation uses, see Transformation Processing.
Many web service APIs (SOAP/REST) have size limitations. For example, a Salesforce upsert accepts only 200 records for each call. With sufficient memory, you could configure an operation to use a chunk size of 200. The source would be split into chunks of 200 records each, and each transformation would call the web service once with a 200-record chunk. This would be repeated until all records have been processed. The resulting target files would then be combined. (Note that you could also use Salesforce bulk activities to avoid the use of chunking.)
If you have a large source and a multi-CPU computer, chunking can be used to split the source for parallel processing. Since each chunk is processed in isolation, several chunks can be processed in parallel. This applies only if the source records are independent of each other at the chunk node level. Web services can be called in parallel using chunking, improving performance.
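The split-transform-combine flow described above can be sketched in Python. This is a conceptual model only, not Jitterbit internals: the web service call, the record values, and the 200-record limit stand in for a size-limited API such as a Salesforce upsert.

```python
from concurrent.futures import ThreadPoolExecutor

def call_web_service(chunk):
    # Hypothetical stand-in for a web service that accepts at most
    # 200 records per call; here it just transforms each record.
    return [record.upper() for record in chunk]

def process_in_chunks(records, chunk_size=200, max_threads=4):
    # Split the source into independent chunks of at most chunk_size records.
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    # Each chunk is processed in isolation, so chunks can run in parallel;
    # map() preserves chunk order in its results.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        results = pool.map(call_web_service, chunks)
    # Combine the per-chunk target data into the final target.
    return [record for chunk_result in results for record in chunk_result]

source = [f"rec{i}" for i in range(450)]  # 450 records -> chunks of 200, 200, 50
target = process_in_chunks(source)
```

Note that the combine step keeps the chunks in source order even though the service calls ran concurrently.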
When using chunking on an operation where the target is a database, note that the target data is first written to numerous temporary files (one for each chunk). These files are then combined into one target file, which is sent to the database for insert/update. If you set the applicable Jitterbit variable to true when chunking is enabled, each chunk is instead committed to the database as it becomes available. This can improve performance significantly, as the database inserts/updates are performed in parallel.
Using Variables with Chunking
As chunking can invoke multi-threading, its use can affect the behavior of variables that are not shared between the threads.
Global and project variables are segregated between the chunking instances; although the data is combined, changes to these variables are not. Only changes made in the initial thread are preserved at the end of the transformation.
For example, if an operation with chunking and multiple threads has a transformation that changes a global variable, the global variable's value after the operation ends is the value from the first thread. Any changes to the variable in other threads are independent and are discarded when the operation completes.
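The pass-by-value behavior can be modeled with a short Python sketch. This is an illustrative analogy, not Jitterbit internals: each chunk's transformation receives its own copy of the variables, and only the initial thread's copy survives.

```python
from concurrent.futures import ThreadPoolExecutor

def transform_chunk(chunk_index, shared_vars):
    # Each thread works on a copy (pass by value), not a shared reference.
    local_vars = dict(shared_vars)
    local_vars["last_chunk"] = chunk_index  # visible only within this thread
    return local_vars

global_vars = {"last_chunk": None}
with ThreadPoolExecutor(max_workers=4) as pool:
    per_thread = list(pool.map(lambda i: transform_chunk(i, global_vars), range(4)))

# Only the first (initial) thread's changes are preserved after the
# operation; the other threads' copies are discarded.
global_vars = per_thread[0]
```

Because each thread mutated only its own copy, the changes made in threads 1 through 3 never reach the preserved state.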
These global variables are passed to the other threads by value rather than by reference, ensuring that any changes to the variables are not reflected in other threads or operations. This is similar to the RunOperation() function when in asynchronous mode.
Last updated: Nov 18, 2020