Multi-job processing on grid

Certain types of CLC Server jobs, known as "non-exclusive" jobs, can be scheduled to run concurrently on the same grid node when appropriate. Non-exclusive jobs are those that have reasonably low demands for system resources. A list of such jobs is provided in the Appendix, Non-exclusive Algorithms.

There are two ways non-exclusive jobs can be configured to run concurrently on a grid node:

  1. In the context of a workflow or workflow block executed on a single grid node
    If the server has been configured to send all tasks in a workflow to a single node, or to send tasks in a workflow block to a single node, as described in Workflow distribution options, and the workflow or workflow block includes parallel non-exclusive tasks, then these may run concurrently on the grid node.

    The maximum number of concurrent workflow tasks is a setting in each grid preset. The value entered should reflect the resources, such as the number of cores, of the machines that jobs will be sent to using the preset. Note that values entered here cannot be validated directly.

    Leaving this setting blank in a grid preset means the value configured for the master will be used. That value can be seen by switching the Server mode setting to "Single server". The default value is 10 or the number of cores on the master's system, whichever is lower.

  2. In any other context
    Information about the CPU or thread requirements of the jobs must be passed to the grid scheduler. Non-exclusive algorithms expose their CPU or thread usage, and this information can be passed on to the grid scheduler via the COMMAND_THREAD_MIN and COMMAND_THREAD_MAX variables in the Shared native specification of a grid preset. The variable COMMAND_THREAD_MAX can be used alone when a single value should be specified, or both COMMAND_THREAD_MIN and COMMAND_THREAD_MAX can be provided when a range is required. An example of specifying a range is shown in the image of a grid preset in Configure and save grid presets.
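    As an illustration only: the exact flags depend on the grid scheduler in use, and the substitution syntax should be checked against the example shown in Configure and save grid presets. Assuming a Grid Engine-style scheduler with a parallel environment named smp (a hypothetical name at your site), a Shared native specification requesting a slot range might resemble:

```
-pe smp COMMAND_THREAD_MIN-COMMAND_THREAD_MAX
```

    The variables are replaced with the job's reported minimum and maximum thread requirements before the job is submitted, allowing the scheduler to pack several small jobs onto one node.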

    One can also use the functions take_lower_of and take_higher_of in settings relevant to configuring multiple job processing. For example, to specify 4 as the maximum number of cores to be used by a non-exclusive job, the following could be used as the argument to the relevant parameter in the Shared native specification of a grid preset: {#take_lower_of COMMAND_THREAD_MAX, 4}. Because the non-exclusive job passes on its thread usage requirement via the COMMAND_THREAD_MAX variable, this expression evaluates to 4 if the job's requirement is higher than 4, or to the value specified by the job if it is lower than 4.
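    The semantics of take_lower_of can be sketched in Python. This is an illustration of how the expression evaluates, not the server's implementation:

```python
def take_lower_of(*values):
    # Evaluates to the smallest of its arguments, mirroring the
    # behavior of {#take_lower_of COMMAND_THREAD_MAX, 4}.
    return min(values)

# Job reports COMMAND_THREAD_MAX = 8: the configured cap of 4 wins.
print(take_lower_of(8, 4))  # 4

# Job only needs 2 threads: the job's own value is used.
print(take_lower_of(2, 4))  # 2
```

    take_higher_of would correspondingly evaluate to the largest of its arguments.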

    Further details about grid preset configuration, including Shared native specifications, functions and variables, can be found in Configure grid presets.

    Licensing notes

    Each CLC Grid Worker launched, whether it is to run alone on a node or run alongside a job already running on a particular node, will attempt to get a license from the CLC Network License Manager. Once the job is complete, the license will be returned.

    If the server has been configured to send all tasks in a workflow to a single node, only a single CLC Grid Worker will be launched for a given workflow run. Thus, irrespective of the number of concurrently running jobs in such a workflow run, only a single gridworker license is used.

    If the server has been configured so that each task in a workflow is submitted separately, then a CLC Grid Worker will be launched for each task, resulting in the use of a gridworker license for each task, including non-exclusive jobs executed concurrently.

    If the server has been configured so that each block of a workflow is submitted to run on a single node, a CLC Grid Worker will be launched for each block. If the workflow contains no iteration blocks, then it, by definition, consists of a single block and will use a single gridworker license. If a workflow contains one or more iteration blocks, it will consume a number of licenses equal to the number of iterations of these blocks plus one for each additional, non-iterated block. See figure 6.12 in Workflow queuing options for more detail about workflow blocks.
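    The license arithmetic for per-block submission can be sketched as follows. The block and iteration counts in the usage lines are assumptions chosen for the example:

```python
def gridworker_licenses(iteration_counts, non_iterated_blocks):
    """Gridworker licenses used when each workflow block runs on its
    own node: one license per iteration of each iteration block, plus
    one for each additional, non-iterated block."""
    return sum(iteration_counts) + non_iterated_blocks

# One iteration block run over 3 inputs, plus 2 non-iterated blocks:
# 3 + 2 = 5 licenses.
print(gridworker_licenses([3], 2))  # 5

# No iteration blocks: the workflow is a single block, so 1 license.
print(gridworker_licenses([], 1))  # 1
```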