Nuke submitter reference

The Nuke submitter is a Group node that allows you to configure and submit jobs to Conductor.

This document is a reference for all the attributes and functionality of the node. If you want to get up and running fast, head over to the Nuke submitter tutorial page.

Submission Preview

The submission preview tab shows how the other settings come together to create a submission. Its purpose is to help you check over the final submission properties to avoid mistakes.

The data in the preview panel updates live as you change attributes on the submitter and in your comp. Changing the frame range in the Root node, for example, causes the list of tasks displayed in the submission preview to refresh.

To avoid sluggish behavior, the submission preview does not run asset scraping functions by default. Nor does it show more than a handful of resolved tasks. To see a list of scraped assets, click on the Update with Assets button in the preview tab.



Connect

When you first create the node, it must communicate with Conductor to establish a connection with your account. After you sign in, the submitter fetches the data required to populate the UI, such as projects and instance types. When you re-open your comp, you must click Connect again, even if the UI is already populated.

If the submit button does not respond, press the Connect button to force an update. This is also useful when something has changed on your account, such as your project list.


Submit

After validation checks pass, the comp is scraped for assets and those files are scanned. The job is then submitted to Conductor. If the submission is successful, you'll see a dialog window with a link to where you can monitor the job on the web dashboard.


Validate

Run validations without submitting. Validation results may be of type info, warning, or error.

If any notices are generated, a dialog pops up detailing the things to look out for. If there are only warnings or info notices, you can continue with the submission, but take heed of the warnings. If there are errors, you must address them before you can submit.



Job Title

The job title that appears in the Conductor dashboard.

Conductor project

A project you created on the Conductor dashboard. The drop-down menu updates when the submitter connects to your Conductor account. If the menu contains only the - NOT CONNECTED - option, or it doesn't show a newly created project from your account, then press the Connect button.

Instance Type

Specify the hardware configuration used to run your tasks. You are encouraged to run tests to find the most cost-efficient combination that meets your deadline. You can read about hardware choices and how they affect costs in this blog post.


Preemptible

Preemptible instances are less expensive to run than non-preemptible instances. The drawback is that the cloud provider may stop them at any time. The probability of preemption rises with the duration of the task. Conductor does not support checkpointing, so if a preemption occurs, the task starts from scratch on another instance. You can change the preemptible setting for your account in the dashboard.

Nuke Version

The version of Nuke to run on the render nodes. You are not required to use the same version as your local install, but be aware of incompatible changes between Nuke versions that could affect the render.

Chunk Size

A chunk is the set of frames handled by one task. If your renders are reasonably fast, it may make sense to render many frames per task, because the time it takes to spin up instances and sync files can be significant by comparison.

In Nuke, it's often good practice to render several frames per task.
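To illustrate, here is a minimal sketch of how a frame range could be split into chunks (a hypothetical helper, not Conductor's actual implementation):

```python
def chunk_frames(start, end, chunk_size):
    """Split an inclusive frame range into chunks of at most chunk_size frames.

    Each resulting chunk is the set of frames one task would render.
    """
    frames = list(range(start, end + 1))
    return [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]
```

With a 1-10 range and a chunk size of 4, this yields three tasks: frames 1-4, 5-8, and 9-10.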

Use Custom Range

When Use Custom Range is on, a text field appears and the frame range specified in the Root node is ignored. Instead, enter a frame-spec.

A frame-spec is a comma-separated list of arithmetic progressions. In most cases, this will be a simple range:

    1-100

However, any set of frames may be specified efficiently in this way, including stepped ranges and individual frames:

    1-50x2,52,53,60-100x10

Negative numbers are also valid:

    -20--10x2

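To make the format concrete, here is a small sketch of how such a frame-spec could be expanded into an explicit frame list (illustrative only; Conductor's own parser may differ):

```python
def expand_spec(spec):
    """Expand a comma-separated frame-spec such as "1-10x3,20" into a sorted frame list."""
    frames = set()
    for clause in spec.split(","):
        clause = clause.strip()
        step = 1
        if "x" in clause:
            clause, step_str = clause.rsplit("x", 1)
            step = int(step_str)
        if "-" in clause[1:]:  # find the range separator, skipping a leading minus sign
            sep = clause.index("-", 1)
            start, end = int(clause[:sep]), int(clause[sep + 1:])
        else:
            start = end = int(clause)
        frames.update(range(start, end + 1, step))
    return sorted(frames)

# expand_spec("1-10x3") -> [1, 4, 7, 10]
```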
Scout Tasks

Specify a set of frames to render first. Tasks containing these frames are started immediately, while all others are put on hold, which allows you to check a subsample of your sequence before committing to the full render.

You can use a frame spec to specify scout frames, for example: 1-100x30. Alternatively, you can select how many scout frames you want and let the submitter calculate scout frames from the current frame range. To specify three well-spaced scout frames automatically, enter auto:3.


The remote render nodes execute tasks in their entirety, so if you have chunk size set greater than 1, then all frames are rendered in any task containing a scout frame.

Use Upload Daemon

Use Upload Daemon is off by default, which means assets are uploaded within Nuke itself. If your total asset data is large relative to your internet connection speed, the upload blocks Nuke until it finishes.

A better solution may be to turn on Use Upload Daemon. An upload daemon is a separate background process that takes over uploading from the application. The submission, including the list of expected assets, is sent to Conductor, and the upload daemon continually asks the server if there are assets to upload. When your job hits the server, the daemon fetches the list and uploads the files, allowing you to continue with your work.

You can start the upload daemon either before or after you submit the job. Once started, it will listen to your entire account, and you can submit as many jobs as you like.


You must have Conductor Core installed in order to use the upload daemon and other command line tools. See the installation page for options.

To run an upload daemon, open a terminal or command prompt, and run the following command.

    conductor uploader

Once started, the upload daemon runs continuously and uploads files for all jobs submitted to your account.


Add one or more email addresses, separated by commas, to receive an email when the job completes.

Task Template

A template for the commands to run on remote instances. The template defines the shape of a Nuke command-line render and uses Tcl scripting.

If you examine the task template and then check the Submission Preview section, you'll see how it is resolved.

Location Tag

Attach a location to this submission for the purpose of matching to an uploader and/or downloader process.

If your organization is distributed across several locations, you can enter a value here, for example, London. Then, when you run a downloader daemon, you can add the location option to limit downloads to jobs that were submitted in London.

Extra Environment

By default, your job's environment variables are defined automatically based on the software and plugin versions you choose. Sometimes, however, it can be necessary to append to those variables or add more of your own.

For example, you may have a script you want to upload and run without entering its full path. In that case, you can add its location to the PATH variable.

Add an entry with the Add button, then enter PATH as the variable Name and /my/scripts/directory as the Value. Make sure Exclusive is switched off to indicate that the value should be appended.

You can also enter local environment variables in the value field itself. They will then be active in the submission. You might use $MY_SCRIPTS_PATH (if it's defined) for the value in the above example.
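The append-versus-exclusive behavior can be sketched as follows (hypothetical helper; Python's os.path.expandvars stands in for the submitter's variable resolution):

```python
import os


def merge_env(base, extras):
    """Merge extra environment entries into a base environment.

    Each extra is a (name, value, exclusive) tuple. Exclusive entries replace
    any existing value; non-exclusive entries are appended with the platform
    path separator, as is appropriate for PATH-style variables.
    """
    env = dict(base)
    for name, value, exclusive in extras:
        value = os.path.expandvars(value)  # resolve local $VARIABLES in the value
        if exclusive or name not in env:
            env[name] = value
        else:
            env[name] = env[name] + os.pathsep + value
    return env
```

For example, merging ("PATH", "/my/scripts/directory", False) into an environment where PATH is already defined appends the directory rather than replacing the variable.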

Extra Assets

If the scraping misses some assets, or if you want to upload a script, or maybe a color profile, you can explicitly include them to make sure they are available on the render nodes.


Metadata

Metadata consists of arbitrary Key/Value pairs that are attached to your submission. The purpose of metadata is to allow you to filter information in the Conductor web UI.

Example: To break down costs by shot number, you can add a metadata key called shot and enter the shot number in the Value field. You can also enter environment variables in the value field that resolve in the submission. In the above example, you might use $SHOT for the value.
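As a sketch of that resolution step (using Python's standard os.path.expandvars; the submitter's own mechanism may differ):

```python
import os


def resolve_metadata(metadata):
    """Resolve $VARIABLES in metadata values against the local environment."""
    return {key: os.path.expandvars(value) for key, value in metadata.items()}
```

With SHOT set to sh010 locally, a metadata value of $SHOT resolves to sh010 in the submission.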


Autosave

On submission, you'll usually want to include the current comp file. It can be tedious to save the scene manually each time. The autosave feature allows you to set a filename template instead.

By default, autosave is active, and the template is the filename, prefixed by cio_.

If cleanup is on, then the autosave file is removed after submission.

You cannot use cleanup if you are using an upload daemon because the file itself is one of the assets to be uploaded, and the submitter doesn't know or care when the daemon has uploaded it.
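The default template behaves roughly like this sketch (hypothetical helper shown for illustration):

```python
import os


def autosave_path(comp_path, prefix="cio_"):
    """Build the autosave filename: the comp's basename with a prefix, in the same directory."""
    directory, basename = os.path.split(comp_path)
    return os.path.join(directory, prefix + basename)
```

For example, /shots/sh010/comp_v003.nk would be saved as /shots/sh010/cio_comp_v003.nk before submission.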



CopyCat

CopyCat training is currently a BETA feature and is only supported if CoreWeave is your cloud provider. Please reach out to support with any questions. We welcome feedback on this new and exciting feature.

CopyCat is Nuke's machine learning tool-set based on PyTorch.

There are two phases to using CopyCat: training and inference.


Training

Conductor supports training on a single instance as well as distributed training.

Submitting a training job is very similar to submitting a render job. Submitter attributes that aren't relevant (e.g., frame range) are hidden. Training on a single instance can use instance types with multiple GPUs, and it is typically preferable to use a single instance with multiple GPUs rather than multiple instances with a single GPU each.

When running a single-instance training job, progress can be monitored via the dashboard logs, which show the steps and error delta. Contact sheets, .cat files, etc. are only available once the job has completed and can be downloaded via the Companion or CLI Downloader.

When running in distributed mode, the error delta is piped back into the Progress tab of the CopyCat node; the Live Updates checkbox must be enabled for this to work. This is achieved by the Conductor node polling the jobs, and the polling can be controlled via the CopyCat Jobs tab on the Conductor node.

Distributed training

CopyCat distributed training works by establishing a main node (the server) to which all other nodes connect. All of this is handled automatically by Conductor. The number of tasks in the Conductor job equals the number of workers specified. One task (usually the first to start) is established as the main node; all other tasks are worker nodes. All nodes terminate once the training is complete. More details can be found in the Nuke docs.


Troubleshooting

I can't see the Conductor plugin in the Render menu.

If the plugin does not appear, check that either $HOME/.nuke/ or $NUKE_PATH includes the location of the cionuke Python package.