
Here is a collection of top tips for developing efficient Worker Node scripts:

  • Ensure that all watch actions reset their conditions as part of the task so that the same clips aren’t processed over and over again. For server queries that look for clips with status “Do Some Action”, update the status to “Action Done” on completion. For a watch folder, implement it as a drop box where files are moved out of the drop box to a more permanent location on completion (see the sketch after this list).
  • Avoid file conditions that need to analyse a file to check its contents. Conditions that only check the filename extension are much more efficient.
  • Avoid creating very large thumbnails (or large numbers of thumbnails) unless you need them.
  • If possible, avoid using the “Create metaclip based on MXF UMID” and “Avoid duplicates” options, as they require the worker to open the entire catalog you are publishing to before it can process the task.
  • Test your worker scripts on small numbers of example files before letting them loose on an entire volume. Use development mode (accessible from the General tab) to begin with, so you only run one task at a time and can see what’s going on. You can also uncheck the various checkboxes in the Processing panel to slow things down.
  • Initially, test your scripts with source and destination on your local machine (e.g. your desktop) and get them right there before trying a network volume; this eliminates network and permissions issues as a factor.
  • Avoid reducing the check interval for watch folders or server queries below 60s unless you need to: rather than speeding things up, it might actually slow the worker down. (If you want to speed things up while testing your scripts, turn on development mode, as this temporarily reduces the check interval.)
  • Avoid the “allow polling” option unless you’re sure you need it.
  • Avoid having multiple workers on different machines sharing the same workset file. Instead, use multiple processes on the same machine.
  • If possible, keep the workset file on a local drive rather than a shared network drive.
  • Keep the frame size and bit rate of your proxies down. Although the resulting files aren’t as small as H.264 or MPEG-4, Photo-JPEG (OfflineRT) gives good quality and is much faster to encode and decode (see the illustration after this list).
  • Don’t leave the log viewer window open when you’re not using it.
  • When troubleshooting, reduce the number of processors to 1 (on the License tab) and quit and restart the worker to start a new log file (as the worker log file can get very large when it’s been running for a while).
  • If you have a lot of watch actions, disable all but the one that is causing problems, then restart the worker. As well as keeping the log file small, this makes troubleshooting easier because you only have to consider one watch action at a time.
  • When reporting problems, always provide the relevant log files (but see the previous point about restarting the worker to keep the file size down). Use the Save Log Files button from the log viewer to compress the relevant log file(s) into a ZIP archive, and don’t forget to include a screenshot or precise details of which task failed (filename, time of day, error message, etc.) so we know which part of the log file to look at.
  • If a file fails to import and be processed the way you expect in the worker, try importing it manually into CatDV Pro, as that makes it easier to see any errors that are reported and lets you check that you have the appropriate codec to play the file.
  • Although the worker is fully cross-platform between Mac and Windows, there is a better choice of QuickTime codecs available on the Mac, so it can make a good platform for the worker even if the rest of the system is predominantly PC-based.
  • If you have multiple worker licenses, experiment with the number of concurrent processes to see what gives the best performance with your workflow. If you’re doing a lot of CPU-intensive transcodes, increasing the number of processes may help, but if you’re processing a lot of short jobs you may be constrained by the speed of I/O, and too many concurrent processes may actually slow things down (see the timing harness after this list).
  • If you change a job definition, you need to resubmit any tasks that were already queued so that they pick up the new definition.
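
To make the drop-box pattern in the first tip concrete, here is a minimal sketch, in plain Python rather than anything CatDV-specific, of the logic a self-resetting watch folder relies on. The folder paths, extension list and process() step are hypothetical stand-ins for whatever your watch action actually does; the point is that a cheap filename-extension check gates the work, and moving the file out of the drop box resets the condition:

    import shutil
    import time
    from pathlib import Path

    DROP = Path("/Volumes/Media/DropBox")    # hypothetical watch folder
    DONE = Path("/Volumes/Media/Processed")  # permanent home for finished files
    EXTENSIONS = {".mov", ".mxf"}            # cheap filename-only condition

    def process(clip):
        # Stand-in for the real task (import, transcode, publish, etc.)
        print("processing", clip.name)

    def scan_once():
        for f in DROP.iterdir():
            if f.suffix.lower() not in EXTENSIONS:
                continue                     # rejected by extension alone, no file analysis
            process(f)
            # Moving the file out of the drop box resets the condition,
            # so the same file can never match this watch again.
            shutil.move(str(f), str(DONE / f.name))

    while True:
        scan_once()
        time.sleep(60)                       # keep the check interval at 60s or more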
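
The proxy tip is configured through the worker’s own conversion settings rather than code, but as a rough illustration of what “small frame size, Photo-JPEG” means in practice, here is a hypothetical Python snippet that drives the separate ffmpeg tool (not part of CatDV; “mjpeg” is ffmpeg’s name for the Photo-JPEG codec) to build a reduced-size proxy:

    import subprocess

    # Illustration only: the filenames are hypothetical.
    subprocess.run([
        "ffmpeg", "-i", "source.mov",
        "-vf", "scale=640:-2",         # small frame size keeps encoding and decoding fast
        "-c:v", "mjpeg", "-q:v", "5",  # Photo-JPEG equivalent; raise -q:v for smaller files
        "-c:a", "pcm_s16le",
        "proxy.mov",
    ], check=True)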
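
Finally, for tuning the number of concurrent processes, the only reliable approach is to measure your own workflow. The harness below is a generic Python sketch, not part of the worker: the job() function is a hypothetical placeholder, so substitute something representative of your real tasks (a short transcode or file copy, say) and compare throughput at each process count:

    import time
    from concurrent.futures import ProcessPoolExecutor

    def job(n):
        # Hypothetical stand-in; replace with a representative task
        # to get meaningful numbers for your own workflow.
        time.sleep(0.1)
        return n

    if __name__ == "__main__":
        for workers in (1, 2, 4, 8):
            start = time.time()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                list(pool.map(job, range(40)))
            print(f"{workers} processes: {time.time() - start:.2f}s")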