TGrabs parameters


TGrabs has a live-tracking feature, allowing users to extract positions and postures of individuals while recording/converting. For this process, all parameters relevant for tracking are available in TGrabs as well – for a reference of those, please refer to TRex parameters.


default value: 2

Threshold value to be used for adaptive thresholding, if enabled.


default value: 0

If available, please provide the approximate length of the video in minutes here, so that the encoding strategy can be chosen intelligently. If set to 0, infinity is assumed. This setting is overwritten by stop_after_minutes.


default value: 100

Number of samples taken to generate an average image. Usually fewer are necessary for the ``averaging_method``s max and min.


default value: mode

possible values:
  • mean: Sum all samples and divide by N.

  • mode: Calculate a per-pixel median of the samples to avoid noise. More computationally involved than mean, but often better results.

  • max: Use a per-pixel maximum across samples. Usually a good choice for short videos with black backgrounds and individuals that do not move much.

  • min: Use a per-pixel minimum across samples. Usually a good choice for short videos with white backgrounds and individuals that do not move much.

Determines the way in which the background samples are combined. The background generated in this process is used for background subtraction during conversion.
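As an illustration, the four methods correspond roughly to the following per-pixel NumPy reductions over a stack of samples (a sketch, not TGrabs' actual implementation):

```python
import numpy as np

# Stack of N background samples (random 8-bit images as stand-ins).
rng = np.random.default_rng(0)
samples = rng.integers(0, 256, size=(100, 4, 4), dtype=np.uint8)

mean_bg = samples.mean(axis=0).astype(np.uint8)        # "mean"
mode_bg = np.median(samples, axis=0).astype(np.uint8)  # "mode" (per-pixel median)
max_bg = samples.max(axis=0)                           # "max"
min_bg = samples.min(axis=0)                           # "min"
```

All four reductions collapse the sample axis, producing a background image with the same dimensions as a single sample.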


default value: [0.01,500000]

Minimum and maximum size of the individuals on screen after thresholding. Anything smaller or bigger than these values will be disregarded as noise.
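The filtering this range implies can be sketched as follows (the helper function is hypothetical, not part of TGrabs):

```python
# Hypothetical helper illustrating the [min, max] size filter:
# detected objects outside the range are discarded as noise.
def filter_blobs(areas, size_range=(0.01, 500000)):
    lo, hi = size_range
    return [a for a in areas if lo <= a <= hi]

print(filter_blobs([0.001, 0.5, 120.0, 1e6]))  # [0.5, 120.0]
```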


default value: -1

If set to anything other than 0, this will limit the Basler camera framerate to the given FPS value.


default value: 5500

Sets the camera's exposure time in microseconds.


default value: [-1,-1]

Defines the dimensions of the camera image.


default value: 3

Size of the dilation/erosion filters used if use_closing is enabled.


default value: 1

Index (0-2) of the color channel to be used during video conversion, if more than one channel is present in the video file.


default value: false

Attempts to correct for badly lit backgrounds by evening out luminance across the background.


default value: [0,0,0,0]

Percentage offsets [left, top, right, bottom] that will be cut off the input images (e.g. [0.1,0.1,0.5,0.5] will remove 10% from the left and top and 50% from the right and bottom, and the video will be 60% smaller in X and Y).
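The arithmetic can be sketched quickly (the helper name is made up for illustration):

```python
# Compute the output dimensions after applying percentage crop offsets
# [left, top, right, bottom], as described above.
def cropped_size(width, height, offsets):
    left, top, right, bottom = offsets
    return (round(width * (1.0 - left - right)),
            round(height * (1.0 - top - bottom)))

# The example from the text: 10% off left/top, 50% off right/bottom.
print(cropped_size(1000, 1000, [0.1, 0.1, 0.5, 0.5]))  # (400, 400)
```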


default value: false

If set to true, the grabber will open a window before the analysis starts where the user can drag+drop points defining the crop_offsets.


default value: 0

If set to a value greater than zero, detected shapes will be inflated (and potentially merged). When set to a value smaller than zero, detected shapes will be shrunk (and potentially split).
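For intuition, a positive value behaves like a morphological dilation. The following minimal NumPy sketch grows a boolean shape by one pixel in each of the four cardinal directions (illustrative only; TGrabs' actual implementation is not shown here):

```python
import numpy as np

def dilate_once(mask):
    # Grow a boolean mask by one pixel (4-neighbourhood dilation).
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # grow downwards
    out[:-1, :] |= mask[1:, :]   # grow upwards
    out[:, 1:] |= mask[:, :-1]   # grow right
    out[:, :-1] |= mask[:, 1:]   # grow left
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(dilate_once(mask).sum())  # 5: the centre pixel plus its four neighbours
```

Repeated application (or a larger structuring element) inflates shapes further, which is how nearby shapes can end up merged.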


default value: false

When enabled, live tracking will be executed for every frame received, and frames will be forwarded to the associated closed-loop script (see that script for more information). This sets enable_live_tracking to true and, by default, allows the tracker to skip frames in order to catch up to the video.


default value: true

Enables background subtraction. If disabled, threshold will be applied to the raw greyscale values instead of difference values.


default value: false

When enabled, the program will save a .results file for the recorded video plus export the data (see output_graphs in the tracker documentation).


default value: false

Equalizes the histogram of the image before thresholding and background subtraction.
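Histogram equalization is the classic remapping of grey values through the cumulative histogram; a minimal sketch for an 8-bit image (illustrative only, not TGrabs' exact implementation):

```python
import numpy as np

def equalize(img):
    # Map each grey value through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

img = np.array([[50, 50], [200, 200]], dtype=np.uint8)
print(equalize(img))  # [[127 127] [255 255]]
```

The effect is to spread the used grey values over the full 0-255 range, which can make low-contrast footage easier to threshold.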


default value: 20

Quality value (CRF; see the ffmpeg documentation) used when encoding as libx264.


default value: false

If set to true, live tracking will always overwrite a settings file with filename.settings in the output folder.


default value: false

Temporarily converts the image to floating point and computes f(x,y) * image_contrast_increase + image_brightness_increase and, if image_square_brightness is enabled, squares the result.


default value: 0

Value that is added to the preprocessed image before applying the threshold (see image_adjust). The neutral value is 0 here.


default value: 3

Value that is multiplied to the preprocessed image before applying the threshold (see image_adjust). The neutral value is 1 here.


default value: false

Squares the floating point input image after background subtraction. This brightens brighter parts of the image, and darkens darker regions.
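Taken together, the three settings above (image_contrast_increase, image_brightness_increase, image_square_brightness) amount to the following per-pixel transform. This sketch assumes a normalized [0,1] float image and clipping, which may differ from TGrabs' internals:

```python
import numpy as np

def adjust(image, contrast=3.0, brightness=0.0, square=False):
    # f(x,y) * contrast + brightness, optionally squared afterwards.
    out = image.astype(np.float32) * contrast + brightness
    if square:
        out = out * out
    return np.clip(out, 0.0, 1.0)

img = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
print(adjust(img))  # ~[[0.3 0.6] [0.9 1.0]] with the default contrast of 3
```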


default value: “”

Path to a video file containing a mask to be applied to the video while recording. Only works for conversions.


default value: -1

Age of the individuals, in days.


default value: “”

The current commit hash. The video is branded with this information for later inspection of errors that might have occurred.


default value: “”

Command-line of the framegrabber when conversion was started.


default value: “”

Treatment name.


default value: “”

Contains the time at which this video was converted / recorded, as a string.


default value: “”

Other information.


default value: “”

Name of the species used.


default value: [“meta_species”,”meta_age_days”,”meta_conditions”,”meta_misc”,”cam_limit_exposure”,”meta_real_width”,”meta_source_path”,”meta_cmd”,”meta_build”,”meta_conversion_time”,”frame_rate”,”cam_undistort_vector”,”cam_matrix”]

The given settings values will be written to the video file.


default value: false

Starts the program without a window (for terminal-only use).


default value: false

If set to true, this will terminate the program directly after generating (or loading) a background average image.


default value: true

If set to true, the program will record frames whenever individuals are found.


default value: false

If set to true, the average will be regenerated using the live stream of images (video or camera).


default value: false

Saves a RAW movie (.mov) with a similar name in the same folder, while also recording to a PV file. This might reduce the maximum framerate slightly, but it gives you the best of both worlds.


default value: 255

A greyscale value in case enable_difference is set to false - TGrabs will automatically generate a background image with the given color.


default value: 0

If set to a value above 0, recording will stop after the given number of minutes of recording time.


default value: 0

Custom override of how many bytes of system RAM the program is allowed to fill. If approximate_length_minutes or stop_after_minutes are set, this may help increase the frame_rate of the resulting RAW video footage.


default value: 0.025

Higher values (up to 1.0) will lead to coarser approximation of the rectangle/tag shapes.


default value: false

(beta) Enable debugging for tags.


default value: false

(beta) If enabled, TGrabs will search for (black) square shapes with white insides (and other stuff inside them) - like QRCodes or similar tags. These can then be recognized using a pre-trained machine learning network (see tags_recognize), and/or exported to PNG files using tags_save_predictions.


default value: false

Apply a histogram equalization before applying a threshold. Usually this should not be necessary, since adaptive thresholds are used anyway.


default value: [80,80]

Tags that are bigger than these pixel dimensions may be cropped off. All extracted tags are then pre-aligned to any of their sides, and normalized/scaled down or up to a 32x32 picture (to make life for the machine learning network easier).


default value: “tag_recognition_network.h5”

Path to a pretrained network .h5 file used to recognize QRCodes/tags. The network takes 32x32px images of tags and returns an (N, 122)-shaped tensor with one-hot encoding.


default value: [3,7]

The accepted range for the number of sides of a tag (e.g. the range should include 4 if the tag is a rectangle).


default value: false

(beta) Apply an existing machine learning network to turn images of tags into tag ids (numbers, e.g. 1-122). Be sure to set tags_model_path alongside this.


default value: false

Save images of tags, sorted into folders labelled according to network predictions (e.g. ‘tag 22’), to ‘output_dir/tags_``filename``/<individual>.<frame>/*’.


default value: false

(beta) If set to true, all objects other than the detected blobs are removed and will not be written to the output video file.


default value: [0.08,2]

The minimum and maximum area accepted as a (square) physical tag on the individuals.


default value: -5

Threshold passed on to cv::adaptiveThreshold; lower numbers (below zero) are equivalent to higher thresholds, i.e. removing more of the pixels of objects and shrinking them. Positive numbers may invert the image/mask.
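To see why negative values shrink objects: with a mean-based adaptive threshold, a pixel survives when its value exceeds local_mean - C, so a negative C raises the effective threshold. A small NumPy sketch (a simplification of cv::adaptiveThreshold, using a single local mean for brevity):

```python
import numpy as np

def adaptive_keep(values, local_mean, C):
    # cv::adaptiveThreshold keeps pixels with value > local_mean - C,
    # so C = -5 demands values at least 5 above the local mean.
    return values > (local_mean - C)

vals = np.array([100, 108, 112, 120])
print(adaptive_keep(vals, local_mean=105.0, C=-5))  # [False False  True  True]
```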


default value: false

Internal variable.


default value: “checkerboard”

Defines which test image will be used if video_source is set to ‘test_image’.


default value: true

Use threads to process images (specifically the blob detection).


default value: 9

Threshold to be applied to the input image to find blobs.


default value: 255


default value: false

Enables or disables adaptive thresholding (slower than normal threshold). Deals better with weird backgrounds.


default value: false

Toggles the attempt to close holes in oddly shaped blobs using dilation/erosion with filters of size closing_size.


video_conversion_range(pair<int, int>)

default value: [-1,-1]

If set to a valid value (!= -1), the start and end values determine the frame range that will be converted.


default value: true

Use threads to read images from a video file.


default value: “webcam”

Where the video is recorded from. Can be the name of a file, or one of the keywords [‘basler’, ‘webcam’, ‘test_image’].