Algorithm 2: matchTemplate with meanShift

How to run:

  1. Install the required dependencies

  2. Select a set of images (at least 100) with good lighting conditions, distributed over the whole observation period, and copy them into one folder. This calibration set of images will be used to calculate an appropriate conversion factor (to convert displacements from pixels to meters)

  3. Select a set of templates (about a dozen) as explained in https://rtgmc.readthedocs.io/en/latest/algorithm1.html#important-instructions-for-the-template-image and save them into one folder

  4. Import and run the main script, i.e. mT_mS.py, with the following inputs (see the sketch after this list):

  • path: path to the folder containing the image time series

  • path_cal: path to the folder containing the calibration set of images (step 2 above)

  • path_template: path to the folder containing the templates (step 3 above)
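
A minimal sketch of how these three inputs might be defined before running mT_mS.py; the folder names are placeholders, and the exact way mT_mS.py reads them (variables at the top of the script or function arguments) may differ:

import mT_mS

path = "data/time_series/"          # folder with the image time series
path_cal = "data/calibration/"      # folder with the calibration images (step 2)
path_template = "data/templates/"   # folder with the template images (step 3)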

The workflow of the algorithm is as follows:

  1. In the main script (mT_mS.py) the image time series is imported with the following function:

mT_mS.load_good_images_from_folder(folder)

Loads the image series from a specified folder. File names must contain the date and time, with - or _ as delimiters. Acquisitions taken during the night are filtered out with the help of a darkness threshold.

Parameters

folder (string) – Path of the folder. Can be a relative or absolute path.

Returns

  • images (list) – Time series of images.

  • times (list) – List of time differences (in hours) between every image and the first image, e.g. [0, 0.5, 1.0, …]

  • hour (list) – List containing the hour of day at which each image was taken.
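
A minimal usage sketch of this loading step; the folder path is only a placeholder:

import mT_mS

# Day-time images of the series; night acquisitions are dropped by the
# darkness threshold inside the function.
images, times, hour = mT_mS.load_good_images_from_folder("data/time_series/")

print(len(images), "images loaded")
print("hours since first image:", times[:3])  # e.g. [0, 0.5, 1.0]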

2. The parameters (a, b) of the function used to convert displacements into metric units are calculated for the images in the calibration set (good lighting conditions) with the following function:

mT_mS.find_conversion_factor(img)

Calculates a (slope) and b (y-intercept) of the function that describes the tape height as a function of its position in the image (needed because of image distortion).

Parameters

img (opencv-image) – Image containing the stake with tapes.

Returns

  • a (float) – Slope of the function.

  • b (float) – Y-intercept of the function (in px).
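
A minimal sketch of how (a, b) could be obtained from the calibration set; averaging the per-image values is an assumption of this sketch, not necessarily what mT_mS.py does:

import numpy as np
import mT_mS

# Calibration images (good lighting) and one (a, b) pair per image.
cal_images, _, _ = mT_mS.load_good_images_from_folder("data/calibration/")
params = [mT_mS.find_conversion_factor(img) for img in cal_images]

# Combine the per-image values; simple averaging is assumed here.
a = float(np.mean([p[0] for p in params]))
b = float(np.mean([p[1] for p in params]))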

3. The combination of the following functions allows the displacements of the pole with tapes to be calculated for the time series of images.

Main Functions

Considering two consecutive images, the match_template function finds the initial location of the tapes in the first image. The meanShift function is then able to track the tapes in the consecutive image. This combination is implemented in the function mS_different_frames. During the implementation of the algorithm, errors arising from templates that were not perfectly centered were observed. In addition, since the pole and therefore the tapes may tilt over time, a tape that is centered at the beginning of the time series may lose its centering over time, leading to erroneous results. To overcome this problem, the function mS_same_frame applies match_template and meanShift to the same frame, thus correcting possible errors from template offsets.
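
To make the combination concrete, the following is a minimal OpenCV sketch of the underlying idea, not the project's own implementation; the TM_CCOEFF_NORMED matching method and the hue back-projection used for meanShift are assumptions of this sketch:

import cv2

def track_tape(frame1, frame2, template):
    """Sketch: locate the tape in frame1 with matchTemplate, then follow it
    into frame2 with meanShift. Returns the vertical displacement in pixels."""
    h, w = template.shape[:2]

    # 1) Initial location of the tape in the first frame.
    res = cv2.matchTemplate(frame1, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)   # upper-left corner of the best match
    track_window = (x, y, w, h)

    # 2) Hue histogram of the matched region, used as the tracking model.
    roi_hsv = cv2.cvtColor(frame1[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # 3) Track the region into the second frame with meanShift.
    hsv2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv2], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, (x2, y2, _, _) = cv2.meanShift(back_proj, track_window, criteria)

    return y2 - y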

mT.match_template(im, temp)

Finds areas of an image that match (are similar) to a template image.

Parameters
  • im (opencv-image) – (Source image) The image in which we expect to find a match to the template image.

  • temp (opencv-image) – (Template image) The image which will be compared to the source image.

Returns

collinearMatches – The coordinates of the matching points found (upper-left corner of the ROI).

Return type

list
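
A short usage sketch; the file names are placeholders:

import cv2
import mT

im = cv2.imread("frame_2021-07-01_12-00.jpg")   # source image (placeholder name)
temp = cv2.imread("template_yellow.jpg")        # template image (placeholder name)

# Upper-left ROI corners of the collinear matches found in the source image.
collinearMatches = mT.match_template(im, temp)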

mS.mS_same_frame(images, times, a, b, template, h, w)

This function applies the matchTemplate and meanShift functions to the same frame to calculate possible offsets of the templates (which are corrected later).

Parameters
  • images (list) – Time series of images.

  • times (list) – List of time differences (in hours) between every image and the first image, e.g. [0, 0.5, 1.0, …]

  • a (float) – Slope of the function used to convert displacements into metric units.

  • b (float) – Y-intercept of the function used to convert displacements into metric units.

  • template (opencv-image) – Template used to identify the tapes.

  • h (int) – Height of the track window.

  • w (int) – Width of the track window.

Returns

dy_list – List of offset values between template and tape for each image.

Return type

list
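
A usage sketch, assuming images, times, a and b come from the previous steps and template is one of the template images from step 3; taking the track-window size from the template shape is an assumption of this sketch:

import mS

h, w = template.shape[:2]   # track-window height and width (assumed from the template)

# Per-image offsets between template and tape, used later as dy_cal.
dy_cal = mS.mS_same_frame(images, times, a, b, template, h, w)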

mS.mS_different_frames(images, times, a, b, template, h, w, dy_cal)

This function calls the matchTemplate function on one frame (to find the initial location of the tapes) and meanShift to track the tapes into the consecutive frame. The displacement of the tapes between two frames is calculated and accumulated over the whole series.

Parameters
  • images (list) – Time series of images.

  • times (list) – List of time differences (in hours) between every image and the first image, e.g. [0, 0.5, 1.0, …]

  • a (float) – Slope of the function used to convert displacements into metric units.

  • b (float) – Y-intercept of the function used to convert displacements into metric units.

  • template (opencv-image) – Template used to identify the tapes.

  • h (int) – Height of the track window.

  • w (int) – Width of the track window.

  • dy_cal (list) – Offsets of the templates, as returned by mS_same_frame.

Returns

  • dy_list (list) – Cumulative displacements for every image in m.

  • std_list (list) – Cumulative standard deviation for every image in m.

  • count_nomatches_notrack (int) – Number of images for which the displacement could not be calculated (a value of 0 is assigned).
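
A usage sketch continuing from the previous call, with dy_cal taken from mS_same_frame:

import mS

# Cumulative displacements (m), their standard deviations (m), and the number
# of images where neither a match nor a track could be obtained.
dy_list, std_list, count_nomatches_notrack = mS.mS_different_frames(
    images, times, a, b, template, h, w, dy_cal
)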

Sub-functions

The following functions are called by the match_template function to remove duplicate matches and find collinear matches, i.e. matches lying on one straight line.

mT.find_collinear(points)

This function searches for points that are collinear (lying on one straight line). If there are several lines, the one with the most points is chosen.

Parameters

points (list of tuple of float) – List of x and y coordinates, e.g. [[x1, y1], [x2, y2], …]

Returns

  • collinear_points (list of tuple of float) – List of coordinates of all matches that are collinear.

  • angle (float) – The inclination of the pole; returns 0 if there are no collinear matches.
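
An illustrative call with made-up coordinates: three matches aligned on a (nearly) vertical line plus one outlier:

import mT

points = [[102.0, 50.0], [101.5, 150.0], [102.5, 250.0], [300.0, 80.0]]

# The three aligned matches are kept; angle is the inclination of the pole.
collinear_points, angle = mT.find_collinear(points)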

mT.remove_duplicates(points)

This function merges points that are very close together into a single point.

Parameters

points (list of tuple of float) – List of x and y coordinates, e.g. [[x1, y1], [x2, y2], …]

Returns

points – List of x and y coordinates of the remaining points.

Return type

list of tuple of float
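
An illustrative call with made-up coordinates; the two near-identical matches are merged before the collinearity check:

import mT

points = [[102.0, 50.0], [102.4, 50.6], [101.8, 150.2]]

# The first two points collapse into one, leaving two distinct matches.
points = mT.remove_duplicates(points)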

Since this algorithm works with colors to track tapes, the following functions are used to mask tape colors.

mask.yellow(image)

Creates a mask for yellow color.

Parameters

image (ndarray) – Image for which the mask is wanted.

Returns

mask – Mask for yellow color of the image.

Return type

ndarray

mask.red(image)

Creates a mask for red color.

Parameters

image (ndarray) – Image for which the mask is wanted.

Returns

mask – Mask for red color of the image.

Return type

ndarray

mask.green(image)

Creates a mask for green color.

Parameters

image (ndarray) – Image for which the mask is wanted.

Returns

mask – Mask for green color of the image.

Return type

ndarray

mask.blue(image)

Creates a mask for blue color.

Parameters

image (ndarray) – Image for which the mask is wanted.

Returns

mask – Mask for blue color of the image.

Return type

ndarray

mask.black(image)

Creates a mask for black color.

Parameters

image (ndarray) – Image for which the mask is wanted.

Returns

mask – Mask for black color of the image.

Return type

ndarray
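
The HSV thresholds used by these mask functions are not documented here; the sketch below only shows the typical OpenCV pattern such a mask is likely built on, with assumed threshold values for yellow:

import cv2
import numpy as np

def yellow_mask_sketch(image):
    """Illustrative only: an HSV threshold mask for yellow. The actual bounds
    used by mask.yellow() may differ."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    lower = np.array([20, 100, 100])   # assumed lower HSV bound for yellow
    upper = np.array([35, 255, 255])   # assumed upper HSV bound for yellow
    return cv2.inRange(hsv, lower, upper)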

Recommendations

  • Run the algorithm with at least 10 different templates (the template is a sensitive variable)

  • If possible, build the stations in such a way that lighting conditions are good, i.e. the colors are well recognizable and the contrast is not too high

  • By comparing the results obtained with different templates, as well as the results of Algorithm 1, some erroneous results may be filtered out, thus improving performance (see the sketch below)
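
One simple way to implement the last recommendation (a sketch, not part of the package) is a per-time-step median across the displacement series obtained with different templates:

import numpy as np

def combine_template_runs(dy_per_template):
    """Sketch: per-time-step median across displacement series of equal length,
    one series per template; isolated outliers from a single template are damped."""
    return np.median(np.array(dy_per_template), axis=0)

# e.g. combine_template_runs([dy_list_a, dy_list_b, dy_list_c])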