Frames
The frames of a video can be enhanced before projection to highlight particles and patterns more clearly. For instance, intensities can be thresholded, edges can be brought forward, and static backgrounds can be suppressed. The enhancements are applied automatically to all frames of the video, or to the user-defined range between the start and end frame. After each enhancement, a new operation can be performed on the result, and enhancements can even be repeated. For instance, you may first apply intensity thresholding to remove intensities above or below a certain threshold (e.g. directly reflected sunlight), then apply edge detection, and then threshold the result of the edge detection again.
The methods to perform these image enhancements are described below.
Intensity thresholding
To remove the effects of noisy dark pixels, or of very bright pixels such as directly reflected sunlight, a simple thresholding method can be applied. The minmax flag performs thresholding on the frames. Simply set a minimum and maximum value under min and max to darken pixels that fall below or above these thresholds. An example is provided below.
frames:
  minmax:
    min: 150
    max: 250
Thresholding is performed with the minmax method, described in the frames API section.
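For reference, the same operation can also be performed through the Python API. Below is a minimal sketch, in which the video filename, camera configuration file, and frame range are placeholders you would replace with your own; the exact constructor arguments may differ per pyorc version.

import pyorc

# Load a camera configuration and open the video (placeholder filenames)
cam_config = pyorc.load_camera_config("camera_config.json")
video = pyorc.Video("river.mp4", camera_config=cam_config, start_frame=0, end_frame=125)

# Extract the frames as an xarray DataArray and darken pixels outside [150, 250]
da = video.get_frames()
da_thresh = da.frames.minmax(min=150, max=250)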
Improving contrast
There are a number of ways to improve the contrast of moving features. So far, these include the following methods:
Normalization: subtracts the temporal average of a number of frames, which yields better contrast of moving features compared to static ones. This is particularly useful to suppress the visibility of the bottom when the water is very transparent. The number of frames used for averaging can be set.
Edge detection: enhances the visibility of strong spatial gradients in the frame by applying two kernel filters with different window sizes to the individual frames and returning the difference between the filtered images.
The normalize flag can be used to remove the background. By default, 15 frames at equal intervals are extracted and averaged to establish an estimate of the background intensities. This estimate is then subtracted from each frame to produce a background-corrected series. The number of frames can be controlled with the samples flag. An example, where the default number of samples is changed to 25, is provided below.
frames:
  normalize:
    samples: 25
Normalization is controlled by the normalize method, described in the frames API section.
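Assuming the same accessor pattern as in the thresholding sketch above, this recipe would translate to a single API call along these lines:

# Subtract a background estimated from 25 evenly spaced frames (sketch)
da_norm = da.frames.normalize(samples=25)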
The edge_detect method computes the difference between kernel-smoothed frames. By default, the one-sided kernel window sizes are 1 (i.e. a 3x3 window) and 2 (a 5x5 window) respectively. For very high resolution imagery, or very large features to track, these may be enlarged to better encompass the edges of the patterns of interest. The kernel sizes can be modified using the flags wdw_1 and wdw_2, for instance as follows:
frames:
  edge_detect:
    wdw_1: 2
    wdw_2: 4
Edge detection is performed with the edge_detect method, described in the frames API section.
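To illustrate the underlying idea (a conceptual sketch, not pyorc's exact implementation), the difference between two box-smoothed versions of a frame cancels out flat regions and leaves strong gradients behind. A one-sided size s corresponds to a (2s+1) x (2s+1) window:

import cv2
import numpy as np

def edge_sketch(frame: np.ndarray, wdw_1: int = 1, wdw_2: int = 2) -> np.ndarray:
    # One-sided size s gives a (2s+1) x (2s+1) window: 1 -> 3x3, 2 -> 5x5
    f = frame.astype("float32")
    small = cv2.blur(f, (2 * wdw_1 + 1, 2 * wdw_1 + 1))
    large = cv2.blur(f, (2 * wdw_2 + 1, 2 * wdw_2 + 1))
    # Flat areas cancel out in the subtraction; strong spatial gradients remain
    return small - large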
Orthoprojection
Because you supply a camera configuration with the video, pyorc knows how the frames must be reprojected to provide an orthorectified image. Typically, orthorectification is the last step before estimating surface velocities from the frame pairs.
In the command-line interface, orthoprojection is performed automatically after all image enhancement steps that a user may have entered in the recipe. Nonetheless, you can still control the resolution (in meters) of the projected end result at this stage. If you leave any specifics about the projection out of your recipe, pyorc will assume that you want to use the resolution specified in the camera configuration file. If you wish to manipulate the resolution in the recipe, you can do so with the following keys and values (with an example for a 0.1 meter resolution):
frames:
  ...
  ...
  ...
  project:
    resolution: 0.1
Projection is performed with the project method, described in the frames API section.
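Continuing the hedged API sketch from the sections above, projection at a 0.1 m resolution would look roughly like:

# Orthorectify the (enhanced) frames onto a 0.1 m grid (sketch)
da_proj = da_thresh.frames.project(resolution=0.1)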
Exporting results to video
If you wish to inspect frames after they have been treated with filters and projected, you can write the result to a new video file. This helps to assess whether patterns are indeed clearly visible, and whether the projected result has a high enough resolution to recognize the features on the water surface.
In the recipe, the export to a video can be controlled with the to_video method, by supplying a filename with the extension .mp4 or another recognizable video extension. An example of a frames section in which enhanced frames (with normalization, edge detection, and finally thresholding and projecting) are written to a file is given below; the filename and resolution shown are just examples.
frames:
  normalize:
    samples: 25
  edge_detect:
    wdw_1: 2
    wdw_2: 4
  minmax:
    min: 0
    max: 10
  project:
    resolution: 0.1
  to_video: enhanced.mp4
Exporting frames to a video is performed with the to_video method, described in the frames API section.
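Putting the pieces together, here is a hedged end-to-end sketch of the recipe above through the Python API; the filenames are placeholders, and the chaining assumes each step returns a new DataArray carrying the frames accessor:

import pyorc

cam_config = pyorc.load_camera_config("camera_config.json")  # placeholder file
video = pyorc.Video("river.mp4", camera_config=cam_config, start_frame=0, end_frame=125)

da = video.get_frames()
da = da.frames.normalize(samples=25)           # remove the background
da = da.frames.edge_detect(wdw_1=2, wdw_2=4)   # enhance spatial gradients
da = da.frames.minmax(min=0, max=10)           # threshold the edge response
da = da.frames.project(resolution=0.1)         # orthorectify onto a 0.1 m grid
da.frames.to_video("enhanced.mp4")             # write the result for inspection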