API reference#

pyorc’s API consists of several subclasses of the xarray.Dataset and xarray.DataArray data models. In a nutshell, xarray’s data models are meant to store and analyze scientific datasets with multiple dimensions. A xarray.DataArray contains one variable with possibly several dimensions and coordinates within those dimensions. A xarray.Dataset may contain multiple xarray.DataArray objects, with shared coordinates. In pyorc the coordinates are typically time for time epochs measured in seconds since the start of the video, x for the horizontal grid coordinate (in meters), and y for the vertical grid coordinate (in meters). Operations you can apply on both data models are very comparable to operations you may already use in pandas, such as resampling, reindexing, aggregations, and so on.
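
The sketch below illustrates this layout with a small, hypothetical DataArray (the array values and sizes are placeholders, not pyorc output):

    import numpy as np
    import xarray as xr

    # A frames-like DataArray: one variable with dimensions time, y and x.
    frames = xr.DataArray(
        np.zeros((5, 10, 20), dtype="uint8"),
        coords={
            "time": np.arange(5, dtype=float),  # seconds since the start of the video
            "y": np.arange(10.0),               # meters
            "x": np.arange(20.0),               # meters
        },
        dims=("time", "y", "x"),
        name="frames",
    )
    # pandas-like operations apply directly, e.g. a reduction over time:
    mean_frame = frames.mean(dim="time")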

Note

We highly recommend using the excellent xarray manual side-by-side with pyorc to understand how to work effectively with the xarray ecosystem.

In the remaining sections, we describe the API classes and the functions they are based on.

CameraConfig class#

Class and properties#

CameraConfig(height, width[, crs, ...])

Camera configuration containing information about the perspective of the camera with respect to real-world coordinates.

CameraConfig.bbox

Returns a geographical bbox fitting around the corner points of the area of interest in the camera perspective.

CameraConfig.shape

Returns the number of rows and columns in projected frames from Frames.project.

CameraConfig.transform

Returns the Affine transform of projected frames from Frames.project.

Setting of properties and attributes#

CameraConfig.set_bbox_from_corners(corners)

Establish a bbox based on a set of camera-perspective corner points and assign the corner coordinates to the camera configuration.

CameraConfig.set_gcps(src, dst, z_0[, ...])

Set ground control points for the given CameraConfig

CameraConfig.set_lens_pars([k1, c, focal_length])

Set the lens parameters of the given CameraConfig

CameraConfig.set_lens_calibration(fn[, ...])

Calibrates and sets the properties camera_matrix and dist_coeffs using a video of a chessboard pattern.

CameraConfig.set_lens_position(x, y, z[, crs])

Set the geographical position of the lens of current CameraConfig.
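
A minimal sketch of building a camera configuration with these methods; all pixel and world coordinates below are placeholders:

    import pyorc

    # height and width of the video frames in pixels
    cam_config = pyorc.CameraConfig(height=1080, width=1920)

    # ground control points: src in [column, row] camera coordinates,
    # dst in real-world coordinates, z_0 the water level during the survey
    cam_config.set_gcps(
        src=[[100, 800], [300, 600], [900, 650], [1200, 900]],
        dst=[[5.1, 10.2], [5.5, 14.8], [9.9, 15.0], [10.3, 10.4]],
        z_0=100.0,
    )
    cam_config.set_lens_position(x=7.0, y=5.0, z=103.0)

    # corners of the area of interest in [column, row] camera coordinates
    cam_config.set_bbox_from_corners(
        [[150, 700], [1150, 700], [1150, 1000], [150, 1000]]
    )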

Exporting#

CameraConfig.to_dict()

Return the CameraConfig object as a dictionary.

CameraConfig.to_file(fn)

Write the CameraConfig object to a file as a JSON structure.

CameraConfig.to_json()

Convert the CameraConfig object to a JSON string.
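
Continuing the sketch above, a configuration can be exported and inspected as follows (the file name is arbitrary):

    cam_config.to_file("camera_config.json")  # write a JSON file to disk
    as_dict = cam_config.to_dict()            # plain dictionary
    as_json = cam_config.to_json()            # JSON-formatted string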

Retrieve geometrical information#

CameraConfig.get_M([h_a, to_bbox_grid, reverse])

Establish a transformation matrix for a certain actual water level h_a.

CameraConfig.get_depth(z[, h_a])

Retrieve depth for measured bathymetry points using the camera configuration and an actual water level, measured in a local reference.

CameraConfig.z_to_h(z)

Convert z coordinates of bathymetry to height coordinates in a local reference.
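
For example, depths over a set of surveyed bathymetry points can be retrieved for a given actual water level h_a (all values are placeholders):

    # z coordinates of surveyed bathymetry points, in the same local reference
    # as the water levels used in the camera configuration
    z_bathymetry = [98.5, 98.2, 98.0, 98.3, 98.6]
    depth = cam_config.get_depth(z=z_bathymetry, h_a=1.2)

    # transformation matrix for the same actual water level
    M = cam_config.get_M(h_a=1.2)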

Plotting methods#

CameraConfig.plot([figsize, ax, tiles, ...])

Plot the geographical situation of the CameraConfig.

CameraConfig.plot_bbox([ax, camera, ...])

Plot bounding box for orthorectification in a geographical projection (camera=False) or the camera Field Of View (camera=True).
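
Both methods accept an existing matplotlib axes through ax; a minimal sketch:

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    cam_config.plot(ax=ax)  # geographical situation of the camera configuration
    plt.show()

    fig, ax = plt.subplots()
    cam_config.plot_bbox(ax=ax, camera=True)  # bounding box in the camera field of view
    plt.show()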

Video class#

Class and properties#

Video(fn[, camera_config, h_a, start_frame, ...])

Video class, inheriting parts from cv2.VideoCapture.

Video.camera_config

CameraConfig object

Video.fps

float, frames per second

Video.end_frame

int, last frame considered in analysis

Video.start_frame

int, first frame considered in analysis

Video.corners

list of 4 lists (int) with [column, row] locations of area of interest in video objective

Getting frames from video objects#

Video.get_frame(n[, method])

Retrieve one frame.

Video.get_frames([method])

Get an xr.DataArray, containing a dask array of frames, from start_frame until end_frame, expected to be read lazily.
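
A minimal sketch of opening a video and extracting frames; the file name, frame range and water level are placeholders, and a camera configuration such as the one sketched above is assumed:

    import pyorc

    video = pyorc.Video(
        "river_video.mp4",
        camera_config=cam_config,
        start_frame=0,
        end_frame=100,
        h_a=1.2,  # actual water level during the video
    )
    single_frame = video.get_frame(0)  # one frame
    da_frames = video.get_frames()     # lazy xr.DataArray with all selected frames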

Frames subclass#

These methods can be called from an xarray.DataArray that is generated using pyorc.Video.get_frames.

Class and properties#

Frames(xarray_obj)

Frames functionalities that can be applied on an xr.DataArray.

Frames.camera_config

Camera configuration belonging to the processed video

Frames.camera_shape

Shape of the original camera objective of the processed video.

Frames.h_a

Actual water level belonging to the processed video

Enhancing frames#

Frames.edge_detect([wdw_1, wdw_2])

Convert frames into edges, using a band convolution filter.

Frames.normalize([samples])

Remove the mean of sampled frames.

Frames.time_diff([thres, abs])

Apply a difference over time (i.e. between consecutive frames).

Frames.smooth([wdw])

Smooth each frame with a Gaussian kernel.
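
Assuming the accessor is registered under the name frames and that each method returns a new DataArray, these enhancements can be chained, for instance:

    da_enhanced = (
        da_frames.frames.normalize()  # remove the mean of sampled frames
        .frames.edge_detect()         # band convolution filter
        .frames.smooth()              # Gaussian smoothing per frame
    )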

Projecting frames to planar views#

Frames.project([method, resolution])

Project frames into a projected frames object, using information from the camera_config attribute.

Retrieving surface velocities from frames#

Frames.get_piv(**kwargs)

Perform PIV computation on projected frames.
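
A sketch of the typical chain from enhanced frames to surface velocities; the resolution value is a placeholder and is assumed to be the grid resolution of the projected frames:

    da_proj = da_enhanced.frames.project(resolution=0.01)  # planar view of the area of interest
    ds_piv = da_proj.frames.get_piv()                      # xarray.Dataset with velocimetry results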

Visualizing frames#

Frames.plot([ax, mode])

Creates a QuadMesh plot from an RGB or grayscale frame on new or existing (if ax is not None) axes.

Frames.to_ani(fn[, figure_kwargs, ...])

Store an animation of the frames in the object

Frames.to_video(fn[, video_format, fps])

Write frames to a video file without any layout
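
For example (selecting a single frame before plotting is an assumption here, and the file name is arbitrary):

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    da_proj.isel(time=0).frames.plot(ax=ax)  # one frame as a QuadMesh on the given axes
    plt.show()

    da_proj.frames.to_video("projected.mp4", fps=25)  # export all frames as a plain video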

Velocimetry subclass#

These methods can be called from an xarray.Dataset that is generated using pyorc.Frames.get_piv.

Class and properties#

Velocimetry(xarray_obj)

Velocimetry functionalities that can be applied on an xarray.Dataset.

Velocimetry.camera_config

Camera configuration belonging to the processed video

Velocimetry.camera_shape

Shape of the original camera objective of the processed video.

Velocimetry.h_a

Actual water level belonging to the processed video

Temporal masking methods#

The mask methods below either require, or can make use of, a time dimension in the data. They are therefore most logically applied before any reduction over time.

Velocimetry.mask.minmax([s_min, s_max])

Masks values if the velocity scalar lies outside a user-defined valid range.

Velocimetry.mask.corr([tolerance])

Masks values with too low correlation

Velocimetry.mask.angle([angle_expected, ...])

Filters on the expected angle.

Velocimetry.mask.rolling([wdw, tolerance])

Masks values if neighbours over a certain rolling length before and after have a significantly higher velocity than the value under consideration, measured by tolerance.

Velocimetry.mask.outliers([tolerance, mode])

Mask outliers measured by the number of standard deviations from the mean.

Velocimetry.mask.variance([tolerance, mode])

Masks locations if their variance (std/mean in time) is above a tolerance level for either or both the x and y directions.

Velocimetry.mask.count([tolerance])

Masks locations with too few valid velocities in time, measured as a fraction compared against tolerance.
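
The sketch below assumes the dataset from Frames.get_piv is available as ds_piv, that the accessor is registered as velocimetry, and that each mask method returns a boolean mask that can be applied with plain xarray; check the individual signatures for the exact return and in-place behaviour:

    mask_range = ds_piv.velocimetry.mask.minmax(s_min=0.0, s_max=5.0)  # plausible velocity range
    mask_corr = ds_piv.velocimetry.mask.corr(tolerance=0.1)
    mask_count = ds_piv.velocimetry.mask.count(tolerance=0.5)

    # combine and apply the masks with plain xarray
    ds_masked = ds_piv.where(mask_range & mask_corr & mask_count)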

Spatial masking methods#

The spatial masking methods look at a time-reduced representation of the grid results. The resulting mask can be applied to a full time series and will then mask out grid cells over the entire time span if they do not pass the mask.

Velocimetry.mask.window_mean([tolerance, ...])

Masks values when they deviate more than tolerance (measured as a relative fraction) from the mean of their neighbours (including the value itself).

Velocimetry.mask.window_nan([tolerance, wdw])

Masks values if their surrounding neighbours (including the value itself) contain too many missing values.

Data infilling#

Velocimetry.mask.window_replace([wdw, iter])

Replaces values in a certain window size with the mean of their neighbours.

Getting data over transects#

Velocimetry.get_transect(x, y[, z, s, crs, ...])

Interpolate all variables to supplied x and y coordinates of a cross section.

Velocimetry.set_encoding([enc_pars])

Set encoding parameters for all typical variables in a velocimetry dataset.
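
A sketch of sampling velocimetry results over a surveyed cross-section; the coordinates below are placeholders:

    import numpy as np

    # x, y (and optionally z) coordinates of a surveyed cross-section
    x_section = np.linspace(5.0, 10.0, 20)
    y_section = np.linspace(12.0, 12.5, 20)
    z_section = np.linspace(98.0, 98.5, 20)

    ds_points = ds_masked.velocimetry.get_transect(x_section, y_section, z=z_section)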

Plotting methods#

Velocimetry.plot

alias of _Velocimetry_PlotMethods

Velocimetry.plot.pcolormesh(x, y[, s, ax])

Creates a pcolormesh plot from velocimetry results on new or existing axes.

Velocimetry.plot.scatter(x, y[, c, ax])

Creates a scatter plot of velocimetry or transect results on new or existing axes.

Velocimetry.plot.streamplot(x, y, u, v[, s, ...])

Creates a streamplot of velocimetry results on new or existing axes.

Velocimetry.plot.quiver(x, y, u, v[, s, ax])

Creates a quiver plot from velocimetry results on new or existing axes.

Transect subclass#

These methods can be called from an xarray.Dataset that is generated using pyorc.Velocimetry.get_transect.

Class and properties#

Transect(xarray_obj)

Transect functionalities that can be applied on an xarray.Dataset.

Transect.camera_config

Camera configuration belonging to the processed video

Transect.camera_shape

Shape of the original camera objective of the processed video.

Transect.h_a

Actual water level belonging to the processed video

Derivatives#

Transect.vector_to_scalar([v_x, v_y])

Set "v_eff" and "v_dir" variables as effective velocities over cross-section, and its angle

Transect.get_xyz_perspective([M, xs, ys, ...])

Get camera-perspective column, row coordinates from cross-section locations.

River flow methods#

Transect.get_river_flow([q_name, Q_name])

Integrate time series of depth-averaged velocities [m2 s-1] into cross-section-integrated flow [m3 s-1], estimating one or several quantiles over the time dimension.

Transect.get_q([v_corr, fill_method])

Compute depth-integrated velocity for quantiles of the time series, using a correction v_corr between surface velocity and depth-averaged velocity.
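
A sketch of turning a transect into discharge estimates, assuming the accessor is registered as transect, that each method returns a dataset with the new variables added, and that v_corr=0.85 is an acceptable surface-to-depth-average correction:

    ds_points = ds_points.transect.get_q(v_corr=0.85)  # depth-integrated velocities [m2 s-1]
    ds_points = ds_points.transect.get_river_flow()    # cross-section integrated flow [m3 s-1]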

Plotting methods#

Transect.plot

alias of _Transect_PlotMethods

Transect.plot.quiver(x, y, u, v[, s, ax])

Creates a quiver plot from velocimetry results on new or existing axes.

Transect.plot.scatter(x, y[, c, ax])

Creates a scatter plot of velocimetry or transect results on new or existing axes.