API reference#

pyorc’s API consists of several subclasses of the xarray.Dataset and xarray.DataArray data models. In a nutshell, xarray’s data models are meant to store and analyze scientific datasets with multiple dimensions. A xarray.DataArray contains one variable with possibly several dimensions, and coordinates within those dimensions. A xarray.Dataset may contain multiple xarray.DataArray objects with shared coordinates. In pyorc, the coordinates are typically time for time epochs measured in seconds since the beginning of a video, x for horizontal coordinates of the projected grid (in meters), and y for vertical coordinates of the projected grid (in meters). Operations you can apply to both data models are very comparable with operations you may already know from pandas, such as resampling, reindexing and aggregation.

Note

We highly recommend using the excellent xarray documentation side-by-side with pyorc to understand how to work effectively with the xarray ecosystem.

In the remaining sections, we describe the API classes and the functions they are based on.
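
The snippet below is a minimal, pyorc-free illustration of these data models: a Dataset with time, y and x coordinates holding a hypothetical velocity variable, reduced over time in a pandas-like fashion (all values are made up).

```python
import numpy as np
import xarray as xr

# a hypothetical velocity component on 5 time steps of a 4 x 6 planar grid
ds = xr.Dataset(
    {"v_x": (("time", "y", "x"), np.random.rand(5, 4, 6))},
    coords={
        "time": np.arange(5.0),     # seconds since the start of the video
        "y": np.arange(4.0) * 0.1,  # vertical grid coordinates [m]
        "x": np.arange(6.0) * 0.1,  # horizontal grid coordinates [m]
    },
)
v_mean = ds["v_x"].mean(dim="time")  # pandas-style aggregation over time
```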

CameraConfig class#

Class and properties#

CameraConfig(height, width[, crs, ...])

Camera configuration containing information about the perspective of the camera.

CameraConfig.bbox

Give geographical bbox fitting around the corner points of the area of interest in camera perspective.

CameraConfig.shape

Return the number of rows and columns in projected frames from Frames.project.

CameraConfig.transform

Return the affine transform of projected frames from Frames.project.

Setting of properties and attributes#

CameraConfig.set_bbox_from_corners(corners)

Establish bbox based on a set of camera perspective corner points.

CameraConfig.set_gcps(src, dst, z_0[, ...])

Set ground control points for the given CameraConfig.

CameraConfig.set_lens_pars([k1, c, focal_length])

Set the lens parameters of the given CameraConfig.

CameraConfig.set_intrinsic([camera_matrix, ...])

Set lens and distortion parameters.

CameraConfig.set_lens_calibration(fn[, ...])

Calibrate and set the properties camera_matrix and dist_coeffs using a video of a chessboard pattern.

CameraConfig.set_lens_position(x, y, z[, crs])

Set the geographical position of the lens of current CameraConfig.

Exporting#

CameraConfig.to_dict()

Return the CameraConfig object as a dictionary.

CameraConfig.to_dict_str()

Convert the current instance to a dictionary with all values converted to strings.

CameraConfig.to_file(fn)

Write the CameraConfig object to a JSON file.

CameraConfig.to_json()

Convert the CameraConfig object to a JSON string.

Retrieve geometrical information#

CameraConfig.get_M([h_a, to_bbox_grid, reverse])

Establish a transformation matrix for a certain actual water level h_a.

CameraConfig.get_bbox([camera, h_a, z_a, ...])

Get bounding box.

CameraConfig.get_camera_coords(points)

Convert real-world coordinates into camera coordinates.

CameraConfig.get_dist_shore(x, y, z[, h_a])

Retrieve distance to shore for measured bathymetry points.

CameraConfig.get_dist_wall(x, y, z[, h_a])

Retrieve distance to wall for measured bathymetry points.

CameraConfig.get_depth(z[, h_a])

Retrieve depth for measured bathymetry points.

CameraConfig.get_z_a([h_a])

Get actual water level measured in global vertical datum (+z_0) from water level in local datum (+h_ref).

CameraConfig.project_grid(xs, ys, zs[, ...])

Project gridded coordinates to col, row coordinates on image.

CameraConfig.project_points(points[, ...])

Project real world x, y, z coordinates into col, row coordinates on image.

CameraConfig.unproject_points(points, zs)

Reverse-project points in [column, row] space to [x, y, z] real-world coordinates.

CameraConfig.z_to_h(z)

Convert z coordinates of bathymetry to height coordinates in the local reference datum.

CameraConfig.h_to_z(h_a)

Convert height coordinates in the local reference datum to z coordinates of bathymetry.

CameraConfig.estimate_lens_position()

Estimate lens position from distortion and intrinsic/extrinsic matrix.

Plotting methods#

CameraConfig.plot([figsize, ax, tiles, ...])

Plot geographical situation of the CameraConfig.

CameraConfig.plot_bbox([ax, camera, ...])

Plot bounding box.

Video class#

Class and properties#

Video(fn[, camera_config, h_a, start_frame, ...])

Video class for reading and extracting data from video files.

Video.camera_config

Get CameraConfig object attached to Video instance.

Video.end_frame

int, last frame considered in analysis

Video.fps

float, frames per second

Video.frames

Get frames of Video instance.

Video.freq

Get video sampling frequency.

Video.h_a

Actual water level [m] during video

Video.lazy

Get lazy flag.

Video.mask

Get region mask for stabilization.

Video.rotation

Get rotation code.

Video.stabilize

Get stabilization region coordinates.

Video.start_frame

int, first frame considered in analysis

Video.corners

list of four [column, row] locations (int) of the area of interest in the camera objective

Setting properties#

Video.set_mask_from_exterior(exterior)

Prepare a mask grid with 255 outside of the stabilization polygon and 0 inside.

Getting frames from video objects#

Video.get_frame(n[, method])

Retrieve one frame.

Video.get_frames([method])

Get an xr.DataArray containing a dask array of frames, from start_frame until end_frame.

Video.get_frames_chunk(n_start, n_end[, method])

Retrieve a chunk of frames in one go.

Video.get_ms(cap[, split])

Get stabilization transforms for each frame based on analysis of stable points outside of the water area.

Frames subclass#

These methods can be called from an xarray.DataArray that is generated using pyorc.Video.get_frames.

Class and properties#

Frames(xarray_obj)

Frames functionalities that can be applied to an xr.DataArray.

Frames.camera_config

Camera configuration belonging to the processed video.

Frames.camera_shape

Shape of the original camera objective of the processed video.

Frames.h_a

Actual water level belonging to the processed video.

Enhancing frames#

Frames.edge_detect([wdw_1, wdw_2])

Highlight edges of frame intensities, using a band convolution filter.

Frames.minmax([min, max])

Minimum / maximum intensity filter.

Frames.normalize([samples])

Remove the temporal mean of sampled frames.

Frames.reduce_rolling([samples])

Remove a rolling mean from the frames.

Frames.smooth([wdw])

Smooth each frame with a Gaussian kernel.

Frames.time_diff([thres, abs])

Apply a difference over time.

Projecting frames to planar views#

Frames.project([method, resolution, reducer])

Project frames into a projected frames object, with information from the camera_config attribute.

Retrieving surface velocities from frames#

Frames.get_piv([window_size, overlap, engine])

Perform PIV computation on projected frames.

Visualizing frames#

Frames.plot([ax, mode])

Create a QuadMesh plot from an RGB or grayscale frame on new or existing (if ax is not None) axes.

Frames.to_ani(fn[, figure_kwargs, ...])

Store an animation of the frames in the object.

Frames.to_video(fn[, video_format, fps])

Write frames to a video file without any layout.

Velocimetry subclass#

These methods can be called from an xarray.Dataset that is generated using pyorc.Frames.get_piv.

Class and properties#

Velocimetry(xarray_obj)

Velocimetry functionalities that can be applied to an xarray.Dataset.

Velocimetry.camera_config

Camera configuration belonging to the processed video.

Velocimetry.camera_shape

Shape of the original camera objective of the processed video.

Velocimetry.h_a

Actual water level belonging to the processed video.

Velocimetry.is_velocimetry

Checks if the data contained in the object seems to be velocimetry data by checking naming of dims and available variables.

Temporal masking methods#

The mask methods below either require or may have a dimension time in the data. Therefore, they are most logically applied before any reduction over time.

Velocimetry.mask.minmax([s_min, s_max])

Mask values if the velocity scalar lies outside a user-defined valid range.

Velocimetry.mask.corr([tolerance])

Mask values with too low a correlation.

Velocimetry.mask.angle([angle_expected, ...])

Filter on the expected angle.

Velocimetry.mask.rolling([wdw, tolerance])

Mask values if neighbours over a certain rolling length before and after have a significantly higher velocity than the value under consideration, measured by tolerance.

Velocimetry.mask.outliers([tolerance, mode])

Mask outliers measured by the number of standard deviations from the mean.

Velocimetry.mask.variance([tolerance, mode])

Mask locations if their variance (std/mean in time) is above a tolerance level for either or both the x and y directions.

Velocimetry.mask.count([tolerance])

Mask locations with too few valid velocities in time, measured as a fraction against tolerance.

Spatial masking methods#

The spatial masking methods look at a time-reduced representation of the grid results. The resulting mask can be applied on a full time series and will then mask out grid cells over the full time span if these do not pass the mask.

Velocimetry.mask.window_mean([tolerance, ...])

Mask values when their value deviates more than tolerance (measured as a relative fraction) from the mean of their neighbours.

Velocimetry.mask.window_nan([tolerance, wdw])

Mask values if their surrounding neighbours (incl. the value itself) contain too many NaNs.

Data infilling#

Velocimetry.mask.window_replace([wdw, iter])

Replace values within a certain window size with the mean of their neighbours.

Getting data over transects#

Velocimetry.get_transect(x, y[, z, s, crs, ...])

Interpolate all variables to supplied x and y coordinates of a cross section.

Velocimetry.set_encoding([enc_pars])

Set encoding parameters for all typical variables in a velocimetry dataset.

Plotting methods#

Velocimetry.plot

alias of _Velocimetry_PlotMethods

Velocimetry.plot.pcolormesh(x, y[, s, ax])

Create pcolormesh plot from velocimetry results on new or existing axes.

Velocimetry.plot.scatter(x, y[, c, ax])

Create scatter plot of velocimetry or transect results on new or existing axes.

Velocimetry.plot.streamplot(x, y, u, v[, s, ...])

Create streamplot of velocimetry results on new or existing axes.

Velocimetry.plot.quiver(x, y, u, v[, s, ax])

Create quiver plot from velocimetry results on new or existing axes.

Transect subclass#

These methods can be called from an xarray.Dataset that is generated using pyorc.Velocimetry.get_transect.

Class and properties#

Transect(xarray_obj)

Transect functionalities that can be applied to an xarray.Dataset.

Transect.camera_config

Camera configuration belonging to the processed video.

Transect.camera_shape

Shape of the original camera objective of the processed video.

Transect.h_a

Actual water level belonging to the processed video.

Derivatives#

Transect.vector_to_scalar([v_x, v_y])

Set "v_eff" and "v_dir" variables as the effective velocity over the cross-section and its angle.

Transect.get_xyz_perspective([trans_mat, ...])

Get camera-perspective column, row coordinates from cross-section locations.

Transect.get_depth_perspective(h[, ...])

Get line (x, y) pairs that show the depth over several intervals in the wetted part of the cross section.

Transect.get_bottom_surface_z_perspective(h)

Return densified bottom and surface points, warped to image perspective.

Transect.get_transect_perspective([h, ...])

Get row, col locations of the transect coordinates.

Transect.get_wetted_perspective(h[, sample_size])

Get wetted polygon in camera perspective.

River flow methods#

Transect.get_river_flow([q_name, discharge_name])

Integrate time series of depth-averaged velocities [m2 s-1] into cross-section integrated flow [m3 s-1].

Transect.get_q([v_corr, fill_method])

Integrate velocity over depth for quantiles of the time series.

Plotting methods#

Transect.plot

alias of _Transect_PlotMethods

Transect.plot.quiver(x, y, u, v[, s, ax])

Create quiver plot from velocimetry results on new or existing axes.

Transect.plot.scatter(x, y[, c, ax])

Create scatter plot of velocimetry or transect results on new or existing axes.