API reference#
pyorc’s API consists of several subclasses of the xarray.Dataset and xarray.DataArray data models. In a nutshell, xarray’s data models are meant to store and analyze scientific datasets with multiple dimensions. An xarray.DataArray contains one variable with possibly several dimensions, and coordinates within those dimensions. An xarray.Dataset may contain multiple xarray.DataArray objects with shared coordinates. In pyorc, the coordinates are typically time for time epochs measured in seconds since the beginning of a video, x for horizontal grid spacing (in meters), and y for vertical grid spacing (in meters). Operations you can apply to both data models are very comparable to operations you may already know from pandas, such as resampling, reindexing, and aggregations.
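Because every pyorc result is an ordinary xarray object, the usual xarray operations apply directly. The sketch below builds a small synthetic xarray.DataArray with the same coordinate names pyorc uses (time, y, x) and applies a few typical operations; it does not call pyorc itself.

```python
import numpy as np
import xarray as xr

# synthetic stand-in for a pyorc result: 10 time steps on a 4 x 5 grid
da = xr.DataArray(
    np.random.rand(10, 4, 5),
    coords={
        "time": np.arange(10) * 0.04,   # seconds since the start of the video
        "y": np.linspace(0.0, 3.0, 4),  # meters
        "x": np.linspace(0.0, 4.0, 5),  # meters
    },
    dims=("time", "y", "x"),
    name="v",
)

# typical xarray-style reductions and selections, as used throughout pyorc
v_mean = da.mean(dim="time")                        # time-averaged field
v_point = da.sel(x=2.0, y=1.0, method="nearest")    # time series at one grid cell
v_q = da.quantile([0.05, 0.5, 0.95], dim="time")    # quantiles over time
```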
Note
We highly recommend using the excellent xarray manual side-by-side with pyorc to understand how to work effectively with the xarray ecosystem.
In the remaining sections, we describe the API classes and the functions they are based on.
CameraConfig class#
Class and properties#
- Camera configuration containing information about the perspective of the camera.
- Give geographical bbox fitting around the corner points of the area of interest in camera perspective.
- Return rows and columns in projected frames from Frames.project.
- Return Affine transform of projected frames from Frames.project.
Setting of properties and attributes#
- Establish bbox based on a set of camera perspective corner points.
- Set ground control points for the given CameraConfig.
- Set the lens parameters of the given CameraConfig.
- Set lens and distortion parameters.
- Calibrate and set the properties camera_matrix and dist_coeffs using a video of a chessboard pattern.
- Set the geographical position of the lens of the current CameraConfig.
Exporting#
- Return the CameraConfig object as a dictionary.
- Convert the current instance to a dictionary with all values converted to strings.
- Write the CameraConfig object to a JSON structure.
- Convert the CameraConfig object to a string.
Retrieve geometrical information#
- Estimate lens position from distortion and intrinsic/extrinsic matrix.
- Establish a transformation matrix for a certain actual water level h_a.
- Get bounding box.
- Retrieve depth for measured bathymetry points.
- Retrieve distance to wall for measured bathymetry points.
- Retrieve depth for measured bathymetry points.
- Get actual water level measured in the global vertical datum (+z_0) from the water level in the local datum (+h_ref).
- Project gridded coordinates to col, row coordinates on the image.
- Project real-world x, y, z coordinates into col, row coordinates on the image.
- Reverse-project points in [column, row] space to [x, y, z] real-world coordinates.
- Convert z coordinates of bathymetry to height coordinates in the local reference (e.g. ...).
- Convert z coordinates of bathymetry to height coordinates in the local reference (e.g. ...).
- Estimate lens position from distortion and intrinsic/extrinsic matrix.
Plotting methods#
- Plot geographical situation of the CameraConfig.
- Plot bounding box.
CrossSection class#
The CrossSection class is made to facilitate geometrical operations with cross sections. It can be used to guide water-line detection methods, water level estimation, and possibly other optical methods conceived by the user that require an understanding of the perspective jointly with a measured cross section.
To facilitate geometric operations, most of the geometric properties are returned as shapely.geometry objects such as Point, LineString and Polygon.
This makes it easy to derive important flow properties such as the wetted surface [m2] and the wetted perimeter [m].
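Because these are plain shapely geometries, deriving such properties reduces to standard shapely operations. Below is a minimal sketch; the polygon `wetted` stands in for the Polygon that the wetted-surface method listed further down would return for a given water level.

```python
from shapely.geometry import Polygon

# hypothetical wetted cross-section in the vertical (S, Z) plane, in meters
wetted = Polygon([(0.0, 92.0), (2.0, 91.2), (4.0, 91.5), (6.0, 92.0)])

area = wetted.area                  # wetted surface [m2]
# wetted perimeter: for a strict definition the water-surface edge should be
# excluded; here the full boundary length is taken as a simple upper bound
perimeter = wetted.exterior.length  # [m]
print(f"A = {area:.2f} m2, P = {perimeter:.2f} m")
```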
Class and properties#
- Water Level functionality.
- Return cross-section as a list of shapely.geometry.Point.
- Return cross-section perpendicular to flow direction (SZ) as a list of shapely.geometry.Point.
- Return cross-section as a shapely.geometry.LineString.
- Return cross-section perpendicular to flow direction (SZ) as a shapely.geometry.LineString.
- Average angle orientation that the cross-section makes in geographical space.
- Determine the index of the point in the cross-section closest to the camera.
- Determine the index of the point in the cross-section farthest from the camera.
Getting cross section geometries#
- Retrieve LineString of water surface at cross-section at a given water level.
- Retrieve list of points, where cross-section (cs) touches the land (l).
- Retrieve waterlines over the cross-section, perpendicular to the orientation of the cross-section.
- Get horizontal polygon from cross-section land-line towards water or towards land.
- Retrieve a bottom surface polygon for the entire cross section, expanded over a length.
- Retrieve a planar water surface for a given water level, as a geometry.Polygon.
- Retrieve a wetted surface for a given water level, as a geometry.Polygon.
- Retrieve a wetted surface perpendicular to flow direction (SZ) for a water level, as a geometry.Polygon.
- Retrieve the points of interest within the cross-section for water level detection.
Plotting methods#
The plotting methods consist of a number of smaller methods, as well as one overarching CrossSection.plot method that combines them. The plotting functions are meant to give the user insight into the situation at a site. All methods can be combined with the parallel plotting method for camera configurations, CameraConfig.plot; a sketch is shown after the list below.
- Plot the cross-section situation.
- Plot the cross section.
- Plot the planar surface for a given water level.
- Plot the bottom surface for a given water level.
- Plot the wetted surface for a given water level.
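A minimal plotting sketch, assuming a CameraConfig instance cam_config and a CrossSection instance cs are already available, and that both plot methods accept a matplotlib Axes via an ax keyword (check the signatures for your version):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
cam_config.plot(ax=ax)   # geographical situation of the camera configuration
cs.plot(ax=ax)           # overarching cross-section plot on the same axes
plt.show()
```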
Water level detection#
Combined with a preprocessed image, e.g. a frame extracted from a video file, a water level can be detected.
- Detect water level optically from provided image.
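A hedged sketch of the workflow this section describes; the method name detect_water_level and its signature are assumptions based on the description above, and cs is a CrossSection prepared as in the previous section:

```python
import cv2

img = cv2.imread("frame_000001.png")  # a preprocessed frame extracted from the video
h = cs.detect_water_level(img)        # hypothetical call: optically detected water level
print(f"detected water level: {h:.2f} m")
```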
Video class#
Class and properties#
- Video class for reading and extracting data from video files.
- Get the CameraConfig object attached to the Video instance.
- Last frame considered in the analysis (int).
- Frames per second (float).
- Get frames of the Video instance.
- Get the video sampling frequency.
- Actual water level [m] during the video.
- Get the lazy flag.
- Get the region mask for stabilization.
- Get the rotation code.
- Get the stabilization region coordinates.
- First frame considered in the analysis (int).
- List of 4 lists (int) with [column, row] locations of the area of interest in the video objective.
Setting properties#
- Prepare a mask grid with 255 outside of the stabilization polygon and 0 inside.
Getting frames from video objects#
- Retrieve one frame.
- Get an xr.DataArray, containing a dask array of frames, from start_frame until end_frame.
- Retrieve a chunk of frames in one go.
- Get stabilization transforms for each frame based on analysis of stable points outside of the water area.
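A short usage sketch; the file name, frame range and water level are placeholders, and cam_config is a CameraConfig prepared with the methods above:

```python
import pyorc

video = pyorc.Video(
    "river.mp4",                # placeholder video file
    camera_config=cam_config,   # CameraConfig prepared earlier
    start_frame=0,
    end_frame=125,
    h_a=92.36,                  # actual water level [m] during the video
)
da = video.get_frames()         # lazy, dask-backed xr.DataArray of frames
```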
Frames subclass#
These methods can be called from an xarray.DataArray that is generated using pyorc.Video.get_frames.
Class and properties#
- Frames functionalities that can be applied on an xr.DataArray.
- Camera configuration belonging to the processed video.
- Shape of the original camera objective of the processed video (e.g. ...).
- Actual water level belonging to the processed video.
Enhancing frames#
- Highlight edges of frame intensities, using a band convolution filter.
- Minimum / maximum intensity filter.
- Remove the temporal mean of sampled frames.
- Remove a rolling mean from the frames.
- Smooth each frame with a Gaussian kernel.
- Apply a difference over time.
Projecting frames to planar views#
- Project frames into a projected frames object, with information from the camera_config attr.
Retrieving surface velocities from frames#
- Perform PIV computation on projected frames.
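The typical chain from raw frames to surface velocities is sketched below, continuing from the Video sketch above. The enhancement step shown (normalize, i.e. removing the temporal mean) is only one of the filters listed above; verify the method names against your pyorc version.

```python
da_norm = da.frames.normalize()      # remove the temporal mean to highlight moving tracers
da_proj = da_norm.frames.project()   # planar projection using the camera_config attribute
ds_piv = da_proj.frames.get_piv()    # xarray.Dataset with PIV-based surface velocities
```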
Visualizing frames#
- Create a QuadMesh plot from an RGB or grayscale frame on a new or existing (if ax is not None) axes.
- Store an animation of the frames in the object.
- Write frames to a video file without any layout.
Velocimetry subclass#
These methods can be called from an xarray.Dataset that is generated using pyorc.Frames.get_piv.
Class and properties#
- Velocimetry functionalities that can be applied on an xarray.Dataset.
- Camera configuration belonging to the processed video.
- Shape of the original camera objective of the processed video (e.g. ...).
- Actual water level belonging to the processed video.
- Check if the data contained in the object seems to be velocimetry data.
Temporal masking methods#
The mask methods below either require or may use a time dimension in the data. They are therefore most logically applied before any reduction over time. A usage sketch follows the list below.
- Mask values if the velocity scalar lies outside a user-defined valid range.
- Mask values with too low correlation.
- Filter on the expected angle.
- Mask values if neighbours over a certain rolling length before and after have a significantly higher velocity than the value under consideration, measured by a tolerance.
- Mask outliers measured by the number of standard deviations from the mean.
- Mask locations if their variance (std/mean in time) is above a tolerance level for either or both the x and y directions.
- Mask locations with too low an amount of valid velocities in time, measured by the fraction with ...
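A sketch of combining a few temporal masks; the mask accessor pattern and the idea that each method returns a boolean mask are assumptions based on this section, so check the individual signatures:

```python
# assumed pattern: each mask method returns a boolean mask aligned with the dataset,
# continuing from the ds_piv result of the earlier sketch
mask_corr = ds_piv.velocimetry.mask.corr()         # drop low-correlation results
mask_range = ds_piv.velocimetry.mask.minmax()      # drop velocities outside a valid range
mask_outlier = ds_piv.velocimetry.mask.outliers()  # drop statistical outliers
ds_masked = ds_piv.where(mask_corr & mask_range & mask_outlier)
```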
Spatial masking methods#
The spatial masking methods look at a time-reduced representation of the grid results. The resulting mask can be applied to a full time series and will then mask out grid cells over their full time span if they do not pass the mask. A sketch of this pattern follows the list below.
- Mask values when they deviate more than a tolerance (measured as a relative fraction) from the mean of their neighbours (inc. ...).
- Mask values if their surrounding neighbours (inc. ...).
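The time-span behaviour described above can also be expressed with plain xarray: reduce a validity mask over time and let it broadcast back over the full series. A generic sketch (not a specific pyorc call), assuming the velocity components are stored as v_x / v_y:

```python
# keep a grid cell only if it holds valid velocities in at least 30% of the time steps;
# the reduced (y, x) mask broadcasts over the full time series when applied with where()
valid = ds_masked["v_x"].notnull()
keep = valid.mean(dim="time") > 0.3
ds_final = ds_masked.where(keep)
```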
Data infilling#
- Replaces values in a certain window size with mean of their neighbours.
Getting data over transects#
- Interpolate all variables to supplied x and y coordinates of a cross-section.
- Set encoding parameters for all typical variables in a velocimetry dataset.
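A sketch of extracting a transect from the masked result above; the coordinates are placeholders from a hypothetical survey and the crs keyword (for coordinates supplied in a different coordinate reference system) is an assumption, so check the signature:

```python
import numpy as np

x = np.array([642732.1, 642733.4, 642734.8])     # placeholder surveyed x coordinates
y = np.array([8304290.2, 8304291.0, 8304291.9])  # placeholder surveyed y coordinates
ds_points = ds_final.velocimetry.get_transect(x, y, crs=32737)
```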
Plotting methods#
- alias of ...
- Create pcolormesh plot from velocimetry results on new or existing axes.
- Create scatter plot of velocimetry or transect results on new or existing axes.
- Create streamplot of velocimetry results on new or existing axes.
- Create quiver plot from velocimetry results on new or existing axes.
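A plotting sketch; the time reduction with keep_attrs=True is assumed to keep the camera configuration attributes available to the plot accessor (worth verifying):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ds_final.mean(dim="time", keep_attrs=True).velocimetry.plot.quiver(ax=ax)
plt.show()
```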
Transect subclass#
These methods can be called from an xarray.Dataset that is generated using pyorc.Velocimetry.get_transect.
Class and properties#
- Transect functionalities that can be applied on an xarray.Dataset.
- Camera configuration belonging to the processed video.
- Shape of the original camera objective of the processed video (e.g. ...).
- Actual water level belonging to the processed video.
Derivatives#
- Set "v_eff" and "v_dir" variables as effective velocities over cross-section, and its angle.
- Get camera-perspective column, row coordinates from cross-section locations.
- Get line (x, y) pairs that show the depth over several intervals in the wetted part of the cross section.
- Return densified bottom and surface points, warped to image perspective.
- Get row, col locations of the transect coordinates.
- Get wetted polygon in camera perspective.
River flow methods#
- Integrate time series of depth averaged velocities [m2 s-1] into cross-section integrated flow [m3 s-1].
- Integrated velocity over depth for quantiles of time series.
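A sketch following this section's descriptions, continuing from the transect extracted above; whether get_river_flow returns a new dataset or adds a river_flow variable in place should be verified — the sketch assumes the latter, and the variable name is an assumption:

```python
ds_q = ds_points.transect.get_q()  # depth-integrated velocities per time-series quantile
ds_q.transect.get_river_flow()     # integrate over the cross-section into discharge
print(ds_q["river_flow"].values)   # discharge [m3 s-1] per quantile (assumed variable name)
```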
Plotting methods#
- alias of ...
- Create quiver plot from velocimetry results on new or existing axes.
- Create scatter plot of velocimetry or transect results on new or existing axes.