Detectors

Module containing detector classes to be used with pengu-track filters and models.

class PenguTrack.Detectors.AlexSegmentation(*args, **kwargs)[source]

Segmentation method comparing input images to an image-background buffer. Able to learn new background information.

class PenguTrack.Detectors.AreaBlobDetector(object_size=1, object_number=1, threshold=None)[source]

Detector classifying objects by area and number to be used with pengu-track modules.

detect(image, return_regions=False)[source]

Detection function. Partitions the image into blob regions and selects them according to their area. Returns information about the regions.

Parameters
  • image (array_like) – Image will be converted to uint8 greyscale and then binarized.

  • return_regions (bool, optional) – If True, the function will return skimage.measure.regionprops objects, else a list of the blob centroids and areas.

Returns

regions (array_like) – List of information about each blob of adequate size.
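
A minimal usage sketch, assuming a NumPy greyscale image; the constructor values below are illustrative, not recommendations:

import numpy as np
from PenguTrack.Detectors import AreaBlobDetector

image = np.zeros((100, 100), dtype=np.uint8)
image[40:50, 40:50] = 255  # one synthetic square blob

detector = AreaBlobDetector(object_size=10, object_number=1)
blobs = detector.detect(image)                         # centroids and areas
regions = detector.detect(image, return_regions=True)  # regionprops objects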

class PenguTrack.Detectors.AreaDetector(object_area=1, object_number=1, threshold=None, lower_limit=None, upper_limit=None)[source]

Detector classifying objects by area and number to be used with pengu-track modules.

detect(image)[source]

Detection function. Partitions the image into blob regions and selects them according to their area. Returns information about the regions.

Parameters
image (array_like) – Image will be converted to uint8 greyscale and then binarized.

Returns

regions (array_like) – List of information about each blob of adequate size.

class PenguTrack.Detectors.BlobDetector(object_size=1, object_number=1, threshold=None)[source]

Detector classifying objects by size and number to be used with pengu-track modules.

detect(image)[source]

Detection function. Partitions the image into blob regions and selects them according to object_size. Returns information about the regions.

Parameters

image (array_like) – Image will be converted to uint8 greyscale and then binarized.

Returns

regions (array_like) – List of information about each blob of adequate size.

class PenguTrack.Detectors.BlobSegmentation(max_size, min_size=1, init_image=None)[source]

Segmentation method detecting blobs.

detect(image, do_neighbours=True, *args, **kwargs)[source]

Segmentation function. This compares the input image to the background model and returns a segmentation map.

Parameters
  • image (array_like) – Input Image.

  • do_neighbours (bool, optional) – If True, neighbouring pixels will be updated according to their foreground vicinity; otherwise this time-intensive calculation is skipped.

Returns

SegMap (array_like, bool) – The segmented image.
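
A hedged usage sketch, assuming the first frame seeds the background model and later frames are segmented against it (sizes and frame data are placeholders):

import numpy as np
from PenguTrack.Detectors import BlobSegmentation

frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(3)]

seg = BlobSegmentation(max_size=50, min_size=2, init_image=frames[0])
for frame in frames[1:]:
    seg_map = seg.detect(frame, do_neighbours=False)  # boolean foreground mask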

class PenguTrack.Detectors.Detector[source]

This class describes the abstract interface of a detector in the pengu-track package. It is only meant for subclassing.
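
A hedged subclassing sketch; the centroid logic below is purely illustrative and not part of the package:

import numpy as np
from PenguTrack.Detectors import Detector

class CentroidDetector(Detector):
    """Toy detector returning the centroid of all foreground pixels."""
    def detect(self, image, *args, **kwargs):
        ys, xs = np.nonzero(image)
        if len(ys) == 0:
            return []
        return [[ys.mean(), xs.mean()]]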

class PenguTrack.Detectors.DumbViBeSegmentation(*args, **kwargs)[source]
class PenguTrack.Detectors.EmperorDetector(initial_image, **kwargs)[source]
class PenguTrack.Detectors.FlowDetector(flow=None, pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0, *args, **kwargs)[source]
class PenguTrack.Detectors.MeanViBeSegmentation(sensitivity=1, n=20, m=1, init_image=None, *args, **kwargs)[source]
class PenguTrack.Detectors.Measurement(log_probability, position, cov=None, data=None, frame=None, track_id=None)[source]

Base Class for detection results.
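
A minimal construction sketch, assuming the position is given as (row, col); the values are illustrative:

from PenguTrack.Detectors import Measurement

m = Measurement(log_probability=0.0, position=[42.5, 17.0], frame=3)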

class PenguTrack.Detectors.MoGSegmentation(n=10, init_image=None)[source]

Segmentation method assuming that pixel states are described by a mixture of Gaussian distributions.

detect(image, *args, **kwargs)[source]

Segmentation function. This function binarizes the input image by assuming that pixels which do not fit the background Gaussians are foreground.

Parameters

image (array_like) – Image to be segmented.

Returns

SegMap (array_like, bool) – Segmented Image.

segmentate(image, *args, **kwargs)[source]

Segmentation function. This function binarizes the input image by assuming that pixels which do not fit the background Gaussians are foreground.

Parameters

image (array_like) – Image to be segmented.

Returns

SegMap (array_like, bool) – Segmented Image.
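
A hedged usage sketch, assuming the Gaussian background model is seeded with an initial frame (all image data below is synthetic):

import numpy as np
from PenguTrack.Detectors import MoGSegmentation

background = np.full((64, 64), 30, dtype=np.uint8)
frame = background.copy()
frame[10:20, 10:20] = 200  # synthetic foreground object

mog = MoGSegmentation(n=10, init_image=background)
seg_map = mog.detect(frame)  # boolean foreground mask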

class PenguTrack.Detectors.MoGSegmentation2(n=20, r=15, init_image=None)[source]

Segmentation method comparing input images to an image-background buffer. Able to learn new background information.

detect(image, do_neighbours=True, *args, **kwargs)[source]

Segmentation function. This compares the input image to the background model and returns a segmentation map.

Parameters
  • image (array_like) – Input Image.

  • do_neighbours (bool, optional) – If True, neighbouring pixels will be updated according to their foreground vicinity; otherwise this time-intensive calculation is skipped.

Returns

SegMap (array_like, bool) – The segmented image.

class PenguTrack.Detectors.NKCellDetector[source]
class PenguTrack.Detectors.NKCellDetector2[source]
class PenguTrack.Detectors.RegionPropDetector(RegionFilters)[source]
class PenguTrack.Detectors.Segmentation[source]

This class describes the abstract interface of an image-segmentation algorithm in the pengu-track package. It is only meant for subclassing.
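
A hedged subclassing sketch; the fixed-threshold rule is purely illustrative and not part of the package:

import numpy as np
from PenguTrack.Detectors import Segmentation

class FixedThresholdSegmentation(Segmentation):
    """Toy segmentation: everything above a fixed value is foreground."""
    def __init__(self, threshold=127):
        super().__init__()
        self.threshold = threshold

    def detect(self, image, *args, **kwargs):
        return np.asarray(image) > self.threshold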

class PenguTrack.Detectors.SimpleAreaDetector(object_area=1, object_number=1, threshold=None, lower_limit=None, upper_limit=None)[source]

Detector classifying objects by area and number to be used with pengu-track modules.

detect(image)[source]

Detection function. Partitions the image into blob regions and selects them according to their area. Returns information about the regions.

Parameters

image (array_like) – Image will be converted to uint8 greyscale and then binarized.

Returns

regions (array_like) – List of information about each blob of adequate size.

class PenguTrack.Detectors.SimpleAreaDetector2(object_area=1, object_number=1, threshold=None, lower_limit=None, upper_limit=None, distxy_boundary=10, distz_boundary=21)[source]

Detector classifying objects by area and number to be used with pengu-track modules.

detect(image, mask)[source]

Detection function. Partitions the image into blob regions and selects them according to their area. Returns information about the regions.

Parameters

image (array_like) – Image will be converted to uint8 greyscale and then binarized.

Returns

regions (array_like) – List of information about each blob of adequate size.

class PenguTrack.Detectors.TCellDetector[source]

Detector classifying objects by area and number to be used with pengu-track modules.

class PenguTrack.Detectors.ThresholdSegmentation(treshold, reskale=True)[source]
class PenguTrack.Detectors.TinaCellDetector(disk_size=0, minimal_area=57, maximal_area=350, threshold=0.2)[source]

detect(minProj, minIndices, maxProj, maxIndices)[source]

Parameters
  • minProj

  • minIndices

  • maxProj

  • maxIndices

Returns

class PenguTrack.Detectors.VarianceSegmentation(treshold, r)[source]
class PenguTrack.Detectors.ViBeSegmentation(n=20, r=15, n_min=1, phi=16, init_image=None)[source]

Segmentation method comparing input images to an image-background buffer. Able to learn new background information.

detect(image, do_neighbours=True, *args, **kwargs)[source]

Segmentation function. This compares the input image to the background model and returns a segmentation map.

Parameters
  • image (array_like) – Input Image.

  • do_neighbours (bool, optional) – If True, neighbouring pixels will be updated according to their foreground vicinity; otherwise this time-intensive calculation is skipped.

Returns

SegMap (array_like, bool) – The segmented image.
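
A hedged pipeline sketch combining ViBe-style background subtraction with an area-based detector; all parameter values and frame data are illustrative assumptions:

import numpy as np
from PenguTrack.Detectors import ViBeSegmentation, AreaDetector

frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(5)]

vibe = ViBeSegmentation(n=20, r=15, n_min=1, phi=16, init_image=frames[0])
detector = AreaDetector(object_area=10, object_number=2)

for frame in frames[1:]:
    seg_map = vibe.detect(frame, do_neighbours=False)  # boolean foreground mask
    blobs = detector.detect(seg_map)                   # blob information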

class PenguTrack.Detectors.WatershedDetector(object_size=1, object_number=1, threshold=None)[source]

Detector classifying objects by area and number. It uses the watershed algorithm to split larger areas. To be used with pengu-track modules.

detect(image, return_regions=False)[source]

Detection function. Partitions the image into blob regions and selects them according to their area. Then splits larger areas into smaller ones with the watershed method. Returns information about the regions.

Parameters
  • image (array_like) – Image will be converted to uint8 greyscale and then binarized.

  • return_regions (bool, optional) – If True, the function will return skimage.measure.regionprops objects, else a list of the blob centroids and areas.

Returns

list – List of information about each blob of adequate size.
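
A hedged usage sketch with two touching synthetic blobs that the watershed step would try to separate; parameter values are illustrative only:

import numpy as np
from PenguTrack.Detectors import WatershedDetector

image = np.zeros((80, 80), dtype=np.uint8)
image[20:40, 20:40] = 255
image[35:55, 35:55] = 255  # second blob overlapping the first

wd = WatershedDetector(object_size=15, object_number=2)
blobs = wd.detect(image)                         # centroids and areas
regions = wd.detect(image, return_regions=True)  # regionprops objects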

class PenguTrack.Detectors.dotdict[source]

Enables dot (attribute) access on dicts.
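
A short sketch of the intended access pattern, assuming dotdict behaves as a standard dict subclass:

from PenguTrack.Detectors import dotdict

d = dotdict()
d["x"] = 1.0
print(d.x)  # attribute access on a dict key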

PenguTrack.Detectors.extended_regionprops(label_image, intensity_image=None, cache=True)[source]

Measure properties of labeled image regions.

Parameters
  • label_image ((N, M) ndarray) – Labeled input image. Labels with value 0 are ignored.

  • intensity_image ((N, M) ndarray, optional) – Intensity (i.e., input) image with same size as labeled image. Default is None.

  • cache (bool, optional) – Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.

Returns

properties (list of RegionProperties) – Each item describes one labeled region, and can be accessed using the attributes listed below.

Notes

The following properties can be accessed as attributes or keys:

area : int

Number of pixels of region.

bbox : tuple

Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).

bbox_area : int

Number of pixels of bounding box.

centroid : array

Centroid coordinate tuple (row, col).

convex_area : int

Number of pixels of convex hull image.

convex_image : (H, J) ndarray

Binary convex hull image which has the same size as bounding box.

coords : (N, 2) ndarray

Coordinate list (row, col) of the region.

eccentricity : float

Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.

equivalent_diameter : float

The diameter of a circle with the same area as the region.

euler_number : int

Euler characteristic of region. Computed as the number of objects (= 1) minus the number of holes (8-connectivity).

extent : float

Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)

filled_area : int

Number of pixels of filled region.

filled_image : (H, J) ndarray

Binary region image with filled holes which has the same size as bounding box.

image : (H, J) ndarray

Sliced binary region image which has the same size as bounding box.

inertia_tensor : (2, 2) ndarray

Inertia tensor of the region for rotation around its center of mass.

inertia_tensor_eigvals : tuple

The two eigenvalues of the inertia tensor in decreasing order.

intensity_image : ndarray

Image inside region bounding box.

label : int

The label in the labeled input image.

local_centroid : array

Centroid coordinate tuple (row, col), relative to region bounding box.

major_axis_length : float

The length of the major axis of the ellipse that has the same normalized second central moments as the region.

max_intensity : float

Value with the greatest intensity in the region.

mean_intensity : float

Mean intensity value in the region.

min_intensity : float

Value with the least intensity in the region.

minor_axis_length : float

The length of the minor axis of the ellipse that has the same normalized second central moments as the region.

moments : (3, 3) ndarray

Spatial moments up to 3rd order:

m_ji = sum{ array(x, y) * x^j * y^i }

where the sum is over the x, y coordinates of the region.

moments_central : (3, 3) ndarray

Central moments (translation invariant) up to 3rd order:

mu_ji = sum{ array(x, y) * (x - x_c)^j * (y - y_c)^i }

where the sum is over the x, y coordinates of the region, and x_c and y_c are the coordinates of the region’s centroid.

moments_hu : tuple

Hu moments (translation, scale and rotation invariant).

moments_normalized : (3, 3) ndarray

Normalized moments (translation and scale invariant) up to 3rd order:

nu_ji = mu_ji / m_00^[(i+j)/2 + 1]

where m_00 is the zeroth spatial moment.

orientation : float

Angle between the X-axis and the major axis of the ellipse that has the same second-moments as the region. Ranging from -pi/2 to pi/2 in counter-clockwise direction.

perimeter : float

Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.

solidity : float

Ratio of pixels in the region to pixels of the convex hull image.

weighted_centroid : array

Centroid coordinate tuple (row, col) weighted with intensity image.

weighted_local_centroid : array

Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.

weighted_moments : (3, 3) ndarray

Spatial moments of intensity image up to 3rd order:

wm_ji = sum{ array(x, y) * x^j * y^i }

where the sum is over the x, y coordinates of the region.

weighted_moments_central : (3, 3) ndarray

Central moments (translation invariant) of intensity image up to 3rd order:

wmu_ji = sum{ array(x, y) * (x - x_c)^j * (y - y_c)^i }

where the sum is over the x, y coordinates of the region, and x_c and y_c are the coordinates of the region’s weighted centroid.

weighted_moments_hu : tuple

Hu moments (translation, scale and rotation invariant) of intensity image.

weighted_moments_normalized : (3, 3) ndarray

Normalized moments (translation and scale invariant) of intensity image up to 3rd order:

wnu_ji = wmu_ji / wm_00^[(i+j)/2 + 1]

where wm_00 is the zeroth spatial moment (intensity-weighted area).

Each region also supports iteration, so that you can do:

for prop in region:
    print(prop, region[prop])
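
A hedged end-to-end sketch, labelling a synthetic binary mask with skimage and measuring its regions:

import numpy as np
from skimage.measure import label
from PenguTrack.Detectors import extended_regionprops

mask = np.zeros((50, 50), dtype=bool)
mask[10:20, 10:25] = True

label_image = label(mask)
regions = extended_regionprops(label_image)
print(regions[0].area, regions[0].centroid)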