Masking

Functions for working with masks/labels and for processing multidimensional arrays using binary masks and label images.

label_prof_arr

Calculate labeled ROI profiles for a time series.

Parameters:
  • input_label (ndarray) –

    label image

  • input_img_series (ndarray) –

    image time series; frames must be the same size as the label array

  • f0_win (int, default: 3 ) –

    number of points from the profile start used to calculate F0

Returns:
  • prof_df_arr( ndarray[dF_value, t] ) –

    ΔF/F profiles for each label element

  • prof_arr( ndarray[intensity_value, t] ) –

    intensity profiles for each label element

src/domb/utils/masking.py
def label_prof_arr(input_label: np.ndarray, input_img_series: np.ndarray,
                   f0_win:int=3):
    """ Calc labeled ROIs profiles for time series. 

    Parameters
    ----------
    input_label: ndarray [x,y]
        label image
    input_img_series: ndarray [t,x,y]
        image time series; frames must be the same size as the label array
    f0_win: int, optional
        number of points from the profile start used to calculate F0

    Returns
    -------
    prof_df_arr: ndarray [dF_value,t]
        ΔF/F profiles for each label element
    prof_arr: ndarray [intensity_value,t]
        intensity profiles for each label element

    """
    output_dict = {}
    prof_arr = []
    prof_df_arr=[]
    for label_num in np.unique(input_label)[1:]:
        region_mask = input_label == label_num
        prof = np.mean(input_img_series, axis=(1,2), where=region_mask)
        F_0 = np.mean(prof[:f0_win])
        df_prof = (prof-F_0)/F_0
        output_dict.update({label_num:[prof, df_prof]})

        prof_arr.append(prof)
        prof_df_arr.append(df_prof)


    return np.asarray(prof_df_arr), np.asarray(prof_arr)
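
A minimal usage sketch, assuming the module is importable as domb.utils.masking (inferred from the source path above): two square ROIs in a label image and a short synthetic time series in which only the first ROI responds.

import numpy as np
from domb.utils import masking

# label image with two ROIs (background = 0)
label = np.zeros((64, 64), dtype=int)
label[10:20, 10:20] = 1
label[40:50, 40:50] = 2

# synthetic (t, x, y) series: baseline ~100, ROI 1 steps up at frame 5
rng = np.random.default_rng(0)
img_series = rng.normal(loc=100, scale=5, size=(20, 64, 64))
img_series[5:, 10:20, 10:20] += 50

df_profiles, int_profiles = masking.label_prof_arr(label, img_series, f0_win=3)
print(df_profiles.shape)   # (2, 20): one dF/F profile per label element
print(int_profiles.shape)  # (2, 20): raw intensity profiles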

mask_connection

Function to filter two masks by overlap.

Could be used in stand-alone mode as a static method of the WTvsMut class.

Parameters:
  • input_master_mask (ndarray) –

    boolean mask for filtering with a greater number/area of insertions, typically the mask of wild-type NCS insertions

  • input_minor_mask (ndarray) –

    boolean mask with a lower number/area of insertions, typically the mask of mutant NCS insertions

Returns:
  • fin_mask( ndarray[x, y] ) –

    boolean mask that includes only the elements from 'input_master_mask' that overlap with the elements from 'input_minor_mask'

  • fin_label( ndarray[x, y] ) –

    label image for fin_mask

src/domb/utils/masking.py
def mask_connection(input_master_mask:np.ndarray, input_minor_mask:np.ndarray):
    """ Function to filter two masks by overlay.

    Could be used in stand-alone mode as a static method of the WTvsMut class.

    Parameters
    ----------
    input_master_mask: ndarray [x,y]
        boolean mask for filtering with a greater number/area of insertions,
        typically the mask of wild-type NCS insertions
    input_minor_mask: ndarray [x,y]
        boolean mask with a lower number/area of insertions,
        typically the mask of mutant NCS insertions

    Returns
    -------
    fin_mask: ndarray [x,y]
        boolean mask that includes only the elements from `input_master_mask`
        that overlap with the elements from `input_minor_mask`
    fin_label: ndarray [x,y]
        label image for `fin_mask`

    """ 
    master_label, master_num = ndi.label(input_master_mask)

    sums = ndi.sum(input_minor_mask, master_label, np.arange(master_num+1))
    connected = sums > 0
    debris_mask = connected[master_label]

    fin_mask = np.copy(input_master_mask)
    fin_mask[~debris_mask] = 0

    fin_label, fin_num = ndi.label(fin_mask)

    return fin_mask, fin_label
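
A stand-alone usage sketch (import path assumed as above): only the master-mask element that overlaps the minor mask survives the filtering.

import numpy as np
from domb.utils import masking

master = np.zeros((32, 32), dtype=bool)
master[2:8, 2:8] = True      # element A, overlaps the minor mask
master[20:26, 20:26] = True  # element B, no overlap, will be dropped

minor = np.zeros((32, 32), dtype=bool)
minor[5:10, 5:10] = True

fin_mask, fin_label = masking.mask_connection(master, minor)
print(fin_mask[3, 3], fin_mask[22, 22])  # True False
print(fin_label.max())                   # 1: only element A remains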

pb_exp_correction

Image series photobleaching correction by exponential fit.

Correction is computed from the masked area of interest rather than the whole frame, to prevent autofluorescence influence.

Parameters:
  • input_img (ndarray) –

    input image series

  • mask (ndarray) –

    mask of the region of interest; must be the same size as the image frames

  • method (str, default: 'exp' ) –

    method for correction, exponential (exp) or bi-exponential (bi_exp)

Returns:
  • corrected_img( ndarray[t, x, y] ) –

    corrected image series

  • bleach_coefs( ndarray[t] ) –

    array of correction coefficients for each frame

  • r_val( float ) –

    Pearson correlation coefficient (r value) of the exponential fit

src/domb/utils/masking.py
def pb_exp_correction(input_img:np.ndarray, mask:np.ndarray, method:str='exp'):
    """ Image series photobleaching correction by exponential fit.

    Correction is computed from the masked area of interest rather than the whole frame,
    to prevent autofluorescence influence.

    Parameters
    ----------
    input_img: ndarray [t,x,y]
        input image series
    mask: ndarray [x,y]
        mask of the region of interest; must be the same size as the image frames
    method: str
        method for correction, exponential (`exp`) or bi-exponential (`bi_exp`)

    Returns
    -------
    corrected_img: ndarray [t,x,y]
        corrected image series
    bleach_coefs: ndarray [t]
        array of correction coefficients for each frame
    r_val: float
        Pearson correlation coefficient (r value) of the exponential fit

    """
    exp = lambda x,a,b: a * np.exp(-b * x)
    bi_exp = lambda x,a,b,c,d: (a * np.exp(-b * x)) + (c * np.exp(-d * x))

    if method == 'exp':
        func = exp
    elif method == 'bi_exp':
        func = bi_exp
    else:
        raise ValueError('Incorrect method!')

    bleach_profile = np.mean(input_img, axis=(1,2), where=mask)
    x_profile = np.linspace(0, bleach_profile.shape[0], bleach_profile.shape[0])

    popt,_ = optimize.curve_fit(func, x_profile, bleach_profile)
    bleach_fit = np.vectorize(func)(x_profile, *popt)
    bleach_coefs =  bleach_fit / bleach_fit.max()
    bleach_coefs_arr = bleach_coefs.reshape(-1, 1, 1)
    corrected_image = input_img/bleach_coefs_arr

    _,_,r_val,_,_ = stats.linregress(bleach_profile, bleach_fit)

    return corrected_image, bleach_coefs, r_val
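
An illustrative sketch on a synthetic mono-exponentially bleaching series (import path assumed as above; amplitude and bleaching rate are arbitrary).

import numpy as np
from domb.utils import masking

t = np.arange(50)
img = np.ones((50, 32, 32)) * np.exp(-0.03 * t)[:, None, None]  # bleaching series
img += np.random.default_rng(1).normal(0, 0.01, img.shape)      # mild noise

mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True  # area of interest used for the fit

corr_img, coefs, r = masking.pb_exp_correction(img, mask, method='exp')
print(round(float(coefs[0]), 2), round(float(coefs[-1]), 2))  # ~1.0 at the start, <1.0 at the end
print(round(float(r), 3))                                     # quality of the exponential fit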

proc_mask

NB: used by default in most registration types!

Mask for single-neuron images with a bright soma region, using Sauvola local thresholding. The function drops the soma and returns a mask of the processes only.

In the original method, a threshold T is calculated for every pixel in the image using the following formula (m(x,y) is the mean and s(x,y) is the standard deviation in a rectangular window):

T = m(x,y) * (1 + k * ((s(x,y) / r) - 1))
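
For reference, the threshold can be reproduced with local mean and SD filters; a small sketch (window size is arbitrary here, k and r follow the defaults below, and boundary handling may differ slightly from skimage's implementation):

import numpy as np
from scipy import ndimage as ndi
from skimage import filters

img = np.random.default_rng(0).normal(100, 10, size=(128, 128))

w, k, r = 15, 1e-4, 0.5
m = ndi.uniform_filter(img, size=w, mode='mirror')             # local mean m(x,y)
s2 = ndi.uniform_filter(img**2, size=w, mode='mirror') - m**2  # local variance
s = np.sqrt(np.maximum(s2, 0))                                 # local SD s(x,y)
T_manual = m * (1 + k * ((s / r) - 1))

T_skimage = filters.threshold_sauvola(img, window_size=w, k=k, r=r)
print(np.abs(T_manual - T_skimage).max())  # typically small, largest near the image border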

Parameters:
  • input_img (ndarray) –

    input image for mask creation; the maximum intensity projection of the brighter channel (GFP/YFP in our case) is recommended

  • proc_sigma (float, default: 1.0 ) –

    sigma value for gaussian blur of input image

  • win_size (int, default: 801 ) –

    Sauvola local threshold parameter, window size specified as a single odd integer

  • k_val (float, default: 0.0001 ) –

    Sauvola local threshold parameter, value of the positive parameter k

  • r_val (float, default: 0.5 ) –

    Sauvola local threshold parameter, value of r, the dynamic range of standard deviation

  • soma_mask (bool, default: False ) –

    if True, the brightest region of the image will be identified and masked as the soma

  • soma_th (float, default: 0.5 ) –

    soma detection threshold as a fraction of the maximum image intensity

  • soma_ext (int, default: 100 ) –

    soma mask extension size in px

  • ext_fin_mask (bool, default: False ) –

    if True, the final processes mask will be extended by proc_ext pixels

  • proc_ext (int, default: 5 ) –

    neuron processes mask extension value in px

  • select_largest_mask (bool, default: False ) –

    if True, the final mask will contain only the largest detected element

Returns:
  • proc_mask( ndarray, dtype boolean ) –

    neuron processes mask

src/domb/utils/masking.py
def proc_mask(input_img:np.ndarray,
              proc_sigma:float=1.0, win_size:int=801, k_val:float=1e-4, r_val:float=0.5,
              soma_mask:bool=False, soma_th:float=.5, soma_ext:int=100,
              ext_fin_mask:bool=False, proc_ext:int=5,
              select_largest_mask:bool=False):
    """
    __NB: used by default in most registration types!__

    Mask for single-neuron images with a bright soma region,
    using Sauvola local thresholding. The function drops the soma and returns a mask of the processes only.

    In the original method, a threshold T is calculated
    for every pixel in the image using the following formula
    (`m(x,y)` is mean and `s(x,y)` is SD in rectangular window):

    `
    T = m(x,y) * (1 + k * ((s(x,y) / r) - 1))
    `

    Parameters
    ----------
    input_img: ndarray
        input image for mask creation;
        the maximum intensity projection of the brighter channel (GFP/YFP in our case) is recommended
    proc_sigma: float
        sigma value for gaussian blur of input image
    win_size: int
        Sauvola local threshold parameter, window size specified as a single __odd__ integer
    k_val: float
        Sauvola local threshold parameter, value of the positive parameter `k`
    r_val: float  
        Sauvola local threshold parameter, value of `r`, the dynamic range of standard deviation
    soma_mask: boolean, optional
        if True, the brightest region of the image will be identified and masked as the soma
    soma_th: float, optional
        soma detection threshold as a fraction of the maximum image intensity
    soma_ext: int, optional
        soma mask extension size in px
    ext_fin_mask: boolean, optional
        if `True`, the final processes mask will be extended by `proc_ext` pixels
    proc_ext: int
        neuron processes mask extension value in px
    select_largest_mask: bool, optional
        if `True`, the final mask will contain only the largest detected element

    Returns
    -------
    proc_mask: ndarray, dtype boolean
        neuron processes mask

    """
    input_img = filters.gaussian(input_img, sigma=proc_sigma)

    # processes masking    
    proc_th = filters.threshold_sauvola(input_img, window_size=win_size, k=k_val, r=r_val)
    proc_mask = input_img > proc_th

    # mask extention
    if ext_fin_mask:
        proc_dist = ndi.distance_transform_edt(~proc_mask, return_indices=False)
        proc_mask = proc_dist <= proc_ext

    # soma masking
    if soma_mask:
        soma_region = np.copy(input_img)
        soma_region = soma_region > soma_region.max() * soma_th
        soma_region = morphology.opening(soma_region, footprint=morphology.disk(5))
        soma_dist = ndi.distance_transform_edt(~soma_region, return_indices=False)
        soma_region = soma_dist < soma_ext
        proc_mask[soma_region] = 0

    # minor mask elements filtering
    if select_largest_mask:
        def largest_only(raw_mask):
            # get larger mask element
            element_label = measure.label(raw_mask)
            element_area = {element.area : element.label for element in measure.regionprops(element_label)}
            larger_mask = element_label == element_area[max(element_area.keys())]
            return larger_mask
        proc_mask = largest_only(proc_mask)

    return proc_mask
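
A hedged sketch on synthetic data (a bright "soma" blob plus a dimmer "process" stripe; import path assumed as above). Real inputs would be the maximum intensity projection of the brighter channel, as recommended above.

import numpy as np
from domb.utils import masking

rng = np.random.default_rng(2)
img = np.full((512, 512), 10.0)
img[250:262, :] += 50.0         # dim, elongated "process"
img[230:282, 230:282] += 190.0  # bright "soma" blob
img += rng.normal(0, 1, img.shape)

neuron_proc = masking.proc_mask(img, proc_sigma=1.0, win_size=801,
                                soma_mask=True, soma_th=0.5, soma_ext=100)
print(neuron_proc.dtype, neuron_proc.shape)         # bool (512, 512)
print(neuron_proc[255, 10], neuron_proc[255, 256])  # expected: True (process) False (soma region dropped)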

proc_mask_otsu

Mask for single-neuron images with a bright soma region, using simple Otsu thresholding. The function drops the soma and returns a mask of the processes only.

Parameters:
  • input_img (ndarray) –

    input image for mask creation; the maximum intensity projection of the brighter channel (YFP) is recommended

  • soma_mask (bool, default: True ) –

    if True, the brightest region of the image will be identified and masked as the soma

  • soma_th (float, default: 0.5 ) –

    soma detection threshold as a fraction of the maximum image intensity

  • soma_ext (int, default: 20 ) –

    soma mask extension size in px

  • proc_ext (int, default: 5 ) –

    cell processes mask extension in px

  • ext_fin_mask (bool, default: True ) –

    if True, the final processes mask will be extended by proc_ext pixels

  • proc_sigma (float, default: 0.5 ) –

    sigma value for gaussian blur of input image

Returns:
  • proc_mask_fin( ndarray, dtype boolean ) –

    extended cell processes mask

src/domb/utils/masking.py
def proc_mask_otsu(input_img:np.ndarray,
              soma_mask:bool=True, soma_th:float=.5, soma_ext:int=20,
              proc_ext:int=5, ext_fin_mask:bool=True, proc_sigma:float=.5):
    """ Mask for single neuron images with bright soma region with simple Otsu thresholding.
    Fiunc drops soma and returns the mask for processes only.

    Parameters
    ----------
    input_img: ndarray
        input image for mask creation;
        the maximum intensity projection of the brighter channel (YFP) is recommended
    soma_mask: boolean, optional
        if True, the brightest region of the image will be identified and masked as the soma
    soma_th: float, optional
        soma detection threshold as a fraction of the maximum image intensity
    soma_ext: int, optional
        soma mask extension size in px
    proc_ext: int
        cell processes mask extension in px
    ext_fin_mask: bool, optional
        if True, the final processes mask will be extended by `proc_ext` pixels
    proc_sigma: float
        sigma value for gaussian blur of input image

    Returns
    -------
    proc_mask_fin: ndarray, dtype boolean
        extended cell processes mask

    """
    # soma masking
    input_img = filters.gaussian(input_img, sigma=proc_sigma)
    if soma_mask:
        soma_region = np.copy(input_img)
        soma_region = soma_region > soma_region.max() * soma_th
        soma_region = morphology.opening(soma_region, footprint=morphology.disk(5))
        # soma_region = morphology.dilation(soma_region, footprint=morphology.disk(10))
        soma_dist = ndimage.distance_transform_edt(~soma_region, return_indices=False)
        soma_mask = soma_dist < soma_ext
        input_img = ma.masked_where(soma_mask, input_img)

        th = filters.threshold_otsu(input_img.compressed())
    else:
        th = filters.threshold_otsu(input_img)

    # processes masking
    proc_mask = input_img > th
    proc_mask = morphology.closing(proc_mask, footprint=morphology.disk(5))
    if ext_fin_mask:
        proc_dist = ndimage.distance_transform_edt(~proc_mask, return_indices=False)
        proc_mask_fin = proc_dist <= proc_ext
    else:
        proc_mask_fin = proc_mask
    proc_mask_fin[soma_mask] = 0

    return proc_mask_fin
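
The Otsu-based variant is called the same way; a sketch with the same kind of synthetic soma-plus-process image (import path assumed as above).

import numpy as np
from domb.utils import masking

rng = np.random.default_rng(3)
img = np.full((256, 256), 10.0)
img[120:132, :] += 40.0         # dim "process" stripe
img[100:150, 100:150] += 150.0  # bright "soma"
img += rng.normal(0, 1, img.shape)

proc = masking.proc_mask_otsu(img, soma_mask=True, soma_th=0.5, soma_ext=20,
                              proc_ext=5, ext_fin_mask=True, proc_sigma=0.5)
print(proc[126, 5], proc[126, 128])  # expected: True (process) False (soma region dropped)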

series_derivate

Calculation of intensity derivative profile for image series.

Parameters:
  • input_img (ndarray) –

    input image series

  • mask (ndarray) –

    cell mask

  • left_win (int, default: 1 ) –

    number of frames for left image

  • space (int, default: 0 ) –

    number of skipped frames between left and right images

  • right_win (int, default: 1 ) –

    number of frames for right image

Returns:
  • der_arr_win( ndarray[i] ) –

    1D array of derivative intensity calculated for framed images (left-skip-right), 0-1 normalized

  • der_arr_point( ndarray[i] ) –

    1D array of derivative intensity calculated frame by frame and smoothed with moving average (n=3), 0-1 normalized

src/domb/utils/masking.py
def series_derivate(input_img:np.ndarray, mask:np.ndarray,
                    left_win:int=1, space:int=0, right_win:int=1):
    """ Calculation of intensity derivative profile for image series.

    Parameters
    ----------
    input_img: ndarray [t,x,y]
        input image series
    mask: ndarray, dtype boolean
        cell mask
    left_win: int
        number of frames for left image
    space: int
        number of skipped frames between left and right images
    right_win: int
        number of frames for right image

    Returns
    -------
    der_arr_win: ndarray [i]
        1D array of derivative intensity calculated
        for framed images (left-skip-right), 0-1 normalized
    der_arr_point: ndarray [i]
        1D array of derivative intensity calculated frame by frame
        and smoothed with moving average (n=3), 0-1 normalized

    """
    der_img = []
    der_arr_win = []
    der_arr_point = []
    for i in range(input_img.shape[0] - (left_win+space+right_win)):
        der_frame = np.mean(input_img[i+left_win+space:i+left_win+space+right_win+1], axis=0) - np.mean(input_img[i:i+left_win+1], axis=0) 
        der_val = np.sum(np.abs(der_frame), where=mask)

        der_frame_point = input_img[i+1] - input_img[i]
        der_val_point = np.sum(np.abs(der_frame_point), where=mask)

        der_img.append(der_frame)
        der_arr_win.append(der_val)
        der_arr_point.append(der_val_point)

    der_img = np.asarray(der_img)

    der_arr_win = np.asarray(der_arr_win)
    der_arr_win = (der_arr_win - np.min(der_arr_win)) / (np.max(der_arr_win)-np.min(der_arr_win))
    der_arr_win = np.pad(der_arr_win, (left_win+space+right_win), constant_values=0)[:input_img.shape[0]]

    def moving_average(a, n=3):
        ret = np.cumsum(a, dtype=float)
        ret[n:] = ret[n:] - ret[:-n]
        return ret[n - 1:] / n

    der_arr_point = np.asarray(der_arr_point)
    der_arr_point = (der_arr_point - np.min(der_arr_point)) / (np.max(der_arr_point)-np.min(der_arr_point))
    der_arr_point = moving_average(der_arr_point)
    der_arr_point = np.pad(der_arr_point, 5, constant_values=0)[:input_img.shape[0]]

    return der_arr_win, der_arr_point
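
An assumed usage sketch: the windowed derivative profile of a series with a step change mid-way peaks near that frame (import path assumed as above).

import numpy as np
from domb.utils import masking

img = np.full((50, 64, 64), 100.0)
img[25:] += 40.0                                         # abrupt change at frame 25
img += np.random.default_rng(4).normal(0, 1, img.shape)

cell_mask = np.ones((64, 64), dtype=bool)

der_win, der_point = masking.series_derivate(img, cell_mask,
                                             left_win=2, space=1, right_win=2)
print(der_win.shape, der_point.shape)  # both (50,), 0-1 normalized and zero-padded
print(int(np.argmax(der_win)))         # peak lands close to the step frame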

trans_prof_arr

Calculate the ratio of ROI intensity to total mask intensity for a time series.

Parameters:
  • input_total_mask (ndarray) –

    cell mask

  • input_label (ndarray) –

    ROIs label array

  • input_img_series (ndarray) –

    Series of images as a 3D array with dim. order (time, x, y); each frame should be the same size as the cell mask array.

Returns:
  • trans_rois_arr( ndarray ) –

    2D array of translocation profiles with dim. order (ROI, t)

  • trans_total_arr( ndarray ) –

    array with the ratio "all ROIs int / cell mask int" for each frame

src/domb/utils/masking.py
def trans_prof_arr(input_total_mask: np.ndarray,
                   input_label: np.ndarray,
                   input_img_series: np.ndarray):
    """ Calc ratio mask int/ROIs int for time series 

    Parameters
    ----------
    input_total_mask: ndarray, dtype boolean
        cell mask
    input_label: ndarray
        ROIs label array
    input_img_series: ndarray
        Series of images as a 3D array with dim. order (time,x,y);
        each frame should be the same size as the cell mask array.

    Returns
    -------
    trans_rois_arr: ndarray
        2D array of translocation profiles with dim. order (ROI, t)
    trans_total_arr: ndarray
        array with the ratio "all ROIs int/cell mask int" for each frame

    """
    trans_rois_arr = []
    for label_num in range(1, np.max(input_label)+1):
        region_mask = input_label == label_num
        prof = np.asarray([np.sum(img, where=region_mask) / np.sum(img, where=input_total_mask) \
                           for img in input_img_series])
        trans_rois_arr.append(prof)

    total_rois = input_label != 0  # combined mask of all labeled ROIs
    trans_total_arr = [np.sum(img, where=total_rois)/np.sum(img, where=input_total_mask) \
                       for img in input_img_series]

    return np.asarray(trans_rois_arr), np.asarray(trans_total_arr)
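
An illustrative call (import path assumed as above): a boolean cell mask, an integer ROI label image, and a matching (t, x, y) series.

import numpy as np
from domb.utils import masking

cell_mask = np.zeros((64, 64), dtype=bool)
cell_mask[8:56, 8:56] = True

rois = np.zeros((64, 64), dtype=int)
rois[10:18, 10:18] = 1
rois[30:38, 30:38] = 2

series = np.random.default_rng(5).poisson(100, size=(15, 64, 64)).astype(float)

rois_ratio, total_ratio = masking.trans_prof_arr(cell_mask, rois, series)
print(rois_ratio.shape)   # (2, 15): per-ROI fraction of the cell mask intensity
print(total_ratio.shape)  # (15,): all ROIs combined relative to the cell mask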