# Although the paper says to use cathodic-first pulses, the code only
# reproduces the published data if we use what we now call anodic-first.
# So flip the sign of the stimulus here:
stim = -self.calc_layer_current(in_arr, pt_list, layers)
# R1 convolved the entire stimulus (with both pos + neg parts)
r1 = self.tsample * utils.conv(stim, self.gamma1, mode='full',
method='sparse')[:stim.size]
# It's possible that charge accumulation was done on the anodic phase.
# It might not matter much (the timing is slightly different, but the
# data are not accurate enough to warrant one choice over the other).
# Thus use what makes the most sense: accumulate on the cathodic phase:
ca = self.tsample * np.cumsum(np.maximum(0, -stim))
ca = self.tsample * utils.conv(ca, self.gamma2, mode='full',
method='fft')[:stim.size]
r2 = r1 - self.epsilon * ca
# Then half-rectify and pass through the power-nonlinearity
r3 = np.maximum(0.0, r2) ** self.beta
# Then convolve with slow gamma
r4 = self.tsample * utils.conv(r3, self.gamma3, mode='full',
method='fft')[:stim.size]
return utils.TimeSeries(self.tsample, r4)
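The cascade above can be sketched stand-alone with plain NumPy. Everything below is an assumption for illustration: the gamma kernels are simple exponential decays rather than the model's fitted gamma functions, and `tsample`, `epsilon`, and `beta` are toy values, not the fitted parameters.

```python
import numpy as np

def gamma_kernel(tau, tsample, n_pts=200):
    """Toy single-stage kernel: exponential decay, normalized to unit area."""
    t = np.arange(n_pts) * tsample
    g = np.exp(-t / tau)
    return g / (g.sum() * tsample)

tsample = 0.01            # sampling step in seconds (assumed)
epsilon, beta = 2.0, 0.6  # toy parameter values, not the fitted ones

stim = np.zeros(500)
stim[50:100] = -20.0      # cathodic (negative) phase
stim[100:150] = 20.0      # anodic (positive) phase

gamma1 = gamma_kernel(0.05, tsample)   # fast kernel
gamma2 = gamma_kernel(0.50, tsample)   # charge-accumulation kernel
gamma3 = gamma_kernel(0.15, tsample)   # slow kernel

# Fast response: convolve the full (pos + neg) stimulus
r1 = tsample * np.convolve(stim, gamma1, mode='full')[:stim.size]
# Accumulate charge on the cathodic (negative) part only
ca = tsample * np.cumsum(np.maximum(0, -stim))
ca = tsample * np.convolve(ca, gamma2, mode='full')[:stim.size]
r2 = r1 - epsilon * ca
# Half-rectify, apply the power nonlinearity, then the slow stage
r3 = np.maximum(0.0, r2) ** beta
r4 = tsample * np.convolve(r3, gamma3, mode='full')[:stim.size]
```

Note that every convolution is run with `mode='full'` and then truncated to the input length, so each stage stays aligned with the stimulus samples.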
Charge accumulation is calculated on the effective input current
`ecm`, as opposed to the output of the fast response stage.
Parameters
----------
ecm : array-like
A 2D array specifying the effective current values at a particular
spatial location (pixel); one value per retinal layer, averaged
over all electrodes through that pixel.
Dimensions: <#layers x #time points>
"""
ca = np.zeros_like(ecm)
for i in range(ca.shape[0]):
summed = self.tsample * np.cumsum(np.abs(ecm[i, :]))
conved = self.tsample * utils.conv(summed, self.gamma_ca,
mode='full', method='fft')
ca[i, :] = self.scale_ca * conved[:ecm.shape[-1]]
return ca
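A minimal stand-alone version of this per-layer accumulation, with `np.convolve` in place of `utils.conv` and toy values for `tsample`, `scale_ca`, and the kernel (all assumptions for illustration):

```python
import numpy as np

tsample, scale_ca = 0.01, 42.1                      # toy values
gamma_ca = np.exp(-np.arange(100) * tsample / 0.3)  # toy decay kernel

rng = np.random.default_rng(0)
ecm = rng.standard_normal((3, 400))  # 3 retinal layers x 400 time points
ca = np.zeros_like(ecm)
for i in range(ca.shape[0]):
    # Integrate the absolute current over time, then smooth with the kernel
    summed = tsample * np.cumsum(np.abs(ecm[i, :]))
    conved = tsample * np.convolve(summed, gamma_ca, mode='full')
    ca[i, :] = scale_ca * conved[:ecm.shape[-1]]
```

Because the running integral of `|ecm|` is monotonically nondecreasing and the kernel is nonnegative, the accumulated charge is nonnegative for every layer.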
If True (default), use numba just-in-time compilation.
usefft : bool, optional
If False (default), use sparseconv, else fftconvolve.
Returns
-------
Fast response, b2(r, t) in Nanduri et al. (2012).
Notes
-----
The function utils.sparseconv can be much faster than np.convolve and
signal.fftconvolve if `stim` is sparse and much longer than the
convolution kernel.
The output is not converted to a TimeSeries object for speedup.
"""
conv = utils.conv(stim, gamma, mode='full', method=method,
use_jit=use_jit)
# Cut off the tail of the convolution to make the output signal
# match the dimensions of the input signal.
return self.tsample * conv[:stim.shape[-1]]
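To see why a sparse convolution can win here: when `stim` has only a few nonzero samples, the full convolution is just a sum of kernel copies shifted to those samples. The following is a naive sketch of that idea, not the actual `utils.sparseconv` implementation:

```python
import numpy as np

def sparseconv_naive(stim, kernel):
    """Full convolution as shifted kernel copies at nonzero stim samples."""
    out = np.zeros(stim.size + kernel.size - 1)
    for idx in np.flatnonzero(stim):
        out[idx:idx + kernel.size] += stim[idx] * kernel
    return out

stim = np.zeros(1000)
stim[[10, 500, 900]] = [1.0, -2.0, 0.5]  # only 3 nonzero samples
kernel = np.exp(-np.arange(50) / 10.0)

# Matches the dense convolution, but only loops over nonzero entries
assert np.allclose(sparseconv_naive(stim, kernel),
                   np.convolve(stim, kernel, mode='full'))
```

The work scales with the number of nonzero samples times the kernel length, instead of the full signal length, which is why it pays off for sparse stimuli much longer than the kernel.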
stim : array
Temporal signal to process, stim(r, t) in Nanduri et al. (2012).
Returns
-------
Slow response, b5(r, t) in Nanduri et al. (2012).
Notes
-----
This is by far the most computationally involved part of the perceptual
sensitivity model.
Conversion to TimeSeries is avoided for the sake of speedup.
"""
# No need to zero-pad: fftconvolve already takes care of optimal
# kernel/data size
conv = utils.conv(stim, self.gamma_slow, method='fft', mode='full')
# Cut off the tail of the convolution to make the output signal match
# the dimensions of the input signal.
return self.scale_slow * self.tsample * conv[:stim.shape[-1]]
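The `mode='full'` + truncation pattern used in both response stages keeps the output causally aligned with the input; for example, an impulse at t=0 reproduces the kernel at the start of the truncated output (toy kernel below, chosen only for illustration):

```python
import numpy as np

kernel = np.exp(-np.arange(64) / 8.0)  # toy causal kernel
stim = np.zeros(256)
stim[0] = 1.0                          # impulse at t = 0

# Convolve in 'full' mode, then cut the tail to match the input length
out = np.convolve(stim, kernel, mode='full')[:stim.size]

# The impulse response starts at sample 0: no shift is introduced
assert np.allclose(out[:kernel.size], kernel)
```

This is why the code can simply index `conv[:stim.shape[-1]]` after every convolution without re-centering anything.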