Add "3D Auto-correlograms" (3D ACGs) postprocessing extension #3860
Conversation
from spikeinterface.core.waveforms_extractor_backwards_compatibility import MockWaveformExtractor
from joblib import Parallel, delayed
@fededagos joblib is not a requirement. Instead, we normally use the built-in ProcessPoolExecutor. Is it ok if I push to this PR directly to refactor the parallelization?
Yes that is okay! Thanks
Done :) You can check the changes if you're interested!
A couple of initial comments. One other approach would be to numba this instead for a speed gain, since we already use numba for the standard correlograms. I saw that you pushed changes, Alessio, while I was typing this, so if you changed something I marked feel free to ignore.
n_jobs : int, default: -1
    Number of parallel jobs to spawn to compute the 3D-ACGs on different units.
If we want the job kwargs, we need to do the `{}` trick and pull in the shared kwarg docs. We can help with that.
# pre-compute time bins
winsize_bins = 2 * int(0.5 * window_ms * 1.0 / bin_ms) + 1
bin_times_ms = np.linspace(-window_ms / 2, window_ms / 2, num=winsize_bins)
Should we automatically prevent the user from specifying an inappropriate bin size? If I give a bin size of, say, 0.0003 ms, most recording equipment doesn't have that resolution.
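One way to enforce this would be a small validation helper, sketched below. The function name `check_bin_ms` and the idea of comparing against the sample period are assumptions, not part of the PR.

```python
# Hypothetical sketch: reject a bin size finer than the recording's
# sampling resolution. sampling_frequency is in Hz, bin_ms in milliseconds.
def check_bin_ms(bin_ms, sampling_frequency):
    min_bin_ms = 1000.0 / sampling_frequency  # one sample period, in ms
    if bin_ms < min_bin_ms:
        raise ValueError(
            f"bin_ms={bin_ms} ms is below the sampling resolution "
            f"({min_bin_ms:.4f} ms at {sampling_frequency} Hz)"
        )
```

For a 30 kHz recording this would reject anything below ~0.033 ms, including the 0.0003 ms example above.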
    acgs_3d[unit_index, :, :] = acg_3d
    firing_quantiles[unit_index, :] = firing_quantile
else:
    # Process units in serial
Suggested change: `# Process units in serial` → `# Process units serially`
:)
assert winsize_bins >= 1
assert winsize_bins % 2 == 1
We should have an informative error message for this case, for ease of debugging in the future.
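The bare asserts could carry messages along these lines (a sketch, using example values `window_ms=50`, `bin_ms=1.0` rather than the PR's actual arguments):

```python
# Sketch: the same invariants with explanatory messages for easier debugging.
window_ms, bin_ms = 50.0, 1.0  # example values, not the PR's defaults
winsize_bins = 2 * int(0.5 * window_ms * 1.0 / bin_ms) + 1

assert winsize_bins >= 1, (
    f"winsize_bins must be >= 1, got {winsize_bins}; check window_ms and bin_ms"
)
assert winsize_bins % 2 == 1, (
    f"winsize_bins must be odd so the window is centered, got {winsize_bins}"
)
```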
try:
    import npyx

    HAVE_NPYX = True
except ModuleNotFoundError:
    HAVE_NPYX = False
importlib.util.find_spec. But I haven't fixed testing yet, so I can go back and fix that later to try to gain a tiny speed boost in testing.
Hi. Any idea why Maxime Beau chose the name 3D-ACG? I also have the intuition that a pure Python loop will be a bit slow and that numba would help a lot here. Maybe we could also implement in the same PR a very simple matplotlib widget, no? Like plot_3dacg. I will make some comments directly in the code.
    smoothing_factor: int = 250,
    **job_kwargs,
):
    """
If this is internal, I would not put a long docstring here. Or maybe I would rename this compute_acgs_3d_sorting and expose it in the API.
kernel_size = int(np.ceil(smoothing_factor / bin_size))
half_kernel_size = kernel_size // 2
kernel = np.ones(kernel_size) / kernel_size
smoothed_firing_rate = np.convolve(firing_rate, kernel, mode="same")
Maybe scipy.signal.oaconvolve could be faster here.
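For a boxcar kernel the two give numerically equivalent results, so swapping them in would be a drop-in change. A quick sketch (array sizes are illustrative, not the PR's):

```python
# Sketch: scipy.signal.oaconvolve (overlap-add) as a drop-in replacement
# for np.convolve; it can be faster for long signals and mid-sized kernels.
import numpy as np
from scipy.signal import oaconvolve

firing_rate = np.random.default_rng(0).random(10_000)  # illustrative signal
kernel_size = 250
kernel = np.ones(kernel_size) / kernel_size

smoothed_np = np.convolve(firing_rate, kernel, mode="same")
smoothed_oa = oaconvolve(firing_rate, kernel, mode="same")
# both produce the same smoothed trace, up to floating-point error
assert np.allclose(smoothed_np, smoothed_oa)
```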
if isinstance(smoothing_factor, (int, float)) and smoothing_factor > 0:
    kernel_size = int(np.ceil(smoothing_factor / bin_size))
    half_kernel_size = kernel_size // 2
    kernel = np.ones(kernel_size) / kernel_size
Why not precompute the kernel before the unit_id loop?
if isinstance(smoothing_factor, (int, float)) and smoothing_factor > 0:
    kernel_size = int(np.ceil(smoothing_factor / bin_size))
    half_kernel_size = kernel_size // 2
    kernel = np.ones(kernel_size) / kernel_size
Maybe we could also have a Gaussian kernel, no?
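A Gaussian variant could be sketched as below. The names `smoothing_factor` and `bin_size` follow the PR, but the conversion from smoothing factor to kernel width (sigma) is an assumption made for illustration.

```python
# Hypothetical sketch: a normalized Gaussian smoothing kernel as an
# alternative to the boxcar. The sigma-in-bins conversion is assumed.
import numpy as np


def gaussian_kernel(smoothing_factor, bin_size):
    sigma_bins = smoothing_factor / bin_size / 2.0  # assumed conversion
    half_width = int(np.ceil(3 * sigma_bins))       # truncate at 3 sigma
    x = np.arange(-half_width, half_width + 1)
    kernel = np.exp(-0.5 * (x / sigma_bins) ** 2)
    return kernel / kernel.sum()  # normalize so smoothing preserves the mean
```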
# Correct manually for possible artefacts at the edges
for i in range(kernel_size):
    start = max(0, i - half_kernel_size)
    stop = min(len(firing_rate), i + half_kernel_size)
    smoothed_firing_rate[i] = np.mean(firing_rate[start:stop])
for i in range(len(firing_rate) - kernel_size, len(firing_rate)):
    start = max(0, i - half_kernel_size)
    stop = min(len(firing_rate), i + half_kernel_size)
    smoothed_firing_rate[i] = np.mean(firing_rate[start:stop])
OK with this, but it prevents using other kernels for smoothing. Is this border handling useful?
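On whether the border handling is useful: with `mode="same"`, `np.convolve` zero-pads, so the first and last bins of the smoothed rate are biased low, which the manual loop corrects. A minimal illustration (array sizes are illustrative only):

```python
# Sketch illustrating the edge artefact the correction loop addresses:
# zero-padding in mode="same" pulls the boundary bins toward zero.
import numpy as np

firing_rate = np.ones(100)          # constant rate, so smoothing should be 1
kernel = np.ones(10) / 10           # boxcar kernel
smoothed = np.convolve(firing_rate, kernel, mode="same")

# interior bins are exact; edge bins are biased low by the zero padding
assert np.isclose(smoothed[50], 1.0)
assert smoothed[0] < 1.0
```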
Hey there, thanks for the input, Samuel! David Herzfeld called it '3D-ACG' because regular ACGs are 2-dimensional (time-shift, correlation/rate), and these are simply stratified by instantaneous firing rate. Thus they're graphically plotted as heatmaps, which intuitively relate to 3D arrays! Because that's what we called them in Beau et al. 2025, and also in David's papers, I'm reluctant to rename them :) Super happy to see this merged into SpikeInterface! By the way, @alejoe91, we should get together with Julie Fabre and chat about quality metrics / bombcell, but also about a potential Phy3 someday soon. Cyrille Rossant told me that you were interested in writing it up; we had something similar in mind, so we should merge forces! Maxime
Thanks, Maxime! I'm reluctant to rename them as well, as they are now '3D-ACGs' in multiple papers. I will note that while they are a creation of myself and Nate Hall, they were officially named by Steve Lisberger. I initially called them firing rate versus correlogram plots (because they trivially extend to CCGs as well as ACGs), but 3D-ACG is a lot fewer words...
Of course! I'll ping you on slack :)
Following conversations at Cosyne 2025 with @alejoe91, here is a first potential implementation of the "3D autocorrelograms" as an analyzer postprocessing extension.
First introduced in Beau et al. (2025), 3D ACGs provide a rich representation of a neuron's firing rate that accounts for different firing modes.
They were also recently used by Yu et al. (2025), again as a powerful representation of firing rates that can be used, e.g., by representation-learning methods, the same way that Beau et al. did.
The implementation is adapted directly from NeuroPyxels (mainly maintained by @m-beau), which contains the original Python version.
The original conceptualization is from @herzfeldd, whom I am also tagging here for visibility.