
Commit 6d698e6: merge with origin

2 parents: 38d064f + f4dd96e


48 files changed (+362650, −364464 lines)

**.github/workflows/pre-commit.yaml** (new file, +14 lines)

```yaml
name: pre-commit-codestyle
concurrency:
  group: ${{ github.workflow }}-${{ github.event.number }}-${{ github.event.ref }}
  cancel-in-progress: true

on: [pull_request]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - uses: pre-commit/[email protected]
```
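The `concurrency` block above keys runs by workflow name, PR number, and ref, and `cancel-in-progress: true` cancels any run already in flight for the same key. GitHub implements this itself; as a rough sketch of the dedup behavior (the function and run tuples here are illustrative, not part of any GitHub API):

```python
def surviving_runs(runs):
    """runs: list of (run_id, group_key) in start order.
    Returns sorted ids of runs left standing: a newer run in the
    same concurrency group cancels the earlier in-progress one."""
    latest = {}  # group_key -> most recent run_id
    for run_id, group in runs:
        latest[group] = run_id
    return sorted(latest.values())


# Two pushes to PR #12 on the same workflow collide; PR #13 is untouched.
runs = [
    (1, "pre-commit-codestyle-12-refs/pull/12/merge"),
    (2, "pre-commit-codestyle-13-refs/pull/13/merge"),
    (3, "pre-commit-codestyle-12-refs/pull/12/merge"),  # cancels run 1
]
print(surviving_runs(runs))  # [2, 3]
```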

**.github/workflows/tests.yml** (new file, +45 lines)

```yaml
name: tests
concurrency:
  group: ${{ github.workflow }}-${{ github.event.number }}-${{ github.event.ref }}
  cancel-in-progress: true
on:
  push:
    branches:
      - "master"
  pull_request:
    branches:
      - '*' # all branches, including forks

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ "ubuntu-latest", "macos-latest", "windows-latest" ]
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      # Install EEGDash
      - name: Checking Out Repository
        uses: actions/checkout@v4
      # Cache MNE data.
      # The cache key here is fixed except for the OS, so if you download
      # a new MNE dataset in the code, manually increment the key below.
      - name: Create/Restore MNE Data Cache
        id: cache-mne_data
        uses: actions/cache@v3
        with:
          path: ~/mne_data
          key: ${{ runner.os }}-v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Show Python Version
        run: python --version
      # Update pip
      - name: Update pip
        run: python -m pip install --upgrade pip
      - name: Install EEGDash from Current Checkout
        run: pip install -e .
      # Show EEGDash version
      - run: python -c "import eegdash; print(eegdash.__version__)"
```
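The `strategy.matrix` above fans out into one job per OS/Python combination, and `fail-fast: false` lets the remaining jobs keep running when one fails. GitHub computes this expansion itself; the combination count can be sketched as a Cartesian product:

```python
from itertools import product

os_list = ["ubuntu-latest", "macos-latest", "windows-latest"]
python_versions = ["3.9", "3.10", "3.11"]

# Cartesian product: 3 OSes x 3 Python versions = 9 independent jobs
jobs = [{"os": o, "python-version": p} for o, p in product(os_list, python_versions)]
print(len(jobs))  # 9
```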

**.gitignore** (+1 −1)

```diff
@@ -1,5 +1,5 @@
 old
-src/EEGDash.egg-info
+*.egg-info
 *.npy
 */*.npy
 */*/*.npy
```

**.pre-commit-config.yaml** (new file, +63 lines)

```yaml
default_language_version:
  python: python3
ci:
  autofix_commit_msg: '[pre-commit.ci] auto fixes from pre-commit.com hooks

    '
  autofix_prs: true
  autoupdate_branch: master
  autoupdate_commit_msg: '[pre-commit.ci] pre-commit autoupdate'
  autoupdate_schedule: quarterly
  skip: []
  submodules: false
repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.11.7
    hooks:
      - id: ruff
        name: ruff lint docs & examples
        args:
          - --fix
          - --select=E,W,F,I,D
          - --ignore=E402,E501,F401,D103,D400,D100,D101,D102,D105,D107,D415,D417,D205
        files: ^(docs|examples)/
      - id: ruff
        name: ruff lint eegdash preview
        args:
          - --fix
          - --preview
          - --select=NPY201
          - --ignore=D100,D101,D102,D105,D107,D415,D417,D205
        files: ^eegdash/
      - id: ruff
        name: ruff lint docs & examples
        args:
          - --fix
          - --select=D
          - --ignore=D103,D400,E402,D100,D101,D102,D105,D107,D415,D417,D205
        files: ^(docs|examples)/
      - id: ruff-format
        name: ruff format code
        files: ^(eegdash|docs|examples)/
  - repo: https://github.com/codespell-project/codespell
    rev: v2.4.1
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=carin,splitted,meaned,wil,whats,additionals,alle,alot,bund,currenty,datas,farenheit,falsy,fo,haa,hass,iif,incomfort,ines,ist,nam,nd,pres,pullrequests,resset,rime,ser,serie,te,technik,ue,unsecure,withing,zar,mane,THIRDPARTY
          - --skip="./.*,*.csv,*.json,*.ambr,*.toml"
          - --quiet-level=2
        exclude_types:
          - csv
          - json
        exclude: ^tests/|generated/^.github
  - repo: https://github.com/asottile/blacken-docs
    rev: 1.19.1
    hooks:
      - id: blacken-docs
        exclude: ^.github|CONTRIBUTING.md
  - repo: https://github.com/PyCQA/isort
    rev: 6.0.1
    hooks:
      - id: isort
        exclude: ^\.gitignore
```
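Each hook's `files` value is a Python regex matched against the path relative to the repo root, so `^(docs|examples)/` scopes a hook to those trees while `^eegdash/` scopes another to the package. A quick check of how the anchoring behaves (the sample paths are made up):

```python
import re

# Same patterns as the ruff hooks above
docs_hook = re.compile(r"^(docs|examples)/")
pkg_hook = re.compile(r"^eegdash/")

paths = ["docs/conf.py", "examples/demo.py", "eegdash/main.py", "tests/test_x.py"]
print([p for p in paths if docs_hook.search(p)])  # docs and examples files only
print([p for p in paths if pkg_hook.search(p)])   # package files only
```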

**DevNotes.md** (+1 −1)

```diff
@@ -5,5 +5,5 @@ pip install -r requirements.txt
 ## signalstore mongodb
 - Check args functions to double check input to db query
 - Create_index to the collection once its created to speed up querying
-- `find` has deserialization to convert timestamp to correct milisecond format and json_schema from bytes to dict
+- `find` has deserialization to convert timestamp to correct millisecond format and json_schema from bytes to dict
 - `add` has serialization before insert into db
```
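The note about `find` converting timestamps to a millisecond format can be illustrated with a small round-trip; this is a generic sketch, not signalstore's actual deserialization code:

```python
from datetime import datetime, timezone


def to_millis(dt):
    """Convert an aware datetime to integer Unix milliseconds."""
    return int(dt.timestamp() * 1000)


def from_millis(ms):
    """Inverse: Unix milliseconds back to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)


dt = datetime(2025, 1, 2, 3, 4, 5, 678000, tzinfo=timezone.utc)
ms = to_millis(dt)
print(ms)                     # 1735787045678
print(from_millis(ms) == dt)  # True
```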

**README.md** (+8 −2)

````diff
@@ -39,7 +39,10 @@ To use the data from a single subject, enter:

 ```python
 from eegdash import EEGDashDataset
-ds_NDARDB033FW5 = EEGDashDataset({'dataset': 'ds005514', 'task': 'RestingState', 'subject': 'NDARDB033FW5'})
+
+ds_NDARDB033FW5 = EEGDashDataset(
+    {"dataset": "ds005514", "task": "RestingState", "subject": "NDARDB033FW5"}
+)
 ```
````

This will search and download the metadata for the task **RestingState** for subject **NDARDB033FW5** in BIDS dataset **ds005514**. The actual data is not downloaded at this stage; following standard practice, data is only downloaded once it is processed. The **ds_NDARDB033FW5** object is a fully functional Braindecode dataset, which is itself a PyTorch dataset. This [tutorial](https://github.com/sccn/EEGDash/blob/develop/notebooks/tutorial_eoec.ipynb) shows how to preprocess the EEG data, extract the portions containing eyes-open and eyes-closed segments, and then perform eyes-open vs. eyes-closed classification with a (shallow) deep-learning model.

````diff
@@ -48,7 +51,10 @@ To use the data from multiple subjects, enter:

 ```python
 from eegdash import EEGDashDataset
-ds_ds005505rest = EEGDashDataset({'dataset': 'ds005505', 'task': 'RestingState'}, target_name='sex')
+
+ds_ds005505rest = EEGDashDataset(
+    {"dataset": "ds005505", "task": "RestingState"}, target_name="sex"
+)
 ```
````

This will search and download the metadata for the task 'RestingState' for all subjects in BIDS dataset 'ds005505' (a total of 136). As above, the actual data is not downloaded at this stage, so this command is quick to execute. The target class for each subject is assigned via the `target_name` parameter, which means the object is ready to be fed directly to a deep-learning model, although the [tutorial script](https://github.com/sccn/EEGDash/blob/develop/notebooks/tutorial_sex_classification.ipynb) performs minimal processing on it before training. Because 14 gigabytes of data are downloaded, this tutorial takes about 10 minutes to execute.
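The query dicts passed to `EEGDashDataset` in both snippets act as exact-match filters on metadata fields. A toy version of that filtering over a list of record dicts (the records and helper function here are hypothetical, not EEGDash internals):

```python
def match_records(records, query):
    """Keep records whose fields equal every key/value pair in the query."""
    return [r for r in records if all(r.get(k) == v for k, v in query.items())]


records = [
    {"dataset": "ds005505", "task": "RestingState", "subject": "S1"},
    {"dataset": "ds005505", "task": "Oddball", "subject": "S2"},
    {"dataset": "ds005514", "task": "RestingState", "subject": "NDARDB033FW5"},
]
print(match_records(records, {"dataset": "ds005505", "task": "RestingState"}))
# [{'dataset': 'ds005505', 'task': 'RestingState', 'subject': 'S1'}]
```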
File renamed without changes.

**docs/conf.py** (+10 −10)

```diff
@@ -6,26 +6,26 @@
 # -- Project information -----------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

-project = 'EEGDash'
-copyright = '2025, Arnaud Delorme, Dung Truong'
-author = 'Arnaud Delorme, Dung Truong'
+project = "EEGDash"
+copyright = "2025, Arnaud Delorme, Dung Truong"
+author = "Arnaud Delorme, Dung Truong"

 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

-extensions = ['sphinx.ext.autodoc']
-
-templates_path = ['_templates']
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+extensions = ["sphinx.ext.autodoc"]

+templates_path = ["_templates"]
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]


 # -- Options for HTML output -------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

-html_theme = 'alabaster'
-html_static_path = ['_static']
+html_theme = "alabaster"
+html_static_path = ["_static"]

 import os
 import sys
-sys.path.insert(0, os.path.abspath('..'))
+
+sys.path.insert(0, os.path.abspath(".."))
```

**docs/convert_xls_2_martkdown.py** (+12 −7)

```diff
@@ -1,29 +1,34 @@
 #!/usr/bin/env python3
 import sys
+
 import pandas as pd

+
 def excel_to_markdown(filename, sheet_name=0):
     # Read the specified sheet from the Excel file
     df = pd.read_excel(filename, sheet_name=sheet_name)

     # Convert dataset IDs into Markdown links
     # Format: [dataset_id](https://nemar.org/dataexplorer/detail?dataset_id=dataset_id)
-    df['DatasetID'] = df['DatasetID'].astype(str).apply(
-        lambda x: f"[{x}](https://nemar.org/dataexplorer/detail?dataset_id={x})"
+    df["DatasetID"] = (
+        df["DatasetID"]
+        .astype(str)
+        .apply(lambda x: f"[{x}](https://nemar.org/dataexplorer/detail?dataset_id={x})")
     )

     # Replace "Schizophrenia/Psychosis" with "Psychosis" in the entire DataFrame
     df = df.replace("Schizophrenia/Psychosis", "Psychosis")

     # Convert the DataFrame to a Markdown table (excluding the index)
     markdown = df.to_markdown(index=False)
     return markdown

-if __name__ == '__main__':
+
+if __name__ == "__main__":
     if len(sys.argv) < 2:
         print("Usage: python script.py <excel_filename> [sheet_name]")
         sys.exit(1)

     excel_filename = sys.argv[1]
     sheet = sys.argv[2] if len(sys.argv) > 2 else 0
```

**eegdash.egg-info/PKG-INFO** (−139 lines): file deleted.

**eegdash.egg-info/SOURCES.txt** (−14 lines): file deleted.
