Pseudo Online evaluation and dataset conversion #641
base: develop
Conversation
Signed-off-by: Bru <a.bruno@aluno.ufabc.edu.br>
@carraraig Is it the same as in https://arxiv.org/abs/2308.11656?
It is the same, @gcattan, but we never merged it because it went against the pipeline logic we had defined for how events should be specified.
@copilot, can you study how to do exactly the same thing by only manipulating the events already present within the dataset?
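One way to stay purely at the event level is to replicate each trial's marker at regular offsets, so that standard fixed-length epoching then yields overlapping pseudo-online windows. A minimal sketch with a NumPy array in the MNE events format (sample index, previous value, event id) — the `expand_events` helper, the step size, and the window count are illustrative assumptions, not code from this PR:

```python
import numpy as np

def expand_events(events, sfreq, step_s, n_windows):
    """Replicate each event at n_windows onsets spaced step_s seconds apart.

    events is an MNE-style (n_events, 3) int array of
    (sample index, previous value, event id). Epoching the expanded
    events with a fixed length then produces overlapping pseudo-online
    windows without touching the evaluation code.
    """
    step = int(round(step_s * sfreq))
    expanded = []
    for sample, prev, eid in events:
        for k in range(n_windows):
            expanded.append((sample + k * step, prev, eid))
    return np.array(sorted(expanded), dtype=int)

# Two trials at samples 1000 and 3000, event id 1, sampled at 250 Hz.
events = np.array([[1000, 0, 1], [3000, 0, 1]])
windows = expand_events(events, sfreq=250, step_s=0.5, n_windows=3)
print(windows.shape)  # (6, 3): 3 shifted copies per original event
```

Because the expansion happens before epoching, the downstream paradigm and evaluation objects would see an ordinary event array and need no changes.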
Maybe we could play with tmin and tmax, and run the evaluation with different time windows.
I feel that we just need to play with tmin and tmax, and use a different split base within the MOABB splitter.
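The tmin/tmax idea can be sketched independently of MOABB: slice each epoch into successive (tmin, tmax) windows and score the same pipeline on every window. The random data, array shapes, and the LDA choice below are illustrative assumptions, not the pipeline this PR ships:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sfreq = 250
X = rng.standard_normal((60, 8, 4 * sfreq))  # 60 epochs, 8 channels, 4 s
y = rng.integers(0, 2, 60)                   # binary labels

# Evaluate the same pipeline on successive 1 s windows (tmin, tmax pairs).
for tmin in (0.0, 1.0, 2.0, 3.0):
    tmax = tmin + 1.0
    lo, hi = int(tmin * sfreq), int(tmax * sfreq)
    Xw = X[:, :, lo:hi].reshape(len(X), -1)  # flatten the window for LDA
    score = cross_val_score(LinearDiscriminantAnalysis(), Xw, y, cv=5).mean()
    print(f"window [{tmin:.1f}, {tmax:.1f}) s: accuracy {score:.2f}")
```

In MOABB itself the equivalent knob would be the paradigm's tmin/tmax arguments, with the splitter controlling how train and test windows are separated; this sketch only shows the windowed-scoring loop.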
Resolved conflicts in:
- docs/source/whats_new.rst: combined Pseudo Online and develop entries
- moabb/datasets/bnci.py: merged imports (Annotations + find_events)
- moabb/evaluations/utils.py: kept both _normalized_mcc and _ensure_fitted
- moabb/paradigms/base.py: added both overlap and scorer parameters
- moabb/paradigms/motor_imagery.py: merged scoring logic (scorer precedence, then overlap)
Currently implemented for BNCI2014001, BNCI2014002, BNCI2014004, and BNCI2015001.