Understanding Biases in BCI Experiments: Balancing Act or Freedom to Reflect?

Published on November 22, 2022

Brain-Computer Interfaces (BCIs) bring humans and computers together, like synchronized performers on a grand stage. BCIs use different means of communication, such as voice, gestures, or reading brain signals. But there’s a catch: the BCI algorithms that classify the signals into categories can be biased by unbalanced experimental conditions. Researchers have traditionally enforced balance, but doing so decreases dataset diversity. So, do we really need to balance the scales?

To answer this question, scientists conducted experiments using electroencephalogram (EEG) data from visual stimuli trials. They dug deep into how bias affects decision-making, which features impact classification the most, which parts of the brain signal are affected, and how likely neural categorization is in the first place. By modeling and quantifying the effects of covariates, they identified the regions of the EEG that allow optimal classification while minimizing bias. They discovered that the stimulus category mostly affects later brain responses, while covariates have a biasing effect on earlier responses. They also found that, with a properly selected region of interest, the classification remains reliable despite covariate effects.

In conclusion, understanding biases in BCI experiments is like fine-tuning a dance routine for perfect synchronization between humans and computers. The research provides valuable insights into how to strike a balance and isolate category-dependent brain responses for future studies of neural processes.

Brain-Computer Interfaces (BCIs) consist of an interaction between humans and computers through a specific means of communication, such as voice, gestures, or even brain signals, which are usually recorded by an electroencephalogram (EEG). To ensure an optimal interaction, the BCI algorithm typically involves classifying the input signals into predefined task-specific categories. However, a recurrent problem is that the classifier can easily be biased by uncontrolled experimental conditions, namely covariates, that are unbalanced across the categories. This issue led to the current solution of forcing the balance of these covariates across the different categories, which is time-consuming and drastically decreases the dataset diversity. The purpose of this research is to evaluate the need for this forced balance in BCI experiments involving EEG data. A typical design of neural BCIs involves repeated experimental trials using visual stimuli to trigger the so-called Event-Related Potential (ERP). The classifier is expected to learn spatio-temporal patterns specific to the categories rather than patterns related to uncontrolled stimulus properties, such as psycho-linguistic variables (e.g., phoneme number, familiarity, and age of acquisition) and image properties (e.g., contrast, compactness, and homogeneity). The challenges are then to know how biased the decision is, which features affect the classification the most, which part of the signal is impacted, and what the probability is of performing neural categorization per se. To address these problems, this research has two main objectives: (1) modeling and quantifying the covariate effects to identify spatio-temporal regions of the EEG allowing maximal classification performance while minimizing the biasing effect, and (2) evaluating the need to balance the covariates across categories when studying brain mechanisms.
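To make the confound concrete, here is a minimal simulation (a sketch, not the authors' actual pipeline) in which the EEG features carry no genuine category information at all; only a covariate, here a hypothetical image-contrast value, leaks into them because it is unbalanced across the categories. A simple linear classifier still scores well above chance, illustrating how an unbalanced covariate can masquerade as neural categorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 400, 32           # trials x flattened (channel, time) features
labels = rng.integers(0, 2, n_trials)    # hypothetical living (1) vs. non-living (0)

# Unbalanced covariate: a hypothetical image-contrast value that
# systematically differs between the two categories
contrast = labels + rng.normal(0, 0.5, n_trials)

# Simulated EEG features with NO true category signal:
# the covariate alone drives the first 8 features
X = rng.normal(0, 1, (n_trials, n_features))
X[:, :8] += contrast[:, None]

# Least-squares linear classifier, trained and tested on a 50/50 split
Xb = np.column_stack([np.ones(n_trials), X])
train, test = slice(0, 200), slice(200, 400)
w, *_ = np.linalg.lstsq(Xb[train], labels[train], rcond=None)
acc = ((Xb[test] @ w > 0.5).astype(int) == labels[test]).mean()
print(f"accuracy driven purely by the covariate: {acc:.2f}")  # well above chance (0.5)
```

Balancing the contrast values across the two categories, or modeling the covariate explicitly as the paper proposes, would pull this accuracy back toward chance.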
To solve the modeling problem, we propose a linear parametric analysis applied to some observable and commonly studied covariates. The biasing effect is quantified by comparing the regions highly influenced by the covariates with the regions of high categorical contrast, i.e., the parts of the ERP that allow a reliable classification. The need to balance the stimulus’s inner properties across categories is evaluated by assessing the separability between category-related and covariate-related evoked responses. The procedure is applied to a visual priming experiment in which the images represent items belonging to living or non-living entities. The observed covariates are the commonly controlled psycho-linguistic variables and some visual features of the images. As a result, we identified that the category of the stimulus mostly affects the late evoked response. The covariates, when not modeled, have a biasing effect on the classification, essentially in the early evoked response. This effect increases with the diversity of the dataset and the complexity of the algorithm used. Since the effects of both psycho-linguistic variables and image features appear outside the spatio-temporal regions of significant categorical contrast, a proper selection of the region of interest makes the classification reliable. Having shown that the covariate effects can be separated from the categorical effect, our framework can be further used to isolate the category-dependent evoked response from the rest of the EEG in order to study the neural processes involved when seeing living vs. non-living entities.
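The linear parametric analysis can be sketched as a mass-univariate regression: at each time point, the ERP amplitude is modeled as a weighted sum of an intercept, the category label, and the covariate, so the two effects can be read off separately. The simulation below (effect windows, amplitudes, and variable names are illustrative assumptions, not the paper's data) plants a covariate effect in an early window and a categorical effect in a late window, matching the pattern the study reports, and recovers both:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times = 300, 100
category = rng.integers(0, 2, n_trials)              # living vs. non-living
covariate = category + rng.normal(0, 0.5, n_trials)  # unbalanced covariate

# Hypothetical single-channel ERP: the covariate drives an early window,
# the category drives a late window (the pattern reported in the study)
erp = rng.normal(0, 1, (n_trials, n_times))
erp[:, 10:30] += covariate[:, None]
erp[:, 60:90] += 1.5 * category[:, None]

# Mass-univariate linear model: regress [intercept, category, covariate]
# against the ERP at every time point and read off the per-regressor betas
design = np.column_stack([np.ones(n_trials), category, covariate])
betas, *_ = np.linalg.lstsq(design, erp, rcond=None)  # shape (3, n_times)

early, late = slice(10, 30), slice(60, 90)
print("covariate beta, early window:", betas[2, early].mean().round(2))
print("category  beta, late window :", betas[1, late].mean().round(2))
print("category  beta, early window:", betas[1, early].mean().round(2))  # near zero
```

Time points where the covariate beta is large but the category beta is not mark regions to exclude from the region of interest; under this toy model, classifying only on the late window isolates the categorical contrast from the covariate effect.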

