Spatial Localization of EEG Electrodes in a TOF+CCD Camera System

Published on April 9, 2019


Yu He1, Huili Qiu2, Xi Yan1,3 and Meng Zhao2
1College of Computer Science, Zhejiang University of Technology, Hangzhou, China
2School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
3College of Business, Missouri State University, Springfield, MO, United States
A crucial component of electroencephalography (EEG) technology is the accurate estimation of EEG electrode positions on a specific human head, which is very useful for the precise analysis of brain functions. Photogrammetry has become an effective method in this field. This study proposes a more reliable and efficient method that can acquire 3D information conveniently and locate the source signal accurately in real time. The main objective is the identification and 3D localization of EEG electrode positions using a system consisting of CCD cameras and Time-of-Flight (TOF) cameras. To calibrate the camera group accurately, and unlike previous camera calibration approaches, this report introduces a method that uses the point cloud directly rather than the depth image. Experimental results indicate that the typical reconstruction distance error of this study is 3.26 mm for real-time applications, which is much better than the electromagnetic method widely used in clinical medicine. The accuracy can be further improved considerably by using a higher-resolution camera.

Introduction
Electroencephalography (EEG) is now widely used in clinical medicine for conditions such as epilepsy, coma, and brain death, owing to its ease of use, economy, safety, and non-invasive detection (Jeon et al., 2018). To make good use of EEG for analyzing brain activities, it is important to accurately relate the position of each scalp signal to the cerebral cortex (Qian and Sheng, 2011; Reis and Lochmann, 2015; Butler et al., 2016; Saha et al., 2017; Liu et al., 2018). At present, there are several kinds of EEG electrode localization methods, including (1) the manual method, (2) digital radio-frequency (RF) electromagnetic instruments, (3) magnetic resonance (MR), (4) ultrasonic transmission and reflection, and (5) the photogrammetric method (Koessler et al., 2007). The manual method measures the distance to each preset sensor with a dedicated tool. It is low in cost, but it is time-consuming and labor-intensive, and manual operation easily introduces errors (Russell et al., 2005). Digital RF electromagnetic instrumentation is currently the most widely utilized method. Its principle is to locate the position of an EEG electrode through a magnetic field, with an accuracy of up to 4 mm. It is faster and more convenient than the manual method, but single-point measurements are prone to mistakes, which means the work must be repeated many times to obtain accurate results. Moreover, this method imposes strict requirements on the measurement environment: appropriate air humidity and temperature, and no metal artifacts. Additional data-conversion tools are also necessary. The MR method requires an additional calibration object, which makes it unsuitable for multi-sensor situations. The ultrasonic method, like the electromagnetic digital method, requires single-point measurements and consumes time and energy. A common disadvantage of the above methods is that their electrical signals interfere with the weak EEG signals, which affects the final detection results.
Compared with traditional methods, the photogrammetric method is fast, accurate, and easy to operate. As early as 2000, Bauer et al. built an EEG electrode localization system with 12 industrial cameras, although they did not specify the system settings and operating procedures (Bauer et al., 2000). Russell et al. used 11 industrial cameras to locate the electrode positions (Russell et al., 2005). Their method is simple to operate, saves the operator time, and requires no additional devices. The experimental process takes only 15–20 min, and patients are not required to participate in the subsequent data processing, which brings great convenience to patients and doctors. The method works by calibrating the 11 cameras and obtaining the three-dimensional (3D) information of each electrode using stereo matching from computer vision. Yet there are three shortcomings. Firstly, each electrode in the image must be manually marked, which is likely to introduce artificial errors. Secondly, the system is only suitable for a self-made electrode cap and is not applicable to other types of electrode caps, a limitation the traditional methods do not have. Thirdly, the system can only identify visible electrode points; for electrode points hidden in the hair this method is useless, whereas the electromagnetic digital method and the ultrasonic method do not have this limitation (Zhang et al., 2014). The equipment is also so complex that it is not easy to operate. Baysal and Sengül (2010) used only one camera to locate the electrode positions, hoping to reduce costs. The camera is moved along a preset route, taking pictures at every angle (Koessler et al., 2007). Although the cost is reduced, the patient must stay still for a long period of time, which increases the likelihood of human error and prolongs data acquisition.
Recently, there has been a great deal of interest in the development and application of time-of-flight (TOF) depth cameras. In 2015, Yao et al. presented a full very-large-scale integration (VLSI) implementation of a new high-resolution depth-sensing system on a chip (SoC) based on active infrared structured light, which estimates 3D scene depth by matching randomized speckle patterns (Yao et al., 2015). In the same year, Golbach et al. presented a computer-vision system for seedling phenotyping that combines the best of both approaches by utilizing TOF depth cameras (Golbach et al., 2016). Although TOF has its unique strengths, the practical applicability of TOF cameras is still limited by the low resolution and quality of depth measurements. This has motivated many researchers to combine TOF cameras with other sensors in order to enhance and upsample depth images (Eichhardt et al., 2017). Calibration between depth cameras and other sensors has therefore become a major concern. A modified method for multi-modal camera calibration is proposed in this report.
In summary, the methods in previous studies can, to some degree, address data acquisition and operability, but many limitations remain. This report proposes a convenient and accurate method, also based on the photogrammetry principle (Russell et al., 2005; Clausner et al., 2017). An EEG signal acquisition system based on RGB-Depth (RGB-D) multi-modal data is constructed using a high-resolution industrial camera and a high-precision depth camera to capture the object's distance and color information simultaneously. The system captures images from five perspectives, which together cover all of the electrodes. The electrode distribution of the cap follows the international 10–20 standard. The information-collecting process can be performed in real time, while all image-processing algorithms run off-line, which greatly improves the flexibility and operability of the system.
This article reports the design of such a photogrammetry system both theoretically and experimentally. The remainder of this report is structured as follows. Section Technology and Implementation introduces the implementation technology, including the sensing method, camera calibration, and the singular value decomposition (SVD) algorithm. The experimental process for electrode identification and localization is presented in section Experiments and Results. Finally, the report summarizes the findings and offers concluding remarks.
Technology and Implementation
System Setup
The existing photogrammetric methods, whether monocular, binocular, or multi-camera, all obtain the 3D information of the electrode positions through stereo vision. Theoretically, each electrode point must be captured by two or more cameras, so more pictures must be processed and the algorithm becomes more complex. Therefore, this report proposes using a depth camera based on TOF technology, the MESA-SR-4000, which can directly obtain depth information. The depth camera alone cannot identify the positions of the EEG electrodes because of its low resolution, whereas the color camera can capture the target's color, texture, and other 2D information. Hence, this project combines the two cameras to obtain both the distance and the color information of the scene. Accordingly, an EEG signal acquisition system based on RGB-D multi-modal data is built. As long as all the electrodes are captured by the system, the 3D information of every electrode can be obtained. This avoids the complexity of shooting the same electrode from two or more angles. Compared with a multi-camera system, this design reduces material costs, decreases the number of cameras, and greatly simplifies the algorithm. Compared with a single-camera system, it simplifies the experimental process and makes the operation easier: there is no need for a preset route or for adjusting the angle of a placed mirror (Qian and Sheng, 2011), while efficiency is improved.
The system works as follows. Firstly, images are collected using both the color camera and the depth camera. The color camera provides the color picture of the electrodes, so that each EEG electrode can be conveniently detected in the image and its 2D coordinates obtained. The depth camera provides the point cloud data, from which the distance information of each electrode is obtained. The key issue is the calibration between the two different cameras. Secondly, this project uses a multi-camera measurement scheme so that all of the electrodes, not just part of them, can be captured. A five-camera group photographs the experimental target from five angles located around the head. If the full equipment is not available, a single camera group can instead be moved to each of the five angles in turn. Ideally, all the electrode information can be captured from these five angles. Compared with photogrammetry systems based only on color cameras, the photogrammetric system designed in this project greatly reduces the number of shooting angles and the complexity of the systematic framework.
In this project, the resolution of the CCD color camera is 1,624 × 1,234, and the TOF depth camera (MESA-SR-4000) has a resolution of 176 × 144. The combined camera system is shown in Figure 1. The electrode cap placed on a head model and on a subject's head for practical tests is shown in Figure 2. The 10–20 electrodes are arranged on a cap that is placed on the head. The different colors on the electrode dots can easily be applied, e.g., with a coat of paint or sticky paper, and either way the colors can easily be changed. Coloring the dots does not affect the electrode functions or costs.


Figure 1. The camera system.


Figure 2. The electrode cap on a head model and on a subject's head.
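Distinct dot colors make the electrodes easy to find automatically in the color images. As one possible illustration (not the method stated in this report), dots of a given color can be segmented by simple HSV thresholding with OpenCV; the function name and threshold bounds below are placeholder assumptions.

```python
import cv2
import numpy as np

def detect_colored_dots(bgr_image, hsv_low, hsv_high):
    """Return (u, v) centroids of electrode dots of one color.

    hsv_low / hsv_high: illustrative (H, S, V) bounds for that color,
    e.g., (40, 80, 80) and (80, 255, 255) for a green dot.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate (zero-area) contours
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```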

Given the accuracy of the TOF camera over its sensing range, its best shooting distance is between 0.5 and 8 m. The schematic diagram is shown in Figure 3. Five groups of cameras are used in this system to take pictures simultaneously; four of them (1, 2, 3, and 4 in Figure 3) are placed around the head at 90° intervals, while the last is located overhead. All the electrodes are then reconstructed from the color images captured by the CCD cameras and the depth information obtained by the TOF cameras, yielding the target RGB-D data from multiple angles. The horizontal distance between the model and the cameras is 60 cm and the vertical distance is 40 cm.


Figure 3. The schematic diagram.

The operational flow of the system is shown in Figure 4. Firstly, the color images and the 3D point cloud data are obtained by the color cameras and the depth sensors at the five angles. Then, electrode coordinates are detected and extracted from the color images, and their coordinates in 3D space are calculated using the calibration results of the color camera and the depth sensor. Finally, a correlation algorithm computes the relationships among the coordinate systems of the five views (Wang et al., 2017), so that the electrodes seen from the five different viewpoints are registered in the same spatial coordinate system.


Figure 4. Data processing flow.
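The flow in Figure 4 can be summarized in a short Python sketch. The helper names (detect_electrodes, match_cloud_points) are hypothetical stand-ins for steps the report describes but does not spell out at this point, so this is an outline under those assumptions rather than the authors' implementation.

```python
import numpy as np

def detect_electrodes(color_image):
    """Hypothetical 2D detector for the colored electrode dots;
    returns an (m, 2) array of pixel coordinates."""
    raise NotImplementedError  # stand-in for the detection step

def match_cloud_points(pixels, cloud, calib):
    """Hypothetical lookup: associate electrode pixels in the color
    image with 3D points in the TOF point cloud via the CCD/TOF
    calibration of Eq. (4); returns an (m, 3) array."""
    raise NotImplementedError  # stand-in for the 2D -> 3D association

def register_views(views):
    """Merge electrode positions from the five views into one frame.

    Each view is (color_image, cloud, calib, R_view, t_view), where
    (R_view, t_view) maps that view's coordinates to the common frame.
    """
    merged = []
    for color_image, cloud, calib, R_view, t_view in views:
        pixels = detect_electrodes(color_image)
        pts = match_cloud_points(pixels, cloud, calib)
        merged.append(pts @ R_view.T + t_view)  # rigid transform per view
    return np.vstack(merged)
```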

System Calibration
Traditionally, the calibration method uses the depth map obtained by the TOF camera and the color map obtained by the CCD camera to complete calibration (Cheng et al., 2016; Raposo et al., 2016). However, the resolution of the depth image is very low, and the results are often unstable. To solve this problem, this project uses a new calibration plate and accurate point cloud data to perform camera calibration (Jung et al., 2015; Wei and Zhang, 2015). The comparison of the two methods is described in the next section. The camera calibration model is designed as follows. Assume that Q is a point in space whose coordinates in the camera coordinate system are $(x_c, y_c, z_c)^T$. The projection of the point Q onto the normalized image plane is $X_n$:
$$X_n = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_c/z_c \\ y_c/z_c \end{bmatrix} \qquad (1)$$
Taking lens distortion into account, these coordinates are mapped to $X_d$:
$$X_d = \begin{bmatrix} x' \\ y' \end{bmatrix} = \left(1 + k_1 r^2 + k_2 r^4\right) X_n \qquad (2)$$
where $r^2 = x^2 + y^2$, and $k_1$, $k_2$ are the radial distortion coefficients. $X_d$ is then mapped to the image coordinates $X_q$, i.e.,
$$X_q = \begin{bmatrix} x^* \\ y^* \end{bmatrix} = \begin{bmatrix} f_x x' + c_x \\ f_y y' + c_y \end{bmatrix} \qquad (3)$$
where $f_x$ and $f_y$ are the focal lengths in the x and y directions, respectively, and $c_x$ and $c_y$ are the principal point coordinates.
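As a concrete illustration of Eqs. (1)–(3), a minimal Python sketch of this projection model might look as follows; the function name is ours, and the intrinsic parameters passed in would come from the calibration.

```python
import numpy as np

def project_point(Xc, fx, fy, cx, cy, k1, k2):
    """Project a 3D point Xc = (xc, yc, zc) in camera coordinates to
    pixel coordinates with the pinhole model plus two radial
    distortion terms, following Eqs. (1)-(3)."""
    xc, yc, zc = Xc
    x, y = xc / zc, yc / zc            # Eq. (1): normalized coordinates
    r2 = x * x + y * y                 # r^2 = x^2 + y^2
    d = 1.0 + k1 * r2 + k2 * r2 * r2   # Eq. (2): radial distortion factor
    xp, yp = d * x, d * y
    return np.array([fx * xp + cx,     # Eq. (3): to image coordinates
                     fy * yp + cy])
```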
The relationship between the cameras in the group can be described as the relationship between the coordinates of the point Q in the two camera coordinate systems. Let $X_{cd}$ be the coordinate vector of the point Q in the TOF camera coordinate system and $X_{cc}$ its coordinate vector in the CCD camera coordinate system; their relationship is
$$X_{cc} = R X_{cd} + T \qquad (4)$$
The goal of calibration is to solve for the rotation matrix R and the translation vector T.
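Combining Eq. (4) with the projection model, a point measured by the TOF camera can be mapped into the color image, which is how the two modalities are fused. The sketch below assumes the hypothetical project_point function from the previous snippet and calibration results R and T.

```python
def tof_point_to_ccd_pixel(X_cd, R, T, fx, fy, cx, cy, k1, k2):
    """Map a TOF-frame point into the CCD image: first apply the
    extrinsic transform of Eq. (4), then project with Eqs. (1)-(3)."""
    X_cc = R @ X_cd + T  # Eq. (4): TOF frame -> CCD frame
    return project_point(X_cc, fx, fy, cx, cy, k1, k2)
```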
Decomposition for Data Stitching
With regard to the point-cloud stitching problem, many works use the ICP algorithm or an improved ICP algorithm (Cheng et al., 2016; Yang et al., 2016). Here, however, due to the large deviation between the viewing angles, the performance of the ICP algorithm is not ideal, so SVD is adopted to calculate the transformation between the two sets of point clouds (Sorkine, 2009; Jung et al., 2015; Raposo et al., 2016). The problem is formulated as
$$(R, t) = \underset{R,\,t}{\operatorname{argmin}} \sum_{i=1}^{n} w_i \left\| (R m_i + t) - n_i \right\|^2 \qquad (5)$$
where $w_i > 0$ is the weight of each point in the cloud. To compute the translation, hold R fixed in the above formula and denote the resulting objective by $F(t)$; setting its derivative with respect to t to zero gives
$$0 = \frac{\partial F}{\partial t} = \sum_{i=1}^{n} 2 w_i (R m_i + t - n_i) = 2t \left( \sum_{i=1}^{n} w_i \right) + 2R \left( \sum_{i=1}^{n} w_i m_i \right) - 2 \sum_{i=1}^{n} w_i n_i \qquad (6)$$
where
$$\bar{m} = \frac{\sum_{i=1}^{n} w_i m_i}{\sum_{i=1}^{n} w_i}, \qquad \bar{n} = \frac{\sum_{i=1}^{n} w_i n_i}{\sum_{i=1}^{n} w_i} \qquad (7)$$
$$t = \bar{n} - R\bar{m} \qquad (8)$$
Substituting (7) and (8) into (5) gives
$$\sum_{i=1}^{n} w_i \left\| (R m_i + t) - n_i \right\|^2 = \sum_{i=1}^{n} w_i \left\| R m_i + \bar{n} - R\bar{m} - n_i \right\|^2 = \sum_{i=1}^{n} w_i \left\| R(m_i - \bar{m}) - (n_i - \bar{n}) \right\|^2 \qquad (9)$$
$$X_i := m_i - \bar{m}, \qquad Y_i := n_i - \bar{n} \qquad (10)$$
$$R = \underset{R}{\operatorname{argmin}} \sum_{i=1}^{n} w_i \left\| R X_i - Y_i \right\|^2 \qquad (11)$$
To calculate the rotation, the norm in (11) is expanded in matrix form:
$$\begin{aligned} \left\| R X_i - Y_i \right\|^2 &= (R X_i - Y_i)^T (R X_i - Y_i) \\ &= \left(X_i^T R^T - Y_i^T\right)(R X_i - Y_i) \\ &= X_i^T R^T R X_i - Y_i^T R X_i - X_i^T R^T Y_i + Y_i^T Y_i \\ &= X_i^T X_i - Y_i^T R X_i - X_i^T R^T Y_i + Y_i^T Y_i \end{aligned} \qquad (12)$$
Since the rotation matrix R is orthogonal, $R^T R = I$. Both $Y_i^T R X_i$ and $X_i^T R^T Y_i$ are scalars, and the transpose of a scalar equals the scalar itself, i.e.,
$$X_i^T R^T Y_i = \left(X_i^T R^T Y_i\right)^T = Y_i^T R X_i \qquad (13)$$
$$\left\| R X_i - Y_i \right\|^2 = X_i^T X_i - 2 Y_i^T R X_i + Y_i^T Y_i \qquad (14)$$
Only the middle term depends on R, so the minimization reduces to
$$\underset{R}{\operatorname{argmin}} \left( -2 \sum_{i=1}^{n} w_i Y_i^T R X_i \right) \qquad (15)$$
$$\underset{R}{\operatorname{argmin}} \left( -2 \sum_{i=1}^{n} w_i Y_i^T R X_i \right) = \underset{R}{\operatorname{argmax}} \sum_{i=1}^{n} w_i Y_i^T R X_i \qquad (16)$$
$$\sum_{i=1}^{n} w_i Y_i^T R X_i = \operatorname{tr}\left(W Y^T R X\right) \qquad (17)$$
This conversion turns the sum into a matrix multiplication. Here, W is the $n \times n$ diagonal matrix of weights, and X and Y are $3 \times n$ matrices whose columns are the $X_i$ and $Y_i$; the trace of the matrix product equals the left-hand side of (17). The trace is invariant under cyclic permutation of a product:
$$\operatorname{tr}(AB) = \operatorname{tr}(BA) \qquad (18)$$
$$\operatorname{tr}\left(W Y^T R X\right) = \operatorname{tr}\left(\left(W Y^T\right)(R X)\right) = \operatorname{tr}\left(R X W Y^T\right) \qquad (19)$$
$$S = X W Y^T, \qquad \operatorname{svd}(S) \;\Rightarrow\; S = U \Sigma V^T \qquad (20)$$
$$\operatorname{tr}\left(R X W Y^T\right) = \operatorname{tr}(R S) = \operatorname{tr}\left(R U \Sigma V^T\right) = \operatorname{tr}\left(\Sigma V^T R U\right) \qquad (21)$$
The last step of this transformation again uses property (18). Since U, R, and V are orthogonal matrices, $O = V^T R U$ is also an orthogonal matrix, so each of its columns $o_j$ has unit norm:
$$1 = o_j^T o_j = \sum_{i=1}^{d} o_{ij}^2 \;\Rightarrow\; o_{ij}^2 \le 1 \;\Rightarrow\; \left| o_{ij} \right| \le 1$$

Hence $\operatorname{tr}(\Sigma O) = \sum_{i} \sigma_i o_{ii} \le \sum_{i} \sigma_i$, and the maximum is attained when every $o_{ii} = 1$, i.e., when $O = V^T R U = I$, which gives the optimal rotation $R = V U^T$.
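Following the derivation above (and Sorkine, 2009), the whole weighted alignment can be written compactly in NumPy. This is a minimal sketch, not the authors' code; it adds the usual determinant check that guards against a reflection being returned instead of a rotation.

```python
import numpy as np

def rigid_align(M, N, w):
    """Find (R, t) minimizing sum_i w_i * ||(R m_i + t) - n_i||^2,
    per Eqs. (5)-(21). M, N: (n, 3) corresponding points; w: (n,) > 0."""
    w = w / w.sum()
    m_bar, n_bar = w @ M, w @ N        # Eq. (7): weighted centroids
    X, Y = M - m_bar, N - n_bar        # Eq. (10): centered points
    S = X.T @ (w[:, None] * Y)         # Eq. (20): S = X W Y^T (3 x 3)
    U, _, Vt = np.linalg.svd(S)
    V = Vt.T
    # Optimal rotation R = V U^T; flip the last axis if det = -1
    # (a reflection), the standard correction from Sorkine (2009).
    D = np.diag([1.0, 1.0, np.linalg.det(V @ U.T)])
    R = V @ D @ U.T
    t = n_bar - R @ m_bar              # Eq. (8)
    return R, t
```

Here M and N would hold corresponding electrode coordinates from two overlapping views, and the returned (R, t) registers the first view's points into the second view's coordinate system.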
