Tsinghua Science and Technology, 2021, Vol. 26, Issue 4: 505-522. doi: 10.26599/TST.2020.9010015
    
Cross-Target Transfer Algorithm Based on the Volterra Model of SSVEP-BCI
Jiajun Lin, Liyan Liang, Xu Han, Chen Yang, Xiaogang Chen, Xiaorong Gao*
Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China.
School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China.
Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China.

Abstract  

In general, a large amount of training data can effectively improve the classification performance of a Steady-State Visually Evoked Potential (SSVEP)-based Brain-Computer Interface (BCI) system. However, collecting such data prolongs the training time and considerably restricts the practicality of the system. This study proposed an SSVEP nonlinear signal model based on the Volterra filter, which can reconstruct stable reference signals from a relatively small number of training targets via transfer learning, thereby reducing the training cost of SSVEP-BCI. Moreover, this study designed a transfer-extended Canonical Correlation Analysis (t-eCCA) method based on the model to achieve cross-target transfer. In a single-target SSVEP experiment with 16 stimulus frequencies, t-eCCA obtained an average accuracy of 86.96%±12.87% across 12 subjects using only half of the calibration time. This accuracy exhibited no significant difference from that of the representative training-based classification algorithms, namely, extended Canonical Correlation Analysis (eCCA, 88.32%±13.97%) and Task-Related Component Analysis (TRCA, 88.92%±14.44%), and was significantly higher than that of the classic training-free algorithms, namely, Canonical Correlation Analysis (CCA) and Filter-Bank CCA (FBCCA). Results showed that the proposed cross-target transfer algorithm t-eCCA can fully utilize the information about the targets and their stimulus frequencies and effectively reduce the training time of SSVEP-BCI.
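As a rough illustration of the modeling idea (a sketch, not the authors' implementation), the snippet below fits a truncated second-order Volterra filter to a toy SSVEP-like signal by least squares. The memory length, signal parameters, and every name in it are assumptions made purely for demonstration.

```python
# Minimal sketch (not the paper's code): least-squares identification of a
# truncated second-order Volterra model from a sinusoidal stimulus input.
import numpy as np

def volterra_design_matrix(x, memory):
    """Build regressors for a 2nd-order Volterra filter with the given memory."""
    n = len(x)
    # First-order (linear) terms: delayed copies of the input.
    cols = [np.concatenate([np.zeros(d), x[:n - d]]) for d in range(memory)]
    # Second-order terms: products of delayed inputs, d1 <= d2 (symmetric kernel).
    for d1 in range(memory):
        for d2 in range(d1, memory):
            cols.append(cols[d1] * cols[d2])
    return np.column_stack(cols)

fs, dur, f_stim = 250, 2.0, 10.0           # sampling rate (Hz), length (s), stimulus (Hz)
t = np.arange(int(fs * dur)) / fs
x = np.sin(2 * np.pi * f_stim * t)         # sine input at the stimulus frequency
rng = np.random.default_rng(0)
y = (0.8 * np.sin(2 * np.pi * f_stim * t - 0.6)
     + 0.3 * np.sin(2 * np.pi * 2 * f_stim * t - 1.1)
     + 0.1 * rng.standard_normal(t.size))  # toy "SSVEP" with a second harmonic

X = volterra_design_matrix(x, memory=10)
h, *_ = np.linalg.lstsq(X, y, rcond=None)  # stacked kernel coefficients
y_hat = X @ h
nmse = np.sum((y - y_hat) ** 2) / np.sum(y ** 2)
print(f"NMSE of the Volterra fit: {nmse:.3f}")
```

Because products of delayed copies of a sinusoidal input generate energy at twice the input frequency, a second-order Volterra model can capture the harmonic structure of SSVEP that a purely linear filter cannot, which is the property the transfer scheme builds on.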



Keywords: Steady-State Visually Evoked Potential (SSVEP); Brain-Computer Interface (BCI); Volterra filter; cross-target information; transfer learning
Received: 06 April 2020      Published: 12 January 2021
Fund: National Key Basic Research and Development Program of China (2017YFB1002505); Key Research and Development Program of Guangdong Province (2018B030339001); National Natural Science Foundation of China (61431007)
Corresponding author: Xiaorong Gao, E-mail: gxr-dea@tsinghua.edu.cn. Author E-mails: lin-jj17@mails.tsinghua.edu.cn; 18618488256@163.com; hanxu29@ucla.edu; yuanchouyc@gmail.com; chenxg@bme.cams.cn
About the authors:
Jiajun Lin received the BS degree from Beijing University of Posts and Telecommunications in 2017. He is currently a master student at the Department of Biomedical Engineering, Tsinghua University. His research interests are biomedical signal processing, brain-computer interfaces, and machine learning.
Liyan Liang received the BE degree from Beijing University of Technology in 2015. He is currently working toward the PhD degree at Tsinghua University. His research interests include brain-computer interfaces, biomedical signal processing, and machine learning.
Xu Han received the BS degree from Huazhong University of Science and Technology in 2016. He is now a master student at Tsinghua University. His research interests focus on biomedical signal processing, brain-computer interfaces, and machine learning.
Chen Yang received the MS degree in biomedical engineering from Beijing University of Posts and Telecommunications in 2012, and the PhD degree in biomedical engineering from Tsinghua University in 2019. Since 2019, he has been a postdoctoral researcher at Beijing University of Posts and Telecommunications. His research interests include brain-computer interfaces and signal processing.
Xiaogang Chen received the BEng degree in biomedical engineering from Xianning College, Xianning, China in 2008, the MEng degree in biomedical engineering from Hebei University of Technology, Tianjin, China in 2011, and the PhD degree in biomedical engineering from Tsinghua University, Beijing, China in 2015. He is currently an associate research fellow at the Institute of Biomedical Engineering, Chinese Academy of Medical Sciences. His research interests include brain-computer interfaces and biomedical signal processing.
Xiaorong Gao received the BS degree from Zhejiang University in 1986, the MS degree from Peking Union Medical College in 1989, and the PhD degree from Tsinghua University in 1992. He is currently a professor at the Department of Biomedical Engineering, Tsinghua University. His current research interests include biomedical signal processing and medical instrumentation, especially brain-computer interfaces.
Cite this article:

Jiajun Lin, Liyan Liang, Xu Han, Chen Yang, Xiaogang Chen, Xiaorong Gao. Cross-Target Transfer Algorithm Based on the Volterra Model of SSVEP-BCI. Tsinghua Science and Technology, 2021, 26(4): 505-522.

URL:

http://tst.tsinghuajournals.com/10.26599/TST.2020.9010015     OR     http://tst.tsinghuajournals.com/Y2021/V26/I4/505

Fig. 1 Flowchart of the proposed transfer algorithm.
Method | Algorithm       | Template
M1     | eCCA            | Sine-cosine template + training EEG template
M2     | Variant of eCCA | Sine-cosine template + training SSVEP template
M3     | t-eCCA          | Sine-cosine template + transferred SSVEP template
M4     | CCA             | Sine-cosine template
M5     | FBCCA           | Sine-cosine template
M6     | TRCA            | Training EEG template
Table 1 Different templates used in different algorithms.
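Methods M4 and M5 in Table 1 rely only on the sine-cosine template. Below is a minimal sketch of the plain CCA baseline (M4), assuming the standard construction of sine-cosine references with harmonics; the epoch length, channel count, and helper names are illustrative and not taken from the paper.

```python
# Hedged sketch of CCA-based SSVEP target identification (method M4):
# score each candidate frequency by its largest canonical correlation
# with the EEG epoch against a sine-cosine reference, then pick the best.
import numpy as np
from sklearn.cross_decomposition import CCA

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=2):
    t = np.arange(n_samples) / fs
    ref = []
    for h in range(1, n_harmonics + 1):
        ref.append(np.sin(2 * np.pi * h * freq * t))
        ref.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(ref)             # shape: (n_samples, 2*n_harmonics)

def cca_score(eeg, ref):
    """Largest canonical correlation between EEG (samples x channels) and ref."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify(eeg, freqs, fs):
    scores = [cca_score(eeg, sine_cosine_reference(f, fs, eeg.shape[0])) for f in freqs]
    return freqs[int(np.argmax(scores))]

fs = 250
freqs = np.arange(8.0, 16.0, 0.5)           # the 16 stimulus frequencies, 8-15.5 Hz
rng = np.random.default_rng(1)
t = np.arange(fs) / fs                      # a 1 s epoch
eeg = (np.column_stack([np.sin(2 * np.pi * 10.5 * t + p) for p in (0.0, 0.4)])
       + 0.5 * rng.standard_normal((fs, 2)))  # toy two-channel epoch at 10.5 Hz
print("Predicted frequency:", classify(eeg, freqs, fs))
```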
Fig. 2 Flowchart of the single-target offline experiment.
Fig. 3 Fitting results of Subject S4 at the Oz channel in the time domain waveform for all visual stimuli at stimulus frequencies of [8 : 0.5 : 15.5] Hz using the Volterra model.
Fig. 4 Identification results of Subject S4 at the Oz channel using the Volterra model. (a) First-order amplitude frequency response of the Volterra model. (b) First-order phase frequency response of the Volterra model. (c) Second-order amplitude frequency response of the Volterra model, showing only the amplitude of the diagonal component of $H_2^c(e^{j\omega_1}, e^{j\omega_2})$ (see Eq. (A8)). (d) Second-order phase frequency response of the Volterra model, showing only the phase of the diagonal component of $H_2^c(e^{j\omega_1}, e^{j\omega_2})$ (see Eq. (A8)).
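For readers unfamiliar with Volterra frequency responses: the first-order response is the DFT of the identified first-order kernel, and the diagonal second-order component $H_2(e^{j\omega}, e^{j\omega})$ can be read off the diagonal of the 2-D DFT of the second-order kernel. A minimal sketch with placeholder kernels (not Subject S4's identified model) follows.

```python
# Sketch (assumed, not from the paper): computing first- and second-order
# frequency responses like those in Fig. 4 from identified Volterra kernels.
import numpy as np

memory, n_fft = 10, 256
rng = np.random.default_rng(2)
h1 = rng.standard_normal(memory)            # placeholder first-order kernel
h2 = rng.standard_normal((memory, memory))
h2 = (h2 + h2.T) / 2                        # symmetric second-order kernel

H1 = np.fft.rfft(h1, n_fft)                 # first-order response H1(e^{jw})
H2 = np.fft.fft2(h2, s=(n_fft, n_fft))      # full H2(e^{jw1}, e^{jw2})
H2_diag = np.diag(H2)[: n_fft // 2 + 1]     # diagonal component, w1 = w2

print("|H1| at DC:", np.abs(H1[0]))
print("phase of diagonal H2 at the first bin:", np.angle(H2_diag[1]))
```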
Fig. 5 Comparison of (a) the amplitude (left) and phase (right) of the fundamental wave and (b) the amplitude (left) and phase (right) of the second harmonic of the transferred and observed SSVEP signals of Subject S4 at the Oz channel using the Volterra model. The dot markers indicate the training frequencies ([8 : 15] Hz) and their double frequencies ([16 : 2 : 30] Hz); the square markers indicate the transferred frequencies ([8.5 : 15.5] Hz) and their double frequencies ([17 : 2 : 31] Hz).
Fig. 6 Average correlation matrix across all subjects at the Oz channel between transferred and observed SSVEP signals at different stimulus frequencies.
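A correlation matrix of this kind can be computed directly from the waveforms. The sketch below uses placeholder data rather than the study's recordings: every transferred waveform is correlated with every observed waveform, and a strong diagonal indicates that the transfer preserves frequency-specific structure.

```python
# Hedged sketch of a Fig. 6 style analysis: Pearson correlation between
# every pair of transferred and observed single-channel SSVEP waveforms.
import numpy as np

rng = np.random.default_rng(3)
n_freqs, n_samples = 16, 500
transferred = rng.standard_normal((n_freqs, n_samples))   # placeholder waveforms
observed = transferred + 0.3 * rng.standard_normal((n_freqs, n_samples))

# Row i, column j: correlation of transferred frequency i with observed frequency j.
corr = np.corrcoef(transferred, observed)[:n_freqs, n_freqs:]
print(corr.shape, corr.diagonal().mean())   # a strong diagonal is expected
```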
Fig. 7 Target identification accuracy calculated by eCCA and the variant of eCCA using different data lengths from 0.5 to 1.0 s with an interval of 0.25 s. The numbers above the horizontal bars represent the p values between different methods, ns denotes not significant, and the error bars indicate the standard errors.
Subject | M1    | M2    | M3    | M6
S1      | 81.13 | 82.33 | 80.83 | 83.50
S2      | 95.54 | 93.50 | 93.63 | 96.67
S3      | 99.38 | 98.63 | 97.79 | 98.71
S4      | 94.83 | 93.50 | 94.08 | 93.71
S5      | 98.04 | 97.04 | 97.63 | 98.50
S6      | 99.50 | 97.88 | 97.92 | 98.88
S7      | 92.17 | 91.29 | 91.42 | 93.96
S8      | 99.83 | 98.33 | 98.88 | 99.29
S9      | 99.79 | 99.17 | 99.33 | 99.67
S10     | 99.75 | 98.50 | 98.67 | 98.96
S11     | 99.17 | 98.50 | 98.63 | 98.75
S12     | 94.71 | 96.08 | 94.92 | 93.88
Mean    | 96.15 | 95.40 | 95.31 | 96.20
Std     | 5.16  | 4.62  | 4.99  | 4.40
Table 2 Detailed target identification accuracy of 12 subjects with 2 s data length calculated by different methods. (%)
Fig. 8 Target identification accuracy as functions of data length (from 0.5 s to 2 s with an interval of 0.25 s) calculated by different methods. The asterisks and numbers above the horizontal bars represent the p values between M3 and other methods (*: p<0.05; **: p<0.005; ***: p<0.0005), ns denotes not significant, and the error bars indicate the standard errors.
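The across-subject significance tests behind such p values can be sketched as follows, assuming paired t-tests (the exact statistical test used in the paper is not stated on this page); the accuracies below are the M3 and M6 columns of Table 2.

```python
# Sketch of an across-subject paired comparison between two methods,
# using the 2 s accuracies of M3 (t-eCCA) and M6 (TRCA) from Table 2.
from scipy import stats

m3 = [80.83, 93.63, 97.79, 94.08, 97.63, 97.92, 91.42, 98.88, 99.33, 98.67, 98.63, 94.92]
m6 = [83.50, 96.67, 98.71, 93.71, 98.50, 98.88, 93.96, 99.29, 99.67, 98.96, 98.75, 93.88]

t_stat, p_value = stats.ttest_rel(m3, m6)   # paired across the 12 subjects
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```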
Subject | M3 (1.5 s) | M5 (1.5 s) | M3 (1.75 s) | M5 (1.75 s) | M3 (2 s) | M5 (2 s)
S1      | 70.58      | 45.42      | 77.46       | 57.08       | 80.83    | 62.08
S2      | 90.92      | 71.88      | 93.33       | 82.08       | 93.63    | 84.79
S3      | 96.75      | 99.58      | 97.25       | 99.79       | 97.79    | 99.79
S4      | 92.63      | 89.58      | 94.33       | 93.13       | 94.08    | 92.29
S5      | 97.54      | 96.04      | 97.75       | 96.88       | 97.63    | 98.54
S6      | 96.71      | 96.25      | 97.67       | 98.33       | 97.92    | 98.75
S7      | 87.29      | 73.75      | 89.67       | 79.58       | 91.42    | 81.88
S8      | 98.38      | 98.13      | 98.75       | 99.17       | 98.88    | 99.58
S9      | 98.83      | 100.00     | 99.29       | 100.00      | 99.33    | 100.00
S10     | 98.21      | 99.79      | 98.54       | 100.00      | 98.67    | 100.00
S11     | 98.13      | 97.08      | 98.50       | 98.54       | 98.63    | 98.33
S12     | 92.17      | 81.04      | 95.17       | 86.04       | 94.92    | 88.75
Mean    | 93.18      | 87.38      | 94.81       | 90.89       | 95.31    | 92.07
Std     | 7.66       | 15.96      | 5.89        | 12.38       | 4.99     | 10.94
Table 3 Detailed target identification accuracy of 12 subjects as functions of data length (from 1.5 to 2 s with an interval of 0.25 s) calculated by the M3 and M5 methods. (%)
Model    | NMSE
MA       | 0.21
QP       | 1.00
Volterra | 0.04
Table 4 Normalized Mean Square Error (NMSE) of the fitting results of different signal models.
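Assuming the common definition of NMSE, i.e., residual power divided by the power of the observed signal, the metric in Table 4 can be computed as follows (synthetic data, for illustration only):

```python
# Minimal NMSE sketch: residual power normalized by observed-signal power.
import numpy as np

def nmse(observed, fitted):
    observed, fitted = np.asarray(observed), np.asarray(fitted)
    return np.sum((observed - fitted) ** 2) / np.sum(observed ** 2)

y = np.sin(np.linspace(0, 4 * np.pi, 500))
print(nmse(y, 0.9 * y))                     # small residual -> small NMSE
```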
Fig. 9 Comparison of the fitting results of different signal models in the time-domain waveform and amplitude spectrum: (a) MA model, time-domain waveform; (b) MA model, amplitude spectrum; (c) QP model, time-domain waveform; (d) QP model, amplitude spectrum; (e) Volterra model, time-domain waveform; and (f) Volterra model, amplitude spectrum.
Fig. 10 Target identification accuracy as a function of data length (from 1 to 2.5 s with an interval of 0.25 s) calculated by different methods. The asterisks above the horizontal bars represent the p values between M3 and other methods (*: p< 0.05; **: p< 0.005; ***: p< 0.0005), ns indicates not significant, and the error bars indicate the standard errors.
Fig. 11 Target identification accuracy as functions of data length (from 0.5 to 2 s with an interval of 0.25 s) calculated by the proposed transfer algorithm t-eCCA at different sampling rates (i.e., 25%, 37.5%, 50%, and 62.5%). The error bars indicate the standard errors.