Status: Published
Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network
Liu, Yu (1); Ding, Yufeng (1); Li, Chang (1); Cheng, Juan (1); Song, Rencheng (1); Wan, Feng (2); Chen, Xun (3)
2020-07-22
Source Publication: COMPUTERS IN BIOLOGY AND MEDICINE
ISSN: 0010-4825
Volume: 123
Pages: 103927
Abstract

In recent years, deep learning (DL) techniques, and in particular convolutional neural networks (CNNs), have shown great potential in electroencephalogram (EEG)-based emotion recognition. However, existing CNN-based EEG emotion recognition methods usually require a relatively complex feature pre-extraction stage. More importantly, CNNs cannot adequately characterize the intrinsic relationships among the different channels of EEG signals, which are an essential clue for recognizing emotion. In this paper, we propose an effective multi-level features guided capsule network (MLF-CapsNet) for multi-channel EEG-based emotion recognition to overcome these issues. MLF-CapsNet is an end-to-end framework that simultaneously extracts features from the raw EEG signals and determines the emotional state. Compared with the original CapsNet, it incorporates multi-level feature maps learned by different layers when forming the primary capsules, which enhances the capability of feature representation. In addition, it uses a bottleneck layer to reduce the number of parameters and accelerate computation. Our method achieves average accuracies of 97.97%, 98.31% and 98.32% on valence, arousal and dominance of the DEAP dataset, respectively, and 94.59%, 95.26% and 95.13% on valence, arousal and dominance of the DREAMER dataset, respectively. These results show that our method achieves higher accuracy than state-of-the-art methods.
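
The record carries no code, but the pipeline the abstract describes (convolutional feature extraction from raw EEG, concatenation of multi-level feature maps into primary capsules, and a 1x1 bottleneck to cut parameters) can be sketched briefly. The PyTorch snippet below is only an illustrative sketch, not the authors' implementation: the layer widths, kernel sizes, capsule dimension, and the names MLFCapsNetSketch and squash are all assumptions, and the dynamic routing from primary capsules to per-emotion output capsules is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Capsule nonlinearity: preserves vector direction, maps length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class MLFCapsNetSketch(nn.Module):
    def __init__(self, eeg_channels=32, caps_dim=8):
        super().__init__()
        # Three convolution stages over raw EEG (channels x time); each stage's
        # output is retained so the primary capsules see multi-level features.
        self.conv1 = nn.Conv1d(eeg_channels, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv1d(64, 128, kernel_size=9, padding=4)
        self.conv3 = nn.Conv1d(128, 256, kernel_size=9, padding=4)
        # 1x1 bottleneck fuses the concatenated multi-level maps and reduces
        # the parameter count of the subsequent capsule layers.
        self.bottleneck = nn.Conv1d(64 + 128 + 256, 128, kernel_size=1)
        self.caps_dim = caps_dim

    def forward(self, x):  # x: (batch, eeg_channels, time)
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(f1))
        f3 = F.relu(self.conv3(f2))
        multi = torch.cat([f1, f2, f3], dim=1)   # multi-level feature maps
        fused = F.relu(self.bottleneck(multi))   # (batch, 128, time)
        b, c, t = fused.shape
        # Regroup the fused map into caps_dim-dimensional primary capsules.
        caps = fused.permute(0, 2, 1).reshape(b, (t * c) // self.caps_dim, self.caps_dim)
        # Dynamic routing to one output capsule per emotion class (whose vector
        # length would give the class score) would follow; omitted for brevity.
        return squash(caps)

x = torch.randn(4, 32, 128)          # e.g., 4 trials, 32 EEG channels, 128 samples
print(MLFCapsNetSketch()(x).shape)   # torch.Size([4, 2048, 8])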

Keywords: Capsule Network; Deep Learning; Electroencephalogram (EEG); Emotion Recognition
DOI: 10.1016/j.compbiomed.2020.103927
Indexed By: SCIE
Language: English
WOS Research Area: Life Sciences & Biomedicine - Other Topics; Computer Science; Engineering; Mathematical & Computational Biology
WOS Subject: Biology; Computer Science, Interdisciplinary Applications; Engineering, Biomedical; Mathematical & Computational Biology
WOS ID: WOS:000558010800042
Publisher: Elsevier Ltd
Scopus ID: 2-s2.0-85088374674
Document Type: Journal article
Collection: Faculty of Science and Technology > Department of Electrical and Computer Engineering
Corresponding Author: Li, Chang
Affiliations:
1. Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
2. Department of Electrical and Computer Engineering, University of Macau, Macau, China
3. Department of Electronic Science and Technology, University of Science and Technology of China, Hefei, 230027, China
Recommended Citation:
GB/T 7714: Liu, Yu, Ding, Yufeng, Li, Chang, et al. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network[J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2020, 123: 103927.
APA: Liu, Yu, Ding, Yufeng, Li, Chang, Cheng, Juan, Song, Rencheng, Wan, Feng, & Chen, Xun (2020). Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network. COMPUTERS IN BIOLOGY AND MEDICINE, 123, 103927.
MLA: Liu, Yu, et al. "Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network." COMPUTERS IN BIOLOGY AND MEDICINE 123 (2020): 103927.
Files in This Item:
There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.