Inner speech, or self-talk, is the process of silently talking to ourselves to think through problems or plan actions. Although inner speech is ubiquitous, its neural basis remains largely unknown. This thesis investigates the feasibility of classifying inner speech from electroencephalography (EEG) recordings, using the dataset made publicly available by Nieto et al. (2021), in which participants performed inner speech tasks. We implement four machine learning models, present their results, and compare them under different evaluation protocols. The results are also compared with those obtained by Berg et al. (2021), who used the same dataset. Two of the classical models we tried (SVC and LinearSVC) prove superior even to results obtained with deep learning models. We also compare the results for inner speech with those for pronounced speech to validate the reusability of the proposed method. We observed a consistent pattern across both conditions, supporting the method's quality and reusability.
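As a rough illustration of the classical-model setup described above, the following is a minimal sketch using scikit-learn's SVC and LinearSVC on flattened EEG trial features. The trial count, channel and sample dimensions, label set, and cross-validation settings are illustrative assumptions rather than values from the thesis, and the data here is synthetic rather than the Nieto et al. (2021) recordings.

```python
# Minimal sketch (assumptions, not the authors' exact pipeline):
# train scikit-learn's SVC and LinearSVC on flattened EEG trials.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for real trials from the Nieto et al. (2021)
# dataset: 200 trials x (64 channels x 128 time samples), flattened
# to one feature vector per trial, with 4 hypothetical class labels.
X = rng.standard_normal((200, 64 * 128))
y = rng.integers(0, 4, size=200)

for name, clf in [("SVC", SVC(kernel="rbf")),
                  ("LinearSVC", LinearSVC(max_iter=5000))]:
    pipe = make_pipeline(StandardScaler(), clf)  # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=5)   # 5-fold CV accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```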
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-95658 |
Date | January 2022 |
Creators | Torquato Rollin, Fellipe, Buenrostro-Leiter, Valeria |
Publisher | Luleå tekniska universitet, Institutionen för system- och rymdteknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |