
Brain Signal Analysis For Inner Speech Detection

Inner speech, or self-talk, is the process by which we talk to ourselves to think through problems or plan actions. Although inner speech is ubiquitous, its neural basis remains largely unknown. This thesis investigates the feasibility of using brain signal analysis to classify electroencephalography (EEG) recordings of participants engaged in Inner Speech tasks, using the dataset made publicly available by Nieto et al. (2021). We implement four machine learning models, report their results, and compare them under different evaluation protocols. The results are also compared with those obtained by Berg et al. (2021) on the same dataset. Two of the classical models we tried (SVC and LinearSVC) outperform even the results obtained with deep learning models. We further compare the results on Inner Speech with those on Pronounced Speech to assess the reusability of the proposed method; the consistent pattern observed across both conditions supports the method's quality and reusability.
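
The abstract names two classical scikit-learn classifiers, SVC and LinearSVC. As a minimal illustrative sketch only, and not the authors' actual pipeline, the snippet below shows how such classifiers could be trained on flattened EEG epochs; the array shapes, preprocessing, and synthetic data are assumptions standing in for the Nieto et al. (2021) recordings.

    # Minimal sketch (not the thesis code): support vector classification of EEG epochs.
    # Synthetic arrays stand in for the real dataset; shapes are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC, LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Placeholder data: 200 trials, 128 channels, 256 time samples, 4 classes.
    X = rng.standard_normal((200, 128, 256)).reshape(200, -1)  # flatten each epoch
    y = rng.integers(0, 4, size=200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    # Standardize features, then fit each classifier and report test accuracy.
    for name, clf in [("SVC", SVC(kernel="rbf", C=1.0)),
                      ("LinearSVC", LinearSVC(C=1.0, max_iter=5000))]:
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X_train, y_train)
        print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))

With real EEG data, the flattened epochs would be replaced by the preprocessed trials from the Nieto et al. (2021) dataset, and accuracy would be compared across the different evaluation protocols mentioned in the abstract.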

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-95658
Date January 2022
Creators Torquato Rollin, Fellipe, Buenrostro-Leiter, Valeria
Publisher Luleå tekniska universitet, Institutionen för system- och rymdteknik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess