Development of an EEG-based imagined speech recognition system for augmentation of patient-care communication
Implementing Organization
Visvesvaraya National Institute of Technology
Principal Investigator
Dr. Pradnya Hemant Ghare
Visvesvaraya National Institute of Technology
Co-Principal Investigator
Dr. Ashwin Kothari
Visvesvaraya National Institute of Technology
Co-Principal Investigator
Dr. Prathamesh Haridas Kamble
All India Institute of Medical Sciences
Project Overview
Human speech production is a complex process requiring precise coordination of multiple speech articulators. Imagined speech refers to imagining speaking without actually moving any articulator. The human brain generates electrical activity corresponding to specific tasks, including imagining words, and correctly decoding these signals, recorded using electroencephalography (EEG), can benefit patients with severe paralysis. Classification accuracy is the most important parameter for such a system. This proposal focuses on recognizing imagined words relevant to patient care, using machine learning or deep learning algorithms for classification. Most prior work has relied on classical machine learning algorithms, which require careful hyperparameter tuning to achieve good classification accuracy; this project therefore proposes evaluating both convolutional neural network (CNN) and classical machine learning models. Achieving the highest classification accuracy requires spatio-temporal features extracted from the EEG signals, so the project proposes a combined CNN-LSTM network for optimal feature extraction. Optimal channel selection is also crucial for decoding specific imagined words, and algorithms for selecting the most informative EEG channels will be developed. The results of this project will be useful for hospitals caring for patients with Parkinson's disease and paralysis.
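To illustrate the proposed architecture, the following is a minimal sketch of a CNN-LSTM classifier for imagined-speech EEG in PyTorch. All layer sizes, the channel count (14), the window length (256 samples), and the number of word classes (5) are illustrative assumptions, not specifics from the proposal; the idea is that the convolutional front end extracts spatial features across electrodes while the LSTM models their temporal evolution.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM for imagined-speech EEG classification.

    The 1-D CNN mixes information across EEG channels at each time step
    and downsamples along time; the LSTM then models the resulting
    feature sequence, and a linear layer predicts the imagined word.
    """

    def __init__(self, n_channels=14, n_classes=5, hidden=64):
        super().__init__()
        # Convolve across time, mixing all EEG channels (assumed 14 here)
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),  # temporal downsampling
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.cnn(x)             # (batch, 32, time // 4)
        feats = feats.permute(0, 2, 1)  # (batch, time // 4, 32) for the LSTM
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])   # classify from the last hidden state

model = CNNLSTM()
# 8 trials, 14 channels, 256 time samples (all hypothetical dimensions)
logits = model(torch.randn(8, 14, 256))
print(logits.shape)  # torch.Size([8, 5])
```

In a real system the spatial and temporal kernel sizes, the pooling factor, and the LSTM depth would be tuned on the recorded EEG dataset, and the input channels would be restricted to those chosen by the channel-selection algorithm.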