Version 2 2025-03-11, 03:50
Version 1 2025-02-21, 02:39
journal contribution
posted on 2025-03-11, 03:50, authored by Mehshan Ahmed Khan, Houshyar Asadi, Mohammad Reza Chalak Qazani, Adetokunbo Arogbonlo, Siamak Pedrammehr, Adnan Anwar, Hailing Zhou, Lei Wei, Asim Bhatti, Sam Oladazimi, Burhan Khan, Saeid Nahavandi
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method for monitoring functional brain activation by capturing changes in the concentrations of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR). Various machine learning classification techniques have been used to distinguish cognitive states from fNIRS signals. However, conventional machine learning methods, although simpler to implement, require a complex pre-processing phase before training and show reduced accuracy when that preprocessing is inadequate. Additionally, previous research on cognitive load assessment with fNIRS has predominantly focused on differentiating between only two levels of mental workload, typically classifying low versus high cognitive load or easy versus difficult tasks. To address these limitations, this paper conducts a comprehensive exploration of the impact of Long Short-Term Memory (LSTM) layers on the effectiveness of Convolutional Neural Networks (CNNs) within deep learning models, targeting the spatial feature overfitting and the lack of temporal modeling in CNNs reported in previous studies. By integrating LSTM layers, the model can capture temporal dependencies in the fNIRS data, allowing for a more comprehensive representation of cognitive states. The primary objective is to assess how incorporating LSTM layers enhances the performance of CNNs. The experimental results presented in this paper demonstrate that integrating LSTM layers with convolutional layers increases the accuracy of the deep learning models from 97.40% to 97.92%.
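As a rough illustration of the kind of hybrid architecture the abstract describes, the sketch below stacks an LSTM layer on top of 1-D convolutional layers for fNIRS time-series classification. It is a minimal example written with Keras; the window length, channel count, number of classes, and layer sizes are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical CNN-LSTM sketch for fNIRS cognitive-load classification.
# All shapes and hyperparameters below are assumed for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

N_TIMESTEPS = 128   # assumed samples per analysis window
N_CHANNELS = 40     # assumed fNIRS channels (HbO + HbR combined)
N_CLASSES = 3       # assumed number of cognitive-load levels

model = models.Sequential([
    # Convolutional layers extract local spatial/temporal features.
    layers.Conv1D(64, kernel_size=5, activation="relu",
                  input_shape=(N_TIMESTEPS, N_CHANNELS)),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # The LSTM layer models temporal dependencies across the feature sequence,
    # which plain CNN stacks do not capture.
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In this arrangement the pooled convolutional feature maps are passed to the LSTM as a sequence, so the recurrent layer summarizes how the extracted features evolve over the trial before the final softmax assigns a workload class.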