DEEP LEARNING BASED MULTIMODAL HUMAN ACTIVITY RECOGNITION FOR PERSONALIZED HEALTHCARE
In the evolving landscape of healthcare, continuous patient monitoring has shifted from manual oversight to intelligent automation powered by IoT devices and deep learning models. This project presents a robust system for recognizing human activities in a healthcare setting using multimodal IoT sensor data from accelerometers and gyroscopes. The proposed system integrates a hybrid deep learning architecture that combines Random Forest for feature selection, a Gated Recurrent Unit (GRU) for temporal analysis, and an Attention Mechanism (AM) for focusing on critical features. The system processes the KUHAR dataset, training the hybrid ELM-GRUAM model on 80% of the data and testing on the remaining 20%. Experimental results show that the proposed model outperforms traditional models such as Random Forest. Performance metrics, including precision, recall, F1-score, and confusion matrices, confirm the model's reliability. A web-based interface supports user registration, login, dataset processing, model training, and activity recognition. End users can upload test data and receive real-time activity predictions, making the system practical for real-world personal healthcare applications.
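The abstract does not include implementation details, so the following is a minimal sketch of the described pipeline: Random Forest feature selection, an 80/20 train/test split, a GRU with attention-style pooling, and the reported metrics. The synthetic data shapes, channel count, number of classes, hyperparameters, and the scikit-learn/Keras stack are all illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): Random Forest feature
# ranking, then a GRU + attention classifier on windowed sensor data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

# Assumed shapes: X is (n_windows, timesteps, channels) from windowed
# accelerometer + gyroscope streams; y holds integer activity labels.
# Synthetic placeholders stand in for real KUHAR preprocessing.
n, timesteps, channels, n_classes = 1000, 128, 6, 18
rng = np.random.default_rng(0)
X = rng.normal(size=(n, timesteps, channels)).astype("float32")
y = rng.integers(0, n_classes, size=n)

# 1) Random Forest feature selection: rank sensor channels by
#    importance on crude per-window summary features, keep the top k.
summary = X.mean(axis=1)                                # (n, channels)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(summary, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:4]
X_sel = X[:, :, top_k]

# 2) 80/20 train/test split, as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y, test_size=0.2, random_state=0, stratify=y)

# 3) GRU encoder followed by a simple attention pooling over time.
inp = layers.Input(shape=(timesteps, len(top_k)))
h = layers.GRU(64, return_sequences=True)(inp)          # temporal features
scores = layers.Dense(1, activation="tanh")(h)          # (batch, t, 1)
weights = layers.Softmax(axis=1)(scores)                # attention over time
context = layers.Lambda(
    lambda z: tf.reduce_sum(z[0] * z[1], axis=1))([h, weights])
out = layers.Dense(n_classes, activation="softmax")(context)
model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=5, batch_size=64,
          validation_split=0.1, verbose=0)

# 4) Precision, recall, F1-score, and confusion matrix, as reported.
y_pred = model.predict(X_te, verbose=0).argmax(axis=1)
print(classification_report(y_te, y_pred, zero_division=0))
print(confusion_matrix(y_te, y_pred))
```

On real data, the summary statistics in step 1 and the window length, hidden size, and epoch count would need tuning; the attention pooling here is one common formulation and may differ from the paper's AM.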