St. Mary's University Institutional Repository

Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/7871
Title: Emotion Recognition from Facial Expression Using Convolutional Neural Network
Authors: Yalew, Behaylu
Keywords: Emotion Recognition; Facial Expression; Digital Image Processing; Convolutional Neural Network
Issue Date: Jan-2024
Publisher: St. Mary's University
Abstract: Facial expressions are a fundamental component of human communication. Recognizing emotions conveyed through facial expressions helps us understand others' feelings, intentions, and social cues, facilitating effective interaction and empathy. This paper uses the FER-2013 dataset to provide an extensive analysis of facial emotion recognition. The study's primary objective was to select a suitable model for facial emotion detection using transfer learning techniques, with the evaluation focused on the accuracy of the models employed. Specifically, to gauge whether interpolation yields improved outcomes, the researchers conducted an experimental analysis of the interpolation technique's effectiveness in upscaling lower-resolution images. By systematically analyzing the impact of interpolation on both image quality and model performance, the study provides empirical evidence on the efficacy of this technique in enhancing model accuracy in image processing tasks, particularly facial emotion detection. To classify seven distinct emotions, the study tested three alternative convolutional neural network architectures: VGG16, ResNet50, and Inception V3. Precision, recall, and F1-score metrics were used to illustrate the models' performance; with interpolation to a 48x48 size, the study obtained a maximum recall of 23 percent across all models examined. This study offers valuable insights into the efficacy of various pre-trained CNN architectures and interpolation methods in the domain of facial emotion detection. By assessing these models, it informs the selection of suitable architectures and interpolation sizes for emotion detection tasks, guides practitioners in choosing optimal models for their applications, and serves as a catalyst for further research aimed at refining and advancing facial emotion detection techniques.
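
The pipeline summarized in the abstract (interpolating face images to a fixed size, then classifying seven emotions with a pre-trained CNN backbone via transfer learning) can be illustrated with a minimal sketch. The sketch below assumes TensorFlow/Keras with ImageNet weights for VGG16; the bilinear interpolation method, the added classification head, and placeholder names such as train_ds are illustrative assumptions and are not taken from the thesis.

    # Hypothetical sketch, not the author's implementation: interpolate low-resolution
    # grayscale face images to 48x48 (the size reported in the abstract) and classify
    # seven emotions with a frozen, ImageNet-pretrained VGG16 backbone.
    import tensorflow as tf

    NUM_CLASSES = 7          # angry, disgust, fear, happy, sad, surprise, neutral
    TARGET_SIZE = (48, 48)   # interpolation target reported in the abstract

    def preprocess(gray_image):
        """Bilinear-interpolate a grayscale face image and replicate it to 3 channels."""
        x = tf.image.resize(gray_image, TARGET_SIZE, method="bilinear")
        x = tf.image.grayscale_to_rgb(x)      # VGG16 expects 3-channel input
        return tf.keras.applications.vgg16.preprocess_input(x)

    # Frozen pre-trained backbone with a small classification head (transfer learning).
    backbone = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(*TARGET_SIZE, 3))
    backbone.trainable = False

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # placeholder datasets

The same structure applies to the ResNet50 and Inception V3 variants mentioned in the abstract by swapping the backbone and its matching preprocess_input function; precision, recall, and F1-score would then be computed on the held-out predictions.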
URI: http://hdl.handle.net/123456789/7871
Appears in Collections: Master of Computer Science

Files in This Item:
File: 2. Behaylu Yalew.pdf
Size: 2.21 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.