Doctor of Philosophy (PhD)
Engineering Science (Interdepartmental Program)
Affect (emotion) recognition has many applications, such as human assistive robotics, human-computer interaction and empathic agents, virtual tutoring, marketing, surveillance, and counseling. Previous research has focused primarily on unimodal or bimodal affect recognition (facial expressions and speech). This research developed multimodal emotion recognition using data from facial expressions, head position, hand movement, body posture, and speech. A novel hybrid event-driven fusion technique was used to combine data from multiple input channels at the feature level and the decision level. Position and temporal data from tracked feature points were used to train a support vector machine (SVM) based classifier. New rule-based features were created in addition to existing geometric, kinetic, and 3D features. An emotional keyword lookup using speech recognition technology was incorporated into the recognition process. The research developed a real-time affect estimation system that accurately predicts multiple emotions and their intensities, and maintains a context history of recognized emotions.
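The decision-level side of the fusion described above can be illustrated with a minimal sketch: one SVM per input channel, with per-modality class probabilities combined by a weighted average. This is not the dissertation's implementation; the modality names, synthetic features, and fusion weights are hypothetical, and scikit-learn's `SVC` stands in for the SVM classifier.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "sad", "angry"]  # illustrative label set

def make_modality_data(n=90, dim=6):
    # Synthetic per-modality feature vectors (stand-ins for tracked
    # feature-point positions); classes are shifted apart to be separable.
    X = rng.normal(size=(n, dim))
    y = np.arange(n) % len(EMOTIONS)
    return X + y[:, None], y

modalities = {"face": make_modality_data(), "posture": make_modality_data()}

# Train one probability-calibrated SVM per input channel.
classifiers = {
    name: SVC(probability=True, random_state=0).fit(X, y)
    for name, (X, y) in modalities.items()
}

def fuse(samples, weights={"face": 0.6, "posture": 0.4}):
    # Decision-level fusion: weighted average of per-modality class
    # probabilities, then argmax over the fused distribution.
    probs = sum(weights[m] * classifiers[m].predict_proba(samples[m])
                for m in classifiers)
    return [EMOTIONS[i] for i in probs.argmax(axis=1)]

sample = {m: X[:1] for m, (X, _) in modalities.items()}
print(fuse(sample))
```

Feature-level fusion, by contrast, would concatenate the per-modality vectors before training a single classifier; the hybrid scheme in the abstract combines both.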
Document Availability at the Time of Submission
The student has submitted the appropriate documentation to restrict access to LSU for 365 days, after which the document will be released for worldwide access.
Patwardhan, Amol Sriniwas, "Multimodal Affect Recognition Using Facial Expression, Body Posture and Speech Input" (2016). LSU Doctoral Dissertations. 4243.