Speech Emotion Recognition for Affective Human Robot Interaction full report
Post: #1

Speech Emotion Recognition for Affective Human Robot Interaction

We evaluate the performance of a speech emotion recognition method for affective human-robot interaction. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. After applying noise reduction and speech detection, we obtain a feature vector for an utterance from statistics of phonetic and prosodic information. The phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; the prosodic information includes pitch, jitter, and rate of speech. A pattern classifier based on Gaussian-kernel support vector machines then decides the emotion class of the utterance. To simulate a human-robot interaction situation, we record speech commands and dialogs uttered 2 m away from the microphone. Experimental results show that the proposed method achieves a classification accuracy of 58.6%, compared with 60.4% for human listeners, when the reference labels are given by the speakers' intention. With reference labels given by the listeners' majority decision, the proposed method achieves a classification accuracy of 51.2%.
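The abstract describes collapsing frame-level phonetic and prosodic measurements into one fixed-length utterance vector via statistics. A minimal sketch of that step, assuming the frame features are already extracted and using mean/std/min/max as the statistics (the paper does not list which statistics it uses):

```python
import numpy as np

def utterance_features(frame_feats):
    """Collapse per-frame base features (e.g. pitch, log energy,
    formant frequencies, Teager energy) into a single fixed-length
    utterance-level vector of simple statistics.

    frame_feats: array of shape (n_frames, n_base_features)
    returns:     1-D vector of length 4 * n_base_features
    """
    stats = [
        frame_feats.mean(axis=0),  # mean of each base feature over frames
        frame_feats.std(axis=0),   # standard deviation
        frame_feats.min(axis=0),   # minimum
        frame_feats.max(axis=0),   # maximum
    ]
    return np.concatenate(stats)

# toy example: 100 frames, 5 base features -> 20-dimensional vector
x = utterance_features(np.random.rand(100, 5))
```

The resulting vector has the same length regardless of utterance duration, which is what allows a single SVM to classify variable-length speech.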

Presented By:
Kwang-Dong Jang and Oh-Wook Kwon, Department of Control and Instrumentation Engineering, Chungbuk National University, Korea, {kdjang,owkwon}[at]

1. Introduction
A human conveys emotion as well as linguistic information via speech signals. The emotion in speech makes verbal communication natural, emphasizes a speaker's intention, and reveals one's psychological state. Recently there has been much research on affective human-robot interaction with humanoid robots by recognizing the emotion expressed through facial images and speech. In particular, speech emotion recognition requires less hardware and lower computational complexity than facial emotion recognition. A speech emotion recognizer can be used in an interactive intelligent robot that responds appropriately to a user's command according to the user's emotional state. It can also be embedded in a music player that suggests a music list suited to the user's emotional state.

Emotion can be recognized by using acoustic information and/or linguistic information. Emotion recognition from linguistic information is done by spotting exclamatory words in input utterances and thus cannot be used when no exclamatory words are present. Acoustic information extracted from speech signals, however, is more flexible for emotion recognition than linguistic information because it does not require a speech recognition system to spot exclamatory words and can be extended to any other language.

Among the many features suggested for speech emotion recognition, we select the following acoustic information: pitch, energy, formants, tempo, duration, jitter, shimmer, mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, and Teager energy. A pattern classifier based on support vector machines (SVM) classifies the emotion by using the feature vector obtained from statistics of the acoustic information. We compare the performance of automatic emotion recognition when the reference labels are given by speakers and by human listeners.
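The classifier described above, a Gaussian (RBF) kernel SVM over utterance-level feature vectors with six emotion classes, can be sketched with scikit-learn. This is an illustrative stand-in, not the authors' implementation: the synthetic data, feature dimensionality, and hyperparameters (`C`, `gamma`) are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

# Synthetic stand-in for utterance-level feature vectors
# (120 utterances, 20-dimensional features).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = rng.integers(0, len(EMOTIONS), size=120)

# RBF kernel = the "Gaussian" SVM of the paper; scikit-learn handles
# the 6-class problem with one-vs-one decomposition internally.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)

# Classify one utterance and map the class index back to a label.
pred = EMOTIONS[clf.predict(X[:1])[0]]
```

Feature scaling before the RBF kernel matters in practice, since pitch (Hz) and log energy live on very different numeric ranges.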
This paper is organized as follows: Section 2 explains the base features extracted from speech and the pattern classifier. Section 3 describes the experimental results when the reference labels are supplied by human listeners and speakers. Section 4 concludes the paper.

for full report please see


