Volume 13 | Issue 2
Automatic emotion detection from human speech is becoming more prevalent because it improves interaction between humans and machines. A range of temporal and spectral properties can be extracted from speech, including pitch-related characteristics, Mel Frequency Cepstral Coefficients (MFCCs), and speech formants, and these features can be classified using different techniques. This study examines statistical features such as MFCCs, which were classified using Linear Discriminant Analysis (LDA). A database of acted emotional Marathi speech is also described in this article. The data samples were taken from male and female Marathi speakers who simulated the target emotions while producing utterances suitable for everyday communication, with each utterance recorded in all of the considered emotions. The samples were classified into three fundamental categories: happy, sad, and angry. The training and testing accuracies are 98% and 82% for MFCC, and 85% and 82% for LPC, respectively.
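The classification stage described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's actual pipeline: the feature vectors below are synthetic stand-ins for the 13-dimensional MFCC vectors, and the class means, sample counts, and random seed are assumptions for demonstration only. In practice the features would be extracted from the Marathi speech recordings (e.g. with `librosa.feature.mfcc`).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for MFCC feature vectors: three emotion classes
# (happy, sad, angry), 60 samples each, 13 coefficients per sample.
# Real features would come from the recorded Marathi utterances.
rng = np.random.default_rng(0)
n_per_class, n_feats = 60, 13
X = np.vstack([
    rng.normal(loc=mu, scale=1.0, size=(n_per_class, n_feats))
    for mu in (0.0, 2.0, 4.0)  # assumed, well-separated class means
])
y = np.repeat(["happy", "sad", "angry"], n_per_class)

# Fit LDA and report training accuracy, mirroring the study's
# classify-then-evaluate step.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
```

On held-out data one would call `lda.predict` on unseen feature vectors and compare against the true emotion labels to obtain the testing accuracy.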