
You are searching for the phrase "emotion detection" by the criterion: Subject


Displaying 1-7 of 7
Title:
Recognition of Human Emotion from a Speech Signal Based on Plutchik's Model
Authors:
Kamińska, D.
Pelikant, A.
Subjects:
emotion detection
Plutchik's wheel of emotion
speech signal
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Links:
https://bibliotekanauki.pl/articles/227272.pdf
Description:
Machine recognition of human emotional states is an essential part of improving man-machine interaction. During expressive speech, the voice conveys the semantic message as well as information about the emotional state of the speaker. The pitch contour is one of the most significant properties of speech affected by the emotional state, so pitch features have commonly been used in systems for automatic emotion detection. In this work, different intensities of emotions and their influence on pitch features have been studied, an understanding that is important for developing such a system. Intensities of emotions are represented on Plutchik's cone-shaped 3D model. The k-nearest neighbor algorithm has been used for classification, which is divided into two stages: first the primary emotion is detected, then its intensity is specified. The results show that the recognition accuracy of the system is over 50% for primary emotions and over 70% for their intensities.
Content provider:
Biblioteka Nauki
Article
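The two-stage classification described in the abstract above can be illustrated with a small sketch. The following Python snippet is not the authors' code: it assumes scikit-learn, uses randomly generated placeholder pitch features, and simply chains a primary-emotion k-NN classifier with a per-emotion intensity k-NN classifier.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder pitch-feature vectors (e.g. F0 mean, range, slope) and labels.
X = rng.normal(size=(300, 3))
primary = rng.integers(0, 4, size=300)      # e.g. 0=joy, 1=sadness, 2=fear, 3=anger
intensity = rng.integers(0, 3, size=300)    # e.g. 0=mild, 1=basic, 2=intense

# Stage 1: classify the primary emotion.
stage1 = KNeighborsClassifier(n_neighbors=5).fit(X, primary)

# Stage 2: one intensity classifier per primary emotion.
stage2 = {
    e: KNeighborsClassifier(n_neighbors=5).fit(X[primary == e], intensity[primary == e])
    for e in np.unique(primary)
}

def predict(x):
    """Return (primary emotion, intensity) for one feature vector."""
    e = stage1.predict(x.reshape(1, -1))[0]
    return e, stage2[e].predict(x.reshape(1, -1))[0]

print(predict(X[0]))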
Title:
Personality-based affective adaptation methods for intelligent systems
Authors:
Kutt, Krzysztof
Nalepa, Grzegorz
Bobek, Szymon
Drążyk, Dominika
Description:
In this article, we propose using personality assessment as a way to adapt affective intelligent systems. This psychologically grounded mechanism divides users into groups that differ in their reactions to affective stimuli, for which the behaviour of the system can be adjusted. To verify the hypotheses, we conducted an experiment on 206 people, which consisted of two proof-of-concept demonstrations: a “classical” stimuli-presentation part, and affective games that provide a rich and controllable environment for complex emotional stimuli. Several significant links between personality traits and the psychophysiological signals (electrocardiogram (ECG), galvanic skin response (GSR)) gathered with the BITalino (r)evolution kit platform, as well as between personality traits and reactions to the complex stimulus environment, are promising results that indicate the potential of the proposed adaptation mechanism.
Content provider:
Repozytorium Uniwersytetu Jagiellońskiego
Article
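As a purely illustrative sketch of the adaptation idea described above (not taken from the article), the following Python snippet groups users by a single personality-trait score and adjusts a stimulus parameter per group; the trait, threshold, and intensity values are invented.

from dataclasses import dataclass

@dataclass
class UserProfile:
    neuroticism: float   # assumed to be a normalized 0..1 questionnaire score

def assign_group(profile: UserProfile) -> str:
    """Split users into coarse reactivity groups based on one trait score."""
    return "high-reactivity" if profile.neuroticism >= 0.5 else "low-reactivity"

def stimulus_intensity(group: str) -> float:
    """Pick a milder stimulus level for users expected to react more strongly."""
    return 0.4 if group == "high-reactivity" else 0.8

user = UserProfile(neuroticism=0.7)
group = assign_group(user)
print(group, stimulus_intensity(group))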
Title:
Hierarchical Bi-LSTM based emotion analysis of textual data
Authors:
Mahto, Dashrath
Yadav, Subhash Chandra
Subjects:
emotion analysis
machine learning
emotion detection
deep learning
hierarchical Bi-LSTM
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Links:
https://bibliotekanauki.pl/articles/2173676.pdf
Description:
Nowadays, Twitter is one of the most popular microblogging sites, generating a massive amount of textual data. Such textual data carries human feelings and opinions about related events in the form of tweets, posts, and status updates. It is difficult to identify and classify emotions from tweets because of their restricted length and data diversity. Emotion analysis addresses this by identifying and classifying different emotions in text data generated on social media platforms. The present work proposes an efficient classification and prediction technique for analyzing different emotions in textual data collected from Twitter. The proposed research presents an enhanced deep neural network (EDNN) based hierarchical Bi-LSTM model for emotion analysis of textual data that classifies six emotions: sadness, love, joy, surprise, fear, and anger. Furthermore, the emotion analysis results obtained with the proposed hierarchical Bi-LSTM model are compared with and validated against the traditional hybrid CNN-LSTM approach in terms of accuracy, recall, precision, and F1-score. The results show that the proposed hierarchical Bi-LSTM achieves an average accuracy of 89% for emotion analysis, whereas the existing CNN-LSTM model achieved an overall accuracy of 75%. This shows that the proposed hierarchical Bi-LSTM approach achieves the desired performance compared to the CNN-LSTM model.
Content provider:
Biblioteka Nauki
Article
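To make the model family concrete, here is a minimal sketch of a stacked ("hierarchical") Bi-LSTM text classifier for the six emotions listed in the abstract above, assuming Keras/TensorFlow; the vocabulary size, sequence length, and layer widths are illustrative and not the EDNN configuration reported by the authors.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10_000   # assumed tokenizer vocabulary size
MAX_LEN = 50          # assumed maximum tweet length in tokens
EMOTIONS = ["sadness", "love", "joy", "surprise", "fear", "anger"]

inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)                         # token embeddings
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)   # lower Bi-LSTM layer
x = layers.Bidirectional(layers.LSTM(32))(x)                          # upper Bi-LSTM layer
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(len(EMOTIONS), activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()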
Title:
Music Playlist Generation using Facial Expression Analysis and Task Extraction
Authors:
Sen, A.
Popat, D.
Shah, H.
Kuwor, P.
Johri, E.
Subjects:
facial expression analysis
emotion recognition
feature extraction
Viola-Jones face detection
Gabor filter
AdaBoost
k-NN algorithm
task extraction
music classification
playlist generation
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Links:
https://bibliotekanauki.pl/articles/908868.pdf
Description:
In the day-to-day stressful environment of the IT industry, working professionals rarely find appropriate time to relax. To keep a person stress-free, various technical and non-technical stress-relieving methods are now being adopted. People working on computers can be categorized as administrators, programmers, etc., each of whom requires different ways to unwind. Work pressure and vexation of any kind are reflected in a person's emotions, and facial expressions are the key to analyzing the person's current psychological state. In this paper, we discuss a user-intuitive smart music player. This player captures the facial expressions of a person working on the computer and identifies their current emotion, and music is then played to help the user relax. The music player also takes into account the foreground processes the person is executing on the computer. Since various sorts of music are available to boost one's enthusiasm, an ideal playlist of songs is created and played for the person, taking into consideration the tasks executed on the system and the emotions they currently carry. The person can browse the playlist and modify it, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.
Content provider:
Biblioteka Nauki
Article
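A hypothetical sketch of the pipeline described in the abstract above, assuming OpenCV: a Viola-Jones (Haar cascade) detector finds the face, a stand-in function takes the place of the Gabor/AdaBoost/k-NN emotion classifier, and the playlist is chosen from the detected emotion and the foreground task. The mapping and function names are invented for illustration.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

PLAYLISTS = {  # illustrative (emotion, task) -> playlist mapping
    ("sad", "coding"): "calm instrumental",
    ("happy", "coding"): "upbeat electronic",
    ("sad", "email"): "soft acoustic",
}

def classify_emotion(face_gray):
    """Placeholder for the Gabor-filter + AdaBoost/k-NN emotion classifier."""
    return "happy"

def pick_playlist(frame_bgr, foreground_task):
    """Detect the face, classify its emotion, and map (emotion, task) to a playlist."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    emotion = classify_emotion(gray[y:y + h, x:x + w])
    return PLAYLISTS.get((emotion, foreground_task), "neutral mix")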
Title:
A deep learning based hybrid model for maternal health risk detection and multifaceted emotion analysis in social networks
Authors:
Geethanjali, R.
Valarmathi, A.
Subjects:
multifaceted emotion analysis
social network
maternal health
risk factor detection
deep learning
hybrid approach
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Links:
https://bibliotekanauki.pl/articles/59123769.pdf
Description:
In the field of public health, accurately identifying maternal health risks from social network data is both vital and challenging because of the complexities of multimodal sentiment analysis. Our study addresses this challenge by introducing maternal health risk factor detection using a deep learning approach (MHRFD-DLA), a novel framework that integrates convolutional neural networks, long short-term memory networks, and attention mechanisms. This approach enhances sentiment analysis and risk detection in maternal health, with a focus on critical areas such as prenatal care, mental health, and nutrition. MHRFD-DLA utilizes multimodal data, including text and electrocardiogram (ECG) signals, offering a comprehensive assessment of maternal health risks. Our model outperforms existing multimodal sentiment analysis models, achieving an accuracy of 98.4%, a precision of 97.6%, a recall of 95.6%, and an F1 score of 98.4%. Performance evaluations and visualizations such as the confusion matrix and class distributions further validate its robustness. The MHRFD-DLA model not only bridges significant gaps in current methodologies but also sets a new benchmark for maternal health surveillance and intervention, demonstrating its practicality and effectiveness in real-world applications.
Content provider:
Biblioteka Nauki
Article
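The text branch of such a CNN + LSTM + attention hybrid can be sketched as follows, assuming Keras; this is not the MHRFD-DLA architecture itself (the ECG branch and the reported configuration are omitted), and all sizes and class counts are illustrative.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20_000, 100, 3   # assumed sizes and risk classes

inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)   # local n-gram features
x = layers.LSTM(64, return_sequences=True)(x)                    # sequence modelling

# Simple additive attention pooling over time steps.
scores = layers.Dense(1)(x)                   # (batch, time, 1) unnormalized scores
weights = layers.Softmax(axis=1)(scores)      # attention weights over time
context = layers.Dot(axes=1)([weights, x])    # weighted sum -> (batch, 1, 64)
context = layers.Flatten()(context)

outputs = layers.Dense(NUM_CLASSES, activation="softmax")(context)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])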
Title:
Speech emotion recognition using wavelet packet reconstruction with attention-based deep recurrent neural networks
Authors:
Meng, Hao
Yan, Tianhao
Wei, Hongwei
Ji, Xun
Subjects:
speech emotion recognition
voice activity detection
wavelet packet reconstruction
feature extraction
LSTM networks
attention mechanism
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Links:
https://bibliotekanauki.pl/articles/2173587.pdf
Description:
Speech emotion recognition (SER) is a complicated and challenging task in human-computer interaction because it is difficult to find a feature set that fully discriminates emotional states. The FFT is commonly used to process the raw signal when extracting low-level descriptor features, such as short-time energy, fundamental frequency, formants, and MFCCs (mel-frequency cepstral coefficients). However, these features are built in the frequency domain and ignore information from the temporal domain. In this paper, we propose a novel framework that combines a multi-layer wavelet sequence set obtained from wavelet packet reconstruction (WPR) with a conventional feature set into a mixed feature set for emotion recognition with attention-based recurrent neural networks (RNN). In addition, silent frames have a detrimental effect on SER, so we adopt autocorrelation-based voice activity detection to eliminate emotionally irrelevant frames. We show that the proposed algorithm significantly outperforms the traditional feature set in predicting spontaneous emotional states on the IEMOCAP corpus and the EMODB database, and achieves better classification in both speaker-independent and speaker-dependent experiments. Notably, we obtain accuracies of 62.52% and 77.57% in the speaker-independent (SI) setting, and 66.90% and 82.26% in the speaker-dependent (SD) setting.
Content provider:
Biblioteka Nauki
Article
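Two of the building blocks named in the abstract above can be sketched in Python, assuming the PyWavelets and NumPy packages: reconstructing one time-domain signal per wavelet-packet band (WPR) and a crude autocorrelation-based voice activity check. The wavelet, decomposition level, and threshold are illustrative, and the attention-based LSTM classifier is not shown.

import numpy as np
import pywt

def wpr_bands(signal, wavelet="db4", level=3):
    """Reconstruct one time-domain signal per wavelet-packet node at the given level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    bands = []
    for node in wp.get_level(level, order="freq"):
        single = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric",
                                    maxlevel=level)
        single[node.path] = node.data            # keep only this band's coefficients
        bands.append(single.reconstruct(update=False)[: len(signal)])
    return np.array(bands)                       # shape: (2**level, len(signal))

def is_voiced(frame, min_lag=20, threshold=0.3):
    """Flag a frame as voiced if its normalized autocorrelation has a strong peak
    beyond min_lag (a crude stand-in for the VAD step described in the abstract)."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if acf[0] <= 0:
        return False
    acf /= acf[0]
    return acf[min_lag:].max() > threshold

# Tiny demo on a synthetic 100 Hz tone sampled at 16 kHz.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 100 * t)
print(wpr_bands(tone).shape, is_voiced(tone[:400]))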
Title:
Speech emotion recognition using wavelet packet reconstruction with attention-based deep recurrent neural networks
Authors:
Meng, Hao
Yan, Tianhao
Wei, Hongwei
Ji, Xun
Subjects:
speech emotion recognition
voice activity detection
wavelet packet reconstruction
feature extraction
LSTM networks
attention mechanism
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Links:
https://bibliotekanauki.pl/articles/2090711.pdf
Description:
Speech emotion recognition (SER) is a complicated and challenging task in human-computer interaction because it is difficult to find a feature set that fully discriminates emotional states. The FFT is commonly used to process the raw signal when extracting low-level descriptor features, such as short-time energy, fundamental frequency, formants, and MFCCs (mel-frequency cepstral coefficients). However, these features are built in the frequency domain and ignore information from the temporal domain. In this paper, we propose a novel framework that combines a multi-layer wavelet sequence set obtained from wavelet packet reconstruction (WPR) with a conventional feature set into a mixed feature set for emotion recognition with attention-based recurrent neural networks (RNN). In addition, silent frames have a detrimental effect on SER, so we adopt autocorrelation-based voice activity detection to eliminate emotionally irrelevant frames. We show that the proposed algorithm significantly outperforms the traditional feature set in predicting spontaneous emotional states on the IEMOCAP corpus and the EMODB database, and achieves better classification in both speaker-independent and speaker-dependent experiments. Notably, we obtain accuracies of 62.52% and 77.57% in the speaker-independent (SI) setting, and 66.90% and 82.26% in the speaker-dependent (SD) setting.
Content provider:
Biblioteka Nauki
Article
Displaying 1-7 of 7
