Srivastava, Harshvardhan
Publications
MMER: Multimodal Multi-task learning for Emotion Recognition in Spoken Utterances
We propose a multimodal multi-task learning approach for emotion recognition from individual utterances in isolation. Experiments on the IEMOCAP benchmark show that our model outperforms our re-implementation of the state of the art, as well as all other unimodal and multimodal approaches in the literature. A schematic sketch of the setup follows this entry.
Harshvardhan Srivastava, Sreyan Ghosh, S. Umesh
Preprint
Code
A Discourse Aware Sequence Learning Approach for Emotion Recognition in Conversations
We propose a novel architecture built around a modified LSTM cell, which we call DiscLSTM, that better captures the interaction between conversational and sequential context. DiscLSTM brings together the best of both worlds and models the information flow between individual utterances more intuitively and efficiently, capturing long-distance conversational background through discourse relations and sequential context through recurrence. An illustrative sketch follows this entry.
Sreyan Ghosh, Harshvardhan Srivastava, S. Umesh
Preprint
Code
Misogynistic Meme Detection using Early Fusion Model with Graph Network
We propose an early fusion model for detecting misogynistic memes and identifying their type, developed for our participation in SemEval-2022 Task 5. An illustrative sketch of the fusion step follows this entry.
Harshvardhan Srivastava
Preprint
Code
Zero Shot Crosslingual Eye-Tracking Data Prediction using Multilingual Transformer Models
Our model combines text representations from transformers with hand-engineered features and places a regression layer on top to predict the mean and standard deviation of two main eye-tracking features. A minimal sketch of this pipeline follows this entry.
Harshvardhan Srivastava
Preprint
Code
Poster
Slides
Video
Causality Detection using Sentence Embeddings in Financial Reports
This work addresses the first sub-task, identifying causality in sentences. The models used in our experiments each obtain a latent-space representation for the sentences, on which the causality decision is made. A minimal sketch follows this entry.
Harshvardhan Srivastava, Arka Mitra, Yugam Tiwari
Preprint
PDF
Code