
We intend to experiment with datasets that provide the requisite, compatible data for commonsense question answering (QA) and for sequential knowledge of events.

Recently, the MELD dataset was released, which has accelerated research on conversational systems involving emotion recognition.

Research Domain Criteria (RDoC) is a framework that integrates multi-dimensional information for a better understanding of mental disorders.

In our work, we address the task of regressing funniness and predicting the funnier edited headline by leveraging recently proposed, powerful language models (LMs) and humor-heuristic features. We also explored various other approaches involving classical methods, other neural architectures, and the incorporation of different linguistic features. In this work, we propose a novel learning technique called Learning from Description (LDES) and analyze our approach for the case of zero-shot text classification (ZS-TC). Unstructured text contains valuable information, but retrieving elements of interest from it requires NLP techniques suited to processing such text.
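As an illustration of the headline-funniness regression mentioned at the start of the passage above, the following is a minimal sketch that fits a linear regressor on a few hand-crafted humor-heuristic features. The feature set, the toy headline pairs, and the funniness grades are illustrative assumptions, not the submitted system; in practice, LM-derived embeddings would be concatenated to these features.

# Minimal sketch (not the authors' system): regressing the funniness of an
# edited headline from simple heuristic features with a linear model.
import numpy as np
from sklearn.linear_model import Ridge

def heuristic_features(original: str, edited: str) -> np.ndarray:
    """Toy humor heuristics: length change, lexical overlap, whether an edit happened."""
    orig_words, edit_words = original.lower().split(), edited.lower().split()
    overlap = len(set(orig_words) & set(edit_words)) / max(len(set(orig_words)), 1)
    return np.array([
        abs(len(orig_words) - len(edit_words)),  # change in headline length
        overlap,                                 # lexical overlap with the original
        float(edited != original),               # was anything actually edited
    ])

# Hypothetical (original headline, edited headline, funniness grade) examples.
pairs = [("stocks fall on trade fears", "stocks fall on cheese fears", 2.1),
         ("senate passes budget bill", "senate passes nap bill", 1.4)]
X = np.stack([heuristic_features(o, e) for o, e, _ in pairs])
y = np.array([g for _, _, g in pairs])

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X))  # predicted funniness; LM features would be added to X in practice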

Such an emotion→cause extraction pipeline disregards the inherent dependence between emotions and causes while also limiting the applicability of the model.

Second, we identify the most crucial reason why a statement does not make sense.

Our ranks for sub-task A were 19 out of 37 for Greek, 22 out of 46 for Turkish, 26 out of 39 for Danish, 39 out of 53 for Arabic, and 20 out of 85 for English.

We propose an end-to-end model that takes the text as input and, for each word, gives the probability that the word should be emphasized. Third, we generate novel reasons explaining the against-common-sense statement. We leverage techniques from Natural Language Processing (NLP) and Computer Vision (CV) for the sentiment classification of internet memes (Subtask A). We propose to incorporate emotion as a prior for probabilistic state-of-the-art sentence generation models such as GPT-2 and BERT. Further, our approach is orthogonal to existing meta-learning (Vilalta and Drissi, 2002) based techniques; therefore, one can use our method in conjunction with them. Our work is relevant to any task concerned with combining different modalities. Our results show that transformer-based models are particularly effective in this task. Most modern applications build their knowledge base using knowledge graphs (KGs) and derive hidden insights from it. For Subtask 3, word embeddings from BERT were passed to the classifier alongside the already existing relations.
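The per-word emphasis model described at the start of the passage above can be pictured with the following minimal sketch: a small PyTorch tagger that outputs an emphasis probability for every word. The BiLSTM encoder, vocabulary size, and dimensions are assumptions for illustration, not the actual architecture.

# Minimal sketch, not the submitted system: an end-to-end tagger that maps
# each word to an emphasis probability.
import torch
import torch.nn as nn

class EmphasisTagger(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one emphasis logit per word

    def forward(self, word_ids):  # word_ids: (batch, seq_len)
        hidden_states, _ = self.encoder(self.embed(word_ids))
        return torch.sigmoid(self.head(hidden_states)).squeeze(-1)  # per-word P(emphasis)

tagger = EmphasisTagger()
probs = tagger(torch.randint(0, 10000, (1, 6)))  # emphasis probability for 6 words
print(probs.shape)  # torch.Size([1, 6])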

We ranked 4th and 9th on the overall leaderboard. In the second step, we extract the key adjectives from the retrieved corpus using adjective clustering. In this project, we generate sarcastic remarks on a topic given by the user as input. In this project, we develop a system addressing the research problem posed in SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media. We consider bimodal (text and image) as well as unimodal (text-only) techniques in our study, ranging from the Naïve Bayes classifier to Transformer-based approaches. In this project, we develop a system addressing the research problem posed in SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles, for each of the two subtasks of Span Identification and Technique Classification. We consider the problem of devising an adversarial attack scheme that can be applied to any general NLP model. Our technique achieved F1 scores of 85.43 and 35.2 for Subtask 1 and Subtask 2, respectively. State-of-the-art NLP models fail at simple additions and deletions of characters in the input sentences, calling for a need to defend against such attacks. Second, we apply a transformer-based model such as BERTSUMABS to the Inshorts news articles.
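To make the character-level attacks discussed above concrete, here is a minimal sketch of a generic perturbation generator that randomly inserts or deletes characters in input words. The perturbation rate and the sampling scheme are illustrative assumptions, not the attack scheme actually devised in the project.

# Minimal sketch of character-level perturbations (random insertions/deletions)
# of the kind that state-of-the-art NLP models are brittle against.
import random
import string

def perturb(sentence: str, p: float = 0.15, seed: int = 0) -> str:
    """Randomly insert or delete one character in roughly a fraction p of the words."""
    rng = random.Random(seed)
    words = []
    for word in sentence.split():
        if len(word) > 2 and rng.random() < p:
            i = rng.randrange(len(word))
            if rng.random() < 0.5:
                word = word[:i] + word[i + 1:]  # character deletion
            else:
                word = word[:i] + rng.choice(string.ascii_lowercase) + word[i:]  # insertion
        words.append(word)
    return " ".join(words)

print(perturb("state of the art models fail at simple character edits", p=0.5))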