[NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach".
The purpose of this repository is to introduce new dialogue-level commonsense inference datasets and tasks. We chose dialogues as the data source because dialogues are known to be complex and rich in commonsense.
Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT-based transformer models from Hugging Face.
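The core idea behind embedding-based topic clustering can be sketched with plain cosine similarity over precomputed embedding vectors. The greedy threshold scheme and the `cluster_by_similarity` helper below are illustrative assumptions for this sketch, not this library's actual API:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_by_similarity(embeddings, threshold=0.8):
    """Greedy clustering sketch: assign each vector to the existing cluster
    whose centroid is most similar (if above `threshold`), else start a new
    cluster. `embeddings` would come from a transformer encoder in practice."""
    clusters = []   # list of lists of indices into `embeddings`
    centroids = []  # running mean vector per cluster
    for i, vec in enumerate(embeddings):
        best, best_sim = None, threshold
        for c, centroid in enumerate(centroids):
            sim = cosine_similarity(vec, centroid)
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([i])
            centroids.append(np.asarray(vec, dtype=float))
        else:
            clusters[best].append(i)
            centroids[best] = np.mean([embeddings[j] for j in clusters[best]], axis=0)
    return clusters

# Toy 2-D "embeddings": the first two vectors point the same way, the third is orthogonal.
embs = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]
print(cluster_by_similarity(embs))  # → [[0, 1], [2]]
```

A real pipeline would replace the toy vectors with sentence embeddings from a BERT-family model; the greedy pass is a simple stand-in for whatever clustering algorithm the library actually uses.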
This app classifies text generated by AI tools like ChatGPT. The roberta-base-openai-detector model from Hugging Face is used to detect AI-generated text.
This repository contains the code of our winning solution for the Shared Task on Detecting Signs of Depression from Social Media Text at LT-EDI-ACL2022.
Source code for the CoNLL 2021 paper by Huebner et al.
Convert pretrained RoBERTa models to various long-document transformer models.
This repository contains the solutions to three problem statements completed during the hackathon. Each problem statement is categorized based on its difficulty level: Easy, Moderate, and Hard.
An NLP task classifying the EmpatheticDialogues dataset using RoBERTa, ERNIE 2.0, and XLNet with different preprocessing methods. A detailed introduction and experimental results are available at the link below.
A project demonstrating the use of Large Language Models (LLMs) for text classification using the RoBERTa model.
A project that generates multiple-choice questions (MCQs) from any type of text, along with answers and distractors.
Resources for the paper: Monolingual Pre-trained Language Models for Tigrinya
Tutorial on training a RoBERTa Transformers model from scratch
All-in-one repo for the Lemone-embed project, a series of fine-tuned embedding models for Tax retrieval augmented generation (RAG).
🙂🙃 Being happy :) or sad :( — with this tool, you become a sentiment GIGA chad!
This work develops machine learning models, in particular neural networks and SVMs, that detect toxicity in comments. Topics covered: (a) cost-sensitive learning, (b) class imbalance.
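Toxicity data is typically heavily imbalanced, so cost-sensitive learning reweights the rare (toxic) class. A minimal sketch of the standard "balanced" weighting rule (weight ∝ inverse class frequency, as used e.g. by scikit-learn's `class_weight="balanced"`); the `balanced_class_weights` helper is hypothetical, not from this repository:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Per-class weights: weight_c = n_samples / (n_classes * count_c).
    Rare classes get proportionally larger weights, so their training
    errors cost more in the loss function."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

# 2 toxic vs. 8 clean comments: the toxic class is upweighted 4x relative to clean.
labels = ["toxic"] * 2 + ["clean"] * 8
print(balanced_class_weights(labels))  # → {'toxic': 2.5, 'clean': 0.625}
```

These weights would then be passed to the classifier's loss (e.g. per-sample weights for an SVM or a weighted cross-entropy for a neural network).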
A web application built using the Django framework that allows users to analyze the sentiment of text inputs, providing insights into the emotional tone and polarity of the content.
A safe social media web app using NLP models (BERT and GPT-3) for bad-language detection and content recommendations, deployed with Docker and Kubernetes on DigitalOcean.
Predicts a movie's genres based on its overview.