
transformers-ner

Experiments on the NER task using Hugging Face's state-of-the-art natural language models (Transformers library)

Installation

Prerequisites

  • Python ≥ 3.6

Provision a Virtual Environment

Create and activate a virtual environment (conda)

conda create --name py36_transformers-ner python=3.6
source activate py36_transformers-ner

If pip is configured in your conda environment, install the dependencies from within the project root directory:

pip install -r requirements.txt

Data Pre-processing

From Stanford format

The current pipeline generates a Stanford NER compatible file, which is the starting point for our experiment. Small modifications must be applied to this file so it can be processed by BERT NER. In particular, the file does not use B-LABEL and I-LABEL tags to distinguish the first token of an entity from the following ones.
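For illustration, here is a minimal, hypothetical sketch of the kind of relabelling this step performs, turning flat LABEL tags into B-LABEL/I-LABEL (BIO) tags; it is not the repository's actual preprocessing script:

def to_bio(tagged_tokens):
    # tagged_tokens: list of (token, label) pairs, with "O" for tokens outside any entity
    # note: adjacent entities of the same type cannot be separated from flat labels
    bio = []
    prev_label = "O"
    for token, label in tagged_tokens:
        if label == "O":
            bio.append((token, "O"))
        elif label != prev_label:
            bio.append((token, "B-" + label))  # first token of an entity span
        else:
            bio.append((token, "I-" + label))  # continuation of the same span
        prev_label = label
    return bio

# [("John", "PER"), ("Smith", "PER"), ("visited", "O"), ("Paris", "LOC")]
# becomes [("John", "B-PER"), ("Smith", "I-PER"), ("visited", "O"), ("Paris", "B-LOC")]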

After putting the Stanford NER format file (e.g. StanfordNov19.txt) in the data folder, execute the following command:

python ./preprocess/generate_from_stanford.py --input_data ./data/StanfordNov19.txt --output_dir ./data/

The script outputs two files, train.txt and test.txt, which are the input of the NER pipeline.
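Both files follow the usual CoNLL-style layout consumed by the NER script: one token and its label per line, with a blank line separating sentences. The snippet below is an illustrative example, not taken from the repository's data:

John B-PER
Smith I-PER
visited O
Paris B-LOC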

NER pipeline

To execute the NER pipeline, run the following script:

python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path bert-base-cased --output_dir ./output --labels ./data/labels.txt --do_train --do_predict --save_steps 200000 --max_seq_length 512 --overwrite_output_dir --overwrite_cache

The script writes the results and predictions to the output directory.
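The file passed to --labels is expected to list one label per line, covering every tag that appears in train.txt and test.txt. A hypothetical example for person and location entities:

O
B-PER
I-PER
B-LOC
I-LOC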

Download the pre-trained models

SciBERT

Download and unzip the model, vocabulary and config. Rename the config file to config.json, as expected by the script.

curl -OL https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/pytorch_models/scibert_scivocab_cased.tar
mkdir scibert_scivocab_cased
tar -xvf scibert_scivocab_cased.tar -C scibert_scivocab_cased
cd scibert_scivocab_cased/
tar -zxvf weights.tar.gz
mv bert_config.json config.json
rm weights.tar.gz

SpanBERT

Download and unzip the model, vocabulary and config. Rename the config file to config.json, as expected by the script. Note that SpanBERT does not come with its own vocab.txt file; instead, it reuses the vocabulary of the BERT-large-cased model.

curl -OL https://dl.fbaipublicfiles.com/fairseq/models/spanbert_hf_base.tar.gz
mkdir spanbert_hf_base
tar -zxvf spanbert_hf_base.tar.gz -C spanbert_hf_base
cd spanbert_hf_base
curl -OL https://raw.githubusercontent.com/pyvandenbussche/transformers-ner/master/data/bert_large_cased_vocab.txt
mv bert_large_cased_vocab.txt vocab.txt
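Once a model has been downloaded, the NER pipeline can point --model_name_or_path at the local directory instead of a model name. A sketch, assuming the archive was extracted under the project root (the other flags are the same as above):

python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path ./scibert_scivocab_cased --output_dir ./output --labels ./data/labels.txt --do_train --do_predict --max_seq_length 512 --overwrite_output_dir --overwrite_cache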
