
Adding multiple special cases to the spaCy tokenizer


I am trying to segment text in a txt file (utf-8) into sentences using spaCy. It splits sentences at abbreviations (e.g., Mr., Dr., etc.), producing separate sentences where the text should read as a single sentence. For example, 'Dr. Jane Doe says' becomes Sentence 0: 'Dr.' and Sentence 1: 'Jane Doe says'.

I tried using nlp.tokenizer.add_special_case to recognize 'Dr.' as a special case, and it works for that one case (code below). However, because I have many abbreviations in the rest of the dataset, I would like to have a list of abbreviations (preferably read from a text file, but really just a list is fine!) where everything on the list is added as a special case.

This is my code:

import spacy
import pathlib
from spacy.attrs import ORTH, NORM

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer.add_special_case('Dr.', [{ORTH: 'Dr.', NORM: 'Doctor'}])

file_name = r"text_test_sentence.txt"  # filename of text file to split
doc = nlp(pathlib.Path(file_name).read_text(encoding="utf-8"))
sentences = list(doc.sents)

Thank you in advance!!!


Solution

  • If you would like to add multiple rules to your tokenizer, I would suggest writing a for loop over a list (or a file read into a list) that stores all the abbreviations you want to add as special cases.
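A minimal sketch of that loop is below. The abbreviation list and its NORM values are made-up examples; in practice you could read them from a text file, one entry per line, as hinted in the commented line. A blank English pipeline is used here only so the sketch runs without downloading a model; in the question's setup you would call spacy.load('en_core_web_sm') instead.

```python
import spacy
from spacy.attrs import ORTH, NORM

# Blank pipeline so this runs without a model download; swap in
# spacy.load("en_core_web_sm") to match the original code.
nlp = spacy.blank("en")

# Hypothetical abbreviation -> normalized-form mapping. To load from a
# file instead, something like:
# pairs = [line.split("\t") for line in pathlib.Path("abbrevs.txt").read_text(encoding="utf-8").splitlines()]
abbreviations = {
    "Dr.": "Doctor",
    "Mr.": "Mister",
    "Approx.": "Approximately",
}

for orth, norm in abbreviations.items():
    # The ORTH values must reproduce the original string exactly, so each
    # abbreviation stays one token instead of splitting at the period.
    nlp.tokenizer.add_special_case(orth, [{ORTH: orth, NORM: norm}])

print([t.text for t in nlp("Approx. ten people met Dr. Jane Doe.")])
# The abbreviations come out as single tokens ("Approx.", "Dr."),
# so the sentence segmenter no longer treats them as sentence ends.
```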