nlpaug.augmenter.word.context_word_embs

Augmenter that applies word-level operations to textual input based on contextual word embeddings.

class nlpaug.augmenter.word.context_word_embs.ContextualWordEmbsAug(model_path='bert-base-uncased', action='substitute', temperature=1.0, top_k=100, top_p=None, name='ContextualWordEmbs_Aug', aug_min=1, aug_max=10, aug_p=0.3, stopwords=None, device='cpu', force_reload=False, optimize=None, stopwords_regex=None, verbose=0, silence=True)[source]

Bases: nlpaug.augmenter.word.word_augmenter.WordAugmenter

Augmenter that leverages contextual word embeddings to find the top n similar words for augmentation.

Parameters:
  • model_path (str) – Model name or model path. The transformers library is used to load the model. Tested with ‘bert-base-uncased’, ‘bert-base-cased’, ‘distilbert-base-uncased’, ‘roberta-base’, ‘distilroberta-base’, ‘xlnet-base-cased’.
  • action (str) – Either ‘insert’ or ‘substitute’. If ‘insert’, a new word is injected at a random position according to the contextual word embeddings calculation. If ‘substitute’, a word is replaced according to the contextual word embeddings calculation.
  • temperature (float) – Controls randomness. Default value is 1.0; a lower temperature results in less random behavior.
  • top_k (int) – Controls the candidate pool. The top k highest-scoring tokens are used for augmentation; a larger k allows more tokens to be used. Default value is 100. If None, all possible tokens are used.
  • top_p (float) – Controls the candidate pool. Tokens within the top p cumulative probability are kept for augmentation; a larger p allows more tokens to be used. Default value is None, which means all possible tokens are used.
  • aug_p (float) – Percentage of words that will be augmented.
  • aug_min (int) – Minimum number of words that will be augmented.
  • aug_max (int) – Maximum number of words that will be augmented. If None is passed, the number of augmentations is calculated via aug_p. If the result calculated from aug_p is smaller than aug_max, the aug_p result is used; otherwise, aug_max is used.
  • stopwords (list) – List of words that will be skipped by the augment operation.
  • stopwords_regex (str) – Regular expression matching words that will be skipped by the augment operation.
  • device (str) – Default value is ‘cpu’. If ‘cpu’, CPU is used for processing; if ‘cuda’, GPU is used. Other device strings may also work.
  • force_reload (bool) – Force reload the contextual word embeddings model into memory when initializing the class. Default value is False; keeping it False is recommended when performance is a consideration.
  • optimize (bool) – If True, an optimized process is executed. For example, GPT2 uses “return_past” to reduce inference time.
  • silence (bool) – Default is True. The transformers library prints warning messages when loading a pre-trained model. Set to True to suppress these expected warnings.
  • name (str) – Name of this augmenter.
>>> import nlpaug.augmenter.word as naw
>>> aug = naw.ContextualWordEmbsAug()
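The top_k and top_p parameters described above can be illustrated with a small standalone sketch. This is a hypothetical helper (`sample_pool` is not part of nlpaug's API), assuming top_k keeps the k highest-scoring tokens and top_p keeps the smallest set of tokens whose cumulative probability reaches p:

```python
def sample_pool(probs, top_k=None, top_p=None):
    """Return indices of candidate tokens after top_k / top_p filtering.

    Hypothetical illustration of the documented behavior; not nlpaug's
    internal implementation.
    """
    # Rank token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k is not None:
        # Keep only the k highest-scoring tokens.
        order = order[:top_k]
    if top_p is not None:
        # Keep the smallest prefix whose cumulative probability reaches p.
        kept, cumulative = [], 0.0
        for i in order:
            kept.append(i)
            cumulative += probs[i]
            if cumulative >= top_p:
                break
        order = kept
    return order

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
print(sample_pool(probs, top_k=3))    # 3 highest-scoring tokens
print(sample_pool(probs, top_p=0.8))  # tokens up to 0.8 cumulative probability
print(sample_pool(probs))             # no filtering: all tokens
```

With both filters set, top_k is applied first and top_p then trims the remaining candidates, which matches the intuition that a smaller k or a smaller p shrinks the draw pool.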
augment(data, n=1, num_thread=1)
Parameters:
  • data (object/list) – Data for augmentation. It can be a list of data (e.g. a list of strings or numpy arrays) or a single element (e.g. a string or numpy array)
  • n (int) – Default is 1. Number of unique augmented outputs. Forced to 1 if the input is a list of data
  • num_thread (int) – Number of threads for data augmentation. Use this option when running on CPU and n is larger than 1
Returns:

Augmented data

>>> augmented_data = aug.augment(data)
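The interplay of aug_p, aug_min, and aug_max described in the parameter list above can be sketched as a small helper. This is an illustration of the documented rules only (`num_words_to_augment` is hypothetical, not nlpaug's internal function):

```python
def num_words_to_augment(token_count, aug_p=0.3, aug_min=1, aug_max=10):
    """Illustrative only: derive how many words would be augmented.

    Mirrors the documented rules; not nlpaug's internal implementation.
    """
    # Number of augmentations calculated via aug_p.
    count = int(token_count * aug_p)
    if aug_max is not None:
        # Use the aug_p result only if it is smaller than aug_max.
        count = min(count, aug_max)
    # Never augment fewer than aug_min words.
    return max(count, aug_min)

print(num_words_to_augment(10))   # 10 * 0.3 = 3 words
print(num_words_to_augment(100))  # capped at aug_max = 10
print(num_words_to_augment(2))    # raised to aug_min = 1
```

Passing aug_max=None removes the cap entirely, so the count is driven by aug_p alone (subject to aug_min).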
device = None

TODO: Reserving 2 spaces (e.g. [CLS], [SEP]) is not enough, as it hits a CUDA error in batch processing mode. Therefore, forcing 5 times the reserved spaces (i.e. 5) to be reserved.