Augmenter that applies a sentence-level operation to textual input based on contextual word embeddings.
ContextualWordEmbsForSentenceAug(model_path='gpt2', name='ContextualWordEmbsForSentence_Aug', min_length=100, max_length=500, batch_size=32, temperature=1.0, top_k=50, top_p=0.9, device='cpu', force_reload=False, silence=True)
Augmenter that leverages contextual word embeddings to find the top n similar words for augmentation.
- model_path (str) – Model name or model path. The transformers library is used to load the model. Tested with ‘gpt2’ and ‘distilgpt2’.
- batch_size (int) – Batch size.
- min_length (int) – The minimum length of the output text.
- max_length (int) – The maximum length of the output text.
- temperature (float) – The value used to modulate the next-token probabilities.
- top_k (int) – The number of highest probability vocabulary tokens to keep for top-k-filtering.
- top_p (float) – If set to a float < 1, only the smallest set of most probable tokens whose probabilities add up to top_p or higher is kept for generation.
- device (str) – Default value is ‘cpu’. If the value is ‘cpu’, processing runs on the CPU; if the value is ‘cuda’, processing runs on the GPU. Possible values include ‘cuda’ and ‘cpu’ (other device strings may also work).
- force_reload (bool) – Force reloading of the contextual word embeddings model into memory when initializing the class. Default value is False; keeping it False is recommended when performance is a consideration.
- silence (bool) – Default is True. The transformers library prints warning messages when loading a pre-trained model. Set to True to suppress these expected warnings.
- name (str) – Name of this augmenter
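The temperature, top_k, and top_p parameters above shape the next-token distribution during generation. The sketch below illustrates what each one does over a toy logits vector; it is a conceptual illustration only, not the internals of nlpaug or transformers, and the function name is hypothetical.

```python
import math

def filter_next_token_probs(logits, temperature=1.0, top_k=0, top_p=1.0):
    # Illustrative sketch only -- not nlpaug/transformers internals.
    # Temperature scaling: values < 1 sharpen the distribution,
    # values > 1 flatten it (more diverse sampling).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]

    # Rank token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])

    kept, cumulative = set(), 0.0
    for rank, idx in enumerate(order):
        if top_k and rank >= top_k:
            break  # top-k: keep only the k most probable tokens
        kept.add(idx)
        cumulative += probs[idx]
        if top_p < 1.0 and cumulative >= top_p:
            break  # top-p (nucleus): smallest set reaching mass top_p

    # Zero out filtered tokens and renormalize the survivors.
    mass = sum(probs[i] for i in kept)
    return [probs[i] / mass if i in kept else 0.0 for i in range(len(probs))]
```

For example, `filter_next_token_probs([2.0, 1.0, 0.1, -1.0], top_k=2)` keeps only the two most probable tokens and renormalizes their probabilities to sum to 1.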
>>> import nlpaug.augmenter.sentence as nas
>>> aug = nas.ContextualWordEmbsForSentenceAug()
augment(data, n=1, num_thread=1)
- data (object/list) – Data for augmentation. It can be a list of data (e.g. a list of strings or numpy arrays) or a single element (e.g. a string or numpy array).
- n (int) – Default is 1. Number of unique augmented outputs. Forced to 1 if the input is a list of data.
- num_thread (int) – Number of threads for data augmentation. Use this option when running on CPU and n is larger than 1.
>>> augmented_data = aug.augment(data)
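The interplay of `n` and `num_thread` described above can be sketched with a thread-based fan-out. This is an illustration using a stand-in augmentation function, not nlpaug's actual implementation; the helper name is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def augment_with_threads(augment_fn, data, n=1, num_thread=1):
    # augment_fn stands in for a single-item augmentation call
    # (illustrative sketch only -- not nlpaug's internal code).
    if isinstance(data, list):
        # Documented behavior: n is forced to 1 when the input is a list,
        # so each list element is augmented exactly once.
        with ThreadPoolExecutor(max_workers=num_thread) as pool:
            return list(pool.map(augment_fn, data))
    # Single element: produce n augmented variants, fanned out over threads.
    with ThreadPoolExecutor(max_workers=num_thread) as pool:
        return list(pool.map(augment_fn, [data] * n))
```

For instance, `augment_with_threads(str.upper, "ab", n=3, num_thread=2)` applies the stand-in function three times to the single input, while a list input is processed element-by-element.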