mangoes.modeling.coref module

This module provides a torch/transformers implementation of the fine-tuning procedure described in “BERT for Coreference Resolution: Baselines and Analysis” (https://arxiv.org/pdf/1908.09091.pdf)

class mangoes.modeling.coref.TransformerModelForCoreferenceResolutionBase(pretrained_model_or_config, max_span_width=30, ffnn_hidden_size=1000, top_span_ratio=0.4, max_top_antecendents=50, use_metadata=False, metadata_feature_size=20, genres=('bc', 'bn', 'mz', 'nw', 'pt', 'tc', 'wb'), max_training_segments=5, coref_depth=2, coref_dropout=0.3, **base_model_keyword_args)

Bases: transformers.modeling_utils.PreTrainedModel

Class for fine-tuning a transformer model for the coreference resolution task. This is an implementation of https://arxiv.org/pdf/1908.09091.pdf, which uses the fine tuning procedure described in https://arxiv.org/pdf/1804.05392.pdf
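
A minimal instantiation sketch (hedged: the encoder name and the keyword values below are illustrative assumptions, not requirements or defaults mandated by this module):

```python
# Hypothetical usage sketch: wrap a pretrained encoder for coreference fine-tuning.
# "bert-base-cased" and the keyword values shown are illustrative assumptions only.
from mangoes.modeling.coref import TransformerModelForCoreferenceResolutionBase

model = TransformerModelForCoreferenceResolutionBase(
    "bert-base-cased",        # pretrained model name/path or a config object (assumption)
    max_span_width=30,        # longest candidate mention, in sub-tokens
    top_span_ratio=0.4,       # fraction of candidate spans kept as mentions
    max_top_antecendents=50,  # antecedent candidates considered per mention
    use_metadata=False,       # set True to use speaker ids and genre features
)
model.eval()                  # inference mode; call model.train() before fine-tuning
```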

Attributes
base_model

torch.nn.Module: The main body of the model.

device

torch.device: The device on which the module is (assuming that all the module parameters are on the same device).

dtype

torch.dtype: The dtype of the module (assuming that all the module parameters have the same dtype).

dummy_inputs

Dict[str, torch.Tensor]: Dummy inputs to do a forward pass in the network.

framework
str

Identifies that this is a PyTorch model.

is_gradient_checkpointing

Whether gradient checkpointing is activated for this model or not.

Methods

add_memory_hooks()

Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.

add_module(name, module)

Adds a child module to the current module.

adjust_logits_during_generation(logits, **kwargs)

Implement in subclasses of [PreTrainedModel] for custom behavior to adjust the logits in the generate method.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

beam_sample(input_ids, beam_scorer[, …])

Generates sequences of token ids for models with a language modeling head using beam search multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

beam_search(input_ids, beam_scorer[, …])

Generates sequences of token ids for models with a language modeling head using beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

bucket_distance(distances)

Places the given values (designed for distances) into 10 semi-logscale buckets: [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

coarse_to_fine_pruning(span_emb, …)

Compute fast estimate antecedent scores and prune based on these scores.

compute_transition_beam_scores(sequences, …)

compute the transition probabilities of sequences given generation scores and beam indices

config_class

alias of transformers.models.auto.configuration_auto.AutoConfig

constrained_beam_search(input_ids, …[, …])

Generates sequences of token ids for models with a language modeling head using constrained beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

contrastive_search(input_ids[, top_k, …])

Generates sequences of token ids for models with a language modeling head using contrastive search and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

estimate_tokens(input_dict)

Helper function to estimate the total number of tokens from the model inputs.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

extract_spans(candidate_starts, …)

Extracts the candidate spans with the highest mention scores, whose spans don't cross over other spans.

float(*args)

Casts all floating point parameters and buffers to float datatype.

floating_point_ops(input_dict[, …])

Get number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model.

forward(input_ids, attention_mask, sentence_map)

Pass input document through model, calculating loss if labels are present.

from_pretrained(…)

Instantiate a pretrained pytorch model from a pre-trained model configuration.

generate([inputs, max_length, min_length, …])

Generates sequences of token ids for models with a language modeling head.

get_candidate_labels(candidate_starts, …)

get labels of candidates from gold ground truth

get_extended_attention_mask(attention_mask, …)

Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

get_fast_antecedent_scores(span_emb)

Computes fast (coarse) antecedent scores from span representations.

get_head_mask(head_mask, num_hidden_layers)

Prepare the head mask if needed.

get_input_embeddings()

Returns the model’s input embeddings.

get_memory_footprint([return_buffers])

Get the memory footprint of a model.

get_output_embeddings()

Returns the model’s output embeddings.

get_slow_antecedent_scores(top_span_emb, …)

Compute slow antecedent scores

get_span_embeddings(hidden_states, …)

Obtains representations of the spans

get_span_word_attention_scores(…)

Computes attention scores over the words within each span.

gradient_checkpointing_disable()

Deactivates gradient checkpointing for the current model.

gradient_checkpointing_enable()

Activates gradient checkpointing for the current model.

greedy_search(input_ids[, logits_processor, …])

Generates sequences of token ids for models with a language modeling head using greedy decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

group_beam_search(input_ids, beam_scorer[, …])

Generates sequences of token ids for models with a language modeling head using diverse beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

half(*args)

Casts all floating point parameters and buffers to half datatype.

init_weights()

If needed prunes and maybe initializes weights.

invert_attention_mask(encoder_attention_mask)

Invert an attention mask (e.g., switches 0. and 1.).

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

num_parameters([only_trainable, …])

Get number of (optionally, trainable or non-embeddings) parameters in the module.

parameters([recurse])

Returns an iterator over module parameters.

post_init()

A method executed at the end of each Transformer model initialization, to execute code that needs the model’s modules properly initialized (such as weight initialization).

prune_heads(heads_to_prune)

Prunes heads of the base model.

push_to_hub(repo_id[, use_temp_dir, …])

Upload the model file to the 🤗 Model Hub while synchronizing a local clone of the repo in repo_path_or_name.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_for_auto_class([auto_class])

Register this class with a given auto class.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

reset_memory_hooks_state()

Reset the mem_rss_diff attribute of each module (see [~modeling_utils.ModuleUtilsMixin.add_memory_hooks]).

resize_token_embeddings([new_num_tokens])

Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size.

sample(input_ids[, logits_processor, …])

Generates sequences of token ids for models with a language modeling head using multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

save_pretrained(save_directory[, …])

Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.

set_input_embeddings(value)

Set model’s input embeddings.

softmax_loss(top_antecedent_scores, …)

Calculate softmax loss

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

tie_weights()

Tie the weights between the input embeddings and the output embeddings.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

batch_gather

cluster_merging

create_extended_attention_mask_for_decoder

get_position_embeddings

resize_position_embeddings

retrieve_modules_from_names

share_memory

base_model_prefix = 'features_model'
config_class

alias of transformers.models.auto.configuration_auto.AutoConfig

forward(input_ids, attention_mask, sentence_map, speaker_ids=None, genre=None, gold_starts=None, gold_ends=None, cluster_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, **extra_inputs)

Pass input document through model, calculating loss if labels are present.

Parameters
input_ids: tensor of size (num_segments, sequence_length)

input token ids

attention_mask: tensor of size (num_segments, sequence_length)

attention mask of input segments

sentence_map: tensor of size (num_tokens)

sentence id for each input token in (flattened) input document

speaker_ids: tensor of size (num_segments, sequence_length)

speaker ids for each token (only used if self.use_metadata is True)

genre: tensor of size (1)

genre id for document

gold_starts: tensor of size (labeled)

start token indices (in flattened document) of labeled spans

gold_ends: tensor of size (labeled)

end token indices (in flattened document) of labeled spans

cluster_ids: tensor of size (labeled)

cluster ids of each labeled span

output_attentions: Boolean

Whether or not to return the attentions tensors of all attention layers.

output_hidden_states: Boolean

Whether or not to return the hidden states of all layers.

return_dict: Boolean

Whether or not to return a ModelOutput (dictionary) instead of a plain tuple.

extra_inputs: dict
Returns
tuple containing the following tensors if return_dict is False, else dict with following keys:
loss:

loss value if label input arguments (gold_starts, gold_ends, cluster_ids) are not None, else not returned.

candidate_starts: tensor of size (num_spans)

start token indices in flattened document of candidate spans

candidate_ends: tensor of size (num_spans)

end token indices in flattened document of candidate spans

candidate_mention_scores: tensor of size (num_spans)

mention scores for each candidate span

top_span_starts: tensor of size (num_top_spans)

start token indices in flattened document of candidate spans with top mention scores

top_span_ends: tensor of size (num_top_spans)

end token indices in flattened document of candidate spans with top mention scores

top_antecedents: tensor of shape (num_top_spans, antecedent_candidates)

indices in top span candidates of top antecedents for each mention

top_antecedent_scores: tensor of shape (num_top_spans, antecedent_candidates)

final antecedent scores of top antecedents for each mention

flattened_ids: tensor of shape (num_words)

flattened ids of input sentences. The start and end candidate indices map into this tensor.
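
A rough inference sketch of a forward pass (hedged: the tokenizer choice, the one-segment-per-sentence splitting, and the handling of padding in sentence_map are simplifying assumptions, not this module's prescribed preprocessing):

```python
# Sketch only: build simple per-sentence segments and run the model.
import torch
from transformers import AutoTokenizer
from mangoes.modeling.coref import TransformerModelForCoreferenceResolutionBase

model = TransformerModelForCoreferenceResolutionBase("bert-base-cased")  # illustrative
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sentences = ["John called his mother.", "She was happy to hear from him."]
encoded = tokenizer(sentences, padding=True, return_tensors="pt")
input_ids = encoded["input_ids"]            # (num_segments, sequence_length)
attention_mask = encoded["attention_mask"]  # (num_segments, sequence_length)

# one sentence id per (non-padding) token in the flattened document (assumption)
sentence_map = torch.cat(
    [torch.full((int(attention_mask[i].sum()),), i, dtype=torch.long)
     for i in range(len(sentences))]
)

with torch.no_grad():
    outputs = model(input_ids, attention_mask, sentence_map, return_dict=True)
# outputs["top_span_starts"] / outputs["top_span_ends"] index into
# outputs["flattened_ids"]; outputs["top_antecedent_scores"] scores antecedents.
```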

static extract_spans(candidate_starts, candidate_ends, candidate_mention_scores, num_top_mentions)

Extracts the candidate spans with the highest mention scores, whose spans don't cross over other spans.

Parameters
candidate_starts: tensor of size (candidates)

Indices of the starts of spans for each candidate.

candidate_ends: tensor of size (candidates)

Indices of the ends of spans for each candidate.

candidate_mention_scores: tensor of size (candidates)

Mention score for each candidate.

num_top_mentions: int

Number of candidates to extract

Returns
top_span_indices: tensor of size (num_top_mentions)

Span indices of the non-crossing spans with the highest mention scores

get_slow_antecedent_scores(top_span_emb, top_antecedents, top_antecedent_emb, top_antecedent_offsets, top_span_speaker_ids, genre_emb, segment_distance)

Compute slow antecedent scores

Parameters
top_span_emb: tensor of size (candidates, emb_size)

span representations

top_antecedents: tensor of size (candidates, antecedents)

indices of antecedents for each candidate

top_antecedent_emb: tensor of size (candidates, antecedents, emb)

embeddings of top antecedents for each candidate

top_antecedent_offsets: tensor of size (candidates, antecedents)

offsets for each mention/antecedent pair

top_span_speaker_ids: tensor of size (candidates)

speaker ids for each span

genre_emb: tensor of size (feature_size)

genre embedding for document

segment_distance: tensor of size (candidates, antecedents)

segment distances for each candidate antecedent pair

Returns
tensor of shape (candidates, antecedents)

antecedent scores

coarse_to_fine_pruning(span_emb, mention_scores, num_top_antecedents)

Compute fast estimate antecedent scores and prune based on these scores.

Parameters
span_emb: tensor of size (candidates, emb_size)

span representations

mention_scores: tensor of size (candidates)

mention scores of spans

num_top_antecedents: int

number of antecedents

Returns
top_antecedents: tensor of shape (mentions, antecedent_candidates)

indices of top antecedents for each mention

top_antecedents_mask: tensor of shape (mentions, antecedent_candidates)

boolean mask for antecedent candidates

top_antecedents_fast_scores: tensor of shape (mentions, antecedent_candidates)

fast scores for each antecedent candidate

top_antecedent_offsets: tensor of shape (mentions, antecedent_candidates)

offsets for each mention/antecedent pair
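
For intuition, here is a self-contained sketch of coarse-to-fine pruning in the spirit of the cited papers (not this module's exact code; the bilinear weight is passed in explicitly as an assumption):

```python
import torch

def coarse_to_fine_sketch(span_emb, mention_scores, num_top_antecedents, bilinear_weight):
    """Keep the top-scoring antecedent candidates per mention using cheap pairwise scores."""
    k = span_emb.size(0)
    offsets = torch.arange(k).unsqueeze(1) - torch.arange(k).unsqueeze(0)
    antecedent_mask = offsets >= 1                   # a valid antecedent must precede the mention
    fast_scores = (
        mention_scores.unsqueeze(1)                  # score of the mention
        + mention_scores.unsqueeze(0)                # score of the candidate antecedent
        + span_emb @ bilinear_weight @ span_emb.t()  # cheap bilinear compatibility term
        + torch.log(antecedent_mask.float())         # -inf for invalid (later or same) spans
    )
    c = min(num_top_antecedents, k)
    top_fast_scores, top_antecedents = torch.topk(fast_scores, c, dim=1)
    top_mask = antecedent_mask.gather(1, top_antecedents)
    top_offsets = offsets.gather(1, top_antecedents)
    return top_antecedents, top_mask, top_fast_scores, top_offsets
```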

static batch_gather(emb, indices)
static get_candidate_labels(candidate_starts, candidate_ends, labeled_starts, labeled_ends, labels)

get labels of candidates from gold ground truth

Parameters
candidate_starts, candidate_ends: tensor of size (candidates)

start and end token indices (in flattened document) of candidate spans

labeled_starts, labeled_ends: tensor of size (labeled)

start and end token indices (in flattened document) of labeled spans

labels: tensor of size (labeled)

cluster ids

Returns
candidate_labels: tensor of size (candidates)
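
As a rough illustration of the matching described above (an assumption inferred from the parameter descriptions, not a copy of the module's code): a candidate span inherits the cluster id of a gold span that shares its exact start and end, and gets 0 otherwise.

```python
import torch

def candidate_labels_sketch(candidate_starts, candidate_ends,
                            labeled_starts, labeled_ends, labels):
    """Return one cluster id per candidate: the gold span's id on an exact match, else 0."""
    same_start = labeled_starts.unsqueeze(1) == candidate_starts.unsqueeze(0)
    same_end = labeled_ends.unsqueeze(1) == candidate_ends.unsqueeze(0)
    same_span = (same_start & same_end).to(labels.dtype)  # (labeled, candidates)
    # each candidate matches at most one gold span, so a masked sum picks out its label
    return (labels.unsqueeze(1) * same_span).sum(dim=0)   # (candidates,)
```
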
static bucket_distance(distances)

Places the given values (designed for distances) into 10 semi-logscale buckets: [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].

Parameters
distances: tensor of size (candidates, candidates)

token distances between pairs

Returns
distance buckets

tensor of size (candidates, candidates)
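
A small sketch of this semi-logscale bucketing (it reproduces the bucket boundaries listed above, though it is not necessarily the module's exact code):

```python
import torch

def bucket_distance_sketch(distances):
    """Map distances to buckets [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+]."""
    log_bucket = torch.floor(torch.log2(distances.float().clamp(min=1))).long() + 3
    use_identity = (distances <= 4).long()
    combined = use_identity * distances + (1 - use_identity) * log_bucket
    return torch.clamp(combined, min=0, max=9)

# e.g. bucket_distance_sketch(torch.tensor([0, 3, 6, 40, 200])) -> tensor([0, 3, 5, 8, 9])
```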

get_fast_antecedent_scores(span_emb)

Computes fast antecedent scores from span representations.

Parameters
span_emb: tensor of size (candidates, emb_size)

span representations

Returns
fast antecedent scores

tensor of size (candidates, span_embedding_size)

get_span_embeddings(hidden_states, span_starts, span_ends)

Obtains representations of the spans

Parameters
hidden_states: tensor of size (num_tokens, emb_size)

outputs of base model, reshaped

span_starts, span_ends: tensor of size (num_candidates)

indices of starts and ends of spans

Returns
tensor of size (num_candidates, span_embedding_size)
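
For orientation, a sketch of one common way such span representations are built in the cited papers (concatenating the boundary token states with an attention-weighted sum over the span's tokens); the exact composition used by this module, for example span-width features, may differ:

```python
import torch

def span_embeddings_sketch(hidden_states, span_starts, span_ends, span_word_attn):
    """span_word_attn: (num_candidates, num_tokens) attention weights over document tokens."""
    start_emb = hidden_states[span_starts]                    # (num_candidates, emb_size)
    end_emb = hidden_states[span_ends]                        # (num_candidates, emb_size)
    head_emb = span_word_attn @ hidden_states                 # attention-weighted token sum
    return torch.cat([start_emb, end_emb, head_emb], dim=-1)  # (num_candidates, 3 * emb_size)
```
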
get_span_word_attention_scores(hidden_states, span_starts, span_ends)

Computes attention scores over the words within each span.

Parameters
hidden_states: tensor of size (num_tokens, emb_size)

outputs of base model, reshaped

span_starts, span_ends: tensor of size (num_candidates)

indices of starts and ends of spans

Returns
tensor of size (num_candidates, span_embedding_size)
static softmax_loss(top_antecedent_scores, top_antecedent_labels)

Calculate softmax loss

Parameters
top_antecedent_scores: tensor of size (top_cand, top_ant + 1)

scores of each antecedent for each mention candidate

top_antecedent_labels: tensor of size (top_cand, top_ant + 1)

labels for each antecedent

Returns
tensor of size (num_candidates)

loss for each mention
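
A brief sketch of this marginal log-likelihood ("softmax") loss, standard for coreference models of this family; shapes follow the parameter descriptions above:

```python
import torch

def softmax_loss_sketch(top_antecedent_scores, top_antecedent_labels):
    """Per-mention negative log of the summed probability of all gold antecedents."""
    gold_scores = top_antecedent_scores + torch.log(top_antecedent_labels.float())
    marginalized_gold = torch.logsumexp(gold_scores, dim=1)   # sum over gold antecedents
    log_norm = torch.logsumexp(top_antecedent_scores, dim=1)  # over all candidates + dummy
    return log_norm - marginalized_gold                       # per-mention loss
```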

cluster_merging(top_span_emb, top_antecedent_idx, top_antecedent_scores)
T_destination

alias of TypeVar(‘T_destination’)

add_memory_hooks()

Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.

Increase in memory consumption is stored in a mem_rss_diff attribute for each module and can be reset to zero with model.reset_memory_hooks_state().

add_module(name: str, module: Optional[torch.nn.modules.module.Module]) → None

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Args:
name (string): name of the child module. The child module can be accessed from this module using the given name.

module (Module): child module to be added to the module.

adjust_logits_during_generation(logits: torch.FloatTensor, **kwargs) → torch.FloatTensor

Implement in subclasses of [PreTrainedModel] for custom behavior to adjust the logits in the generate method.

apply(fn: Callable[[torch.nn.modules.module.Module], None]) → T

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also nn-init-doc).

Args:

fn (Module -> None): function to be applied to each submodule

Returns:

Module: self

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
property base_model

torch.nn.Module: The main body of the model.

beam_sample(input_ids: torch.LongTensor, beam_scorer: transformers.generation.beam_search.BeamScorer, logits_processor: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None, logits_warper: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = False, **model_kwargs) → Union[transformers.generation.utils.BeamSampleEncoderDecoderOutput, transformers.generation.utils.BeamSampleDecoderOnlyOutput, torch.LongTensor]

Generates sequences of token ids for models with a language modeling head using beam search multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

beam_scorer (BeamScorer):

A derived instance of [BeamScorer] that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of [BeamScorer] should be read.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

logits_warper (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsWarper] used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[~generation.BeamSampleDecoderOnlyOutput], [~generation.BeamSampleEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.BeamSampleDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.BeamSampleEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     TopKLogitsWarper,
...     TemperatureLogitsWarper,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
>>> # lets run beam search using 3 beams
>>> num_beams = 3
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(
...         encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
...     )
... }
>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     max_length=model.config.max_length,
...     num_beams=num_beams,
...     device=model.device,
... )
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)]
... )
>>> # instantiate logits processors
>>> logits_warper = LogitsProcessorList(
...     [
...         TopKLogitsWarper(50),
...         TemperatureLogitsWarper(0.7),
...     ]
... )
>>> outputs = model.beam_sample(
...     input_ids, beam_scorer, logits_processor=logits_processor, logits_warper=logits_warper, **model_kwargs
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Wie alt bist du?']
```

beam_search(input_ids, beam_scorer[, …])

Generates sequences of token ids for models with a language modeling head using beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

beam_scorer (BeamScorer):

A derived instance of [BeamScorer] that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of [BeamScorer] should be read.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[generation.BeamSearchDecoderOnlyOutput], [~generation.BeamSearchEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.BeamSearchDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.BeamSearchEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
>>> # lets run beam search using 3 beams
>>> num_beams = 3
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(
...         encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
...     )
... }
>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     num_beams=num_beams,
...     device=model.device,
... )
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
...     ]
... )
>>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Wie alt bist du?']
```
bfloat16() → T

Casts all floating point parameters and buffers to bfloat16 datatype.

Returns:

Module: self

buffers(recurse: bool = True) → Iterator[torch.Tensor]

Returns an iterator over module buffers.

Args:
recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields:

torch.Tensor: module buffer

Example:

>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children() → Iterator[torch.nn.modules.module.Module]

Returns an iterator over immediate children modules.

Yields:

Module: a child module

compute_transition_beam_scores(sequences: torch.Tensor, scores: Tuple[torch.Tensor], beam_indices: torch.Tensor, eos_token_id: Optional[int] = None)

compute the transition probabilities of sequences given generation scores and beam indices

constrained_beam_search(input_ids, …[, …])

Generates sequences of token ids for models with a language modeling head using constrained beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

constrained_beam_scorer (ConstrainedBeamSearchScorer):

A derived instance of [BeamScorer] that defines how beam hypotheses are constructed, stored and sorted during generation, while satisfying a list of positive constraints. For more information, the documentation of [ConstrainedBeamSearchScorer] should be read.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

logits_warper (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsWarper] used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[generation.BeamSearchDecoderOnlyOutput], [~generation.BeamSearchEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.BeamSearchDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.BeamSearchEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     ConstrainedBeamSearchScorer,
...     PhrasalConstraint,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
>>> # lets run beam search using 3 beams
>>> num_beams = 3
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(
...         encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
...     )
... }
>>> constraint_str = "Sie"
>>> constraint_token_ids = tokenizer.encode(constraint_str)[:-1]  # slice to remove eos token
>>> constraints = [PhrasalConstraint(token_ids=constraint_token_ids)]
>>> # instantiate beam scorer
>>> beam_scorer = ConstrainedBeamSearchScorer(
...     batch_size=1, num_beams=num_beams, device=model.device, constraints=constraints
... )
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
...     ]
... )
>>> outputs = model.constrained_beam_search(
...     input_ids, beam_scorer, constraints=constraints, logits_processor=logits_processor, **model_kwargs
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Wie alt sind Sie?']
```

contrastive_search(input_ids[, top_k, …])

Generates sequences of token ids for models with a language modeling head using contrastive search and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

top_k (int, optional, defaults to 1):

The size of the candidate set that is used to re-rank for contrastive search

penalty_alpha (float, optional, defaults to 0):

The degeneration penalty for contrastive search; activate when it is larger than 0

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

logits_warper (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsWarper] used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific keyword arguments will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[~generation.ContrastiveSearchDecoderOnlyOutput], [~generation.ContrastiveSearchEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.ContrastiveSearchDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.ContrastiveSearchEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     StoppingCriteriaList,
...     MaxLengthCriteria,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> # set pad_token_id to eos_token_id because OPT does not have a PAD token
>>> model.config.pad_token_id = model.config.eos_token_id
>>> input_prompt = "DeepMind Company is"
>>> input_ids = tokenizer(input_prompt, return_tensors="pt")
>>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=64)])
>>> outputs = model.contrastive_search(
...     **input_ids, penalty_alpha=0.6, top_k=4, stopping_criteria=stopping_criteria
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['DeepMind Company is a company that focuses on the development and commercialization of artificial intelligence (AI). DeepMind’s mission is to help people understand and solve problems that are difficult to solve in the world today.\n\nIn this post, we talk about the benefits of deep learning in business and how it']
```
cpu() → T

Moves all model parameters and buffers to the CPU.

Returns:

Module: self

static create_extended_attention_mask_for_decoder(input_shape, attention_mask, device=None)
cuda(device: Optional[Union[int, torch.device]] = None) → T

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Args:
device (int, optional): if specified, all parameters will be copied to that device.

Returns:

Module: self

property device

torch.device: The device on which the module is (assuming that all the module parameters are on the same device).

double() → T

Casts all floating point parameters and buffers to double datatype.

Returns:

Module: self

property dtype

torch.dtype: The dtype of the module (assuming that all the module parameters have the same dtype).

property dummy_inputs

Dict[str, torch.Tensor]: Dummy inputs to do a forward pass in the network.

dump_patches: bool = False

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved as in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

estimate_tokens(input_dict: Dict[str, Union[torch.Tensor, Any]]) → int

Helper function to estimate the total number of tokens from the model inputs.

Args:

inputs (dict): The model inputs.

Returns:

int: The total number of tokens.

eval() → T

Sets the module in evaluation mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent with self.train(False).

Returns:

Module: self

extra_repr() → str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float(*args)

Casts all floating point parameters and buffers to float datatype.

Returns:

Module: self

floating_point_ops(input_dict: Dict[str, Union[torch.Tensor, Any]], exclude_embeddings: bool = True) → int

Get number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model. Default approximation neglects the quadratic dependency on the number of tokens (valid if 12 * d_model << sequence_length) as laid out in [this paper](https://arxiv.org/pdf/2001.08361.pdf) section 2.1. Should be overridden for transformers with parameter re-use e.g. Albert or Universal Transformers, or if doing long-range modeling with very high sequence lengths.

Args:
batch_size (int):

The batch size for the forward pass.

sequence_length (int):

The number of tokens in each line of the batch.

exclude_embeddings (bool, optional, defaults to True):

Whether or not to count embedding and softmax operations.

Returns:

int: The number of floating-point operations.

property framework
str

Identifies that this is a PyTorch model.

classmethod from_pretrained(pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs)

Instantiate a pretrained pytorch model from a pre-trained model configuration.

The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.

The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.

Parameters:
pretrained_model_name_or_path (str or os.PathLike, optional):

Can be either:

  • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.

  • A path to a directory containing model weights saved using [~PreTrainedModel.save_pretrained], e.g., ./my_model_directory/.

  • A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

  • A path or url to a model folder containing a flax checkpoint file in .msgpack format (e.g, ./flax_model/ containing flax_model.msgpack). In this case, from_flax should be set to True.

  • None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).

model_args (sequence of positional arguments, optional):

All remaining positional arguments will be passed to the underlying model’s __init__ method.

config (Union[PretrainedConfig, str, os.PathLike], optional):

Can be either:

  • an instance of a class derived from [PretrainedConfig],

  • a string or path valid as input to [~PretrainedConfig.from_pretrained].

Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

  • The model is a model provided by the library (loaded with the model id string of a pretrained model).

  • The model was saved using [~PreTrainedModel.save_pretrained] and is reloaded by supplying the save directory.

  • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional):

A state dictionary to use instead of a state dictionary loaded from saved weights file.

This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [~PreTrainedModel.save_pretrained] and [~PreTrainedModel.from_pretrained] is not a simpler option.

cache_dir (Union[str, os.PathLike], optional):

Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False):

Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

from_flax (bool, optional, defaults to False):

Load the model weights from a Flax checkpoint save file (see docstring of pretrained_model_name_or_path argument).

ignore_mismatched_sizes (bool, optional, defaults to False):

Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels).

force_download (bool, optional, defaults to False):

Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False):

Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional):

A dictionary of proxy servers to use by protocol or endpoint, e.g., {‘http’: ‘foo.bar:3128’, ‘http://hostname’: ‘foo.bar:4012’}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False):

Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False):

Whether or not to only look at local files (i.e., do not try to download the model).

use_auth_token (str or bool, optional):

The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

revision (str, optional, defaults to “main”):

The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

<Tip>

To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".

</Tip>

mirror (str, optional):

Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.

_fast_init(bool, optional, defaults to True):

Whether or not to disable fast initialization.

<Tip warning={true}>

One should only disable _fast_init to ensure backwards compatibility with transformers.__version__ < 4.6.0 for seeded model initialization. This argument will be removed at the next major version. See [pull request 11471](https://github.com/huggingface/transformers/pull/11471) for more information.

</Tip>

> Parameters for big model inference

low_cpu_mem_usage(bool, optional):

Tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. This is an experimental feature and is subject to change at any moment.

torch_dtype (str or torch.dtype, optional):

Override the default torch.dtype and load the model under this dtype. If “auto” is passed the dtype will be automatically derived from the model’s weights.

device_map (str or Dict[str, Union[int, str, torch.device]], optional):

A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.

To have Accelerate compute the most optimized device_map automatically, set device_map=”auto”. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).

max_memory (Dict, optional):

A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.

offload_folder (str or os.PathLike, optional):

If the device_map contains any value “disk”, the folder where we will offload weights.

offload_state_dict (bool, optional):

If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.

load_in_8bit (bool, optional, defaults to False):

If True, will convert the loaded model into mixed-8bit quantized model. To use this feature please install bitsandbytes compiled with your CUDA version by running pip install -i https://test.pypi.org/simple/ bitsandbytes-cudaXXX where XXX is your CUDA version (e.g. 11.6 = 116). Make also sure that you have enough GPU RAM to store half of the model size since the 8bit modules are not compiled and adapted for CPUs.

load_in_8bit_threshold (float, optional, defaults to 6):

Works together with load_in_8bit. This corresponds to the outlier threshold for outlier detection as described in LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale paper. Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).

load_in_8bit_skip_modules (List[str], optional):

An explicit list of the modules that we do not want to convert in 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position.

subfolder (str, optional, defaults to “”):

In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.

kwargs (remaining dictionary of keyword arguments, optional):

Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

  • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done)

  • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ([~PretrainedConfig.from_pretrained]). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

<Tip>

Activate the special [“offline-mode”](https://huggingface.co/transformers/installation.html#offline-mode) to use this method in a firewalled environment.

</Tip>

Examples:

```python
>>> from transformers import BertConfig, BertModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model = BertModel.from_pretrained("./test/saved_model/")
>>> # Update configuration during loading.
>>> model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
>>> assert model.config.output_attentions == True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file("./tf_model/my_tf_model_config.json")
>>> model = BertModel.from_pretrained("./tf_model/my_tf_checkpoint.ckpt.index", from_tf=True, config=config)
>>> # Loading from a Flax checkpoint file instead of a PyTorch model (slower)
>>> model = BertModel.from_pretrained("bert-base-uncased", from_flax=True)
```
  • low_cpu_mem_usage algorithm:

This is an experimental function that loads the model using ~1x model size CPU memory

Here is how it works:

  1. save which state_dict keys we have

  2. drop state_dict before the model is created, since the latter takes 1x model size CPU memory

  3. after the model has been instantiated, switch to the meta device all params/buffers that are going to be replaced from the loaded state_dict

  4. load state_dict a second time

  5. replace the params/buffers from the state_dict

Currently, it can’t handle deepspeed ZeRO stage 3 and ignores loading errors

generate(inputs: Optional[torch.Tensor] = None, max_length: Optional[int] = None, min_length: Optional[int] = None, do_sample: Optional[bool] = None, early_stopping: Optional[bool] = None, num_beams: Optional[int] = None, temperature: Optional[float] = None, penalty_alpha: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, typical_p: Optional[float] = None, repetition_penalty: Optional[float] = None, bad_words_ids: Optional[Iterable[int]] = None, force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None, bos_token_id: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, length_penalty: Optional[float] = None, no_repeat_ngram_size: Optional[int] = None, encoder_no_repeat_ngram_size: Optional[int] = None, num_return_sequences: Optional[int] = None, max_time: Optional[float] = None, max_new_tokens: Optional[int] = None, decoder_start_token_id: Optional[int] = None, use_cache: Optional[bool] = None, num_beam_groups: Optional[int] = None, diversity_penalty: Optional[float] = None, prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None, logits_processor: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, renormalize_logits: Optional[bool] = None, stopping_criteria: Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None, constraints: Optional[List[transformers.generation.beam_constraints.Constraint]] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, forced_bos_token_id: Optional[int] = None, forced_eos_token_id: Optional[int] = None, remove_invalid_values: Optional[bool] = None, synced_gpus: Optional[bool] = False, exponential_decay_length_penalty: Optional[Tuple[int, float]] = None, suppress_tokens: Optional[List[int]] = None, begin_suppress_tokens: Optional[List[int]] = None, forced_decoder_ids: Optional[List[List[int]]] = None, **model_kwargs) → Union[transformers.generation.utils.GreedySearchEncoderDecoderOutput, transformers.generation.utils.GreedySearchDecoderOnlyOutput, transformers.generation.utils.SampleEncoderDecoderOutput, transformers.generation.utils.SampleDecoderOnlyOutput, transformers.generation.utils.BeamSearchEncoderDecoderOutput, transformers.generation.utils.BeamSearchDecoderOnlyOutput, transformers.generation.utils.BeamSampleEncoderDecoderOutput, transformers.generation.utils.BeamSampleDecoderOnlyOutput, transformers.generation.utils.ContrastiveSearchEncoderDecoderOutput, transformers.generation.utils.ContrastiveSearchDecoderOnlyOutput, torch.LongTensor]

Generates sequences of token ids for models with a language modeling head. The method supports the following generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models:

  • greedy decoding by calling [~generation.GenerationMixin.greedy_search] if num_beams=1 and do_sample=False.

  • contrastive search by calling [~generation.GenerationMixin.contrastive_search] if penalty_alpha>0. and top_k>1

  • multinomial sampling by calling [~generation.GenerationMixin.sample] if num_beams=1 and do_sample=True.

  • beam-search decoding by calling [~generation.GenerationMixin.beam_search] if num_beams>1 and do_sample=False.

  • beam-search multinomial sampling by calling [~generation.GenerationMixin.beam_sample] if num_beams>1 and do_sample=True.

  • diverse beam-search decoding by calling [~generation.GenerationMixin.group_beam_search], if num_beams>1 and num_beam_groups>1.

  • constrained beam-search decoding by calling [~generation.GenerationMixin.constrained_beam_search], if constraints!=None or force_words_ids!=None.

<Tip warning={true}>

Apart from inputs, all the arguments below will default to the value of the attribute of the same name as defined in the model’s config (config.json) which in turn defaults to the [~modeling_utils.PretrainedConfig] of the model.

</Tip>

Most of these parameters are explained in more detail in [this blog post](https://huggingface.co/blog/how-to-generate).
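
The examples further below cover greedy decoding, sampling, and beam search; as a hedged sketch, the remaining modes are selected the same way, purely through keyword arguments (the prompt and checkpoint here are placeholders):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("The coreference system", return_tensors="pt").input_ids

# Contrastive search: triggered by penalty_alpha > 0 together with top_k > 1.
contrastive = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_new_tokens=20)

# Diverse beam search: triggered by num_beams > 1 and num_beam_groups > 1.
diverse = model.generate(
    input_ids,
    num_beams=6,
    num_beam_groups=3,
    diversity_penalty=1.0,
    max_new_tokens=20,
)
print(tokenizer.batch_decode(contrastive, skip_special_tokens=True))
```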

Parameters:
inputs (torch.Tensor of varying shape depending on the modality, optional):

The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models, inputs should be in the format of input_ids. For encoder-decoder models, inputs can represent any of input_ids, input_values, input_features, or pixel_values.

max_length (int, optional, defaults to model.config.max_length):

The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. In general, prefer the use of max_new_tokens, which ignores the number of tokens in the prompt.

max_new_tokens (int, optional):

The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.

min_length (int, optional, defaults to model.config.min_length or 10 if the config does not set any value):

The minimum length of the sequence to be generated.

do_sample (bool, optional, defaults to model.config.do_sample or False if the config does not set any value):

Whether or not to use sampling; use greedy decoding otherwise.

early_stopping (bool, optional, defaults to False):

Whether to stop the beam search when at least num_beams sentences are finished per batch or not.

num_beams (int, optional, defaults to model.config.num_beams or 1 if the config does not set any value):

Number of beams for beam search. 1 means no beam search.

temperature (float, optional, defaults to model.config.temperature or 1.0 if the config does not set any value):

The value used to modulate the next token probabilities.

penalty_alpha (float, optional, defaults to model.config.penalty_alpha or None if the config does not set any value):

The value balances the model confidence and the degeneration penalty in contrastive search decoding.

top_k (int, optional, defaults to model.config.top_k or 50 if the config does not set any value):

The number of highest probability vocabulary tokens to keep for top-k-filtering.

top_p (float, optional, defaults to model.config.top_p or 1.0 if the config does not set any value):

If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.

typical_p (float, optional, defaults to model.config.typical_p or 1.0 if the config does not set any value):

The amount of probability mass from the original distribution to be considered in typical decoding. If set to 1.0 it takes no effect. See [this paper](https://arxiv.org/pdf/2202.00666.pdf) for more details.

repetition_penalty (float, optional, defaults to model.config.repetition_penalty or 1.0 if the config does not set any value):

The parameter for repetition penalty. 1.0 means no penalty. See [this paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.

pad_token_id (int, optional, defaults to model.config.pad_token_id):

The id of the padding token.

bos_token_id (int, optional, defaults to model.config.bos_token_id):

The id of the beginning-of-sequence token.

eos_token_id (int, optional, defaults to model.config.eos_token_id):

The id of the end-of-sequence token.

length_penalty (float, optional, defaults to model.config.length_penalty or 1.0 if the config does not set any value):

Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.

no_repeat_ngram_size (int, optional, defaults to model.config.no_repeat_ngram_size or 0 if the config does not set any value):

If set to int > 0, all ngrams of that size can only occur once.

encoder_no_repeat_ngram_size (int, optional, defaults to model.config.encoder_no_repeat_ngram_size or 0 if the config does not set any value):

If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.

bad_words_ids(List[List[int]], optional, defaults to model.config.bad_words_ids):

List of token ids that are not allowed to be generated. In order to get the token ids of the words that should not appear in the generated text, use tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids.

force_words_ids(List[List[int]] or List[List[List[int]]], optional):

List of token ids that must be generated. If given a List[List[int]], this is treated as a simple list of words that must be included, the opposite to bad_words_ids. If given List[List[List[int]]], this triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081), where one can allow different forms of each word.
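
For example, a small sketch of building both lists with the tokenizer (the word lists and checkpoint are arbitrary placeholders); note that force_words_ids triggers constrained beam search, so num_beams must be greater than 1:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Instantiating the tokenizer with add_prefix_space=True lets it encode bare words
# the way they would appear mid-sentence.
tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The referee said", return_tensors="pt").input_ids
bad_words_ids = tokenizer(["boring", "dull"], add_special_tokens=False).input_ids
force_words_ids = tokenizer(["match"], add_special_tokens=False).input_ids

outputs = model.generate(
    input_ids,
    bad_words_ids=bad_words_ids,      # these token sequences may not be generated
    force_words_ids=force_words_ids,  # these token sequences must appear
    num_beams=4,
    max_new_tokens=20,
)
```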

num_return_sequences(int, optional, defaults to model.config.num_return_sequences or 1 if the config does not set any value):

The number of independently computed returned sequences for each element in the batch.

max_time(float, optional):

The maximum amount of time that you allow the computation to run for, in seconds. Generation will still finish the current pass after the allocated time has passed.

attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional):

Mask to avoid performing attention on padding token indices. Mask values are in [0, 1], 1 for tokens that are not masked, and 0 for masked tokens. If not provided, will default to a tensor the same shape as input_ids that masks the pad token. [What are attention masks?](../glossary#attention-mask)

decoder_start_token_id (int, optional):

If an encoder-decoder model starts decoding with a different token than bos, the id of that token.

use_cache (bool, optional, defaults to True):

Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding.

num_beam_groups (int, optional, defaults to model.config.num_beam_groups or 1 if the config does not set any value):

Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details.

diversity_penalty (float, optional, defaults to model.config.diversity_penalty or 0.0 if the config does not set any value):

This value is subtracted from a beam’s score if it generates the same token as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled.

prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional):

If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step, conditioned on the batch ID batch_id and the previously generated tokens input_ids. This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904).
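
As an illustrative sketch only (the allowed vocabulary here is a toy restriction, not part of the library), the callback receives the batch index and the tokens generated so far and returns the ids allowed next:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("The sky is", return_tensors="pt").input_ids

# Toy constraint: only ever allow the tokens of " blue" and " clear" to be generated.
allowed_ids = tokenizer([" blue", " clear"], add_special_tokens=False).input_ids
allowed_ids = [tok for seq in allowed_ids for tok in seq]

def prefix_allowed_tokens_fn(batch_id, generated_ids):
    # batch_id: index of the sequence in the batch; generated_ids: tokens so far.
    return allowed_ids

outputs = model.generate(
    input_ids,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
    num_beams=2,
    max_new_tokens=5,
)
```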

logits_processor (LogitsProcessorList, optional):

Custom logits processors that complement the default logits processors built from arguments and a model’s config. If a logits processor that is already created with the arguments or a model’s config is passed, an error is thrown. This feature is intended for advanced users.

renormalize_logits (bool, optional, defaults to False):

Whether to renormalize the logits after applying all the logits processors or warpers (including the custom ones). It’s highly recommended to set this flag to True, as the search algorithms assume the score logits are normalized, but some logits processors or warpers break the normalization.

stopping_criteria (StoppingCriteriaList, optional):

Custom stopping criteria that complement the default stopping criteria built from arguments and a model’s config. If a stopping criterion that is already created with the arguments or a model’s config is passed, an error is thrown. This feature is intended for advanced users.

constraints (List[Constraint], optional):

Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by Constraint objects, in the most sensible way possible.

output_attentions (bool, optional, defaults to model.config.output_attentions or False if the config does not set any value):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to model.config.output_hidden_states or False if the config does not set any value):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to model.config.output_scores or False if the config does not set any value):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to model.config.return_dict_in_generate or False if the config does not set any value):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

forced_bos_token_id (int, optional, defaults to model.config.forced_bos_token_id):

The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target language token.

forced_eos_token_id (int, optional, defaults to model.config.forced_eos_token_id):

The id of the token to force as the last generated token when max_length is reached.

remove_invalid_values (bool, optional, defaults to model.config.remove_invalid_values):

Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

exponential_decay_length_penalty (tuple(int, float), optional, defaults to model.config.exponential_decay_length_penalty):

This tuple adds an exponentially increasing length penalty after a certain number of tokens have been generated. The tuple shall consist of (start_index, decay_factor), where start_index indicates where the penalty starts and decay_factor represents the factor of exponential decay.

suppress_tokens (List[int], optional, defaults to model.config.suppress_tokens):

A list of tokens that will be suppressed at generation. The SuppressTokens logit processor will set their log probs to -inf so that they are not sampled.

begin_suppress_tokens (List[int], optional, defaults to model.config.begin_suppress_tokens):

A list of tokens that will be suppressed at the beginning of the generation. The SuppressBeginTokens logit processor will set their log probs to -inf so that they are not sampled.

forced_decoder_ids (List[List[int]], optional, defaults to model.config.forced_decoder_ids):

A list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. For example, [[1, 123]] means the second generated token will always be a token of index 123.

model_kwargs:

Additional model specific kwargs will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.

Return:

[~utils.ModelOutput] or torch.LongTensor: A [~utils.ModelOutput] (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor.

If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible [~utils.ModelOutput] types are:

  • [~generation.GreedySearchDecoderOnlyOutput],

  • [~generation.SampleDecoderOnlyOutput],

  • [~generation.BeamSearchDecoderOnlyOutput],

  • [~generation.BeamSampleDecoderOnlyOutput]

If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible [~utils.ModelOutput] types are:

  • [~generation.GreedySearchEncoderDecoderOutput],

  • [~generation.SampleEncoderDecoderOutput],

  • [~generation.BeamSearchEncoderDecoderOutput],

  • [~generation.BeamSampleEncoderDecoderOutput]

Examples:

Greedy Decoding:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> prompt = "Today I believe we can finally"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> # generate up to 30 tokens
>>> outputs = model.generate(input_ids, do_sample=False, max_length=30)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today I believe we can finally get to the point where we can make a difference in the lives of the people of the United States of America.\n']
```

Multinomial Sampling:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> prompt = "Today I believe we can finally"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> # sample up to 30 tokens
>>> torch.manual_seed(0)  
>>> outputs = model.generate(input_ids, do_sample=True, max_length=30)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today I believe we can finally get rid of discrimination," said Rep. Mark Pocan (D-Wis.).\n\n"Just look at the']
```

Beam-search decoding:

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
>>> sentence = "Paris is one of the densest populated areas in Europe."
>>> input_ids = tokenizer(sentence, return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, num_beams=5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Paris ist eines der dichtesten besiedelten Gebiete Europas.']
```
get_extended_attention_mask(attention_mask: torch.Tensor, input_shape: Tuple[int], device: torch.device = None, dtype: torch.dtype = None)torch.Tensor

Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

Arguments:
attention_mask (torch.Tensor):

Mask with ones indicating tokens to attend to, zeros for tokens to ignore.

input_shape (Tuple[int]):

The shape of the input to the model.

Returns:

torch.Tensor The extended attention mask, with the same dtype as attention_mask.dtype.
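
A short, hedged usage sketch (the checkpoint and mask are placeholders): the returned mask is broadcastable against attention scores, with attended positions at 0.0 and masked positions at a large negative value:

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

attention_mask = torch.tensor([[1, 1, 1, 0]])  # last position is padding
extended = model.get_extended_attention_mask(attention_mask, attention_mask.shape)
print(extended.shape)  # torch.Size([1, 1, 1, 4])
```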

get_head_mask(head_mask: Optional[torch.Tensor], num_hidden_layers: int, is_attention_chunked: bool = False)torch.Tensor

Prepare the head mask if needed.

Args:
head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional):

The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).

num_hidden_layers (int):

The number of hidden layers in the model.

is_attention_chunked: (bool, optional, defaults to False):

Whether or not the attention scores are computed by chunks.

Returns:

torch.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] or list with [None] for each layer.

get_input_embeddings()torch.nn.modules.module.Module

Returns the model’s input embeddings.

Returns:

nn.Module: A torch module mapping vocabulary to hidden states.

get_memory_footprint(return_buffers=True)

Get the memory footprint of a model. This will return the memory footprint of the current model in bytes. Useful to benchmark the memory footprint of the current model and design some tests. Solution inspired from the PyTorch discussions: https://discuss.pytorch.org/t/gpu-memory-that-model-uses/56822/2

Arguments:
return_buffers (bool, optional, defaults to True):

Whether to return the size of the buffer tensors in the computation of the memory footprint. Buffers are tensors that do not require gradients and not registered as parameters. E.g. mean and std in batch norm layers. Please see: https://discuss.pytorch.org/t/what-pytorch-means-by-buffers/120266/2
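
For instance (the checkpoint name is just a placeholder):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Parameters plus buffers, in bytes; pass return_buffers=False to count parameters only.
footprint = model.get_memory_footprint(return_buffers=True)
print(f"{footprint / 1024 ** 2:.1f} MB")
```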

get_output_embeddings()torch.nn.modules.module.Module

Returns the model’s output embeddings.

Returns:

nn.Module: A torch module mapping hidden states to vocabulary.

get_position_embeddings()Union[torch.nn.modules.sparse.Embedding, Tuple[torch.nn.modules.sparse.Embedding]]
gradient_checkpointing_disable()

Deactivates gradient checkpointing for the current model.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

gradient_checkpointing_enable()

Activates gradient checkpointing for the current model.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

greedy_search(input_ids: torch.LongTensor, logits_processor: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = False, **model_kwargs)Union[transformers.generation.utils.GreedySearchEncoderDecoderOutput, transformers.generation.utils.GreedySearchDecoderOnlyOutput, torch.LongTensor]

Generates sequences of token ids for models with a language modeling head using greedy decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific keyword arguments will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[~generation.GreedySearchDecoderOnlyOutput], [~generation.GreedySearchEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.GreedySearchDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.GreedySearchEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     StoppingCriteriaList,
...     MaxLengthCriteria,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
>>> model.config.pad_token_id = model.config.eos_token_id
>>> input_prompt = "It might be possible to"
>>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id),
...     ]
... )
>>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
>>> outputs = model.greedy_search(
...     input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["It might be possible to get a better understanding of the nature of the problem, but it's not"]
```

group_beam_search(input_ids: torch.LongTensor, beam_scorer: transformers.generation.beam_search.BeamScorer, logits_processor: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = False, **model_kwargs)Union[transformers.generation.utils.BeamSearchEncoderDecoderOutput, transformers.generation.utils.BeamSearchDecoderOnlyOutput, torch.LongTensor]

Generates sequences of token ids for models with a language modeling head using diverse beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

beam_scorer (BeamScorer):

A derived instance of [BeamScorer] that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of [BeamScorer] should be read.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific kwargs that will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[~generation.BeamSearchDecoderOnlyOutput], [~generation.BeamSearchEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.BeamSearchDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.BeamSearchEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     HammingDiversityLogitsProcessor,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
>>> # lets run diverse beam search using 6 beams
>>> num_beams = 6
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(
...         encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
...     )
... }
>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     max_length=model.config.max_length,
...     num_beams=num_beams,
...     device=model.device,
...     num_beam_groups=3,
... )
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [
...         HammingDiversityLogitsProcessor(5.5, num_beams=6, num_beam_groups=3),
...         MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
...     ]
... )
>>> outputs = model.group_beam_search(
...     input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Wie alt bist du?']
```
half(*args)

Casts all floating point parameters and buffers to half datatype.

Returns:

Module: self

init_weights()

If needed, prunes and maybe initializes weights.

invert_attention_mask(encoder_attention_mask: torch.Tensor)torch.Tensor

Invert an attention mask (e.g., switches 0. and 1.).

Args:

encoder_attention_mask (torch.Tensor): An attention mask.

Returns:

torch.Tensor: The inverted attention mask.

property is_gradient_checkpointing

Whether gradient checkpointing is activated for this model or not.

Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.

is_parallelizable = False
load_state_dict(state_dict: OrderedDict[str, Tensor], strict: bool = True)

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Args:
state_dict (dict): a dict containing parameters and persistent buffers.

strict (bool, optional): whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

Returns:
NamedTuple with missing_keys and unexpected_keys fields:
  • missing_keys is a list of str containing the missing keys

  • unexpected_keys is a list of str containing the unexpected keys

main_input_name = 'input_ids'
modules()Iterator[torch.nn.modules.module.Module]

Returns an iterator over all modules in the network.

Yields:

Module: a module in the network

Note:

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix: str = '', recurse: bool = True)Iterator[Tuple[str, torch.Tensor]]

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Args:

prefix (str): prefix to prepend to all buffer names.

recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields:

(string, torch.Tensor): Tuple containing the name and buffer

Example:

>>> for name, buf in self.named_buffers():
>>>    if name in ['running_var']:
>>>        print(buf.size())
named_children()Iterator[Tuple[str, torch.nn.modules.module.Module]]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields:

(string, Module): Tuple containing a name and child module

Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo: Optional[Set[torch.nn.modules.module.Module]] = None, prefix: str = '')

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Yields:

(string, Module): Tuple of name and module

Note:

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix: str = '', recurse: bool = True)Iterator[Tuple[str, torch.nn.parameter.Parameter]]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Args:

prefix (str): prefix to prepend to all parameter names.

recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields:

(string, Parameter): Tuple containing the name and parameter

Example:

>>> for name, param in self.named_parameters():
>>>    if name in ['bias']:
>>>        print(param.size())
num_parameters(only_trainable: bool = False, exclude_embeddings: bool = False)int

Get number of (optionally, trainable or non-embeddings) parameters in the module.

Args:
only_trainable (bool, optional, defaults to False):

Whether or not to return only the number of trainable parameters

exclude_embeddings (bool, optional, defaults to False):

Whether or not to return only the number of non-embeddings parameters

Returns:

int: The number of parameters.
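
A quick sketch of the three counts (the checkpoint name is a placeholder):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

print(model.num_parameters())                         # all parameters
print(model.num_parameters(only_trainable=True))      # parameters with requires_grad=True
print(model.num_parameters(exclude_embeddings=True))  # everything except embedding matrices
```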

parameters(recurse: bool = True)Iterator[torch.nn.parameter.Parameter]

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Args:
recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields:

Parameter: module parameter

Example:

>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
post_init()

A method executed at the end of each Transformer model initialization, to execute code that needs the model’s modules properly initialized (such as weight initialization).

prune_heads(heads_to_prune: Dict[int, List[int]])

Prunes heads of the base model.

Arguments:
heads_to_prune (Dict[int, List[int]]):

Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
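
Mirroring the dictionary format above, a minimal sketch (the checkpoint name is a placeholder):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Prune heads 0 and 2 on layer 1, and heads 2 and 3 on layer 2.
model.prune_heads({1: [0, 2], 2: [2, 3]})
```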

push_to_hub(repo_id: str, use_temp_dir: Optional[bool] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Optional[Union[bool, str]] = None, max_shard_size: Optional[Union[int, str]] = '10GB', create_pr: bool = False, **deprecated_kwargs)str

Upload the model file to the 🤗 Model Hub while synchronizing a local clone of the repo in repo_path_or_name.

Parameters:
repo_id (str):

The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization.

use_temp_dir (bool, optional):

Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.

commit_message (str, optional):

Message to commit while pushing. Will default to “Upload model”.

private (bool, optional):

Whether or not the repository created should be private.

use_auth_token (bool or str, optional):

The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

max_shard_size (int or str, optional, defaults to “10GB”):

Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, it needs to be digits followed by a unit (like “5MB”).

create_pr (bool, optional, defaults to False):

Whether or not to create a PR with the uploaded files or directly commit.

Examples:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")

# Push the model to your namespace with the name "my-finetuned-bert".
model.push_to_hub("my-finetuned-bert")

# Push the model to an organization with the name "my-finetuned-bert".
model.push_to_hub("huggingface/my-finetuned-bert")
```

register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ], torch.Tensor], Union[Tuple[torch.Tensor, ], torch.Tensor]], Union[None, torch.Tensor]])torch.utils.hooks.RemovableHandle

Registers a backward hook on the module.

This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions.

Returns:
torch.utils.hooks.RemovableHandle:

a handle that can be used to remove the added hook by calling handle.remove()

register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True)None

Adds a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.

Buffers can be accessed as attributes using given names.

Args:
name (string): name of the buffer. The buffer can be accessed from this module using the given name.

tensor (Tensor): buffer to be registered.

persistent (bool): whether the buffer is part of this module’s state_dict.

Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))
classmethod register_for_auto_class(auto_class='AutoModel')

Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class.

<Tip warning={true}>

This API is experimental and may have some slight breaking changes in the next releases.

</Tip>

Args:
auto_class (str or type, optional, defaults to “AutoModel”):

The auto class to register this new model with.

register_forward_hook(hook: Callable[[], None])torch.utils.hooks.RemovableHandle

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None or modified output

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input in place, but that will have no effect on forward since this is called after forward() is called.

Returns:
torch.utils.hooks.RemovableHandle:

a handle that can be used to remove the added hook by calling handle.remove()
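
A small self-contained sketch with a plain torch module (the hook only logs shapes and is purely illustrative):

```python
import torch
from torch import nn

linear = nn.Linear(4, 2)

def log_output_shape(module, inputs, output):
    # Runs after every forward pass; returning None leaves the output unchanged.
    print(type(module).__name__, tuple(output.shape))

handle = linear.register_forward_hook(log_output_shape)
linear(torch.randn(3, 4))  # prints: Linear (3, 2)
handle.remove()            # detach the hook once it is no longer needed
```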

register_forward_pre_hook(hook: Callable[[], None])torch.utils.hooks.RemovableHandle

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None or modified input

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).

Returns:
torch.utils.hooks.RemovableHandle:

a handle that can be used to remove the added hook by calling handle.remove()

register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ], torch.Tensor], Union[Tuple[torch.Tensor, ], torch.Tensor]], Union[None, torch.Tensor]])torch.utils.hooks.RemovableHandle

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Returns:
torch.utils.hooks.RemovableHandle:

a handle that can be used to remove the added hook by calling handle.remove()

register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter])None

Adds a parameter to the module.

The parameter can be accessed as an attribute using given name.

Args:
name (string): name of the parameter. The parameter can be accessed from this module using the given name.

param (Parameter): parameter to be added to the module.

requires_grad_(requires_grad: bool = True)T

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

Args:
requires_grad (bool): whether autograd should record operations on parameters in this module. Default: True.

Returns:

Module: self
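
A hedged sketch of the typical freezing use case during fine-tuning (the checkpoint name is a placeholder):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Freeze the encoder, e.g. to train only task-specific layers stacked on top of it.
model.requires_grad_(False)

# Unfreeze again later for full fine-tuning.
model.requires_grad_(True)
```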

reset_memory_hooks_state()

Reset the mem_rss_diff attribute of each module (see [~modeling_utils.ModuleUtilsMixin.add_memory_hooks]).

resize_position_embeddings(new_num_position_embeddings: int)
resize_token_embeddings(new_num_tokens: Optional[int] = None)torch.nn.modules.sparse.Embedding

Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size.

Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.

Arguments:
new_num_tokens (int, optional):

The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens torch.nn.Embedding module of the model without doing anything.

Return:

torch.nn.Embedding: Pointer to the input tokens Embeddings Module of the model.
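
For example, after adding new tokens to the tokenizer (the added tokens and checkpoint are placeholders, not part of this module’s API):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens, then grow the input embedding matrix to match the new vocabulary size.
tokenizer.add_tokens(["[SPEAKER1]", "[SPEAKER2]"])
embeddings = model.resize_token_embeddings(len(tokenizer))
print(embeddings.num_embeddings)  # new vocabulary size
```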

retrieve_modules_from_names(names, add_prefix=False, remove_prefix=False)
sample(input_ids: torch.LongTensor, logits_processor: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None, logits_warper: Optional[transformers.generation.logits_process.LogitsProcessorList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = False, **model_kwargs)Union[transformers.generation.utils.SampleEncoderDecoderOutput, transformers.generation.utils.SampleDecoderOnlyOutput, torch.LongTensor]

Generates sequences of token ids for models with a language modeling head using multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Parameters:
input_ids (torch.LongTensor of shape (batch_size, sequence_length)):

The sequence used as a prompt for the generation.

logits_processor (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsProcessor] used to modify the prediction scores of the language modeling head applied at each generation step.

stopping_criteria (StoppingCriteriaList, optional):

An instance of [StoppingCriteriaList]. List of instances of class derived from [StoppingCriteria] used to tell if the generation loop should stop.

logits_warper (LogitsProcessorList, optional):

An instance of [LogitsProcessorList]. List of instances of class derived from [LogitsWarper] used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.

max_length (int, optional, defaults to 20):

DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.

pad_token_id (int, optional):

The id of the padding token.

eos_token_id (int, optional):

The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False):

Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False):

Whether or not to return a [~utils.ModelOutput] instead of a plain tuple.

synced_gpus (bool, optional, defaults to False):

Whether to continue running the while loop until max_length (needed for ZeRO stage 3)

model_kwargs:

Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.

Return:

[~generation.SampleDecoderOnlyOutput], [~generation.SampleEncoderDecoderOutput] or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour) or a [~generation.SampleDecoderOnlyOutput] if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a [~generation.SampleEncoderDecoderOutput] if model.config.is_encoder_decoder=True.

Examples:

```python
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     TopKLogitsWarper,
...     TemperatureLogitsWarper,
...     StoppingCriteriaList,
...     MaxLengthCriteria,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
>>> model.config.pad_token_id = model.config.eos_token_id
>>> input_prompt = "Today is a beautiful day, and"
>>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
...     ]
... )
>>> # instantiate logits processors
>>> logits_warper = LogitsProcessorList(
...     [
...         TopKLogitsWarper(50),
...         TemperatureLogitsWarper(0.7),
...     ]
... )
>>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
>>> torch.manual_seed(0)  
>>> outputs = model.sample(
...     input_ids,
...     logits_processor=logits_processor,
...     logits_warper=logits_warper,
...     stopping_criteria=stopping_criteria,
... )
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today is a beautiful day, and a wonderful day.\n\nI was lucky enough to meet the']
```
save_pretrained(save_directory: Union[str, os.PathLike], is_main_process: bool = True, state_dict: Optional[dict] = None, save_function: Callable = torch.save, push_to_hub: bool = False, max_shard_size: Union[int, str] = '10GB', safe_serialization: bool = False, **kwargs)

Save a model and its configuration file to a directory, so that it can be re-loaded using the [~PreTrainedModel.from_pretrained] class method.

Arguments:
save_directory (str or os.PathLike):

Directory to which to save. Will be created if it doesn’t exist.

is_main_process (bool, optional, defaults to True):

Whether the process calling this is the main process or not. Useful in distributed training, e.g. on TPUs, where you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.

state_dict (nested dictionary of torch.Tensor):

The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).

save_function (Callable):

The function to use to save the state dictionary. Useful in distributed training like on TPUs, when one needs to replace torch.save by another method.

push_to_hub (bool, optional, defaults to False):

Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).

max_shard_size (int or str, optional, defaults to “10GB”):

The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, it needs to be digits followed by a unit (like “5MB”).

<Tip warning={true}>

If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size.

</Tip>

safe_serialization (bool, optional, defaults to False):

Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).

kwargs:

Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.
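
A minimal save-and-reload sketch (the directory name is a placeholder):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Write the weights and config to a local directory, sharding checkpoints above 2GB...
model.save_pretrained("./my-coref-encoder", max_shard_size="2GB")

# ...and reload them later with from_pretrained.
reloaded = AutoModel.from_pretrained("./my-coref-encoder")
```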

set_input_embeddings(value: torch.nn.modules.module.Module)

Set model’s input embeddings.

Args:

value (nn.Module): A module mapping vocabulary to hidden states.

share_memory()T
state_dict(destination=None, prefix='', keep_vars=False)

Returns a dictionary containing a whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names.

Returns:
dict:

a dictionary containing a whole state of the module

Example:

>>> module.state_dict().keys()
['bias', 'weight']
supports_gradient_checkpointing = False
tie_weights()

Tie the weights between the input embeddings and the output embeddings.

If the torchscript flag is set in the configuration, TorchScript can’t handle parameter sharing, so we clone the weights instead.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Args:
device (torch.device): the desired device of the parameters and buffers in this module

dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module

tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns:

Module: self

Examples:

>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
train(mode: bool = True)T

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Args:
mode (bool): whether to set training mode (True) or evaluation mode (False). Default: True.

Returns:

Module: self

type(dst_type: Union[torch.dtype, str])T

Casts all parameters and buffers to dst_type.

Args:

dst_type (type or string): the desired type

Returns:

Module: self

xpu(device: Optional[Union[int, torch.device]] = None)T

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.

Arguments:
device (int, optional): if specified, all parameters will be copied to that device

Returns:

Module: self

zero_grad(set_to_none: bool = False)None

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Args:
set_to_none (bool): instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
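
In context, a plain-torch training-step sketch (the tiny model and data are placeholders):

```python
import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

# Clear gradients between steps; set_to_none=True frees them instead of zero-filling.
model.zero_grad(set_to_none=True)
```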

training: bool
mangoes.modeling.coref.bucket_distance(offsets)

offsets: tensor of distance offsets between spans, with shape [num_spans1, num_spans2]