primeqa.mrc.trainers.mrc_mskd.MSKD_MRCTrainer#
- class primeqa.mrc.trainers.mrc_mskd.MSKD_MRCTrainer(*args, eval_examples=None, eval_dataset=None, post_process_function=None, **kwargs)#
Bases: primeqa.mrc.trainers.mrc.MRCTrainer
Methods
- add_callback: Add a callback to the current list of [~transformers.TrainerCallback].
- autocast_smart_context_manager: A helper wrapper that creates an appropriate context manager for autocast while feeding it the desired arguments, depending on the situation.
- call_model_init
- compute_distillation_loss
- compute_erm_loss: How the loss is computed by Trainer.
- compute_loss: How the loss is computed by Trainer.
- create_model_card
- create_optimizer: Set up the optimizer.
- create_optimizer_and_scheduler: Set up the optimizer and the learning rate scheduler.
- create_scheduler: Set up the scheduler.
- evaluate: Evaluate the model using the eval data passed to the method, if given.
- evaluation_loop: Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
- floating_point_ops: For models that inherit from [PreTrainedModel], uses that method to compute the number of floating point operations for every backward + forward pass.
- get_eval_dataloader: Returns the evaluation torch DataLoader.
- get_optimizer_cls_and_kwargs: Returns the optimizer class and optimizer parameters based on the training arguments.
- get_test_dataloader: Returns the test [~torch.utils.data.DataLoader].
- get_train_dataloader: Returns the training torch DataLoader.
- hyperparameter_search: Launch a hyperparameter search using optuna, Ray Tune, or SigOpt.
- init_git_repo: Initializes a git repo in self.args.hub_model_id.
- is_local_process_zero: Whether or not this process is the local main process (e.g., on one machine if training in a distributed fashion on several machines).
- is_world_process_zero: Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process).
- log: Log logs on the various objects watching training.
- log_metrics: Log metrics in a specially formatted way.
- metrics_format: Reformat Trainer metrics values to a human-readable format.
- num_examples: Helper to get the number of samples in a [~torch.utils.data.DataLoader] by accessing its dataset.
- pop_callback: Remove a callback from the current list of [~transformers.TrainerCallback] and return it.
- predict: Obtain predictions using the eval data passed to the method, if given.
- prediction_loop: Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
- prediction_step: Perform an evaluation step on model using inputs.
- push_to_hub: Upload self.model and self.tokenizer to the 🤗 model hub on the repo self.args.hub_model_id.
- remove_callback: Remove a callback from the current list of [~transformers.TrainerCallback].
- save_metrics: Save metrics into a JSON file for that split, e.g. train_results.json.
- save_model: Will save the model, so you can reload it using from_pretrained().
- save_state: Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.
- store_flos
- train: Main training entry point.
- training_step: Perform a training step on a batch of inputs.
- add_callback(callback)#
Add a callback to the current list of [~transformers.TrainerCallback].
- Parameters
callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will instantiate a member of that class.
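As an illustration, a minimal callback might record training losses as they are logged. The sketch below is deliberately dependency-free and duck-typed; in real use it would subclass transformers.TrainerCallback, and the class name LossLoggerCallback is a hypothetical example, not part of primeqa:

```python
class LossLoggerCallback:
    """Sketch of a Trainer callback; in practice this would subclass
    transformers.TrainerCallback. Hypothetical example, not primeqa code."""

    def __init__(self):
        self.losses = []

    def on_log(self, args, state, control, logs=None, **kwargs):
        # The Trainer calls on_log with the dict of values it is about to log.
        if logs and "loss" in logs:
            self.losses.append(logs["loss"])
```

It would then be registered with trainer.add_callback(LossLoggerCallback()).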
- autocast_smart_context_manager()#
A helper wrapper that creates an appropriate context manager for autocast while feeding it the desired arguments, depending on the situation.
- compute_erm_loss(model, inputs, return_outputs=False)#
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
- compute_loss(model, inputs, return_outputs=False)#
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
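The Methods list above also includes compute_distillation_loss, which carries no docstring here. A typical soft-label distillation term compares temperature-softened teacher and student distributions with a KL divergence scaled by T². The following is a minimal pure-Python sketch of that standard formulation, not the actual primeqa implementation (which operates on torch tensors):

```python
import math

def softened_softmax(logits, temperature=1.0):
    """Softmax over a list of logits, softened by a temperature T >= 1."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T**2 as in standard knowledge distillation."""
    p = softened_softmax(teacher_logits, temperature)  # teacher targets
    q = softened_softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

The loss is zero when student and teacher logits agree and positive otherwise; the T² factor keeps gradient magnitudes comparable across temperatures.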
- create_optimizer()#
Set up the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method in a subclass.
- create_optimizer_and_scheduler(num_training_steps: int)#
Set up the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method (or create_optimizer and/or create_scheduler) in a subclass.
- create_scheduler(num_training_steps: int, optimizer: Optional[torch.optim.optimizer.Optimizer] = None)#
Set up the scheduler. The optimizer of the trainer must have been set up either before this method is called or passed as an argument.
- Parameters
num_training_steps (int) – The number of training steps to do.
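The default scheduler in transformers is a linear warmup/decay schedule. Its learning-rate multiplier can be sketched as follows; this is a simplified stand-in for the lambda behind transformers' get_linear_schedule_with_warmup, and treating that as the default here is an assumption:

```python
def linear_schedule_with_warmup(step, num_warmup_steps, num_training_steps):
    """Learning-rate multiplier: ramps linearly from 0 to 1 over the
    warmup steps, then decays linearly back to 0 at num_training_steps."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    remaining = num_training_steps - step
    return max(0.0, remaining / max(1, num_training_steps - num_warmup_steps))
```

The actual learning rate at any step is the base learning rate times this multiplier.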
- evaluate(eval_dataset=None, eval_examples=None, ignore_keys=None, metric_key_prefix: str = 'eval')#
Evaluate the model using the eval data passed to this method, if given; otherwise use the data given to the constructor at instantiation.
- Parameters
eval_examples – Each item is an eval examples Dataset from BasePreprocessor.process_eval.
eval_dataset – Each underlying dataset is an eval features Dataset from BasePreprocessor.process_eval.
ignore_keys – Keys to ignore in evaluation loop.
metric_key_prefix – Append this prefix to metric names.
- Returns
Evaluation metrics if post-processing and metric computation functions were provided to constructor at instantiation, otherwise an empty dict.
- evaluation_loop(dataloader: torch.utils.data.dataloader.DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') transformers.trainer_utils.EvalLoopOutput #
Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
Works both with or without labels.
- floating_point_ops(inputs: Dict[str, Union[torch.Tensor, Any]])#
For models that inherit from [PreTrainedModel], uses that method to compute the number of floating point operations for every backward + forward pass. If using another model, either implement such a method in the model or subclass and override this method.
- Parameters
inputs (Dict[str, Union[torch.Tensor, Any]]) – The inputs and targets of the model.
- Returns
The number of floating-point operations.
- Return type
int
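For standard transformer architectures, this estimate commonly follows the 6 × parameters × tokens rule of thumb for one forward + backward pass. A sketch of that approximation (the constant-6 rule is a widely used heuristic; the exact formula a given model uses is an assumption here):

```python
def transformer_flops(num_parameters, batch_size, seq_len):
    """Rough floating-point operations for one forward + backward pass of a
    transformer, using the common 6 * parameters * tokens approximation."""
    tokens = batch_size * seq_len
    return 6 * num_parameters * tokens
```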
- get_eval_dataloader(eval_dataset: Optional[datasets.arrow_dataset.Dataset] = None) torch.utils.data.dataloader.DataLoader #
Returns the evaluation torch DataLoader.
Subclass and override this method if you want to inject some custom behavior.
- Parameters
eval_dataset – If provided, will override self.eval_dataset. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
- static get_optimizer_cls_and_kwargs(args: transformers.training_args.TrainingArguments) Tuple[Any, Any] #
Returns the optimizer class and optimizer parameters based on the training arguments.
- Parameters
args (transformers.training_args.TrainingArguments) – The training arguments for the training session.
- get_test_dataloader(test_dataset: torch.utils.data.dataset.Dataset) torch.utils.data.dataloader.DataLoader #
Returns the test [~torch.utils.data.DataLoader].
Subclass and override this method if you want to inject some custom behavior.
- Parameters
test_dataset (torch.utils.data.Dataset) – The test dataset to use. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
- get_train_dataloader() torch.utils.data.dataloader.DataLoader #
Returns the training torch DataLoader.
Will use no sampler if self.train_dataset does not implement __len__, a random sampler (adapted to distributed training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
- hyperparameter_search(hp_space: Optional[Callable[[optuna.Trial], Dict[str, float]]] = None, compute_objective: Optional[Callable[[Dict[str, float]], float]] = None, n_trials: int = 20, direction: str = 'minimize', backend: Optional[Union[str, transformers.trainer_utils.HPSearchBackend]] = None, hp_name: Optional[Callable[[optuna.Trial], str]] = None, **kwargs) transformers.trainer_utils.BestRun #
Launch a hyperparameter search using optuna, Ray Tune, or SigOpt. The optimized quantity is determined by compute_objective, which defaults to a function returning the evaluation loss when no metric is provided, and the sum of all metrics otherwise.
<Tip warning={true}>
To use this method, you need to have provided a model_init when initializing your [Trainer]: we need to reinitialize the model at each new run. This is incompatible with the optimizers argument, so you need to subclass [Trainer] and override the method [~Trainer.create_optimizer_and_scheduler] for custom optimizer/scheduler.
</Tip>
- Parameters
hp_space (Callable[[“optuna.Trial”], Dict[str, float]], optional) – A function that defines the hyperparameter search space. Will default to [~trainer_utils.default_hp_space_optuna] or [~trainer_utils.default_hp_space_ray] or [~trainer_utils.default_hp_space_sigopt] depending on your backend.
compute_objective (Callable[[Dict[str, float]], float], optional) – A function computing the objective to minimize or maximize from the metrics returned by the evaluate method. Will default to [~trainer_utils.default_compute_objective].
n_trials (int, optional, defaults to 20) – The number of trial runs to test.
direction (str, optional, defaults to “minimize”) – Whether to minimize or maximize the objective. Can be “minimize” or “maximize”; pick “minimize” when optimizing the validation loss, “maximize” when optimizing one or several metrics.
backend (str or [~trainer_utils.HPSearchBackend], optional) – The backend to use for hyperparameter search. Will default to optuna, Ray Tune, or SigOpt, depending on which one is installed. If all are installed, will default to optuna.
kwargs –
Additional keyword arguments passed along to optuna.create_study or ray.tune.run. For more information see:
the documentation of [optuna.create_study](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.create_study.html)
the documentation of [tune.run](https://docs.ray.io/en/latest/tune/api_docs/execution.html#tune-run)
the documentation of [sigopt](https://app.sigopt.com/docs/endpoints/experiments/create)
- Returns
All the information about the best run.
- Return type
[trainer_utils.BestRun]
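A search space for the optuna backend is simply a function from a trial to a dict of hyperparameters. The sketch below is duck-typed so it needs no optuna import; the parameter names and ranges are illustrative assumptions, not primeqa defaults:

```python
def hp_space(trial):
    """Hypothetical search space; `trial` is any object exposing
    optuna.Trial's suggest_* methods."""
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }
```

It would be passed as trainer.hyperparameter_search(hp_space=hp_space, n_trials=20), with a model_init supplied when constructing the Trainer.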
- init_git_repo(at_init: bool = False)#
Initializes a git repo in self.args.hub_model_id.
- Parameters
at_init (bool, optional, defaults to False) – Whether this function is called before any training or not. If self.args.overwrite_output_dir is True and at_init is True, the path to the repo (which is self.args.output_dir) might be wiped out.
- is_local_process_zero() bool #
Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process.
- is_world_process_zero() bool #
Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process).
- log(logs: Dict[str, float]) None #
Log logs on the various objects watching training.
Subclass and override this method to inject custom behavior.
- Parameters
logs (Dict[str, float]) – The values to log.
- log_metrics(split, metrics)#
Log metrics in a specially formatted way.
Under distributed environment this is done only for a process with rank 0.
- Parameters
split (str) – Mode/split name: one of train, eval, test
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict.
Notes on memory reports:
In order to get memory usage report you need to install psutil. You can do that with pip install psutil.
Now when this method is run, you will see a report that will include:

    init_mem_cpu_alloc_delta   = 1301MB
    init_mem_cpu_peaked_delta  = 154MB
    init_mem_gpu_alloc_delta   = 230MB
    init_mem_gpu_peaked_delta  = 0MB
    train_mem_cpu_alloc_delta  = 1345MB
    train_mem_cpu_peaked_delta = 0MB
    train_mem_gpu_alloc_delta  = 693MB
    train_mem_gpu_peaked_delta = 7MB
Understanding the reports:
- The first segment, e.g. train__, tells you which stage the metrics are for. Reports starting with init_ will be added to the first stage that gets run, so if only evaluation is run, the memory usage for __init__ will be reported along with the eval_ metrics.
- The third segment is either cpu or gpu, and tells you whether the metric measures general RAM or gpu0 memory.
- *_alloc_delta is the difference in the used/allocated memory counter between the end and the start of the stage; it can be negative if a function released more memory than it allocated.
- *_peaked_delta is any extra memory that was consumed and then freed, relative to the current allocated memory counter; it is never negative. When you look at the metrics of any stage, adding up alloc_delta + peaked_delta tells you how much memory was needed to complete that stage.
The reporting happens only for process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of work, but it could be not quite so if model parallel is used and then other GPUs may use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUS. Perhaps in the future these reports will evolve to measure those too.
The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory unique to the process and the memory shared with other processes. It is important to note that it does not include swapped-out memory, so the reports could be imprecise.
The CPU peak memory is measured using a sampling thread. Due to python’s GIL it may miss some of the peak memory if that thread didn’t get a chance to run when the highest memory was used. Therefore this report can be less than reality. Using tracemalloc would have reported the exact peak memory, but it doesn’t report memory allocations outside of python. So if some C++ CUDA extension allocated its own memory it won’t be reported. And therefore it was dropped in favor of the memory sampling approach, which reads the current process memory usage.
The GPU allocated and peak memory reporting is done with torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated(). This metric reports only “deltas” for pytorch-specific allocations, as torch.cuda memory management system doesn’t track any memory allocated outside of pytorch. For example, the very first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.
Note that this tracker doesn’t account for memory allocations outside of [Trainer]’s __init__, train, evaluate and predict calls.
Because evaluation calls may happen during train, we can’t handle nested invocations because torch.cuda.max_memory_allocated is a single counter, so if it gets reset by a nested eval call, train’s tracker will report incorrect info. If this [pytorch issue](https://github.com/pytorch/pytorch/issues/16266) gets resolved it will be possible to change this class to be re-entrant. Until then we will only track the outer level of train, evaluate and predict methods. Which means that if eval is called during train, it’s the latter that will account for its memory usage and that of the former.
This also means that if any other tool that is used along the [Trainer] calls torch.cuda.reset_peak_memory_stats, the gpu peak memory stats could be invalid. And the [Trainer] will disrupt the normal behavior of any such tools that rely on calling torch.cuda.reset_peak_memory_stats themselves.
For best performance you may want to consider turning the memory profiling off for production runs.
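The add-up rule above can be made concrete: the memory needed to complete a stage on a device is alloc_delta + peaked_delta. A small helper (the function name is illustrative, not part of the Trainer API):

```python
def stage_memory_needed(metrics, stage, device):
    """Memory needed to complete `stage` on `device`, per the rule
    alloc_delta + peaked_delta described above. Values here are in MB."""
    alloc = metrics[f"{stage}_mem_{device}_alloc_delta"]
    peaked = metrics[f"{stage}_mem_{device}_peaked_delta"]
    return alloc + peaked
```

With the sample report above, the train stage needed 693 + 7 = 700MB of GPU memory.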
- metrics_format(metrics: Dict[str, float]) Dict[str, float] #
Reformat Trainer metrics values to a human-readable format.
- Parameters
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict
- Returns
The reformatted metrics
- Return type
metrics (Dict[str, float])
- num_examples(dataloader: torch.utils.data.dataloader.DataLoader) int #
Helper to get number of samples in a [~torch.utils.data.DataLoader] by accessing its dataset.
Will raise an exception if the underlying dataset does not implement __len__.
- pop_callback(callback)#
Remove a callback from the current list of [~transformers.TrainerCallback] and return it.
If the callback is not found, returns None (and no error is raised).
- Parameters
callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will pop the first member of that class found in the list of callbacks.
- Returns
The callback removed, if found.
- Return type
[~transformers.TrainerCallback]
- predict(eval_dataset=None, eval_examples=None, ignore_keys=None)#
Obtain predictions using the eval data passed to this method, if given; otherwise use the data given to the constructor at instantiation.
- Parameters
eval_examples – Eval examples Dataset from BasePreprocessor.process_eval.
eval_dataset – Eval features Dataset from BasePreprocessor.process_eval.
ignore_keys – Keys to ignore in evaluation loop.
- Returns
Answer predictions if post-processing function was provided to constructor at instantiation, otherwise an empty dict.
- prediction_loop(dataloader: torch.utils.data.dataloader.DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') transformers.trainer_utils.PredictionOutput #
Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
Works both with or without labels.
- prediction_step(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[List[str]] = None) Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]] #
Perform an evaluation step on model using inputs.
Subclass and override to inject custom behavior.
- Parameters
model (nn.Module) – The model to evaluate.
inputs (Dict[str, Union[torch.Tensor, Any]]) –
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
prediction_loss_only (bool) – Whether or not to return the loss only.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
- Returns
A tuple with the loss, logits and labels (each being optional).
- Return type
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
- push_to_hub(commit_message: Optional[str] = 'End of training', blocking: bool = True, **kwargs) str #
Upload self.model and self.tokenizer to the 🤗 model hub on the repo self.args.hub_model_id.
- Parameters
commit_message (str, optional, defaults to “End of training”) – Message to commit while pushing.
blocking (bool, optional, defaults to True) – Whether the function should return only when the git push has finished.
kwargs – Additional keyword arguments passed along to [~Trainer.create_model_card].
- Returns
The url of the commit of your model in the given repository if blocking=False, a tuple with the url of the commit and an object to track the progress of the commit if blocking=True
- remove_callback(callback)#
Remove a callback from the current list of [~transformers.TrainerCallback].
- Parameters
callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will remove the first member of that class found in the list of callbacks.
- save_metrics(split, metrics, combined=True)#
Save metrics into a json file for that split, e.g. train_results.json.
Under distributed environment this is done only for a process with rank 0.
- Parameters
split (str) – Mode/split name: one of train, eval, test, all
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict
combined (bool, optional, defaults to True) – Creates combined metrics by updating all_results.json with metrics of this call
To understand the metrics please read the docstring of [~Trainer.log_metrics]. The only difference is that raw unformatted numbers are saved in the current method.
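The described behavior can be sketched as follows: write {split}_results.json, and when combined=True fold the raw values into all_results.json as well. This is a hypothetical re-implementation for illustration, not the actual Trainer code:

```python
import json
import os

def save_metrics_sketch(output_dir, split, metrics, combined=True):
    """Write `{split}_results.json` and, if `combined`, merge the raw
    (unformatted) metrics into `all_results.json` as well."""
    with open(os.path.join(output_dir, f"{split}_results.json"), "w") as f:
        json.dump(metrics, f, indent=4, sort_keys=True)
    if combined:
        all_path = os.path.join(output_dir, "all_results.json")
        all_metrics = {}
        if os.path.exists(all_path):
            with open(all_path) as f:
                all_metrics = json.load(f)
        all_metrics.update(metrics)  # later calls override earlier keys
        with open(all_path, "w") as f:
            json.dump(all_metrics, f, indent=4, sort_keys=True)
```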
- save_model(output_dir: Optional[str] = None, _internal_call: bool = False)#
Will save the model, so you can reload it using from_pretrained().
Will only save from the main process.
- save_state()#
Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.
Under distributed environment this is done only for a process with rank 0.
- train(resume_from_checkpoint: Optional[Union[str, bool]] = None, trial: Union[optuna.Trial, Dict[str, Any]] = None, ignore_keys_for_eval: Optional[List[str]] = None, **kwargs)#
Main training entry point.
- Parameters
resume_from_checkpoint (str or bool, optional) – If a str, local path to a saved checkpoint as saved by a previous instance of [Trainer]. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of [Trainer]. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (optuna.Trial or Dict[str, Any], optional) – The trial run or the hyperparameter dictionary for hyperparameter search.
ignore_keys_for_eval (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
kwargs – Additional keyword arguments used to hide deprecated arguments.
- training_step(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) torch.Tensor #
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
- Parameters
model (nn.Module) – The model to train.
inputs (Dict[str, Union[torch.Tensor, Any]]) –
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
- Returns
The tensor with training loss on this batch.
- Return type
torch.Tensor