primeqa.mrc.processors.preprocessors.eli5_fid.ELI5FiDPreprocessor
- class primeqa.mrc.processors.preprocessors.eli5_fid.ELI5FiDPreprocessor(tokenizer: transformers.tokenization_utils_fast.PreTrainedTokenizerFast, stride: int, max_seq_len: Optional[int] = None, negative_sampling_prob_when_has_answer: float = 0.01, negative_sampling_prob_when_no_answer: float = 0.04, num_workers: Optional[int] = None, load_from_cache_file: bool = True, max_q_char_len: int = 128, single_context_multiple_passages: bool = False, max_contexts: Optional[int] = None, max_answer_len: Optional[int] = None)
Bases: primeqa.mrc.processors.preprocessors.abstract.AbstractPreProcessor
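A minimal construction sketch, assuming a fast tokenizer loaded via transformers; the checkpoint name and argument values below are illustrative, not defaults prescribed by PrimeQA:

```python
# Hypothetical setup; checkpoint and argument values are illustrative.
from transformers import AutoTokenizer
from primeqa.mrc.processors.preprocessors.eli5_fid import ELI5FiDPreprocessor

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # any fast tokenizer
preprocessor = ELI5FiDPreprocessor(
    tokenizer=tokenizer,
    stride=128,          # overlap between adjacent chunks of long inputs
    max_seq_len=256,     # encoded inputs are padded/truncated to this length
    max_contexts=3,      # keep at most 3 retrieved passages per question
    max_answer_len=256,  # cap on the tokenized answer length
)
```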
Methods

| Method | Summary |
| --- | --- |
| adapt_dataset | Convert dataset into standardized format accepted by the preprocessor. |
| encode_passages | Encode and pad all passages in a batch to max_length. |
| label_features_for_subsampling | Annotate each training feature with a 'subsample_type' of type SubsampleType for subsampling. |
| preprocess_eli5_batch_fid |  |
| preprocess_eli5_function_fid |  |
| process_eval | Process eval examples into features. |
| process_train | Process training examples into features. |
| set_max_contexts |  |
| subsample_features | Subsample training features according to 'subsample_type'. |
| validate_schema | Validate the data schema is correct for this preprocessor. |
- adapt_dataset(dataset: datasets.arrow_dataset.Dataset, is_train: bool) → datasets.arrow_dataset.Dataset
Convert dataset into standardized format accepted by the preprocessor. This method will likely need to be overridden when subclassing.
- Parameters
dataset – data to adapt.
is_train – whether the dataset is for training.
- Returns
Adapted dataset.
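A hedged usage sketch, assuming an ELI5-style split loaded with the Hugging Face datasets library (the dataset name and slice are illustrative):

```python
# Hypothetical data loading; adapt_dataset standardizes the raw schema.
from datasets import load_dataset

raw_train = load_dataset("eli5", split="train_eli5[:100]")  # illustrative name/split
adapted = preprocessor.adapt_dataset(raw_train, is_train=True)
```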
- encode_passages(batch_text_passages)
- Parameters
batch_text_passages – passages to encode, shaped (bsz, n_doc). All passages are encoded and padded to max_length; not using max-length padding would complicate the FiD data collator, and the input to the FiD system does not need to be padded again.
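A toy sketch with bsz = 1 and n_doc = 2. The question/passage text format and the return type (token-id and attention-mask tensors in typical FiD-style code) are assumptions, not documented here:

```python
# Toy batch shaped (bsz=1, n_doc=2); the text format is illustrative.
batch_text_passages = [
    [
        "question: why is the sky blue? context: Rayleigh scattering ...",
        "question: why is the sky blue? context: Sunlight is composed of ...",
    ],
]
# Assumed to return padded token ids and attention masks, as in
# FiD-style implementations; check the source before relying on this.
encoded = preprocessor.encode_passages(batch_text_passages)
```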
- label_features_for_subsampling(tokenized_examples: transformers.tokenization_utils_base.BatchEncoding, examples: datasets.arrow_dataset.Batch) → transformers.tokenization_utils_base.BatchEncoding
Annotate each training feature with a ‘subsample_type’ of type SubsampleType for subsampling.
- Parameters
tokenized_examples – featurized examples to annotate.
examples – original examples corresponding to the tokenized_examples features.
- Returns
tokenized_examples annotated with ‘subsample_type’ for subsampling.
- process_eval(examples: datasets.arrow_dataset.Dataset) → Tuple[datasets.arrow_dataset.Dataset, datasets.arrow_dataset.Dataset]
Process eval examples into features.
- Parameters
examples – examples to process into features.
- Returns
tuple (examples, features) comprising examples adapted into standardized format and processed input features for model.
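A sketch of the eval-side call, assuming raw_validation is a datasets.Dataset loaded like raw_train above:

```python
# Returns the adapted examples plus the features fed to the model.
eval_examples, eval_features = preprocessor.process_eval(raw_validation)
```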
- process_train(examples: datasets.arrow_dataset.Dataset) → Tuple[datasets.arrow_dataset.Dataset, datasets.arrow_dataset.Dataset]
Process training examples into features.
- Parameters
examples – examples to process into features.
- Returns
tuple (examples, features) comprising examples adapted into standardized format and processed input features for model.
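The training-side counterpart; in PrimeQA preprocessors negative-feature subsampling is typically applied as part of this call (an assumption here, not stated on this page):

```python
# Adapted examples and model-ready training features.
train_examples, train_features = preprocessor.process_train(raw_train)
```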
- subsample_features(dataset: datasets.arrow_dataset.Dataset) → datasets.arrow_dataset.Dataset
Subsample training features according to ‘subsample_type’:
- All positive features are kept.
- Negative features from an example that has an answer are each kept with probability self._negative_sampling_prob_when_has_answer.
- Negative features from an example that has no answer are each kept with probability self._negative_sampling_prob_when_no_answer.
- Parameters
dataset – features to subsample.
- Returns
subsampled features.
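With the constructor defaults above, negative features are kept with probability 0.01 when the example has an answer and 0.04 when it does not. A direct-call sketch, assuming the features already carry the ‘subsample_type’ column added by label_features_for_subsampling:

```python
# Subsampling is random, so the kept count varies run to run.
subsampled = preprocessor.subsample_features(train_features)
print(f"kept {len(subsampled)} of {len(train_features)} features")
```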
- validate_schema(dataset: datasets.arrow_dataset.Dataset, is_train: bool, pre_adaptation: bool = True) → None
Validate the data schema is correct for this preprocessor.
- Parameters
dataset – data to validate the schema of.
is_train – whether the data is for training.
pre_adaptation – whether adapt_dataset has been called. This allows for optional fields (e.g. example_id) to be imputed during adaptation.
- Returns
None
- Raises
ValueError – The data is not in the correct schema.
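A defensive-usage sketch for validating a custom dataset before processing:

```python
# Raises ValueError when required columns are missing or mistyped.
try:
    preprocessor.validate_schema(raw_train, is_train=True, pre_adaptation=True)
except ValueError as err:
    print(f"Dataset does not match the expected schema: {err}")
```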