primeqa.mrc.metrics.mlqa.mlqa.MLQA
- class primeqa.mrc.metrics.mlqa.mlqa.MLQA(config_name: Optional[str] = None, keep_in_memory: bool = False, cache_dir: Optional[str] = None, num_process: int = 1, process_id: int = 0, seed: Optional[int] = None, experiment_id: Optional[str] = None, max_concurrent_cache_files: int = 10000, timeout: Union[int, float] = 100, **kwargs)
Bases: datasets.metric.Metric
This metric wraps the official scoring script for version 1 of the MultiLingual Question Answering (MLQA) benchmark.
MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between 4 different languages on average.
Computes MLQA SQuAD scores (F1 and EM).
- Parameters
predictions –
List of question-answer dictionaries with the following key-values:
‘id’: id of the question-answer pair as given in the references (see below)
‘prediction_text’: the text of the answer
references –
List of question-answer dictionaries with the following key-values:
‘id’: id of the question-answer pair (see above)
‘answers’: a Dict in the SQuAD dataset format: {‘text’: list of possible texts for the answer, as a list of strings; ‘answer_start’: list of start positions for the answer, as a list of ints}
‘answer_language’: the language of the answer
Note that answer_start values are not taken into account to compute the metric.
- Returns
‘exact_match’: Exact match (the normalized answer exactly matches the gold answer)
‘f1’: The F-score of predicted tokens versus the gold answer
- Return type
dict
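A minimal usage sketch of the input and output formats described above. The ids, answer text, and language code are hypothetical, and it assumes the class can be instantiated directly with no arguments, as datasets.Metric subclasses generally can:

from primeqa.mrc.metrics.mlqa.mlqa import MLQA

# Instantiate the metric directly (assumption: no config_name is required).
metric = MLQA()

# Hypothetical prediction in the documented format.
predictions = [
    {"id": "q1", "prediction_text": "Dinamarca"},
]

# Matching reference: answers in SQuAD format, plus the answer language.
references = [
    {
        "id": "q1",
        "answers": {"text": ["Dinamarca"], "answer_start": [42]},
        "answer_language": "es",
    },
]

results = metric.compute(predictions=predictions, references=references)
print(results)  # expected shape: {'exact_match': ..., 'f1': ...}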
Methods
add – Add one prediction and reference for the metric’s stack.
add_batch – Add a batch of predictions and references for the metric’s stack.
compute – Compute the metrics.
download_and_prepare – Downloads and prepares dataset for reading.
Attributes
citation
codebase_urls
description
experiment_id
features
format
homepage
info – datasets.MetricInfo object containing all the metadata in the metric.
inputs_description
license
name
reference_urls
streamable
- add(*, prediction=None, reference=None, **kwargs)
Add one prediction and reference for the metric’s stack.
- Parameters
prediction (list/array/tensor, optional) – Predictions.
reference (list/array/tensor, optional) – References.
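A hedged sketch of incremental use, reusing the metric and the example pairs from the sketch above; each call pushes one (prediction, reference) pair onto the stack, which compute() then consumes:

# Accumulate one example at a time, then compute over the stack.
for pred, ref in zip(predictions, references):
    metric.add(prediction=pred, reference=ref)

results = metric.compute()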
- add_batch(*, predictions=None, references=None, **kwargs)
Add a batch of predictions and references for the metric’s stack.
- Parameters
predictions (list/array/tensor, optional) – Predictions.
references (list/array/tensor, optional) – References.
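A hedged sketch of batched accumulation; batches is a hypothetical iterable yielding (predictions, references) list pairs, e.g. from an evaluation loop:

# Accumulate whole batches, then compute once at the end.
for batch_predictions, batch_references in batches:
    metric.add_batch(predictions=batch_predictions, references=batch_references)

results = metric.compute()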
- compute(*, predictions=None, references=None, **kwargs) → Optional[dict]
Compute the metrics.
Usage of positional arguments is not allowed to prevent mistakes.
- Parameters
predictions (list/array/tensor, optional) – Predictions.
references (list/array/tensor, optional) – References.
**kwargs (optional) – Keyword arguments that will be forwarded to the metric’s _compute() method (see details in the docstring).
- Returns
dict or None – Dictionary with the metrics if this metric is run on the main process (process_id == 0); None if the metric is not run on the main process (process_id != 0).
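A hedged sketch of handling the return value in a multi-process run; only the main process (process_id == 0) receives the dictionary, so guard against None:

results = metric.compute(predictions=predictions, references=references)
if results is not None:
    # Main process only: results holds 'exact_match' and 'f1'.
    print(f"EM: {results['exact_match']:.2f}  F1: {results['f1']:.2f}")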
- download_and_prepare(download_config: Optional[datasets.utils.file_utils.DownloadConfig] = None, dl_manager: Optional[datasets.utils.download_manager.DownloadManager] = None)
Downloads and prepares dataset for reading.
- Parameters
download_config (DownloadConfig, optional) – Specific download configuration parameters.
dl_manager (DownloadManager, optional) – Specific download manager to use.
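A hedged sketch of calling this explicitly with a custom download configuration; it is normally invoked internally before the first computation, and the cache path below is hypothetical:

from datasets import DownloadConfig

# Cache any required resources in a custom directory (hypothetical path).
config = DownloadConfig(cache_dir="/tmp/mlqa_metric_cache")
metric.download_and_prepare(download_config=config)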
- property info
datasets.MetricInfo object containing all the metadata in the metric.