deepmol.evaluator package

Submodules

deepmol.evaluator.evaluator module

class Evaluator(model: Model, dataset: Dataset)[source]

Bases: object

Class that evaluates a Model on a given Dataset object.

compute_model_performance(metrics: Metric | List[Metric], per_task_metrics: bool = False) → Tuple[Dict, Dict][source]

Computes statistics of the model on the test data and saves the results to a CSV file.

Parameters:
  • metrics (Union[Metric, List[Metric]]) – The metric or list of metrics to evaluate the Model on.

  • per_task_metrics (bool) – If True, also return the computed metrics for each task of a multitask dataset.

Returns:

  • multitask_scores (dict) – Dictionary mapping names of metrics to metric scores.

  • all_task_scores (dict) – If per_task_metrics is True, a second dictionary with the scores of each task computed separately.
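A minimal usage sketch of the evaluation flow described above; the import paths for Evaluator and Metric, and the wrapping of sklearn functions in Metric objects, are assumptions about the surrounding deepmol API, and model and test_dataset stand for a fitted Model and a Dataset built elsewhere in the pipeline.

from sklearn.metrics import roc_auc_score, accuracy_score

from deepmol.evaluator import Evaluator  # import path assumed
from deepmol.metrics import Metric       # import path assumed

# `model` is a fitted Model and `test_dataset` a Dataset object,
# both assumed to have been created earlier in the pipeline.
evaluator = Evaluator(model=model, dataset=test_dataset)

# A single Metric or a list of Metrics can be passed; with
# per_task_metrics=True a second, per-task dictionary is also returned.
metrics = [Metric(roc_auc_score), Metric(accuracy_score)]
multitask_scores, all_task_scores = evaluator.compute_model_performance(
    metrics, per_task_metrics=True
)
print(multitask_scores)  # e.g. {'roc_auc_score': ..., 'accuracy_score': ...}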

output_predictions(y_preds: ndarray, csv_out: str) → None[source]

Writes the predictions made on the dataset to the specified file.

Parameters:
  • y_preds (np.ndarray) – Predictions to output.

  • csv_out (str) – Name of file to write predictions to.
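An illustrative call, reusing the evaluator instance from the sketch above; the predictions array is a hand-made placeholder rather than real model output.

import numpy as np

# Write a (dummy) prediction array for the dataset to a CSV file.
y_preds = np.array([0.12, 0.87, 0.43])
evaluator.output_predictions(y_preds, csv_out='predictions.csv')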

static output_statistics(scores: Dict[str, float], stats_out: str) → None[source]

Writes the computed statistics to a file.

Parameters:
  • scores (Dict[str, float]) – Dictionary mapping names of metrics to scores.

  • stats_out (str) – Name of file to write scores to.
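Because output_statistics is a static method, it can be called on the class itself; the score values below are illustrative only.

# Persist a metric-name -> score mapping to a stats file.
scores = {'roc_auc_score': 0.91, 'accuracy_score': 0.87}
Evaluator.output_statistics(scores, stats_out='statistics.csv')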

Module contents