emmi_inference.runner
=====================

.. py:module:: emmi_inference.runner


Attributes
----------

.. autoapisummary::

   emmi_inference.runner.InferenceStatus
   emmi_inference.runner.INFERENCE_ENGINE_VERSION


Classes
-------

.. autoapisummary::

   emmi_inference.runner.InferencePayload
   emmi_inference.runner.InferenceRunner


Functions
---------

.. autoapisummary::

   emmi_inference.runner.compute_metrics


Module Contents
---------------

.. py:data:: InferenceStatus

.. py:data:: INFERENCE_ENGINE_VERSION
   :value: 'v1'

.. py:class:: InferencePayload

   A container class that holds relevant information about the inference run over a single batch.

   .. py:attribute:: status
      :type: InferenceStatus
      :value: 'ok'

      Indicates whether inference was successful.

   .. py:attribute:: error
      :type: str | None
      :value: None

      Detailed error message in case of failure.

   .. py:attribute:: inputs
      :type: dict[str, Any]

      Input tensors used in the forward pass.

   .. py:attribute:: outputs
      :type: dict[str, Any]

      Model outputs.

   .. py:attribute:: debug
      :type: dict[str, Any]

      Debugging information about the inference run, such as timings.

   .. py:attribute:: meta
      :type: dict[str, Any]

      Meta information about the inference run, such as the model class and inference engine.

   .. py:attribute:: dropped
      :type: dict[str, Any]

      A collection of dropped tensors (not used in the forward pass or elsewhere).

   .. py:attribute:: kept
      :type: dict[str, Any]

      A collection of kept tensors (not used in the forward pass but needed elsewhere).

   .. py:method:: get(key, default = None)

   .. py:method:: to_cpu(*, detach = True, numpy = False, float32 = False)

      Return a shallow-cloned payload with all tensors moved to the CPU (optionally as NumPy arrays).

   .. py:method:: summary()

      Lightweight, JSON-friendly shapes/dtypes (no tensor payloads).

.. py:class:: InferenceRunner(model, *, device, autocast, dtype = None, preprocessor = None)

   Thin wrapper around a model plus an execution context.

   Encapsulates device/dtype/autocast configuration and an optional preprocessor.
   Use :meth:`run` to execute inference on a single tensor or a mapping of tensors.

   :param model: The instantiated `torch.nn.Module`.
   :param device: Target device string (e.g., `"cpu"`, `"cuda"`, `"mps"`).
   :param autocast: Whether to enable automatic mixed precision (when supported) during the forward pass.
   :param dtype: Optional `torch.dtype` to cast inputs (and context) to before the forward pass.
   :param preprocessor: Optional callable applied to inputs before device/dtype casting.

   .. py:attribute:: model

   .. py:attribute:: device_context

   .. py:attribute:: preprocessor
      :value: None

   .. py:method:: run(x, batch_simplification = None, batch_keys_to_keep = None, batch_keys_to_drop = None)

      Execute a forward pass using the configured context.

      :param x: Input tensor or dict of tensors.
      :param batch_simplification: Optional dictionary of batch simplifications with keys and reduction numbers.
      :param batch_keys_to_keep: Optional set of batch keys to keep.
      :param batch_keys_to_drop: Optional set of batch keys to drop.

      :returns: The model outputs, as returned by the underlying module.

.. py:function:: compute_metrics(predictions, targets, dataset_normalizers, evaluation_modes, target_suffix = '_target')

   Return a dict of metrics computed from predictions and a target batch (typical surface-volume-like data).

   :param predictions: Input predictions.
   :param targets: Input batch with ground-truth targets.
   :param dataset_normalizers: Dataset normalization configurations.
   :param evaluation_modes: A list of evaluation modes. Defaults to ["surface_pressure", "volume_velocity"] if empty.
   :param target_suffix: Target suffix appended to all input tensor names. Defaults to "_target".

   :returns: Dictionary of computed metrics, with metric names as keys and tensors as values.