Emmi Inference - Key Concepts
=============================

Overview
--------

This module provides high-level access to the inference pipeline. It also exposes a flexible API that allows users to build custom pipelines within their own workflows.

Key Concepts
------------

- **Runner** (:class:`emmi_inference.runner.InferenceRunner`): Preprocesses inputs via collators, moves them to the target device, applies autocast/dtype if requested, and executes the model.
- **Writer** (:class:`emmi_inference.writer.AsyncWriter`): Asynchronously writes outputs (tensor / dict / sequence) as ``.pt``/``.pth``/``.th`` or ``.npy``/``.npz`` files.

We offer the following options to run inference:

- :doc:`via CLI<../guides/inference/how_to_run_inference_via_cli>`
- :doc:`via Code<../guides/inference/how_to_run_inference_via_code>`

Although accelerated hardware (GPUs) is recommended, execution on the CPU is also possible. For **AB-UPT**-like models, it is assumed that at least 32 GB of RAM is available.
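
Example
-------

The sketch below illustrates how the runner and writer concepts are intended to fit together. The constructor arguments and method names used here (``model``, ``device``, ``out_dir``, ``run``, ``write``) are illustrative assumptions, not the documented API; see the guides linked above for the supported entry points.

.. code-block:: python

    import torch

    from emmi_inference.runner import InferenceRunner
    from emmi_inference.writer import AsyncWriter

    # A stand-in model; in practice this would be a checkpointed model
    # such as an AB-UPT variant.
    model = torch.nn.Linear(8, 2)

    # Hypothetical construction: the argument names are assumptions.
    runner = InferenceRunner(
        model=model,
        device="cuda" if torch.cuda.is_available() else "cpu",
    )
    writer = AsyncWriter(out_dir="./outputs")

    # The runner collates the batch, moves it to the target device,
    # applies autocast/dtype if requested, and executes the model.
    batch = {"x": torch.randn(4, 8)}
    outputs = runner.run(batch)  # ``run`` is an assumed method name

    # The writer asynchronously persists tensor/dict/sequence outputs,
    # e.g. as ``.pt`` or ``.npy`` files.
    writer.write(outputs)  # ``write`` is an assumed method name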