emmi_inference.device
=====================

.. py:module:: emmi_inference.device


Attributes
----------

.. autoapisummary::

   emmi_inference.device.DeviceType


Classes
-------

.. autoapisummary::

   emmi_inference.device.DeviceEnum
   emmi_inference.device.DeviceContext


Functions
---------

.. autoapisummary::

   emmi_inference.device.resolve_device
   emmi_inference.device.sync_device
   emmi_inference.device.resolve_autocast


Module Contents
---------------

.. py:class:: DeviceEnum

   Bases: :py:obj:`enum.StrEnum`

   Supported execution devices.

   .. attribute:: AUTO

      Let the library pick the best available backend (CUDA -> MPS -> CPU).

   .. attribute:: CPU

      Force CPU execution.

   .. attribute:: CUDA

      NVIDIA CUDA GPU (if available).

   .. attribute:: MPS

      Apple Metal Performance Shaders backend on Apple Silicon (if available).

   .. py:attribute:: AUTO
      :value: 'auto'

   .. py:attribute:: CPU
      :value: 'cpu'

   .. py:attribute:: CUDA
      :value: 'cuda'

   .. py:attribute:: MPS
      :value: 'mps'


.. py:data:: DeviceType


.. py:function:: resolve_device(device = DeviceEnum.AUTO)

   Resolve the effective device string.

   :param device: Either a `DeviceEnum` or a string ("auto", "cpu", "cuda", "mps").
                  When set to "auto", prefers CUDA, then MPS, then CPU.
   :returns: One of "cuda", "mps", or "cpu", depending on runtime availability.
   :rtype: str


.. py:function:: sync_device(device)

   Synchronize device streams.

   :param device: Either a `DeviceEnum` or a string; resolved to the effective
                  device before synchronizing.


.. py:function:: resolve_autocast(effective_device, requested)

   Decide whether autocast should be enabled.

   Autocast is disabled on Apple MPS because PyTorch does not support it there.

   :param effective_device: The resolved device string ("cuda", "mps", or "cpu").
   :param requested: Whether the user requested autocast.
   :returns: True if autocast should be enabled on the effective device,
             otherwise False.
   :rtype: bool


.. py:class:: DeviceContext(device, dtype, autocast)

   Carrier for device execution context.

   .. attribute:: device

      Effective device string ("cuda", "mps", or "cpu").

   .. attribute:: dtype

      Optional compute dtype used for autocast/mixed precision.

   .. attribute:: autocast

      Whether to attempt autocast (subject to device support).

   .. py:attribute:: device

   .. py:attribute:: dtype

   .. py:attribute:: autocast

   .. py:method:: amp()

      Return an AMP/autocast context manager for the current device.

      :returns: `torch.cuda.amp.autocast` when enabled on CUDA; otherwise a
                no-op context manager.
      :rtype: ContextManager
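The resolution rules documented above can be sketched as plain Python. This is an illustrative standalone version, not the library's implementation: the real helpers query PyTorch at runtime (e.g. CUDA/MPS availability), so this sketch takes hypothetical ``cuda_ok``/``mps_ok`` flags as explicit arguments instead.

```python
def resolve_device(device="auto", *, cuda_ok=False, mps_ok=False):
    """Mirror the documented rule: "auto" prefers CUDA, then MPS, then CPU.

    ``cuda_ok`` and ``mps_ok`` stand in for the runtime availability
    checks the real module performs via torch.
    """
    device = str(device)  # accepts a DeviceEnum (StrEnum) or a plain string
    if device == "auto":
        if cuda_ok:
            return "cuda"
        if mps_ok:
            return "mps"
        return "cpu"
    return device  # explicit choices pass through unchanged


def resolve_autocast(effective_device, requested):
    """Autocast runs only when requested and the device supports it.

    Per the documentation above, MPS is excluded.
    """
    return requested and effective_device != "mps"


# Typical flow: resolve once, then decide on autocast.
dev = resolve_device("auto", cuda_ok=False, mps_ok=True)   # -> "mps"
use_amp = resolve_autocast(dev, requested=True)            # -> False on MPS
```

The same two-step pattern applies when using the library directly: feed the output of ``resolve_device`` into ``resolve_autocast`` (or a ``DeviceContext``) rather than re-checking device strings by hand.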