ksuit.models.single
===================

.. py:module:: ksuit.models.single


Classes
-------

.. autoapisummary::

   ksuit.models.single.Model


Module Contents
---------------

.. py:class:: Model(model_config, is_frozen=False, update_counter=None, path_provider=None, data_container=None, static_context=None)

   Bases: :py:obj:`ksuit.models.model_base.ModelBase`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a
   tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self):
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and their parameters
   will be converted too when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean representing whether this module is in training or
                   evaluation mode.
   :vartype training: bool

   Base class for single models, i.e. one model with one optimizer, as
   opposed to ``CompositeModel``.

   :param model_config: Model configuration.
   :param is_frozen: If true, sets ``requires_grad`` of all parameters to
       false and puts the model into eval mode (e.g., to put Dropout or
       BatchNorm into eval mode).
   :param path_provider: A path provider used by the initializer to store or
       retrieve checkpoints.
   :param data_container: The data container which includes the data and the
       dataloader. This is currently unused, but helpful for quick
       prototyping, evaluating ``forward`` in debug mode, etc.
   :param static_context: The static context used to pass information between
       submodules, e.g. ``patch_size`` or ``latent_dim``.


   .. py:property:: is_frozen
      :type: bool


   .. py:property:: device
      :type: torch.device


   .. py:method:: get_named_models()

      Returns a dict of ``{model_name: model}``, e.g., to log the learning
      rates of all models/submodels.


   .. py:method:: initialize_weights()

      Freezes the weights of the model by setting ``requires_grad`` to False
      if ``self.is_frozen`` is True.


   .. py:method:: apply_initializers()

      Applies the initializers to the model, calling
      ``initializer.init_weights`` and ``initializer.init_optim``.


   .. py:method:: initialize_optimizer()

      Initializes the optimizer.


   .. py:method:: optimizer_step(grad_scaler)

      Performs an optimization step.


   .. py:method:: optimizer_schedule_step()

      Performs a step of the optimizer's learning rate scheduler.


   .. py:method:: optimizer_zero_grad(set_to_none=True)

      Zeroes the gradients of the optimizer.


   .. py:method:: train(mode=True)

      Sets the model to train or eval mode. Overwrites the
      ``nn.Module.train`` method to avoid setting the model to train mode if
      it is frozen.

      :param mode: If True, set the model to train mode; if False, set the
          model to eval mode.


   .. py:method:: to(device, *args, **kwargs)

      Performs Tensor dtype and/or device conversion, overwriting the
      ``nn.Module.to`` method to also set the ``_device`` attribute.

      :param device: The desired device of the tensor. Can be a string
          (e.g. ``"cuda:0"`` or ``"cpu"``).
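The frozen-model behavior of ``train`` described above can be sketched in plain Python. The class and attribute names below (``FrozenAwareModel``) are illustrative stand-ins for the actual ``ksuit`` implementation; the sketch only demonstrates the documented contract that a frozen model never leaves eval mode:

```python
class FrozenAwareModel:
    """Hypothetical minimal stand-in illustrating the documented
    ``train`` override: a frozen model always stays in eval mode."""

    def __init__(self, is_frozen=False):
        self.is_frozen = is_frozen
        # A frozen model starts (and stays) in eval mode.
        self.training = not is_frozen

    def train(self, mode=True):
        # Mirror the documented behavior: ignore mode=True when frozen.
        self.training = False if self.is_frozen else mode
        return self


frozen = FrozenAwareModel(is_frozen=True)
frozen.train()  # requesting train mode has no effect on a frozen model
print(frozen.training)  # -> False
```

This mirrors why the override exists in the first place: code that blindly calls ``model.train()`` in a training loop would otherwise re-enable Dropout and BatchNorm updates on a model whose parameters are deliberately frozen.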