ksuit.models.composite

Classes

CompositeModel

Base class for composite models, i.e. models that consist of multiple submodels of type Model.

Module Contents

class ksuit.models.composite.CompositeModel(model_config, update_counter=None, path_provider=None, data_container=None, static_context=None)

Bases: ksuit.models.model_base.ModelBase

Base class for composite models, i.e. models that consist of multiple submodels of type Model.

Models that are composed of multiple submodels should subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Whether this module is in training or evaluation mode.

Parameters:

model_config
update_counter (default: None)
path_provider (default: None)
data_container (default: None)
static_context (default: None)

property submodels: dict[str, ksuit.models.model_base.ModelBase]
Abstract property; subclasses must implement it to return a dict of {submodel_name: submodel}.

Return type:

dict[str, ksuit.models.model_base.ModelBase]
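For illustration, a minimal sketch of a subclass (EncoderWithHead, encoder, and head are hypothetical names, not part of ksuit):

from ksuit.models.composite import CompositeModel

class EncoderWithHead(CompositeModel):
    def __init__(self, encoder, head, **kwargs):
        # kwargs forwards model_config etc. to CompositeModel
        super().__init__(**kwargs)
        self.encoder = encoder
        self.head = head

    @property
    def submodels(self):
        # the keys become the names reported by get_named_models()
        return dict(encoder=self.encoder, head=self.head)

    def forward(self, x):
        return self.head(self.encoder(x))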

get_named_models()

Returns a dict of {model_name: model}, e.g., to log the learning rates of all models/submodels.

Return type:

dict[str, ksuit.models.model_base.ModelBase]
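For example, a short sketch that prints the name and class of every returned model (model is assumed to be an instance of the hypothetical EncoderWithHead above):

for name, submodel in model.get_named_models().items():
    print(name, type(submodel).__name__)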

property device: torch.device
Return type:

torch.device

initialize_weights()

Initialize the weights of the model, calling the weight initializer of every submodel.

Return type:

Self

apply_initializers()

Apply the initializers to the model, calling the initializers of all submodels.

Return type:

Self

initialize_optimizer()

Initialize the optimizer of the model.

Return type:

None
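Because initialize_weights() and apply_initializers() return the model itself, they can be chained, while initialize_optimizer() returns None and is called separately. A sketch under the assumptions above (encoder, head, and model_config are hypothetical):

model = EncoderWithHead(encoder=encoder, head=head, model_config=model_config)
model = model.initialize_weights().apply_initializers()  # both return self
model.initialize_optimizer()                             # returns None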

optimizer_step(grad_scaler)

Perform an optimization step, calling all submodels’ optimizer steps.

Parameters:

grad_scaler (torch.cuda.amp.GradScaler | None)

Return type:

None

optimizer_schedule_step()

Perform the optimizer learning rate scheduler step, calling all submodels’ scheduler steps.

Return type:

None

optimizer_zero_grad(set_to_none=True)

Zero the gradients of the optimizer, calling all submodels’ zero_grad methods.

Parameters:

set_to_none (bool)

Return type:

None
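Taken together, these three methods cover one optimization step. A hedged sketch of a mixed-precision training step (compute_loss and batch are placeholders, not ksuit APIs):

import torch

scaler = torch.cuda.amp.GradScaler()
model.optimizer_zero_grad(set_to_none=True)
with torch.autocast(device_type="cuda"):
    loss = compute_loss(model, batch)  # placeholder loss computation
scaler.scale(loss).backward()
model.optimizer_step(grad_scaler=scaler)  # steps all submodels' optimizers
model.optimizer_schedule_step()           # steps all submodels' LR schedulers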

property is_frozen: bool
Return type:

bool

train(mode=True)

Set the model to train or eval mode.

Overrides the nn.Module.train method to avoid setting the model to train mode if it is frozen and to call all submodels’ train methods.

Parameters:

mode – If True, set the model to train mode. If False, set the model to eval mode.

Return type:

Self
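A short sketch of the frozen-aware behavior described above:

model.train()            # train mode for all submodels, unless frozen
if model.is_frozen:
    assert not model.training  # a frozen model stays in eval mode
model.train(mode=False)  # eval mode; returns self, so calls can chain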

to(device, *args, **kwargs)

Performs Tensor dtype and/or device conversion, calling all submodels’ to methods.

Parameters:

device – The desired device of the module’s parameters and buffers. Can be a string (e.g. “cuda:0” or “cpu”) or a torch.device.

Return type:

Self
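Since to() returns the model, it pairs naturally with the device property for a quick check, e.g. (assuming the property reflects the parameters’ placement):

import torch

model = model.to("cuda:0")
assert model.device == torch.device("cuda:0")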