emmi.modules.layers.drop_path¶
Classes¶
- UnquantizedDropPath: Unquantized drop path (Stochastic Depth, https://arxiv.org/abs/1603.09382) per sample. Unquantized means that dropped paths are still calculated.
Module Contents¶
- class emmi.modules.layers.drop_path.UnquantizedDropPath(config)¶
Bases:
torch.nn.Module

Unquantized drop path (Stochastic Depth, https://arxiv.org/abs/1603.09382) per sample. Unquantized means that dropped paths are still calculated. The number of dropped paths is fully stochastic, i.e., it can happen that not a single path is dropped or that all paths are dropped. In a quantized drop path, the same number of paths is dropped in each forward pass, resulting in large speedups with high drop_prob values. See https://arxiv.org/abs/2212.04884 for further discussion. UnquantizedDropPath does not provide any speedup; consider using a quantized version if large drop_prob values are used.
Adapted from https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py#L150
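The per-sample operation is easiest to see in code. Below is a minimal sketch in the spirit of the timm reference above, for illustration only; the function name is made up and this is not the module's exact implementation.

import torch

def drop_path_sketch(x: torch.Tensor, drop_prob: float = 0.0, scale_by_keep: bool = True) -> torch.Tensor:
    # Illustrative per-sample drop path, following the timm reference above.
    if drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast over all non-batch dimensions.
    mask = x.new_empty((x.shape[0],) + (1,) * (x.ndim - 1)).bernoulli_(keep_prob)
    if scale_by_keep and keep_prob > 0.0:
        mask.div_(keep_prob)  # up-scale kept paths so the expected activation magnitude is preserved
    return x * mask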
Initialize the UnquantizedDropPath module.
- Parameters:
drop_prob – Probability to drop a path. Defaults to 0.0.
scale_by_keep – Up-scales activations during training to avoid a train-test mismatch. Defaults to True.
config (emmi.schemas.modules.layers.UnquantizedDropPathConfig)
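A hypothetical construction example. It assumes UnquantizedDropPathConfig accepts drop_prob and scale_by_keep keyword arguments matching the parameters documented above; check the config schema for the actual field names.

from emmi.modules.layers.drop_path import UnquantizedDropPath
from emmi.schemas.modules.layers import UnquantizedDropPathConfig

# Assumed field names, taken from the documented parameters above.
config = UnquantizedDropPathConfig(drop_prob=0.1, scale_by_keep=True)
drop_path = UnquantizedDropPath(config)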
- drop_prob¶
- scale_by_keep¶
- property keep_prob¶
Return the keep probability, i.e., the probability to keep a path, which is 1 - drop_prob.
- Returns:
Float value of the keep probability.
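Continuing the hypothetical construction example above:

drop_path.keep_prob  # 0.9 when drop_prob is 0.1, i.e. 1 - drop_prob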
- forward(x)¶
Forward function of the UnquantizedDropPath module.
- Parameters:
x (torch.Tensor) – Tensor to apply the drop path to. Shape: (batch_size, …).
- Returns:
Tensor with drop path applied. Shape: (batch_size, …). If drop_prob is 0, the input tensor is returned. If drop_prob is 1, a tensor of zeros is returned.
- Return type:
torch.Tensor
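A usage sketch, continuing the hypothetical construction example above and assuming the usual convention that drop path is only applied in training mode:

import torch

x = torch.randn(8, 197, 768)  # (batch_size, ...) shaped input, e.g. transformer tokens

drop_path.train()
y = drop_path(x)  # per sample: some rows of the batch are zeroed, the rest up-scaled by 1 / keep_prob

drop_path.eval()
y = drop_path(x)  # conventionally the identity at evaluation time (no paths dropped)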
- extra_repr()¶
Extra representation of the UnquantizedDropPath module.
- Returns:
A string representation of the module.