emmi.modules.layers.drop_path
=============================

.. py:module:: emmi.modules.layers.drop_path


Classes
-------

.. autoapisummary::

   emmi.modules.layers.drop_path.UnquantizedDropPath


Module Contents
---------------

.. py:class:: UnquantizedDropPath(config)

   Bases: :py:obj:`torch.nn.Module`


   Unquantized drop path (Stochastic Depth, https://arxiv.org/abs/1603.09382) per sample.
   Unquantized means that dropped paths are still calculated. The number of dropped paths is
   fully stochastic, i.e., it can happen that no path is dropped or that all paths are dropped.
   In a quantized drop path, the same number of paths is dropped in each forward pass, which
   allows large speedups with high drop_prob values. See https://arxiv.org/abs/2212.04884 for
   more discussion. UnquantizedDropPath does not provide any speedup; consider using a
   quantized version if large drop_prob values are used.
   Adapted from https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py#L150

   Initialize the UnquantizedDropPath module.

   :param drop_prob: Probability to drop a path. Defaults to 0.
   :param scale_by_keep: Up-scales activations during training to avoid a train-test mismatch. Defaults to True.


   .. py:attribute:: drop_prob


   .. py:attribute:: scale_by_keep


   .. py:property:: keep_prob

      Return the keep probability, i.e. the probability to keep a path, which is 1 - drop_prob.

      :returns: Float value of the keep probability.


   .. py:method:: forward(x)

      Forward function of the UnquantizedDropPath module.

      :param x: Tensor to apply the drop path to. Shape: (batch_size, ...).

      :returns: Tensor with drop path applied. Shape: (batch_size, ...). If drop_prob is 0, the
                input tensor is returned. If drop_prob is 1, a tensor of zeros is returned.
      :rtype: Tensor


   .. py:method:: extra_repr()

      Extra representation of the UnquantizedDropPath module.

      :returns: A string representation of the module.
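
The sketch below illustrates the per-sample drop-path logic described above, following the
timm-style implementation the class references. It is a minimal sketch, not the actual
``UnquantizedDropPath``: the real class is constructed from a ``config`` object, while the
``drop_prob``/``scale_by_keep`` constructor and the class name used here are assumptions
made for illustration.

.. code-block:: python

   import torch
   from torch import nn


   class DropPathSketch(nn.Module):
       """Minimal per-sample stochastic depth (hypothetical stand-in for UnquantizedDropPath)."""

       def __init__(self, drop_prob: float = 0.0, scale_by_keep: bool = True):
           super().__init__()
           self.drop_prob = drop_prob
           self.scale_by_keep = scale_by_keep

       @property
       def keep_prob(self) -> float:
           # Probability to keep a path: 1 - drop_prob.
           return 1.0 - self.drop_prob

       def forward(self, x: torch.Tensor) -> torch.Tensor:
           if self.drop_prob == 0.0 or not self.training:
               return x
           # Draw one Bernoulli(keep_prob) sample per element in the batch and
           # broadcast it over all remaining dimensions, so whole paths are
           # kept or dropped per sample.
           shape = (x.shape[0],) + (1,) * (x.ndim - 1)
           mask = x.new_empty(shape).bernoulli_(self.keep_prob)
           if self.keep_prob > 0.0 and self.scale_by_keep:
               # Up-scale surviving paths so the expected activation matches eval mode.
               mask = mask / self.keep_prob
           return x * mask

       def extra_repr(self) -> str:
           return f"drop_prob={self.drop_prob}, scale_by_keep={self.scale_by_keep}"


   # Usage: a residual block skips its branch per sample with probability drop_prob.
   drop_path = DropPathSketch(drop_prob=0.2)
   drop_path.train()
   x = torch.randn(8, 16)
   residual_branch = nn.Linear(16, 16)
   out = x + drop_path(residual_branch(x))  # some samples keep only the identity path

Note that the branch output is still computed for every sample before masking, which is why
this unquantized variant yields no speedup even at high drop_prob values.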