emmi.modules.blocks.irregular_nat_block¶
Classes¶
Neighbourhood Attention Transformer (NAT) block for irregular grids. Consists of a single NAT attention layer and a feedforward layer.
Module Contents¶
- class emmi.modules.blocks.irregular_nat_block.IrregularNatBlock(dim, num_heads, attn_ctor, mlp_hidden_dim=None, drop_path=0.0, norm_ctor=nn.LayerNorm, layerscale=None, eps=1e-06, init_weights='truncnormal002')¶
Bases:
torch.nn.Module

Neighbourhood Attention Transformer (NAT) block for irregular grids. Consists of a single NAT attention layer and a feedforward layer.
Initializes a NAT block for irregular grids.
- Parameters:
dim (int) – Hidden dimension of the NAT attention block.
num_heads (int) – Number of attention heads.
attn_ctor (type) – Constructor of the attention module.
mlp_hidden_dim (int | None) – Hidden dimension of the feedforward MLP module. Defaults to None.
drop_path (float) – Probability of dropping a path (i.e., the attention or FF branch) during training. Defaults to 0.0.
norm_ctor (type) – Constructor of the activation normalization. Defaults to nn.LayerNorm.
layerscale (float | None) – Initial value of the layer scale module. Defaults to None.
eps (float) – Epsilon value for the LayerNorm module. Defaults to 1e-6.
init_weights (str) – Initialization method for the weight parameters. Defaults to “truncnormal002”.
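The `drop_path` parameter refers to stochastic depth: during training, the whole residual branch is randomly skipped with probability `drop_path`, and survivors are rescaled so the expected output is unchanged. A minimal scalar sketch of the idea, not the package's actual implementation (real PyTorch versions operate per sample on tensors):

```python
import random

def drop_path(x, p, training):
    """Stochastic depth: randomly zero a residual branch during training."""
    if not training or p == 0.0:
        return x                # inference / no drop: pass through unchanged
    if random.random() < p:
        return 0.0 * x          # branch dropped for this forward pass
    return x / (1.0 - p)        # rescale so the expectation matches x

drop_path(2.0, 0.5, training=False)  # → 2.0 (never dropped at inference)
drop_path(2.0, 0.0, training=True)   # → 2.0 (p=0 is a no-op)
drop_path(2.0, 1.0, training=True)   # → 0.0 (always dropped)
```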
- norm1¶
- attn¶
- ls1¶
- drop_path1¶
- norm2¶
- mlp¶
- ls2¶
- drop_path2¶
- forward(x, pos)¶
Applies the NAT block to the input features: the attention module followed by the feedforward module, each as a residual branch.
- Parameters:
x (torch.Tensor) – Input feature tensor.
pos (torch.Tensor) – Positions of the points on the irregular grid.
- Returns:
The transformed feature tensor.
- Return type:
torch.Tensor
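The attribute list above (norm1 → attn → ls1 → drop_path1, then norm2 → mlp → ls2 → drop_path2) suggests the standard pre-norm residual layout. A minimal sketch of that composition with scalar stand-ins, assuming this ordering (the actual module may differ):

```python
def nat_block_forward(x, pos, norm1, attn, ls1, drop_path1,
                      norm2, mlp, ls2, drop_path2):
    # Branch 1: normalize, attend over neighbourhoods (needs positions),
    # apply layer scale, maybe drop the branch, then add the residual.
    x = x + drop_path1(ls1(attn(norm1(x), pos)))
    # Branch 2: normalize, feedforward MLP, layer scale, drop path, residual.
    x = x + drop_path2(ls2(mlp(norm2(x))))
    return x

# Scalar stand-ins to show the data flow (hypothetical, not real modules):
identity = lambda t: t
double_attn = lambda t, pos: 2.0 * t   # "attention" that doubles its input
plus_one_mlp = lambda t: t + 1.0       # "MLP" that adds one

out = nat_block_forward(1.0, None,
                        identity, double_attn, identity, identity,
                        identity, plus_one_mlp, identity, identity)
# out → 7.0: first branch gives 1 + 2 = 3, second gives 3 + (3 + 1) = 7
```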