emmi.modules.attention.anchor_attention.self_anchor_attention
==============================================================

.. py:module:: emmi.modules.attention.anchor_attention.self_anchor_attention


Classes
-------

.. autoapisummary::

   emmi.modules.attention.anchor_attention.self_anchor_attention.SelfAnchorAttention


Module Contents
---------------

.. py:class:: SelfAnchorAttention(config)

   Bases: :py:obj:`emmi.modules.attention.anchor_attention.multi_branch_anchor_attention.MultiBranchAnchorAttention`

   Anchor attention within branches: each configured branch attends to its own
   anchors independently.

   For a list of branches (e.g., A, B, C), this creates a pattern where A tokens
   attend to A_anchors, B tokens attend to B_anchors, and C tokens attend to
   C_anchors. All configured branches and their anchors must be present in the
   input.

   Example: surface tokens attend to surface_anchors and volume tokens attend to
   volume_anchors. This is achieved via the following attention patterns::

      AttentionPattern(query_tokens=["surface_anchors", "surface_queries"], key_value_tokens=["surface_anchors"])
      AttentionPattern(query_tokens=["volume_anchors", "volume_queries"], key_value_tokens=["volume_anchors"])

   :param dim: Model dimension.
   :param num_heads: Number of attention heads.
   :param use_rope: Whether to use rotary position embeddings.
   :param bias: Whether to use bias in the linear projections.
   :param init_weights: Weight initialization method.
   :param branches: A sequence of all participating branch names.
   :param anchor_suffix: Suffix identifying anchor tokens.

   Initialize internal Module state, shared by both nn.Module and ScriptModule.
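
   The per-branch pattern construction can be sketched as follows. This is an
   illustrative example only, not the module's implementation: the
   ``AttentionPattern`` dataclass shown here is a stand-in, and the
   ``build_self_anchor_patterns`` helper and ``query_suffix`` parameter are
   hypothetical names introduced for the sketch.

   .. code-block:: python

      from dataclasses import dataclass
      from typing import Sequence


      @dataclass
      class AttentionPattern:
          """Stand-in: which token groups act as queries and which as keys/values."""

          query_tokens: Sequence[str]
          key_value_tokens: Sequence[str]


      def build_self_anchor_patterns(
          branches: Sequence[str],
          anchor_suffix: str = "_anchors",
          query_suffix: str = "_queries",
      ) -> list[AttentionPattern]:
          # One pattern per branch: a branch's anchors and queries attend
          # only to that branch's own anchors.
          return [
              AttentionPattern(
                  query_tokens=[f"{branch}{anchor_suffix}", f"{branch}{query_suffix}"],
                  key_value_tokens=[f"{branch}{anchor_suffix}"],
              )
              for branch in branches
          ]


      patterns = build_self_anchor_patterns(["surface", "volume"])
      # patterns[0]: surface_anchors + surface_queries attend to surface_anchors
      # patterns[1]: volume_anchors + volume_queries attend to volume_anchors

   Because each pattern's key/value tokens are drawn only from the same branch,
   no cross-branch attention occurs; cross-branch mixing, if any, is left to
   other attention modules in the stack.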