emmi.modules.blocks.perceiver_block
===================================

.. py:module:: emmi.modules.blocks.perceiver_block


Classes
-------

.. autoapisummary::

   emmi.modules.blocks.perceiver_block.PerceiverBlock


Module Contents
---------------

.. py:class:: PerceiverBlock(config)

   Bases: :py:obj:`torch.nn.Module`

   Perceiver-style cross-attention block.

   Whereas a self-attention module uses the same input tensor for the query,
   key, and value, the PerceiverBlock takes separate input tensors for the
   query and for the key/value.

   :param config: Configuration of the PerceiverBlock.

   .. py:attribute:: norm1q

   .. py:attribute:: norm1kv

   .. py:attribute:: attn

   .. py:attribute:: ls1

   .. py:attribute:: drop_path1

   .. py:attribute:: norm2

   .. py:attribute:: mlp

   .. py:attribute:: ls2

   .. py:attribute:: drop_path2

   .. py:method:: forward(q, kv, condition = None, attn_kwargs = None)

      Forward pass of the PerceiverBlock.

      :param q: Input tensor with shape (batch_size, seqlen/num_tokens, hidden_dim)
          holding the query representations.
      :param kv: Input tensor with shape (batch_size, seqlen/num_tokens, hidden_dim)
          holding the key and value representations; its sequence length may
          differ from that of ``q``.
      :param condition: Conditioning vector. If provided, the attention and MLP
          outputs are scaled, shifted, and gated feature-wise with values
          predicted from this vector.
      :param attn_kwargs: Dict with arguments for the attention (such as the
          attention mask). Defaults to None.

      :returns: Tensor after the forward pass of the PerceiverBlock.
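
Example
-------

The sketch below is a minimal, self-contained illustration of the
cross-attention pattern this block implements. It is not the ``emmi``
implementation: the ``SimplePerceiverBlock`` name, the use of
``torch.nn.MultiheadAttention``, and the omission of layer scale
(``ls1``/``ls2``), drop path, and conditioning are all simplifying
assumptions.

.. code-block:: python

   # Hypothetical sketch of a Perceiver-style cross-attention block.
   # NOT the emmi implementation; layer scale, drop path, and
   # conditioning are omitted for brevity.
   import torch
   from torch import nn


   class SimplePerceiverBlock(nn.Module):
       def __init__(self, hidden_dim: int, num_heads: int = 8):
           super().__init__()
           # Separate pre-norms for the query and key/value inputs,
           # mirroring the norm1q / norm1kv attributes documented above.
           self.norm1q = nn.LayerNorm(hidden_dim)
           self.norm1kv = nn.LayerNorm(hidden_dim)
           self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
           self.norm2 = nn.LayerNorm(hidden_dim)
           self.mlp = nn.Sequential(
               nn.Linear(hidden_dim, 4 * hidden_dim),
               nn.GELU(),
               nn.Linear(4 * hidden_dim, hidden_dim),
           )

       def forward(self, q: torch.Tensor, kv: torch.Tensor) -> torch.Tensor:
           # Cross-attention: the queries attend to the key/value sequence.
           kv_n = self.norm1kv(kv)
           attn_out, _ = self.attn(self.norm1q(q), kv_n, kv_n, need_weights=False)
           q = q + attn_out                 # residual around attention
           q = q + self.mlp(self.norm2(q))  # residual around MLP
           return q


   # A small set of latent query tokens attends to a longer input sequence;
   # the output keeps the query sequence length.
   block = SimplePerceiverBlock(hidden_dim=64)
   latents = torch.randn(2, 16, 64)  # (batch_size, num_query_tokens, hidden_dim)
   inputs = torch.randn(2, 128, 64)  # (batch_size, num_kv_tokens, hidden_dim)
   out = block(latents, inputs)
   print(out.shape)                  # torch.Size([2, 16, 64])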