How to use the spektral.layers.convolutional.GraphConv class in spektral

To help you get started, we've selected a few spektral examples based on popular ways it is used in public projects.


github danielegrattarola / spektral / spektral / layers / convolutional.py
        config = {
            'activation': activations.serialize(self.activation),
            'dropout_rate': self.dropout_rate,
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint),
        }
        base_config = super().get_config()
        return dict(list(base_config.items()) + list(config.items()))
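
The fragment above shows how a layer built on GraphConv serializes its configuration. For context, here is a minimal, self-contained sketch of using GraphConv itself for node classification (a sketch only: the toy sizes and random data are made up; `localpooling_filter` is the preprocessing utility referenced in the docstrings below):

import numpy as np
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import GraphConv
from spektral.utils.convolution import localpooling_filter

N, F, n_classes = 100, 8, 4              # toy sizes: nodes, features, classes
X = np.random.rand(N, F)                 # node features
A = np.random.randint(0, 2, (N, N))      # binary adjacency matrix
fltr = localpooling_filter(A)            # normalized filter for GraphConv

X_in = Input(shape=(F,))                 # single mode: one graph
fltr_in = Input(shape=(N,))
h = GraphConv(16, activation='relu')([X_in, fltr_in])
out = GraphConv(n_classes, activation='softmax')([h, fltr_in])

model = Model(inputs=[X_in, fltr_in], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')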


class GINConv(GraphConv):
    """
    A Graph Isomorphism Network (GIN) as presented by
    [Xu et al. (2018)](https://arxiv.org/abs/1810.00826).

    **Mode**: single.

    This layer computes for each node \(i\):
    $$
        Z_i = \\textrm{MLP} ( (1 + \\epsilon) \\cdot X_i + \\sum\\limits_{j \\in \\mathcal{N}(i)} X_j)
    $$
    where \(X\) is the node features matrix and \(\\textrm{MLP}\) is a
    multi-layer perceptron.

    **Input**

    - Node features of shape `(n_nodes, n_features)` (with optional `batch`
    dimension);
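
As a quick illustration of the formula above (plain NumPy; an illustration, not the layer's implementation), the pre-MLP aggregation for all nodes at once is:

import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])        # adjacency matrix of a 3-node graph
X = np.random.rand(3, 4)         # node features
eps = 0.1                        # the epsilon in the GIN formula

agg = (1 + eps) * X + A @ X      # A @ X sums each node's neighbour features
# `agg` is then passed through the MLP to obtain Z
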
github danielegrattarola / spektral / spektral / layers / convolutional.py
        features_neigh = self.aggregate_op(
            tf.gather(features, fltr.indices[:, -1]), fltr.indices[:, -2]
        )
        output = K.concatenate([features, features_neigh])
        output = K.dot(output, self.kernel)

        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        output = K.l2_normalize(output, axis=-1)
        return output
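
A rough dense-NumPy equivalent of the gather/aggregate step above, assuming mean aggregation (an illustration only; the layer itself works on a sparse adjacency matrix):

import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)        # adjacency matrix
X = np.random.rand(3, 4)                      # node features

deg = A.sum(axis=-1, keepdims=True)           # neighbourhood sizes
X_neigh = (A @ X) / deg                       # mean of each node's neighbours
out = np.concatenate([X, X_neigh], axis=-1)   # as in K.concatenate above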


class EdgeConditionedConv(GraphConv):
    """
    An edge-conditioned convolutional layer as presented by [Simonovsky and
    Komodakis (2017)](https://arxiv.org/abs/1704.02901).

    **Mode**: single, batch.

    **This layer expects dense inputs.**
    
    For each node \(i\), this layer computes:
    $$
        Z_i = \\frac{1}{\\left| \\mathcal{N}(i) \\right|} \\sum\\limits_{j \\in \\mathcal{N}(i)} F(E_{ji}) X_{j} + b
    $$
    where \(\\mathcal{N}(i)\) is the one-step neighbourhood of node \(i\),
    \(F\) is a neural network that outputs the convolution kernel as a
    function of edge attributes, \(E\) is the edge attributes matrix, and \(b\)
    is a bias vector.
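
A minimal sketch of wiring this layer up in batch mode with dense inputs (the sizes and input shapes here are assumptions for illustration):

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import EdgeConditionedConv

N, F, S = 10, 4, 3                       # nodes, node features, edge features
X_in = Input(shape=(N, F))               # node features X
A_in = Input(shape=(N, N))               # dense adjacency matrix
E_in = Input(shape=(N, N, S))            # edge attributes E
out = EdgeConditionedConv(16, activation='relu')([X_in, A_in, E_in])

model = Model(inputs=[X_in, A_in, E_in], outputs=out)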
github danielegrattarola / spektral / spektral / layers / convolutional.py
        # Convolution
        output = K.dot(features, self.kernel_1)
        output = filter_dot(fltr, output)

        # Skip connection
        skip = K.dot(features, self.kernel_2)
        output += skip

        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output


class ARMAConv(GraphConv):
    """
    A graph convolutional layer with ARMA(K) filters, as presented by
    [Bianchi et al. (2019)](https://arxiv.org/abs/1901.01343).

    **Mode**: single, mixed, batch.

    This layer computes:
    $$
        Z = \\frac{1}{K}\\sum \\limits_{k=1}^K \\bar{X}_k^{(T)},
    $$
    where \(K\) is the order of the ARMA(K) filter, and where:
    $$
        \\bar{X}_k^{(t + 1)} = \\sigma\\left(\\tilde{L}\\bar{X}_k^{(t)}W^{(t)} + XV^{(t)}\\right)
    $$
    is a graph convolutional skip layer implementing a recursive approximation
    of an ARMA(1) filter, and \(\\tilde{L}\) is the normalized graph Laplacian
    with a rescaled spectrum.
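
To make the recursion concrete, here is a plain-NumPy illustration of a single ARMA(1) stack (random matrices stand in for the trainable weights; this is not the layer's code):

import numpy as np

def relu(x):
    return np.maximum(x, 0)

N, F, C, T = 5, 4, 3, 2                      # nodes, features, channels, steps
L_tilde = np.random.rand(N, N)               # stands in for the Laplacian above
X = np.random.rand(N, F)                     # node features
W = np.random.rand(C, C)                     # recurrent kernel W^(t), shared here
V = np.random.rand(F, C)                     # skip kernel V^(t), shared here

X_bar = X @ np.random.rand(F, C)             # initial projection to C channels
for _ in range(T):
    X_bar = relu(L_tilde @ X_bar @ W + X @ V)    # one ARMA(1) step
Z = X_bar                                    # with K = 1, Z is X_bar^(T)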
github danielegrattarola / spektral / spektral / layers / convolutional.py
        kernel = self.add_weight(shape=(input_dim, units),
                                 name=name + '_kernel',
                                 initializer=kernel_initializer,
                                 regularizer=kernel_regularizer,
                                 constraint=kernel_constraint)
        bias = self.add_weight(shape=(units,),
                               name=name + '_bias',
                               initializer=bias_initializer,
                               regularizer=bias_regularizer,
                               constraint=bias_constraint)
        act = activations.get(activation)
        output = K.dot(x, kernel)
        if use_bias:
            output = K.bias_add(output, bias)
        output = act(output)
        return output


class GraphAttention(GraphConv):
    """
    A graph attention layer as presented by
    [Velickovic et al. (2017)](https://arxiv.org/abs/1710.10903).

    **Mode**: single, mixed, batch.

    **This layer expects dense inputs.**
    
    This layer computes a convolution similar to `layers.GraphConv`, but
    uses an attention mechanism to weight the adjacency matrix instead of
    the normalized Laplacian.

    **Input**

    - node features of shape `(n_nodes, n_features)` (with optional `batch`
    dimension);
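
A minimal sketch of the layer in single mode (the `attn_heads` argument is an assumption, suggested by the attention-kernel options serialized below):

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import GraphAttention

N, F = 100, 8
X_in = Input(shape=(F,))
A_in = Input(shape=(N,))                 # dense adjacency, as this layer expects
out = GraphAttention(16, attn_heads=4, activation='relu')([X_in, A_in])

model = Model(inputs=[X_in, A_in], outputs=out)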
github danielegrattarola / spektral / spektral / layers / convolutional.py
        config = {
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'attn_kernel_initializer': initializers.serialize(self.attn_kernel_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'attn_kernel_regularizer': regularizers.serialize(self.attn_kernel_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint),
            'attn_kernel_constraint': constraints.serialize(self.attn_kernel_constraint),
        }
        base_config = super().get_config()
        return dict(list(base_config.items()) + list(config.items()))


class GraphConvSkip(GraphConv):
    """
    A graph convolutional layer as presented by
    [Kipf & Welling (2016)](https://arxiv.org/abs/1609.02907), with the addition
    of a skip connection.

    **Mode**: single, mixed, batch.

    This layer computes:
    $$
        Z = \\sigma(A X W_1 + X W_2 + b)
    $$
    where \(X\) is the node features matrix, \(A\) is the normalized Laplacian,
    \(W_1\) and \(W_2\) are the convolution kernels, \(b\) is a bias vector,
    and \(\\sigma\) is the activation function.

    **Input**
github danielegrattarola / spektral / spektral / layers / convolutional.py
        config = {
            'channels': self.channels,
            'activation': activations.serialize(self.activation),
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint)
        }
        base_config = super().get_config()
        return dict(list(base_config.items()) + list(config.items()))
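
Putting the formula above into practice, a minimal sketch (toy sizes; preprocessing via `localpooling_filter`, as elsewhere on this page):

import numpy as np
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import GraphConvSkip
from spektral.utils.convolution import localpooling_filter

N, F = 100, 8
A = np.random.randint(0, 2, (N, N))      # binary adjacency matrix
fltr = localpooling_filter(A)            # the normalized A in the formula

X_in = Input(shape=(F,))
fltr_in = Input(shape=(N,))
out = GraphConvSkip(16, activation='relu')([X_in, fltr_in])

model = Model(inputs=[X_in, fltr_in], outputs=out)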


class ChebConv(GraphConv):
    """
    A Chebyshev convolutional layer as presented by
    [Defferrard et al. (2016)](https://arxiv.org/abs/1606.09375).

    **Mode**: single, mixed, batch.
    
    Given a list of Chebyshev polynomials \(T = [T_{1}, ..., T_{K}]\), 
    this layer computes:
    $$
        Z = \\sigma( \\sum \\limits_{k=1}^{K} T_{k} X W  + b)
    $$
    where \(X\) is the node features matrix, \(W\) is the convolution kernel, 
    \(b\) is the bias vector, and \(\\sigma\) is the activation function.

    **Input**
github danielegrattarola / spektral / spektral / layers / convolutional.py
        output = K.dot(features, kernel_1)
        output = filter_dot(fltr, output)

        # Skip connection
        skip = K.dot(features_skip, kernel_2)
        skip = Dropout(self.dropout_rate)(skip)
        output += skip

        if use_bias:
            output = K.bias_add(output, bias)
        if activation is not None:
            output = activations.get(activation)(output)
        return output


class APPNP(GraphConv):
    """
    A graph convolutional layer implementing the APPNP operator, as presented by
    [Klicpera et al. (2019)](https://arxiv.org/abs/1810.05997).

    **Mode**: single, mixed, batch.

    **Input**

    - node features of shape `(n_nodes, n_features)` (with optional `batch`
    dimension);
    - Normalized Laplacian of shape `(n_nodes, n_nodes)` (with optional
    `batch` dimension); see `spektral.utils.convolution.localpooling_filter`.

    **Output**

    - node features with the same shape as the input, but with the last
    dimension changed to `channels`.
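
A minimal sketch of the layer in single mode (only `channels` and `activation` are set; the teleport and propagation options are left at their defaults):

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import APPNP

N, F, n_classes = 100, 8, 4
X_in = Input(shape=(F,))
fltr_in = Input(shape=(N,))              # normalized Laplacian, see above
out = APPNP(n_classes, activation='softmax')([X_in, fltr_in])

model = Model(inputs=[X_in, fltr_in], outputs=out)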
github danielegrattarola / spektral / spektral / layers / convolutional.py
        # Convolution
        supports = list()
        for fltr in fltr_list:
            s = filter_dot(fltr, features)
            supports.append(s)
        supports = K.concatenate(supports, axis=-1)
        output = K.dot(supports, self.kernel)

        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output
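
The loop above applies one `filter_dot` per Chebyshev polynomial, so the layer takes the node features plus the list of filters as input. A sketch, assuming `chebyshev_filter` from `spektral.utils.convolution` is the utility used to build that list:

import numpy as np
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import ChebConv
from spektral.utils.convolution import chebyshev_filter

N, F, K = 100, 8, 2
A = np.random.randint(0, 2, (N, N))      # binary adjacency matrix
fltr_list = chebyshev_filter(A, K)       # list of Chebyshev polynomials

X_in = Input(shape=(F,))
fltr_in = [Input(shape=(N,)) for _ in fltr_list]
out = ChebConv(16, activation='relu')([X_in] + fltr_in)

model = Model(inputs=[X_in] + fltr_in, outputs=out)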


class GraphSageConv(GraphConv):
    """
    A GraphSage layer as presented by [Hamilton et al. (2017)](https://arxiv.org/abs/1706.02216).

    **Mode**: single.

    This layer computes:
    $$
        Z = \\sigma \\big( \\big[ \\textrm{AGGREGATE}(X) \\| X \\big] W + b \\big)
    $$
    where \(X\) is the node features matrix, \(W\) is a trainable kernel,
    \(b\) is a bias vector, and \(\\sigma\) is the activation function.
    \(\\textrm{AGGREGATE}\) is an aggregation function as described in the
    original paper, which works by aggregating each node's neighbourhood
    according to some rule. The supported aggregation methods are: sum, mean,
    max, min, and product.
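
A minimal sketch of the layer in single mode (the sparse adjacency input matches the `fltr.indices` access in the call snippet above; the aggregation method is left at its default):

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from spektral.layers import GraphSageConv

N, F = 100, 8
X_in = Input(shape=(F,))
A_in = Input(shape=(N,), sparse=True)    # sparse adjacency matrix
out = GraphSageConv(32, activation='relu')([X_in, A_in])

model = Model(inputs=[X_in, A_in], outputs=out)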