Convolution with full spectral domain filter #31

Open
pinkfloyd06 opened this issue Apr 16, 2018 · 1 comment
@pinkfloyd06

Hello @mdeff

Let me thank you for this notebook showing different graph convolution implementations.

https://github.com/mdeff/cnn_graph/blob/master/trials/1_learning_filters.ipynb

I'm wondering if you have an optimized implementation of a convolution with a full spectral-domain filter (instead of the Chebyshev expansion)?

x ∗_G g = Uᵀ (diag(w_g) U x)   (see page 3 of https://arxiv.org/pdf/1506.05163.pdf)
where:
x : input signal
w_g = (w_1, ..., w_N) : spectral multipliers
U : matrix of eigenvectors
Uᵀ : its transpose
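
In code, that formula is just two changes of basis around a pointwise scaling. A minimal PyTorch sketch (the name spectral_filter is illustrative, and U is assumed orthonormal):

import torch

def spectral_filter(x, U, w_g):
    # x: (n, z) signal, U: (n, n) eigenvectors, w_g: (n,) spectral multipliers
    x_hat = U @ x                    # graph Fourier transform: U x
    y_hat = torch.diag(w_g) @ x_hat  # pointwise filtering: diag(w_g) U x
    return U.t() @ y_hat             # back to the vertex domain: Uᵀ diag(w_g) U x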

Here is what I've tried:

import torch
import torch.nn as nn

def graph_convolution(x, U, lam):
    '''
    Graph convolution layer with a full spectral-domain filter, for graph classification.

    x   : (n, z) input, where n is the number of nodes and z the number of features per node
    U   : (n, n) eigenvectors
    lam : (n, n) diagonal matrix of eigenvalues
    '''
    x1 = lam @ x  # matrix product, not elementwise '*' ('lambda' is a Python keyword, renamed to lam)
    x2 = U @ x1
    # Let's say n=32 and z=3, then x2 has shape [32, 3].
    # Linear layer over the node dimension: 32 inputs, 100 outputs.
    cl1 = nn.Linear(32, 100)  # PyTorch version
    x = cl1(x2.t()).t()  # apply along the node dimension; output has shape [100, 3]
    return x

Please correct me.
Thank you,

@mdeff
Owner

mdeff commented Jul 20, 2020

cnn_graph/lib/models.py

Lines 387 to 421 in c4d2c75

class fgcnn2(base_model):
    """Graph CNN with full weights, i.e. patch has the same size as input."""
    def __init__(self, L, F):
        super().__init__()
        #self.L = L  # Graph Laplacian, NFEATURES x NFEATURES
        self.F = F  # Number of filters
        _, self.U = graph.fourier(L)
    def _inference(self, x, dropout):
        # x: NSAMPLES x NFEATURES
        with tf.name_scope('gconv1'):
            # Transform to Fourier domain
            U = tf.constant(self.U, dtype=tf.float32)
            xf = tf.matmul(x, U)
            xf = tf.expand_dims(xf, 1)  # NSAMPLES x 1 x NFEATURES
            xf = tf.transpose(xf)  # NFEATURES x 1 x NSAMPLES
            # Filter
            W = self._weight_variable([NFEATURES, self.F, 1])
            yf = tf.matmul(W, xf)  # for each feature
            yf = tf.transpose(yf)  # NSAMPLES x NFILTERS x NFEATURES
            yf = tf.reshape(yf, [-1, NFEATURES])
            # Transform back to graph domain
            Ut = tf.transpose(U)
            y = tf.matmul(yf, Ut)
            y = tf.reshape(yf, [-1, self.F, NFEATURES])
            # Bias and non-linearity
            b = self._bias_variable([1, self.F, 1])
            # b = self._bias_variable([1, self.F, NFEATURES])
            y += b  # NSAMPLES x NFILTERS x NFEATURES
            y = tf.nn.relu(y)
        with tf.name_scope('fc1'):
            W = self._weight_variable([self.F*NFEATURES, NCLASSES])
            b = self._bias_variable([NCLASSES])
            y = tf.reshape(y, [-1, self.F*NFEATURES])
            y = tf.matmul(y, W) + b
        return y
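
For the PyTorch side of the question, here is a rough equivalent of the gconv1 block above. It is a sketch, not part of the repository: the class name FullSpectralConv is made up, U is assumed to hold the Laplacian eigenvectors (as returned by graph.fourier), and each filter learns one free multiplier per eigenvalue.

import torch
import torch.nn as nn

class FullSpectralConv(nn.Module):
    def __init__(self, U, n_filters):
        super().__init__()
        self.register_buffer('U', U)  # (n, n) eigenvectors, fixed (not trained)
        n = U.shape[0]
        self.W = nn.Parameter(0.1 * torch.randn(n_filters, n))  # spectral multipliers
        self.b = nn.Parameter(torch.zeros(n_filters, 1))

    def forward(self, x):
        # x: (batch, n), one scalar signal per node, as in fgcnn2
        xf = x @ self.U                # to the spectral domain
        yf = xf.unsqueeze(1) * self.W  # (batch, n_filters, n): pointwise filtering
        y = yf @ self.U.t()            # back to the vertex domain
        return torch.relu(y + self.b)  # bias and non-linearity

A classifier head like the fc1 block would then flatten the output to (batch, n_filters * n) and apply a single linear layer to NCLASSES.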
