
Changing the filters of a CNN convolutional layer - Python/TensorFlow

  •  0
  • whoisraibolt  · 8 years ago

    import tensorflow as tf

    def new_weights(shape):
        # Filter weights initialised with small random values
        return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
    

    def new_conv_layer(input,              # The previous layer
                       use_pooling=True):  # Use 2x2 max-pooling
    
        shape = [3, 3, 1, 8]
    
        weights = new_weights(shape=shape)
    
        biases = new_biases(length=8)
    
        layer = tf.nn.conv2d(input=input,
                             filter=weights,
                             strides=[1, 1, 1, 1],
                             padding='SAME')
    
        layer += biases
    
        if use_pooling:
            layer = tf.nn.max_pool(value=layer,
                                   ksize=[1, 2, 2, 1],
                                   strides=[1, 2, 2, 1],
                                   padding='SAME')
    
        layer = tf.nn.relu(layer)
    
        # Note: relu(max_pool(x)) == max_pool(relu(x)), so by max-pooling
        # first we save 75% of the relu operations.
    
        return layer
    

    So we can see that the filter size is 3x3 and the number of filters is 8, and that the filters are defined with random values.

    What I need to do is define all 8 filters with fixed, predetermined values, for example:

    weights = [
        [[0,  1, 0,],[0, -1, 0,],[0,  0, 0,],],
        [[0,  0, 1,],[0, -1, 0,],[0,  0, 0,],],
        [[0,  0, 0,],[0, -1, 1,],[0,  0, 0,],],
        [[0,  0, 0,],[0, -1, 0,],[0,  0, 1,],],
        [[0,  0, 0,],[0, -1, 0,],[0,  1, 0,],],
        [[0,  0, 0,],[0, -1, 0,],[1,  0, 0,],], 
        [[0,  0, 0,],[1, -1, 0,],[0,  0, 0,],],
        [[1,  0, 0,],[0, -1, 0,],[0,  0, 0,],]
    ]
    

    I can't figure out how to make these changes in my code. Does anyone know how I can do this?
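
    For reference, tf.nn.conv2d expects its filter argument in the shape [filter_height, filter_width, in_channels, out_channels], i.e. (3, 3, 1, 8) here, while the list above is laid out as 8 x 3 x 3. A minimal NumPy sketch of the rearrangement (illustrative only, assuming the list is named weights as above):

    import numpy as np

    kernels = np.asarray(weights, dtype=np.float32)              # shape (8, 3, 3)
    kernels = kernels.transpose(1, 2, 0)[:, :, np.newaxis, :]    # shape (3, 3, 1, 8)
    print(kernels.shape)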

    3 Answers  |  8 years ago
        1
  •  2
  •   Ali Salehi    4 years ago

    In TF2 you can do it like this:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential()
    # one 3x3 filter
    model.add(layers.Conv2D(1, (3, 3), input_shape=(None, None, 1)))
    # access the target layer
    layer = model.layers[0]
    current_w, current_bias = layer.get_weights()  # see the current weights
    new_w = tf.constant([[1., 2., 3.],
                         [4., 5., 6.],
                         [7., 8., 9.]])
    new_w = tf.reshape(new_w, current_w.shape)  # fix the shape to (3, 3, 1, 1)
    new_bias = tf.constant([0.])
    layer.set_weights([new_w, new_bias])
    model.summary()
    # let's see ..
    tf.print(model.layers[0].get_weights())
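
    Applied to the question's setup, the same TF2/Keras idea would look roughly like this sketch (assuming the 8 predefined kernels are in the 8x3x3 list weights from the question):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential()
    model.add(layers.Conv2D(8, (3, 3), input_shape=(None, None, 1)))  # 8 filters of 3x3
    kernels = np.asarray(weights, dtype=np.float32).transpose(1, 2, 0).reshape(3, 3, 1, 8)
    model.layers[0].set_weights([kernels, np.zeros(8, dtype=np.float32)])  # fixed kernels, zero biases
    model.layers[0].trainable = False  # keep the predefined filters frozen during training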
    
        2
  •  1
  •   Vladimir Bystricky    8 years ago

    If you want to initialize the weights with some predefined values, you can use tf.constant_initializer (together with tf.get_variable), or create them directly with tf.constant or tf.Variable:

    import numpy as np
    import tensorflow as tf

    def new_weights(init_value, is_const):
        if is_const:
            # Fixed weights that can never change
            return tf.constant(init_value, name='weights')
        else:
            # A variable that merely starts from the predefined values
            initializer = tf.constant_initializer(init_value)
            return tf.get_variable('weights', shape=init_value.shape, initializer=initializer)

    weights = np.ones([3, 3, 1, 8], dtype=np.float32)
    print(weights.shape)

    value = new_weights(weights, True)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        value_ = sess.run(value)
        print(value_)
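
    The practical difference between the two branches: tf.constant bakes the values into the graph, so they can never change. A variable created with tf.constant_initializer only starts from the predefined values and would still be updated by the optimizer unless it is also created with trainable=False (e.g. tf.get_variable('weights', shape=init_value.shape, initializer=initializer, trainable=False)), which keeps it fixed while remaining saveable in checkpoints.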
    
        3
  •  1
  •   Vijay Mariappan    8 years ago

    You just need to define the weights as non-trainable, and create the new weights as follows:

    # Rearrange the question's 8x3x3 list so each 3x3 kernel becomes one output channel
    kernels = tf.transpose(tf.constant(weights, dtype=tf.float32), perm=[1, 2, 0])
    new_weights = tf.Variable(tf.reshape(kernels, (3, 3, 1, 8)), trainable=False)
    # then apply on the inputs
    layer = tf.nn.conv2d(inputs, filter=new_weights, strides=[1, 1, 1, 1], padding='SAME')
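
    With that in place, the weights = new_weights(shape=shape) line inside new_conv_layer from the question can be replaced by this fixed, non-trainable variable, and the rest of the layer (bias, max-pooling, relu) stays unchanged.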