
How do I export the best model from an Estimator?

Asked by Hichame Yessou Xeoncross · 6 years ago

    I'm training a simple CNN with a custom Estimator fed from TFRecords, and I would like to export the best model during the train_and_evaluate phase.

    For tf.estimator.BestExporter I am supposed to provide a function that returns a ServingInputReceiver, but once I do, the train_and_evaluate phase crashes with NotFoundError: model/m01/eval; No such file or directory.

    Without the BestExporter nothing from evaluation gets exported, which is no better than having no exporter at all. I've tried different ServingInputReceivers, but I always end up with the same error.

    The first one is defined as here:

    feature_spec = {
            'shape': tf.VarLenFeature(tf.int64),
            'image_raw': tf.FixedLenFeature((), tf.string),
            'label_raw': tf.FixedLenFeature((43), tf.int64)
        }
    
    def serving_input_receiver_fn():
      serialized_tf_example = tf.placeholder(dtype=tf.string,
                                             shape=[120, 120, 3],
                                             name='input_example_tensor')
      receiver_tensors = {'image': serialized_tf_example}
      features = tf.parse_example(serialized_tf_example, feature_spec)
      return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
    

    And a second one, based on the example here (see also the note after the snippet):

    def serving_input_receiver_fn():
        feature_spec = {
                'image': tf.FixedLenFeature((), tf.string)
            }
        return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
    
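    Note that tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec) already returns the receiver-building callable, so wrapping it in another def hands the exporter a function instead of a ServingInputReceiver. Below is a minimal sketch of the usual wiring, reusing the feature key above; it only fixes that wiring and does not by itself explain the NotFoundError:

    # Sketch only: pass the builder's return value straight to the exporter.
    feature_spec = {'image': tf.FixedLenFeature((), tf.string)}
    serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        feature_spec)

    exporter = tf.estimator.BestExporter(
        name="best_exporter",
        serving_input_receiver_fn=serving_input_receiver_fn,
        exports_to_keep=5)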

    Here are my exporter and the training routine:

    exporter = tf.estimator.BestExporter(
        name="best_exporter",
        serving_input_receiver_fn=serving_input_receiver_fn,
        exports_to_keep=5)
    
    train_spec = tf.estimator.TrainSpec(
        input_fn=lambda: imgs_input_fn(train_path, True, epochs, batch_size))
    
    eval_spec = tf.estimator.EvalSpec(
        input_fn=lambda: imgs_input_fn(eval_path, perform_shuffle=False, batch_size=1),
        exporters=exporter)
    
    tf.estimator.train_and_evaluate(ben_classifier, train_spec, eval_spec)
    

    This is a gist of it. What is the correct way to define the ServingInputReceiver for the BestExporter?

Answer from Tensorflow Support · 6 years ago

    Could you try the code below:

    def serving_input_receiver_fn():
        """
        Defines the inputs used to serve the model.
        :return: ServingInputReceiver
        """
        receiver_tensors = {
            # The size of the input image is flexible.
            'image': tf.placeholder(tf.float32, [None, None, None, 1]),
        }

        # Convert the given inputs into the shape the model expects.
        # INPUT_FEATURE (the feature key, 'image') and INPUT_SHAPE (the model's
        # expected input shape) are constants defined in the linked example.
        features = {
            # Resize the given images.
            'image': tf.reshape(receiver_tensors[INPUT_FEATURE], [-1, INPUT_SHAPE])
        }
        return tf.estimator.export.ServingInputReceiver(receiver_tensors=receiver_tensors,
                                                        features=features)
    
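    This receiver takes raw float tensors rather than serialized tf.Example protos, so no tf.parse_example is needed at serving time. If no reshaping were required, roughly the same thing could be built with the raw-tensor helper; the following is a sketch, not part of the original answer:

    # Sketch: a raw-tensor receiver, usable when the model accepts the
    # placeholder shape directly (no reshape step needed).
    raw_serving_input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
        'image': tf.placeholder(tf.float32, [None, None, None, 1]),
    })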

    Then define the tf.estimator.BestExporter as shown below:

    best_exporter = tf.estimator.BestExporter(
        serving_input_receiver_fn=serving_input_receiver_fn,
        exports_to_keep=1)
    exporters = [best_exporter]

    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={input_name: eval_data},
        y=eval_labels,
        num_epochs=1,
        shuffle=False)

    eval_spec = tf.estimator.EvalSpec(
        input_fn=eval_input_fn,
        throttle_secs=10,
        start_delay_secs=10,
        steps=None,
        exporters=exporters)

    # Train and evaluate the model.
    tf.estimator.train_and_evaluate(classifier, train_spec=train_spec, eval_spec=eval_spec)
    
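    With a setup like this, the best model ends up as a SavedModel under the estimator's model directory (roughly <model_dir>/export/<exporter name>/<timestamp>). A quick way to sanity-check the export with the TF 1.x predictor API follows; the export path and the dummy input are assumptions for illustration, only the 'image' key comes from the receiver above:

    import numpy as np
    from tensorflow.contrib import predictor

    # Assumed path for illustration: BestExporter writes SavedModels under
    # <model_dir>/export/<exporter_name>/<timestamp>.
    export_dir = 'model/m01/export/best_exporter/1554000000'

    predict_fn = predictor.from_saved_model(export_dir)
    # 'image' matches the receiver_tensors key defined above; the dummy input
    # is a single grayscale image used purely as an example.
    predictions = predict_fn({'image': np.zeros((1, 28, 28, 1), dtype=np.float32)})
    print(predictions)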

    For more details, see: https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_keras_estimator.py
