Introduction to TensorFlow Serving
Brief introduction
The TensorFlow model file contains the Graph and all parameters of the Cloud-ML model; it is essentially a checkpoint file. Users can load a model file to continue training, or use it to provide an Inference service externally.
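For example, a checkpoint can be written during training with tf.train.Saver and restored later to continue training. This is only a minimal sketch; the variable and path names are illustrative:

import tensorflow as tf

# A trivial graph with one trainable variable (illustrative only)
weights = tf.Variable(tf.zeros([10]), name="weights")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run training steps here ...
    saver.save(sess, "./checkpoint/model.ckpt")  # write graph parameters

with tf.Session() as sess:
    # reload the parameters to continue training or to serve inference
    saver.restore(sess, "./checkpoint/model.ckpt")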
Use SavedModel to export models
For the model export method, refer to https://tensorflow.github.io/serving/serving_basic.
This method is used roughly as follows.
import os
import sys

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, tag_constants
from tensorflow.python.util import compat

# Build the export path as <export_path_base>/<model_version>
export_path_base = sys.argv[-1]
export_path = os.path.join(
    compat.as_bytes(export_path_base),
    compat.as_bytes(str(FLAGS.model_version)))
print("Exporting trained model to {}".format(export_path))

# Add the graph, variables and signatures, then write the SavedModel to disk
builder = saved_model_builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        'predict_images':
            prediction_signature,
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            classification_signature,
    },
    legacy_init_op=legacy_init_op)
builder.save()
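The prediction_signature and classification_signature used above are SignatureDef objects built from the model's input and output tensors; the serving_basic tutorial shows the full construction. A minimal sketch of a prediction signature, assuming x is the model's input tensor and y its output tensor (the names 'images' and 'scores' are only illustrative), looks like this:

from tensorflow.python.saved_model import signature_constants, signature_def_utils, utils

# Wrap the graph tensors as TensorInfo protos
tensor_info_x = utils.build_tensor_info(x)
tensor_info_y = utils.build_tensor_info(y)

prediction_signature = signature_def_utils.build_signature_def(
    inputs={'images': tensor_info_x},
    outputs={'scores': tensor_info_y},
    method_name=signature_constants.PREDICT_METHOD_NAME)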
See https://github.com/tobegit3hub/deep_recommend_system/ for runnable example code:
./dense_classifier.py --mode savedmodel
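Once exported, the version directory can be loaded directly by the TensorFlow Serving server. Assuming the tensorflow_model_server binary is installed, and using an illustrative model name and path, starting a server looks roughly like this:

tensorflow_model_server --port=9000 --model_name=dense --model_base_path=/path/to/export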
Use Exporter to export models
For examples of exported model files supported by TensorFlow Serving, see https://github.com/tobegit3hub/deep_recommend_system/blob/master/dense_classifier.py.
The export code is also relatively simple; the user only needs to fill in the inputs and outputs used during model Inference.
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string("model_path", "./model", "The path to export the model")
flags.DEFINE_integer("export_version", 1, "Version number of the model")

# Define the graph (inference_features, inference_softmax and inference_op
# are the model's own input and output tensors, defined in the full example)
keys_placeholder = tf.placeholder(tf.int32, shape=[None, 1])
keys = tf.identity(keys_placeholder)

# Start the session and create the Saver used by the Exporter
saver = tf.train.Saver()
sess = tf.Session()

# Export the model
print("Exporting trained model to {}".format(FLAGS.model_path))
model_exporter = exporter.Exporter(saver)
model_exporter.init(
    sess.graph.as_graph_def(),
    named_graph_signatures={
        'inputs': exporter.generic_signature(
            {"keys": keys_placeholder, "features": inference_features}),
        'outputs': exporter.generic_signature(
            {"keys": keys, "softmax": inference_softmax, "prediction": inference_op})
    })
model_exporter.export(FLAGS.model_path, tf.constant(FLAGS.export_version), sess)
print("Done exporting!")
Both this format and the SavedModel format can be loaded directly by TensorFlow Serving. We used deep_recommend_system to export a model in each format and verified that, apart from the difference in file size, the predicted results are exactly the same.
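A comparison like this can be made with a simple gRPC client that sends the same request to a server loading each model. A minimal sketch, assuming a server on localhost:9000 with the illustrative model name dense and the keys/features input names used above (the feature values are placeholders for illustration):

from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel("localhost", 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Build the request with the same input names used at export time
request = predict_pb2.PredictRequest()
request.model_spec.name = "dense"
request.inputs["keys"].CopyFrom(
    tf.contrib.util.make_tensor_proto([[1]], dtype=tf.int32))
request.inputs["features"].CopyFrom(
    tf.contrib.util.make_tensor_proto([[1.0, 2.0, 3.0, 4.0]], dtype=tf.float32))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result.outputs["softmax"])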
Model files exported with Assets
In scenarios such as NLP, besides the parameter files, it is also necessary to export Vocabulary and other such files together with the model. This can be done by configuring assets_collection in the Exporter. See https://github.com/tensorflow/serving/issues/264.
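A rough sketch of this with the Exporter-based export above, assuming an illustrative vocabulary file vocab.txt that the graph reads at serving time:

import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

# Register the vocabulary file as an asset so it is copied into the export
vocab_file = tf.constant("vocab.txt")
tf.add_to_collection(tf.GraphKeys.ASSET_FILEPATHS, vocab_file)

model_exporter = exporter.Exporter(saver)
model_exporter.init(
    sess.graph.as_graph_def(),
    named_graph_signatures=named_graph_signatures,  # same signatures as in the previous example
    assets_collection=tf.get_collection(tf.GraphKeys.ASSET_FILEPATHS))
model_exporter.export(FLAGS.model_path, tf.constant(FLAGS.export_version), sess)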