Install TensorFlow Serving Inside Docker Container in Linux

TensorFlow Serving is a system for serving machine learning models in production. It has built-in integration with TensorFlow models and can be extended to serve other types of models.

This tutorial explains how to install TensorFlow Serving inside a Docker container on Linux. The commands have been tested on Ubuntu.

Prepare environment

Make sure you have installed Docker on your system. If you are using Ubuntu, installation instructions can be found in the official Docker documentation.
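
For a quick start on Ubuntu, installing the docker.io package from the default repositories is usually sufficient (one of several install options; the official Docker repository provides newer releases):

sudo apt-get update
sudo apt-get install -y docker.io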

You also need a TensorFlow model. For testing purposes, clone the TensorFlow Serving repository, which includes a demo model called Half Plus Two.

git clone https://github.com/tensorflow/serving

Create a directory to store models:

sudo mkdir -p /opt/tensorflow-serving/models

Copy the model to the newly created directory:

sudo cp -r serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu /opt/tensorflow-serving/models
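
TensorFlow Serving expects each model's base path to contain numbered version subdirectories and serves the highest version it finds. You can verify that the copied demo model follows this layout (the exact version number comes from the repository's test data and may differ):

ls /opt/tensorflow-serving/models/saved_model_half_plus_two_cpu
# Expect a numbered version directory, e.g. 00000123,
# containing saved_model.pb and a variables directory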

Create a models config file:

sudo nano /opt/tensorflow-serving/models/models.config

Add the following content to the file:

model_config_list {
  config {
    name: 'half_plus_two'
    base_path: '/models/saved_model_half_plus_two_cpu'
    model_platform: 'tensorflow'
  }
}
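
The same file can list multiple models by adding more config blocks. As a sketch, a hypothetical second model (my_other_model is an illustrative name, not part of the demo repository) would be declared like this:

model_config_list {
  config {
    name: 'half_plus_two'
    base_path: '/models/saved_model_half_plus_two_cpu'
    model_platform: 'tensorflow'
  }
  # Hypothetical second model, shown only to illustrate the structure
  config {
    name: 'my_other_model'
    base_path: '/models/my_other_model'
    model_platform: 'tensorflow'
  }
}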

Install TensorFlow Serving

  • Host network

Run the following command to create a container for TensorFlow Serving that uses the host network:

docker run -d --name=tensorflow-serving --restart=always --network=host \
    -v /opt/tensorflow-serving/models:/models \
    tensorflow/serving \
    --model_config_file=/models/models.config
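
To confirm that the server started correctly, check the container logs or query the model status over TensorFlow Serving's REST API (assuming you run the commands on the Docker host itself):

docker logs tensorflow-serving
curl http://localhost:8501/v1/models/half_plus_two

The status response should report the model version with state AVAILABLE.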
  • User-defined bridge network

A user-defined bridge network can be used to listen on different host ports. By default, the TensorFlow Serving service listens for gRPC connections on port 8500 and for REST API connections on port 8501. Both container ports can be mapped to different host ports with the -p option.

docker network create app-net
docker run -d --name=tensorflow-serving --restart=always --network=app-net \
    -p 8080:8500 -p 8081:8501 \
    -v /opt/tensorflow-serving/models:/models \
    tensorflow/serving \
    --model_config_file=/models/models.config
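
With this mapping, the same status check from the previous section goes to host port 8081 instead (again assuming the command runs on the Docker host):

curl http://localhost:8081/v1/models/half_plus_two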

Test TensorFlow Serving

The Half Plus Two model computes 0.5 * x + 2 for each input value x. Send a POST request to get predictions for a list of x values (replace 192.168.0.252 with the IP address of your server):

curl -d '{"instances": [3.0, 5.0, 7.0]}' \
     -X POST http://192.168.0.252:8501/v1/models/half_plus_two:predict

Output:

{
    "predictions": [3.5, 4.5, 5.5]
}
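
You can also inspect the model's input and output signatures through the metadata endpoint of the REST API (replace the IP address with your server's, as before):

curl http://192.168.0.252:8501/v1/models/half_plus_two/metadata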

Uninstall TensorFlow Serving

To completely remove TensorFlow Serving, remove its container:

docker rm --force tensorflow-serving

Remove TensorFlow Serving image:

docker rmi tensorflow/serving

You can also remove TensorFlow Serving models:

sudo rm -rf /opt/tensorflow-serving

If a user-defined bridge network was created, you can delete it as follows:

docker network rm app-net
