Install Precompiled TensorFlow Lite 2.10 on Raspberry Pi

TensorFlow Lite is an open-source library for running machine learning models and performing inference on end devices, such as mobile or embedded devices. TensorFlow Lite cannot be used to train a model. Before running a model, we must convert the TensorFlow model to the TensorFlow Lite format using the TensorFlow Lite converter.

This tutorial shows how to install precompiled TensorFlow Lite 2.10 on Raspberry Pi.

Debian package

We have created a Debian package (.deb) that contains precompiled TensorFlow Lite 2.10.0 binaries for Raspberry Pi 3 Model A+/B+ and Raspberry Pi 4 Model B. The binaries are compatible with Raspberry Pi OS Bullseye (32-bit and 64-bit). We have created a release on the GitHub repository and uploaded the tensorflow-lite.deb package.

TensorFlow Lite was built with the following features:

  • NEON optimization
  • VFPv4 optimization
  • XNNPACK delegate
  • Ruy matrix multiplication library
  • MMAP-based allocation
  • C and C++ APIs
  • Python 3 bindings

Testing was performed on a Raspberry Pi 4 Model B (8 GB).

Install TensorFlow Lite

Use SSH to connect to the Raspberry Pi. Execute one of the following commands to download the .deb package from the releases page of the repository. Use tensorflow-lite.deb on the 32-bit OS and tensorflow-lite_64.deb on the 64-bit OS:

wget https://github.com/prepkg/tensorflow-lite-raspberrypi/releases/latest/download/tensorflow-lite.deb
wget https://github.com/prepkg/tensorflow-lite-raspberrypi/releases/latest/download/tensorflow-lite_64.deb

When the download is finished, install TensorFlow Lite (again, pick the command matching your OS):

sudo apt install -y ./tensorflow-lite.deb
sudo apt install -y ./tensorflow-lite_64.deb

You can remove the .deb package because it is no longer needed:

rm -f tensorflow-lite.deb
rm -f tensorflow-lite_64.deb

Testing TensorFlow Lite (C API)

The Debian package contains shared libraries for the C and C++ APIs. First, we will test the C API. Before starting, install the GNU C compiler:

sudo apt install -y gcc

For testing, we need a TensorFlow Lite model. You can read the post on how to convert a TensorFlow 2 model to a TensorFlow Lite model, or you can download a prepared model from the internet:

wget -O model.tflite https://www.dropbox.com/s/b1426ewx13idlr0/simple_linear_regression.tflite?dl=1

This model solves the simple linear regression problem described in the post.
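As a side illustration (not part of the original post), the relationship this model learned can be recovered with an ordinary least-squares fit over data generated from y = 2 * x + 1; the fitted slope and intercept come out as 2 and 1:

```python
# Hypothetical sketch: recover the line y = 2 * x + 1 that the
# downloaded model was trained to approximate.
import numpy as np

x = np.arange(-10.0, 11.0)   # sample inputs
y = 2.0 * x + 1.0            # targets from the known relationship

# Fit a degree-1 polynomial (slope, intercept) with least squares
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 4), round(intercept, 4))  # → 2.0 1.0
```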

Create a main.c file:

nano main.c

Add the following code:

#include <stdio.h>
#include <tensorflow/lite/c/common.h>
#include <tensorflow/lite/c/c_api.h>

int main()
{
    int numThreads = 4;

    // Load the model from disk
    TfLiteModel *model = TfLiteModelCreateFromFile("model.tflite");

    // Create the interpreter with the desired number of threads
    TfLiteInterpreterOptions *options = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(options, numThreads);
    TfLiteInterpreter *interpreter = TfLiteInterpreterCreate(model, options);

    // Allocate memory for all tensors
    TfLiteInterpreterAllocateTensors(interpreter);

    float x[] = {15.0f};

    // Copy the input value into the buffer of the input tensor
    TfLiteTensor *inputTensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteTensorCopyFromBuffer(inputTensor, x, sizeof(x));

    // Run inference
    TfLiteInterpreterInvoke(interpreter);

    float y[1];

    // Copy the result out of the buffer of the output tensor
    const TfLiteTensor *outputTensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
    TfLiteTensorCopyToBuffer(outputTensor, y, sizeof(y));

    printf("%.4f\n", y[0]);

    // Release resources
    TfLiteInterpreterDelete(interpreter);
    TfLiteInterpreterOptionsDelete(options);
    TfLiteModelDelete(model);

    return 0;
}

The code predicts a value of y for a previously unseen value of x. The model was trained on the following relationship between the variables: y = 2 * x + 1.

We load the model and initialize the TensorFlow Lite interpreter. The value of x is copied into the buffer of the input tensor. We run the model, then copy the value from the buffer of the output tensor into y. Finally, we print the result and release resources.

Compile the code:

gcc main.c -o test -ltensorflowlite_c

Run the program:

./test

In this case, x is 15.0 and the model returns that y is 31.0044. The result can be verified:

y = 2 * x + 1 = 2 * 15 + 1 = 31
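The small gap between 31.0044 and the exact value 31 is the model's approximation error. A quick sanity check (a sketch, using the value printed by the program above):

```python
# Compare the model's prediction with the exact value of y = 2 * x + 1
x = 15.0
y_exact = 2 * x + 1          # 31.0
y_model = 31.0044            # value printed by the C program above

rel_err = abs(y_model - y_exact) / y_exact
print(f"relative error: {rel_err:.6f}")  # → relative error: 0.000142
```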

Testing TensorFlow Lite (C++ API)

The C API can be used from C++ code. However, TensorFlow Lite has a C++ API as well. Make sure the GNU C++ compiler is installed:

sudo apt install -y g++

Create a main.cpp file:

nano main.cpp

When the file is opened, add the following code:

#include <cstring>
#include <iostream>
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>

using namespace tflite;

int main()
{
    int numThreads = 4;

    // Load the model from disk
    std::unique_ptr<FlatBufferModel> model = FlatBufferModel::BuildFromFile("model.tflite");

    // Build the interpreter with the built-in op resolver
    ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<Interpreter> interpreter;
    InterpreterBuilder(*model, resolver)(&interpreter, numThreads);

    // Allocate memory for all tensors
    interpreter->AllocateTensors();

    float x[] = {15.0f};

    // Copy the input value into the input tensor
    float *inputTensor = interpreter->typed_input_tensor<float>(0);
    std::memcpy(inputTensor, x, sizeof(x));

    // Run inference
    interpreter->Invoke();

    // Read the result from the output tensor
    float *y = interpreter->typed_output_tensor<float>(0);

    std::cout << y[0] << std::endl;

    return 0;
}

The code is implemented using the C++ API and performs the same job as the C API version.

Execute the following command to compile the code:

g++ main.cpp -o test -ltensorflow-lite -ldl

Run the program:

./test

Testing TensorFlow Lite (Python)

Create a main.py file:

nano main.py

Add the following code:

from tflite_runtime.interpreter import Interpreter
import numpy as np

numThreads = 4

# Load the model and allocate memory for all tensors
interpreter = Interpreter('model.tflite', num_threads=numThreads)
interpreter.allocate_tensors()

x = np.float32([[15.0]])

# Copy the input value into the input tensor
inputDetails = interpreter.get_input_details()
interpreter.set_tensor(inputDetails[0]['index'], x)

# Run inference
interpreter.invoke()

# Read the result from the output tensor
outputDetails = interpreter.get_output_details()
y = interpreter.get_tensor(outputDetails[0]['index'])[0]

print('%.4f' % y[0])

Execute the script using Python 3:

python3 main.py
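Note that the script passes x as a 2-D float32 array of shape (1, 1), i.e. a batch of one sample with one feature; set_tensor generally expects the input to match the tensor's shape and dtype. A minimal sketch of that shape, independent of tflite_runtime:

```python
import numpy as np

# Same input as in main.py: a batch of one sample with one feature
x = np.float32([[15.0]])
print(x.shape, x.dtype)  # → (1, 1) float32
```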

Uninstall TensorFlow Lite

If you want to completely remove TensorFlow Lite, run the following command:

sudo apt purge --autoremove -y tensorflow-lite

Comments

  1. Sanjib Acharya

    Hi,
    Thanks for posting. I was trying to follow the instructions to install Tensorflow-Lite on my RPi 4b for the use of libcamera-detect. After I install using the first two commands, I do not see any tensorflow-lite folder in my /home/pi directory. Where is this installed? During installation, I see messages saying "unpacking and setting up tensorflow-lite (2.7.0-2)":

    pi@raspberrypi:~ $ sudo apt install -y ./tensorflow-lite.deb
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Note, selecting 'tensorflow-lite' instead of './tensorflow-lite.deb'
    The following NEW packages will be installed:
      tensorflow-lite
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/1,608 kB of archives.
    After this operation, 0 B of additional disk space will be used.
    Get:1 /home/pi/tensorflow-lite.deb tensorflow-lite armhf 2.7.0-2 [1,608 kB]
    Selecting previously unselected package tensorflow-lite.
    (Reading database ... 133705 files and directories currently installed.)
    Preparing to unpack /home/pi/tensorflow-lite.deb ...
    Unpacking tensorflow-lite (2.7.0-2) ...
    Setting up tensorflow-lite (2.7.0-2) ...
    

    But can't see where it is installed. Also, when I run the test as mentioned in this page, it fails with an error:

    pi@raspberrypi:~ $ g++ main.cpp -o test -ltensorflow-lite -ldl
    /usr/bin/ld: //usr/local/lib/libtensorflow-lite.so: undefined reference to `log@GLIBC_2.29`
    /usr/bin/ld: //usr/local/lib/libtensorflow-lite.so: undefined reference to `exp@GLIBC_2.29`
    /usr/bin/ld: //usr/local/lib/libtensorflow-lite.so: undefined reference to `pow@GLIBC_2.29`
    collect2: error: ld returned 1 exit status
    

    What might be wrong?

    • lindevs

      Hi, Sanjib
      Shared libraries libtensorflow-lite.so and libtensorflowlite_c.so are installed to /usr/local/lib directory:
      ls /usr/local/lib | grep tensorflow

      Header files are installed to /usr/local/include/tensorflow directory:
      ls /usr/local/include/tensorflow

      I see that you are using older version of Raspberry Pi OS. At this moment precompiled TensorFlow Lite shared libraries are compatible with Raspberry Pi OS Bullseye. So you need to update Raspberry Pi OS in order to use the library.

    • lindevs

      Hi
      It only includes C and C++ libraries:
      libtensorflowlite_c.so (C library)
      libtensorflow-lite.so (C++ library)

    • lindevs

      Hi, Rahul
      The tensorflow-lite.deb package contains the FlatBuffers header files which are installed to /usr/local/include/flatbuffers directory:
      ls /usr/local/include/flatbuffers

  2. ThomasD

    Hello,
    with a Raspberry B+ V1.2 I get an error message with GCC and G++: "Illegal instruction"
    regards

    • lindevs

      Hi,
      Precompiled TensorFlow Lite libraries are only compatible with Raspberry Pi 3 Model A+/B+ and Raspberry Pi 4 Model B.
