TensorFlow Lite is an open-source library for running machine learning models and performing inference on end devices, such as mobile and embedded devices. You cannot train a model with TensorFlow Lite. Before running a model, you must convert a TensorFlow model to the TensorFlow Lite format using the TensorFlow Lite converter.
This tutorial shows how to install precompiled TensorFlow Lite 2.18 on Raspberry Pi.
Debian package
We have created a Debian package (.deb) that contains precompiled TensorFlow Lite 2.18.0 binaries for the Raspberry Pi 3 Model A+/B+ and Raspberry Pi 4 Model B. The binaries are compatible with Raspberry Pi OS Bookworm 64-bit. We have created a release in the GitHub repository and uploaded the tensorflow-lite.deb package.
TensorFlow Lite was built with the following features:
- NEON optimization
- VFPv4 optimization
- XNNPACK delegate
- Ruy matrix multiplication library
- MMAP-based allocation
- C and C++ APIs
- Python 3 bindings
Testing was performed on a Raspberry Pi 4 Model B (8 GB).
Install TensorFlow Lite
Use SSH to connect to the Raspberry Pi. Execute the following command to download the .deb package from the releases page of the repository:
wget https://github.com/prepkg/tensorflow-lite-raspberrypi/releases/latest/download/tensorflow-lite_64.deb
When the download is finished, install TensorFlow Lite:
sudo apt install -y ./tensorflow-lite_64.deb
You can remove the .deb package because it is no longer needed:
rm tensorflow-lite_64.deb
Testing TensorFlow Lite (C API)
The Debian package contains shared libraries for the C and C++ APIs. First, we will test the C API. Before starting, install the GNU C compiler:
sudo apt install -y gcc
For testing, we need a TensorFlow Lite model. You can read the post on how to convert a TensorFlow 2 model to a TensorFlow Lite model, or you can download a prepared model from the Internet:
wget -O model.tflite https://www.dropbox.com/s/b1426ewx13idlr0/simple_linear_regression.tflite?dl=1
This model solves the simple linear regression problem described in the post.
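As context, the relationship such a model approximates can be recovered from training data with an ordinary least-squares fit. Here is a minimal NumPy sketch; the sample x values are illustrative, not the post's actual training set:

```python
import numpy as np

# Sample points following y = 2 * x + 1, the relationship
# the .tflite model was trained on (illustrative values).
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# A degree-1 least-squares fit recovers the slope and intercept.
slope, intercept = np.polyfit(x, y, 1)
print(f"y = {slope:.2f} * x + {intercept:.2f}")  # y = 2.00 * x + 1.00
```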
Create a main.c file:
nano main.c
Add the following code:
#include <stdio.h>

#include <tensorflow/lite/c/common.h>
#include <tensorflow/lite/c/c_api.h>

int main()
{
    int numThreads = 4;

    // Load the model and create an interpreter with the chosen thread count
    TfLiteModel *model = TfLiteModelCreateFromFile("model.tflite");
    TfLiteInterpreterOptions *options = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(options, numThreads);
    TfLiteInterpreter *interpreter = TfLiteInterpreterCreate(model, options);
    TfLiteInterpreterAllocateTensors(interpreter);

    // Copy the input value into the input tensor
    float x[] = {15.0f};
    TfLiteTensor *inputTensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteTensorCopyFromBuffer(inputTensor, x, sizeof(x));

    // Run inference
    TfLiteInterpreterInvoke(interpreter);

    // Copy the prediction out of the output tensor
    float y[1];
    const TfLiteTensor *outputTensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
    TfLiteTensorCopyToBuffer(outputTensor, y, sizeof(y));

    printf("%.4f\n", y[0]);

    // Release resources
    TfLiteInterpreterDelete(interpreter);
    TfLiteInterpreterOptionsDelete(options);
    TfLiteModelDelete(model);

    return 0;
}
The code predicts a value of y for a previously unknown value of x. The model was trained using the following relationship between the variables: y = 2 * x + 1.
We load the model and initialize the TensorFlow Lite interpreter. The value of x is copied into the buffer of the input tensor. We run the model. The value from the buffer of the output tensor is copied into y. Finally, we print the result and release the resources.
Compile the code:
gcc main.c -o test -ltensorflowlite_c
Run the program:
./test
In this case, x is 15.0 and the model returns that y is 31.0044. The result can be verified:
y = 2 * x + 1 = 2 * 15 + 1 = 31
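The small difference between 31.0044 and the exact value 31 is the model's approximation error from training. A quick standalone check:

```python
x = 15.0
y_exact = 2 * x + 1       # the true relationship: 31.0
y_model = 31.0044         # value printed by the program above
error = abs(y_model - y_exact)
print(f"exact: {y_exact}, model: {y_model}, error: {error:.4f}")
```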
Testing TensorFlow Lite (C++ API)
The C API can be used from C++ code. However, TensorFlow Lite also has a C++ API. Make sure the GNU C++ compiler is installed:
sudo apt install -y g++
Create a main.cpp file:
nano main.cpp
When the file is opened, add the following code:
#include <cstring>
#include <iostream>
#include <memory>

#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>

using namespace tflite;

int main()
{
    int numThreads = 4;

    // Load the model and build an interpreter with the chosen thread count
    std::unique_ptr<FlatBufferModel> model = FlatBufferModel::BuildFromFile("model.tflite");
    ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<Interpreter> interpreter;
    InterpreterBuilder(*model, resolver)(&interpreter, numThreads);
    interpreter->AllocateTensors();

    // Copy the input value into the input tensor
    float x[] = {15.0f};
    float *inputTensor = interpreter->typed_input_tensor<float>(0);
    memcpy(inputTensor, x, sizeof(x));

    // Run inference
    interpreter->Invoke();

    // Read the prediction from the output tensor
    float *y = interpreter->typed_output_tensor<float>(0);
    std::cout << y[0] << std::endl;

    return 0;
}
This code uses the C++ API and performs the same job as the C version. Execute the following command to compile it:
g++ main.cpp -o test -ltensorflow-lite
Run the program:
./test
Testing TensorFlow Lite (Python)
Create a main.py file:
nano main.py
Add the following code:
from tflite_runtime.interpreter import Interpreter
import numpy as np
numThreads = 4
interpreter = Interpreter('model.tflite', num_threads=numThreads)
interpreter.allocate_tensors()
x = np.float32([[15.0]])
inputDetails = interpreter.get_input_details()
interpreter.set_tensor(inputDetails[0]['index'], x)
interpreter.invoke()
outputDetails = interpreter.get_output_details()
y = interpreter.get_tensor(outputDetails[0]['index'])[0]
print('%.4f' % y[0])
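Note that the interpreter expects the input array to match the input tensor's dtype and shape exactly; for this model (as in the tutorial) that is a single float32 value with shape (1, 1), one batch element with one feature. A standalone sketch of how the array above is constructed:

```python
import numpy as np

# np.float32([[15.0]]) builds a 2-D float32 array:
# one batch element, one feature.
x = np.float32([[15.0]])
print(x.dtype, x.shape)  # float32 (1, 1)
```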
Execute the script using Python 3:
python3 main.py
Uninstall TensorFlow Lite
If you want to completely remove TensorFlow Lite, run the following command:
sudo apt purge --autoremove -y tensorflow-lite
13 Comments
Hi,
Thanks for posting. I was trying to follow the instructions to install TensorFlow Lite on my RPi 4B for use with libcamera-detect. After installing with the first two commands, I do not see any tensorflow-lite folder in my /home/pi directory. Where is it installed? During installation, I see messages saying "unpacking and setting up tensorflow-lite (2.7.0-2)".
But I can't see where it is installed. Also, when I run the test as described on this page, it fails with an error:
What might be wrong?
Hi, Sanjib
The shared libraries libtensorflow-lite.so and libtensorflowlite_c.so are installed to the /usr/local/lib directory:
ls /usr/local/lib | grep tensorflow
Header files are installed to the /usr/local/include/tensorflow directory:
ls /usr/local/include/tensorflow
I see that you are using an older version of Raspberry Pi OS. At the moment, the precompiled TensorFlow Lite shared libraries are compatible with Raspberry Pi OS Bullseye, so you need to update Raspberry Pi OS in order to use the library.
Thank you so much!
Regards,
Sanjib
Does this include python? Or is this only for C and C++?
Hi
It only includes the C and C++ libraries:
- libtensorflowlite_c.so (C library)
- libtensorflow-lite.so (C++ library)
Does this installation contain FlatBuffers?
Hi, Rahul
The tensorflow-lite.deb package contains the FlatBuffers header files, which are installed to the /usr/local/include/flatbuffers directory:
ls /usr/local/include/flatbuffers
Hello,
With a Raspberry Pi B+ V1.2 I get an error message with GCC and G++: "Illegal instruction"
regards
Hi,
Precompiled TensorFlow Lite libraries are only compatible with Raspberry Pi 3 Model A+/B+ and Raspberry Pi 4 Model B.
Your TensorFlow binaries are working great. I tried several times to install TensorFlow on my own and was never able to get my programs to compile. This was so quick and easy.
Thank you!!!
This took me a few hours to work out. When building rpicam-apps with TensorFlow and you get the error "tensorflow-lite not found, tried pkgconfig and cmake", you might be missing tensorflow-lite.pc. Create the file in /usr/local/lib/pkgconfig with the following content.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'tensorflow-lite:arm64' instead of './tensorflow-lite_64.deb'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
tensorflow-lite:arm64 : Depends: python3-numpy:arm64 (>= 1.24.2) but it is not installable
Hi,
It's likely that you've installed an older version of Raspberry Pi OS. Make sure to install the latest version, Raspberry Pi OS Bookworm. Also, verify that you're using the 64-bit version of the OS.