ONNX Runtime supports multiple execution providers: backend implementations that can leverage specific hardware accelerators to speed up model execution. Querying the available execution providers is useful for several reasons: it tells you which execution options a particular machine or hardware platform offers, and it lets developers explore and experiment with different execution strategies. This tutorial shows how to get the available execution providers in ONNX Runtime using C++.
In the following code, we call Ort::GetAvailableProviders(), which returns a vector of strings, each string being the name of an available execution provider. We then iterate over the vector and print each name to the console.
#include <iostream>
#include <string>
#include <vector>
#include <onnxruntime_cxx_api.h>

int main()
{
    // Query the execution providers compiled into this ONNX Runtime build.
    std::vector<std::string> providers = Ort::GetAvailableProviders();

    // Print the name of each available execution provider.
    for (const auto &provider : providers) {
        std::cout << provider << std::endl;
    }

    return 0;
}
Here's an example of the possible output when running the code:
TensorrtExecutionProvider
CUDAExecutionProvider
CPUExecutionProvider
In this example, the output indicates that three execution providers are available: TensorRT, CUDA, and the default CPU provider. The exact list depends on how your ONNX Runtime binary was built and on the hardware present, so your output may differ.