There are a variety of pre-trained, open-source models you can use immediately with LiteRT to accomplish many machine learning tasks. Using pre-trained LiteRT models lets you add machine learning functionality to your mobile and edge device application quickly, without having to build and train a model. This guide helps you find and choose trained models for use with LiteRT.
You can start browsing a large set of models on Kaggle Models.
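For example, here is a minimal sketch of downloading a model from Kaggle Models and running a single inference in Python. It assumes the kagglehub and ai-edge-litert packages are installed; the model handle is an illustrative placeholder, and tf.lite.Interpreter works the same way if you already depend on TensorFlow.

```python
# Minimal sketch: download a pre-trained model from Kaggle Models and run one
# inference. Assumes the kagglehub and ai-edge-litert packages are installed;
# the model handle below is an illustrative placeholder, not a guaranteed path.
import glob
import os

import kagglehub
import numpy as np
from ai_edge_litert.interpreter import Interpreter  # tf.lite.Interpreter also works

# Download the model files; kagglehub returns the local directory path.
model_dir = kagglehub.model_download(
    "google/mobilenet-v3/tfLite/small-100-224-classification"  # placeholder handle
)
model_path = glob.glob(os.path.join(model_dir, "**", "*.tflite"), recursive=True)[0]

interpreter = Interpreter(model_path=model_path)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed dummy data matching the model's expected shape and dtype (assumes a
# fixed input shape; real use would feed preprocessed input data instead).
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print(scores.shape)
```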
Find a model for your application
Finding an existing LiteRT model for your use case can be tricky depending on what you are trying to accomplish. Here are a few recommended ways to discover models for use with LiteRT:
- By example: The fastest way to find and start using models with LiteRT is to browse the LiteRT Examples section to find models that perform a task similar to your use case. This short catalog of examples provides models for common use cases, with explanations of the models and sample code to get you started running and using them.
- By data input type: Aside from looking at examples similar to your use case, another way to discover models is to consider the type of data you want to process, such as audio, text, images, or video. Machine learning models are frequently designed for one of these data types, so looking for models that handle the data type you want to use can help you narrow down which models to consider, as shown in the sketch after the following list.
The following list links to LiteRT models on Kaggle Models for common use cases:
- Image classification models
- Object detection models
- Text classification models
- Text embedding models
- Audio speech synthesis models
- Audio embedding models
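Once you have a candidate model, you can quickly confirm whether its expected input matches your data. The following is a minimal sketch, assuming a hypothetical local file named candidate.tflite and the ai-edge-litert package.

```python
# Minimal sketch: inspect what kind of input a candidate model expects, which
# quickly tells you whether it matches your data (e.g. images vs. audio).
# "candidate.tflite" is a hypothetical local file.
from ai_edge_litert.interpreter import Interpreter  # tf.lite.Interpreter also works

interpreter = Interpreter(model_path="candidate.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # An image classifier typically reports a shape like [1, 224, 224, 3];
    # an audio model often expects a 1-D waveform or a spectrogram.
    print(detail["name"], detail["shape"], detail["dtype"])
```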
Choose between similar models
If your application follows a common use case such as image classification or object detection, you may find yourself deciding between multiple LiteRT models with varying binary sizes, data input sizes, inference speeds, and prediction accuracy ratings. When deciding among several models, first narrow your options based on your most limiting constraint: model size, data size, inference speed, or accuracy.
If you are not sure what your most limiting constraint is, assume it is the size of the model and pick the smallest model available. Picking a small model gives you the most flexibility in terms of the devices where you can successfully deploy and run the model. Smaller models also typically produce faster inferences, and speedier predictions generally create better end-user experiences. Smaller models typically have lower accuracy rates, so you may need to pick larger models if prediction accuracy is your primary concern.
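If model size and inference speed are your limiting constraints, a quick local benchmark can settle the comparison. The following is a minimal sketch, assuming the ai-edge-litert package, models with fixed input shapes, and two hypothetical candidate files; it reports each model's file size and average inference latency.

```python
# Minimal sketch: compare candidate .tflite models on binary size and average
# inference latency. File names are hypothetical placeholders; assumes models
# with fixed input shapes and the ai-edge-litert package.
import os
import time

import numpy as np
from ai_edge_litert.interpreter import Interpreter  # tf.lite.Interpreter also works

def size_and_latency(model_path, runs=50):
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    latency_ms = (time.perf_counter() - start) / runs * 1000
    size_mb = os.path.getsize(model_path) / 1e6
    return size_mb, latency_ms

for path in ["candidate_small.tflite", "candidate_large.tflite"]:  # placeholders
    size_mb, latency_ms = size_and_latency(path)
    print(f"{path}: {size_mb:.1f} MB, {latency_ms:.1f} ms per inference")
```

Note that latency measured on a development machine only supports relative comparisons between candidates; always validate on your target device before committing to a model.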
Sources for models
Use the LiteRT Examples section and Kaggle Models as your first destinations for finding and selecting models for use with LiteRT. These sources generally have up-to-date, curated models for use with LiteRT, and frequently include sample code to accelerate your development process.
TensorFlow models
It is possible to convert regular TensorFlow models to LiteRT format. For more information about converting models, see the TensorFlow Lite Converter documentation. You can find TensorFlow models on Kaggle Models and in the TensorFlow Model Garden.
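As a minimal sketch, the conversion typically looks like the following, assuming a hypothetical SavedModel directory named my_saved_model:

```python
# Minimal sketch: convert a TensorFlow SavedModel to LiteRT (.tflite) format
# with the TFLiteConverter API. "my_saved_model" is a hypothetical path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
# Optional: enable default optimizations (e.g. post-training quantization)
# to shrink the binary, usually at a small cost in accuracy.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the converted flatbuffer to disk for use with the LiteRT runtime.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```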