This repository provides AI example applications for the Synaptics Astra SL16xx series, covering computer vision, speech processing, and large language models (LLMs). Follow the instructions below to set up your environment and run various AI examples in a few minutes.
The examples in this repository are designed for Astra SL series processors, leveraging the NPU (on SL1680 and SL1640) or the GPU (on SL1620) via the Astra Machina Dev Kit.
Note: Learn more about Synaptics Astra by visiting:
- Astra – Explore the Astra AI platform.
- Astra Machina – Discover our powerful development kit.
- AI Developer Zone – Find step-by-step tutorials and resources.
For instructions on how to set up the Astra Machina board, see the Setting up the hardware guide.
Clone the repository using the following command:
git clone https://github.com/synaptics-synap/examples.git
Navigate to the repository directory:
cd examples
To get started, set up your Python environment. This step ensures all required dependencies are installed and isolated within a virtual environment:
python3 -m venv .venv --system-site-packages
source .venv/bin/activate
pip install -r requirements.txt
The SynapRT Python package lets you run real-time AI pipelines on your Synaptics Astra board in just a few lines of code:
pip install https://github.com/synaptics-synap/synap-rt/releases/download/v0.0.1-preview/synap_rt-0.0.1-py3-none-any.whl
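As a rough illustration of what such a pipeline might look like, here is a purely hypothetical sketch: the `Pipeline` class and its arguments are invented for illustration and are not the actual synap_rt API (see the Synap-RT repository linked below for real usage).

```python
# HYPOTHETICAL sketch: Pipeline and its arguments are invented for
# illustration and are not the actual synap_rt API. Consult the Synap-RT
# repository for real usage.
from synap_rt import Pipeline  # hypothetical import

# Build a camera -> model -> output pipeline in a few lines.
pipeline = Pipeline(source="cam", model="model.synap")  # hypothetical
pipeline.run()
```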
To run a YOLOv8-small image classification model on an image:
python3 -m vision.image_class out.jpg
To run a YOLOv8-small body pose model and infer results from a connected camera:
python3 -m vision.body_pose 'cam'
Moonshine is a speech-to-text model that transcribes spoken audio into text.
To transcribe an audio file (for example, jfk.wav):
python3 -m speech_to_text.moonshine 'samples/jfk.wav'
To enable real-time speech transcription using a USB microphone (such as one built into a webcam or headset):
python3 -m speech_to_text.pipeline
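If the microphone is not picked up, a quick way to check which audio devices the board sees is to query them from Python. This sketch assumes the sounddevice package is installed (a common dependency for live-audio examples; check requirements.txt):

```python
# List available audio devices to verify the USB microphone is detected.
import sounddevice as sd

print(sd.query_devices())
```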
Convert a given text string into synthetic speech using Piper:
python3 -m text_to_speech.piper "synaptics astra example"
SQLite3 is required for certain AI model operations. Install it using the following commands:
wget https://synaptics-astra-labs.s3.us-east-1.amazonaws.com/downloads/sqlite3_3.38.5-r0_arm64.deb
wget https://synaptics-astra-labs.s3.us-east-1.amazonaws.com/downloads/python3-sqlite3_3.10.13-r0_arm64.deb
dpkg -i python3-sqlite3_3.10.13-r0_arm64.deb sqlite3_3.38.5-r0_arm64.deb
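To confirm the installation, check that Python's sqlite3 module imports and reports the underlying library version:

```python
# Verify that the sqlite3 module is available and linked against SQLite.
import sqlite3

print(sqlite3.sqlite_version)  # version of the underlying SQLite library
```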
The following command installs llama-cpp-python, which enables running large language models efficiently:
pip install llama-cpp-python
A prebuilt version is also available; it installs faster but lags behind the latest release and may not support newer models (e.g., DeepSeek):
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
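Beyond the ready-made modules below, llama-cpp-python can also be driven directly from Python. A minimal sketch, assuming a GGUF model file is present on the board (the model path is a placeholder):

```python
# Minimal llama-cpp-python usage sketch. The model path is a placeholder;
# point it at any GGUF model available on the board.
from llama_cpp import Llama

llm = Llama(model_path="/home/root/model.gguf", n_ctx=2048)
output = llm("Q: What is edge AI? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```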
To run large language models such as Qwen and DeepSeek:
python3 -m llm.qwen
#python3 -m llm.deepseek
Get a feel for embeddings and how to generate sentence embeddings using MiniLM, a lightweight transformer-based model:
python3 -m embeddings.minilm "synaptics astra example!"
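Sentence embeddings are typically compared with cosine similarity: similar sentences map to vectors that point in similar directions. A minimal numpy sketch (the vectors below are placeholders standing in for the example's output):

```python
# Compare two sentence embeddings with cosine similarity.
# The vectors are placeholders; in practice they come from the MiniLM example.
import numpy as np

a = np.array([0.1, 0.3, 0.5])  # embedding of sentence 1 (placeholder)
b = np.array([0.2, 0.1, 0.6])  # embedding of sentence 2 (placeholder)

similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {similarity:.3f}")
```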
Launch an AI-powered text assistant with tool-calling functionality:
python3 -m assistant.toolcall
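Under the hood, tool calling amounts to the model emitting a structured call that the application dispatches to a real function. A generic sketch of that dispatch step (illustrative only, not the assistant.toolcall implementation):

```python
# Generic tool-call dispatch sketch: the model emits a JSON tool call,
# and the application routes it to a registered function.
import json

def get_time(timezone: str) -> str:
    return f"12:00 in {timezone}"  # placeholder implementation

TOOLS = {"get_time": get_time}

# Example of a structured call an LLM might emit.
raw = '{"name": "get_time", "arguments": {"timezone": "UTC"}}'
call = json.loads(raw)
result = TOOLS[call["name"]](**call["arguments"])
print(result)
```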
Use the SyNAP GStreamer plugins to run real-time inference on a video input using Python:
python3 -m examples.infer_video -i /home/root/video.mp4 --fullscreen
- AI Developer Zone – Find step-by-step tutorials and resources.
- GitHub Synap-RT – Explore real-time AI pipelines with Python.
- GitHub SyNAP-Python-API – Python bindings that closely mirror our SyNAP C++ API.
- GitHub SyNAP C++ – Low-level access to our SyNAP C++ AI Framework.
- GitHub Astra SDK – Get started with the Astra SDK for AI development.
We encourage and appreciate community contributions! Here’s how you can get involved:
- Contribute to our Community – Share your work and collaborate with other developers.
- Suggest Features and Improvements – Have an idea? Let us know how we can enhance the project.
- Report Issues and Bugs – Help us improve by identifying and reporting any issues.
Your contributions make a difference, and we look forward to your input!
This project is licensed under the Apache License, Version 2.0.
See the LICENSE file for details.