Synaptics Astra SL16xx Series AI Examples


This repository provides AI example applications for the Synaptics Astra SL16xx series, covering computer vision, speech processing, and large language models (LLMs). Follow the instructions below to set up your environment and run the examples in a few minutes.

The examples in this repository are designed for Astra SL series processors on the Astra Machina Dev Kit, using the NPU on SL1680 and SL1640 and the GPU on SL1620.

Note: To learn more about Synaptics Astra, see the Synaptics Astra documentation.

Setting up Astra Machina Board

For instructions on how to set up the Astra Machina board, see the Setting up the hardware guide.

🔧 Installation

Clone the Repository

Clone the repository using the following command:

git clone https://github.com/synaptics-synap/examples.git

Navigate to the repository directory:

cd examples

Set Up the Python Environment

To get started, set up your Python environment. This step ensures all required dependencies are installed and isolated within a virtual environment:

python3 -m venv .venv --system-site-packages
source .venv/bin/activate
pip install -r requirements.txt

Install SynapRT

The SynapRT Python package lets you run real-time AI pipelines on your Synaptics Astra board in just a few lines of code:

pip install https://github.com/synaptics-synap/synap-rt/releases/download/v0.0.1-preview/synap_rt-0.0.1-py3-none-any.whl

🎯 Running AI Examples

🖼️ Vision

To run a YOLOv8-small image classification model on an image:

python3 -m vision.image_class out.jpg

To run a YOLOv8-small body pose model using a connected camera and view the inference results:

python3 -m vision.body_pose 'cam'
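Internally, a camera example of this kind typically reads frames in a loop and runs inference on each one. The sketch below is illustrative only: it uses OpenCV and a placeholder run_pose_model() function, neither of which is taken from this repository.

import cv2  # OpenCV, used here only to illustrate the capture loop

def run_pose_model(frame):
    # Placeholder: in the real example this would be the SyNAP/SynapRT inference call.
    return []

cap = cv2.VideoCapture(0)               # open the first connected camera
while cap.isOpened():
    ok, frame = cap.read()              # grab one frame
    if not ok:
        break
    keypoints = run_pose_model(frame)   # run body-pose inference on the frame
    print(keypoints)
cap.release()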


🗣️ Speech-to-Text

Moonshine is a speech-to-text model that transcribes spoken audio into text.

To transcribe an audio file (for example, samples/jfk.wav):

python3 -m speech_to_text.moonshine 'samples/jfk.wav'

To enable real-time speech transcription using a USB microphone (such as one built into a webcam or headset):

python3 -m speech_to_text.pipeline

🔊 Text-to-Speech

Convert a given text string into synthetic speech using Piper:

python3 -m text_to_speech.piper "synaptics astra example"

🚀 Large Language Models (LLMs)

Install SQLite3 Dependencies

SQLite3 is required for certain AI model operations. Install it using the following commands:

wget https://synaptics-astra-labs.s3.us-east-1.amazonaws.com/downloads/sqlite3_3.38.5-r0_arm64.deb
wget https://synaptics-astra-labs.s3.us-east-1.amazonaws.com/downloads/python3-sqlite3_3.10.13-r0_arm64.deb
dpkg -i python3-sqlite3_3.10.13-r0_arm64.deb sqlite3_3.38.5-r0_arm64.deb
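To confirm the packages installed correctly (an optional check, not part of the original instructions), query the SQLite version from Python:

python3 -c "import sqlite3; print(sqlite3.sqlite_version)"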

🦙 Install llama-cpp-python

This command installs llama-cpp-python (building it from source), which enables running large language models efficiently:

pip install llama-cpp-python

A prebuilt wheel is also available; it installs faster but lags behind the latest release and may not support newer models (e.g., DeepSeek):

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
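Once installed, llama-cpp-python can be exercised directly with a few lines of Python. The snippet below is a minimal sanity check, not code from this repository; the GGUF model path is a placeholder you would point at a model downloaded separately:

from llama_cpp import Llama

# Placeholder path: download a GGUF model (e.g. a quantized Qwen build) and adjust.
llm = Llama(model_path="models/qwen2-0.5b-instruct-q4_k_m.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello from the Astra board."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])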

To run large language models such as Qwen and DeepSeek:

python3 -m llm.qwen
#python3 -m llm.deepseek


🔍 Embeddings

To get a feel for embeddings, generate sentence embeddings with MiniLM, a lightweight transformer-based model:

python3 -m embeddings.minilm "synaptics astra example!"
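Conceptually, an embedding maps a sentence to a fixed-length vector, and semantically similar sentences end up with nearby vectors. The sketch below illustrates this with the sentence-transformers package and cosine similarity; it is an assumption for illustration, not the implementation used by embeddings.minilm:

from sentence_transformers import SentenceTransformer
import numpy as np

# all-MiniLM-L6-v2 is a common MiniLM checkpoint; the repo's example may use a different one.
model = SentenceTransformer("all-MiniLM-L6-v2")

a, b = model.encode(["synaptics astra example!", "an example for the Astra board"])
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {cosine:.3f}")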

🤖 AI Text Assistant

Launch an AI-powered text assistant with tool-calling functionality:

python3 -m assistant.toolcall
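Tool calling means the model replies with a structured request (a tool name plus JSON arguments) instead of free text, and the application executes the matching Python function and feeds the result back. The sketch below shows only that dispatch step, with a hypothetical get_time tool; it is not the assistant.toolcall implementation:

import json
from datetime import datetime

def get_time() -> str:
    # Hypothetical tool: returns the current time as a string.
    return datetime.now().isoformat(timespec="seconds")

TOOLS = {"get_time": get_time}

# A tool-call message as an LLM might emit it (name + JSON-encoded arguments).
tool_call = {"name": "get_time", "arguments": "{}"}

func = TOOLS[tool_call["name"]]
args = json.loads(tool_call["arguments"])
result = func(**args)  # run the requested tool
print(f"tool result fed back to the model: {result}")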

Running the GStreamer Example

Use the SyNAP GStreamer plugins to run real-time inference on a video input from Python:

python3 -m examples.infer_video -i /home/root/video.mp4 --fullscreen


📚 Additional Resources

Contributing

We encourage and appreciate community contributions! Here’s how you can get involved:

  • Contribute to our Community – Share your work and collaborate with other developers.
  • Suggest Features and Improvements – Have an idea? Let us know how we can enhance the project.
  • Report Issues and Bugs – Help us improve by identifying and reporting any issues.

Your contributions make a difference, and we look forward to your input!

License

This project is licensed under the Apache License, Version 2.0.
See the LICENSE file for details.
