Segment Anything 2 - Core ML

Run Segment Anything 2 (SAM 2) on macOS using Core ML models.

Installation

  1. Clone this repository
  2. Install dependencies:
pip install coremltools numpy pillow opencv-python
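
If you prefer to keep the dependencies isolated, a typical setup looks like this (a minimal sketch; the .venv name is arbitrary):

python3 -m venv .venv
source .venv/bin/activate
pip install coremltools numpy pillow opencv-python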

Directory Structure

sam2-coreml-python/
├── models/
│   ├── SAM2_1SmallImageEncoderFLOAT16.mlpackage
│   ├── SAM2_1SmallPromptEncoderFLOAT16.mlpackage
│   └── SAM2_1SmallMaskDecoderFLOAT16.mlpackage
├── script.py
└── README.md

Usage

  1. Download the Core ML models and place them in the models directory
  2. Place your input image in the project directory
  3. Update script.py with the path to your input image
  4. Run the script:
python script.py

The script will generate output_mask.png containing the segmentation mask and output_segmented.png containing the segmented image.
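
For orientation, script.py follows SAM 2's three-stage flow: the image encoder turns the input image into an embedding, the prompt encoder turns a point click into prompt embeddings, and the mask decoder combines the two into a segmentation mask. The sketch below illustrates that flow with coremltools. The feature names ("image", "points", "image_embedding", and so on) are assumptions for illustration; inspect each .mlpackage (for example via model.get_spec()) for the real input and output names.

import coremltools as ct
import numpy as np
from PIL import Image

# Load the three models from the directory structure above.
encoder = ct.models.MLModel("models/SAM2_1SmallImageEncoderFLOAT16.mlpackage")
prompt_encoder = ct.models.MLModel("models/SAM2_1SmallPromptEncoderFLOAT16.mlpackage")
decoder = ct.models.MLModel("models/SAM2_1SmallMaskDecoderFLOAT16.mlpackage")

# SAM 2 expects a fixed-size square input; 1024x1024 is assumed here.
image = Image.open("input.jpg").convert("RGB").resize((1024, 1024))

# Stage 1: encode the image once; the embedding can be reused across prompts.
image_out = encoder.predict({"image": image})

# Stage 2: encode a single foreground click (normalized x, y coordinates).
# "points" and "labels" are hypothetical feature names.
prompt_out = prompt_encoder.predict({
    "points": np.array([[[0.5, 0.5]]], dtype=np.float32),
    "labels": np.array([[1]], dtype=np.int32),
})

# Stage 3: decode a mask from the image and prompt embeddings.
# Hypothetical feature names again; check the mask decoder's spec.
mask_out = decoder.predict({
    "image_embedding": image_out["image_embedding"],
    "sparse_embedding": prompt_out["sparse_embeddings"],
    "dense_embedding": prompt_out["dense_embeddings"],
})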

Models

The script expects the SAM 2 Core ML models. They need to be downloaded separately and placed in the models directory, following the directory structure above.

You can find the models on Hugging Face. I used the coreml-sam2.1-small models on my MacBook Pro M3, and inference took around 4 seconds. The script will work with other models as well.
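
If you have the huggingface_hub package installed (pip install huggingface_hub), one way to fetch the models is a snapshot download. A minimal sketch, assuming Apple's published checkpoint repo; verify the actual repo id on the model page:

from huggingface_hub import snapshot_download

# The repo id is an assumption; check the model page on Hugging Face.
snapshot_download(repo_id="apple/coreml-sam2.1-small", local_dir="models")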

Credit

When writing this script, I took regular inspiration from the implementation of sam2-studio, a SwiftUI app that uses the same Core ML models.
