Run Segment Anything 2 (SAM 2) on macOS using Core ML models.
- Clone this repository
- Install dependencies:

```bash
pip install coremltools numpy pillow opencv-python
```
Expected project layout:

```
sam2-coreml-python/
├── models/
│   ├── SAM2_1SmallImageEncoderFLOAT16.mlpackage
│   ├── SAM2_1SmallPromptEncoderFLOAT16.mlpackage
│   └── SAM2_1SmallMaskDecoderFLOAT16.mlpackage
├── script.py
└── README.md
```
- Download the Core ML models and place them in the `models` directory
- Place your input image in the project directory
- Update the `script.py` file with the input image path (if you need to adapt it to a different model, see the inspection sketch after this list)
- Run the script:

```bash
python script.py
```
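The input and output tensor names can vary between Core ML conversions, so before editing `script.py` for a different model variant it helps to see what each package expects. Here is a minimal inspection sketch (not part of the repository) that prints the interface of each `.mlpackage`:

```python
# Inspection sketch (not the repository's code): print the input and output
# names of each Core ML package so script.py can be adapted to other
# SAM 2 model variants.
import coremltools as ct

MODEL_PATHS = [
    "models/SAM2_1SmallImageEncoderFLOAT16.mlpackage",
    "models/SAM2_1SmallPromptEncoderFLOAT16.mlpackage",
    "models/SAM2_1SmallMaskDecoderFLOAT16.mlpackage",
]

for path in MODEL_PATHS:
    model = ct.models.MLModel(path)       # loads and compiles the package
    desc = model.get_spec().description   # protobuf model description
    print(path)
    print("  inputs: ", [inp.name for inp in desc.input])
    print("  outputs:", [out.name for out in desc.output])
```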
The script will generate `output_mask.png` containing the segmentation mask and `output_segmented.png` containing the segmented image.
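For reference, both outputs can be derived from a binary mask with a few lines of NumPy and Pillow. This is only a sketch of the idea, assuming `mask` is a 2-D array of probabilities at the input image's resolution; the script's actual post-processing may differ:

```python
# Sketch: turn a mask into the two output files.
# Assumes `mask` is a 2-D float array in [0, 1] matching the image size.
import numpy as np
from PIL import Image

image = np.array(Image.open("input.jpg").convert("RGB"))
mask = np.zeros(image.shape[:2], dtype=np.float32)  # placeholder: use the mask decoder's output here

binary = (mask > 0.5).astype(np.uint8)
Image.fromarray(binary * 255, mode="L").save("output_mask.png")

# Keep masked pixels, zero out everything else.
segmented = image * binary[..., None]
Image.fromarray(segmented).save("output_segmented.png")
```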
The script expects the SAM 2 Core ML models. These need to be downloaded separately and placed following the directory structure above. You can find the models on Hugging Face. I used the `coreml-sam2.1-small` models on my MacBook Pro M3, and inference took around 4 seconds. The script works with other models as well.
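If you prefer to fetch the packages programmatically, `huggingface_hub` can download them into the `models` directory. A sketch, assuming the models live in a Hugging Face repository such as `apple/coreml-sam2.1-small` (adjust the repo id to the variant you want):

```python
# Optional download sketch: fetch the .mlpackage directories with
# huggingface_hub instead of downloading them by hand.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="apple/coreml-sam2.1-small",  # assumed repo id; adjust as needed
    allow_patterns=["*.mlpackage/*"],     # only the Core ML packages
    local_dir="models",
)
```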
While writing this script, I took a lot of inspiration from the implementation of sam2-studio, a SwiftUI app that uses the same Core ML models.