Ollama Docker Setup

This repository contains scripts and configurations to run and interact with AI models via Ollama using Docker.

Getting Started

Step 1: Pull the Docker Image

First, pull the latest Ollama image from Docker Hub:

docker pull ollama/ollama:latest

Alternatively, you can pull the image from the Docker Desktop UI.

Step 2: Run the Docker Container

docker run -d -p 11434:11434 --name ollama_test ollama/ollama
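
Once the container is up, you can optionally confirm from the host that the Ollama server is listening on the mapped port. The sketch below is a hypothetical helper (not part of this repository) and assumes the requests library is installed on the host (see the Python section further down).

# check_ollama.py -- hypothetical helper, not included in this repository
import requests

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    # The Ollama server answers plain HTTP on the port mapped by docker run.
    try:
        return requests.get(base_url, timeout=5).status_code == 200
    except requests.ConnectionError:
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())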

Step 3: Access the Container

docker exec -it <containerID> /bin/bash

Step 4: Get Available Models

Once inside the container, list all available models:

ollama list
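
For reference, the same model list can also be read from the host without exec-ing into the container, via Ollama's HTTP API on the mapped port. A minimal sketch, assuming the requests library is installed on the host:

# list_models.py -- hypothetical example, not included in this repository
import requests

# /api/tags returns the locally available models as JSON.
data = requests.get("http://localhost:11434/api/tags", timeout=10).json()
for model in data.get("models", []):
    print(model["name"])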

Step 5: Pull the TinyLlama Model

To pull the TinyLlama model, run:

ollama pull tinyllama

Step 6: Run the TinyLlama Model

Execute the following command to run the TinyLlama model:

ollama run tinyllama

Accessing the Model via Python

Navigate to the directory containing main.py.

Open a terminal in that directory.

Step 1: Install the requests Library

Ensure that the requests library is installed:

pip install requests

Step 2: Send a Request to the AI Model

Run the Python script to interact with the model:

python .\main.py
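
The contents of main.py are not reproduced above; the sketch below illustrates what such a script typically looks like, assuming it calls Ollama's /api/generate endpoint on the mapped port with the tinyllama model pulled in Step 5. The actual main.py in this repository may differ.

# main.py -- a minimal sketch of a request to the model; the real script may differ
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "tinyllama",    # model pulled in Step 5
    "prompt": "Why is the sky blue?",
    "stream": False,         # ask for a single JSON response instead of a stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()

# With stream=False, the generated text is returned in the "response" field.
print(response.json()["response"])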

Stop the Container

To stop the running container:

docker stop ollama_test

Remove Exited Containers

Clean up any exited containers:

docker rm $(docker ps -a -f "status=exited" -q)
