@@ -12,9 +12,32 @@ As a data scientist or ML engineer, you've likely faced the challenge of limited

## What's Included

- - `.devcontainer/` - Complete GPU-enabled development environment configuration
-   - `devcontainer.json` - Environment setup with NVIDIA CUDA support
-   - `Dockerfile` - Base container configuration using Ubuntu
+ - `devcontainer.json` - Declarative and repeatable environment setup, including:
+   - NVIDIA CUDA support via features
+   - GPU requirements and access configuration
+   - VS Code Python and Jupyter extensions
+   - Python interpreter and formatting settings
+   - Automatic requirements.txt installation
+   - Kernel specification for Jupyter notebooks
+ - `Dockerfile` - Base container configuration using Ubuntu with the CUDA toolkit and Python packages, including:
+   - Python 3.10 with a virtual environment
+   - IPython kernel for Jupyter notebooks
+   - Ollama for LLM inference
+ - `requirements.txt` - Python dependencies, including (but not limited to):
+   - NumPy (>=1.24.0)
+   - Pandas (>=2.0.0)
+   - Matplotlib (>=3.7.0)
+   - Seaborn (>=0.12.0)
+   - scikit-learn (>=1.3.0)
+   - Jupyter (>=1.0.0)
+   - IPython Kernel (>=6.0.0)
+   - Plotly (>=5.0.0)
+   - Plotly Express (>=0.4.0)
+   - nbformat (>=5.0.0)
+ - `.gitpod/automations.yaml` - Gitpod automation examples:
+   - Starting the Ollama server
+   - Viewing GPU stats of the environment
+   - Running an LLM with Ollama
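To make the pieces above concrete, here is a minimal sketch of what a `devcontainer.json` along these lines can look like (the exact values are illustrative assumptions, not a copy of the file in this repo):

```json
{
  "name": "gpu-ml-environment",
  "build": { "dockerfile": "Dockerfile" },
  "features": {
    "ghcr.io/devcontainers/features/nvidia-cuda:1": {}
  },
  "hostRequirements": { "gpu": true },
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "ms-toolsai.jupyter"]
    }
  }
}
```

The `features` entry pulls in the CUDA toolchain declaratively, while `hostRequirements.gpu` tells the platform that the workspace needs GPU hardware.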
## Quick Start
@@ -32,15 +55,32 @@ As a data scientist or ML engineer, you've likely faced the challenge of limited
- CUDA Toolkit
- Python 3.x
- Common ML libraries (PyTorch, TensorFlow)
+ - Ollama for local inference

- ## Verify Your Setup
+ ## Try It Out

- Once your environment is running:
+ Once your environment is running, here are some things you can try:

+ 1. Run local inference with Ollama:
+ ```bash
+ ollama run phi3:medium
+ ```
+
+ 2. See NVIDIA GPU performance and stats:
```bash
watch -n 1 nvidia-smi
```

+ 3. Run a Jupyter notebook:
+ ```bash
+ jupyter notebook
+ ```
+
+ 4. Execute a Python script:
+ ```bash
+ python my_script.py
+ ```
+

## Customization
- Modify `.devcontainer/devcontainer.json` to change GPU requirements
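For example, GPU and resource requirements are declared under `hostRequirements` in the Dev Container spec; a sketch with assumed values:

```json
{
  "hostRequirements": {
    "gpu": true,
    "memory": "32gb",
    "storage": "100gb"
  }
}
```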
@@ -54,7 +94,10 @@ watch -n 1 nvidia-smi
**Note:** Refer to AWS documentation for precise costs.

- ## Learn More
+ ## Documentation
+
+ For a full tutorial, check out: [https://www.gitpod.io/blog/gpu-dev-environments-on-aws](https://www.gitpod.io/blog/gpu-dev-environments-on-aws)
+
+ Other helpful resources:
- [Gitpod Documentation](https://www.gitpod.io/docs)
- [Dev Container Specification](https://containers.dev)