If you're new to this, you will need to request access to a GPU-equipped partition.

If you're unsure whether you have access or not, simply run

```
./interactive-gpu.s
```

If an error is returned, then you probably don't have access and need to contact ACC.
If no error is returned, and instead you see a message saying you're launching an interactive session, congrats!
You can confirm that you have access to GPU hardware by typing

```
nvidia-smi
```

Depending on which GPU partition you have been allocated to, you should see a list of at least one device that looks like a GPU.
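
As a rough illustration (a sketch added here, not part of the original tutorial), the device list can also be checked programmatically: `nvidia-smi -L` prints one line per GPU, so counting non-empty lines gives the device count. The sample output below is hypothetical; actual hardware names depend on your partition.

```python
# Hedged sketch: count GPUs from `nvidia-smi -L` style output,
# which prints one line per device (e.g. "GPU 0: Tesla K80 (UUID: ...)").
def count_gpus(smi_output):
    return sum(1 for line in smi_output.splitlines() if line.strip())

# Hypothetical sample output, not real data from this cluster.
sample = "GPU 0: Tesla K80 (UUID: GPU-xxxx)\nGPU 1: Tesla K80 (UUID: GPU-yyyy)\n"
print(count_gpus(sample))  # → 2
```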
Note that I am using python3, which seems to work, although I have not tested this.

If you are unsure of which python version you'd like to use, I recommend using python3.

```
virtualenv -p python3 tensorflow
```

Once that's finished, begin using your virtual environment by sourcing the activation script.

```
source tensorflow/bin/activate
```

We will keep the virtualenv open for now, but you can terminate the environment at any time by simply typing

```
deactivate
```
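
As a quick sanity check (a sketch, not from the original tutorial), you can confirm from inside Python whether the interpreter you're running belongs to a virtualenv by comparing `sys.prefix` against the base interpreter's prefix:

```python
import sys

# Inside a virtualenv, sys.prefix points at the environment directory,
# while sys.base_prefix (or real_prefix under older virtualenv versions)
# points at the system Python the environment was created from.
def in_virtualenv():
    base = getattr(sys, "base_prefix", None) or getattr(sys, "real_prefix", sys.prefix)
    return sys.prefix != base

print(in_virtualenv())
```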
## Installing Tensorflow

Since virtualenv automatically installs a local version of pip, you can install packages locally.
However, because we will be utilizing GPU architecture, we want to be sure to install the GPU-capable version of tensorflow.

```
pip install tensorflow-gpu
```
## Configure LD_LIBRARY_PATH

Tensorflow requires access to NVIDIA's CUDA library to communicate with the GPU.
Edit your .bash_profile to include the following line (after any other modifications to LD_LIBRARY_PATH):

```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/lib64/nvidia
```

Once done, either log out and back onto Exacloud, or source your .bash_profile.
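
To verify the export took effect in your current session, a small check (a sketch, not part of the original tutorial; the CUDA path matches the export line above) can inspect the environment from Python:

```python
import os

# Check whether the CUDA 8.0 library directory (the path exported above)
# is present on LD_LIBRARY_PATH in the current environment.
def cuda_on_path(env_value):
    return "/usr/local/cuda-8.0/lib64" in env_value.split(":")

print(cuda_on_path(os.environ.get("LD_LIBRARY_PATH", "")))
```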

Assuming you're still in an interactive session with GPU access and you are in your virtual environment, you can test your installation with

```
python -c "import tensorflow"
```

If you get an error, then you may well be missing a required CUDA dependency.
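
One way to narrow down a missing dependency (an illustrative sketch, not from the original tutorial; the library name `libcudart.so.8.0` is an assumption matching a CUDA 8.0 install) is to try loading the CUDA runtime directly, since the dynamic loader consults the same LD_LIBRARY_PATH that tensorflow does:

```python
import ctypes

# Try to dlopen a shared library the same way tensorflow's loader would.
# The name "libcudart.so.8.0" is assumed for CUDA 8.0.
def can_load(libname):
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

print(can_load("libcudart.so.8.0"))
```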

Otherwise, if the above command doesn't return an error (nothing happens), then your installation is working.

Try running

```
python gpu-tutorial.py
```

If everything is working, you should see a bunch of stuff spit out into the console.

```
srun python gpu-tutorial.py # runs your python script
```

Close out and send this to Slurm using the following

```
sbatch submit-gpu.s
```

Assuming everything works right, after a few seconds you should see your current directory populated with a new file called slurm-[a bunch of numbers].out.
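
The numbers in that filename are the Slurm job ID, which `sbatch` prints when you submit. A small sketch (a hypothetical helper, not part of the original tutorial) that maps the submission message to the output filename:

```python
import re

# sbatch responds with a line like "Submitted batch job 123456";
# Slurm then writes the job's stdout to slurm-123456.out by default.
def output_file(sbatch_line):
    m = re.search(r"Submitted batch job (\d+)", sbatch_line)
    return "slurm-%s.out" % m.group(1) if m else None

print(output_file("Submitted batch job 123456"))  # → slurm-123456.out
```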