
Module 7: Storage in Kubernetes

Objectives

This exercise focuses on enabling you to do the following:

  • Provision NetApp storage for Kubernetes

  • Create a Persistent Volume on ONTAP

  • Create a Persistent Volume Snapshot on ONTAP with Trident

  • Create a PVC and implement it with a Pod

  • Resize an NFS Persistent Volume using Trident

Trident integrates with Kubernetes' standard interface for managing storage providers. In Kubernetes, you request the creation of a Persistent Volume (PV) by submitting a Persistent Volume Claim (PVC), and the details of the PVC identify the desired storage class.

Task 1: Provision NetApp storage for Kubernetes

In order to handle the provisioning of NetApp storage for your Kubernetes cluster, Trident needs to know what NetApp storage product(s) you are using and the configuration details (IP addresses, credentials, etc.) necessary to interact with them. The Trident administrator creates a construct called a "backend" to contain this information. List the backends configured in this environment. Backends are a Trident construct rather than a native part of Kubernetes, so you need to use the tridentctl command to query for the configured backends.

Make sure that you are in the ~/k8s/course/ folder on RHEL3. 

List the available backends:

tridentctl get backends -n trident

Sample output:

+---------------------+----------------+--------------------------------------+--------+---------+
|        NAME         | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+---------------------+----------------+--------------------------------------+--------+---------+
| BackendForSolidFire | solidfire-san  | d9d6bef6-eef9-4ff0-b5c8-c69d048b739e | online |       0 |
| BackendForNAS       | ontap-nas      | e098abb8-8e16-4b4f-a4bc-a6c9557b39b1 | online |       0 |
+---------------------+----------------+--------------------------------------+--------+---------+

The lab includes definitions for two backends: BackendForSolidFire and BackendForNAS. The value in the STORAGE DRIVER column indicates what kind of storage solution each backend offers. The ontap-nas driver provides the integration for delivering NFS storage on ONTAP platforms, and the solidfire-san driver handles block storage hosted on Element platforms.
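
For comparison, a backend that uses the solidfire-san driver requires Element-style connection details rather than SVM credentials. The following is only an illustrative sketch; the endpoint address, credentials, and tenant name are placeholders, not values from this lab:

    {
        "version": 1,
        "storageDriverName": "solidfire-san",
        "backendName": "BackendForSolidFire",
        "Endpoint": "https://admin:password@192.168.0.50/json-rpc/8.0",
        "SVIP": "192.168.0.51:3260",
        "TenantName": "trident"
    }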

Query Trident for greater detail of the current configuration:

tridentctl get backend BackendForNAS -n trident -o json

Display the contents of the JSON configuration file that was used to create the BackendForNAS backend:

cat backend-nas.json

Sample output:

{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "backendName": "BackendForNAS",
    "managementLIF": "192.168.0.135",
    "dataLIF": "192.168.0.132",
    "svm": "svm1",
    "username": "vsadmin",
    "password": "Netapp1!"
}

The file shows that the backend is using the ontap-nas driver, which means the backend handles NFS volume provisioning from an ONTAP controller. The other fields in the backend description provide the SVM name, IP addresses, and login credentials that Trident needs when provisioning storage on ONTAP using this backend.
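
For reference, if you needed to register this backend yourself, tridentctl provides a create backend subcommand that accepts such a file:

    tridentctl create backend -f backend-nas.json -n trident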

List the Storage Classes defined for the cluster:

kubectl get sc

Sample output:

NAME                        PROVISIONER             AGE
sf-gold                     csi.trident.netapp.io   86d
sf-silver                   csi.trident.netapp.io   86d
storage-class-nas           csi.trident.netapp.io   163d
storage-class-ssd           csi.trident.netapp.io   129d
storage-class-storagepool   csi.trident.netapp.io   163d

Display the details of the storage-class-nas storage class:

kubectl describe sc storage-class-nas

Sample output:

Name:                  storage-class-nas
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           csi.trident.netapp.io
Parameters:            backendType=ontap-nas
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Examine the YAML file that was used to create the storage-class-nas storage class:

cat sc-nas.yaml

Sample output:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true

If this lab did not already have the storage-class-nas storage class created, you could have used this YAML file to create it by issuing the kubectl create -f sc-nas.yaml command.
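
By way of illustration, a storage class that targets the Element backend instead would simply name a different driver in its backendType parameter. This sketch is hypothetical; the lab's actual sf-gold class may use different parameters, and the IOPS value here is an assumption:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sf-gold
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "solidfire-san"
      IOPS: "8000"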

Task 2: Create a Persistent Volume on ONTAP

In this exercise you will submit a PVC to request a PV from ONTAP.


Examine the contents of pvcfornas.yaml file:

cat pvcfornas.yaml

Sample output:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim-nas
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: storage-class-nas

Submit the PVC to Kubernetes to request the volume:

kubectl create -f pvcfornas.yaml

To verify that the volume was created, display the details of the PVC request:

kubectl get pvc

List the PVs:

kubectl get pv

Display the details of this PV:

kubectl describe pv <name of the PV volume created by the PVC>

List the backends and use the backendUUID property value from the previous command to determine the name of the associated backend:

tridentctl get backends -n trident

Query Trident for the BackendForNAS backend’s dataLIF property:

tridentctl get backend BackendForNAS -n trident -o json | grep dataLIF

Sample output:

"dataLIF": "192.168.0.132"

Open a PuTTY session to cluster1 and authenticate with the following:

Username: admin

Password: Netapp1!

Verify svm1's data LIF IP addresses:

network interface show -vserver svm1

Sample output:

            Logical     Status     Network            Current       Current Is
Vserver     Interface   Admin/Oper Address/Mask       Node          Port    Home
----------- ----------  ---------- ------------------ ------------- ------- ----
svm1
            svm1_mgmt   up/up      192.168.0.135/24   cluster1-01   e0c     true
            svm1_nfs_01 up/up      192.168.0.132/24   cluster1-01   e0d     true
2 entries were displayed.

Note: One of these LIFs has the same IP address as the dataLIF configured on the BackendForNAS backend that Trident uses.

Verify that the PV’s volume was created by listing svm1’s volumes:

volume show -vserver svm1

Sample output:

Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      registry     aggr1        online     RW   20GB       18.93GB    0%
svm1      svm1_root    aggr1        online     RW   20MB       17.45MB    8%
svm1      trident_pvc_9c9ef0a2_fb85_4960_aef9_830e2d5bb436
                       aggr1        online     RW   1GB        1023MB     0%
svm1      vol_import_manage
                       aggr1        online     RW   2GB        1.90GB     0%
svm1      vol_import_nomanage
                       aggr1        online     RW   2GB        1.90GB     0%
svm1      www          aggr1        online     RW   5GB        4.75GB     0%
6 entries were displayed.

List the aggregates assigned to svm1:

vserver show -vserver svm1 -fields aggr-list

Sample output:

vserver aggr-list
------- -----------
svm1    aggr1,aggr2

Display the namespace junction paths for these volumes:

volume show -vserver svm1 -junction

Sample output:

                                         Junction                  Junction
Vserver   Volume       Language Active   Junction Path             Path Source
--------- ------------ -------- -------- ------------------------- -----------
svm1      registry     C.UTF-8  true     /registry                 RW_volume
svm1      svm1_root    C.UTF-8  true     /                         -
svm1      trident_pvc_9c9ef0a2_fb85_4960_aef9_830e2d5bb436
                       C.UTF-8  true     /trident_pvc_9c9ef0a2_    RW_volume
                                         fb85_4960_aef9_
                                         830e2d5bb436
svm1      vol_import_manage
                       C.UTF-8  true     /vol_import_manage        RW_volume
svm1      vol_import_nomanage
                       C.UTF-8  true     /vol_import_nomanage      RW_volume
svm1      www          C.UTF-8  true     /www                      RW_volume
6 entries were displayed.

In your PuTTY session to rhel3, instantiate a pod that uses the persistent-volume-claim-nas PVC. To perform this task you must use a pod definition file (in YAML format) that creates the desired container and specifies the PVC to attach. Once again, this exercise includes a pre-configured pod definition file for that purpose. Examine the contents of that file:

cat pv-alpine.yaml

Sample output:

kind: Pod
apiVersion: v1
metadata:
  name: pvpod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: persistent-volume-claim-nas
  containers:
    - name: task-pv-container
      image: alpine:3.2
      command:
        - /bin/sh
        - "-c"
        - "sleep 60m"
      volumeMounts:
        - mountPath: "/data"
          name: task-pv-storage

Create the pod:

kubectl create -f pv-alpine.yaml

Display the status of this new pod:

kubectl get pod pvpod

Launch a CLI session inside the pvpod container:

kubectl exec -it pvpod /bin/sh

Sample output:

/ #

Display a list of the mounted volumes in the pod to verify that the PV is attached. In this example you are specifically looking for the volume mounted on /data:

/ # mount | grep /data

Sample output:

192.168.0.132:/trident_pvc_9c9ef0a2_fb85_4960_aef9_830e2d5bb436 on /data …

In this example the volume is NFS mounted onto /data from 192.168.0.132:/trident_pvc_9c9ef0a2_fb85_4960_aef9_830e2d5bb436.

Create a file on the volume and then log out of the pod:

/ # echo "THIS IS A TEST FILE" > /data/testfile.txt

/ # exit

Destroy the pod pvpod:

kubectl delete pod pvpod

Note: The pod will take 30-60 seconds to delete.

List the PVCs:

kubectl get pvc

Even though you deleted the pod, your PVC is still present.

Re-provision the pod:

kubectl create -f pv-alpine.yaml

Make sure the pod is running:

kubectl get pod pvpod

Verify that the new pod connected to your existing PVC:

kubectl exec -it pvpod /bin/sh

Verify the data folder still exists:

/ # mount | grep /data

Sample output:

192.168.0.132:/trident_pvc_9c9ef0a2_fb85_4960_aef9_830e2d5bb436 on /data

Verify that the test file you created earlier is accessible:

/ # cat /data/testfile.txt

/ # exit

As you can see, so long as you do not delete the PVC, you can delete and recreate the pod and it will re-attach to the PVC that still retains your data.

Delete the pod and the PVC:

kubectl delete pod pvpod

kubectl delete pvc persistent-volume-claim-nas

When you deleted the PVC, the associated PV was deleted as well.
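
This behavior is governed by the ReclaimPolicy: Delete setting you saw when you described the storage class in Task 1. If you wanted released volumes to be kept rather than deleted, a storage class can specify a Retain reclaim policy instead; a minimal sketch, with a hypothetical class name not used in this lab:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storage-class-nas-retain
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "ontap-nas"
    reclaimPolicy: Retain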

In your PuTTY session to cluster1, verify that the ONTAP volume that backed the PV is gone as well:

volume show -vserver svm1 -junction

Sample output:

Junction Junction

Vserver Volume Language Active Junction Path Path Source

--------- ------------ -------- -------- ------------------------- -----------

svm1 registry C.UTF-8 true /registry RW_volume

svm1 svm1_root C.UTF-8 true / -

svm1 vol_import_manage

C.UTF-8 true /vol_import_manage RW_volume

svm1 vol_import_nomanage

C.UTF-8 true /vol_import_nomanage RW_volume

svm1 www C.UTF-8 true /www RW_volume

5 entries were displayed.

The trident_pvc volume you saw earlier in this task is now gone.

Task 3: Create a Persistent Volume Snapshot on ONTAP with Trident

A snapshot is a point-in-time copy of a volume that can be used to provision new volumes or to restore volume state. Trident enables Kubernetes users to easily create snapshots and to create new volumes from those snapshots. In this exercise, you will use Trident to create a snapshot of a persistent volume. You will accomplish this by creating the persistent volume that you want to snapshot, then creating a VolumeSnapshotClass, which defines a class of storage to support provisioning a snapshot, and finally issuing a VolumeSnapshot request that uses that VolumeSnapshotClass to create a snapshot of the persistent volume.


Examine the YAML file you will use to create a PVC and pod for this exercise:

cat pvcnas-alpine-pod.yaml

Notice that this is a single YAML file with multiple object definitions (PersistentVolumeClaim and Pod).

Create the persistent-volume-claim-nas PVC and the pvpod pod:

kubectl create -f pvcnas-alpine-pod.yaml

Verify the PVC and the existing pod:

kubectl get pod

kubectl get pvc

Connect to the pod and create a test file on the PV:

kubectl exec -it pvpod /bin/sh

/ #

/ # echo "My test file" > /data/file1.txt

/ # exit

In order to create a snapshot, Kubernetes requires a VolumeSnapshotClass object. Examine the YAML file you will be using to create a VolumeSnapshotClass:

cat snap-sc.yaml

Sample output:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
snapshotter: csi.trident.netapp.io

A VolumeSnapshotClass object is similar to a StorageClass object. Just as StorageClass provides an abstract definition of storage for provisioning a volume, VolumeSnapshotClass provides an abstract definition of storage for provisioning a volume snapshot.
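
Note that this lab uses the v1alpha1 snapshot API. If you later work with clusters where the snapshot API has graduated to snapshot.storage.k8s.io/v1, the equivalent object names the CSI driver with a driver field and requires a deletionPolicy, roughly as follows (a sketch for newer clusters, not used in this lab):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-snapclass
    driver: csi.trident.netapp.io
    deletionPolicy: Delete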

Create the VolumeSnapshotClass:

kubectl create -f snap-sc.yaml

Verify that VolumeSnapshotClass was created:

kubectl get volumesnapshotclass

Inspect the details of the csi-snapclass VolumeSnapshotClass:

kubectl describe volumesnapshotclass csi-snapclass

Examine the YAML file you will use to request the creation of a volume snapshot:

cat snap.yaml

Sample output:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: pvc1-snap
spec:
  snapshotClassName: csi-snapclass
  source:
    name: persistent-volume-claim-nas
    kind: PersistentVolumeClaim
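
For reference, the v1 snapshot API expresses the same request with slightly different field names (again a sketch for newer clusters, not part of this lab):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: pvc1-snap
    spec:
      volumeSnapshotClassName: csi-snapclass
      source:
        persistentVolumeClaimName: persistent-volume-claim-nas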

Create the VolumeSnapshot:

kubectl create -f snap.yaml

Verify that the VolumeSnapshot object was created:

kubectl get volumesnapshots

Inspect the VolumeSnapshot object:

kubectl describe volumesnapshot pvc1-snap

In your PuTTY session to cluster1, list the volumes on svm1:

volume show -vserver svm1

Sample output:

Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      registry     aggr1        online     RW   20GB       18.93GB    0%
svm1      svm1_root    aggr1        online     RW   20MB       17.33MB    8%
svm1      trident_pvc_a1b6154f_2f72_4710_b77c_41c3dc5db034
                       aggr1        online     RW   1GB        1023MB     0%
svm1      vol_import_manage
                       aggr1        online     RW   2GB        1.90GB     0%
svm1      vol_import_nomanage
                       aggr1        online     RW   2GB        1.90GB     0%
svm1      www          aggr1        online     RW   5GB        4.75GB     0%
6 entries were displayed.

Still on cluster1, list the snapshots for the new volume:

volume snapshot show -volume <trident volume name>

Sample output:

                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size     Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm1     trident_pvc_a1b6154f_2f72_4710_b77c_41c3dc5db034
                  snapshot-eed7b43c-ba50-44f2-accf-51785506aaee
                                                            156KB     0%   36%

In your PuTTY session to rhel3, delete the pvpod pod. Remember, the delete operation will take 30-60 seconds:

kubectl delete pod pvpod

Delete the PVC on which you created the snapshot:

kubectl delete pvc persistent-volume-claim-nas

Kubernetes is no longer aware of the PVC.

Verify that no PVs are available:

kubectl get pv

In your PuTTY session to cluster1, list the volumes on svm1 and the snapshots on the Trident volume:

cluster1::> volume show -vserver svm1

cluster1::> volume snapshot show -volume <trident volume name>

In your PuTTY session to rhel3, ask Kubernetes to list the VolumeSnapshots:

kubectl get volumesnapshots

List the details of the pvc1-snap VolumeSnapshot:

kubectl describe volumesnapshot pvc1-snap

Ask Trident to list the snapshots it manages:

tridentctl get snapshot -n trident

Sample output:

+-----------------------------------------------+------------------------------------------+
|                     NAME                      |                  VOLUME                  |
+-----------------------------------------------+------------------------------------------+
| snapshot-eed7b43c-ba50-44f2-accf-51785506aaee | pvc-a1b6154f-2f72-4710-b77c-41c3dc5db034 |
+-----------------------------------------------+------------------------------------------+

Ask Trident to display more details for the known snapshots:

tridentctl get snapshot -n trident -o json

Ask Trident to list the volumes it manages:

tridentctl get volume -n trident

Display more details for these volumes:

tridentctl get volume -n trident -o json

Task 4: Create a Persistent Volume Claim

In this exercise you will create a persistent volume claim from the snapshot you created in the preceding exercise. You therefore must have completed the Create a Persistent Volume Snapshot on ONTAP with Trident exercise before you can successfully proceed with this exercise.


In your PuTTY session to rhel3, examine a YAML file for issuing a PVC to create a PV from a volume snapshot and to provision an alpine pod that uses it:

cat pod-alpine-clone.yaml

Sample output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snap
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storage-class-nas
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: pvc1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
kind: Pod
apiVersion: v1
metadata:
  name: pvpod-clone
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pvc-from-snap
  containers:
    - name: task-pv-container
      image: alpine:3.2
      command:
        - /bin/sh
        - "-c"
        - "sleep 60m"
      volumeMounts:
        - mountPath: "/data"
          name: task-pv-storage

This YAML file defines a request to create a PVC named pvc-from-snap from the VolumeSnapshot object named pvc1-snap.
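
Note that dataSource is not limited to snapshots. With CSI volume cloning, a PVC can name another PVC directly as its source; a minimal sketch (the pvc-clone name is hypothetical and this flow is not part of the lab):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-clone
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: storage-class-nas
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: persistent-volume-claim-nas
        kind: PersistentVolumeClaim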

Issue the PVC request:

kubectl create -f pod-alpine-clone.yaml

Verify that the PVC and pod were created:

kubectl get pod

kubectl get pvc

The new cloned PVC is 1Gi, which is the same as the source PV’s 1Gi size.

Inspect the details of the pvc-from-snap PVC:

kubectl describe pvc pvc-from-snap

In your PuTTY session to cluster1, verify the new volume was created on svm1:

volume show -vserver svm1

Connect to the pod and verify that the /data/file1.txt file that you created on the source PV is present on the clone PV as well:

kubectl exec -it pvpod-clone /bin/sh

/ # cat /data/file1.txt

/ # exit

In your PuTTY session to rhel3, delete the pvpod-clone pod and pvc-from-snap PVC, which will in turn delete the associated PV:

kubectl delete pod pvpod-clone

kubectl delete pvc pvc-from-snap

Delete the VolumeSnapshot you used as the source for creating the clone volume PVC:

kubectl delete volumesnapshot pvc1-snap

Verify that Trident deleted the snapshot and finished deleting the source volume:

tridentctl get snapshot -n trident

tridentctl get volume -n trident

Once you deleted the snapshot, there was no longer anything preventing the pending delete operation on the associated volume from continuing, and so Trident deleted that volume as well.

Task 5: Resize an NFS Persistent Volume Using Trident

Trident supports the ability to expand existing NFS persistent volumes. You can expand NFS persistent volumes even if the PVC is currently attached to a Pod.


Create a new persistent-volume-claim-nas PVC:

kubectl create -f pvcfornas.yaml

List the PVCs that exist in the Kubernetes cluster:

kubectl get pvc

Enter the following command to begin editing the persistent-volume-claim-nas PVC:

kubectl edit pvc persistent-volume-claim-nas

In the editor, expand the PVC's size to 2Gi by using the following procedure:

  1. Use the arrow keys to navigate down to the spec.resources.requests.storage line (under the spec section, in the resources subsection), then navigate to the right to position the cursor on top of the “1” digit.

  2. Enter the key sequence r2, which changes the value under the cursor from 1 to 2. After this operation, the editor's contents should look like the sample output that follows this procedure.

  3. Enter the key sequence :wq, which saves the change and closes the editor.

Sample output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: storage-class-nas
  volumeMode: Filesystem
  volumeName: pvc-b8441452-fbf3-41b5-a011-e472443732d9
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

After the editor closes, verify that the update was successful.
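
As an alternative to the interactive editor, the same expansion can be requested non-interactively with kubectl patch (an equivalent sketch, not part of this lab's procedure):

    kubectl patch pvc persistent-volume-claim-nas -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'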

Verify that the volume has resized:

kubectl get pvc

The persistent-volume-claim-nas PVC now reports a capacity of 2Gi.
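
If you want to read just the reported capacity, a jsonpath query is a convenient alternative check (illustrative):

    kubectl get pvc persistent-volume-claim-nas -o jsonpath='{.status.capacity.storage}'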

In your PuTTY session to cluster1, examine the size of the volume on svm1 by looking for the volume whose name contains the same UID value you identified in the preceding step:

volume show

Observe that the trident_pvc volume’s capacity is now set to 2 GB.

In your PuTTY session to rhel3, clean up from this exercise by deleting the persistent-volume-claim-nas PVC:

kubectl delete pvc persistent-volume-claim-nas

End of Exercise