"Resizing" a PVC and its PV in Microk8s

"Resizing" a PVC and its PV in Microk8s

With the storage addon enabled, microk8s can automatically provision a PV when a PVC is created. The size of the PV is set according to that of the PVC. However, the microk8s-hostpath storage class does not support volume expansion, so PVCs cannot simply be resized after creation. The PVC could be deleted and recreated with a larger size, but this would result in the deletion of the PV and, by extension, of all the data stored in it so far. This article presents a workaround to resize a PVC and its corresponding PV without any loss of data.
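
For reference, the dynamic provisioning used here comes from the MicroK8s storage addon. If it is not enabled yet, it can be turned on as follows (note that in recent MicroK8s releases the addon has been renamed to hostpath-storage):

microk8s enable storage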

First, let's start by creating a simple PVC with a size of 100Gi in a Microk8s cluster using the following manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
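
Note that the manifest above assumes that the example namespace already exists. If it does not, it can be created beforehand:

microk8s.kubectl create namespace example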

After being applied, the manifest creates the desired PVC:

moreillon@home:~/kubernetes$ microk8s.kubectl apply -f example-pvc.yml
persistentvolumeclaim/example-pvc created

The PVC can be listed using kubectl:

moreillon@home:~/kubernetes$ microk8s.kubectl get pvc -n example
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
example-pvc   Bound    pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4   100Gi      RWO            microk8s-hostpath   9s

The same goes for its corresponding PV:

moreillon@home:~/kubernetes$ microk8s.kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                STORAGECLASS
pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4  100Gi     RWO           Delete          Bound   example/example-pvc  microk8s-hostpath     

Now let's assume that we want to increase the storage size to 200Gi. Modifying the manifest and re-applying it would result in the following error:

only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
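
For clarity, the failing step is simply re-applying the edited manifest, assuming example-pvc.yml was changed so that storage now reads 200Gi:

microk8s.kubectl apply -f example-pvc.yml

The apply is rejected with the message above and the PVC keeps its original 100Gi capacity.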

Here, one could delete the PVC and recreate it with a larger size, but this would result in the deletion of the PV and thus of the data that it contains. Consequently, one must first prevent the PV from being deleted along with the PVC. This can be done by changing the reclaim policy of the PV from "Delete" (the Microk8s default) to "Retain", using the microk8s.kubectl patch command:

microk8s.kubectl patch pv pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Now, we can see that the reclaim policy of the PV has been changed to "Retain":

moreillon@home:~/kubernetes$ microk8s.kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                STORAGECLASS
pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4  100Gi     RWO           Retain          Bound   example/example-pvc  microk8s-hostpath     

This will prevent the PV from being deleted when the PVC is removed. We can thus go ahead and delete the PVC.

moreillon@home:~/kubernetes$ microk8s.kubectl delete -f example-pvc.yml
persistentvolumeclaim "example-pvc" deleted

As a result, the PV should be marked as Released:

moreillon@home:~/kubernetes$ microk8s.kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS    CLAIM                STORAGECLASS
pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4  100Gi     RWO           Retain          Released  example/example-pvc  microk8s-hostpath    

The PV can now be resized, which is again achieved with the microk8s.kubectl patch command:

microk8s.kubectl patch pv pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4 -p '{"spec":{"capacity":{"storage":"200Gi"}}}'

The PV should now have a size of 200Gi.

moreillon@home:~/kubernetes$ microk8s.kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS    CLAIM                STORAGECLASS
pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4  200Gi     RWO           Retain          Released  example/example-pvc  microk8s-hostpath    
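
Note that with the microk8s-hostpath provisioner, a PV is simply a directory on the host, so the capacity change above is purely declarative and no actual filesystem resize takes place. If needed, the underlying data can be inspected directly on the node (the path below is the usual default for a snap-based MicroK8s install and may differ on your setup):

ls /var/snap/microk8s/common/default-storage/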

Before we can re-apply our PVC, we also need to make the PV ready to be bound again, i.e. set its status to "Available" by clearing its claimRef. Here again, we use the patch command:

microk8s.kubectl patch pv pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4 -p '{"spec":{"claimRef": null}}'

This results in the PV being in the "Available" state:

moreillon@home:~/kubernetes$ microk8s.kubectl get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM  STORAGECLASS
pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4  200Gi     RWO           Retain          Available         microk8s-hostpath

With that done, one can apply a modified version of the original PVC manifest. The modifications include a storage size increase to match that of the PV, as well as a reference to the PV to bind to. Without such a reference, the PVC creation would trigger the automatic provisioning of a new PV by Kubernetes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: example
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4
  resources:
    requests:
      storage: 200Gi
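
As a minor precaution, the storage class can also be pinned explicitly by adding the following field under spec, so that the claim does not depend on the cluster's default class (optional here, since microk8s-hostpath is the MicroK8s default):

  storageClassName: microk8s-hostpath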

After applying the manifest, we can confirm that the PVC is correctly bound to the original PV, and that the data was therefore preserved through the resize:

moreillon@home:~/kubernetes$ microk8s.kubectl get pvc -n example
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
example-pvc   Bound    pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4   200Gi      RWO            microk8s-hostpath   12s
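
As a final check, the PV itself should report a Bound status again, pointing back to example/example-pvc:

microk8s.kubectl get pv pvc-6ed2529b-f1da-46ba-bc32-d9b4bbad23a4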