How to find the maximum number of unique volumes managed by the CSI driver that can be used on a node
Environment
- Red Hat OpenShift Container Platform 4.x.
Issue
- Some CSI drivers set a maximum number of unique volumes managed by the CSI driver that can be used on a node. When that limit is reached on all nodes of the cluster, an event like the following shows up:
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) exceed max volume count
- As a consequence, no new pod using volumes from that CSI driver can be scheduled.
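To confirm that the cluster is hitting this condition, the scheduler events can be filtered for the message shown above. This is a minimal sketch: on a live cluster the events would come from `oc get events` (see the comment), while the hard-coded sample line here simply reuses the event from this article so the filter can be demonstrated:

```shell
# On a live cluster, list the relevant scheduler events with:
#   oc get events -A --field-selector reason=FailedScheduling
# The sample below reuses the event message from this article to show
# how to filter the output for the volume-count condition.
events='Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) exceed max volume count'
matches=$(printf '%s\n' "$events" | grep -c 'exceed max volume count')
echo "$matches"
```

A non-zero count indicates at least one pod is pending for this reason.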
Resolution
- This limit is reported in the .spec.drivers[].allocatable.count field of the CSINode objects. Example:
$ oc get csinode <node_name> -o yaml
[...]
spec:
  drivers:
  - allocatable:
      count: 20
    name: com.mapr.csi-kdf
    nodeID: mynode.mydomain.org
    topologyKeys: null
  - allocatable:
      count: 20
    name: com.mapr.csi-nfskdf
    nodeID: mynode.mydomain.org
    topologyKeys: null
  - name: openshift-storage.cephfs.csi.ceph.com
    nodeID: mynode.mydomain.org
    topologyKeys: null
  - name: openshift-storage.rbd.csi.ceph.com
    nodeID: mynode.mydomain.org
    topologyKeys: null
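To compare the limits of several drivers without reading the full YAML, the name/count pairs can be extracted with awk. A sketch: the heredoc below reproduces a trimmed version of the example output above, and on a live cluster the same awk program can instead be fed from `oc get csinode -o yaml`:

```shell
# Pair each CSI driver name with its allocatable count.
# Drivers that do not declare a limit are reported as "no-limit".
limits=$(awk '
  $1 == "count:" { count = $2 }                       # remember the limit
  $1 == "name:"  { print $2, count; count = "" }      # name follows allocatable
  $1 == "-" && $2 == "name:" { print $3, "no-limit" } # entry with no limit
' <<'EOF'
spec:
  drivers:
  - allocatable:
      count: 20
    name: com.mapr.csi-kdf
  - allocatable:
      count: 20
    name: com.mapr.csi-nfskdf
  - name: openshift-storage.cephfs.csi.ceph.com
  - name: openshift-storage.rbd.csi.ceph.com
EOF
)
printf '%s\n' "$limits"
```

Drivers that set no limit omit the allocatable field entirely, as the Ceph drivers do in the example above.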
- The way to reconfigure this value depends on the provider of the CSI driver.
- In the case of Red Hat OpenShift Data Foundation, there is no maximum set by default.
- For other drivers or platforms, contact the vendor that supports the driver for any clarification in this regard.
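When troubleshooting, it also helps to know how close each node is to its limit. The number of CSI volumes currently attached to a node can be derived from the VolumeAttachment objects, whose .spec.nodeName field records the target node. A sketch; the node names below are hypothetical sample data standing in for the live command output:

```shell
# On a live cluster, the per-volume node list comes from:
#   oc get volumeattachments -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}'
# Sample data (hypothetical node names) stands in for that output here.
nodes='worker-0
worker-0
worker-1'
# Count attachments per node, busiest node first.
per_node=$(printf '%s\n' "$nodes" | sort | uniq -c | sort -rn)
printf '%s\n' "$per_node"
```

Comparing these per-node counts with the driver's allocatable count shows how much headroom each node has left.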
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.