Description
The README gives an example for pathPattern like:
pathPattern: "{{ .PVC.Namespace }}/{{ .PVC.Name }}"
But this won't clean up correctly and leaves the {{ .PVC.Namespace }} directories behind. Even if the teardown script is enhanced to delete the namespace directory once it is empty, this is not possible, because that directory is the mount point. The helperPod doesn't mount the host filesystem based on the information from the nodePathMap in config.json, but by splitting the full path to the PVC:
https://github.com/rancher/local-path-provisioner/blob/master/provisioner.go#L597C1-L598
o.Path = filepath.Clean(o.Path)
parentDir, volumeDir := filepath.Split(o.Path)
o.Path contains the full path to the PVC volume, and parentDir is what gets mounted into the helperPod for termination.
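For the example below, the split roughly corresponds to this (a sketch; the PV name "pvc-xxxx" is a placeholder, not the real value):
# Rough shell equivalent of the split above ("pvc-xxxx" is a placeholder)
path="/data/duplicity-backup/duplicity-k8s-local-storage-k8s-node01e-manual-548-75zw9-tmp/pvc-xxxx"
dirname "$path"   # parentDir: gets mounted into the helperPod as hostPath
basename "$path"  # volumeDir: what the teardown script removes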
Below is the relevant part of a helperPod spec with pathPattern "{{ .PVC.Namespace }}/{{ .PVC.Name }}/{{.PVName}}" and basePath=/data:
volumeMounts:
- mountPath: /script
  name: script
- mountPath: /data/duplicity-backup/duplicity-k8s-local-storage-k8s-node01e-manual-548-75zw9-tmp/
  name: data
[...]
volumes:
- hostPath:
    path: /data/duplicity-backup/duplicity-k8s-local-storage-k8s-node01e-manual-548-75zw9-tmp/
    type: DirectoryOrCreate
  name: data
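So after the teardown script has run, the node is left in a state roughly like this (a sketch based on the example above):
# State on the node after the PV is deleted (sketch)
ls -A /data/duplicity-backup/duplicity-k8s-local-storage-k8s-node01e-manual-548-75zw9-tmp/
# -> empty: the {{.PVName}} leaf was deleted, but the directory itself can't be
#    removed because it is the helperPod's mount point
ls -A /data/duplicity-backup/
# -> still contains the now empty PVC directory; it is never cleaned up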
The fix would be to mount basePath into the container; the rest of the script logic should stay the same and work in the same way.
Combined with extending the teardown script like this:
#!/bin/sh
set -eu
# Remove the initial directory (regardless of contents)
echo "Deleting volume: $VOL_DIR"
rm -rf "$VOL_DIR"
# Move to parent and start cleaning up empty directories
dir=$(dirname "$VOL_DIR")
while [ "$dir" != "/" ]; do
if [ -d "$dir" ] && [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
sleep 1 # wait a moment, to let any pending operations finish
echo "Removing empty directory: $dir"
rmdir "$dir" || break
dir=$(dirname "$dir")
else
break
fi
done
This would clean up correctly and wouldn't leave empty directories behind.
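A quick local sanity check of that walk-up logic (a sketch; all paths are invented and the loop stops at the throwaway base directory instead of at "/"):
# Simulate the walk-up on a throwaway tree (all paths invented)
base=$(mktemp -d)
mkdir -p "$base/my-ns/my-pvc/pv-1" "$base/other-ns/keep"
VOL_DIR="$base/my-ns/my-pvc/pv-1"
rm -rf "$VOL_DIR"
dir=$(dirname "$VOL_DIR")
while [ "$dir" != "$base" ] && [ -z "$(ls -A "$dir" 2>/dev/null)" ]; do
  rmdir "$dir"
  dir=$(dirname "$dir")
done
ls -A "$base"   # -> only "other-ns" remains; the empty my-ns/my-pvc chain is gone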
Yes, changing the mount point exposes more of the host filesystem to the helperPods, but in return the filesystem doesn't fill up with empty directories.
If there is more than one valid baseDir, either mount all of them into the helperPod or choose the right one for the termination pod.
The existing helperPod logic for setup can be kept, since type: DirectoryOrCreate creates all required directories if needed; alternatively, if baseDir is mounted, the setup script should create the directory.
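For the latter case the setup script would only need something like this (a sketch; it assumes VOL_DIR is the full volume path under the mounted baseDir):
#!/bin/sh
set -eu
# Create the volume directory including any pathPattern parents under the mounted baseDir
# (note: -m only applies to the leaf directory, parents get the default umask)
mkdir -m 0777 -p "$VOL_DIR"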
A second option would be to run a cleanup job, e.g. once a day, which removes all empty directories under baseDir that don't contain a PVC. But this carries the risk that directories get deleted which are not maintained by the provisioner.
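Such a job could boil down to something like this (a sketch; BASE_DIR is a placeholder for the configured baseDir, and it removes every empty directory under it, which is exactly the risk mentioned above):
#!/bin/sh
set -eu
BASE_DIR="/data"  # placeholder for the configured baseDir
# Remove all empty directories below BASE_DIR (deepest first), but never BASE_DIR itself
find "$BASE_DIR" -mindepth 1 -depth -type d -empty -exec rmdir {} \;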