Simora Posted March 23, 2020

I have started to use the Sia docker container available in the Community Applications store (thank you JCloud for providing the template), and I'm finding that storage management has become a frustration. Sia creates its host siahostdata.dat files as sparse allocated files. Since Unraid uses actual disk usage as its measurement for placing files in shares (fill-up, most-free, etc.), I end up creating files that can grow beyond the available capacity of the single disk they have all been created on.

Is there a way to change the default behavior to use apparent size instead of actual disk usage when determining where to place newly created files?

The problem is illustrated below. Since the disk shows only 63G used, new files will continue to be created on it, which will clearly become an issue once the sparse files fill up.

root@files01:/mnt/disk13/sia-host/0001# df -h /mnt/disk13
Filesystem      Size  Used Avail Use% Mounted on
/dev/md13       2.8T   65G  2.7T   3% /mnt/disk13

root@files01:/mnt/disk13/sia-host/0001# du -sh /mnt/disk13
63G     /mnt/disk13

root@files01:/mnt/disk13/sia-host/0001# du -sh --apparent-size /mnt/disk13
2.8T    /mnt/disk13

root@files01:/mnt# du -sh --apparent-size disk*
2.8T    disk1
1.3T    disk10
1.4T    disk11
220G    disk12
2.8T    disk13
22G     disk14
1.9T    disk2
2.8T    disk3
3.1T    disk4
3.2T    disk5
2.7T    disk6
2.5T    disk7
1.3T    disk8
1.6T    disk9
334G    disks

root@files01:/mnt# du -sh disk*
2.8T    disk1
890G    disk10
1.4T    disk11
129G    disk12
63G     disk13
22G     disk14
1.9T    disk2
2.8T    disk3
2.8T    disk4
2.8T    disk5
2.7T    disk6
2.3T    disk7
1.3T    disk8
1.6T    disk9
153G    disks

root@files01:/mnt# df -h /mnt/disk*
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1        2.8T  2.8T   17G 100% /mnt/disk1
/dev/md10       1.9T  892G  971G  48% /mnt/disk10
/dev/md11       2.8T  1.4T  1.5T  49% /mnt/disk11
/dev/md12       1.9T  131G  1.7T   8% /mnt/disk12
/dev/md13       2.8T   65G  2.7T   3% /mnt/disk13
/dev/md14       1.9T   24G  1.8T   2% /mnt/disk14
/dev/md2        1.9T  1.9T   12G 100% /mnt/disk2
/dev/md3        2.8T  2.8T  9.2G 100% /mnt/disk3
/dev/md4        2.8T  2.8T   18G 100% /mnt/disk4
/dev/md5        2.8T  2.8T  813M 100% /mnt/disk5
/dev/md6        2.8T  2.7T   85G  97% /mnt/disk6
/dev/md7        2.8T  2.3T  480G  83% /mnt/disk7
/dev/md8        1.9T  1.3T  591G  69% /mnt/disk8
/dev/md9        1.9T  1.6T  273G  86% /mnt/disk9
tmpfs           1.0M     0  1.0M   0% /mnt/disks

Edited March 23, 2020 by Simora
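The mismatch is easy to reproduce outside of Sia: a sparse file's declared (apparent) size and its actually allocated blocks are independent, which is why allocation methods that look at used blocks undercount these host files. A minimal sketch, using a hypothetical /tmp path rather than the actual Sia data directory:

```shell
# Declare a 1 GiB file without allocating any data blocks (sparse).
truncate -s 1G /tmp/sparse-demo.dat

# Blocks actually allocated on disk -- the number Unraid's
# fill-up / most-free placement decisions are based on.
du -h /tmp/sparse-demo.dat                  # reports ~0

# Declared size -- what the file can grow into as it fills with data.
du -h --apparent-size /tmp/sparse-demo.dat  # reports 1.0G
```

As data is written into the file, allocated blocks catch up to the apparent size, so a disk that looks nearly empty today can be fully committed already.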