I use Arq to back up to a Minio docker container running on Unraid. It was working well with Unraid 6.7.2, but larger backups are failing with Unraid 6.8.0. The versions of Arq and Minio in use haven't changed recently.
Arq is failing due to a GET request timeout:
2019/12/19 00:04:17:767 DETAIL [thread 307] retrying GET /foo/?prefix=713EC506-32A1-4454-A885-19334B4FB242/objects/95&delimiter=/&max-keys=500: Error Domain=NSURLErrorDomain Code=-1001 "The request timed out."
I reproduced the same request using aws-cli; a response time of 3m10s seems excessive for a listing of ~1,000 files:
[REQUEST s3.ListObjectsV1] 05:58:00.385 GET /foo?delimiter=%2F&prefix=713EC506-32A1-4454-A885-19334B4FB242%2Fobjects%2F91&encoding-type=url [RESPONSE] [06:01:10.817] [ Duration 3m10.432524s Dn 93 B Up 388 KiB ] 200 OK Server: MinIO/RELEASE.2019-10-12T01-39-57Z
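For reference, the aws-cli invocation that reproduces this slow listing looks roughly like the following sketch. The endpoint URL is a placeholder for my local Minio instance, and credentials are assumed to be configured in an aws-cli profile:

```shell
# Hypothetical reproduction of the slow ListObjects (v1) call above.
# The endpoint hostname/port are placeholders for the Minio container.
time aws --endpoint-url http://tower:9000 s3api list-objects \
  --bucket foo \
  --prefix '713EC506-32A1-4454-A885-19334B4FB242/objects/95' \
  --delimiter '/' \
  --max-keys 500
```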
The Minio container has a user share mapping for backend storage. If I perform essentially the same file listing from an Unraid terminal, it's also pretty slow:
time ls /mnt/user/minio/foo/713EC506-32A1-4454-A885-19334B4FB242/objects/91* | wc -l
1140

real    0m24.676s
user    0m0.242s
sys     0m0.310s
If I do the same thing using the disk mount point instead, it's several orders of magnitude faster:
time ls /mnt/disk3/minio/foo/713EC506-32A1-4454-A885-19334B4FB242/objects/91* | wc -l
1140

real    0m0.090s
user    0m0.069s
sys     0m0.026s
There are a lot of files in these folders, but I don't think the count is unreasonable:
ls /mnt/disk3/minio/foo/713EC506-32A1-4454-A885-19334B4FB242/objects/ | wc -l
278844
Changing the Minio container's path mapping to use the disk share instead of the user share works around the issue, but I'll eventually need the user share so storage can span multiple disks.
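In Unraid the workaround is applied through the container template's path mapping UI, but expressed as a plain docker run it amounts to something like the sketch below (ports and paths are from my setup; the `minio/minio server /data` invocation is the image's standard entrypoint form):

```shell
# Before (slow): backend storage mapped via the FUSE-backed user share
docker run -d -p 9000:9000 -v /mnt/user/minio:/data minio/minio server /data

# After (fast): the same storage mapped via the direct disk mount point
docker run -d -p 9000:9000 -v /mnt/disk3/minio:/data minio/minio server /data
```

Only the host-side path of the volume mapping changes; the container and its data are otherwise untouched.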
I'd prefer not to downgrade to 6.7.2 to gather comparable metrics there, but I can if that would be helpful.