minio cluster and disk shares


I am coming back to Unraid, and part of my build is that I am going to use MinIO as an S3 backup target for everything I really care about. There are probably going to be a lot of Duplicity backups for pictures and other data.

The reason for this is that it has its own controls and management around data loss and bitrot, which I want to have for all my family pictures, code and things.


At the moment I am just after whether this seems like a logical way to set this up in relation to user shares, as this is still a little new to me!


To create the Docker container we would do the following... basically we need to give it 8 mount points, and I want one mount point on every disk in the system so that load/integrity is shared.


docker run -p 9000:9000 --name minio \
  -v /mnt/data1:/data1 \
  -v /mnt/data2:/data2 \
  -v /mnt/data3:/data3 \
  -v /mnt/data4:/data4 \
  -v /mnt/data5:/data5 \
  -v /mnt/data6:/data6 \
  -v /mnt/data7:/data7 \
  -v /mnt/data8:/data8 \
  minio/minio server /data{1...8}


This would offer up 8 disks to MinIO so it can do all the erasure coding and silent-corruption management for my critical data/backups (yes, into the same array; I am getting to why this isn't an issue for me).


The goal would be that each mount/virtual disk being presented to the container exists only on one drive in my Unraid server, so I was looking to create a user share setup like:


zDisk_01    allowed_disk: disk1    no-cache
zDisk_02    allowed_disk: disk2    no-cache
zDisk_03    allowed_disk: disk3    no-cache



Then, as part of my Docker deploy, offer up a "minio" folder inside each of the zDisk user shares...


"In 12 drive example above, with MinIO server running in the default configuration, you can lose any of the six drives and still reconstruct the data reliably from the remaining drives."


This also works for me because if the array dies, I only need to copy those folders off as many disks as possible, run another Docker container on a desktop, and I can extract all my backups...
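A minimal sketch of that recovery step, assuming the salvaged per-disk folders were copied to ~/recovered/data1 through ~/recovered/data8 on the desktop (all paths here are assumptions, not part of the original plan):

```shell
# Hypothetical recovery sketch: point a fresh MinIO container at whatever
# per-disk folders were salvaged (host paths are assumptions).
VOLS=""
for i in 1 2 3 4 5 6 7 8; do
  VOLS="$VOLS -v $HOME/recovered/data${i}:/data${i}"
done
# /data{1...8} is MinIO's own ellipsis syntax, passed through literally.
echo "docker run -p 9000:9000 --name minio-recover$VOLS minio/minio server /data{1...8}"
```

MinIO can then reconstruct objects as long as enough of the eight folders survived to satisfy its erasure-coding quorum.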

Does this make sense? Is there an easier way for Unraid to allow me to manage the disks?

Should these be going via the shares, or should/how could I do this another way to avoid that layer? (I'm not worried about performance at all)

I have no experience with MinIO, but here are some comments that might help:

  • you can directly refer to individual drives by using paths that start with /mnt/diskX (where X is the drive number)
  • referencing a drive directly tends to be more efficient, performance-wise, than going through a user share
  • any top-level folder you create on any drive will automatically be treated as a User Share (if you have that feature enabled) with default settings, which you can then (optionally) configure as you want
  • it is not clear to me whether you actually want User Shares at all for your use case? You can control this from Settings -> Global Share Settings.

oh. snap!


yeah, going in via the user share layer really is a waste of time and complexity. I will just hit each disk up as /mnt/disk1 etc., with a top-level folder for each of the mappings. This means it's just a folder sitting on the disks and Unraid doesn't need to care...
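A rough sketch of that revised command, assuming the disks are mounted at /mnt/diskN and the top-level folder is called "minio" (both assumptions):

```shell
# Hypothetical sketch: build the docker run command from eight per-disk
# "minio" folders mapped straight from /mnt/diskN (paths are assumptions).
VOLS=""
for i in 1 2 3 4 5 6 7 8; do
  VOLS="$VOLS -v /mnt/disk${i}/minio:/data${i}"
done
# Printed rather than executed, so the command can be reviewed first.
echo "docker run -p 9000:9000 --name minio$VOLS minio/minio server /data{1...8}"
```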


I am probably going to remove it from Docker too and just run it as a local service on the OS, as it then does not rely on the array being up for Docker to start... As it's just a binary, I can pull it back even further.


Thanks for waking me up from my blonde moment.



11 hours ago, hoff said:


I am probably going to remove it from Docker too and just run it as a local service on the OS, as it then does not rely on the array being up for Docker to start... As it's just a binary, I can pull it back even further.


I would not recommend this. All applications on unRAID (that don't directly require access to the host OS) should be run in Docker or VMs. That isolates them from the host OS to prevent potentially unwanted/destabilizing interactions with other applications. Additionally, and more importantly, it allows them to run in their preferred OS environment rather than requiring you to figure out how to get the application running on unRAID's custom, stripped-down, RAM-based Slackware variant.


Also, even outside of Docker you would still need to wait for the array to be up before running an application that accesses array disks (even if you aren't using the /mnt/user path).
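If you did run the binary outside Docker, that wait could be sketched like this (the mountpoint tool from util-linux, the path, and the retry count are all assumptions):

```shell
# Hypothetical sketch: block until a disk path is a real mount point before
# starting a service outside Docker (path and retry count are assumptions).
wait_for_mount() {
  path="$1"
  tries="${2:-60}"
  while [ "$tries" -gt 0 ]; do
    mountpoint -q "$path" && return 0
    tries=$((tries - 1))
    sleep 5
  done
  return 1
}
# e.g. wait_for_mount /mnt/disk1 && minio server /mnt/disk{1...8}/minio
```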

