[Support] cheesemarathons repo



16 minutes ago, cheesemarathon said:

In theory yes! However you might find that it doesn't last through container reboots. Give it a go installing it and shout if you have any issues. We will try to help.

 

Thanks - is it too much to include it in the base container?
I mean, it would add so much more functionality.

 

Not sure how I would do it:
https://www.mono-project.com/download/stable/

Edited by NLS
On 1/7/2022 at 4:30 PM, trurl said:

Those are not Unraid host paths. Are you running this container on some other OS?

Well, I shortened the paths, just as an example.

 

In reality it would be:

 

/mnt/user/appdata/minio/.minio.sys:/data/.minio.sys

/mnt/user/minio:/data

 

appdata is on an SSD pool only and /mnt/user/minio is on my HDD array.

On 1/10/2022 at 5:12 PM, afl said:

Well, I shortened the paths, just as an example. [...]

Any ideas?

On 12/4/2021 at 1:42 PM, cheesemarathon said:

Image was last updated 3 months ago. The last release on GitHub was 8 months ago. The Docker image appears to be maintained by the same guy who maintains the GitHub repo. So if you wish for the Docker image to be updated, I would create an issue on GitHub.

Thanks for the reply. I reached out, and they've been maintaining the package in nightly rolling releases, as opposed to committing to the master image on GitHub, which then causes the Docker image to not get these nightly updates. Or so I'm told 😅

  • 1 month later...

Hi All,

 

With the minio docker does anyone know how to activate versioning?

 

https://docs.min.io/docs/minio-bucket-versioning-guide.html

 

I think I can just configure it with JBOD, so in theory I'm guessing I need to provide the Docker container 4x data mounts.

 

https://docs.min.io/minio/baremetal/installation/deploy-minio-distributed.html?ref=con

 

Just not sure where I would do this and what config file I would modify to tell minio which disks to use.

 

I'm trying to get versioning working because Splunk requires that functionality for S3 SmartStore. I got directed to it based on this error in Splunk:

 

03-17-2022 13:13:16.487 +1000 WARN  S3Client [1018538 FilesystemOpExecutorWorker-0] - command=list-version transactionId=0x7f164b275a00 rTxnId=0x7f163edfce60 status=completed success=N uri=http://192.168.64.64:9768/splunk statusCode=501 statusDescription="Not Implemented" payload="<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><BucketName>splunk</BucketName><Resource>/splunk</Resource><RequestId>16DD0C833CC165BD</RequestId><HostId>b41f06a4-5098-478c-8f12-53981d1b3743</HostId></Error>"

 

versioning.png

13 hours ago, phoenixdiigital said:

With the minio docker does anyone know how to activate versioning? [...]


 

Don't hold me to this as I have not tried it, but I think you can create a variable called MINIO_VOLUMES in the container settings and set it to https://minio.example.net:9000/data/disk{1...4}/minio

 

You will also need to change the data variable to just /mnt so it can access all the disks.

 

However, be careful writing to specific disks, as I'm not sure how Unraid will respond to this.

 

If I had a test Unraid setup I'd give it a try myself, but I only have a production server and I don't want to screw with it 😂

 

Good luck
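
For reference, MinIO's {1...4} ellipsis notation uses three dots and is expanded by MinIO itself, not by the shell. It simply stands for a numbered run of paths, as this illustration shows:

```shell
# Print the four paths MinIO would derive from "/data/disk{1...4}/minio".
# Illustration only: the real expansion happens inside the MinIO server binary.
for i in 1 2 3 4; do
  echo "/data/disk$i/minio"
done
```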

Edited by cheesemarathon
  • 2 weeks later...

Hi all, I have a small issue with Cloud Commander:

I want to copy files from a SMB share to my cached Array. This is what I did:

  • Set up the shares via the Unassigned Devices plugin and mounted them.
  • Started a copy of 3 large (~5TB each) folders to my Array.

The process aborted with an error stating that the target space was full. I checked and sure enough... the cache pool (240MB) was full.

 

So it seems that a Cloud Commander copy process is not handled via the usual array balancing process.

Can I even do this kind of transfer with Cloud Commander then? If so.. how? :)

 

Thanks and best,

Boergen

13 minutes ago, Boergen said:

Hi all, I have a small issue with Cloud Commander: I want to copy files from a SMB share to my cached Array. [...] Can I even do this kind of transfer with Cloud Commander then? If so.. how? :)


I don't personally use cache arrays, so I'm not super savvy with the way they work.

 

So from my understanding you want to move files from an SMB share on a different host to an unRAID share that is cache enabled?

 

If I'm correct, then you need to ensure you have the share mapped in Cloud Commander and not the specific cache disk. I also believe that the cache settings for the share need to be correct.
 

I super encourage you to watch this video by Space Invader One to fully understand all the options.

 

 

24 minutes ago, cheesemarathon said:

So from my understanding you want to move files from an SMB share on a different host to an unRAID share that is cache enabled?

 

If I'm correct, then you need to ensure you have the share mapped in Cloud Commander and not the specific cache disk. I also believe that the cache settings for the share need to be correct.

 

Hi cheesemarathon, and thanks for your quick response.

 

I think the cache setup itself is not the issue. I also think that I get how it works (not meant offensively). :)

 

I copied from an external Samba share to /mnt/usr/media (media is my media share, set to "high water" with 2 disks). So I did not copy directly to a specific disk. The cache was used (as intended), but when the cache filled up to 100%, the copy process stopped with an error. I would have expected the copy process to just continue writing directly to the actual disks (which would be the normal behavior for a copy to a cached array).

 

The question is... when is it determined where the files are actually written?

  1. At the start of the copying process, for all files.
  2. Whenever a new file is copied (that would be the expected behavior).

If 1) is the case, then that probably explains my issue: The first files would fit onto the cache, but the rest won't, so an error occurs.

 

Say you start a copy process for 5TB, each file is 100GB and the target is large enough, but with a 200GB cache. When you normally copy something to the array, the first 2 files are copied to the cache and the rest is written directly to an array disk.
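
Putting that scenario into numbers (all figures taken from the example above):

```shell
# 200 GB cache, 100 GB per file, 5 TB (5000 GB) total to copy.
cache_gb=200; file_gb=100; total_gb=5000
echo "files that fit on the cache: $((cache_gb / file_gb))"              # 2
echo "files left for the array:   $(( (total_gb - cache_gb) / file_gb ))" # 48
```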

 

With Cloud Commander, it seems like it wants to copy everything to wherever the first file would fit, circumventing (skipping?) the balancing check that usually takes place for each file when you copy something to an Unraid share via SMB (e.g. from a Windows PC).

 

It may of course be possible that copying to /mnt/usr/media is not the correct way to do it if I want to use the balancing feature.

1 hour ago, Boergen said:

 

Hi cheesemarathon, and thanks for your quick response. [...] It may of course be possible that copying to /mnt/usr/media is not the correct way to do it if I want to use the balancing feature.

 

No offence taken! Just wanted to make sure you had all the info. One thing you can try is to set the minimum free space for the share larger than the largest file you'll copy. That way, the files go to the cache until it's full and then to the array, in theory.

 

Failing that, this might be a question for some of the Docker/Unraid experts on the forum 😂

4 hours ago, cheesemarathon said:

Just wanted to make sure you had all the info. One thing you can try is to set the minimum free space for the share larger than the largest file you'll copy. That way, the files go to the cache until it's full and then to the array, in theory.

Yeah. After some painful consideration, I'm inclined to go with this and paint a big "RTFM" on the server case.

 

Copying with disabled cache works as intended: The files get split according to the "high water" setting. I'm pretty sure some media files exceeded the min free space setting for the cache and that's where it went sideways.

 

Sorry for dragging you into this and thanks for the support. ;)

1 hour ago, Boergen said:

Yeah. After some painful consideration, I'm inclined to go with this and paint a big "RTFM" on the server case. [...]


Not a problem at all. We are all here to help each other regardless of the issue! Glad you got it sorted 😀

On 3/18/2022 at 2:45 AM, cheesemarathon said:


 

Don't hold me to this as I have not tried it but I think you can create a variable called MINIO_VOLUMES in the container settings... [...]

 

Thanks for the tip. I didn't try the /mnt config, but I did create disk1, disk2, disk3 & disk4 directories under the /data mount.

 

The end result was just 4 new buckets: disk1, disk2, disk3 & disk4.

 

So I tried this (see screenshot) and it didn't work either. Happy to do more reading/testing if you can point me in the right direction.

 

 

 

Edit: I think it's close based on this - https://docs.min.io/minio/baremetal/installation/deploy-minio-distributed.html?ref=con

 

 

Screenshot from 2022-03-27 15-08-59.png

Edited by phoenixdiigital
added info
  • 1 month later...

Good Afternoon,

 

Has anyone run a Minio cluster on their Unraid setup? If so, mind sharing the configuration - screenshots, extra parameters used, how it's configured in Unraid? This would not be used in production, rather just messing around in a lab env. Thanks in advance.

 

-MW

9 hours ago, mfwade said:

Has anyone run a Minio cluster on their Unraid setup? [...]

No one has managed it successfully yet. One guy did try, so read back through some previous posts. Apart from that, it will be trial and error to see if you can get it to work. Most of the docs are here: https://docs.min.io/docs/distributed-minio-quickstart-guide.html

Please do let us know how you get on.

On 5/12/2022 at 5:54 PM, cheesemarathon said:

One guy did try so read back through some previous posts. Apart from that it will be trial and error to see if you can get it to work.

 

That was me. Yeah I got it working with just one disk/share but wasn't able to make a cluster. 

I was using it to test and get SmartStore working for Splunk. It was working fine for ages, but when I looked at the internal Splunk logs, it was whinging. Turns out I needed versioning, which is only supported if you have a cluster or a 4x disk mount.

Never managed to get it to work, sadly, so I turned off SmartStore for Splunk to reduce the internal log noise. You can see what I tried above on this page.

Let me know if you have any luck @mfwade I'd really like to get it working properly.

On 5/13/2022 at 4:57 AM, phoenixdiigital said:

 

That was me. Yeah I got it working with just one disk/share but wasn't able to make a cluster. [...]

 

 

I was able to get it (single instance with multiple drives) working with 4 drives - in my case, I just created 4 shares. This gives me the ability to create the bucket and tick off versioning, retention, etc.

Post Arguments: server /data{1...4} --console-address ":9001"

 

I deleted the original /data variable and created 4 new variables labeled data1 through data4, mounted to /data1 through /data4 respectively.

 

So /data1 mounts to /mnt/user/test1/, rinse, lather, repeat.
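
The host side of that layout can be sketched like this (test1..test4 are the example share names from this post; a real server would use /mnt/user/... paths):

```shell
# Create four example directories standing in for the /mnt/user/test1..test4
# shares that back the container's /data1../data4 mounts.
base=/tmp/minio-shares   # demo stand-in for /mnt/user
for i in 1 2 3 4; do
  mkdir -p "$base/test$i"
done
ls "$base"
```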

Now, I just need to figure out how to add an additional server.

 

Screen Shot 2022-05-17 at 6.59.02 PM.png

 

 

Screen Shot 2022-05-17 at 6.59.57 PM.png

Edited by mfwade
9 hours ago, mfwade said:

 

 

I was able to get it (single instance with multiple drives) working with 4 drives - in my case I just created 4 shares... [...]

 

Interesting. I think I got most of it right but still can't get it to work. Here is the result of me adding the container.

 

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='Minio' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e 'MINIO_ROOT_USER'='admin' -e 'MINIO_ROOT_PASSWORD'='mypassword' -e 'data1'='/data1' -e 'data2'='/data2' -e 'data3'='/data3' -e 'data4'='/data4' -e 'MINIO_VOLUMES'='/data{1..4}' -p '9768:9000/tcp' -p '9769:9001/tcp' -v '/mnt/user/appdata/minio':'/root/.minio':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk2/':'/data2':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk3/':'/data3':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk4/':'/data4':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk1/':'/data1':'rw' 'minio/minio' server /data --console-address ":9001"

 

I deleted the data variable and made individual variables data1, data2, data3 and data4. Then created volume mounts for each. 

 

Still no joy. I think I'm missing one important step with your "Post Arguments". Not sure how to add that for an Unraid Docker container.

 

 

EDIT: Never mind, I found it. I had to tick the "advanced" toggle in the Docker web GUI in Unraid.

 

GOT IT WORKING!!!! Thanks for the tips @mfwade

 

Full config here if anyone is interested.

 

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='Minio' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e 'MINIO_ROOT_USER'='admin' -e 'MINIO_ROOT_PASSWORD'='mypassword' -e 'data1'='/data1' -e 'data2'='/data2' -e 'data3'='/data3' -e 'data4'='/data4' -p '9768:9000/tcp' -p '9769:9001/tcp' -v '/mnt/user/appdata/minio':'/root/.minio':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk2/':'/data2':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk3/':'/data3':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk4/':'/data4':'rw' -v '/mnt/user/work/minio-s3-emulation/volumes/disk1/':'/data1':'rw' 'minio/minio' server /data{1..4} --console-address ":9001"

 

 

 

Edited by phoenixdiigital
7 hours ago, phoenixdiigital said:

 

Interesting. I think I got most of it right but still can't get it to work. [...] GOT IT WORKING!!!! Thanks for the tips @mfwade [...]
 

Check this out!!!

 

I guess I need to write this up, will do later today / tomorrow.

 

Screen Shot 2022-05-18 at 12.25.50 PM.png

Edited by mfwade
17 hours ago, mfwade said:

I guess I need to write this up, will do later today / tomorrow.

 

That would be cool if you could give more details at some stage thanks.

 

I was watching my single Minio docker with multiple mounts last night with Splunk, and there were constant warnings in Splunk like this:

 

05-18-2022 22:32:24.593 +1000 WARN  S3Client [2494457 cachemanagerUploadExecutorWorker-2] - command=put transactionId=0x7ff4e0669000 rTxnId=0x7ff4ec475600 status=completed success=N uri=http://192.168.64.64:9768/splunk/_internal/db/c2/26/837~4C740B3B-CD91-4337-BE47-6EE5143CCE76/guidSplunk-4C740B3B-CD91-4337-BE47-6EE5143CCE76/1649801398-1649690860-5951626816833194436.tsidx statusCode=503 statusDescription="Service Unavailable" payload="<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SlowDown</Code><Message>Resource requested is unreadable, please reduce your request rate</Message><Key>_internal/db/c2/26/837~4C740B3B-CD91-4337-BE47-6EE5143CCE76/guidSplunk-4C740B3B-CD91-4337-BE47-6EE5143CCE76/1649801398-1649690860-5951626816833194436.tsidx</Key><BucketName>splunk</BucketName><Resource>/splunk/_internal/db/c2/26/837~4C740B3B-CD91-4337-BE47-6EE5143CCE76/guidSplunk-4C740B3B-CD91-4337-BE47-6EE5143CCE76/1649801398-1649690860-5951626816833194436.tsidx</Resource><RequestId>16F0330008CD9A87</RequestId><HostId>3acb2a2c-41e2-45a6-952e-f32626479b3d</HostId></Error>"

 

It performed pretty badly across the board, with other warnings too. Probably because all 4x "disk mounts" on the single Minio instance were on the Unraid array, so there was likely additional overhead as data was being mirrored by Minio and parity-protected by Unraid. I ended up turning it off again.

 

Definitely keen to hear of your full setup. Maybe I'll try again.

 

I'm really just doing it so I can test out Splunk configs/behaviour with S3 SmartStore for my day job. Customers use real S3 stores, so they won't experience the performance issues I've seen.


This setup was used for nothing more than testing / lab use. It is not meant for production use.

 

This is not a tutorial on how to set up Unraid, VLANs, routing, etc. It is expected that the individual setting this up has a basic understanding of how 'things' work.

 

For this exercise I set up a 3 node distributed Minio cluster with 2 drives in each node. Your IP / DNS / Share / Drive / AppData / etc. locations may vary, set accordingly.

 

You will need 3 IPs for this cluster:

192.168.10.241
192.168.10.242
192.168.10.243

 

You will need to set up DNS for this to work correctly:

m1 - 192.168.10.241
m2 - 192.168.10.242
m3 - 192.168.10.243

 

Verify DNS is responding correctly:

ping m1
ping m2
ping m3

They should all respond with the correct IP information
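
If you don't run local DNS, static host entries achieve the same result. A sketch using the IPs above (the file path here is a demo stand-in for /etc/hosts or your DNS server's zone):

```shell
# Generate the three m1..m3 host entries used by the cluster.
hosts_file=/tmp/hosts-demo   # in practice: /etc/hosts on each node, or real DNS
: > "$hosts_file"
for n in 1 2 3; do
  echo "192.168.10.24$n m$n" >> "$hosts_file"
done
cat "$hosts_file"
```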

 

Create drives / shares
I created the drives / shares as follows:

-minio-test
   -d1
   -d2
   -d3
   -d4
   -d5
   -d6
   

Install the 1st instance of Minio:

*Switch to Advanced View*

Name: m1
WebUI: http://[IP]:[PORT:9001]/
Post Arguments: server --console-address ":9001" http://m{1...3}/data{1...2}
Network Type: Custom: br0.10 -- vlan 10 (your network may vary)
Fixed IP address: 192.168.10.241
Web UI: 9767 (leave as is - not needed in this configuration)
Console UI: 9867 (leave as is - not needed in this configuration)
Config: /mnt/user/appdata/m1 (locate on an SSD / cache drive)
MINIO_ROOT_USER: (username must be the same on ALL instances)
MINIO_ROOT_PASSWORD: (password must be the same on ALL instances)

Remove original /data path variable
Add 2 new additional path variables
/data1: /mnt/user/minio-test/d1/
/data2: /mnt/user/minio-test/d2/

 

Create - do not start at this time

 

Run Command:
--name='m1' 
--net='br0.10' 
--ip='192.168.10.241' 
-e TZ="America/New_York" 
-e HOST_OS="Unraid" 
-e HOST_HOSTNAME="Hulk" 
-e HOST_CONTAINERNAME="m1" 
-e 'TCP_PORT_9000'='9767' 
-e 'TCP_PORT_9001'='9867' 
-e 'MINIO_ROOT_USER'='username' 
-e 'MINIO_ROOT_PASSWORD'='password' 
-l net.unraid.docker.managed=dockerman 
-l net.unraid.docker.webui='http://[IP]:[PORT:9001]/' 
-l net.unraid.docker.icon='https://raw.githubusercontent.com/cheesemarathon/docker-templates/master/images/minio.png' 
-v '/mnt/user/appdata/m1':'/root/.minio':'rw' 
-v '/mnt/user/minio-test/d1/':'/data1':'rw' 
-v '/mnt/user/minio-test/d2/':'/data2':'rw' 
'minio/minio' 
server --console-address ":9001" http://m{1...3}/data{1...2}

 

Install the 2nd instance of Minio:

*Switch to Advanced View*

Name: m2
WebUI: http://[IP]:[PORT:9001]/
Post Arguments: server --console-address ":9001" http://m{1...3}/data{1...2}
Network Type: Custom: br0.10 -- vlan 10 (your network may vary)
Fixed IP address: 192.168.10.242
Web UI: 9767 (leave as is - not needed in this configuration)
Console UI: 9867 (leave as is - not needed in this configuration)
Config: /mnt/user/appdata/m2 (locate on an SSD / cache drive)
MINIO_ROOT_USER: (username must be the same on ALL instances)
MINIO_ROOT_PASSWORD: (password must be the same on ALL instances)

Remove original /data path variable
Add 2 new additional path variables
/data1: /mnt/user/minio-test/d3/
/data2: /mnt/user/minio-test/d4/

 

Create - do not start at this time

 

Run Command:
--name='m2' 
--net='br0.10' 
--ip='192.168.10.242' 
-e TZ="America/New_York" 
-e HOST_OS="Unraid" 
-e HOST_HOSTNAME="Hulk" 
-e HOST_CONTAINERNAME="m2" 
-e 'TCP_PORT_9000'='9767' 
-e 'TCP_PORT_9001'='9867' 
-e 'MINIO_ROOT_USER'='username' 
-e 'MINIO_ROOT_PASSWORD'='password' 
-l net.unraid.docker.managed=dockerman 
-l net.unraid.docker.webui='http://[IP]:[PORT:9001]/' 
-l net.unraid.docker.icon='https://raw.githubusercontent.com/cheesemarathon/docker-templates/master/images/minio.png' 
-v '/mnt/user/appdata/m2':'/root/.minio':'rw' 
-v '/mnt/user/minio-test/d3/':'/data1':'rw' 
-v '/mnt/user/minio-test/d4/':'/data2':'rw' 
'minio/minio' 
server --console-address ":9001" http://m{1...3}/data{1...2}

 

Install the 3rd instance of Minio:

*Switch to Advanced View*

Name: m3
WebUI: http://[IP]:[PORT:9001]/
Post Arguments: server --console-address ":9001" http://m{1...3}/data{1...2}
Network Type: Custom: br0.10 -- vlan 10 (your network may vary)
Fixed IP address: 192.168.10.243
Web UI: 9767 (leave as is - not needed in this configuration)
Console UI: 9867 (leave as is - not needed in this configuration)
Config: /mnt/user/appdata/m3 (locate on an SSD / cache drive)
MINIO_ROOT_USER: (username must be the same on ALL instances)
MINIO_ROOT_PASSWORD: (password must be the same on ALL instances)

Remove original /data path variable
Add 2 new additional path variables
/data1: /mnt/user/minio-test/d5/
/data2: /mnt/user/minio-test/d6/

 

Create - do not start at this time

 

Run Command:
--name='m3' 
--net='br0.10' 
--ip='192.168.10.243' 
-e TZ="America/New_York" 
-e HOST_OS="Unraid" 
-e HOST_HOSTNAME="Hulk" 
-e HOST_CONTAINERNAME="m3" 
-e 'TCP_PORT_9000'='9767' 
-e 'TCP_PORT_9001'='9867' 
-e 'MINIO_ROOT_USER'='username' 
-e 'MINIO_ROOT_PASSWORD'='password' 
-l net.unraid.docker.managed=dockerman 
-l net.unraid.docker.webui='http://[IP]:[PORT:9001]/' 
-l net.unraid.docker.icon='https://raw.githubusercontent.com/cheesemarathon/docker-templates/master/images/minio.png' 
-v '/mnt/user/appdata/m3':'/root/.minio':'rw' 
-v '/mnt/user/minio-test/d5/':'/data1':'rw' 
-v '/mnt/user/minio-test/d6/':'/data2':'rw' 
'minio/minio' 
server --console-address ":9001" http://m{1...3}/data{1...2}

 

Launch all 3 Minio instances. The logs will be littered with communication errors until all 3 devices are online and communicating with each other.

 

Log example:

 

Screen Shot 2022-05-19 at 2.19.38 PM.png


When all 3 instances have been started and you see a screen similar to the log output above, log in to any of the Minio instances and click on Monitoring and then Metrics. You should see '3' servers online and '6' drives online.

 

Screen Shot 2022-05-19 at 2.11.43 PM.png

 

What's left:

 

**Set up NPM - NGINX Proxy Manager so that requests hitting the API or console ports are answered by any one of the 3 nodes. Right now I can only configure it to send to one of the 3 nodes. There is a custom area; I just need to figure out how to fill that in.

 

If anyone has experience with this, please chime in with examples / screenshots
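
As a starting point for that custom area, a plain NGINX upstream that round-robins the three API endpoints would look roughly like this. This is untested with NPM specifically, and the IPs and the 9768 listen port are assumptions taken from earlier in this thread:

```nginx
# Round-robin the three MinIO API endpoints; NPM's custom config area
# should accept standard NGINX directives like these.
upstream minio_api {
    server 192.168.10.241:9000;
    server 192.168.10.242:9000;
    server 192.168.10.243:9000;
}

server {
    listen 9768;
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://minio_api;
    }
}
```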

 

**Set up Prometheus for additional metrics and alerts. Anyone else want to take a crack at it?

 

**Start messing with the S3 raw access policies - if anyone has experience with this, here is what I am trying to do.

 

I need some help with tailoring the following:

 

Allowing a specific user to log in, create/modify/delete/read/write to a bucket and then view all buckets under their username.

 

As of right now, I can have a user log in AFTER I create a username/password and bucket for them; however, I have to specifically assign them permission to view the bucket I just created. I would rather just create the user account and then allow the user to create their own buckets.

 

The original policy below works when I log in as 'mfwade', having created the bucket called 'mfwade'; I can do nothing else...

 

Formatting may be messed up due to copy and paste - context is correct....

 

Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccessForUser",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::mfwade/*",
                "arn:aws:s3:::mfwade"
            ]
        }
    ]
}
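
For the "each user manages their own buckets" goal, MinIO supports AWS-style policy variables, so a variant keyed on ${aws:username} may do what you want. This is an untested sketch adapted from the policy above; the bucket-name-equals-username convention is an assumption, and the console may additionally need s3:ListAllMyBuckets (which is not bucket-scoped) to show the bucket list at all:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OwnBucketsForUser",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::${aws:username}",
                "arn:aws:s3:::${aws:username}/*",
                "arn:aws:s3:::${aws:username}-*",
                "arn:aws:s3:::${aws:username}-*/*"
            ]
        }
    ]
}
```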

 

I hope this information helps others to set up something similar. If I can help in any way, please let me know.

Edited by mfwade
