JBOD Pool in Unraid



Is there an option to have some kind of JBOD pool in Unraid?

Basically I want a pool without redundancy. I'm willing to lose the data on any failed disk, but not the data of the entire pool.

 

I researched a BTRFS single-profile pool, but apparently BTRFS breaks files up into ~1GiB chunks, and these chunks are spread over the devices of the pool, so if one device fails, data across the entire pool might be lost.
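To see why that matters, here is a toy Python model of the concern (this is not real btrfs code; the 1GiB chunk size and round-robin placement are simplifying assumptions — real btrfs allocates chunks to the device with the most free space):

```python
# Toy model of a btrfs "single" data profile: data is allocated in
# ~1GiB chunks that can land on any device in the pool, so a large
# file's chunks end up spread across several devices.
# Round-robin placement is a simplifying assumption for illustration.

CHUNK = 1  # 1 "GiB" per chunk in this toy model

def place_chunks(file_sizes_gib, devices):
    """Assign each file's chunks to devices round-robin; return {file: set(devices)}."""
    placement = {}
    dev_idx = 0
    for name, size in file_sizes_gib.items():
        used = set()
        for _ in range(max(1, size // CHUNK)):
            used.add(devices[dev_idx % len(devices)])
            dev_idx += 1
        placement[name] = used
    return placement

def survivors(placement, failed_device):
    """Files that have no chunk on the failed device."""
    return {f for f, devs in placement.items() if failed_device not in devs}

files = {"movie1.mkv": 4, "movie2.mkv": 4, "photo.jpg": 1}
layout = place_chunks(files, ["sdb", "sdc", "sdd"])
print(survivors(layout, "sdc"))  # -> {'photo.jpg'}
```

In this toy layout both 4GiB files have chunks on every device, so losing any one device loses both of them; only the single-chunk file can survive. That is the difference from one-filesystem-per-disk pools, where a failure only takes the files on that disk.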

5 minutes ago, Gico said:

Any cons?

The GUI only allows a share to use one pool; you'd need to manually create the share in the other pools, configure the share to use cache=only, and copy any data to the other pools using the disk shares. After that, all the data in those pools can be accessed through the user share.

On 5/31/2021 at 7:53 AM, JorgeB said:

The GUI only allows a share to use one pool; you'd need to manually create the share in the other pools, configure the share to use cache=only, and copy any data to the other pools using the disk shares. After that, all the data in those pools can be accessed through the user share.

 

New single pools.

 

I have 1 Parity and 6 data disks.

I have disk 4 (which already has Share4), disk 5 (Share5), and disk 6 (Share6).

 

Keep disk 1, disk 2, disk 3 in the already existing array.

 

I want to take disks 4, 5, and 6 out of the main array so each becomes a single-disk pool with no parity, keeping each disk's share on it. I have already moved the data onto disks 4, 5, and 6.

 

I am not clear on the steps required.  I will watch some @SpaceInvaderOne videos to see if the newer videos discuss this.

On 5/31/2021 at 7:53 AM, JorgeB said:

The GUI only allows a share to use one pool; you'd need to manually create the share in the other pools, configure the share to use cache=only, and copy any data to the other pools using the disk shares. After that, all the data in those pools can be accessed through the user share.

 

I'm in a similar situation - not sure if I should make a new topic or leave it here.

 

What would happen if you have a share on the array with cache = Yes (ssd cache) and then manually created the share folder on another pool?  For example:

 

disk1/Example/

disk2/Example/

ssdPool/Example/

singlePool/Example/

 

Example share set to use disk1 & disk2, cache set to Yes, with ssdPool as the assigned cache pool.  Which pool files will get moved to the array?  I'm hoping ssdPool files would be moved, while singlePool files would stay in place.  If so, do you think it's safe to assume mover will always function like that?

 

Presumably, moving data onto singlePool will have to be done directly, rather than through the share.  Any other issues to worry about?

3 hours ago, Paul_Ber said:

I am not clear on the steps required. 

Do a New Config, unassign the disks you want to remove from the array, create the new pools and assign those disks there; parity will need to be re-synced after the array is started.

44 minutes ago, fritzdis said:

Which pool files will get moved to the array?  I'm hoping ssdPool files would be moved, while singlePool files would stay in place.  If so, do you think it's safe to assume mover will always function like that?

Yes.
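That "Yes" can be sketched in a few lines of Python — a rough model of the selection rule, not Unraid's actual mover script (pool, share, and file names are illustrative):

```python
# Sketch of mover behavior for a share with cache=Yes:
# only files sitting in the share's *assigned* cache pool are moved
# to the array; files under the same share folder in any other pool
# are left alone. Names below are made up for illustration.

def files_mover_would_move(share, assigned_pool, pool_contents):
    """pool_contents: {pool_name: [paths under /mnt/<pool>/<share>/]}"""
    return [(assigned_pool, f) for f in pool_contents.get(assigned_pool, [])]

pools = {
    "ssdPool":    ["Example/a.bin"],   # assigned cache pool -> moved to array
    "singlePool": ["Example/b.bin"],   # not assigned -> stays in place
}
print(files_mover_would_move("Example", "ssdPool", pools))
# -> [('ssdPool', 'Example/a.bin')]
```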

 

44 minutes ago, fritzdis said:

Presumably, moving data onto singlePool will have to be done directly, rather than through the share.  Any other issues to worry about?

Not if you always use disk shares to do that.

2 minutes ago, JorgeB said:

Yes.

 

Not if you always use disk shares to do that.

 

Perfect, thanks!

 

Still planning things a bit in my head (haven't upgraded to 6.9+ yet), but good to know how things should work.

18 hours ago, JorgeB said:

Do a New Config, unassign the disks you want to remove from the array, create the new pools and assign those disks there; parity will need to be re-synced after the array is started.

For each individual new pool disk, will the existing data stay intact?

 

Edit:

Ok I went to start and this has got me worried, "This is a utility to reset the array disk configuration so that all disks appear as "New" disks, as if it were a fresh new server.".

 

I do not want my disks treated like new. I want to keep my data, just redo parity, keep my 2 existing cache SSDs the same, and assign the 3 disks taken out of the array as individual pools.

 

I read the documentation, and it is not clear about what I am trying to do.

 

Ok, trying to read the documentation about "New Config", this is what I get: a big blank, just titles and no info:

[Screenshot: New Config documentation page, blank except for titles]


Ok in the end I did what you said and it worked.

 

I adjusted three of my shares (each share already took up a whole disk); each is now its own pool:

[Screenshot: new pool assignments]

 

And in Shares, I changed each share to use its new pool as the cache pool and set cache to Prefer:

[Screenshot: share cache settings]

And the same type of change to the 2 other shares.

 

Now those three Shares are under User Shares.  I will look into Disk Shares.

10 minutes ago, Paul_Ber said:

I will look into Disk Shares.

I usually recommend NOT sharing disks on the network and only using User Shares.

 

If you need to access individual disks to control which pool or disk gets written to, then you must be aware that mixing disk shares and user shares when moving or copying files can result in data loss.

 

It is possible to specify the same file as both source and destination if you mix disk and user shares, and Linux won't know you have done this, so it will try to overwrite the file while it is reading it.
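The failure mode described above can be demonstrated in a few lines of Python. This is a sketch, not Unraid code: a hard link stands in for the same file being visible at both a /mnt/user/... and a /mnt/diskX/... path, and `naive_copy` stands in for any copy that opens the destination with truncation before checking whether source and destination are really the same file:

```python
# Demonstrates the hazard: two different paths to the SAME underlying
# file (a hard link here simulates the user-share vs disk-share views).
# A naive copy opens the destination for writing first, which truncates
# it -- and since source and destination are the same file, the data is
# destroyed before a single byte is read.
import os
import tempfile

def naive_copy(src, dst, bufsize=64 * 1024):
    """Copy without a same-file check (the mistake the quote warns about)."""
    with open(dst, "wb") as out:        # opening "wb" truncates dst to 0 bytes
        with open(src, "rb") as inp:    # ...but src IS dst, so it's already empty
            while chunk := inp.read(bufsize):
                out.write(chunk)

tmp = tempfile.mkdtemp()
disk_path = os.path.join(tmp, "file_on_disk1")
user_path = os.path.join(tmp, "file_via_user_share")
with open(disk_path, "wb") as f:
    f.write(b"important data" * 1000)
os.link(disk_path, user_path)           # same inode, two paths

naive_copy(user_path, disk_path)
print(os.path.getsize(disk_path))       # -> 0, the data is gone
```

With a plain hard link, tools like `cp` would catch this via a same-inode check; the point of the quote is that through the user-share layer the two paths do not look like the same file to the copying tool, so no such check can save you.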


Ok, got the desired results.  Now I can do a parity rebuild or check while reading the 3 pools at the same time.

[Screenshot: Main tab showing the array and the 3 single-disk pools]

 

The parity rebuild is running while the 3 pool HDDs are being read over the LAN at the same time.

[Screenshot: parity rebuild in progress while the 3 pool disks are read]

4 minutes ago, trurl said:

I usually recommend NOT sharing disks on the network and only using User Shares. If you need to be able to access individual disks over the network to control which pool disk gets written, then you must be aware that mixing disk and user shares when moving or copying files can result in data loss. It is possible to specify the same file as both source and destination if you mix disk and user shares, and Linux won't know you have done this, so it will try to overwrite the file while it is reading it.

Thanks, I'll keep them as 3 single-disk pool user shares; the read speed is fast enough at 45 seconds to read all 3.  These 3 pool disks will never be written to again.


For WD Red drives like disk1, you need to add SMART attributes 1 and 200 to the custom attributes Unraid monitors (click on the disk to get to its settings).

 

It's been a while since you ran an extended SMART test on that disk. Run another one now.


Ok, started an extended SMART test with attributes 1 and 200 added.  So the extended test will take 389 minutes, about 6.5 hours?

I had a 6TB Red Pro on order before I knew this, so worst case this 6TB replaces a 5+ year old 3TB.

 

Looking at the power-on time:

[Screenshot: SMART power-on hours]

