Gico Posted May 31, 2021
Is there an option to have some kind of JBOD pool in Unraid? Basically I want a pool without redundancy: I'm willing to lose the data on any failed disk, but not the data of the entire pool. I researched the BTRFS single-profile pool, but apparently BTRFS allocates data in 1GiB chunks that are spread across the devices of the pool, so if one device fails, all of the pool's data might be lost.
JorgeB Posted May 31, 2021
3 minutes ago, Gico said: Is there an option to have some kind of JBOD pool in Unraid?
No, unless you count using the array without parity.
Gico Posted May 31, 2021
No, I don't 🙂 Thank you.
Gico Posted May 31, 2021
I could set up a pool per disk; that would let each disk participate in standard Unraid shares. I know there is a 35-pool limit, and that should be enough. Any cons?
JorgeB Posted May 31, 2021
5 minutes ago, Gico said: Any cons?
The GUI only allows a share to use one pool, so you'd need to manually create the share folder on the other pools. The share should be configured with cache=only, and you'd need to copy any data to the other pools using the disk shares, but after that all the data in those pools can be accessed through the user share.
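A minimal shell sketch of the setup described above, under stated assumptions: the pool names (pool1..pool3) and share name (media) are hypothetical, and ROOT stands in for /mnt on a real server so the sketch stays self-contained.

```shell
# Stand-in for /mnt on a real Unraid server (a temp dir keeps this runnable anywhere).
ROOT="${ROOT:-$(mktemp -d)}"
SHARE="media"                        # hypothetical share name
for POOL in pool1 pool2 pool3; do    # hypothetical single-disk pool names
  mkdir -p "$ROOT/$POOL/$SHARE"      # manually create the share folder on each pool
done
# On the real server, copy data in via the disk-share paths (never /mnt/user), e.g.:
# rsync -avh /mnt/disk4/media/ /mnt/pool1/media/
```

Once each pool has the folder and the share is set to cache=only, browsing the user share presents the combined contents of all the pools.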
Gico Posted May 31, 2021
That's OK: I'm going to fill these drives with data and leave them be. Hmmm... I might also try mergerfs.
Paul_Ber Posted June 3, 2021
On 5/31/2021 at 7:53 AM, JorgeB said: GUI only allows a share to use one pool…
New single pools: I have 1 parity and 6 data disks. Disk 4 already has Share4, disk 5 already has Share5, and disk 6 already has Share6. I want to keep disks 1, 2, and 3 in the existing array, and take disks 4, 5, and 6 out of the main array so each becomes a single-disk pool with no parity, keeping each disk's existing share. I have already moved the data onto disks 4, 5, and 6. I am not clear on the steps required; I will watch some @SpaceInvaderOne videos to see if the newer ones cover this.
fritzdis Posted June 3, 2021
On 5/31/2021 at 7:53 AM, JorgeB said: GUI only allows a share to use one pool…
I'm in a similar situation; not sure if I should make a new topic or leave it here. What would happen if you have a share on the array with cache = Yes (SSD cache) and then manually created the share folder on another pool? For example:
disk1/Example/
disk2/Example/
ssdPool/Example/
singlePool/Example/
The Example share is set to use disk1 & disk2, cache is set to Yes, and ssdPool is the assigned cache pool. Which pool's files will get moved to the array? I'm hoping the ssdPool files would be moved while the singlePool files stay in place. If so, is it safe to assume mover will always function like that? Presumably, moving data onto singlePool will have to be done directly rather than through the share. Any other issues to worry about?
JorgeB Posted June 3, 2021
3 hours ago, Paul_Ber said: I am not clear on the steps required.
Do a New Config, unassign the disks you want to remove from the array, create the new pools and assign those disks there; parity will need to be re-synced after array start.
JorgeB Posted June 3, 2021
44 minutes ago, fritzdis said: Which pool files will get moved to the array? I'm hoping ssdPool files would be moved, while singlePool files would stay in place. If so, do you think it's safe to assume mover will always function like that?
Yes.
44 minutes ago, fritzdis said: Presumably, to move data onto singlePool will have to be done directly, rather than through the share. Any other issues to worry about?
Not if you always use disk shares to do that.
fritzdis Posted June 3, 2021
2 minutes ago, JorgeB said: Yes. … Not if you always use disk shares to do that.
Perfect, thanks! Still planning things a bit in my head (I haven't upgraded to 6.9+ yet), but it's good to know how things should work.
Paul_Ber Posted June 3, 2021 Share Posted June 3, 2021 (edited) 18 hours ago, JorgeB said: New config, unassign the disks you want to remove from the array and create new pools and assign them there, parity will need to be synced after array start. For each individual new pool disk, will the existing data stay intact? Edit: Ok I went to start and this has got me worried, "This is a utility to reset the array disk configuration so that all disks appear as "New" disks, as if it were a fresh new server.". I do not want my disks treated like new, I want to keep my data, just redo Parity, keep my 2 existing cache SSDs the same, and assigned 3 disks taken out of array as individual Pools. I read the documentation, and it is not clear with what I am trying to do. Ok trying to read the Documentation about "New Config", this is what I get a big blank just titles and no info: Edited June 3, 2021 by Paul_Ber Quote Link to comment
Paul_Ber Posted June 3, 2021 Share Posted June 3, 2021 Ok I looked in FAQ about the new pools feature, am I misreading something because I didn't see anything about the new pools. Will search the "New Config" in the Forums next. Quote Link to comment
Paul_Ber Posted June 3, 2021 Share Posted June 3, 2021 Ok in the end I did what you said and it worked. I adjusted three of my shares(each share already took up a whole disk) that are now a pool each And in Shares changed the cache pool to point to and set prefer. And the same type of change to the 2 other shares. Now those three Shares are under User Shares. I will look into Disk Shares. Quote Link to comment
trurl Posted June 3, 2021
10 minutes ago, Paul_Ber said: I will look into Disk Shares.
I usually recommend NOT sharing disks on the network and only using user shares. If you need to access individual disks to control which pool disk gets written, be aware that mixing disk and user shares when moving or copying files can result in data loss. It is possible to specify the same file as both source and destination if you mix disk and user shares; Linux won't know you have done this, so it will try to overwrite the very file it is reading.
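The warning above can be illustrated without Unraid at all. In this sketch a hard link stands in for the user-share FUSE view: one underlying file reachable by two paths, which is exactly the situation created by mixing /mnt/user and /mnt/diskX paths. The shell truncates the redirection target before the command reads the source, so the data is destroyed:

```shell
# Hypothetical stand-in: one file reachable by two names, like a file seen
# through both a disk share and the user share.
dir=$(mktemp -d)
printf 'important data\n' > "$dir/disk_view"
ln "$dir/disk_view" "$dir/user_view"    # same inode, two paths
# "Copying" one view onto the other: the shell truncates user_view first,
# which also empties disk_view (same file), so cat finds nothing to read.
cat "$dir/disk_view" > "$dir/user_view"
# Both names now point at a zero-byte file: the data is gone.
```

GNU cp happens to detect this exact case and refuses, but the disk-share and user-share paths look unrelated to any tool that doesn't compare inodes, so scripts and other commands can still fall into the truncate-before-read trap.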
Paul_Ber Posted June 3, 2021 Share Posted June 3, 2021 (edited) Ok got the desired results. Now can do a Parity Rebuild or Check while reading the 3 pools at the same time. The Parity Rebuild is occurring while 3 Pool HDD are being being read at the same time from over the LAN. Edited June 3, 2021 by Paul_Ber Quote Link to comment
Paul_Ber Posted June 3, 2021
4 minutes ago, trurl said: I usually recommend NOT sharing disks on the network and only using User Shares…
Thanks, I'll keep them as 3 pool disks with user shares; the read speed is fast enough at 45 seconds to read all 3. These 3 pool disks will never be written to again.
Paul_Ber Posted June 4, 2021
Will it show this error message until the parity rebuild is complete? I think the answer is yes.
Paul_Ber Posted June 4, 2021
Maybe my trying this will help someone else.
JorgeB Posted June 4, 2021
1 hour ago, Paul_Ber said: Will it show this error message until Parity rebuild is complete?
Yes.
Paul_Ber Posted June 4, 2021
One of the data disks shows 30 errors, and the count is not climbing.
JorgeB Posted June 4, 2021
Based on that it looks like a disk problem, but it's difficult to say more without full diagnostics.
Paul_Ber Posted June 4, 2021
5 hours ago, JorgeB said: Based on that looks like a disk problem, but difficult to say more without full diags.
unraid-diagnostics-20210604-1436.zip
trurl Posted June 4, 2021
For WD Red drives like disk1, you need to add SMART attributes 1 and 200 to the custom attributes Unraid monitors (click on the disk to get to its settings). It's been a while since you ran an extended SMART test on that disk; run another one now.
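For reference, the same checks can be done from the command line with smartctl. The device name /dev/sdX below is a placeholder, and the runnable part only parses a fabricated sample of `smartctl -A` output (made-up values) to show which columns attributes 1 and 200 appear in:

```shell
# Real commands (device name is a placeholder; run as root on the server):
#   smartctl -A /dev/sdX        # list SMART attributes, including 1 and 200
#   smartctl -t long /dev/sdX   # start an extended (long) self-test
# Runnable demo: filter attributes 1 and 200 from a fabricated sample.
sample='  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       5'
printf '%s\n' "$sample" | awk '$1 == 1 || $1 == 200 {print $1, $2, $NF}'
# prints:
#   1 Raw_Read_Error_Rate 0
#   200 Multi_Zone_Error_Rate 5
```

The last column is the raw value, which is what Unraid's attribute monitoring watches for changes.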
Paul_Ber Posted June 4, 2021 Share Posted June 4, 2021 (edited) Ok started an extended SMART test with attribute 1,200. So the extended test will take 389 minutes or 6.4hrs? I have a 6TB Red Pro on order before I knew this, so worst case this 6TB replaces a 5yr plus 3TB. Looking at power on time: Edited June 4, 2021 by Paul_Ber Quote Link to comment