6.9 upgrade and blowing away the array


cdoublejj


2 hours ago, trurl said:

The .cfg file for each share is stored on flash in config/shares. For a share to actually exist, though, it must have a top level folder on pool or array named for the share. Top level folders are always shares, but any top level folder that doesn't have a .cfg file has default settings (highwater, split any, minimum 0, include all, cache-no).

 

mover still only moves from array to pool, or pool to array, so unless you have a very large pool it isn't likely to help with moving your array files somewhere.

 

Unassigned Devices can mount external drives so no need to do it over the network. As long as you have a single (possibly empty) array data disk you can start Unraid and use plugins and even dockers and VMs.
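To illustrate the share defaults described above, a share's .cfg under config/shares on the flash might look roughly like this. The key names and values here are illustrative only (my own sketch, not copied from a real file) — check an actual .cfg on your flash drive rather than trusting these exact names:

```ini
# config/shares/Media.cfg — hypothetical share named "Media"
# Key names are approximate; inspect a real .cfg for the exact spelling.
shareComment=""
shareAllocator="highwater"  # allocation method
shareSplitLevel=""          # split any
shareFloor="0"              # minimum free space
shareInclude=""             # empty = include all disks
shareExclude=""
shareUseCache="no"          # cache-no
```

A top-level folder with no matching .cfg at all simply behaves as if every line above were at its default.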

 

 

I'm going the other way, to a newer, smaller parity drive due to cost. I got lucky with my first 10TB, but now all I can afford is an 8TB. I assume I must delete my array (my new array drives will be about 200GB smaller as well, 1.7TB down from 2TB).

 

Or can I straight up remove the drives, reattach them later, and move the data over?

 

Can I switch back to the stable branch later (when doing updates)? I'm assuming it will be a long, long while before 6.9 is released. Some of my drives are SSDs.

 

EDIT: And what is a pool? I guess it's not part of the array. Should I use BTRFS?

On 9/30/2020 at 12:03 PM, trurl said:

Not sure I understand. Are you saying you currently have 10TB parity but your data disks are only 2TB?

Only several SSDs in the storage array. Technically you are not supposed to use an SSD as parity; it would be hard on it!

 

Correct. At the time, 10TB prosumer drives had just launched, and I had never expected used multi-TB enterprise drives to drop to the prices they're at now. Soooo at that same time I bought a bunch of used enterprise drives. 2TBs were cheap as chips at under $50. I managed to snag an 8TB and a 6TB drive. Right now all my data seems to be lumped mostly on those drives for some reason. EDIT (as to say, I figured I'd be slapping in bigger and bigger HDDs as time goes on, but now it'll be bigger and bigger SSDs)

 

 

Soooooo as you can imagine, with automated virtual machines and file shares etc. running 24/7, I might not be too impressed with my... usable but lackluster performance. Gee, wouldn't have anything to do with multiple share accesses and used, crusty, abused enterprise drives, now would it!?

 

HEY! I know, let's spin up shares and INSTALL Steam games to them too! That wouldn't work poorly at all! To be fair, that worked better than expected, but it leaves me wanting more. So I slapped in a few 2 and 4TB SSDs, set my Steam drive shares to map only to those drives, and wouldn't ya know, it ain't half bad. Minus DRM having a conniption when it sees it's not a local drive.

 

So what do you do? Well, you find out you qualify for PayPal Credit and make some impulse buys! Yes, I now have a STACK of SSDs. Screw it, I'm going all flash. I knew and know the risks, and even talked to someone on the Unraid forums who has been running an all-flash array for a while.

 

Too bad; I could reuse my old 8TB or 10TB drive as cold storage, but even then that would slow down my parity checks again (remember, 24/7 operations).

 

The kicker here is you can NOT go to a smaller parity drive!!! But I am. You can NOT move to smaller data drives, but I am!... ...The only way to do that... DELETE!

 

So I figure I back up my data, hit the delete button, shove my drives in, maybe run preclears on them, and build a new array... easy peasy, right? ...Right?

 

....Welllll then I read about this new, more SSD-friendly format in the 6.9 beta! Sounds helpful! It won't show my poor parity SSD any love or save it from certain death, buuuut it sure sounds like it'll stretch the life of my storage array SSDs.

 

Also heard I could plug some external 10TB USB drives into the Unraid server to migrate data! I WAS curious how that was done till I realized it only has USB 2.

I thought I had a bunch of questions, but really all I can do is try out the new beta and hope I'm one of the ones who doesn't get lockups.

 

I am curious whether I can switch back to the stable branch once 6.9 launches officially, though.


Not clear you actually answered my questions, or if you did I couldn't separate out the simple answers to my simple questions from all the other information you posted. And the other information you posted seems like it may include some misconceptions about Unraid.

 

Probably the best way to answer many questions I might have, and what I should have asked for to begin with:

 

Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.


The questions were...

Do you have a 10TB parity drive? Yes.

Do you have 2TB array drives? Yes (however, there is also an 8TB array drive as well).

Do you have an SSD parity? No (not yet).

You seem to have some misconceptions! ...Very likely, yes!

Your next post should be diagnostics! ...Sure, please see attached.

 

 

 

unraid1-diagnostics-20201002-1327.zip

38 minutes ago, cdoublejj said:

the two questions were do you have 10tb parity drive? yes

you have have 2tb array drives? yes (however there is also 18tb array drive as well)

Not possible to have a data drive larger than parity, so obviously some misunderstandings.

 

The reason I asked about 10TB parity and 2TB array disks is because I thought you may have had some misunderstanding about how large parity needs to be. No data drive can be larger than any parity drive, but a 10TB parity drive provides redundancy for a large number of data drives, each up to 10TB. So I thought it odd that you had 2TB data drives with 10TB parity.

 

Parity contains no data by itself. Parity is a common concept in computers and communications, and it is basically the same wherever it is used. Parity is just an extra bit that allows a missing bit to be calculated from all the other bits. So, all the other disks are needed to allow parity to calculate the data for a missing disk.
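The XOR behavior described above can be sketched in a few lines of Python. This is a toy model (small integers standing in for whole disks), not how Unraid actually stores anything:

```python
# Toy model of single parity: each "disk" is one integer, parity is their XOR.
disks = [0b1011, 0b0110, 0b1110]

parity = 0
for d in disks:
    parity ^= d          # the extra bit(s) computed from all data disks

# Simulate losing disk 1: XOR the parity with every surviving disk
# to calculate the missing disk's contents.
rebuilt = parity
for i, d in enumerate(disks):
    if i != 1:
        rebuilt ^= d

assert rebuilt == disks[1]   # the missing disk is recovered bit-for-bit
```

Note that the rebuild loop touches every surviving disk — which is exactly why all the other disks must be readable to reconstruct one missing disk.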

 

38 minutes ago, cdoublejj said:

you have an ssd parity? no (not yet)

Didn't ask that, and in fact, SSDs in general are usually not recommended in the parity array. And there would be little point in having SSDs in an array with HDD parity, since no disk can be written faster than parity. SSDs in the parity array also cannot be trimmed, and there has been some discussion about whether some SSD implementations might actually invalidate parity.

 

Reviewing your diagnostics now.

 


Looks like most of your disks are empty or nearly so. And you have a lot of disks. I always recommend fewer larger disks instead of more smaller disks. Each additional disk is an additional point of failure, each additional disk requires more hardware to attach it and more power. And at some point, more disks requires more Unraid license. Also, larger disks perform better than smaller disks due to increased data density, and larger disks are usually cheaper in terms of $/TB.

 

I also don't recommend using older, smaller disks just because you happen to have them. In order to reliably rebuild every bit of a missing disk, Unraid must be able to reliably read every bit of all other disks. So, untrustworthy disks in the array can actually make it difficult to recover data from other disks.

 

Since you don't seem to need all that storage, if it was me, I would just make an array with the 10TB parity and only the larger HDDs, leaving out the smaller HDDs and leaving out the SSDs from the parity array altogether.

 

Your cache pool looks fine, and you might find a good use for those other SSDs in additional pools, a new feature of the latest betas. I have a cache pool for caching user share writes, and a "fast" pool for things that need fast access, such as dockers and VMs.

 

You don't currently have dockers and VMs enabled, but your configuration for those looks good, and the shares they use are already on cache where they belong, so they will perform better and not keep array disks spun up.

27 minutes ago, trurl said:

No evidence in diagnostics of any 18TB drive attached to your server, and as noted, it wouldn't be possible to assign a disk that large to an array with only 10TB parity anyway.

Yes! This is because when I pressed the 8 key on the keyboard I accidentally pressed the 1 key as well, changing 8TB to 18TB. I edited my post too slowly.

13 minutes ago, trurl said:

Looks like most of your disks are empty or nearly so. And you have a lot of disks. I always recommend fewer larger disks instead of more smaller disks. [...]

I don't use Docker; I have an ESXi farm, hence why I use Unraid as the programs folder for some machines.

 

I purchased the utmost license several years ago.

 

My cache pool is very easy to fill up ATM when transferring large amounts of large files from share to share and to VMs.

 

I agree about the smaller drives, which is why I bought so many 1.7TB and 4TB SSDs that I'm about to install. Kind of backwards, but the dollar per TB for several smaller SSDs was cheaper than the 8TB SSD I just bought.

 

I will google "additional pools"; maybe I can utilize all of my new and old hardware, minus the old 2TB and 3TB HDDs.

 

7 minutes ago, cdoublejj said:

my cache pool is very easy to fill up ATM

Don't cache initial data load; cache isn't large enough for that. Mover is intended for idle time; the default mover schedule is daily in the middle of the night. Making it run more often won't help anything, because it is impossible to move from faster cache to slower array as fast as you can write to faster cache. And mover is just more load on the same disks you are trying to write to.

 

So, don't cache initial data load. Your cache looks plenty large enough for typical daily use after the initial load.

 

Some people even leave parity unassigned until after the initial data load since parity slows writes to the array.

26 minutes ago, cdoublejj said:

I agree about the smaller drives, which is why I bought so many 1.7TB and 4TB SSDs that I'm about to install. Kind of backwards, but the dollar per TB for several smaller SSDs was cheaper than the 8TB SSD I just bought.

You agree with my point about not using small drives, so you bought small drives???

 

And as mentioned, SSDs belong in the pools not in the parity array.


Since this whole thread started from 6.9 upgrade...

14 minutes ago, cdoublejj said:

i will google "additonal pools"

Multiple pools are the main new feature of the latest betas. Just read the 6.9beta25 and 6.9beta29 threads for more about that.

On 9/30/2020 at 12:44 PM, cdoublejj said:

can i switch back to the stable branch later?

Also, these betas have a better partition alignment for SSDs, though this feature is NOT compatible with previous versions. And changing partition alignment requires reformatting so to use this new feature you have to reformat SSDs.

 

And, there are some additional drivers for some hardware, mostly faster NICs I think, and some known issues for some hardware. All explained in these beta threads, so read them.

 

36 minutes ago, cdoublejj said:

dollar per TB for several smaller SSDs was cheaper than the 8TB SSD

The sweet spot currently for HDDs seems to be about 8TB in terms of $/TB. Of course it will be different for SSDs, but as noted putting SSDs in an array with HDD parity isn't the best idea. I know some people are running all SSDs in the array, but mixing HDDs and SSDs in the array will impact perceived performance for the SSDs in many situations (writes, parity checks, rebuilds). And they can't be trimmed.

39 minutes ago, trurl said:

Don't cache initial data load, cache isn't large enough for that. [...]

My initial data loading will probably be indefinite. However, you make a good point about performance increasing once the crusty 2TB drives are dropped.

35 minutes ago, trurl said:

You agree with my point about not using small drives, so you bought small drives???

 

And as mentioned, SSDs belong in the pools not in the parity array.

Come on, man! Where were you a week ago when I impulse bought 8x 1.7TB SSDs? Where's your crystal ball? If I'd had this info days ago I might not have been dead set on an all-SSD Unraid server; however, pools change things.

 

AH!!! I think I may be using terminology wrong???? I think of the array as the parity drive AND multiple storage drives that hold the data for the shares. I know you are not supposed to use an SSD as the parity drive, which I very much want to try. Others seem to get quite the performance boost from it at the cost of increased hardware read/write wear. However, modern SSDs can usually take a whoop'n beyond factory MTBF, especially enterprise SSDs 😀

 

21 minutes ago, trurl said:

Since this whole thread started from 6.9 upgrade...

multiple pools are the main new feature of the latest betas. [...]

 

I'll be reading those. THAT is what caught my eye!!! I just bought all these large SSDs and now they can be better formatted... under the beta?

 

Can you... update from a beta to an official release? As to say, change updates from beta BACK to official?

 

Pools give me some stuff to think about. I have all the SSDs to make a 20-or-so-TB ALL-SSD array that should be FAST!

 

EDIT: BUT, I could also have a pool for the 4, 6, and 8TB HDDs.

 

ALSO....

 

Kind of curious about a few things. The entire array (forget about pools/cache for now) is limited to the slowest drive, no? Especially so during writes, long writes?

Is there any benefit to having 10TB parity when the storage drives are 8TB in size?

8 minutes ago, cdoublejj said:

i think of the array as parity drive AND multi storage drives that holds the data for the shares.

This is basically correct as far as it goes, but Unraid IS NOT RAID. There is no striping. Each data disk in the parity array is an independent filesystem containing complete files. Reading is at the speed of the single disk containing the file. Unraid parity is realtime, so write speed is somewhat slower because parity must also be updated at the same time. So, if you have SSD in an array with HDD parity, writes to the SSD (with parity update) can't be faster than the HDD parity.

 

User Shares allow folders to span disks, but files cannot. And cache and other pools are also part of user shares.

 

See here for more about the details of parity updates:

 

20 minutes ago, cdoublejj said:

can you ...update from a beta to an official release? as to say change updates from beta BACK to official?

To use the new SSD partition alignment, those disks must be wiped and repartitioned/reformatted. And that new alignment can't be read on earlier versions so you would have to repartition/reformat them back to the older alignment.

 

All this and more is discussed in those linked threads.

10 minutes ago, trurl said:

To use the new SSD partition alignment, those disks must be wiped and repartitioned/reformatted. [...]

Yeah, this is why I'm just going to copy my data off my shares and start over from scratch by deleting the array — or, so to say, dissolving it and starting anew.

I should get a write boost with an SSD parity drive. Also, parity checks should go faster. However, it is not supported or advised.

 

Thank you for wording it that way. I had read the first release thread, asked too many questions, and was asked to post here. I'm guessing once 6.9 is official I could switch back to the stable branch of updates.

 

I went ahead and re-read the first one and am re-reading the second. I think I'm heading in the right direction now. Thank you for your help thus far.

1 hour ago, cdoublejj said:

Pools gives me some stuff to think about. i have all the SSDs to make 20 or TB ALL SSDs array that should be FAST!

 

EDIT: BUT, i could also have pool for the 4,6 and 8TB HDDs.

You can only have 1 main array that uses the Unraid traditional separate parity with individually formatted data drives. The pools can either be single-device XFS, or multi-device BTRFS RAID volumes, using any RAID level you feel comfortable with.

 

Typically the main array would be your bulk slow storage, all spinning rust of various sizes. SSDs would be arranged in different pools, with different RAID levels to suit each pool's specific purpose. By default all newly defined BTRFS pools are initialized as RAID1, but you can change that with a command line. Hopefully sometime in the next year you will be able to change RAID levels with a drop-down selection.

On 10/2/2020 at 4:45 PM, jonathanm said:

You can only have 1 main array that uses the Unraid traditional separate parity with individual format data drives. [...]

So this means the pools need matching drive sizes, or single drives? A single-drive pool sounds like little to no protection?

2 hours ago, itimpi said:

No. BTRFS RAID levels are not quite like traditional ones, but with mismatched sizes this can affect the available space. You can use this site to see the available space for different combinations of disk size and RAID profiles.

Interesting. The results don't make much sense with a 6TB and an 8TB listed as a 2-device RAID1. EDIT: nope, it just wastes the extra space of the larger drive, like a normal RAID.
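The usable-space behavior for BTRFS RAID1 can be approximated with a simple model. This is my own sketch, not btrfs's actual chunk allocator, and it ignores metadata overhead, so treat the numbers as estimates:

```python
def btrfs_raid1_usable(sizes_tb):
    """Rough usable capacity of a BTRFS RAID1 pool, where every chunk is
    stored on two different devices. Ignores metadata chunk overhead."""
    total = sum(sizes_tb)
    largest = max(sizes_tb)
    # Two copies of everything caps usable space at half the total, and the
    # largest device can only mirror against what all the others provide.
    return min(total / 2, total - largest)

print(btrfs_raid1_usable([8, 6]))     # → 6 (the 8TB drive's extra 2TB sits idle)
print(btrfs_raid1_usable([2, 2, 2]))  # → 3.0 (odd device counts still pair chunks)
```

This matches the 6TB-plus-8TB case above: only 6TB is usable, with the larger drive's extra space wasted, just like a conventional mirror.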

