Is Unraid a better choice than TrueNAS for my hardware and usage?



Apologies if this has previously been answered. I've been using TrueNAS 12 on my NAS for the last few months and am looking to see if Unraid would be a better solution. This is for home storage, mostly media that I access from my computers. I'd like to set it up as a home server (Plex or similar), so there will be a massive amount of initial uploading and then mostly 2-3 computers accessing the files at a time. For some reason I've been getting unexplainable pool degradation warnings and frequent copy errors. The hard drives, cables, and memory all check good, so my assumption is that it's an issue with my current hardware platform, which I've been looking to upgrade for a while anyway. I also recently noticed that hard drive temps jump into the mid-to-high 40°C range during transfers. Thoughts on whether Unraid would be a better solution for my storage would be appreciated.

 

Current setup: 

OS: TrueNAS 12, disks set up in a RAID-Z1 pool for roughly 32TB of usable storage

i7 4790K CPU

MSI Z87-G45 Gaming mobo

7x MaxDigitalData 6TB hard drives

Samsung 850 Pro 250GB SATA SSD (OS)

24GB non-ECC memory (2x8GB + 2x4GB)

 

Netgear Nighthawk AX-8 router 

 

I'm replacing the 4790K with a new i7 6700 CPU and a Gigabyte GA-Z170-HD3 motherboard that should arrive this week.

Also replacing my case with an Antec P101 case for proper drive bays and better cooling.

 

I may expand it in the future, but currently I'm satisfied with the size of the array.

 

Given the above, would Unraid be a good solution or should I stick with Truenas?

 

Any comments or suggestions would be appreciated. If I understand Unraid correctly, the size/speed of the cache drive is crucial to the speed of large uploads, so I'm willing to pick up a larger SSD or replace it with an NVMe one.

 

 

 

Link to comment

The CPU is not the real issue unless you want to run a lot of VMs and Docker containers. RAM also is not that important (but I would throw out those 2x4GB thingies; supposedly mixed modules can slow down the whole system, so look at their specs. In general, ALWAYS use the same type of RAM in all banks).

 

Unraid or not is mostly a question of how you expect your space demand to grow in the future.

 

Now, with your ZFS array, you are stuck with the same type and size of disks. That's no problem now, but it can become one in a few years when drives start to fail and need to be replaced. You are then forced to get the same size or bigger (more likely bigger), but even if bigger, the extra capacity is ignored and the new disk is only filled up to the size of the old one.

 

Unraid has a different approach: you start by buying a parity drive (or take the risk and go without one), and it needs to be the biggest of all your drives (wise advice: it is a real pain in the a²@ to replace it later). Then you fill up your array with any combination of smaller (or equal) drives. They are independent of each other; if one fails, its data is lost, unless there is a parity drive (or two, to play it ultra-safe).

So if a disk fails in a few years, you just pull it out (the content is then emulated via the parity drive), go out and buy a new disk, which can be ANY size as long as it is no bigger than the parity drive, plug it in, and sit back. Unraid will automatically regenerate the data within some hours (or days, depending on the size; it does not matter whether there really is data on the drive, every sector is restored, used or unused). When finished, it will add the remaining free space (if the new drive was bigger than the failed one) to the array, actually "growing" it.
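If you want to see why that works, here is a little Python toy of single-parity (XOR) reconstruction; the drive contents and names are invented, and the real thing of course runs sector by sector over whole disks:

```python
# Toy model of single-parity reconstruction (XOR), the principle behind
# "emulating" a failed data drive. Real arrays do this per sector.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # three data "drives"
parity = xor_blocks([d1, d2, d3])        # written to the parity drive

# Drive 2 dies: rebuild its content from the survivors plus parity
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2                     # the "emulated" disk matches
```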

Also, you can add more data disks at any time, as long as the total count is covered by the licence you bought (which can be upgraded later on too, of course).

Unraid is also capable of "spreading the shares" automatically among all drives. If one drive fills up, it simply creates the same folder on another drive and new data gets written there. The clients just keep using the same share and see all files from all disks of that share at the same time. That's really nice.
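As a sketch of the idea (this mimics the "most-free" allocation method, one of several that Unraid offers; the disk names and sizes are invented):

```python
# Minimal sketch: a new file for a share lands on whichever array disk
# currently has the most free space; clients still see one merged share.
def pick_disk(free_bytes):
    """free_bytes: dict mapping disk name -> free bytes."""
    return max(free_bytes, key=free_bytes.get)

disks = {"disk1": 120e9, "disk2": 800e9, "disk3": 450e9}
print(pick_disk(disks))  # -> disk2
```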

 

If this all sounds too good to be true, we also need to discuss some drawbacks.

"Cache" Handling is not the brightest. All cache settings result only into a "write cache". New Files are created on this cache drive (supposely ultra fast transfers onto a dedicated NVMe). But this cache is usually not "safe" unless you double it with a 2nd one and create a RAID0 (mirror) combination. The mirror will lower speed of course, also every cache disk is limited. So, sooner or later, you will need to launch the "Mover". This program (usually ran automatically at a certain time of the day) moves files and folders off the cache and puts them onto the real data drives, protected by parity.

From then on, they are not "new" anymore, and every change you make to such a file results in slow access to the real data disk.

Even worse is the "read cache": it simply does not exist. Your cache disk will not be used for reading anymore once a file has been moved off it.

You could try to fiddle with it; there are settings like "prefer" which tell Unraid to keep files from a share in the cache if possible. But you have no control over which ones, and they are back in danger again unless your cache is mirrored. So this may sound nice, but it does not really convince me.
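Conceptually, the Mover boils down to something like this little sketch (heavily simplified; the real Mover also honors per-share cache settings, skips open files, and so on, and the single target disk here is just for illustration):

```python
# Toy version of what the Mover does: relocate everything from the
# cache pool onto a parity-protected array disk, preserving paths.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # cache pool mount
ARRAY = Path("/mnt/disk1")   # one array disk, for simplicity

for src in CACHE.rglob("*"):
    if src.is_file():
        dest = ARRAY / src.relative_to(CACHE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), dest)  # from now on: parity-protected, slower
```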

Also, most people are disappointed with the write speed when they write to an SSD or even an NVMe that is part of the array.

They forget that for every write to any data disk, the parity drive needs to be updated too. So the overall write speed of the array is limited by the speed of the parity drive. Who buys cheap here will suffer long and hard.

(But of course, READING from an SSD or NVMe in the array is fast; as long as they don't signal an error, there is no need to consult parity.)
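Back-of-envelope, with made-up numbers:

```python
# Parity-protected writes cannot outrun the slowest device involved in
# the update; both speeds below are hypothetical, for illustration only.
array_ssd_mbps = 550    # SATA SSD inside the array
parity_hdd_mbps = 180   # 7200rpm parity drive
ceiling = min(array_ssd_mbps, parity_hdd_mbps)
print(f"write ceiling ~{ceiling} MB/s, set by the parity drive")
# (read-modify-write cycles on spinning parity usually cost extra on top)
```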

 

As a conclusion, I would give you some advice:

* 250GB for the cache NVMe is enough; just make sure it can sustain its write speed for a long time (big file, fast LAN).

* Buy a new, big, and fast drive and use it for parity. Even if you only have 6 or 8TB drives to convert now, consider an 18TB drive of a PRO series. It looks like overkill now but will come in handy sooner or later.

 

You will have trouble transferring your files from your old array to the new Unraid array. You can't simply copy one disk after the other, because your ZFS needs to be very recent to allow shrinking the pool. If it does, you can free up one disk at a time, pull the empty one out and add it to Unraid, transfer some data until it is filled up, delete the original data, and start over again.

Plan in some weeks for this, and of course you need to keep two complete systems running. (A wiser approach would be to buy at least two 18TB drives, use one for data, and fill it up as described before. Then you could shrink your ZFS by several disks at once and wouldn't need to repeat the whole procedure as often. When all is finished, declare the remaining 18TB drive as parity; it will take a day or so to catch up.)

Anyway, YOU WILL NEED A LOT OF PATIENCE

Good luck 🙂

Link to comment

Thank you very much, Michael Meiszi! Good stuff to be considering.

 

Received my motherboard last night; the Antec P101 case should be here this weekend. I ordered a SAMSUNG MUF-32AB/AM FIT Plus 32GB Flash Drive to use as my boot drive, so now I'm down to my last few decisions:

Cache drive choices:

1TB SK hynix Gold P31 NVMe drive for $110, or

1TB SanDisk SSD Plus SATA drive for $80

I'm leaning towards spending the extra $30 for the NVMe one, as it'll keep my last SATA port free to add another drive down the road.

 

CPU choice:

New, unused i7 6700, or a roughly 4-year-old i7 7700T from a retired desktop from work. The much lower wattage and heat of the 7700T has me thinking it could be a viable option with this board, and I could pick up a decent air cooler for it for $20. Otherwise, if I go with the 6700, I'll likely reuse the Corsair H110 (?) water cooler from the existing build, which has several years of mileage on it. My only concern with the water cooling is that there are 3x120mm intake fans blowing right over my hard drives and a single 120mm exhaust fan on the back, so I'm wondering if heat buildup from the 7-8 drives in there will be an issue with the radiator sandwiched in a push-pull setup. They run 44-46°C in my current, less ideally ventilated case.

 

Thoughts?

Link to comment

There is no real question about NVMe vs SATA: go for NVMe (but not for a "fake" one that is internally just SATA and occupies a valuable NVMe slot). So quickly forget about the SanDisk.

The Hynix sounds good, but you need to investigate whether it is capable of sustaining high speeds for a long period. Many of the cheaper NVMes start with fast transfers but have a very limited internal cache; once past that limit, write speed breaks down dramatically. This is very annoying if you use it as a cache drive and regularly transfer big files like movies. Spending a buck more here can pay off. I can't tell you which one is really a good buy; I recently checked some and finally took a 2TB Samsung 980 Pro (NOT an EVO or something!!!) for my private data and a 500GB Samsung 970 Evo Plus as the general cache for the array.

I can copy a 100GB+ file onto them without any pause (10Gb/s LAN).

Reading back a file from an NVMe is always very fast, but writing makes the difference. And of course, the ads and shops only tell you the optimum speeds, not how long the drive can keep them up.

(But even the worst NVMe is faster than a SATA SSD.)
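If you want to check a candidate drive yourself before trusting it as cache, a crude sustained-write probe like this sketch will expose the cliff: write far more data than any internal cache can hold and watch the per-chunk speed (the target path is just an example; point it at the drive under test):

```python
# Crude sustained-write probe: writes 20 x 1 GiB of incompressible data
# and prints per-chunk throughput, exposing any SLC-cache cliff.
import os
import time

TARGET = "/mnt/cache/write_test.bin"  # example path, adjust to your drive
buf = os.urandom(1024 * 1024)         # 1 MiB of incompressible data

with open(TARGET, "wb") as f:
    for n in range(20):
        start = time.time()
        for _ in range(1024):         # 1024 x 1 MiB = 1 GiB per chunk
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # force it past the page cache
        print(f"chunk {n}: {1024 / (time.time() - start):.0f} MB/s")

os.remove(TARGET)
```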

 

Your case looks good; the 3 fans in the front will surely be enough for your drives (unless you turn them OFF with the switch on top).

But I don't see any chance for the water cooler in your case: there is no opening at the top where you could put the additional fans for the radiator.

 

The CPUs are up to you. I just dumped both of the ones you mentioned. Too old, too greedy, too slow. One is still used as a backup-mirror server for Unraid, turned off 99% of the time; it just comes up, copies the differences from yesterday, then shuts itself off again. The other one is still sitting here on the table, no idea where it will end up...

 

 

 

Link to comment
40 minutes ago, Fencer said:

I'll likely reuse my Corsair h110 (?) water cooler

I would not recommend using an AiO in a server anyway, because what if the pump dies? Those coolers are most likely not designed for 24/7 usage. I would rather recommend buying a Noctua NH-U12S or NH-U14S or something similar, depending on what you can fit in your case.

Nowadays a good air cooler can keep up with the existing AiOs, they are often quieter, and by going with an air cooler you eliminate a single point of failure: the pump...

Maybe you can sell your AiO for a few bucks...

Link to comment

Hmm... For the AIO, my initial thought was to mount it at the rear exhaust the way it is in my current box. I believe I still have a Coolermaster 212 Evo air cooler that I'll need to find mounting hardware for (snapped a threaded mount being stupid while building a computer for my mom), assuming it fits in the Antec P101. Don't think it will be a problem.

 

If I'm reading the reviews of the SK hynix Gold P31 correctly, it should be a good drive to use for the cache. Link follows:

https://www.anandtech.com/show/16012/the-sk-hynix-gold-p31-ssd-review/5

 

Just checked my stash and I actually have one 16GB stick of DDR4 RAM and two 4GB sticks. Would the single 16GB stick be better, or should I use the two 4GB ones for half the RAM but dual-channel operation? Normally I'd say the doubled RAM in single-channel, but I'm open to suggestions. I'll probably end up playing around with both ways to see how it goes.

 

I guess the only other question remaining is the Realtek GbE network chip on the motherboard. I've been reading that I might be better off getting a dedicated Intel card in the near future. Hopefully there are drivers for my particular board's flavor.

 

Assuming I pull the trigger on the SK hynix SSD, and taking into account the suggestions here, my final build will be something like this:

Antec P101 Silent case

Gigabyte GA-Z170-HD3 motherboard

i7 6700 CPU (cooling to be determined)

1x 16GB DDR4 memory

1x 1TB SK hynix Gold P31 NVMe cache drive w/heatsink

7x 6TB MaxDigitalData hard drives

 

That should be good enough to build a decent Unraid box, I think, with room to add another 6TB drive down the line. Thank you for your input.

Any thoughts on the current build plan are welcome!

 

 

 

 

Link to comment
28 minutes ago, Fencer said:

For the AIO, my initial thought was to mount it at the rear exhaust the way it is in my current box.

Liquid cooling is great for desktop gaming machines that are never left running unattended.

 

Running without supervision, there is too much risk of major part damage if something fails and isn't caught immediately. Air-cooled massive pieces of copper and aluminum heat up relatively slowly when a fan fails, and can often run on just convection and case airflow from other fans for an extended period of time, plenty of time to notify you that something is wrong before major damage happens. When a water loop loses circulation, temperatures can skyrocket in seconds, not to mention what happens if the liquid is no longer contained.

 

Set it and forget it systems need passive cooling whenever possible, and redundant airflow options where necessary. Consumer grade liquid cooling is best left for eye candy gaming machines.

Link to comment
1 hour ago, Fencer said:

If I'm reading the reviews of the SK hynix Gold P31 correctly, it should be a good drive to use for the cache. Link follows:

Reading your linked test, I am not convinced about that drive. They only tested short bursts, the kind you need for usual desktop or laptop usage, and they focused on power consumption. File servers usually stress different things.

 

Also, I agree with JonathanM: liquid cooling 24/7 is not what would make me sleep well either. I usually design my boxes so that even a broken CPU fan does not kill them instantly; the other fans should be able to compensate for a longer period, so the system can raise an alert and wait for the admin to come and fix it. But of course, my fans are hot-pluggable and not in a normal PC midi tower.

 

The RAM question is more difficult to answer. 16GB would surely be better for caching and would even allow a small VM to run. But dual-channel setups are usually faster, because the memory controller can use two 64-bit channels in parallel. I assume both options run at the same clock speed? So you can pick between size and speed... it's up to you.
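For the curious, the rough bandwidth math behind that tradeoff (DDR4-2133 assumed here as a plausible baseline for that platform):

```python
# Each DDR4 channel is 64 bits (8 bytes) wide; dual channel doubles
# the theoretical peak. DDR4-2133 is an assumption for illustration.
transfers_per_sec = 2133e6
bytes_per_transfer = 8
single = transfers_per_sec * bytes_per_transfer / 1e9
print(f"single channel ~{single:.0f} GB/s, dual channel ~{2 * single:.0f} GB/s")
# -> roughly 17 GB/s vs 34 GB/s theoretical peak
```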

 

Link to comment

Thanks all, quick update: all my stuff except the mounting hardware for the Coolermaster 212 has arrived, so I'll use the existing water cooler until that comes in; looks like a few weeks with the backorder. I have my current NAS completely backed up to external USB drives that I'll be loading the new one from.

As I understand it, zeroing out all the drives in the unit is best practice, and I'm guessing that will take a week or so with seven 6TB drives; does that sound about right?
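Back-of-envelope math for my own sanity check (assuming a ~180 MB/s average sequential speed, which may be optimistic since outer tracks are faster than inner):

```python
# Rough timing for one pass over a 6TB drive; a full preclear does
# three passes (pre-read, zero, post-read).
size_bytes = 6e12
avg_speed = 180e6  # bytes/second, assumed
hours_per_pass = size_bytes / avg_speed / 3600
print(f"~{hours_per_pass:.0f} h per pass, ~{3 * hours_per_pass:.0f} h for all three")
# ~9 h per pass, ~28 h total -- so if the drives run in parallel,
# closer to a day or two than a week.
```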

 

I have 32TB of data to transfer once that's done. I've read that there are several different methods to accomplish this, including not enabling the cache drive until the data is transferred, not setting up a parity drive until the data's transferred, and loading the drives individually from my desktop system to take advantage of faster internal transfer speeds, then moving them to my Unraid box and building parity afterwards. It's a bit confusing.

 

Can someone point me to a tutorial on setting this up and accomplishing it? I know that transferring data to my existing ZFS array took a little over a week, and I'm hoping similar times can be achieved here (excepting the time spent zeroing out the disks, of course).

 

 

Link to comment

If you really start from zero, you can save a lot of time:

 

* Don't use parity until all disks are filled (building it ONCE is faster than updating it for every disk you fill).

 

* "zeroing out" is not required if you do not use parity (yet). It will be required later if your array is running and you want to add a new drive. The zeros are needed so that the existing parity does not change if you add a drive. But if there is no parity, zeros don't make sense. The only other thing that make zeros sensibel (a bit) is that if you have no real faith in your drive, zeroing will do an intensive check of it by writing every sector. Bad spots may be detected and hopefully the drive will use replacement sectors for them. But you need to decide if you want to spend all this time for just zeros. I would not do that to new drives. BTW, I think (dunno for sure) that UNRAID can handle your drives in parallel. So it won't take 7* the time for 7 drives, maybe a multiplyer of 2 or 3 is more realistic. BTW2: it make absolutely no sense to zero out the parity drive before. During parity generation it will be rewritten anyway, so the possible sector check will happen already.

 

* "leaving out the cache drive" is not a bad option, but optional. If you copy over your 32TB, the cache will be filled up first and further data will directly go to the array until you free him again with the mover. So there is no real time saving if using a cache during the initial phase. Turn it off, transfer the data, then turn it (and the parity) on.

 

* How you move the old data over is your choice. You could fiddle around, attach external disks to Unraid, and copy locally; this would surely give a speed improvement compared to a 1GbE LAN transfer. For me, with my 10GbE LAN, it makes no sense because the LAN is faster than any normal hard drive, so I would just copy the files over from my Windows box and save myself a lot of mounting and unmounting...
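And regarding the sector-check value of zeroing from above, here is a minimal sketch of the "post-read" idea: verify the device really reads back as all zeros. The device path is an example; this reads the raw disk (root access needed), so be very sure of the name:

```python
# Read a raw device in 4 MiB chunks and confirm every byte is zero
# (the verification half of a preclear-style zeroing).
import sys

BLOCK = 4 * 1024 * 1024

def all_zero(device_path):
    with open(device_path, "rb") as dev:
        while True:
            chunk = dev.read(BLOCK)
            if not chunk:
                return True
            if chunk.strip(b"\x00"):  # strip() is empty iff all zeros
                return False

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. /dev/sdX
    print("all zeros" if all_zero(path) else "NOT zeroed")
```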

 

Link to comment

Thank you for that, Michael! I'm planning on zeroing out the drives to double-check them for errors, because in my existing, definitely-not-optimized TrueNAS box, within a month of building a pool I consistently got a mysterious data corruption error saying my pool was degraded. Extended SMART checks on each disk came up clean, but more recently I've gotten 3 airflow temperature warnings (2 of those drives were not well-ventilated, since corrected), and querying the SMART status showed one of my disks had 4 errors at once a few months ago that haven't repeated. I think the disks are good, but I'd feel better with a fresh read, and I can replace the one drive with the previous errors before proceeding if need be. They're still under warranty, so if one is bad I can always get started with 6 drives and add it later.

 

I'll go dig for an Unraid setup tutorial and educate myself a bit before proceeding. I haven't had time to read up on it all yet, and it's pretty different from what I'm used to. My usual approach has been to create one share that covers the whole pool, map it to a drive letter, and create folders in there for my various stuff: movies, TV shows, other data, etc.

The SK hynix Gold NVMe arrived and will go in the system. If I'm unhappy with it, I'll shell out the bucks when I can and replace it with something better suited. I have a laptop that would love some bigger NVMe love, lol.

 

Once I've copied a season or two of the few series my family can't seem to live without over to my media PC, I'll build this box and get started. I'm a little concerned about the Realtek LAN chip on this motherboard, as I've read that some folks are having issues with it. Hopefully the drivers will be there and it'll work well enough. If not, I'll pick up an Intel card; at some point I know I'll end up making the move to 10Gb Ethernet, but that's pie in the sky for me at the moment. Moving from SoCal to Vegas in a few months, and I can't justify the cost yet to my lovely lady...

 

 

Link to comment
47 minutes ago, Fencer said:

My usual approach has been to create one share that covers the whole pool, map it to a drive letter, and create folders in there for my various stuff: movies, TV shows, other data, etc.

That's what Unraid does without even asking for your permission 🙂

The whole array is your pool. Subdirectories in it are automatically considered "shares".

 

But you do not need to think about WHERE the files of a share are going. Just create a folder (or use the new share GUI), assign LAN permissions to it, connect from your Windows PC to that share, and copy over the files for it. Unraid will automatically split them among the drives as needed. (You CAN control which drives are used and which are not, but that is normally not needed and can also happen later on, once you get more acquainted and want to optimize. For the beginning, just fire & forget.)

 

 

Link to comment

Well, it's been... interesting so far, with a ways to go.

The good: 

- Finally got it booting; that took some doing.

- The SK Hynix Gold P31 I got for $99 seems to be doing well, and it reviewed well for sustained writes per this review: https://www.techpowerup.com/review/sk-hynix-gold-p31-1-tb/17.html (I think I'll pick up another for my main laptop.)

- I definitely like the interface and the hardware info displayed. I'm guessing I'll like it even more when the preclearing is done and I've created an array to play with.

 

 

The not-so-good:

- The 32GB Samsung FIT I thought was a recommended drive was incompatible, so I ended up going with a 16GB SanDisk Cruzer.

- The Gigabyte mobo I thought had 9 usable SATA ports only has 6; the other 3 are SATA Express and looked to be a pain to work with. I picked up a Supermicro AOC-SASLP-MV8 8-channel/300MB/s SAS/SATA RAID adapter card that the site said was compatible, so I'll either move all the drives to that or go half and half or something. Thoughts on that would be appreciated.

- And the big one: 3 of my 6-month-old drives are showing SMART errors (reallocated sector counts), and even with excellent airflow right across them, 4 drives sit at 45-47°C and the other two at a steady 39°C. They're rebranded Seagate Constellation drives (2 are Exos drives), and as I understand it, the Constellation ones run a bit hot.

 

So if these drives all pass the preclear with the pre- and post-checks, should they be good to go, or should I be hitting up my vendor for some warranty replacements? I'm a little wary of seeing some thumbs-downs starting out (see pic).

 

[Screenshot: unraid drives.jpg, drive list showing temperatures and SMART warnings]

Link to comment
19 minutes ago, Fencer said:

3 of my 6-month-old drives are showing SMART errors (reallocated sector counts)

 

20 minutes ago, Fencer said:

should I be hitting up my vendor for some warranty replacements?

 

Go for replacement on the reallocated-sector errors; don't waste time on a disk clear.

Link to comment
20 minutes ago, Fencer said:

The Gigabyte mobo I thought had 9 usable SATA ports only has 6; the other 3 are SATA Express and looked to be a pain to work with. I picked up a Supermicro AOC-SASLP-MV8 8-channel/300MB/s SAS/SATA RAID adapter card that the site said was compatible, so I'll either move all the drives to that or go half and half or something. Thoughts on that would be appreciated.

Personally I would avoid those SAS cards... mainly because most of them are already outdated and use antique PCIe speeds ("waste of a slot").

With my new server I kicked them out and went with cheap 4x SATA controllers which use x1 slots only (those are usually free and can't be used for serious things anyway). Keep an eye open, though: there are also cards which offer more ports but use multiplexer chips that slow down transfers immensely. Use the 4 or 6 ports from your mobo and up to 4 more ports from the card in the slot.

UNRAID needs "dumb" cards, so your fancy SAS controller has to be flashed down to dump "IT" mode before it can be used at all. You lose all the nifty features that you have payed for... (this is not a bad thing)

 

28 minutes ago, Fencer said:

And the big one: 3 of my 6-month-old drives are showing SMART errors (reallocated sector counts), and even with excellent airflow right across them, 4 drives sit at 45-47°C and the other two at a steady 39°C. They're rebranded Seagate Constellation drives (2 are Exos drives), and as I understand it, the Constellation ones run a bit hot.

Yeah, that doesn't sound too healthy in the long run. But you need to look into the specs of your drives for the suggested operational temperature. You can then adjust the warning/error levels within Unraid. For instance, my NVMes never go below 40°C, but they are allowed up to 90°C, so without adjustments they would always signal "too hot".

It also helps to find out which drives get hotter than the others and then rearrange the drives in the cages so that a hot drive is surrounded by cool ones. Many hot drives packed tightly together will add heat pain to their neighbors.

I only have 6 drives spread across 16 cages currently, so there are plenty of free slots between them,

like this: [Screenshot: six drives spaced across the cages with empty slots between them]

Why are yours so hot even being off-line???

 

But I would be more concerned about the reallocated blocks. Keep a tight eye on them; if they keep rising, something is utterly wrong and the drive will fail soon.
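A small sketch of how one might watch that attribute from a script, assuming smartmontools is installed (the device path is an example, and the raw value is assumed to be a plain integer, as it usually is for this attribute):

```python
# Pull Reallocated_Sector_Ct (SMART attribute 5) out of smartctl's
# attribute table; a growing raw value is the red flag to watch.
import subprocess

def reallocated_sectors(device):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])  # RAW_VALUE is the last column
    return None  # attribute not reported (e.g. NVMe)

print(reallocated_sectors("/dev/sdb"))  # example device
```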

 

Link to comment
7 hours ago, JonathanM said:

Where did you see it was recommended? It was, many years ago for version 5 and earlier, but now it can cause issues.

 

Ugh, wonderful. Nothing like clicking on what turns out to be an old link without realizing it... All I need is a 2-port card to stick my last 2 drives on; I only picked up the Supermicro because I found a refurb for $16. Looks like I'll build an array with my existing good drives and add to it when I get the others replaced and a decent expansion card. Live and learn...

 

@Michael: Those temps are from a few hours into a preclear, so the drives were pretty active for a while when I took that screenshot. They're still holding steady at 45-46°C, 55% through the zeroing pass. Higher than I'd like, but from what I've read, pretty normal for those particular drives. Interesting that both of the cooler-running drives, the Exos ones, show the reallocation errors. This is turning into more of a project than I'd hoped, but I'll keep plugging...

Link to comment
7 hours ago, Squid said:

Interesting. The SAS2LP is recommended NOT to be used, but the SASLP is listed as being OK.

Not really surprising if you look at the stats. The "2" device is the older one; it uses 8 lanes of PCIe 2.0.

The "no number" one is the followup type, using 4 lanes of PCIe 3.0 already.

Basically they offer the same total transfer speed, but most consumer boards do not have any x8 slots (they LOOK like they do, but only 4 lanes are wired, so the "2" device crawls along at half speed and cannot handle all 8 drives concurrently).

 

But he has an old server board that should have a real x8 slot, so the "do NOT use" warning does not apply to him. Both controllers will behave equally well.

 

For modern boards, one should avoid any PCIe 2.0 cards if possible. In the days of PCIe 5.0, the speed step-downs are getting complicated, and I guess the backward compatibility will be dropped someday... But it is a "waste of lanes" already today. Many companies have already brought out new versions of their cards (SAS/SATA controllers, network cards, and so on) with fewer lanes and faster bus interfaces.
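To put numbers on the lane math (approximate usable per-lane throughput after encoding overhead):

```python
# Approximate per-lane PCIe throughput and what a x4 link yields;
# eight ~200 MB/s spinners need roughly 1.6 GB/s in aggregate.
per_lane_mb_s = {"1.0": 250, "2.0": 500, "3.0": 985, "4.0": 1970}
for gen, mb_s in per_lane_mb_s.items():
    print(f"PCIe {gen} x4 ~ {4 * mb_s / 1000:.1f} GB/s")
# 2.0 x4 (~2.0 GB/s) is marginal for 8 fast drives; 3.0 x4 has headroom.
```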

 

Link to comment
40 minutes ago, Squid said:

You got that backwards.

yes and no 😁

Depends on what you are looking at. If you go by drive speed, you are right; if you go by bus speed and version, I am correct. But I guess there are also 6G versions with 4 lanes of PCIe 3.0 out there in the wild...

 

Anyway, he won't need either of them.

 

Link to comment
On 4/16/2022 at 3:50 AM, Fencer said:

- The 32GB Samsung FIT I thought was a recommended drive was incompatible, so I ended up going with a 16GB SanDisk Cruzer.

Just want to throw out that I use the 32GB FIT and haven't had any issues with it; might be worth circling back to once you get all your other issues sorted out:

[Screenshot: 32GB Samsung FIT shown working as the boot drive]

Link to comment
