How I fell in love with unRAID by Grumpy Reviewer


DesertSweeper


Speaking of using the cache or not, I'm currently moving files off my youngest drive to convert it to a hot spare, since I now have an excess of space. Copying from one array drive to another was slow to say the least; I was lucky to hit 25 MB/sec. In my case, using the cache has sped things up considerably. I exclude drive 5 (the one I'm clearing) in the share's drive settings, fill up the 1 TB cache drive, then invoke the mover with fast writes enabled to move the files back to the array, spread out over the other drives. Run fstrim on the cache, rinse, repeat until drive 5 is clear. Both the copy to the cache and the move back to the array happen at near the speed of the drives.
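The clear-drive-via-cache loop described here can be sketched as a toy Python simulation on temp directories; the layout, file sizes, and "mover" flush below are stand-ins for illustration, not unRAID's actual tooling:

```python
# Toy sketch of the staged-move loop: stage files onto a "cache" until it
# hits a size limit, flush to the "array", repeat. Real unRAID paths and
# the real mover are simulated with plain directory moves.
import shutil, tempfile
from pathlib import Path

def stage_and_flush(source: Path, cache: Path, array: Path, cache_limit: int):
    """Move files source -> cache until cache_limit bytes, flushing cache -> array as needed."""
    staged = 0
    for f in sorted(source.iterdir()):
        size = f.stat().st_size
        if staged + size > cache_limit:      # cache "full": flush to the array
            for c in cache.iterdir():
                shutil.move(str(c), array / c.name)
            staged = 0
        shutil.move(str(f), cache / f.name)  # stage onto the cache
        staged += size
    for c in cache.iterdir():                # final flush (the "mover" run)
        shutil.move(str(c), array / c.name)

# demo on temp dirs standing in for /mnt/disk5, /mnt/cache, and the array
root = Path(tempfile.mkdtemp())
src, cache, arr = root / "disk5", root / "cache", root / "array"
for d in (src, cache, arr):
    d.mkdir()
for i in range(4):
    (src / f"clip{i}.mp4").write_bytes(b"x" * 10)
stage_and_flush(src, cache, arr, cache_limit=20)
print(sorted(p.name for p in arr.iterdir()))  # all four files end up on the "array"
```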

Link to comment

Regarding data dumping onto the new system: shares let you fill up a drive before moving on to the next, but you can configure them to use the drive with the most free space. If you're copying files that are best kept together when drives spin down, such as TV seasons, check out the split level settings. But note that using split level on really fat directories can cause an out-of-space error if you force together a directory that exceeds the capacity of the drive it's assigned to.

Link to comment
8 minutes ago, jbartlett said:

Regarding data dumping onto the new system: shares let you fill up a drive before moving on to the next, but you can configure them to use the drive with the most free space. If you're copying files that are best kept together when drives spin down, such as TV seasons, check out the split level settings. But note that using split level on really fat directories can cause an out-of-space error if you force together a directory that exceeds the capacity of the drive it's assigned to.

On the other hand, if you leave Allocation at the default High-water, and let it split anything, then most of the time things that belong together have a pretty good chance of winding up together anyway simply because it doesn't switch disks as often.

 

Split Level has precedence over Allocation Method when deciding which disk to choose, and if you are allowing it to split, it will not keep trying to put things on the same disk when it gets too full (as defined by Minimum Free).

 

Also, if you go with Most Free, then when free space is evened out it will be switching a lot just to keep them evened out.

 

Lots of ways to set these up, it depends on what is most important to your use.
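As a rough illustration of the trade-off above, here's a toy model (not unRAID's actual allocator) showing how "most free" ping-pongs between disks once free space evens out, while a fill-one-disk-first style stays put until a minimum-free threshold is hit:

```python
# Toy model (not unRAID's real allocation code) contrasting two styles:
# "most free" re-picks the emptiest disk on every write, so once disks
# even out it alternates constantly; a fill-first style keeps writing to
# the same disk until it drops below a minimum-free threshold.

def most_free(disks):
    return max(disks, key=disks.get)

def fill_first(disks, minimum_free):
    for name, free in disks.items():
        if free > minimum_free:
            return name
    raise RuntimeError("no disk with enough free space")

def simulate(chooser, writes, size):
    disks = {"disk1": 100, "disk2": 100}
    order = []
    for _ in range(writes):
        d = chooser(disks)
        disks[d] -= size
        order.append(d)
    return order

print(simulate(most_free, 6, 10))                 # alternates disk1/disk2
print(simulate(lambda d: fill_first(d, 20), 6, 10))  # stays on disk1
```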

Link to comment

I just thought of something about existing files and the cache drive: if file abc.txt already exists in the array (not on the cache) and you copy a new version of abc.txt over it, the new version will be written to the same drive where the existing abc.txt resides and will not use the cache drive. In situations like this, using a tool such as Beyond Compare to delete the NAS version before copying lets the new files be written to the cache drive.

 

It strikes me as highly unlikely to happen, but if you have a 1-byte file and then replace it with a much larger version that exceeds that drive's available space, the copy won't succeed.
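The delete-then-copy workaround can be sketched like this, demoed on temp files; on a real share the point is that unlinking the old copy first makes the new copy a fresh write that can land on the cache:

```python
# Sketch of "delete first, then copy" from the post, demoed on temp files.
# On a real unRAID share, removing the old copy first means the new write
# is a fresh allocation and can go to the cache drive instead of the disk
# that holds the old version.
import shutil, tempfile
from pathlib import Path

def replace_as_fresh_write(src: Path, dest: Path) -> None:
    if dest.exists():
        dest.unlink()        # drop the old copy so the new one is a new file
    shutil.copy2(src, dest)  # fresh write, not an in-place overwrite

root = Path(tempfile.mkdtemp())
dest = root / "abc.txt"
dest.write_text("old contents")
src = root / "abc_new.txt"
src.write_text("new, much larger contents")
replace_as_fresh_write(src, dest)
print(dest.read_text())  # new, much larger contents
```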

Link to comment

One argument for an allocation method that spreads files out (such as Most Free) is how much effort it would take to replace the files in the event of multiple simultaneous drive failures. I convert all the movie & TV discs I buy to HEVC and store them on my NAS for Plex. Spreading them out minimizes the work involved in recreating them in such a disaster scenario.

 

Spreading them out also allows the faster areas of the drives to be utilized before the slower areas.

Link to comment

If you are using the File Integrity plugin to create checksums of all your files and you export the results to the flash drive, the .hash files it creates can also provide a list of exactly which files were on each array disk in the event of any serious problems.
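For example, a small script could pull the file list back out of an exported checksum file. The two-column "checksum, then path" line format below is an assumption about what the plugin exports (md5sum-style), so check it against your actual .hash files:

```python
# Hypothetical sketch: recover a per-disk file listing from an exported
# checksum file. The "<checksum>  <path>" md5sum-style line format is an
# assumption about the File Integrity plugin's export; adjust to match
# your real .hash files.
import tempfile
from pathlib import Path

def files_in_export(hash_file: Path) -> list:
    paths = []
    for line in hash_file.read_text().splitlines():
        if line.strip():
            _, path = line.split(None, 1)  # checksum first, then the file path
            paths.append(path.strip())
    return paths

# demo with a made-up export file
demo = Path(tempfile.mkdtemp()) / "disk1.hash"
demo.write_text("d41d8cd98f00b204  Movies/Film (2019)/film.mkv\n"
                "9e107d9d372bb682  TV/Show/S01E01.mkv\n")
print(files_in_export(demo))
```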

Link to comment
On 8/3/2019 at 6:02 AM, DesertSweeper said:

back to NAS4FREE for me.

Since it seems you may be sticking around after all, would you mind changing the title of your first post in the topic to something less dramatic? Maybe, "Why I decided to give Unraid a try" or something like that.

Link to comment

However you distribute (or don't) the files is just different ways of playing the odds. Without more specifics I don't think you can say which is better.

 

For example, if you have a specific case where different users might access different sets of files simultaneously, then it might make sense to put one user's files on one disk and the other user's files on another disk, so they won't be competing for the same disk to access their files.

 

If you put all your baby pictures on 1 disk of 5, with no backup and no parity, then all other things equal you have only a 20% chance of losing 100% of your baby pictures. If you spread them out over all 5 disks, then the chance becomes 100% that you will lose 20% of them.
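A quick back-of-the-envelope check of those odds, assuming exactly one of the five disks fails (each equally likely) with no parity or backup: the expected fraction lost is the same either way, and only the shape of the outcome differs.

```python
# Back-of-the-envelope check of the odds above: one disk of five fails,
# no parity, no backup. Expected loss is identical either way; what
# differs is all-or-nothing vs. guaranteed-partial.
n_disks = 5
p_fail = 1 / n_disks            # any given disk is the one that dies

# all pictures on one disk: 20% chance of losing everything
expected_concentrated = p_fail * 1.0

# pictures spread evenly: certain to lose exactly 1/5 of them
expected_spread = 1.0 * (1 / n_disks)

print(expected_concentrated, expected_spread)  # 0.2 0.2
```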

 

Of course, in actuality, you will have parity and backups, and actual file loss isn't that frequent. What loss there is seems to be user error in many cases.

Link to comment
10 hours ago, jonathanm said:

Since it seems you may be sticking around after all, would you mind changing the title of your first post in the topic to something less dramatic? Maybe, "Why I decided to give Unraid a try" or something like that.

I did actually try to do that, but there does not seem to be a way of doing it.

Link to comment

So the main issue I have run into now with unRaid has nothing to do with unRaid... As I said when I started, my journey began with a failed NAS4FREE server that had run flawlessly for many, many years. And it only died because one day I copied a huge chunk of 4K GoPro footage to it without realising that the 24TB of RAID-5 space was maxed out. It fell over and corrupted the array. It took me a few weeks to recover all the data, which I did, and then I re-formatted the little box with a ZFS array, which is apparently more robust.

That created a new problem: the HP Microserver N40L (AMD 2.2GHz Turion) did not have enough juice to handle ZFS. Despite having 16GB of ECC RAM, it was maxed out dealing with ZFS issues. I could either go RAID-5 again or look at an alternative host. So I dug an old HP Proliant ML110-G7 out of my store that had been decommissioned and chucked out by a client. It has a decent quad-core Xeon (E3-1220) and 24GB of ECC RAM. That would surely have enough power to deal with ZFS on NAS4FREE. I transferred the five 6TB WD Red drives and added a sixth since there was enough space in the mini-tower.

And then I visited a friend who uses unRaid, and that got me thinking: it makes so much sense. Which is where my grumpy opening statement came from. Misguidedly, yes.

But all that aside, this ML110-G7 is a noisy power hog. I do not need any VMs or anything else fancy; I have a small and powerful ESXi hypervisor box that does that for me. I just want a low-powered, high-capacity NAS that sucks up my GoPro footage quickly (hence the cache concept).

If I chuck it all back into the baby N40L that hosted the NAS4FREE all these years, will it perform ok for pure SMB-share on that low-end CPU?

Edited by DesertSweeper
typo
Link to comment
1 hour ago, DesertSweeper said:

If I chuck it all back into the baby N40L that hosted the NAS4FREE all these years, will it perform ok for pure SMB-share on that low-end CPU?

I built a storage-only (with some plugins) unraid server using a 1.2GHz Atom CPU and it handled it with ease.

Link to comment
20 minutes ago, jbartlett said:

I built a storage-only (with some plugins) unraid server using a 1.2GHz Atom CPU and it handled it with ease.

Thank you JB. Was that a largish array like mine of 6 x 6TB drives? Does it make any difference like it does with ZFS? I guess it's just a question of parity writing...

Link to comment
57 minutes ago, DesertSweeper said:

Thank you JB. Was that a largish array like mine of 6 x 6TB drives? Does it make any difference like it does with ZFS? I guess it's just a question of parity writing...

Whoops, it was a 1.86GHz CPU. Eh, still low powered. 5x4TB drives (4 data, 1 parity). XFS I believe.

Link to comment

Parity2 calculation seems to require a little more horsepower than parity. I don't have any personal experience since I have been running on an i5 since parity2 was introduced. Parity2 probably isn't worth doing until your drive count gets up there, as long as you are careful and diligent.

 

Careful: double-check all connections when you are mucking about in the case.

 

Diligent: make sure you address any problem as soon as it occurs so you don't wind up with multiple problems. Set up Notifications to alert you by email or another agent as soon as a problem is detected.

Link to comment
