
Best Practices Questions


orangestorm87

Recommended Posts

So I have been thinking about an Unraid build for a long time and am getting ready to take the plunge.  But I want to make sure I avoid as many bad choices/steps as I can before I start.

 

Primary purpose for Unraid would be:

 

1) Backup of files (Photos, Movies, etc)

2) Access to files remotely

3) Stream Media to networked Devices

 

Now to the questions

 

1) When putting in new hard drives, how do most users typically test them?  Simply do a pre-clear?

2) When putting in new memory, how do most users typically test for errors/reliability?  Is there any plugin/docker for this test?

3) When installing/using dockers I sometimes see different repositories offering the same docker app (like Plex).  Is there a way to know which repository to use for which docker?

4) What are some basic control and diagnostics plugins/dockers that every Unraid system should have installed?  I have looked at the pre-clear plugin, but I am sure there are others for backup, etc. that I am missing.

5) Is there a guide for the best initial setup of Unraid?  Such as best practices for what shares to create, creating users, allowing users to access only certain shares, etc.?  I am just afraid I will create a poorly designed file structure that will hamper me in the future.

6) Any other general recommendations or gotchas to look out for?

 

 

Thanks in advance for any help answering my questions, and I don't expect any one person to be able to answer them all.

 

Link to comment

My take on a few of these:

 

1) Yes, pre-clear is the way to go; I run 3 passes before adding a disk to the array.

2) Memtest is built into Unraid (catch the boot menu when you first start up). Typically running it for 24 hours or so is plenty; you should have 0 errors.

 

Hope this helps.

Link to comment

Welcome! Here are some short answers to your short questions.

 

1) Preclear

2) Memtest is selected from the boot menu before unRAID boots.

3) Read the support threads for each. If there are any functional differences they will usually be spelled out in the first post. It is often possible to change from one to another without losing anything.

4) Preclear, Community Applications, Unassigned Devices, choose what you want from the Dynamix plugins. Browse the 6.1 plugins thread to see what else seems interesting. As for dockers, mostly depends on what you want to do.

5) Some parts of the wiki are a little out-of-date but most of the NAS functionality hasn't changed all that much over the years so have a look there.

6) Main recommendation is to use the forums. Lots of help here.

Link to comment

Building on the previous replies...

3) For Plex specifically, see https://lime-technology.com/forum/index.php?topic=41562.0 

4) I'd also look at the Recycle Bin and Open Files plugins.

5) Create separate shares for Movies, TV, and Music and then organize the files according to Plex's requirements: https://support.plex.tv/hc/en-us/categories/200028098-Media-Preparation .  Another option would be to create a single Media share with subdirs for each of the above, but that will give you less control over what files go on what disks.
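To make the Plex advice concrete, here is a sketch of the kind of folder layout Plex's media preparation guide describes (the titles below are placeholder examples, not requirements):

```text
Movies/
  Pulp Fiction (1994)/
    Pulp Fiction (1994).mkv
TV/
  Breaking Bad/
    Season 01/
      Breaking Bad - s01e01.mkv
Music/
  Artist Name/
    Album Name/
      01 - Track Name.mp3
```

With separate shares, each of the top-level folders (Movies, TV, Music) would be its own user share; with a single Media share, they would be subdirectories under it.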

Link to comment

When you say:

2) Access to files remotely

 

Do you mean from outside your network? If so, honestly it's something I wouldn't recommend for security reasons. Your data is too valuable to dangle out there and risk somebody else tinkering with it.

Meant to address this myself.

 

You will need a VPN. Many newer routers have this built in. Another possibility is TeamViewer running on another machine on your network. I use both depending on what I want to do.

Link to comment

OK, everything should be coming in today so I can build it tonight.

 

I have looked at the Dynamix plugins and I don't really see one for preclear.

 

Is preclear built into v6.1, or is there some other plugin I should use?  Since I am not able to find said plugin, does anybody have a link to it?

 

Preclear is not built in. Link

Link to comment

So I want to say thank you all so far, you have really helped.

 

My RAM passed memtest and my two 4TB drives are now pre-clearing.

 

While this is going on I thought of another question.

 

Since this is a brand new array and I want to start transferring my data from all my random locations, the amount to be transferred will be larger than my 250GB cache drive.  Is there a way to bypass the cache drive when initially writing files?

 

Doing a search as suggested only turns up writing directly to the disk share and not the user share.  Is it possible to simply unmount my cache disk until my initial data is copied?  Or is there some better solution I don't know about?

 

Thanks again!

Link to comment


Each user share can be set to use or not use (or only use) cache. Any share setting you make only applies when writing to the share. You can set a share to not use cache and later set it to use cache.
Link to comment

It's worth noting that using the cache drive to cache writes to the array isn't something that everyone does.  Writes directly to the array are reasonably fast and immediately protected by parity.  Writes to the cache drive are faster, but only protected after they are moved to the array overnight, or if a BTRFS cache pool is used.

 

The cache drive is very commonly used, though.  It's the location you want to install your Docker image and various Docker apps.  Whether you also use it to cache writes to the array is up to you - some folks do, some don't.  I started out that way and eventually decided that write caching wasn't really needed for my purposes.

Link to comment


I don't cache user share writes either, just use my cache pool for dockers. Almost all writes to my array are unattended and automatic, such as scheduled backups from other computers or downloads by my dockers, so I am not waiting anyway.
Link to comment

It is also worth noting that even with shares enabled to use the cache, once the amount of free space on the cache drive falls below the Min Free Space setting you set for the cache, subsequent files bypass the cache anyway.  In such a scenario you want the Min Free Space setting to be at least as large as the largest file you are going to copy.

Link to comment


 

If you are planning on moving a lot of data to UnRAID, and you are writing directly to the array, it will be a slower transfer than normal as parity is continuously being written/updated. Many people start the array without the parity disk assigned, copy the data across at normal speeds, and then enable the parity disk and do a parity build and check.

 

It's a matter of preference which path you take, but since the data is not currently protected wherever it's held, most people don't see it as an additional risk to copy it to UnRAID unprotected to get it there quickly, and then turn on protection. With parity enabled it's roughly a 50% hit in transfer speed, I think.

Link to comment

I would have to agree with the statement that I don't mind my data being "unprotected" on my initial load.  So if/when my pre-clear is done and passed, I will just mount the data disk and then start transferring files.

 

However, I still have a question about CPU usage.  I have an Intel i3-4170.  During the zeroing phase of the preclear my CPU usage was between 50-80%.  Now during the post-read it's at about 50%.

 

The reason I bring it up is that whenever I log onto my Unraid server through the webGUI, it is initially very slow to load pages, or does not load images until I go to another menu screen and come back, though eventually it becomes more responsive.

 

Is this an effect of the pre-clear, or should I be worried about my CPU?

Link to comment

Yeah, it looks like it was a problem with Haswell processors.  I needed to add intel_pstate=disable to my syslinux config file.  Now the CPU seems to be behaving correctly.
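For anyone else hitting this: the parameter goes on the append line of the boot entry in syslinux.cfg on the flash drive. The exact stock file varies by Unraid version, but the relevant entry looks roughly like this (only the intel_pstate parameter is the addition; the rest is the existing entry):

```text
label unRAID OS
  menu default
  kernel /bzimage
  append intel_pstate=disable initrd=/bzroot
```

A reboot is needed for the kernel parameter to take effect.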

 

But now my next question.

 

Finally done with setting up my server and transferring my first files over.

 

I am copying over wireless, but only getting like 17mb/s.  I will have to do some more testing on a wired connection, but when people are posting writes in the 50-100 range, I start to get suspicious of what I did wrong  :D

Link to comment


You need to be clear whether you are talking about Mbps (bits) or MBps (bytes).  You also need to mention what type of wireless connection you have.  I would think that 17 Mbps is about right for older WiFi technology, while 17 MBps is not unreasonable for newer WiFi technology; anything faster than that would be a bonus.  I think you will find that most users are connected via a wired Gigabit connection to get higher speeds.
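The bits/bytes distinction is a factor of 8, which is why the numbers quoted in different threads look so different. A quick sketch of the arithmetic:

```python
# Convert between link speed (megabits/s) and file-copy speed (megabytes/s).
# There are 8 bits in a byte, so the two units differ by a factor of 8.

def mbps_to_MBps(mbps):
    """Megabits per second -> megabytes per second."""
    return mbps / 8

def MBps_to_mbps(MBps):
    """Megabytes per second -> megabits per second."""
    return MBps * 8

# A file copy showing 17 MB/s is about 136 Mbps of link throughput:
print(MBps_to_mbps(17))    # 136
# Gigabit Ethernet (1000 Mbps) tops out around 125 MB/s before overhead,
# which is why wired users report writes in the 50-100 MB/s range:
print(mbps_to_MBps(1000))  # 125.0
```

(These are raw conversions; real-world throughput will be lower due to protocol overhead.)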
Link to comment

Yes I understand, wireless will be slower than wired.

 

But onto my next and hopefully final questions.

 

1) I have all of my data on my array now.  However, I did this without the parity disk.  I have now stopped the array and added the parity disk, and it has immediately started a parity check.  The only problem is, this disk was never "formatted" after the preclear, so on the Main dashboard it shows no file system or used vs. free statistics.  Should I allow this check to continue?

 

2) I am trying to set up users to allow access to certain folders.  When I apply SMB security settings of Hidden and Private (along with picking a user to have read/write access) and then attempt to go to that folder location, I just keep getting the login popup.  I have checked the credentials and made new users, but it still won't let me log into the folder.

Link to comment

OK, that makes sense.

 

For question two, looking at the Unraid v6 manual, it doesn't seem to mention the "Private" setting.

 

Is there another way I am supposed to set up my share so that a user has to log in before being given access to a folder?

Whether or not a user has to login depends on whether or not that user has a password, and whether or not the user is already logged in. Windows only allows one login to another computer at a time. See here.
Link to comment

Archived

This topic is now archived and is closed to further replies.
