harmser


Posts posted by harmser

  1. Quick question. I've read through your post a few times and through the forum, but in actual use I seem to be totally misunderstanding something.

     

    I have 2 unRAID servers and am trying to move content from one to the other. I want to have the following structure with user shares on the new unRAID. The new unRAID currently has 3 3TB drives, and all user shares are set to high-water.

     

    Movies->Movie_Name->Movie Files  (with all the movie files remaining on the same drive and the Movie_Name folders residing on any drive)

    With this structure and the blog post, I was using split level 1, but after starting the copy, disk1 went well beyond 50% usage and didn't start placing files on disk2 until I changed the split level to 2. It then copied new movies to disk2; when I switched the split level back to 1, it again started copying files only to disk1. What should the setting be, and how am I getting this so wrong?

     

    My next share will be:

    TV_Shows->Show_Title->Season_#->Season Files (again, I want to split on Season_#, keeping all files for a season together), so I thought this would be split level 2. Is this correct?

     

    Is there a tool/website that will determine the proper split level based on custom inputs? Could there be?

     

    Thanks!

     

    From what you shared above, split-level 1 is correct.  With the share name being 'Movies' and the next levels being 'Movie_Name' and the movie files, this will ensure that the movie files stay on the same disk as their 'Movie_Name' folder.

     

    A couple of questions for you:

    1. How big is each movie directory?

    2. During the copy process, when did you decide it was doing the wrong thing?  (i.e., can you quantify "well beyond 50%"?)

     

    If your first disk was filled to 1.449TB, it would have copied the next file before switching to the next disk.  In other words, the high-water mark (in this case 1.5TB) doesn't trigger until it adds a file that goes above it.  That next movie directory could have been a large one (e.g. 20GB).
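
    If it helps to picture it, here's a minimal sketch (in Python, and definitely not unRAID's actual allocation code) of the high-water behavior described above, ignoring split levels and other settings; the numbers are just the ones from your example:

    # Minimal sketch of high-water allocation as described above (not unRAID's
    # actual code): a disk keeps receiving files while it is still below the
    # current high-water mark, so the file that finally crosses the mark still
    # lands on that disk before allocation moves on to the next one.
    def pick_disk(disk_usage, file_size, high_water):
        for disk, used in disk_usage.items():
            if used < high_water:                  # mark not reached yet on this disk
                disk_usage[disk] = used + file_size
                return disk                        # even the file that crosses the mark lands here
        return None                                # all disks at or over the mark

    usage = {"disk1": 1.449e12, "disk2": 0.0}          # bytes used per disk
    print(pick_disk(usage, 20e9, high_water=1.5e12))   # -> disk1 (a 20GB write still goes to disk1)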

     

    If you don't think this is what happened, please let me know and we can look into this further.

     

    On your TV show hierarchy, split-level 2 is correct.
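
    To make the two cases concrete, here's a small illustrative sketch (my own simplification in Python, not unRAID code; the file names are just placeholders) of which directory level each split level keeps together on a single disk:

    # Illustrative only: the split level is how many directory levels below the
    # share may be spread across disks; everything deeper stays on the same disk
    # as its parent directory.
    def stays_together(path, split_level):
        parts = path.strip("/").split("/")          # [share, level1, level2, ...]
        return "/".join(parts[:split_level + 1])    # share name + first `split_level` dirs

    # Movies share, split level 1: each Movie_Name folder and its files stay on one disk
    print(stays_together("Movies/Movie_Name/movie.mkv", 1))                 # Movies/Movie_Name

    # TV_Shows share, split level 2: each Season_# folder and its files stay on one disk
    print(stays_together("TV_Shows/Show_Title/Season_1/episode01.mkv", 2))  # TV_Shows/Show_Title/Season_1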

     

    Thanks,

    TomH

  2. All very good ideas.  The two major areas are:

    1. How can I go one level further beyond a "thank you" to someone who has helped me out or provide recognition for a great post; and

    2. How can we contribute to a developer (unRAID customer) for a plugin we like.

     

    We have discussed this internally in the past and will discuss more soon, but, as you can imagine, the focus right now is on getting the next releases out.  Meanwhile, it's great to see your ideas, including the pros and cons.

     

    Thanks,

    TomH

  3. If you don't know, we'll share a command line here shortly to help you find out.

    To find out if your server is 64-bit capable, do the following:

    1. Log into your unRAID console.

    2. Type "grep --color lm /proc/cpuinfo" [from WeeboTech's sig]

    3. Look for "lm" in the flags (which indicates it is 64-bit capable).
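
    If you'd rather not eyeball the output, a small script can do the same check (just an optional convenience, assuming Python is available on the box; it looks for the same "lm" flag):

    # Optional convenience sketch: report 64-bit capability by looking for the
    # "lm" (long mode) flag in /proc/cpuinfo, the same check as the grep above.
    def is_64bit_capable(cpuinfo="/proc/cpuinfo"):
        with open(cpuinfo) as f:
            for line in f:
                if line.startswith("flags"):
                    return "lm" in line.split()
        return False

    print("64-bit capable" if is_64bit_capable() else "not 64-bit capable")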

     

    TomH

  4. This is a great article and I wish I'd had this when I first set up my unRAID.

    When I first set up my unRAID, I used this guide http://tinyurl.com/k6pglbp because I was going to use all of those apps.

     

    However, I recently changed my cache to a bigger drive and I hosed it, so I essentially started over. I realized that I had set up my cache and apps incorrectly, and the only thing using my cache was the apps.

     

    So my question is this: once I set a share name, say "media", create my subfolders (movies, music, etc.), and choose the setting Use cache: *YES*, do I have to create the same file structure on the cache drive as on the share, so that when SB or CP move files to their final resting place on the cache drive, the mover will move them from the cache to the array?

     

    I might just start fresh with no plugins, since my data will still be there, and go from there after the parity finishes.

    I'm glad you found this post useful. 

     

    Once you enable the Cache on your "Media" Share, the share's folder structure will be automatically replicated on the Cache drive (the preferred method).  While you can create the Share folders on the Cache yourself, make sure there aren't any typos/misspellings or else the data won't get transferred.

     

    As you capture data from SB or CP, it will automatically go to the Cache drive first.  The Mover will take care of moving any new data to the array at 3:20 am (default setting), unless you change the time.  Once the transfer is successfully completed, the original files on the Cache will be deleted.
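
    To make the path side of it concrete: the same relative path under the share is simply reproduced on an array disk.  A rough illustration of that mapping (not the actual mover script; the disk and file names here are made up for the example):

    # Rough illustration only (NOT the real unRAID mover): a file that SB/CP drops
    # into the "Media" share lands under /mnt/cache first, and the mover later
    # recreates the same relative path on an array disk, then removes the cache copy.
    import os
    import shutil

    def move_to_array(share, rel_path, array_disk="disk1"):    # disk1 is arbitrary here
        src = os.path.join("/mnt/cache", share, rel_path)
        dst = os.path.join("/mnt", array_disk, share, rel_path)
        os.makedirs(os.path.dirname(dst), exist_ok=True)       # folder structure appears on the array automatically
        shutil.move(src, dst)                                   # copy to the array, then delete the cache original

    # hypothetical file inside the Media share
    move_to_array("Media", "Movies/Some_Movie/Some_Movie.mkv")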

     

    TomH

  5. I like your Blog writeups.

     

    On this one, the graphics are a bit out of sync...

    about 2/3 of the way down, at:

    EXAMPLE AVS-10/4...

    There's a graphic with 5 slots/4 slots/5 slots.

    Each slot is shown filled with a drive, and the drive size is shown.

    The next graphic, the webGUI shot, shows a different setup...only six disks, and the sizes of the installed disks are different between the two graphics.

    But what actually intrigued me most was the comment about using 'Slot 9' for cache...I fear that will spawn all kinds of questions about 'why'?

    --Is that just a best practice to keep things organized?

    --is there a technical requirement for using slot 9?

    --and on my generic-86 box, I only have 5 slots, so which slot should *I* use?

     

    I know, I know, 'Picky Picky'.  ::)

    Thanks Dale, great catch.  I just changed the first diagram and used the concept of slots instead of disks.  Slots 1-5 are for 3.5" drives on the left, slots 6-10 are for 3.5" drives on the right, and slots 11-14 are for the 2.5" drives in the middle.

    The reason we have the far right slot allocated for the Cache disk is just due to how TomM likes to wire the AVS-10/4.  The motherboard has two SATA3 ports and four SATA2 ports.  He has the far left and far right slots attached  to the SATA3 ports as he uses the far left for Parity and the far right for Cache, and they can benefit from the extra speed.  The other four SATA2 ports are attached to slots 2-5.  The remaining 8 slots are attached to a SAS2/SATA3 add-on card.  Slots 11-14 are 2.5" and can accommodate SSD Flash drives, also great for using as Cache due to their speed ("Cache Pool" feature coming soon  :)).

     

    TomH

  6. An additional comment (which you are free to ignore!):

     

    Some time ago, someone commented that it seemed surprising that so many of the oldest veterans, including many moderators, were NOT using User Shares.  I have to include myself, as one who has never been interested in turning them on, apart from possibly better familiarizing myself with them for the sake of helping others.  I can't speak for others, but I think they are a great UnRAID feature, just not for me.  I like more control over my data, knowing exactly where everything is.  I place my various categories of data on specific disks, and as yet haven't found further data abstraction[1] to be helpful.  I keep a separate index to all of my files, 'Bones' is all on Disk 2, 'Masterpiece' and similar PBS is on Disk 4, movies and history-related are on Disk 8, backup copies of photos and music and other collections are on Disk 10 with an even older backup on Disk 1, etc etc.  In addition to preferring complete control, I don't like adding additional and unnecessary software layers (such as Fuse), that may add additional bugs and vulnerabilities and performance hits.

     

    I'm in no way wanting to knock User Shares, or even suggesting that you need to accommodate users like me (and others).  But if you think it useful, you might consider adding a comment or two that User Shares aren't the only way, that some prefer to store directly.

     

    1. I'm not against all data abstractions.  I would hate to go back to direct sector addressing, and messing with drive geometries!

    Good point RobJ.  I added a note at the beginning.

     

    Thanks,

    TomH