Posts posted by zonderling

  1. On 8/18/2017 at 4:36 PM, Tybio said:

    Might not help, but I got sick of cache limitations when I was doing major updates (Upgrading a season of a big show to Bluray when released etc) so I finally just bit the bullet and used an old 240G SSD as an unassigned device for NZB downloading and an old 4T spinner to host my torrents.  It was just less painful than making the other choice of having a SSD cache with enough room (Large $$$ commitment) or a large spinner that is slow.

     

    Obviously, I had the spare SATA ports to make this work, but I've found it's MUCH more stable...now my NZB downloads don't impact my production array even if I get a set of Remux-1080p downloads that don't post-process properly.  I don't have to spend $$$ on large SSD space for torrents that would see no benefit from them.

     

    Obviously the solution depends on situation, but think of it more like finding the trade-offs that work for you rather than trying to stick to one method of operation...just my recommendation.

    I had a similar limitation in my first setup.  My 250 GiB cache drive would fill up to 100%.

    Now I have configured the SABnzbd "incomplete" folder on the cache drive, while the "complete" folder is on another share on the array.

    For very big downloads, all the reconstruction work (par check, unrar, repair, rename) is done on the SSD, and the final copy goes to a share on the array.  I've called mine "scratchdisk".  As a last step, Sonarr and Radarr pick up the payload from the scratchdisk and move it to the definitive media store.

     

    This workflow works for me, but as the saying goes, there are many roads to Rome ;)
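    The workflow above can be sketched as a small script. The paths, file names, and the `hand_off` function below are hypothetical examples for illustration, not unRAID defaults or anything SABnzbd/Sonarr actually expose:

    ```python
    # Sketch of the scratchdisk workflow described above (hypothetical paths).
    # SABnzbd keeps "incomplete" and does post-processing on the SSD cache;
    # the finished payload lands on a scratch share on the array, where
    # Sonarr/Radarr pick it up and move it to the definitive media store.
    import shutil
    from pathlib import Path

    def hand_off(finished: Path, scratch: Path, media: Path) -> Path:
        """Move a fully post-processed download from the cache to the
        scratch share, then (as Sonarr/Radarr would) into the media store."""
        scratch.mkdir(parents=True, exist_ok=True)
        media.mkdir(parents=True, exist_ok=True)
        # step 1: final copy off the SSD onto the array scratch share
        staged = Path(shutil.move(str(finished), str(scratch / finished.name)))
        # step 2: Sonarr/Radarr relocate the payload to the media store
        return Path(shutil.move(str(staged), str(media / staged.name)))
    ```

    The point of the two hops is that all the heavy reconstruction work happens on the SSD, and the array only ever sees one sequential copy of the finished file.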

  2. On 8/19/2017 at 6:17 AM, brando56894 said:

    As the other guys said, there's no reason to cache torrents and I have to seed mine. I currently have 2 TB of seeding torrents. So Usenet will be cache-only and torrents will be no cache since they will be written directly to downloads and then copied to either movies or shows.

    Thx guys.  Sure sounds like an optimized workflow to me.

  3. 7 hours ago, RJAPerkins said:

    Hoping someone might be able to help me with a pretty basic question.  Over the past few days I've been getting error messages which say "Unable to write to Docker Image" and "Docker Image either full or corrupted".

    I'm a complete newbie, have never written a line of code in my life and installed unRAID from youtube videos.  So I'd really appreciate if you could respond in the simplest possible language.

    Much appreciated.

    My advice would be to check whether you still have some free space on your cache drive; imo it's full or nearly full.  Docker containers need to write logs, so keep some free space on your appdata share (typically found on the cache drive).  Hope that helps?

  4. 2 hours ago, brando56894 said:

    I understand what you're saying, and I guess I'll give it a try, I just have to reconfigure my shares, since downloads is one share and I would have to split it into usenet and torrents.

    Sure, whatever works for you :)

    Just out of curiosity ... why would you split usenet and torrents into different shares?  Why not make the distinction at the folder level (a folder for usenet and a folder for torrents, both on the same "downloads" share)?  Just out of personal interest in the subject; I only use usenet, no torrents.

  5. 5 hours ago, brando56894 said:

    Thanks for the suggestion. My problems seemed to be with how I had the cache set up on the shares. I understand why you have yours set up the way that you do, but I think having caching enabled on my multimedia shares is useful so that the whole process is sped up (moving from download to media is quick since it's just going from one sector on the SSD to another) and Sonarr/Radarr/NZBGet can quickly move on to the next one, meanwhile the mover will be doing its job in the background to clear space from the cache. I don't think I will be running out of cache space soon, since I have most of the gigantic things downloaded and it will only be downloading maybe one or two at a time now, which definitely won't fill up 500 GB.

    I think I understand your setup more or less ... I don't quite get the part where you describe how your process is sped up by enabling cache: "moving from one sector to another on the SSD."

    The way I do it, I keep all of the, let's call it "workload", on the cache drive all the time, the cache typically being an SSD device.  Your process has the workload partly on the cache and partly on the array, because each time the mover runs, it moves some of your unprocessed rar files to the slower spinners.

    Next, when SABnzbd (or whatever other Docker container you use, for that matter) starts unpacking, it's not working on your super-fast SSD but on the slower spinners.  To make matters worse, your CPU needs to calculate parity, which causes even more load you can easily avoid.

    The final point is that unRAID really excels when you can organize its writes to the array sequentially, in other words one by one.  My workflow does that.

    I'm not saying yours is no good, I'm only offering a way to optimize ;)

    edit: reading it back, I come to the conclusion that bjp999 said the same as me :)

  6. On 8/15/2017 at 9:14 AM, brando56894 said:

    At a few points there, IDK if the mover got confused or what, but it looked like it was moving data back onto the cache almost as quickly as I was moving stuff off of it... and it wasn't stuff that I was currently downloading or post-processing (for example, it had the whole series of South Park on there for some reason) :S I don't know if this is a result of cache=yes, so I changed it to cache=prefer, cleared out everything in downloads, movies and shows (the majority of the data; the other shares total maybe 50 GB) and let it continue to download things for a few hours. I checked on it hours later and once again the cache drive was 100% full....

     

    Right now my only solution is to disable caching on the downloads, movies and shows shares, which sucks because those are the most active shares and would definitely benefit from the write cache. I just had Radarr post-process 11 HD/UHD movies and it took 2 freaking hours to do so!

     

     

    The trick is to make a share to handle your downloads and set its Share Settings "Use cache disk" to "Only".  Configure all your post-download automation tasks (par check, unrar, repair, unzip, rename) to run in a folder on this cache-only share.  It will be unprotected but lightning fast, and it won't put a strain on the protected array.

    Secondly, train your Sonarr and Radarr to fetch the finalized product from the "downloads" share and save it on the media share.  I have no mover enabled on my media share; it does not make any sense to do so.

    That's how I have set it up, and imho it's the right way to do it.

    • Upvote 1
  7. It would only make sense to invest in high RPM for the parity drive, for situations where you have multiple simultaneous writes.

    For read operations, even multiple concurrent ones, the first bottleneck will be your gigabit NIC, not your HDD.  One decent 5400 RPM disk is enough to saturate a gigabit NIC.  So unless you are looking at upgrading your LAN to 10 Gb Ethernet, it is not worth investing in 7200 RPM spinners.
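    A rough sanity check backs this up. The throughput figures below are typical ballpark numbers I'm assuming, not measurements:

    ```python
    # Gigabit Ethernet vs. a single 5400 RPM spinner, back of the envelope.
    GBE_LINE_RATE = 1000 / 8   # 1 Gb/s = 125 MB/s theoretical maximum
    GBE_PRACTICAL = 110        # MB/s after TCP/SMB overhead (approximate)
    HDD_5400_SEQ = 140         # MB/s typical sequential read, modern 5400 RPM drive

    print(f"GbE ceiling:      ~{GBE_PRACTICAL} MB/s")
    print(f"5400 RPM stream:  ~{HDD_5400_SEQ} MB/s")
    print("One disk saturates the NIC:", HDD_5400_SEQ >= GBE_PRACTICAL)
    ```

    Since one spinner already outruns the network, faster disks only pay off for local workloads or a 10 GbE LAN.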

  8. 11 minutes ago, brando56894 said:

    Your unRAID server really shouldn't be accessible from the public internet; the only way it should be accessed from outside your network is securely, via VPN.

    But what about people using the immensely popular Plex Docker container?  Its purpose is to serve all your media files over the internet.

    Is there a safe way to have Plex exposed to the internet (the manual instructs you to set up port forwarding for this) and "lock down" each and every other means of access apart from VPN?

     

  9. 40 minutes ago, garycase said:

    FWIW, if you DO want an 8TB WD Red, and don't mind "shucking" the drive from an external unit, there's a very good deal at the moment where these are actually cheaper than the Seagate archives:   Best Buy has the 8TB "EasyStore" drives for $180 ... these have 8TB Reds inside, which you can remove and use in an array.   Note that there are warranty implications of doing this -- you'd likely need to replace the drive in the case if you ever needed warranty service (not hard to do as long as you're CAREFUL and don't damage the case when removing the drive); and the warranty is 2 years vs. the 3 years that a bare drive comes with.   Nevertheless, it's not a bad option:  http://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401

    Ahhhh, the same dilemma for me here in Europe.  Harvesting a WD Red 8 TB from an external enclosure costs the same as the Seagate 8 TB Archive drives.  Now I don't know anymore, because money is no longer the deciding factor in this comparison.

     

  10. 5 minutes ago, johnnie.black said:

    Doesn't change the fact that this is your "feeling", no evidence of this whatsoever.

    My "feeling" is usually right lol :) but OK, agreed, I have no numbers to back me up, only common sense: mechanical wear and tear because of the way shingled technology works, more movement of the heads and all ...

    But the more important thing remains: the performance hit on writes.  I always want my most performant spinner as parity to avoid a bottleneck.

     

  11. 22 minutes ago, johnnie.black said:

    I only objected to your "will fail fast" statement, mtbf doesn't really mean much to me, e.g., WD stopped using that (https://support.wdc.com/knowledgebase/answer.aspx?ID=665)

     

    I don't dispute the WD RED is probably more reliable, although we'll still need to wait some time to see if there are issues with helium leaking , but IMO for the price, the Seagate Archive is a very good option for unRAID, including for parity.

     

    If the user is going to be making simultaneous writes to more than one data disk and/or work with a lot of small files, then I would agree that a non SMR drive would be a better choice.

    I'm sorry, but I feel you are neglecting the context of my statement.  I think they "will fail fast" as a parity disk (that last bit of context is essential), simply because a parity disk sustains a lot of small writes every time something is written to the array.

    For general purposes I like them a lot, don't get me wrong.  Best bang for the buck, money-wise.

  12. 19 minutes ago, johnnie.black said:

    unRAID is not RAID, and even if you use with RAID it may not be the best performer but there's no evidence it will fail faster.

    I knew you were going to say that :)

    I know it's not RAID (hence the name), but it is similar in the sense that the parity disk is always being written to, just like in RAID arrays.

    But if you want cold figures, there is a 1/5 difference in expected life cycle:

    WD Red: MTBF 1,000,000 hours
    Seagate Archive: MTBF 800,000 hours

    Also, you would agree that for every write action on the platter, the disk needs to perform three actions.  Normal wear and tear would suggest this has an effect on the expected MTBF.
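    The 1/5 figure follows directly from those two spec-sheet numbers:

    ```python
    # Relative MTBF gap between the two drives quoted above.
    mtbf_wd_red = 1_000_000   # hours (WD Red spec sheet)
    mtbf_archive = 800_000    # hours (Seagate Archive spec sheet)

    gap = (mtbf_wd_red - mtbf_archive) / mtbf_wd_red
    print(f"Expected-lifetime difference: {gap:.0%}")  # 20%, i.e. 1/5
    ```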

  13. 8 minutes ago, johnnie.black said:

    This is based on what? alternative facts?

     

    From other users on the forum and my own experience this is not true. I have had 5 of these disks (one of them as parity) for over a year with 0 issues, and see no evidence that these disks have a higher failure rate than any other disk.

     

    http://www.storagereview.com/seagate_archive_hdd_review_8tb

     

     

    RAID Usage with SMR

    With the attractively low price per TB that the Seagate Archive 8TB HDD has, it can be difficult to not consider purchasing a set for NAS storage. StorageReview strongly recommends against such usage, as at this time SMR drives are not designed to cope with sustained write behavior. Many contend that NAS shares tend to be very read-focused during normal operation. While that's true, the exception is when a drive fails and a RAID rebuild has to occur. In this case the results clearly show that this implementation of SMR is not a good fit for RAID. 

  14. 7 minutes ago, BRiT said:

    It doesn't matter, the Seagate Archive drives function perfectly fine as parity drives, even for main array operations where you won't be randomly modifying the data. The drives have been in operation on numerous arrays ever since they became available, and not one person has ever hit the SMR write-penalty wall.

    I've got 8 TB Seagate Archive disks as well, and I use them for what they are intended: archiving, i.e. cold storage.  For me this is to be taken literally.  I take backups via an external docking bay, then remove the disk for offline, cold storage.

    I do notice a big performance hit when writing a lot of sequential data (i.e. a backup).  When the disk is new, clean and freshly formatted, I get write speeds in excess of 100 MB/s; when it's 3/4 full, it drops below 30 MB/s.

    I've also had one in my "production" rig as parity for a few days, and I remember slow performance there as well (again around 30 MB/s, all the time).

    Articles about this drive's shingling seem to confirm my theory.

    I'm not saying it will not work; I'm sure there are a lot of use cases where it works.  I'm only answering the topic starter's request to optimize his setup.

     

  15. On 6/8/2017 at 0:00 PM, volume said:

    Unfortunately i can't afford to buy 6 x WD RED 8TB hard drives, but if its a good idea to have one WD RED 8TB for parity instead of the Seagate archive 8TB, please let me know.

    It makes sense; the motivation is in my previous post.  Basically, Seagate Archives are not good for sustained writing: they will get slower, slowing down your array, and they will fail fast.

  16. On 6/7/2017 at 3:04 PM, volume said:

    Could you please explain the benefits that i will gain if i have the WD RED 8TB for parity?

    Will it give me faster rebuild times?

     

    Should I go with dual parity for an array of 48 TB+ (6 x Seagate 8 TB)?

    A WD Red, or even a Gold, or any other disk that is not based on shingled magnetic recording (SMR) technology.

    Your parity drive is the only drive in your array that will sustain heavy write operations: every bit that is written to the array causes a write on the parity disk.

    An SMR drive leans heavily on its internal cache (about 20 GB) to do housekeeping on incoming writes, a sort of temporary parking spot for data that needs to be written to disk.  By design it parks the data in cache, reads the data at another spot on the disk, then writes both out as a sort of combined, striped band.  That's how Seagate gets so many TB on the platter.

    By design your parity disk will always be 100% full, so performance will degrade.  And since the write speed of the array's data disks is determined by the write speed of your parity disk, it makes sense to use a conventional 8 TB drive for parity and leave your Archive disks for what they are intended: few writes, many reads.
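    A rough way to see the write amplification. The I/O counts below are simplified assumptions for illustration, not measured figures:

    ```python
    # Simplified I/O count for one logical write to an unRAID array.
    # In read/modify/write parity mode, one write costs four physical I/Os;
    # an SMR parity drive can add a band read-modify-write on top of that.
    def ios_per_write(smr_parity: bool) -> int:
        ios = 2      # read old data block + read old parity block
        ios += 2     # write new data block + write new parity block
        if smr_parity:
            ios += 2  # read shingled band + rewrite band on the parity disk
        return ios

    print(ios_per_write(smr_parity=False))  # 4 on a conventional parity drive
    print(ios_per_write(smr_parity=True))   # 6 with SMR parity (simplified)
    ```

    Whatever the exact numbers, the parity disk sees every write the array does, which is why it is the worst place for an SMR drive.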
