zonderling

Everything posted by zonderling

  1. I don't know if it's related, but I had the same issue this holiday. It turned out to be a cable issue. The thing is, for most of us, when we power down the server, it's to move it. So don't panic immediately.
  2. I think Plex also supports DLNA, so you can use any Plex docker for your purpose.
  3. I had a similar limitation with my first setup: my 250 GiB cache drive would fill up to 100%. Now I have configured SABnzbd's "incomplete" folder on the cache drive, while the "complete" folder lives on another share on the array. For very big downloads, all the reconstructive work (par, unrar, repair, rename) is done on the SSD, and the final copy goes to a share on the array; I've called mine "scratchdisk". As the last step, Sonarr and Radarr pick up the payload from the scratchdisk and move it to the definitive media store. This workflow works for me, but as the saying goes, there are many roads to Rome.
  4. Thx guys. Sure sounds like an optimized workflow to me.
  5. My advice would be to check whether you still have free space on your cache drive. Imo it's full or nearly full. Dockers need to write logs, so keep some free space on your app share (typically found on the cache drive). Hope that helps.
  6. Sure, whatever works for you. Just out of curiosity... why would you split usenet and torrents into different shares? Why not make the distinction at the folder level, i.e. a folder for usenet and a folder for torrents, both on the same "downloads" share? Just out of personal interest in the subject; I only use usenet, no torrents.
  7. I think I understand your setup more or less... I just don't get the part where you describe how your process is sped up by enabling cache: "moving from one sector to another on the SSD." The way I do it, I keep all of the, let's call it "workload", on the cache drive (which is typically an SSD) all the time. Your process has the workload partly on the cache and partly on the array, because each time the mover runs, it moves some of your unprocessed rar files to the slower spinners. In the next phase, when SABnzbd (or whatever other docker you use, for that matter) starts unpacking, it's not on your super fast SSD but on the slower spinners. To make matters worse, your CPU needs to calculate parity, which causes even more load you could easily avoid. My final point is that unRAID really excels when you can organize its writes to the array sequentially, in other words one by one. My workflow does that. I'm not saying yours is no good, I'm only offering a way to optimize. Edit: reading it back, I come to the conclusion that bjp999 said the same as me.
  8. The trick is to make a share to handle your downloads and set its Share Settings - Use cache disk to "Only". Configure all your post-download automation tasks (par, unrar, repair, unzip, rename) to run in a folder on this cache-only share. It will be unprotected, but lightning fast, and it won't put a strain on the protected array. Secondly, train your Sonarr and Radarr to fetch the finalized product from the "downloads" share and save it on the "media" share. I have no mover enabled on my media share; it does not make any sense to do so. That's how I have set it up, and imho it's the good way to do it.
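The workflow in that post can be sketched roughly as follows; note this is an illustrative sketch, not unRAID's own tooling — the paths, the `finalize` function name, and the `.mkv` extension are all hypothetical placeholders:

```python
import shutil
from pathlib import Path

def finalize(downloads: Path, media: Path, job_dir: str, title: str) -> Path:
    """All heavy post-processing (par, unrar, repair, rename) happens inside
    `downloads` (the unprotected, cache-only SSD share); only the finished
    file crosses over to the parity-protected array `media` share."""
    src = downloads / job_dir / f"{title}.mkv"
    dst = media / "tv" / f"{title}.mkv"
    dst.parent.mkdir(parents=True, exist_ok=True)
    # One sequential move to the array instead of many small scratch writes.
    shutil.move(str(src), str(dst))
    return dst
```

In an unRAID setup, `downloads` would correspond to something like `/mnt/cache/downloads` and `media` to `/mnt/user/media`; the point is that the array (and its parity disk) only ever sees the single final copy.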
  9. Cool, I'm happy to hear it's sorted out. I've had that before; the way I see it, there are two possible scenarios. A. ASRock is covering it up, for whatever reason. B. One of the connectors on the board is a bit loose, and physical shocks make it work intermittently.
  10. It would only make sense to invest in high RPM for the parity drive, and only in situations where you have multiple simultaneous writes. For read operations, even multiple concurrent ones, the first bottleneck will be your gigabit NIC, not your HD. One decent 5400 RPM disk is enough to saturate one gigabit NIC. So unless you are looking at upgrading your LAN to 10 Gb Ethernet, it's not worth investing in 7200 RPM spinners.
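A quick back-of-the-envelope check of that claim. The 5400 RPM throughput figure below is an assumed typical value for a modern large-capacity drive, not a measurement:

```python
# Gigabit Ethernet line rate vs. an assumed 5400 RPM sequential read speed.
GIGABIT_NIC_MBPS = 1000 / 8      # 1 Gb/s ≈ 125 MB/s, before protocol overhead
HDD_5400_SEQ_MBPS = 140          # assumed typical sequential rate, 5400 RPM drive

# The slower of the two is the effective ceiling for a single-client read.
bottleneck = min(GIGABIT_NIC_MBPS, HDD_5400_SEQ_MBPS)
print(f"Effective read ceiling: {bottleneck:.0f} MB/s")   # the NIC, not the disk
```

So even a single 5400 RPM spinner outruns the network; a faster disk only helps once the LAN itself is upgraded.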
  11. Why would you have your music on an SSD / SSHD? I can't think of any scenario where this would make sense.
  12. But what about people using the immensely popular Plex docker? Its purpose is to serve all your media files over the internet. Is there a safe way to have Plex on the internet (the manual instructs you to set up port forwarding for this) while "locking down" each and every other access behind VPN?
  13. Why overcomplicate things? I would just copy/paste over LAN and let it sit.
  14. Ahhhh, the same dilemma for me here in Europe. Harvesting a WD Red 8 TB from an external enclosure costs the same as the Seagate 8 TB Archive drives. Now I don't know anymore, because money isn't the driving factor in this comparison anymore.
  15. My "feeling" is usually right lol, but OK, agreed: no numbers to back me up, only common sense, because of mechanical wear and tear from the way the shingled technology works (more movements of the head and all). But the more important thing remains the performance issue on writes. I always want my most performant spinner as parity, to avoid a bottleneck.
  16. I'm sorry, but I feel you are neglecting the context of my statement. I think they "will fail fast" as parity disks (that last context is essential), simply because a parity disk sustains a lot of small writes each time something is written to the array. For general purposes I like them a lot, don't get me wrong; best bang for the buck.
  17. Linus explains it better than I do; you might find it interesting.
  18. I knew you were going to say that. I know it's not RAID (hence the name), but it is similar in the sense that the parity disk is always written to, just like in RAID arrays. And if you need cold figures, there is a 1/5 difference in expected life cycle: an MTBF of 1,000,000 hours for the WD Red versus 800,000 hours for the Seagate Archive. Also, you would agree that for every write action on the platter, the disk needs to do three actions. Normal wear and tear would suggest this has an effect on expected MTBF.
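For what it's worth, the "1/5 difference" follows directly from the two spec-sheet numbers quoted in that post:

```python
wd_red_mtbf = 1_000_000    # hours, WD Red spec sheet figure from the post
archive_mtbf = 800_000     # hours, Seagate Archive spec sheet figure from the post

# Relative shortfall of the Archive drive compared to the Red.
shortfall = (wd_red_mtbf - archive_mtbf) / wd_red_mtbf
print(shortfall)   # 0.2, i.e. the 1/5 difference quoted above
```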
  19. http://www.storagereview.com/seagate_archive_hdd_review_8tb RAID Usage with SMR With the attractively low price per TB that the Seagate Archive 8TB HDD has, it can be difficult to not consider purchasing a set for NAS storage. StorageReview strongly recommends against such usage, as at this time SMR drives are not designed to cope with sustained write behavior. Many contend that NAS shares tend to be very read-focused during normal operation. While that's true, the exception is when a drive fails and a RAID rebuild has to occur. In this case the results clearly show that this implementation of SMR is not a good fit for RAID.
  20. I've got 8 TB Seagate Archive disks as well, and I use them for what they are intended: archiving, i.e. cold storage. For me this is to be taken literally: I take a backup via an external docking bay, then remove the disk for offline cold storage. I do notice a big performance hit when writing a lot of sequential data (i.e. a backup). When the disk is new, clean and formatted, I get write speeds in excess of 100 MB/s; when it's 3/4 full, it drops below 30 MB/s. I had one in my "production" rig as parity for a few days and I remember slow performance there as well (again, 30 MB/s all the time). Articles about this drive's shingling seem to confirm my theory. I'm not saying it will not work; I'm sure there are a lot of use cases where it works. I'm only answering the topic starter's request to optimize his setup.
  21. It makes sense; motivation is in my previous post. Basically, Seagate Archives are not good for writing a lot to. They will get slower, slowing down your array, and they will fail fast.
  22. WD Red, or even Gold, or any other disk that is not based on shingled magnetic recording technology. Your parity drive is the only drive in your array that sustains heavy write operations: each bit that is written to the array causes a write on the parity disk. This type of drive leans heavily on its internal cache (about 20 GB) to do housekeeping on incoming writes, a sort of temporary parking spot for data that needs to be written to disk. By design it parks the data in cache, reads the data at another spot on the disk, then writes both out as a sort of combined, striped data. That's how Seagate gets so many TB on the platter. By design your parity disk will always be 100% full, so performance will degrade. And since the array's write speed is determined by the write speed of your parity disk, it would make sense to have a normal 8 TB drive for parity and leave your Archive disks for what they are intended for: few writes, many reads.
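A toy model of that read-modify-write cycle, purely illustrative: real SMR firmware manages zones and persistent cache far more elaborately, but the point is that one logical write costs roughly three device operations, exactly the three-step pattern the post describes:

```python
def smr_update(band: list, index: int, value: int):
    """Update one block inside a shingled band, counting the three steps the
    post describes: read the band, merge the new data in cache, rewrite the band.
    `band` is a hypothetical list of block values standing in for a zone."""
    ops = 0
    staged = list(band); ops += 1      # 1. read the whole overlapping band
    staged[index] = value; ops += 1    # 2. merge the new block in cache
    band = staged; ops += 1            # 3. rewrite the entire band in place
    return band, ops

band, ops = smr_update([0, 0, 0, 0], 1, 9)
print(band, ops)   # [0, 9, 0, 0] 3 -- three operations for one logical write
```

A conventional (non-shingled) drive would do step 3 alone, which is why a write-heavy role like parity is a poor fit for SMR.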
  23. I would invest in another non-archive HDD for the parity function, for example a WD Red 8 TB. Your CPU and RAM are sufficient for your purpose.
  24. Western Digital is introducing their 10 TB Red series HDD (designed for NAS use). Who's going to buy it? Link to the source article: https://www.wdc.com/products/internal-storage/wd-red.html#WD100EFAX
  25. I prefer not to use them; without them there is less noise. With them, airflow gets obstructed a bit, and that causes background noise. Obviously, only remove them if you can be fairly sure nothing will get stuck in them.