zonderling

Members
  • Posts: 39
  • Joined
  • Last visited
  1. I don't know if it's related, but I had the same issue this holiday. It turned out to be a cable issue. The thing is, for most of us, when we power down the server it's to move it. So don't panic immediately.
  2. I think Plex also supports DLNA, so you can use any Plex docker for your purpose.
  3. I had a similar limitation with my first setup: my 250 GiB cache drive would fill up to 100%. Now I have SABnzbd's "incomplete" folder on the cache drive, and the "complete" folder on another share on the array; I've called mine "scratchdisk". For very big downloads, all the reconstructive work (par, unrar, repair, rename) is done on the SSD and the final copy goes to that share on the array. As the last step, Sonarr and Radarr pick up the payload from my scratchdisk and move it to the final media store (there's a rough sketch of this layout at the end of this list). This workflow works for me, but as the saying goes, there are many roads to Rome.
  4. Thx guys. Sure sounds like an optimized workflow to me.
  5. My advice would be to check whether you still have free space on your cache drive. IMO it's full or nearly full. Dockers need to write logs, so keep some free space on your app share (typically found on the cache drive). Hope that helps?
  6. Sure, whatever works for you. Just out of curiosity ... why would you split usenet and torrents into different shares? Why not make the distinction at the folder level (a folder for usenet and a folder for torrents, both on the same "downloads" share)? Just out of personal interest in the subject; I only use usenet, no torrents.
  7. I think I understand your setup more or less ... I don't quite get the part where you describe how your process is sped up by enabling cache: "moving from one sector to another on the SSD." The way I do it, I keep all of the, let's call it "workload", on the cache drive all the time, which is typically an SSD device. Your process has the workload partly on the cache and partly on the array, because each time the mover runs, it moves some of your unprocessed rar files to the slower spinners. In the next phase, when SABnzbd (or whatever other docker you use for that matter) starts unpacking, it's not on your super fast SSD but on the slower spinners. To make matters worse, your CPU needs to calculate parity, which causes even more load that you can easily avoid. The final point is that unRAID really excels when you can organize its writes to the array sequentially, in other words one by one. My workflow does that. I'm not saying yours is no good, I'm only offering a way to optimize. edit: reading it back, I come to the conclusion bjp999 said the same as me.
  8. The trick is to make a share to handle your downloads and set its Share Settings - Use cache disk to "only". Configure all your post-download automation tasks on this share: par, unrar, repair, unzip and rename should all be done in a folder on this cache-only share. It will be unprotected but lightning fast, and it won't put a strain on the protected array. Secondly, train your Sonarr and Radarr to fetch the finalized product from the "downloads" share and save it on the "media" share. I have no mover enabled on my media share; it does not make any sense to do so. That's how I have set it up and, imho, the good way to do it (see the second sketch at the end of this list).
  9. Cool, I'm happy to hear it's sorted out. I've had that before; the way I see it, there are two possible scenarios. A. ASRock is covering it up, for whatever reason. B. One of the connectors on the board is a bit loose, and physical shocks make it work intermittently.
  10. It would only make sense to invest in high RPM for the parity drive, and only in situations where you have multiple simultaneous writes. For read operations, even multiple concurrent ones, the first bottleneck will be your gigabit NIC, not your HD: one decent 5400 RPM disk is enough to saturate one gigabit NIC (rough numbers in the last sketch at the end of this list). So unless you are looking at upgrading your LAN to 10-gigabit Ethernet, it's not worth investing in 7200 RPM spinners.
  11. Why would you have your music on an SSD / SSHD? I can't think of any scenario where this would make sense.
  12. But what about people using the immensely popular Plex docker? The whole point is that it can serve all your media files over the internet. Is there a safe way to have Plex on the internet (the manual instructs you to set up port forwarding for this) and "lock down" each and every other access except VPN?
  13. Why overcomplicate things? I would just copy/paste over the LAN and let it sit.
  14. Ahhhh, the same dilemma for me here in Europe. Harvesting a WD Red 8 TB from an external costs the same as the Seagate 8 TB Archive drives. Now I don't know anymore, because money isn't the driving factor in this comparison.
  15. My "feeling" is usually right lol, but OK, agreed: no numbers to back me up, only common sense about mechanical wear and tear due to the way the shingled technology works, more head movement and all ... But the more important thing remains the performance issue on writes. I always want my most performant spinner as parity to avoid a bottleneck.
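
Sketch for post 3: a minimal illustration of the download layout described there, written in Python just to make the paths concrete. The mount points and share names (/mnt/cache, /mnt/user/scratchdisk, /mnt/user/media) are assumptions for the example, not taken from the original posts; adjust them to your own shares.

```python
# Hypothetical layout for the SABnzbd workflow in post 3.
# "incomplete" stays on the cache SSD; "complete" lands on an array share.
SABNZBD_PATHS = {
    "incomplete": "/mnt/cache/downloads/incomplete",  # fast, unprotected SSD
    "complete": "/mnt/user/scratchdisk/complete",     # array share ("scratchdisk")
}

# Sonarr/Radarr then pick finished items up from "complete"
# and move them to the final media store, e.g.:
MEDIA_ROOT = "/mnt/user/media"
```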
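Sketch for post 8: a rough Python outline of the handoff from the cache-only "downloads" share to the "media" share. The paths, the finalize() placeholder and the folder names are all made up for illustration; in the real setup the work is done by SABnzbd, Sonarr and Radarr, not by a script like this.

```python
import shutil
from pathlib import Path

# Assumed example paths; not Unraid defaults.
DOWNLOADS = Path("/mnt/user/downloads")  # share with "Use cache disk: Only"
MEDIA = Path("/mnt/user/media")          # final share on the protected array

def finalize(job_dir: Path) -> Path:
    """Stand-in for the post-download tasks (par, unrar, repair, rename) that run on the SSD-only share."""
    # ... par2 verify, unrar and rename would happen inside job_dir here ...
    return job_dir / "episode.mkv"       # placeholder for the finished file

def import_to_media(finished_file: Path, show: str) -> None:
    """Roughly what Sonarr/Radarr do: move the finished file onto the media share."""
    dest = MEDIA / "tv" / show
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(finished_file), str(dest / finished_file.name))
```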
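Sketch for post 10: the back-of-the-envelope numbers behind "one decent 5400 RPM disk is enough to saturate one gigabit NIC". The throughput figures are typical ballpark values, not measurements.

```python
# Rough throughput comparison for post 10.
nic_mbit = 1000                    # gigabit NIC
nic_mb_s = nic_mbit / 8            # 125 MB/s theoretical; ~110 MB/s after overhead
hdd_5400_mb_s = 100                # decent 5400 RPM disk, rough sequential rate

print(f"Gigabit NIC ceiling : ~{nic_mb_s:.0f} MB/s (less with protocol overhead)")
print(f"5400 RPM sequential : ~{hdd_5400_mb_s} MB/s")
# For reads over a 1 Gb LAN the NIC and the disk are in the same ballpark,
# so a faster spinner mostly pays off for writes (parity), not for reads.
```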