About jebusfreek666


  1. Where is the setting in Sonarr/Radarr that makes Plex update its library?
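Both apps have a Plex notification under Settings → Connect, but a library refresh can also be triggered directly against Plex's HTTP API. A minimal sketch, assuming a server at 192.168.1.10:32400 and library section ID 1 (both hypothetical), with a valid X-Plex-Token in `PLEX_TOKEN`:

```shell
# Build the refresh URL for one Plex library section.
# Host, port, and section ID here are illustrative; PLEX_TOKEN must
# hold a real X-Plex-Token for your server.
plex_refresh_url() {
  local host="$1" section="$2" token="$3"
  echo "http://${host}:32400/library/sections/${section}/refresh?X-Plex-Token=${token}"
}

# Issue the refresh; Plex replies with an empty 200 on success.
curl -s --max-time 5 "$(plex_refresh_url 192.168.1.10 1 "${PLEX_TOKEN:-}")" || true
```

The section ID for each library is visible in the Plex web UI URL when you open that library.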
  2. I have been thinking about setting my disks to spin down after a certain amount of time. I realize I will have to turn off periodic checks in Plex, as those will keep the disks spinning. My question is: I had read that Plex will throw errors if the disk the media is on has to spin up. I don't know if this is an old issue or something that has been resolved in recent builds. Is this still the case? I have about 10 data disks, and my media is spread across all of them, as this is primarily a Plex server. I will probably have to skip spin-downs if my Plex users are going to…
  3. Yes, that is correct. I was wondering because of what you said in your previous response, which I took to mean writing to a dedicated disk in the array.
  4. If you set it up to one of your array disks, won't you just be right back in the situation of constant writes to the array? That's why I wanted to do a RAID 1 cache pool, for redundancy. Depending on how much space 30 days' worth of video uses, I could just leave it on the HDDs, I guess. But I only have a few spare 6TB drives, and I think it is going to be a lot more data than that. What do you mean it might not be viable to move to the array every night? Do you mean there would be too much data to transfer in that time span? As for the total size used by my cams, I do not h…
  5. I guess pool size and redundancy would be the most important. I am not sure how important write speed really is in this instance, since NVRs routinely use HDDs. I am not sure of the exact size of drive I will need, but rough estimates put it well over the 2TB range daily, unless I am doing this wrong. I could have it write directly to the array, but I feel like this would be a waste, as it would be writing not only the data but also dual parity 24/7. I thought it would be nice to have mirrored copies as a cache pool and then set it to move the files over while I slept.
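The daily volume is easy to sanity-check from the per-camera bitrate. A back-of-envelope sketch, assuming 8 Mbps per 1080p camera (a guess; real cameras vary widely with codec and motion):

```shell
# Back-of-envelope daily storage for continuous CCTV recording.
# 8 Mbps per 1080p cam is an assumption; adjust BITRATE_MBPS to match yours.
CAMS=15
BITRATE_MBPS=8
SECS_PER_DAY=86400

# Mbps -> megabytes/s is a divide by 8, so MB/day = cams * bitrate * seconds / 8
MB_PER_DAY=$(( CAMS * BITRATE_MBPS * SECS_PER_DAY / 8 ))
echo "${MB_PER_DAY} MB/day"
# -> 1296000 MB/day, i.e. roughly 1.3 TB/day at 8 Mbps
```

By this math, a 2 TB/day figure implies closer to 12–13 Mbps per camera, so the bitrate assumption is the whole ballgame.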
  6. I know that you can use them; I am just wondering if my use case is a good idea or not. I want to use one as a location for my CCTV. I will have around 15 cams recording 1080p 24 hours a day. I want to write to a cache device to avoid the constant writes to the array. And since the largest SSDs are crazy expensive, I figured I could just throw a 6TB HDD in there for this purpose. Then have mover run and put the videos onto the array overnight for storage. I would then have a script (I assume?) to remove files from the array when they reach 30 days old. So, basically I am wondering if the use…
  7. Fairly certain it is static. So I could probably just change the MAC of my router and it would go through. But I figured setting up the VPN would allow for a little more privacy. And since I already have it anyway with Deluge, there would be no extra cost.
  8. No, it currently is not set up to use a VPN, and I want to set it up to use PIA. I get the error when I try to access Nyaa without any VPN.
  9. Is there a tutorial somewhere on how to set up Jackett to route through a VPN? I have PIA, and Deluge goes through it. But one of the sites Jackett searches for me is Nyaa, for anime. For the past 3 days I have been getting an error 429 Too Many Requests. Not sure what triggered this, as I haven't downloaded anything except a couple of episodes in that time frame. I can still access the site through the browser when I turn on the VPN client. So I was hoping to just set up Jackett to use the VPN so I can still get my anime.
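One common pattern (not the only one) is to run Jackett inside an existing VPN container's network namespace, so all of its traffic rides the tunnel. A sketch, assuming a PIA VPN container named `vpn` is already running; the container name and paths are illustrative:

```shell
# Route Jackett's traffic through an existing VPN container's network
# namespace ("vpn" is a hypothetical container name).
# Note: Jackett's port (9117) must then be published on the VPN
# container, not here, since this container has no network of its own.
docker run -d \
  --name=jackett \
  --network=container:vpn \
  -v /mnt/user/appdata/jackett:/config \
  linuxserver/jackett
```

In the Unraid template this corresponds to setting Network Type to None and adding `--network=container:vpn` in Extra Parameters; if the VPN container restarts, Jackett usually needs a restart too to rejoin its namespace.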
  10. Is this just something I manually do with a stopwatch, or is there something in the logs I would look at?
  11. I recently upgraded to 6.9.2 from 6.8.x. At the same time I went through the process of encrypting all my data drives. I have run a recent parity check and know it is valid. But now, whenever I reboot my server, it autostarts a parity check. I type in my passphrase and go to click Start, but the only option is to start with a parity check. I have been canceling them at this point, but I am wondering: is this expected behavior? Is this because Unraid started with the drives unmounted? Is there any way around this, or do I have to keep canceling the parity check after each reboot?
  12. Thank you, sir. It occurs to me that I had read before that the primary hang-up with using SSDs in the array was something to do with the way it handled TRIM. Is this possibly foreshadowing for this feature being made more viable in the near future?
  13. That is the fastest response I have ever gotten. Has the recommendation changed on what filesystem to use for cache/pool drives? I was always told btrfs only for multiple drives, otherwise XFS for stability.
  14. I think I remember seeing somewhere that the TRIM plugin, and scheduling TRIM, is no longer necessary or recommended after upgrading to 6.9. I am not sure if I actually read this or not. Hoping someone could verify this for me.
  15. I figured it out just now. After the update to Unraid fried my container for some unknown reason, I lost the extra parameter --runtime=nvidia. In fairness, though, I followed the steps in Q3 of the FAQ you have posted, and this was not mentioned. You might want to add it to avoid more of the same questions popping up. Thanks a lot for the response and offer to help, though. As always, you are very much appreciated!
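For anyone hitting the same thing: in the Unraid template, `--runtime=nvidia` goes in the Extra Parameters field. The command-line equivalent looks roughly like this (image name and variables are illustrative of the usual NVIDIA-container setup, not a prescribed config):

```shell
# Run a Plex container with the NVIDIA runtime so transcoding can see the GPU.
# Template equivalent: Extra Parameters -> --runtime=nvidia
# NVIDIA_VISIBLE_DEVICES takes a GPU UUID, or "all" for every GPU.
docker run -d \
  --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  plexinc/pms-docker
```

If the flag is missing, the container starts fine but the application simply never sees the GPU, which matches the silent breakage described above.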