jebusfreek666's Achievements


  1. Not really. I am pretty sure this can be handled with a user script, though I am not an expert by any means. In my case, I am starting to think I will just use dedicated unassigned disks or possibly a cache pool instead. Depending on the number of cameras I end up with and the quality of the video, this could be a rather large amount of data. And expecting mover to handle that much every day might be asking a bit much.
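For what it's worth, the user-script approach could look something like the sketch below: a nightly job that moves finished recordings from a scratch/unassigned disk over to an array share, skipping anything still being written. The paths and the 10-minute cutoff are assumptions, and the demo runs against temp directories so it is safe to try anywhere; on an Unraid box the real paths would be something like /mnt/disks/cctv and /mnt/user/cctv.

```shell
#!/bin/bash
# Sketch of a nightly "mover replacement" for CCTV footage (hypothetical
# paths; demoed against temp dirs so this runs safely on any machine).
SRC="${SRC:-$(mktemp -d)}"
DEST="${DEST:-$(mktemp -d)}"

# Demo file standing in for a finished recording (mtime pushed 1 hour back).
touch -d '1 hour ago' "$SRC/cam01_0001.mkv" 2>/dev/null || touch "$SRC/cam01_0001.mkv"

# Move only files untouched for 10+ minutes, so in-progress recordings stay put.
find "$SRC" -type f -mmin +10 -print0 |
while IFS= read -r -d '' f; do
  mv -n "$f" "$DEST/"
done

ls "$DEST"
```

Scheduled through the User Scripts plugin overnight, this avoids mover entirely and keeps the camera writes off the array during the day.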
  2. I am trying to keep my disks spun down as much as possible, and narrowed down the biggest offender to be bazarr. It is downloading subs and spinning up the disks very frequently. I do not see a way to adjust how often it searches or downloads. If someone knows how to do this, it would be great. Or, better yet, if someone knows a way to make it download to the cache instead (still the same path, i.e. /mnt/user/media/movies, but on the cache), then it would never spin up the drives; the subs would be accessible right away and get moved over with mover.
  3. Logs are completely filled with this: Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value. On top of that, the Discover page displays nothing, searches return only TV shows, and movie posters look like they are trying to load but never do.
  4. I have been having an issue in Ombi for a while now. It doesn't seem to mark things as available after they are added to plex. All requests go through fine to sonarr/radarr. I went to settings to test the connection to plex and it connects successfully. Not sure where the issue is.
  5. Where is the setting in Sonarr/Radarr to have it make plex update its library?
  6. I have been thinking about setting my disks to spin down after a certain amount of time. I realize that I will have to turn off periodic checks in plex as this will keep the disks spinning. My question is, I had read that plex will kick out errors if the disk that the media is on has to spin up. I don't know if this is an old issue or something that has been resolved some way in recent builds. Is this still the case? I have like 10 data disks and my media is spread across all of them as this is primarily a plex server. I will probably have to skip doing spin downs if my plex users are going to get errors all the time.
  7. Yes, that is correct. I was wondering because in your previous response you said: Which I took to mean writing to a dedicated disk in the array.
  8. If you set it up to one of your array disks, won't you just be right back in the situation of constant writes to the array? That's why I wanted to do a RAID 1 cache pool, for redundancy. Depending on how much space 30 days' worth of video uses, I could just leave it on the HDDs, I guess. But I only have a few spare 6TB drives, and I think it is going to be a lot more data than that. What do you mean it might not be viable to move to the array every night? Do you mean there would be too much data to transfer in that time span? As for the total size used by my cams, I do not have this info yet. It will take a while before it is up and running; it is still sort of in the planning phase.
  9. I guess pool size and redundancy would be the most important. I am not sure how important write speed really is in this instance, since NVRs routinely use HDDs. I am not sure of the exact size of drive I will need, but rough estimates have put it well over the 2TB range daily, unless I am doing this wrong. I could have it write directly to the array, but I feel like this would be a waste, as it would be writing not only the data but also dual parity 24/7. I thought it would be nice to have mirrored copies as a cache pool and then set it to move the files over while I slept.
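As a sanity check on the daily-volume estimate, the arithmetic is just cameras × bitrate × seconds per day. The 8 Mbit/s figure below is an assumed per-camera 1080p bitrate, not a measured one; plug in your own numbers:

```shell
# Daily CCTV storage, back of the envelope. All figures are assumptions.
CAMS=15
MBIT_PER_SEC_PER_CAM=8     # assumed 1080p H.264 bitrate; real cams vary (~2-8+)
SECS_PER_DAY=86400

MBIT_PER_DAY=$(( CAMS * MBIT_PER_SEC_PER_CAM * SECS_PER_DAY ))
MB_PER_DAY=$(( MBIT_PER_DAY / 8 ))           # megabits -> megabytes
echo "${MB_PER_DAY} MB/day"                  # 1296000 MB, i.e. ~1.3 TB/day
echo "$(( MB_PER_DAY * 30 / 1000000 )) TB for a 30-day window (approx)"
```

At an assumed 8 Mbit/s per camera, 15 cameras come to roughly 1.3 TB/day (about 39 TB for a 30-day window); hitting 2 TB/day would imply per-camera bitrates around 12 Mbit/s or more.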
  10. I know that you can use them; I am just wondering if my use case is a good idea or not. I want to use one as a location for my CCTV. I will have around 15 cams recording 1080p 24 hours a day. I want to write to a cache device to avoid the constant writes to the array. And since the largest SSDs are crazy expensive, I figured I could just throw a 6TB HDD in there for this purpose, then have mover run and put the videos onto the array overnight for storage. I would then have a script (I assume?) to remove files from the array when they reach 30 days old. So, basically, I am wondering if the use of an HDD is better for this instance (or possibly 2 in RAID 1), or would it be better to use multiple SSDs in RAID 0?
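The 30-day cleanup really can be as small as a `find -mtime` one-liner run on a schedule (the User Scripts plugin is a natural home for it). The sketch below demos it against a temp directory so it is safe to run anywhere; on the server, TARGET would point at the array share holding the recordings, a path which is only an example here:

```shell
#!/bin/bash
# Sketch of a scheduled 30-day retention job (demoed against a temp dir;
# the real target would be the array share holding the recordings).
TARGET="${TARGET:-$(mktemp -d)}"

# Stand-in files: one "old" recording and one recent one.
touch -d '40 days ago' "$TARGET/old.mkv" 2>/dev/null || touch "$TARGET/old.mkv"
touch "$TARGET/new.mkv"

# Delete files whose modification time is more than 30 days old.
find "$TARGET" -type f -mtime +30 -delete

ls "$TARGET"
```

Worth testing first with `-print` in place of `-delete` to see exactly what would be removed.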
  11. Fairly certain it is static. So I could probably just change the MAC of my router and it would go through. But I figured setting up the VPN would allow for a little more privacy. And since I already have it anyway with deluge, it would be no cost.
  12. No, it currently is not set up to use a VPN and I want to set it up to use PIA. I get the error when I try to access Nyaa without any VPN.
  13. Is there a tutorial somewhere on how to set up jackett to route through a VPN? I have PIA, and Deluge goes through it. But one of the sites jackett searches for me is Nyaa, for anime. For the past 3 days I have been getting an error 429 (too many requests). Not sure what triggered this, as I haven't downloaded anything except a couple of episodes in that time frame. I can still access the site through the browser when I turn on the VPN client. So I was hoping to just set up jackett to use the VPN so I can still get my anime.
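One common pattern, in case it helps: Docker containers can share another container's network namespace, so jackett can be piggybacked onto an existing VPN container (this is how binhex's VPN images are often used on Unraid). The container and image names below are examples, and this is an untested config sketch, not a known-good setup:

```shell
# Assuming a VPN container named "vpn" is already up and connected to PIA,
# jackett can be started inside that container's network namespace:
docker run -d \
  --name jackett \
  --network container:vpn \
  -v /mnt/user/appdata/jackett:/config \
  lscr.io/linuxserver/jackett
# All of jackett's traffic then egresses through the VPN tunnel. Note that
# jackett's web UI port must be published on the "vpn" container itself,
# since containers sharing a network namespace cannot publish ports.
```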
  14. Is this just something I manually do with a stopwatch, or is there something in the logs I would look at?
  15. I recently upgraded to 6.9.2 from 6.8.x. At the same time I went through the process of encrypting all my data drives. I have run a recent parity check and know it is valid. But now whenever I reboot my server it is autostarting a parity check. I type in my passphrase and go to click start, but the only option is to start with a parity check. I have been canceling them at this point, but I am wondering if this is expected behavior? Is this because unraid started with the drives unmounted? Is there any way around this, or do I have to keep canceling the parity check after each reboot?