jebusfreek666
Everything posted by jebusfreek666

  1. It sounds to me like this is more or less a scaled-down version of unBalance, but from within the webUI. Is that correct? It is obviously missing some of the functionality of unBalance, like gather, but the way it works seems essentially the same. Edit: actually, this might have more functionality than unBalance. I don't remember, but I don't believe unBalance had share-to-share moves?
  2. @Squid Just a heads up, the script linked in the OP for spinning up all disks at a certain time is no longer valid. You might want to remove it so it doesn't send others to incorrect info. I read another post about another script to do this, but I am not sure how it panned out.
  3. I figured, but wanted to make sure. Also, I know it is not built into this plugin, but I was wondering if it is possible to have turbo write invoked any time writes go to certain shares or come in from a certain user (for a Windows PC). I have a few shares that are set to not use the cache, as the files are sensitive and I would prefer they get written to the encrypted array ASAP.
  4. Does invoking turbo write spin all the drives up itself? What I mean is, if I have it set to invoke turbo write when 4 drives are spinning (6 spun down of 10 total), will it automatically spin all the drives up even if no writes are happening? So if I have 4 people streaming Plex, and all 4 happen to pick media that is on 4 different drives, causing those drives to spin up, I know this will switch it to turbo write. I just want to make sure that it won't also spin up all the drives until one of them is actually writing. I read through the explanation of turbo write, and this entire post, but didn't see this specifically called out.
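For reference, turbo write can also be toggled manually (e.g. from a user script) via Unraid's mdcmd; a minimal sketch, assuming the stock /usr/local/sbin/mdcmd path on the Unraid host:

```shell
# Toggle Unraid's array write method manually (run on the Unraid host).
# 1    = reconstruct write ("turbo"): all data drives spin for a write
# 0    = read/modify/write: only parity plus the target drive are touched
# auto = let the Auto setting / plugin decide
/usr/local/sbin/mdcmd set md_write_method 1   # force turbo write on
/usr/local/sbin/mdcmd set md_write_method 0   # back to read/modify/write
```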
  5. If you wanted to enable/disable Auto mode would it be write method 2? Edit: Did a little more digging and found out auto is not what I thought it was, so the question is kind of meaningless now. 🤷‍♂️
  6. Not really. I am pretty sure this can be handled with a user script, though I am not an expert by any means. In my case, I am starting to think I will just use dedicated unassigned disks or possibly a cache pool instead. Depending on the number of cameras I end up with and the quality of the video, this could be a rather large amount of data. And expecting mover to handle that much every day might be asking a bit much.
  7. I am trying to keep my disks spun down as much as possible, and narrowed the biggest offender down to Bazarr. It is downloading subs and spinning up the disks very frequently. I do not see a way to adjust how often it searches or downloads. If someone knows how to do this, that would be great. Or, better yet, if someone knows a way to make it download to cache instead (still the same path, i.e. /mnt/user/media/movies but on the cache), then it would never spin up the drives; instead the subs would be accessible right away and get moved over with mover.
  8. Logs are completely filled with this: Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value. The Discover page displays nothing. Searches return only TV shows, and movie posters look like they are trying to load but never do.
  9. I have been having an issue in Ombi for a while now. It doesn't seem to mark things as available after they are added to plex. All requests go through fine to sonarr/radarr. I went to settings to test the connection to plex and it connects successfully. Not sure where the issue is.
  10. Where is the setting in Sonarr/Radarr to have it make plex update its library?
  11. I have been thinking about setting my disks to spin down after a certain amount of time. I realize that I will have to turn off periodic checks in Plex, as these will keep the disks spinning. My question is: I had read that Plex will throw errors if the disk the media is on has to spin up. I don't know if this is an old issue or something that has been resolved in recent builds. Is this still the case? I have like 10 data disks and my media is spread across all of them, as this is primarily a Plex server. I will probably have to skip spin-downs if my Plex users are going to get errors all the time.
  12. Yes, that is correct. I was wondering because in your previous response you said: Which I took to mean writing to a dedicated disk in the array.
  13. If you set it up to one of your array disks, won't you just be right back in the situation of constant writes to the array? That's why I wanted to do a RAID 1 cache pool, for redundancy. Depending on how much space 30 days' worth of video uses, I could just leave it on the HDDs, I guess. But I only have a few spare 6TB drives, and I think it is going to be a lot more data than that. What do you mean it might not be viable to move to the array every night? Do you mean there would be too much data to transfer in that time span? As for the total size used by my cams, I do not have this info yet. It will take a while before it is up and running. It is still sort of in the planning phase.
  14. I guess pool size and redundancy would be the most important. I am not sure how important write speed really is in this instance, since NVRs routinely use HDDs. I am not sure of the exact size of drive I will need, but rough estimates have put it well over the 2TB range daily unless I am doing this wrong. I could have it just write directly to the array, but I feel like this would be a waste, as it would be writing not only the data but also dual parity 24/7. I thought it would be nice to have mirrored copies as a cache pool and then set it to move the files over while I slept.
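As a sanity check on that daily estimate (the per-camera bitrates below are my assumptions, not measured figures): 15 cameras at a typical 1080p H.264 stream of roughly 4 Mbit/s come to well under 1 TB per day, so 2TB+ daily would imply unusually high bitrates:

```python
# Rough daily storage estimate for continuous CCTV recording.
# The per-camera bitrates passed in are assumed, not measured.
def daily_tb(cameras: int, mbit_per_sec: float) -> float:
    # bits/s -> bytes/s, times seconds per day, in decimal terabytes
    bytes_per_day = cameras * (mbit_per_sec * 1_000_000 / 8) * 86_400
    return bytes_per_day / 1e12

print(round(daily_tb(15, 4.0), 3))   # typical 1080p H.264: ~0.648 TB/day
print(round(daily_tb(15, 12.0), 3))  # high-bitrate streams: ~1.944 TB/day
```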
  15. I know that you can use them; I am just wondering if my use case is a good idea or not. I want to use one as a location for my CCTV. I will have around 15 cams recording 1080p 24 hours a day. I want to write to a cache device to avoid the constant writes to the array. And since the largest SSDs are crazy expensive, I figured I could just throw a 6TB HDD in there for this purpose, then have mover run and put the videos onto the array overnight for storage. I would then have a script (I assume?) to remove files from the array when they reach 30 days old. So basically, I am wondering if the use of an HDD is better for this instance (or possibly 2 in RAID 1), or would it be better to use multiple SSDs in RAID 0?
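The 30-day cleanup could indeed be a simple user script; a minimal sketch, assuming the footage lands under /mnt/user/cctv (the path and retention window are hypothetical, adjust to the real share):

```shell
#!/bin/bash
# Prune CCTV recordings older than a retention window (default 30 days),
# then clean up any directories left empty. The /mnt/user/cctv default
# is an assumed share path.
prune_cctv() {
  local dir="${1:-/mnt/user/cctv}" days="${2:-30}"
  find "$dir" -type f -mtime +"$days" -delete
  find "$dir" -mindepth 1 -type d -empty -delete
}
```

Scheduled daily (e.g. via the User Scripts plugin), this keeps a rolling 30-day window on the array.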
  16. Fairly certain it is static. So I could probably just change the MAC of my router and it would go through. But I figured setting up the VPN would allow for a little more privacy. And since I already have it anyway with deluge, it would be no cost.
  17. No, it currently is not set up to use a VPN and I want to set it up to use PIA. I get the error when I try to access Nyaa without any VPN.
  18. Is there a tutorial somewhere on how to setup jackett to route through a VPN? I have PIA, and Deluge goes through it. But one of the sites jackett searches for me is Nyaa for anime. For the past 3 days I am getting an error 429 too many requests. Not sure what triggered this as I haven't downloaded anything except a couple of episodes in the time frame. I can still access the site through the browser when I turn on the VPN client. So I was hoping to just set up jackett to use the VPN so I can still get my anime.
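Since Deluge already goes through PIA (presumably via a VPN container), one common approach is to route Jackett through that container's network namespace instead of configuring a second VPN client; a sketch, where the container names are assumptions:

```shell
# Sketch: attach Jackett to an existing VPN container's network stack.
# "binhex-delugevpn" is the assumed name of the VPN container already
# routing Deluge through PIA. In the Unraid template this corresponds to
# Network Type "None" plus the --network line as an extra parameter;
# Jackett's port then has to be exposed on the VPN container instead.
docker run -d --name=jackett \
  --network=container:binhex-delugevpn \
  -v /mnt/user/appdata/jackett:/config \
  linuxserver/jackett
```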
  19. Is this just something I manually do with a stop watch, or is there something in the logs I would look at?
  20. I recently upgraded to 6.9.2 from 6.8.x. At the same time I went through the process of encrypting all my data drives. I have run a recent parity check and know it is valid. But now whenever I reboot my server it is autostarting a parity check. I type in my passphrase and go to click start, but the only option is to start with a parity check. I have been canceling them at this point, but I am wondering if this is expected behavior? Is this because unraid started with the drives unmounted? Is there any way around this, or do I have to keep canceling the parity check after each reboot?
  21. Thank you, sir. It occurs to me that I had read before that the primary hang-up with using SSDs in the array was something to do with the way it handled trim. Is this possibly foreshadowing for this feature being made more possible in the near future?
  22. That is the fastest response I have ever gotten. Has the recommendation changed on what filesystem to use for cache/pool drives? I was always told btrfs only for multiple drives, otherwise XFS due to stability.
  23. I think I remember seeing somewhere that the trim plugin, and scheduling trim is no longer necessary or recommended after upgrading to 6.9. I am not sure if I actually read this, or not. Hoping someone could verify this info for me.
  24. I figured it out just now. After the update to UnRaid fried my container for some unknown reason, I lost the extra parameter --runtime=nvidia. In fairness though, I followed the steps in Q3 of the FAQ you have posted and this was not mentioned. You might want to add it to avoid more of the same questions popping up. Thanks a lot for the response and offer to help though. As always, you are very much appreciated!
  25. I also switched from 6.8.x to 6.9.2. When I did, it had the binhex-plex docker as an orphaned image. I redownloaded the docker and had to input my GPU info again, along with changing where my media was in the template. I do have the new Nvidia driver plugin and deleted the old one. I have restarted the docker, the docker engine, and the server multiple times now, but still can't get hardware acceleration to work. I am not sure where to go from here.
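For anyone comparing templates after the same upgrade, the piece that commonly goes missing is the runtime flag plus the GPU environment variables; a sketch of the relevant docker run pieces (the GPU UUID below is a placeholder, copied in reality from the Nvidia driver plugin page):

```shell
# Sketch of the GPU-related pieces of the container config.
# In the Unraid template, --runtime=nvidia goes in "Extra Parameters"
# and the two variables are template environment variables.
# GPU-xxxxxxxx is a placeholder for the real GPU UUID.
docker run -d --name=binhex-plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  binhex/arch-plex
```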