jebusfreek666
Posts posted by jebusfreek666

  1. Sorry, I am sure this is beginner-level stuff, but I have never installed a Docker container from anywhere but the CA store. I am trying to add a new container from GitHub, specifically this one. Is there a walkthrough somewhere on how to do this? It would need to be broken down pretty simply, as I have no experience beyond downloading a template and filling out some fields up to this point.
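     
    Edit: For anyone else landing here, the manual route appears to be Docker > Add Container in the webUI with the Repository field pointed at the project's published image, or a plain docker run from the console. A minimal sketch, where the image name, port, and paths are placeholders rather than values from the actual project:

     
    #!/bin/bash
    # Placeholder image and mappings; substitute the project's real values.
    docker run -d \
      --name=myapp \
      -p 8080:8080 \
      -v /mnt/user/appdata/myapp:/config \
      ghcr.io/owner/image:latest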

  2. Just noticed that for the past two weeks or so, Sonarr and Radarr have not been doing anything. I had errors connecting to Deluge and Jackett in both of them. Jackett seems to be on the way out, so I switched to Prowlarr. I am fairly sure that is set up correctly, and I have it loaded in both Sonarr and Radarr. But nothing has changed with Deluge: I can open the webUI, but it will not connect to Sonarr or Radarr. Has something changed recently?

     

    Update:

    Actually, Deluge is messed up somehow. I can open the webUI, but all the settings are blank and it keeps timing out. I cannot add or edit anything.

     

    Update 2:

    Never mind. A server reset seems to have fixed it. 

  3. On 11/27/2021 at 11:11 AM, bonienl said:

    Unlike generic file managers, this file manager is Unraid aware and takes the specialties of Unraid into consideration.

    Within the file manager the user performs either a disk-to-disk operation or a share-to-share operation; this is specifically done to protect the user against the so-called "Copy" bug.

     

    This file manager version is a first iteration, and further development will be done based on user feedback, hence the availability as a plugin, which allows for continuous updates.

     

     

    It sounds to me like this is more or less a scaled-down version of unBalance, but from within the webUI. Is that correct? It is obviously missing some of the functionality of unBalance, like gather, but the way it works seems essentially the same.

     

    Edit: Actually, this might have more functionality than unBalance. I don't remember for sure, but I don't believe it had share-to-share moves?

  4. 14 minutes ago, Squid said:

    No. They will only spin up when a write to any of them happens.

     

    I figured, but wanted to make sure.

    Also, I know it is not built into this plugin, but I was wondering if it is possible to have turbo write invoked any time writes go to certain shares or come in from a certain user (for a Windows PC). I have a few shares that are set to not use the cache, as the files are sensitive and I would prefer they get written to the encrypted array ASAP.
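     
    Edit: For what it's worth, here is the rough shape of what I mean, using inotifywait from inotify-tools (assuming it is available) to flip the write method while a sensitive share is being written to. The share path and idle timeout are placeholders, and this is an untested sketch, not a working solution:

     
    #!/bin/bash
    # Watch a share and enable reconstruct ("turbo") write while writes occur.
    SHARE=/mnt/user/sensitive   # placeholder share path
    while true; do
        # Block until something is created or modified under the share.
        inotifywait -r -q -e create -e modify -e close_write "$SHARE"
        /usr/local/sbin/mdcmd set md_write_method 1   # turbo write on
        # Stay in turbo mode while events keep arriving; -t 600 ends the
        # inner loop after ten idle minutes.
        while inotifywait -r -q -t 600 -e create -e modify -e close_write "$SHARE"; do
            :
        done
        /usr/local/sbin/mdcmd set md_write_method 0   # back to default
    done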

  5. Does invoking turbo write spin all the drives up itself? What I mean is: if I have it set to invoke turbo write when 4 drives are spinning (6 of 10 total spun down), will it automatically spin all the drives up even if no writes are happening? So if I have 4 people streaming Plex, and all 4 happen to pick media on 4 different drives, causing those to spin up, I know this will switch it to turbo write. I just want to make sure that it won't also spin up all the drives until one of them is actually writing. I read through the explanation of turbo write, and this entire post, but didn't see this specifically called out.

  6. On 7/24/2016 at 11:55 AM, Squid said:

    Enable / Disable Turbo Write Mode

     

    Enable

     

    #!/bin/bash
    # 1 = reconstruct write ("turbo"): writes engage all array drives
    /usr/local/sbin/mdcmd set md_write_method 1
    echo "Turbo write mode now enabled"
     

     

    Disable

     

    #!/bin/bash
    # 0 = read/modify/write, the default method
    /usr/local/sbin/mdcmd set md_write_method 0
    echo "Turbo write mode now disabled"
     

     

     

    turbo_writes.zip

     

    If you wanted to enable/disable Auto mode, would it be write method 2?

     

    Edit: Did a little more digging and found out auto is not what I thought it was, so the question is kind of meaningless now. 🤷‍♂️

  7. On 9/10/2021 at 1:50 AM, Flemming said:

    Bumping this because I have the exact same use case.

    Did you find anything @jebusfreek666?

    Not really. I am pretty sure this can be handled with a user script, though I am not an expert by any means. In my case, I am starting to think I will just use dedicated unassigned disks, or possibly a cache pool, instead. Depending on the number of cameras I end up with and the quality of the video, this could be a rather large amount of data, and expecting mover to handle that much every day might be asking a bit much.
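     
    Edit: The user-script half, at least, looks simple enough. A minimal sketch of the 30-day cleanup, assuming recordings land under a share path like /mnt/user/cctv (a placeholder), run daily via the User Scripts plugin:

     
    #!/bin/bash
    # Delete camera recordings older than 30 days, then prune empty folders.
    find /mnt/user/cctv -type f -mtime +30 -delete
    find /mnt/user/cctv -mindepth 1 -type d -empty -delete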

  8. I am trying to keep my disks spun down as much as possible, and I have narrowed the biggest offender down to Bazarr. It is downloading subs and spinning up the disks very frequently. I do not see a way to adjust how often it searches or downloads. If someone knows how to do this, that would be great.

     

    Or, better yet, if someone knows a way to make it download to the cache instead (still the same path, i.e. /mnt/user/media/movies, but on the cache), then it would never spin up the drives; the subs would be accessible right away and would get moved over with mover.

  9. On 6/18/2021 at 12:50 PM, jebusfreek666 said:

    I have been having an issue in Ombi for a while now. It doesn't seem to mark things as available after they are added to Plex. All requests go through fine to Sonarr/Radarr. I went to settings to test the connection to Plex, and it connects successfully. Not sure where the issue is.

     

    Logs are completely filled with this:

    Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value:

    The Discover page displays nothing. Searches return only TV shows; movie posters look like they are trying to load but never do.

     

  10. I have been having an issue in Ombi for a while now. It doesn't seem to mark things as available after they are added to Plex. All requests go through fine to Sonarr/Radarr. I went to settings to test the connection to Plex, and it connects successfully. Not sure where the issue is.

  11. 3 hours ago, hoppers99 said:

    I use Sonarr/Radarr to update the library itself when they add new content as they will be my main source moving forward. They seem much more efficient at just telling Plex "here's this new thing I just added for you" rather than Plex having to rescan everything. 

     

    Where is the setting in Sonarr/Radarr that makes Plex update its library?

  12. I have been thinking about setting my disks to spin down after a certain amount of time. I realize that I will have to turn off periodic scans in Plex, as they will keep the disks spinning. My question is: I had read that Plex will throw errors if the disk the media is on has to spin up. I don't know if this is an old issue or something that has been resolved in recent builds. Is this still the case? I have 10 data disks, and my media is spread across all of them, as this is primarily a Plex server. I will probably have to skip spin-downs if my Plex users are going to get errors all the time.

  13. 38 minutes ago, limawaken said:

    then I will change the cache setting for my share to "yes"; during the day the cameras will write the recordings onto the pool, and at night mover will move the recordings over to the array.

     

    Yes, that is correct. I was wondering because in your previous response you said:

     

    3 hours ago, limawaken said:

    then I'll assign it to only one disk in my array.

     

    Which I took to mean writing to a dedicated disk in the array.

  14. 2 hours ago, limawaken said:

    I'll let it run a few weeks and see how much space I'll need for it, then I'll assign it to only one disk in my array. I also wanted to see how much data would need to be moved every day.

    I have a feeling that it may not be viable to move all the recordings over to the array every night... so I'll probably make it a 2-disk pool for redundancy.

     

    Just curious, how many gigs do your 15 1080p cameras use every 24 hours?

     

    If you set it up on one of your array disks, won't you just be right back in the situation of constant writes to the array? That's why I wanted to do a RAID 1 cache pool, for redundancy. Depending on how much space 30 days' worth of video uses, I could just leave it on the HDDs, I guess. But I only have a few spare 6TB drives, and I think it is going to be a lot more data than that.

     

    What do you mean it might not be viable to move to the array every night? Do you mean there would be too much data to transfer in that time span?

     

    As for the total size used by my cams, I do not have this info yet. It will take a while before it is up and running. It is still sort of in the planning phase. 

  15. 2 hours ago, ChatNoir said:

    Are you looking for:

    • write speed
    • pool size
    • redundancy
    • limit the spin up time of the Array
    • something else
    • a mix of the above?

     

    I guess pool size and redundancy would be the most important. I am not sure how important write speed really is in this instance, since NVRs routinely use HDDs. I am not sure of the exact drive size I will need, but rough estimates have put it well over 2TB daily, unless I am doing this wrong. I could have it write directly to the array, but I feel like this would be a waste, as it would be writing not only the data but also dual parity 24/7. I thought it would be nice to have mirrored copies in a cache pool and then set it to move the files over while I slept.

  16. I know that you can use them; I am just wondering whether my use case is a good idea or not. I want to use one as a location for my CCTV. I will have around 15 cams recording 1080p 24 hours a day. I want to write to a cache device to avoid the constant writes to the array. And since the largest SSDs are crazy expensive, I figured I could just throw a 6TB HDD in there for this purpose, then have mover run overnight to put the videos onto the array for storage. I would then have a script (I assume?) to remove files from the array when they reach 30 days old. So basically, is the use of an HDD better for this instance (or possibly 2 in RAID 1), or would it be better to use multiple SSDs in RAID 0?
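     
    Edit: A back-of-the-envelope sizing estimate, assuming roughly 4 Mbit/s per 1080p camera; actual bitrates depend on codec and settings, so scale the numbers accordingly:

     
    #!/bin/bash
    # Rough daily storage estimate for continuous recording.
    CAMS=15
    MBIT_PER_SEC=4                                 # assumed per-camera bitrate
    GB_PER_CAM=$(( MBIT_PER_SEC * 86400 / 8 / 1000 ))             # ~43 GB/day
    echo "Per camera:  ~${GB_PER_CAM} GB/day"
    echo "All cameras: ~$(( GB_PER_CAM * CAMS )) GB/day"          # ~645 GB/day
    echo "30-day total: ~$(( GB_PER_CAM * CAMS * 30 / 1000 )) TB" # ~19 TB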

  17. 13 minutes ago, binhex said:

    Sorry, I was skim reading and it was early :-). OK, so no VPN in play; does your ISP allocate you a static IP, or is it DHCP?

     

    Fairly certain it is static. So I could probably just change the MAC of my router and it would go through. But I figured setting up the VPN would allow for a little more privacy, and since I already have it anyway with Deluge, it would cost nothing extra.

  18. 2 hours ago, binhex said:

    Most probably this is due to somebody who had previously used your currently allocated VPN IP.

     

    No, it is not currently set up to use a VPN; I want to set it up to use PIA. I get the error when I try to access Nyaa without any VPN.

  19. Is there a tutorial somewhere on how to set up Jackett to route through a VPN? I have PIA, and Deluge goes through it. But one of the sites Jackett searches for me is Nyaa, for anime. For the past 3 days I have been getting an error 429, too many requests. Not sure what triggered this, as I haven't downloaded anything except a couple of episodes in that time frame. I can still access the site through the browser when I turn on the VPN client. So I was hoping to just set up Jackett to use the VPN so I can still get my anime.
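     
    Edit: One approach I have seen mentioned (untested by me) is to attach Jackett to the network namespace of the existing VPN container, so its traffic rides the same tunnel. The container and image names below are assumptions, and Jackett's webUI port would then need to be published on the VPN container instead:

     
    #!/bin/bash
    # Run Jackett inside the VPN container's network stack.
    docker run -d \
      --name=jackett \
      --network=container:binhex-delugevpn \
      -v /mnt/user/appdata/jackett:/config \
      linuxserver/jackett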

  20. I recently upgraded to 6.9.2 from 6.8.x. At the same time, I went through the process of encrypting all my data drives. I have run a recent parity check and know it is valid. But now, whenever I reboot my server, it auto-starts a parity check. I type in my passphrase and go to click Start, but the only option is to start with a parity check. I have been canceling them at this point, but I am wondering: is this expected behavior? Is this because Unraid started with the drives unmounted? Is there any way around this, or do I have to keep canceling the parity check after each reboot?

  21. 3 minutes ago, JorgeB said:

    XFS is the best choice for most users using single device cache, unless you need the btrfs features.

     

    Thank you, sir. It occurs to me that I had read before that the primary hang-up with using SSDs in the array was something to do with the way it handled TRIM. Is this possibly foreshadowing that feature becoming more feasible in the near future?

  22. 1 minute ago, JorgeB said:

    It's not necessary for btrfs pools; it still is if you use XFS.

     

    That is the fastest response I have ever gotten. Has the recommendation changed on what filesystem to use for cache/pool drives? I was always told btrfs only for multiple drives, and otherwise XFS due to stability.