
JonathanM
Moderators

  • Posts: 16,684
  • Joined
  • Last visited
  • Days Won: 65

Everything posted by JonathanM

  1. /mnt/cache is your cache drive or pool. If you don't have one assigned, then that location is in RAM. You must actually HAVE a cache drive assigned for that to work properly.
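A quick way to check this from the Unraid console (a sketch using the default path; adjust for your pool name):

```shell
# If a cache device is assigned, /mnt/cache is its own mount with a real
# filesystem (btrfs/xfs) shown in the df output. If not, the path is either
# absent or falls through to the RAM-backed root filesystem, and anything
# written there disappears on reboot.
df -hT /mnt/cache 2>/dev/null || echo "no cache mount: /mnt/cache is RAM-backed or absent"
```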
  2. That's the problem. Sonarr is looking in /downloads, not /data. Somewhere in one of your mappings you have that set wrong.
  3. [Screenshots in the original post: one from your Sonarr, one from your Radarr, one from your Deluge] See the difference? They must match.
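The usual fix is to give every container the same host-to-container mapping for the download area. A hedged sketch of what matching mappings look like (the image names and host path are illustrative, not taken from the posts above):

```shell
# Both containers see the same host folder at the same container path (/data),
# so any path Deluge reports is also valid when Sonarr goes to import from it.
docker run -d --name sonarr \
  -v /mnt/user/data:/data \
  linuxserver/sonarr

docker run -d --name delugevpn \
  -v /mnt/user/data:/data \
  binhex/arch-delugevpn

# Mismatch example: if Deluge saves to /data/... but Sonarr was started with
#   -v /mnt/user/data:/downloads
# then Sonarr looks under /downloads/... and never finds the completed files.
```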
  4. First, make sure plex is not set to autostart. Assuming your appdata folder will fit on your cache drive, then yes, you can just use the mover. Set the appdata share to Cache: Prefer. Before you run the mover, you MUST be sure there are no open files in the appdata tree, as mover won't touch open files. The easiest way to do that is to be sure the Docker service is not running, so there is no Docker item visible in the GUI list of pages: Settings, Docker, Enable Docker: No. After the mover is done, enable the Docker service. Before starting plex, edit the config path in the plex docker so that instead of /mnt/disk3/appdata/plex it's /mnt/cache/appdata/plex. Verify stability in that state before moving on, as that is the premise behind the test.
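In shell terms, the sequence above looks roughly like this (paths are from the post; `mover` is Unraid's built-in script, and the `lsof` check is just one way to confirm nothing is holding files open):

```shell
# 1. Stop the Docker service first (Settings, Docker, Enable Docker: No),
#    then confirm nothing still holds files open under the appdata tree.
lsof +D /mnt/disk3/appdata     # should print nothing once Docker is stopped

# 2. With the appdata share set to Cache: Prefer, invoke mover manually.
mover

# 3. Afterwards, point the plex container's config path at the cache copy:
#    /mnt/cache/appdata/plex instead of /mnt/disk3/appdata/plex
```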
  5. Read this: https://forums.unraid.net/topic/54882-630-how-to-setup-dockers-without-sharing-unraid-ip-address/ (the original post then quoted the specific relevant section).
  6. Link? Has it been litigated successfully in a similar application?
  7. Well, there goes that theory. You are right, a storage system shouldn't corrupt data. However, it's not the software primarily, it's some interaction between the software and a subset of hardware, or some combination of software and settings, or hardware and settings, that is the issue. Not everyone has the problem, so it's difficult to pin down what exactly is the issue. Earlier in the thread the developers asked for a way to duplicate the issue, but what was put forward doesn't duplicate it on all systems, so isn't especially helpful. I personally have 4 running unraid servers, all on 6.7.0, all with different hardware configurations, none of them have the issue. Without a way to trigger the issue predictably, isolating the cause is very difficult.
  8. Disclaimer: I don't use NUT, but perhaps the mechanism is similar to APCUPSD, which I do use. The following is how APCUPSD works. 1. Power is lost. The master server, with the UPS directly connected, detects the outage, starts a timer, and monitors the battery level. 2. Slave units receive the power outage notice, start their individual timers, and receive periodic battery level updates from the master. 3. Each individual client and the master make their own decisions, based on their own config files, as to when to start the shutdown routine. So if the clients don't conclude that it's time to shut down, because their individual conditions aren't met before the master shuts down, they will never shut down. They don't rely on the master to signal imminent shutdown; they act on their own conditions, either time on battery or battery level remaining. Personally, all my clients (including server-hosted VMs) are set to begin shutdown within 3 minutes of the power-out event. The master server waits for 6 minutes, then begins shutdown. Battery backups are best used for as little time as possible, preserving the battery level so as not to wear the batteries out prematurely. Discharging below 50% capacity severely shortens lead-acid battery life. If you need long runtimes during power outages, you either need a generator to kick in and take over from consumer-level backups, or enterprise battery backups with add-on extended battery packs.
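For reference, the master/slave split described above maps onto `apcupsd.conf` roughly like this. The values mirror the 3-minute/6-minute policy in the post, and the master's IP address is an assumption; treat this as a sketch, not a drop-in config:

```conf
# --- Master (UPS directly connected) /etc/apcupsd/apcupsd.conf ---
UPSTYPE usb
DEVICE                     # blank for USB autodetection
NISIP 0.0.0.0              # serve battery status to network slaves
NISPORT 3551
TIMEOUT 360                # master shuts down after 6 minutes on battery

# --- Slave (network client) /etc/apcupsd/apcupsd.conf ---
UPSTYPE net
DEVICE 192.168.1.10:3551   # master's address and NIS port (assumed IP)
TIMEOUT 180                # this client shuts down after 3 minutes on battery
```

Because each machine evaluates its own `TIMEOUT` (or `BATTERYLEVEL`/`MINUTES`) locally, a slave whose threshold is set longer than the master's will never reach it, which is exactly the failure mode described above.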
  9. It checks for matches to the database, so it will detect anything in the database. At the file level, it's all just ones and zeros being checked. It's not like a full featured A/V inside your OS, that looks at critical run entries and such, all it does is scan the files.
  10. All of you who have this as a frequent recurring issue: could you go to Settings, Global Share Settings, Tunable (enable Direct IO), set it to No instead of Auto or Yes, and see if that changes anything?
  11. Great, that means your proxy is working correctly! Just to be sure, is the network type on the docker configuration for both delugevpn and jackett the same?
  12. Try configuring a browser to use the proxy, and see what happens. To see if it's working, try going to a site in that browser that tells you your external IP. You should be able to see your real external IP with the proxy setting OFF, and a VPN provided IP with the proxy setting ON in the browser.
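The equivalent check from a command line, if you prefer it to a browser (the proxy host and port are assumptions for a typical Privoxy setup; ifconfig.me is one of several what's-my-IP services):

```shell
# Without the proxy: should return your real WAN IP.
curl -s https://ifconfig.me ; echo

# Through the VPN container's HTTP proxy (assumed host:port): should return
# the VPN provider's IP instead. If both commands print the same address,
# the proxy isn't routing your traffic through the VPN.
curl -s -x http://192.168.1.50:8118 https://ifconfig.me ; echo
```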
  13. Are you connected to the local wifi on the same lan as the server? Not a guest ssid or some other segment?
  14. @bonienl should easily be able to get that changed going forward.
  15. Here is what the help says now. The only settings that mention mover are yes and prefer. How should it be worded so as to be more clear?
  16. Does your VPN support incoming open ports? Only a few VPN providers do, and of those, some don't support it on all endpoints.
  17. Huh? That doesn't make any sense. Also, as itimpi said, make sure the Docker and VM services are set to No in Settings. If there are still Docker and VMs tabs in the webGUI, they aren't off yet.
  18. If you mean global exclude list, yes. Those disks are globally excluded from participating in user shares.
  19. So you want him to rewrite the app just for you, to work the way you want it to, when it works just fine using the recommended methods? It's honestly not as simple as you think it is, and you are the only person I've heard of to set up remote access this way. You are pretty much asking to commission a custom version of the app just for you.
  20. You are trying to bend the app to work the way you think it should, not the way it does. It's not simply a pass through mobile interface to the unraid GUI, it's a completely separate UI that works through the companion plugin. Normally you would set up a VPN from your mobile device to your home network, use ControlR or the normal webUI that way. Until limetech finishes securing the webUI, it's not recommended to expose it to the internet. They are working on it, but until they say it's ok, I wouldn't.
  21. I'd rather lose a parity volume than a data volume. Both are needed to recreate another missing data volume, but if one of the dead volumes is parity, you haven't lost any data. If you lose 3 data volumes while your parity is intact, that parity is worthless, since rebuilding requires all the remaining data volumes as well.
  22. For what? Please describe what you need to accomplish.
  23. Why are you trying to use the OpenVPN AS docker? Pfsense has OpenVPN functionality pretty much out of the box.