
Squid

Community Developer
  • Posts

    28,769
  • Joined

  • Last visited

  • Days Won

    314

Everything posted by Squid

  1. Shared doesn't bother me if you have that set. The next release of FCP will not complain if you have Shared set instead of Slave
  2. The My Servers plugin is awesome for keeping backups up to date, even if you don't use its remote access features. Without any current backup, what you're going to have to do is assign every drive (except what's obviously a cache drive) to the array as a data drive. If you only had 1 parity drive, 1 drive will come up as unmountable; if you had 2 parity drives, 2 will come up as unmountable. If more drives come up as unmountable than you had parity disks, then stop whatever you're doing and post your diagnostics. Tools - New Config and then assign the drives accordingly. Your existing docker containers should come up as normal, but without the templates you won't be able to edit them (or reinstall them) until such time as you get around to recreating the paths / ports you had assigned to them (the Docker tab will basically give you most of the information you need to recreate them)
  3. Because with some recent changes to UD, that test has been added back into FCP for the most trouble-free experience
  4. 100% correct. I didn't consider (but you did) that the OP might have also wanted the data to be wiped clean.
  5. Delete the contents of /config on the flash drive (except for the .key file and the go file) and reboot But save a backup of it first just in case there does wind up being something you want or need
  6. Anything outside of the array is a "cache pool". Previously you would accomplish something like what you want by mounting the drive in Unassigned Devices, but it's easier to let the OS itself manage it as a cache pool. No physical disconnecting is required; you're just rearranging how the system thinks the drives are assigned. Your new config allows you to remove the drive from the array (parity will need to be rebuilt). Then you hit Add Pool to add another cache pool that is specific to the security drive. Writes etc. to that drive won't involve parity at all, and you can still access it via the user shares, so no adjustment to any VMs or Docker containers is required. Just set the shares that exist on that drive to be "Cache Only" and to use that new cache pool.
  7. You would have to adjust the share settings to point specifically to that new cache pool, and (presumably) set the Use cache settings to be only.
  8. What you would want to do is go to Tools - New Config (keep all assignments). Then, from Main, unassign the security drive and re-assign it to a new cache pool. You will need to rebuild parity (it is no longer valid)
  9. Can you reach it by using the IP address? https://ipAddress (NOTE: you must actually enter https)
  10. Unfortunately, I'm out of ideas here. Anyone else have any suggestions?
  11. No. You have one or more path mappings in the Plex app that have a host path of /mnt/disks/.... You need to edit them and select one of the SLAVE access modes for the most reliable and trouble-free connection to that path within Plex
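As a rough sketch, this is the kind of docker run mapping such a template setting translates to; the container name, image, and paths here are illustrative placeholders, not from the original post:

```shell
# Sketch only: a bind mount of an Unassigned Devices path with slave
# mount propagation, which is what the "RW/Slave" access mode maps to.
# Name, host path, container path, and image are assumptions.
docker run -d \
  --name plex \
  -v '/mnt/disks/media':'/media':rw,slave \
  lscr.io/linuxserver/plex
```

With slave propagation, if the UD mount drops and comes back, the container sees the remounted filesystem instead of an empty directory.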
  12. The last time I checked this out, when I used Chrome I indeed saw exactly what you saw: while installing a container in one tab, I couldn't do anything in another tab. However, I also tried doing an install in Chrome and then browsing around doing something else in Firefox / Brave, and that worked perfectly. This *implies* that it's a browser issue somewhere along the line. Either way, this is a known issue
  13. It's this task: "http-nio-8080-A". No idea what it is, but I did notice you're running CrashPlan. To put it mildly, CP is a pig bordering on ignorant. Try this on it
  14. Generally, you would have piHole / AdGuard and the like running on their own dedicated IP address so that there is no problem. The docker run command's error reflects every port in use, not simply ports another container is using. If you've switched it to bridge mode, then there are other ports already in use for one reason or another by the OS, and all conflicting ports would need to be resolved (eg: 53)
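A minimal sketch of the dedicated-IP approach; the network name, address, and image are assumptions (Unraid's custom Docker network is commonly named br0), so adjust to your own setup:

```shell
# Sketch: give the container its own IP on a custom Docker network so
# its ports (e.g. 53 for DNS) cannot collide with the host's services
# or with other containers. Network name and IP are placeholders.
docker run -d \
  --name pihole \
  --network br0 \
  --ip 192.168.1.250 \
  pihole/pihole
```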
  15. Without SSH access, there's pretty much nothing you can do without rebooting.
  16. No. It's because the board is reporting to the OS that it supports 64G max. Doesn't really mean anything, because as you've seen it is simply informative.
  17. Not particularly. You could however use a script akin to:
     docker stop firstName
     docker stop secondName
     and run it through User Scripts
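Put together as a complete script (the container names are the placeholders from the post; replace them with the names shown on your Docker tab):

```shell
#!/bin/bash
# Stop the containers in the order listed; each name must exactly
# match the container name shown on the Docker tab.
docker stop firstName
docker stop secondName
```

Save it in the User Scripts plugin and run it manually or on whatever schedule you need.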
  18. Actually, try 26a update. You probably upgraded to > 6.9.0 from a previous version and never had a reason to adjust those share settings
  19. https://forums.unraid.net/topic/57181-docker-faq/#comment-566084
  20. Your permissions seem strange on /mnt (write access, but no read access). Disable Docker in Settings - Docker, disable VMs in Settings - VM Settings, and reboot into safe mode
  21. Post your diagnostics so I can also understand
  22. Will only really be useful to plugin authors, but if you have a requirement for another app, and you detect that it's not installed and should be, etc., then you can add an appropriate link or button that will bring up CA's search results directly for the term. eg: <a href='/Apps?search=unassigned%20devices%20plus'>Install Unassigned Devices Plus</a> will bring up the search results for Unassigned Devices Plus automatically in CA. (This will take effect on any CA version > 2022.01.26)
  23. Just as an FYI, this release adds in some shares vs cache pool settings. For those who have been waiting for it, Selectable Tests has now bubbled up to be the next item on this plugin's agenda.