bumblebee21

Everything posted by bumblebee21

  1. SOLVED! For posterity's sake: on a whim, I switched from defining the paths in terms of unRAID shares to pointing them directly at the cache drive (e.g., /mnt/cache/docker/omada/data instead of /mnt/user/docker/omada/data), and sure enough the application started up. There must be some issue with how the application handles unRAID's user shares. Here is my config for future reference:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='omada-controller' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -v '/mnt/cache/docker/omada/data':'/opt/tplink/EAPController/data':'rw' -v '/mnt/cache/docker/omada/work':'/opt/tplink/EAPController/work':'rw' -v '/mnt/cache/docker/omada/logs':'/opt/tplink/EAPController/logs':'rw' -e SMALL_FILES=true -p 8043:8043 -p 8088:8088 -p 27001:27001/udp -p 27002:27002 -p 29810:29810/udp -p 29811:29811 -p 29812:29812 -p 29813:29813 'mbentley/omada-controller'
     89479fd2df50e363b88e624f1b2bcea6cad7713e442c0248642a367ba0056cb4
     The command finished successfully!
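For anyone comparing their own template against this one, the share-to-cache path swap described above is just a prefix substitution; a minimal sketch (the function name and sed approach are mine, not part of unRAID):

```shell
#!/bin/sh
# Map an unRAID user-share path to its direct cache-drive equivalent,
# since the container only started with /mnt/cache/... bind mounts.
share_to_cache() {
  # e.g. /mnt/user/docker/omada/data -> /mnt/cache/docker/omada/data
  echo "$1" | sed 's|^/mnt/user/|/mnt/cache/|'
}

share_to_cache /mnt/user/docker/omada/data
```

Note this only makes sense for shares pinned to the cache drive; a path that actually lives on the array has no /mnt/cache equivalent.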
  2. I just tried recreating the paths in the container, pointing at a brand-new directory in the Docker share. The docker log then says:

     WARNING: owner or group (99:100) not set correctly on '/opt/tplink/EAPController/data'
     INFO: setting correct permissions

     And when I checked in the unRAID terminal with ls -l, the new directories got 508:508 ownership assigned successfully, and I can see files in the directories. So, it sure doesn't seem like a permissions issue.
  3. Thanks for your reply! I did initially get an error message about lack of permissions, but after 'chown -R 508:508' in the terminal for '/mnt/user/docker/EAPcontroller/data', logs, and work, I no longer got the lack of permissions errors. But the application still doesn't start.
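A quick sketch for verifying that the chown took, assuming the 508:508 UID:GID from above (the check_owner helper is illustrative, not part of unRAID or the container):

```shell
#!/bin/sh
# Report whether a directory's owner:group matches what the container
# expects (it chowns its volumes to 508:508 on startup).
check_owner() {  # usage: check_owner DIR EXPECTED_UID:GID
  if got=$(stat -c '%u:%g' "$1" 2>/dev/null); then
    if [ "$got" = "$2" ]; then
      echo "$1: ok"
    else
      echo "$1: $got (expected $2)"
    fi
  else
    echo "$1: missing"
  fi
}

for d in data work logs; do
  check_owner "/mnt/user/docker/EAPcontroller/$d" 508:508
done
```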
  4. I'm trying to set up this docker to manage my WiFi APs: https://hub.docker.com/r/mbentley/omada-controller. I can get everything to work great if I don't configure any paths, but then the docker doesn't have persistent data/config files (as noted on the docker page). So, when I try to add paths per the docker page's instructions (e.g., -v /mnt/user/docker/EAPcontroller/data:/opt/tplink/EAPController/data), the application within the docker won't start. Any ideas?

     Docker config without paths configured:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='omada-controller' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -v '/mnt/user/docker/EAP/copy':'/opt/tplink/copy':'rw' -p 8043:8043 -p 8088:8088 -p 27001:27001/udp -p 27002:27002 -p 29810:29810/udp -p 29811:29811 -p 29812:29812 -p 29813:29813 'mbentley/omada-controller'
     f9a73537e853ec8d5d5d13b7973d0faeb9c26311873cc8a9f24c5d4a3b262aa1
     The command finished successfully!

     Docker log without paths configured:
     INFO: Time zone set to 'America/New_York'
     INFO: Starting Omada Controller as user omada
     startup...
     May 30, 2020 10:58:06 AM org.hibernate.validator.internal.util.Version <clinit>
     INFO: HV000001: Hibernate Validator 4.3.1.Final
     Omada Controller started

     Docker config with paths configured:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='omada-controller' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -v '/mnt/user/docker/EAP/copy':'/opt/tplink/copy':'rw' -v '/mnt/user/docker/EAPcontroller/data':'/opt/tplink/EAPController/data':'rw' -v '/mnt/user/docker/EAPcontroller/work':'/opt/tplink/EAPController/work':'rw' -v '/mnt/user/docker/EAPcontroller/logs':'/opt/tplink/EAPController/logs':'rw' -p 8043:8043 -p 8088:8088 -p 27001:27001/udp -p 27002:27002 -p 29810:29810/udp -p 29811:29811 -p 29812:29812 -p 29813:29813 'mbentley/omada-controller'
     8062e8d47c6b04ff04bf539e361160137c76f9ca9ffaf3df70bfa91d6517556f
     The command finished successfully!

     Docker log with paths configured:
     INFO: Time zone set to 'America/New_York'
     INFO: Starting Omada Controller as user omada
     SLF4J: com.tp_link.eap.util.m.a
     SLF4J: The following set of substitute loggers may have been accessed
     SLF4J: during the initialization phase. Logging calls during this
     SLF4J: phase were not honored. However, subsequent logging calls to these
     SLF4J: loggers will work as normally expected.
     SLF4J: See also http://www.slf4j.org/codes.html#substituteLogger
     startup...
     May 30, 2020 10:48:15 AM org.hibernate.validator.internal.util.Version <clinit>
     INFO: HV000001: Hibernate Validator 4.3.1.Final
     Failed to start omada controller, going to exit
  5. For those still having issues, download the latest version of Pulseway for Slackware. As of Pulseway 6.1, they added support for newer libssl versions, which seems to have fixed the issue. You may have to update your symlinks as well.
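The symlink update mentioned above might look like this sketch; the library filenames are placeholders, not taken from Pulseway's docs, so check what the agent actually links against (e.g. with ldd) before re-pointing anything:

```shell
#!/bin/sh
# Re-point a stale library symlink at a newer library version.
# Filenames below are illustrative only.
link_lib() {  # usage: link_lib TARGET LINKNAME
  ln -sfn "$1" "$2"   # -f replaces an existing link, -n treats LINKNAME literally
  readlink "$2"       # print where the link now points
}

# e.g. link_lib /usr/lib64/libssl.so.1.1 /usr/lib64/libssl.so.1.0.0
```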
  6. https://forums.geforce.com/default/topic/973624/gamestream/cannot-stream-when-game-data-is-on-network-drive/ Looks like this is a known issue. Sounds like NetDrive may be an option, but it requires a license.
  7. I'm having the same issue. I suspect it has something to do with gamestreaming not liking the games on a share, because I don't get the error for locally installed games (i.e. on the C drive). I tried a symlink to the share, but that didn't help. What a bummer.
  8. I'm trying to install this HBA to add a few more drives. I've tried the card in both PCIe slots, and it doesn't seem to be recognized in either. Specifically, it doesn't show up when I look at PCI devices in the unRAID console or run lspci. I can see one green LED on the card. I searched around and couldn't find much documentation, so I'm hoping you folks might have some ideas. Is it a bum card? An incompatible mobo?

     Hardware: Gigabyte GA-B75M-D3H, Intel i5-3470S, 16GB DDR3
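A sketch of how one might filter lspci output to see whether the HBA enumerates at all; the vendor strings below are common for LSI-based HBAs but are assumptions here, not specific to this card:

```shell
#!/bin/sh
# Scan lspci-style output for storage controllers / common HBA vendor strings.
find_hba() {  # reads lspci output on stdin
  grep -iE 'sas|scsi|raid|lsi|broadcom' || echo "no HBA-like device found"
}

# Typical use on the server:
#   lspci | find_hba
```

If nothing shows up here, the card isn't enumerating on the PCIe bus at all, which points at the slot, BIOS settings, or the card itself rather than drivers.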
  9. Jcloud, you're my hero. I tried pretty much everything else in diskpart before, but not clean! It worked like a charm. Thanks so much.
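For posterity, the diskpart sequence referenced above could be scripted roughly like this; "select disk 1" is a placeholder, and clean is destructive, so confirm the disk number from "list disk" before running it:

```shell
#!/bin/sh
# Generate a diskpart script; on Windows run it as: diskpart /s wipe.txt
# DANGER: 'clean' erases the selected disk's partition table.
# 'select disk 1' is a placeholder -- confirm the number via 'list disk' first.
cat > wipe.txt <<'EOF'
list disk
select disk 1
clean
create partition primary
format fs=ntfs quick
assign
EOF
cat wipe.txt
```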
  10. So, this is a bit of an odd one, but hoping someone has tackled this issue before. I have an old 1TB drive that was in my unraid server until I swapped it out for a larger drive. I just put it in a new Windows machine (not a new unraid server) and tried to format. I can format successfully and assign a drive letter. But, when I try to access the drive, I get the message "Location is not available. D:\ is not accessible. The device is not ready." Everything I could find about that error seems to be about USB drives and hasn't been helpful. Any ideas?
  11. Also, found this after searching through a few dozen pages of the PMS docker thread. Will give this a shot, as well.
  12. Thanks for your reply. Interesting that it could be Plex. Just got an update for the PMS docker, so I'll give that a shot.
  13. Background
      unRAID version: 6.3.5
      Plugins: Community Applications, CA Backup, CA Cleanup Appdata, Turbo Write, CA Auto Update, Dynamix Cache Dirs, File Integrity, SSD TRIM, System Buttons, System Info, Fix Common Problems, Tips and Tweaks, Unassigned Devices
      Dockers: Plex Media Server, jackett, Sickrage, Transmission
      Hardware: i5-3470S, Gigabyte GA-B75M-D3H, 16GB RAM, 1 x 240GB SSD cache drive, 5 x 1TB data drives
      VMs: None

      Problem
      The system has been hanging regularly (every other day or so) for the past 1-2 weeks. By hang, I mean unresponsive: cannot telnet, no dockers, no network shares, etc., but the system is still on. It usually happens in the early morning. I finally captured logs and diagnostics (attached) and need help interpreting them.
      FCPsyslog_tail.txt tower-diagnostics-20170905-0419.zip
  14. Good catch. I installed mcelog to check it out. The logs reported it as an "internal parity error." Googling around, it looks like this is actually a benign error (a false positive): Intel has released an erratum saying that these errors may be falsely reported and can be safely ignored.
  15. The saga continues. Less than 24 hours after booting up the rig with the new PSU (and unRAID 6.3), I got another lockup. Syslog and diagnostics attached; again, I don't see anything in them that presages the failure or lockup. I'd really like to avoid going without my primary VM, but that may be my only option at this point. FCPsyslog_tail.zip tower-diagnostics-20170208-0213.zip
  16. Yeah, definitely not the best for troubleshooting. I wanted to upgrade given the security patches it had, not necessarily to fix issues. The PSU I'm hoping may actually help. At any rate, I found a few references in other linux distros to turning off ACPI in bios to address the hpet issues. Sure enough, with ACPI off, I no longer see those interrupts. So, I guess now I'll leave it in troubleshooting mode and wait for another lock up. Thanks very much for your help, John.
  17. Welp, I may have made things worse. I'd really like to avoid losing my main VM for weeks to see whether the system crashes again. So, I swapped out the PSU with a new one a buddy had handy. At the same time, I also upgraded to 6.3.0. Since then, the system has yet to crash (though I haven't had enough uptime to say that it's stable), but I'm now getting repeated 'lost rtc interrupts' messages in the syslog. Specifically messages like, 'kernel: hpet1: lost 522 rtc interrupts,' Any thoughts? I only found a few mentions of this error on the unraid forums.
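Those hpet messages can at least be tallied from a saved syslog with a small filter; a sketch (the message format is taken from the post above, the helper itself is mine):

```shell
#!/bin/sh
# Sum the "lost N rtc interrupts" counts from hpet messages in a syslog.
count_lost_rtc() {  # reads syslog lines on stdin
  grep -o 'lost [0-9]* rtc interrupts' | awk '{s += $2} END {print s+0}'
}

# Typical use on the server:
#   count_lost_rtc < /var/log/syslog
```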
  18. Yeah, shutdown was a bad title. What happens is this: the VM goes dark (nothing on the screen, no response from mouse/keyboard), the web-based GUI does not load (unreachable), Telnet loses its connection and cannot reconnect, and the onscreen output from unRAID through the iGPU is still there but no longer updates. Meanwhile the rig itself is still on, fans spinning, lights on, etc. Any other ideas on hardware that might cause this? The only component that isn't relatively new is my PSU, which is a solid, reputable unit, but pushing 5-6 years old. At the same time, I would think a failing PSU would totally shut the system down, not just make it go unresponsive. Thanks again for your help.
  19. First DIMM/slot ran for 13 hours: 6 passes, 0 errors. Running the other DIMM/slot now.
  20. No, never. It's sat unresponsive for at least 10 or 12 hours without any response.
  21. Thanks for looking. I haven't run a memtest in a while, but will plan to do that tonight.
  22. John, thanks for the reply. The syslog I posted was in the /logs/ directory of the boot flash, along with all the diagnostics. I looked in the /config/ directory but did not see a logs directory. Screen cap below.