Monteroman

Everything posted by Monteroman

  1. This did the trick for me! Thank you for taking the time to figure this out.
  2. I would if I were a programmer, but I'm not. I spent a couple of hours trying to figure out where that was coming from and couldn't pin down the issue, but it seems to be in Kernel.php.
  3. App: LibreNMS
     I've installed and configured the LibreNMS docker, the database was generated perfectly, and I was able to add devices, but when I run the validate script it fails on the poller, stating that it isn't configured. When I get into the CLI of the container and run "artisan schedule:list", it produces the following:

         */5 * * * *  Closure at: app/Console/Kernel.php:75
         0 1 * * 0    php artisan maintenance:fetch-ouis --wait

     I'm going to take a leap here and guess that the first entry should be calling /opt/librenms/poller-wrapper.py or something similar? If I run poller-wrapper.py manually, it does execute a poll like it should, but for some reason the built-in schedule is running "Closure at: app/Console/Kernel.php:75" instead (see the poller check sketch after this list). Any thoughts?
  4. Hi JorgeB. I did attempt what you suggested and it was just hanging. I have a backup of the cache drive, so I reformatted the drive back to btrfs, which has been stable for me. I only recently converted it to ZFS because I wanted to take advantage of snapshots of my VMs before doing updates. Thanks for the reply.
  5. I am able to bring up the system in safe mode, but as soon as I bring the array online, I get the error again. I was able to do a ZFS scrub on my single spinning disk formatted in ZFS, but when I try to do it on the ZFS-formatted cache drive, it hangs (the scrub commands are sketched after this list). The cache drive is relatively new (within a few months). As another test, I was able to mount the array without the cache drive and it came up clean and started all services. Is this telling me I have a corrupted ZFS cache drive? If so, is there any way to repair it without reformatting it?
  6. Hi all. My Unraid server froze up today and I had to hard boot it. When it tried to start the array, I saw this on the console:

         Aug 22 16:05:15 Tower kernel: VERIFY3(range_tree_space(smla->smla_rt) + sme->sme_run <= smla->smla_sm->sm_size) failed (281447194996736 <= 4294967296)
         Aug 22 16:05:15 Tower kernel: PANIC at space_map.c:405:space_map_load_callback()

     The system will not allow me to start any plugins or services; it is stuck at starting services when the array comes up. Thanks for any help anyone can offer! tower-diagnostics-20230822-1628.zip
  7. APP: Backblaze
     Backblaze just released v8.5 and I wanted to share my experience with upgrading manually. With the container running, I went into Portainer and opened the shell console as the "app" user. From there, I ran the commands from a previous post:

         cd /config/wine/dosdevices/c\:/
         curl -L "https://www.backblaze.com/win32/install_backblaze.exe" --output "install_backblaze.exe"
         wine64 "install_backblaze.exe"

     The installer popped up over the running client. I clicked Install and waited a few seconds, but nothing appeared to happen. Don't panic! It's actually installing. When it was done, it presented a screen showing it was finished, then it promptly locked up.

     I went into the container settings and changed the size to 1024 x 768 (the Display height and Display width variables), because the UI got bigger with this update. I had previously set USER_ID and GROUP_ID to 0 to try to get rid of the dreaded bz_permissions popup everyone (including me) has been hitting with 8.0. Those user and group settings never seemed to help with 8.0, but I left them in place when updating to 8.5. The first startup of 8.5 seemed to lock up, so I ran through the three commands to reinstall again, and after that, surprisingly, the UI came up. Guess what?! No more bad bz_permissions popups! I don't know if it was 8.5 or a combination of 8.5 and the user/group ID settings, but I haven't seen one yet, and I usually do within 20 seconds or so; it's been an hour. Hopefully those who run into this have decent luck updating. I did have to use Portainer, though, because you can't "su app" from the Unraid container console.

     ---- EDIT ----
     Another note about the 8.5 update: if you restart the container, the app boots back up but the UI appears to be locked up, with part of it not showing. That doesn't mean it isn't backing up; it seems to be a Control Panel UI issue. If you go to the shell, run "ps x", look for "C:\Program Files (x86)\Backblaze\bzbui.exe -noqiet", find the process number associated with it, and kill -9 that process, it kills the GUI. Then, inside the UI, you can relaunch the Backblaze Control Panel from the drop-down in the bar at the left and it comes up normally (see the kill/relaunch sketch after this list).

     ---- EDIT 2 ----
     After letting it run for a while, the bzbui.exe UI tends to lock up, but the back-end processes are still backing up. I don't know if it's missing something or not, but the only way to get it back is to kill it at the command line and restart it from the icon. Buggy, but it works. I'd be curious to know how others fare.
  8. I've been running 6.10.3 RC1 all day since working with @JorgeB and it has been 100% fine. I have been running my Docker apps, all of my drives are btrfs, and I have not had a single issue all day. The log has been devoid of the errors I was seeing on the previous 6.10.x versions.
  9. So I'll have to restore whatever was on disk2 then. The cache, I knew I'd have to restore from backup; I've gone through that before. Well, it's going to be a long restore from CrashPlan then... fun times ahead. I've reverted back to 6.9.2, which has been stable on the MicroServer Gen8.
  10. So does this mean that I will indeed have to restore the array from a backup?
  11. Hey there. This may be a huge coincidence, but my server was running fine on 6.9.2 with all drives running normally, including the cache drive. When I upgraded and rebooted, the system came up, but the cache drive was in read-only mode and 1 of my 4 drives was unreadable (btrfs on all of my drives). I also noticed that /var/log filled up (syslog filled the volume). I deleted the file, then rebooted the system again. When it came back up, disk 2 was still offline and the cache drive was dead.
      The cache ended up having to be reformatted and restored, as did the disk that wouldn't mount. I tried running a btrfs check on it, but it wasn't working; there was something about the log on the disk being corrupted, so I formatted both the cache and that drive (see the btrfs recovery sketch after this list). I thought the array would rebuild the drive, but it isn't rebuilding it. Before I formatted, I tried to remove the drive from the array but couldn't, so the only option left to me was to reformat it. Is there a command to force an array rebuild, or did I lose all of the data on that drive at this point? The diagnostics attached are from before I formatted the drive. Any help would be appreciated. Thanks! tower-diagnostics-20220519-0703.zip
  12. I'm having the same issue with 2022.01.25. I restarted "unraid-api" from the console a few times and logged in, and it just goes back to Sign In. When I go to the Management Access settings, the page keeps refreshing every second as it tries to sign in over and over. I'm not using any beta release of Unraid; this is on the 6.9.2 release.
  13. Thank you for suggesting the OZNU homebridge project. I was having issues with the HomeBridge with GUI docker recently and came here looking. Following the instructions on the project's website, I was able to get my HB instance back up and running, and it's never been more stable.
  14. @Siwat2545 I had made a change to my HomeBridge with GUI instance to add another plugin, but when the docker app rebuilt, it reverted the homebridge executable to pre-1.0, which caused my Xiaomi plugin to fail. What's the latest binary version in your repository? Thanks!
  15. I just added one in my HP MicroServer Gen8 and I'm seeing that as well. My temperature warning hits at 115°F, but the drive is rated for 70°C (158°F). How does Unraid determine the default temperature settings for SSD drives?
  16. With the latest update to this docker image that came out over the weekend (10/7, 10/8), the container won't stay up. It keeps shutting down after you try to restart it. This is what the log says:

         [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
         [s6-init] ensuring user provided files have correct perms...exited 0.
         [fix-attrs.d] applying ownership & permissions fixes...
         [fix-attrs.d] done.
         [cont-init.d] executing container initialization scripts...
         [cont-init.d] 00-app-niceness.sh: executing...
         [cont-init.d] 00-app-niceness.sh: exited 0.
         [cont-init.d] 00-app-script.sh: executing...
         [cont-init.d] 00-app-script.sh: exited 0.
         [cont-init.d] 00-app-user-map.sh: executing...
         [cont-init.d] 00-app-user-map.sh: exited 0.
         [cont-init.d] 00-clean-tmp-dir.sh: executing...
         [cont-init.d] 00-clean-tmp-dir.sh: exited 0.
         [cont-init.d] 00-set-app-deps.sh: executing...
         [cont-init.d] 00-set-app-deps.sh: exited 0.
         [cont-init.d] 00-set-home.sh: executing...
         [cont-init.d] 00-set-home.sh: exited 0.
         [cont-init.d] 00-take-config-ownership.sh: executing...
         [cont-init.d] 00-take-config-ownership.sh: exited 0.
         [cont-init.d] 10-certs.sh: executing...
         [cont-init.d] 10-certs.sh: exited 0.
         [cont-init.d] 10-nginx.sh: executing...
         ERROR: No modification applied to /etc/nginx/default_site.conf.
         [cont-init.d] 10-nginx.sh: exited 1.
         [cont-finish.d] executing container finish scripts...
         [cont-finish.d] done.
         [s6-finish] syncing disks.
         [s6-finish] sending all processes the TERM signal.
         [s6-finish] sending all processes the KILL signal and exiting.
         [s6-finish] sending all processes the KILL signal and exiting.

      Any thoughts?
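
A quick check sketch for post 3 (the LibreNMS scheduler vs. poller-wrapper question), run from inside the LibreNMS container. The paths come from the post itself; the thread-count argument to poller-wrapper.py is an assumption, so adjust it to your install.

    cd /opt/librenms

    # List what the Laravel scheduler actually has registered; closures show
    # up as "Closure at: app/Console/Kernel.php:NN" rather than a command name
    php artisan schedule:list

    # Run one polling cycle by hand to confirm the wrapper itself works
    # (16 is an assumed worker/thread count)
    python3 /opt/librenms/poller-wrapper.py 16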
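
The scrub commands referenced in post 5, as a minimal sketch using the standard zpool tools. The pool name "cache" is an assumption; substitute whatever name "zpool list" reports on your system.

    zpool list              # confirm the pool is imported and get its name
    zpool status -v cache   # look for reported read/write/checksum errors first
    zpool scrub cache       # start the scrub
    zpool status cache      # run again to watch scrub progress and results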
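
The kill/relaunch steps from post 7's first edit, wrapped into copy-pasteable form. The "bzbui.exe" pattern is taken from the post; your process listing may differ slightly, so check the output of "ps x" before killing anything.

    # Find the wine process running the Backblaze control panel UI
    ps x | grep -i "bzbui.exe" | grep -v grep

    # Kill it by PID (first column of the line above); the one-liner below
    # does the same lookup automatically
    kill -9 $(ps x | grep -i 'bzbui.exe' | grep -v grep | awk '{print $1}')

    # Then relaunch the Backblaze Control Panel from the menu at the left
    # edge of the container's UI, as described in the post.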
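
For post 11, a rough btrfs recovery sketch: the usual low-risk steps to try before reformatting a disk that won't mount. The device names here are placeholders (on Unraid an array disk is typically /dev/mdX and a cache device /dev/sdX1), and "btrfs rescue zero-log" discards the most recent transactions, so treat this as an outline rather than a recipe.

    # Read-only check first, so nothing on the disk is modified
    btrfs check --readonly /dev/md2

    # If the mount failure points at a corrupted log tree, clearing the log
    # is the usual next step (at the cost of the last few seconds of writes)
    btrfs rescue zero-log /dev/md2

    # A read-only mount using a backup tree root can also get data off the
    # disk before any reformat
    mkdir -p /mnt/recovery
    mount -o ro,usebackuproot /dev/md2 /mnt/recovery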