djgizmo

Everything posted by djgizmo

  1. NM, discovered the issue. After the .conf file is created, this container tries to chown every folder below the 'downloads' folder. If that is pointed at a broader folder (like it was for me), it either borks and dies or takes a very long time. I fixed this by changing the downloads mapping, as sketched below.
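     A minimal sketch of what I mean, assuming a dedicated downloads share exists (the paths here are examples and may differ on your box):
     # map a narrow downloads share instead of the whole /mnt/user tree,
     # so the container's chown pass only touches actual download data
     docker run -d --name='nzbget-ng' \
       -e PUID=99 -e PGID=100 \
       -v '/mnt/cache/appdata/nzbget-ng/':'/config':'rw' \
       -v '/mnt/user/downloads/':'/downloads':'rw' \
       'nzbgetcom/nzbget'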
  2. Here's my docker run: docker run -d --name='nzbget-ng' --net='br0.55' --ip='10.69.55.104' --privileged=true -e TZ="America/New_York" -e HOST_OS="Unraid" -e HOST_HOSTNAME="IRONMAN" -e HOST_CONTAINERNAME="nzbget-ng" -e 'TCP_PORT_6789'='6789' -e 'NZBGET_USER'='nzbget' -e 'NZBGET_PASS'='tegbzn6789' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:6789]/' -l net.unraid.docker.icon='https://avatars.githubusercontent.com/u/121837341?s=48&v=4' -v '/mnt/cache/appdata/nzbget-ng/':'/config':'rw' -v '/mnt/user/':'/downloads':'rw' 'nzbgetcom/nzbget' 010a99e4f9cbd21f813075ac6a7ab592185d713d341e697efc3f4d3ffc21d8fb The command finished successfully! Checked logs from Unraid / Dozzle; nothing shows. The nzbget.conf file is created, but I can't tell if the webserver is running or anything else.
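     A few quick checks, assuming the container name and IP/port from the run command above (adjust to your setup):
     docker logs --tail 50 nzbget-ng          # any startup errors from the container itself
     docker exec nzbget-ng ps                 # is the nzbget process actually running? (assumes busybox ps exists in the image)
     curl -I http://10.69.55.104:6789/        # does the web UI answer on the mapped IP/port?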
  3. Is this still supported? I'm able to pull / install it on Unraid, and it shows as running, but the webpage never loads.
  4. It's because this docker template is boogered. For the past year, the LibreNMS docker install has expected a full docker stack, which runs two installations of LibreNMS, with one of them acting as the dedicated poller. A76 refuses to update his template to accommodate this. https://github.com/librenms/docker
  5. Attached is diagnostics. In summary: I upgraded to 6.12.4, and after reboot, my Home Assistant VM would not boot. It kept booting into the UEFI Interactive Shell. Rebooting the host had no effect. Luckily I back up my libvirt every day as well as my VMs, and had a backup copy. I restored my libvirt to yesterday's version, rebooted the host, and the HA VM booted normally. I'd like to know what's going on with my libvirt to better understand why HA won't boot with any other version of libvirt. If I need to roll back to the 'non-working' libvirt, reboot, and test HA again, please let me know and I'll do so. ironman-diagnostics-20230904-2124.zip
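     For reference, my daily libvirt backup is nothing fancy; a rough sketch, assuming the stock Unraid location for libvirt.img and an existing backup share (both paths may differ on other systems):
     # keep a dated copy of the libvirt image so a known-good version can be restored after an upgrade
     cp /mnt/user/system/libvirt/libvirt.img /mnt/user/backups/libvirt/libvirt-$(date +%F).img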
  6. Have you looked up the last time you updated your LibreNMS XML? I do not think you understand my frustration. Have a good one.
  7. Is there a reason why the libvirt image isn't backed up? If the version of libvirt doesn't match the version of the VMs, they will fail to launch.
  8. That's a poor attitude towards the community. I'm not asking you to fix the underlying container, I'm asking you to fix your own template. It's out of date with how LibreNMS does their container stack now. If you had responded within a day or two, sure, I'd contribute to your repo, but you ghosted the community for months. My efforts will be put towards another repo or my own template for LibreNMS. Yes, it's literally a requirement to create a support thread. If a support thread is created... anyone in the community would think support would be available, normally from the author. From Squid's How Do I Create My Own Docker Templates? post: "I'm going to insist you create a <Support> thread. Just create it anywhere (as you won't be able to create it in Docker Containers) and a moderator will move it. Update the XML with the support link." As for "updated regularly"... precedent has been set by others in the community. binhex, linuxserver.io, Taddeusz, even clowrym... have given continued support. When someone doesn't provide support (via forum, Discord, GitHub, or otherwise), it makes for a frustrating user experience. I get it, you're not a developer of the apps you put templates together for... you have little reason to invest in creating/supporting the templates, other than for yourself and to show them to others back when you did. If you had responded with "Hey, I don't have LibreNMS installed anymore... what needs to be adjusted? I'll look into it later this week" or "hey, go here to my GitHub to open a PR"... instead, you ghosted me, and the community, for months. Your last comment was November 9, 2022. Until your comment... I assumed you had abandoned the forums altogether. I'm done ranting... thanks for listening... have a good night/day.
  9. Did so; it said it couldn't read the log. Followed the SpaceInvader One video on XFS repair, used -L, and the file system has been repaired. Now that I have my base data back, I need to know why this happened and how I can prevent it from happening again.
  10. Done, and now the syslog file is created on disk. Thank you. Ran an xfs_repair -n on the SSD:
     root@IRONMAN:~# xfs_repair -n /dev/sdi1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     invalid start block 1627196033 in record 208 of cnt btree block 2/704825
     invalid start block 1627196033 in record 234 of cnt btree block 2/704825
     invalid start block 1627196033 in record 236 of cnt btree block 2/704825
     agf_freeblks 2545590, counted 2545587 in ag 2
     agi unlinked bucket 26 is 125929626 in ag 2 (inode=662800538)
     sb_icount 841152, counted 841408
     sb_ifree 4855, counted 4673
     sb_fdblocks 20284605, counted 20803245
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
     free space (2,14441748-14441748) only seen by one free space btree
     free space (2,14447755-14447755) only seen by one free space btree
     free space (2,14447890-14447890) only seen by one free space btree
             - check for inodes claiming duplicate blocks...
             - agno = 2
             - agno = 0
             - agno = 1
             - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 662800538, would move to lost+found
     Phase 7 - verify link counts...
     would have reset inode 662800538 nlinks from 0 to 1
     No modify flag set, skipping filesystem flush and exiting.
     Do you recommend that I try to mount the SSD manually via the command line?
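     In case it helps anyone reading along, a sketch of the approach the xfs_repair output itself suggests (device name and mount point are from my system; adjust as needed):
     # mounting once lets XFS replay its internal log, which can clear the
     # "valuable metadata changes in a log" warning before checking again
     mkdir -p /mnt/temp
     mount /dev/sdi1 /mnt/temp
     umount /mnt/temp
     xfs_repair -n /dev/sdi1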
  11. k, I have set this to a share on my array. I don't see any files / folders created for syslog on this share.
  12. Attached diagnostics and logs that were stored in RAM. I've started the local syslog server. Should I be mirroring this to the flash drive?
  13. I’ve been using Unraid at home since 2018. Started with Basic, then moved to Plus, and then upgraded to Pro, about 6 months apart for each upgrade. Started with a Dell R710 and had to reboot monthly, which was fine for stability. After I built a custom box with a Rosewill case, a SuperMicro motherboard, an Intel 4770, 16 GB of RAM, and an SSD for cache / VMs / containers, things seemed a bit better; I’d only have to reboot every 2-3 months. Now for the past 6 months, stability has been garbage. VMs and containers randomly crashing or not being able to start. So I suspected a bad SSD. Swapped the SSD. Same issue. Started to reboot weekly and was fine for a bit. Now I’m getting out-of-memory errors and frankly, I’m unhappy. Then today after a reboot, I see an ‘unrecognizable file system’ error on my SSD, and all of my VMs and containers are gone, of course. (Luckily I’ve made backups of my container data on my array.) I can’t use my Unraid box for more than a basic NAS at this point. I’ve memtested my RAM for 6 hours, all passes and no errors. I’m not sure what to do now. I need to fix this stability issue. I don’t have a spare motherboard or CPU to verify those as a possible issue. The PSU is a Corsair 500 watt unit, so in theory power is stable. ironman-diagnostics-20230807-0804.zip Logs from RAM.txt
  14. I did try. For days. Before I came to the forums looking for support, only to find the author had abandoned his support thread.
  15. Without testing that polling is functional during normal polling intervals, the test is incomplete.
  16. Are you still getting updated statuses from it?
  17. Have you verified that polling is actually functioning?
  18. @Squid I'm not talking about LibreNMS support... because their support is just fine. I'm talking about support for the Unraid docker template. The LibreNMS container has changed from what it initially was, a single container instance, to now requiring several sidecar containers to work properly. At minimum, it needs a second instance of LibreNMS working as a dispatcher node. None of which is stated in the template or the support documents for this docker container. Here are the services it needs: database (mysql), redis, msmtpd, librenms, dispatcher, syslogng, snmptrapd. It's not just a simple 'whoops, I forgot to tell you you need a database'. I get it, you're hesitant to pull down any apps that might have a lot of use. The whole point of the Unraid docker section was to introduce docker containers to home labbers who may not have had experience before Unraid. I was one of them. The author has basically not responded to anything in this thread about the LibreNMS template in over a year (it's been since Nov '22 since he's replied to anything at all in this thread). In any case, I'd be happy if the author cared enough to say, "hey guys, yea, things have changed, now this template might not work."
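     For anyone wondering what "full stack" means in practice, a rough sketch of standing it up from the official repo, assuming the example compose files still live where I last saw them (the directory layout is an assumption and may have changed):
     git clone https://github.com/librenms/docker.git librenms-docker
     cd librenms-docker/examples/compose      # location of the example compose files is an assumption
     docker compose up -d                     # brings up db, redis, msmtpd, librenms, the dispatcher, syslogng and snmptrapd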
  19. The author himself has basically abandoned his support thread. The LibreNMS template is completely broken, as LibreNMS requires multiple containers to do what it needs to do, none of which is documented in the template. Either someone from the mod team needs to nudge this template developer, or the templates need to be removed until someone else can take over.
  20. If you're done giving support for your containers, maybe find someone that will support them? Straight up abandoning the agreement that you made when you listed them is just wrong.
  21. Fun. I've ditched BTRFS (single drive) for XFS, as this isn't the first time I've had this kind of issue with BTRFS.