Bait Fish

Members
  • Posts: 31
  • Reputation: 3
  • Joined
  • Last visited
  1. This is probably a no-brainer for some, maybe not for others. This app has been blacklisted in CA with no apparent updates, while the project page shows continuing update progress. I changed the container settings below and appear to have the latest version now:

       Repository: ghcr.io/swing-opensource/swingmusic:latest
       Registry URL: https://ghcr.io/swing-opensource/swingmusic:latest
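     If you want to sanity-check the tag from a terminal, something like this should confirm you're on the current build; a minimal sketch, assuming the Docker service is running:

       # pull the image and confirm when the :latest tag was built
       docker pull ghcr.io/swing-opensource/swingmusic:latest
       docker image inspect --format '{{.Created}}' ghcr.io/swing-opensource/swingmusic:latest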
  2. I had trouble viewing the GUI for the vnStat container. Defaults were left in the template during install except Network Type, which I had set to my custom network. The problem after first install was that clicking the WebUI link in Unraid would bring up a blank page using Chrome on my PC; this was the only machine/browser I tried. The URL listed was something like blocked/blank. Sorry, I did not capture the exact detail. This behaved differently from other containers. I tested using my machine's local IP and the container port; that too was not successful.

     I added this to the container and was able to reach the GUI. Click "Add another Path, Port, Variable, Label or Device" and set the port details:

       Config Type: Port
       Name: Host Port for 8685
       Container Port: 8685
       Host Port: 8685
       Default Value: 8685
       Connection Type: TCP

     Once that was done, it worked for me. Note that I had changed one other setting before this success: I adjusted the WebUI setting (under advanced view). The default was "http://[IP]:8685/"; I changed it to "http://[IP]:[PORT:8685]" to match other functioning containers on my system. I'm not sure if that had any bearing on getting the GUI to show.

     One last change: as far as I can tell, the container had an inactive interface, so I removed it. After issuing these commands, my only active interface, eth0, was left. The two commands issued start with vnstat.

       / # vnstat --iflist
       Available interfaces: tunl0 eth0 (10000 Mbit)
       / # vnstat -i tunl0 --remove --force
       Interface "tunl0" removed from database. The interface will no longer be monitored. Use --add if monitoring the interface is again needed.
       / #

     Hope this helps others.
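     For what it's worth, the template port entry above amounts to an explicit port mapping. A rough command-line equivalent; the vergoh/vnstat image name and the default bridge network are both my assumptions, so adjust to your template:

       # publish the vnStat web UI port so the host can reach it on 8685
       docker run -d --name vnstat -p 8685:8685 vergoh/vnstat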
  3. I issued the commands in an Unraid command terminal first, then installed the container.
  4. Worked for me like B4rny mentioned, using the 'Before you install' instructions listed in the overview section of the container settings. One possible issue: the container defaults appdata to /mnt/user/appdata/swing-music/ (hyphenated), which is different from the instructions ("cd swingmusic"). So I changed appdata to match where it built to.
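     If you hit the same mismatch, a quick terminal check shows which directory actually exists before pointing the template at it (paths taken from the defaults above):

       # list both candidate appdata locations; only one should exist
       ls -ld /mnt/user/appdata/swing-music /mnt/user/appdata/swingmusic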
  5. I ran mover manually at probably 10 pm, just hours before the troubles seemed to start. I'm thinking that's not a bad number.
  6. Docker service started without complaints. Containers appear to have auto-started successfully as well! You mentioned BTRFS corruption. I haven't tried that yet. Should I tackle that next? And thanks for walking me through all this.
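     From what I've read, checking a BTRFS pool for corruption usually means running a scrub and reviewing the per-device error counters. A minimal sketch, assuming the pool is mounted at /mnt/cache (substitute your mount point):

       # start a scrub, check its progress, then review per-device error counters
       btrfs scrub start /mnt/cache
       btrfs scrub status /mnt/cache
       btrfs dev stats /mnt/cache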
  7. VM Manager started without any obvious issues. Started a Win10 VM successfully. Seems good.
  8. Uh oh. I have the VM auto-backup itself on a schedule. BUT I recall that the morning the system went bad, the VM backup executed. Or I may not be remembering right. edit: libvirt.img is not in the backup location . . . I do not recall making any backup of it elsewhere manually. edit x2: and I did not have a location set in Appdata Backup/Restore for backing up libvirt.img.
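     Going forward, a manual copy seems like cheap insurance. A sketch only, using the stock location from my directory listing below and a hypothetical /mnt/user/backup share; that rc.libvirt is the right service script is also my assumption:

       # stop the libvirt service so the image isn't being written mid-copy
       /etc/rc.d/rc.libvirt stop
       cp /mnt/cache_nvme/system/libvirt/libvirt.img /mnt/user/backup/libvirt.img
       # bring the VM service back up
       /etc/rc.d/rc.libvirt start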
  9. root@Homer:/mnt/cache_nvme/system# ls -lah /mnt/cache_nvme/system/docker
     total 40G
     drwxrwxrwx 2 nobody users  24 Oct 11 16:32 ./
     drwxrwxrwx 4 nobody users  35 Nov 16  2021 ../
     -rw-rw-rw- 1 nobody users 50G Jan  1 01:39 docker.img
     root@Homer:/mnt/cache_nvme/system# ls -lah /mnt/cache_nvme/system/libvirt
     total 104M
     drwxrwxrwx 2 nobody users   25 Nov 16  2021 ./
     drwxrwxrwx 4 nobody users   35 Nov 16  2021 ../
     -rw-rw-rw- 1 nobody users 1.0G Dec 31 17:00 libvirt.img
     root@Homer:/mnt/cache_nvme/system#
  10. root@Homer:~# ls -lah /mnt/cache_nvme/system
      total 0
      drwxrwxrwx 4 nobody users 35 Nov 16  2021 ./
      drwxrwxrwx 5 nobody users 50 Jan  3 16:48 ../
      drwxrwxrwx 2 nobody users 24 Oct 11 16:32 docker/
      drwxrwxrwx 2 nobody users 25 Nov 16  2021 libvirt/
      root@Homer:~#

      Contents appear OK looking through the various directories.
  11. New diagnostics after starting the array in normal mode: homer-diagnostics-20230103-1639.zip
  12. I ran through a couple of repair sessions and tried to follow its directions. I pasted all the notes in this code block, including which flag I was using, typically keeping verbose on. I did not catch it telling me to run the -L option, so I have not done that. (The sequence as I understand it is sketched below, after the scan output in the next post.)
  13. Having trouble on mobile modifying the code box above. Scanned with the -nv flag per the docs.

       Phase 1 - find and verify superblock...
             - block cache size set to 3061336 entries
       Phase 2 - using internal log
             - zero log...
       zero_log: head block 417567 tail block 393702
       ALERT: The filesystem has valuable metadata changes in a log which is
       being ignored because the -n option was used. Expect spurious
       inconsistencies which may be resolved by first mounting the filesystem
       to replay the log.
             - scan filesystem freespace and inode maps...
       block (0,50297084-50297193) multiply claimed by cnt space tree, state - 2
       block (0,48998093-48998203) multiply claimed by cnt space tree, state - 2
       block (0,50379227-50379336) multiply claimed by cnt space tree, state - 2
       block (0,49633120-49633230) multiply claimed by cnt space tree, state - 2
       agf_freeblks 64128684, counted 64133581 in ag 0
       agf_freeblks 97418327, counted 97438471 in ag 2
       sb_icount 4368768, counted 4433280
       sb_ifree 44699, counted 1263236
       sb_fdblocks 340342643, counted 354388165
             - found root inode chunk
       Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
       data fork in ino 58802146 claims free block 7583440
       data fork in ino 58802146 claims free block 7583505
       data fork in ino 59896646 claims free block 7574017
       data fork in ino 59896646 claims free block 7574079
             - agno = 1
             - agno = 2
       bad nblocks 10397115 for inode 2175513269, would reset to 10397118
       bad nextents 207685 for inode 2175513269, would reset to 207683
             - agno = 3
       bad CRC for inode 3227792998
       bad CRC for inode 3227792998, would rewrite
       would have cleared inode 3227792998
             - process newly discovered inodes...
       Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
       free space (0,48811889-48811997) only seen by one free space btree
       free space (0,50494325-50494435) only seen by one free space btree
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 3
             - agno = 1
       bad CRC for inode 3227792998, would rewrite
       would have cleared inode 3227792998
       bad nblocks 10397115 for inode 2175513269, would reset to 10397118
       bad nextents 207685 for inode 2175513269, would reset to 207683
       No modify flag set, skipping phase 5
       Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
       would rebuild directory inode 1239086149
             - agno = 2
             - agno = 3
       Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
       couldn't map inode 3227792998, err = 117
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
       disconnected inode 3227798067, would move to lost+found
       Phase 7 - verify link counts...
       Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
       couldn't map inode 3227792998, err = 117, can't compare link counts
       No modify flag set, skipping filesystem flush and exiting.

             XFS_REPAIR Summary    Tue Jan  3 15:25:14 2023

       Phase       Start           End             Duration
       Phase 1:    01/03 15:25:04  01/03 15:25:04
       Phase 2:    01/03 15:25:04  01/03 15:25:04
       Phase 3:    01/03 15:25:04  01/03 15:25:09  5 seconds
       Phase 4:    01/03 15:25:09  01/03 15:25:10  1 second
       Phase 5:    Skipped
       Phase 6:    01/03 15:25:10  01/03 15:25:14  4 seconds
       Phase 7:    01/03 15:25:14  01/03 15:25:14

       Total run time: 10 seconds
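     Following up on the -L question in post 12 above, this is the order of operations as I understand the xfs_repair docs. A sketch only, assuming the disk is /dev/md1 and a scratch mount point of /mnt/disk1 (both assumptions; substitute your own device and path):

       # 1. dry run: report problems without modifying anything (the scan above)
       xfs_repair -nv /dev/md1
       # 2. mount and unmount once so the journal replays (per the ALERT above),
       #    then run the actual repair
       mount /dev/md1 /mnt/disk1 && umount /mnt/disk1
       xfs_repair -v /dev/md1
       # 3. last resort if the log cannot be replayed by mounting: -L zeroes
       #    the log and discards the metadata changes it holds
       xfs_repair -L /dev/md1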