Bait Fish

Everything posted by Bait Fish

  1. This is probably a no-brainer for some, maybe not for others. This app has been blacklisted in CA with no apparent updates, while the project page shows continuing development. I changed the container settings below and appear to have the latest version now.
     Repository: ghcr.io/swing-opensource/swingmusic:latest
     Registry URL: https://ghcr.io/swing-opensource/swingmusic:latest
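     To sanity-check the new repository before recreating the container, a manual pull from the Unraid terminal works (just a sketch; assumes the Docker service is running):
         # Should resolve and download the current image from GHCR
         docker pull ghcr.io/swing-opensource/swingmusic:latest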
  2. I had trouble viewing the GUI for the vnStat container. Defaults were left in the template during install except Network Type, which I had set to my custom network. After the first install, clicking the WebUI link in Unraid brought up a blank page in Chrome on my PC (the only machine/browser I tried). The URL listed was something like blocked/blank; sorry, I did not capture the exact detail. This behaved differently from other containers. I also tested using my machine's local IP and the container port; that too was not successful. I added the following to the container and was then able to reach the GUI. Click "Add another Path, Port, Variable, Label or Device" and set the port details:
     Config Type: Port
     Name: Host Port for 8685
     Container Port: 8685
     Host Port: 8685
     Default Value: 8685
     Connection Type: TCP
     Once that was done it worked for me. Note that I had changed one other setting before this success: I adjusted the WebUI setting (under advanced view) from the default "http://[IP]:8685/" to "http://[IP]:[PORT:8685]" to match other functioning containers on my system. I'm not sure whether that had any bearing on getting the GUI to show. One last change: the container had an inactive interface, as far as I can tell. After issuing these commands (both start with vnstat), only my active interface, eth0, was left:
     / # vnstat --iflist
     Available interfaces: tunl0 eth0 (10000 Mbit)
     / # vnstat -i tunl0 --remove --force
     Interface "tunl0" removed from database. The interface will no longer be monitored. Use --add if monitoring the interface is again needed.
     / #
     Hope this helps others.
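     To verify the mapping afterwards, a quick check from another machine on the network (a sketch; SERVER_IP is a placeholder for your Unraid host's address):
         # A 200/30x response here means the port mapping is reachable
         curl -I http://SERVER_IP:8685/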
  3. I issued the commands in an Unraid terminal first, then installed the container.
  4. Worked for me as B4rny mentioned, following the 'Before you install' instructions listed in the overview section of the container settings. One possible issue: the container defaults appdata to /mnt/user/appdata/swing-music/ (hyphenated), which differs from the instructions ("cd swingmusic"). So I changed the appdata path to match where the instructions built it. The fix could also go the other way; see the sketch below.
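     The alternative is keeping the template's default and pointing the shell at the hyphenated path instead (a sketch only; the path is the template default mentioned above):
         # Run the remaining 'Before you install' steps from the hyphenated appdata directory
         cd /mnt/user/appdata/swing-music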
  5. I ran mover manually at probably 10 PM, just hours before the troubles seemed to start. I'm thinking that's not a bad number.
  6. Docker service started without complaints. Containers appear to have auto-started successfully as well! You mentioned BTRFS corruption; I haven't checked for that yet. Should I tackle that next? And thanks for walking me through all this.
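     For the BTRFS check, my understanding is that a scrub plus the device error counters from the terminal should surface anything lingering (a minimal sketch; that the pool mounts at /mnt/cache_ssd is my assumption):
         # Foreground scrub; reports checksum errors it finds and repairs what it can
         btrfs scrub start -B /mnt/cache_ssd
         # Per-device error counters; non-zero values indicate past corruption
         btrfs dev stats /mnt/cache_ssd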
  7. VM Manager started without any obvious issues. Started a Win10 VM successfully. Seems good.
  8. Uh oh. I have the VM auto-backup itself on a schedule, and I recall that the VM backup executed the morning the system went bad. But I may not have it. Edit: libvirt.img is not in the backup location . . . I do not recall making any backup of it elsewhere manually. Edit 2: and I did not have a location set in Appdata Backup/Restore for backing up libvirt.img.
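     For next time, a manual copy would cover libvirt.img outside the plugin (a sketch; stop the VM service under Settings > VM Manager first, and the destination is just an example path):
         # Source matches where my system keeps it; destination is hypothetical
         cp /mnt/cache_nvme/system/libvirt/libvirt.img /mnt/user/backups/libvirt.img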
  9. root@Homer:/mnt/cache_nvme/system# ls -lah /mnt/cache_nvme/system/docker
     total 40G
     drwxrwxrwx 2 nobody users  24 Oct 11 16:32 ./
     drwxrwxrwx 4 nobody users  35 Nov 16  2021 ../
     -rw-rw-rw- 1 nobody users 50G Jan  1 01:39 docker.img
     root@Homer:/mnt/cache_nvme/system# ls -lah /mnt/cache_nvme/system/libvirt
     total 104M
     drwxrwxrwx 2 nobody users   25 Nov 16  2021 ./
     drwxrwxrwx 4 nobody users   35 Nov 16  2021 ../
     -rw-rw-rw- 1 nobody users 1.0G Dec 31 17:00 libvirt.img
     root@Homer:/mnt/cache_nvme/system#
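     (Side note: the "total 40G" against the 50G file size just means docker.img is sparse; a quick way to see both numbers, as a sketch:)
         # Allocated blocks vs. the logical file size
         du -h /mnt/cache_nvme/system/docker/docker.img
         du -h --apparent-size /mnt/cache_nvme/system/docker/docker.img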
  10. root@Homer:~# ls -lah /mnt/cache_nvme/system
      total 0
      drwxrwxrwx 4 nobody users 35 Nov 16  2021 ./
      drwxrwxrwx 5 nobody users 50 Jan  3 16:48 ../
      drwxrwxrwx 2 nobody users 24 Oct 11 16:32 docker/
      drwxrwxrwx 2 nobody users 25 Nov 16  2021 libvirt/
      root@Homer:~#
      Contents appear OK looking through the various directories.
  11. New diagnostics after starting the array in normal mode: homer-diagnostics-20230103-1639.zip
  12. I ran through a couple of repair sessions and tried to follow the tool's directions. I pasted all the notes in this code block, including which flag I was using, typically keeping verbose on. I did not catch it telling me to run the -L option, so I have not done that.
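      For anyone following along, the flag sequence as I understand it from the docs (a sketch; /dev/sdX1 is a placeholder for the actual cache partition):
          # Dry run: report problems, write nothing
          xfs_repair -nv /dev/sdX1
          # Real repair; it refuses to run if the log is dirty
          xfs_repair -v /dev/sdX1
          # Last resort only, if it says the log cannot be replayed: -L zeroes the log
          # and can lose the most recent metadata changes
          xfs_repair -vL /dev/sdX1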
  13. Having trouble on mobile modifying the code box above. Scanned with the -nv flag per the docs.
      Phase 1 - find and verify superblock...
              - block cache size set to 3061336 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 417567 tail block 393702
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      block (0,50297084-50297193) multiply claimed by cnt space tree, state - 2
      block (0,48998093-48998203) multiply claimed by cnt space tree, state - 2
      block (0,50379227-50379336) multiply claimed by cnt space tree, state - 2
      block (0,49633120-49633230) multiply claimed by cnt space tree, state - 2
      agf_freeblks 64128684, counted 64133581 in ag 0
      agf_freeblks 97418327, counted 97438471 in ag 2
      sb_icount 4368768, counted 4433280
      sb_ifree 44699, counted 1263236
      sb_fdblocks 340342643, counted 354388165
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
      data fork in ino 58802146 claims free block 7583440
      data fork in ino 58802146 claims free block 7583505
      data fork in ino 59896646 claims free block 7574017
      data fork in ino 59896646 claims free block 7574079
              - agno = 1
              - agno = 2
      bad nblocks 10397115 for inode 2175513269, would reset to 10397118
      bad nextents 207685 for inode 2175513269, would reset to 207683
              - agno = 3
      bad CRC for inode 3227792998
      bad CRC for inode 3227792998, would rewrite
      would have cleared inode 3227792998
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
      free space (0,48811889-48811997) only seen by one free space btree
      free space (0,50494325-50494435) only seen by one free space btree
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 2
              - agno = 3
              - agno = 1
      bad CRC for inode 3227792998, would rewrite
      would have cleared inode 3227792998
      bad nblocks 10397115 for inode 2175513269, would reset to 10397118
      bad nextents 207685 for inode 2175513269, would reset to 207683
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - agno = 0
              - agno = 1
      would rebuild directory inode 1239086149
              - agno = 2
              - agno = 3
      Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
      couldn't map inode 3227792998, err = 117
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      disconnected inode 3227798067, would move to lost+found
      Phase 7 - verify link counts...
      Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
      couldn't map inode 3227792998, err = 117, can't compare link counts
      No modify flag set, skipping filesystem flush and exiting.

      XFS_REPAIR Summary    Tue Jan  3 15:25:14 2023
      Phase           Start           End             Duration
      Phase 1:        01/03 15:25:04  01/03 15:25:04
      Phase 2:        01/03 15:25:04  01/03 15:25:04
      Phase 3:        01/03 15:25:04  01/03 15:25:09  5 seconds
      Phase 4:        01/03 15:25:09  01/03 15:25:10  1 second
      Phase 5:        Skipped
      Phase 6:        01/03 15:25:10  01/03 15:25:14  4 seconds
      Phase 7:        01/03 15:25:14  01/03 15:25:14
      Total run time: 10 seconds
  14. I started it up a second time. The cache drive (sde) that showed missing this morning now showed present and ready. Nothing was done but restarting 9 hours later. I shut it down, reseated the cables for cache sde, then started back up, and cache sde remained available. I saved diagnostics from this session and have uploaded them to this post as suggested above. I'll attempt repairs now.
      Update: cache_nvme repair with the default -n option results in:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      block (0,50297084-50297193) multiply claimed by cnt space tree, state - 2
      block (0,48998093-48998203) multiply claimed by cnt space tree, state - 2
      block (0,50379227-50379336) multiply claimed by cnt space tree, state - 2
      block (0,49633120-49633230) multiply claimed by cnt space tree, state - 2
      agf_freeblks 64128684, counted 64133581 in ag 0
      agf_freeblks 97418327, counted 97438471 in ag 2
      sb_icount 4368768, counted 4433280
      sb_ifree 44699, counted 1263236
      sb_fdblocks 340342643, counted 354388165
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
      data fork in ino 58802146 claims free block 7583440
      data fork in ino 58802146 claims free block 7583505
      data fork in ino 59896646 claims free block 7574017
      data fork in ino 59896646 claims free block 7574079
              - agno = 1
              - agno = 2
      bad nblocks 10397115 for inode 2175513269, would reset to 10397118
      bad nextents 207685 for inode 2175513269, would reset to 207683
              - agno = 3
      bad CRC for inode 3227792998
      bad CRC for inode 3227792998, would rewrite
      would have cleared inode 3227792998
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
      free space (0,48811889-48811997) only seen by one free space btree
      free space (0,50494325-50494435) only seen by one free space btree
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      bad CRC for inode 3227792998, would rewrite
      would have cleared inode 3227792998
      bad nblocks 10397115 for inode 2175513269, would reset to 10397118
      bad nextents 207685 for inode 2175513269, would reset to 207683
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
      would rebuild directory inode 1239086149
      Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
      couldn't map inode 3227792998, err = 117
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      disconnected inode 3227798067, would move to lost+found
      Phase 7 - verify link counts...
      Metadata corruption detected at 0x46e010, inode 0xc0643666 dinode
      couldn't map inode 3227792998, err = 117, can't compare link counts
      No modify flag set, skipping filesystem flush and exiting.
      homer-diagnostics-20230103-1458.zip
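      That ALERT seems to be the key part: per the xfs_repair docs, mounting and cleanly unmounting the filesystem replays the log, after which the -n check gives a truthful picture. Outside Unraid's GUI flow, that would look roughly like this (a sketch; device and mountpoint are placeholders):
          mount /dev/sdX1 /mnt/test     # mounting replays the journal
          umount /mnt/test
          xfs_repair -nv /dev/sdX1      # re-check with the log now clean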
  15. The memory test passed. First boot into Unraid indicates the cache drive is missing. I'll check cables next. This SSD is only a few months old.
  16. I'll get a memtest going first thing tomorrow. Thanks for your guidance.
  17. Now that I'm back in the land of the living, here are details I could not include in my haste this morning. These errors happened in the middle of the night on a system that had been humming along without any other concerns. Shares that had been there previously are not showing; I'm sure this is a good part of the problem. Unraid version: 6.11.5.
      Fix Common Problems greeted me with these issues when I checked on the server after coffee:
      Unable to write to cache_nvme - Drive mounted read-only or completely full.
      Unable to write to cache_ssd - Drive mounted read-only or completely full.
      Unable to write to Docker Image - Docker Image either full or corrupted.
      On the Docker tab in Unraid, these errors show:
      Docker Containers
      APPLICATION VERSION NETWORK PORT MAPPINGS (APP TO HOST) VOLUME MAPPINGS (APP TO HOST) AUTOSTART UPTIME
      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 712
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 898
      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 712
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 967
      No Docker containers installed
      And Unraid Settings > Docker mirrors these errors:
      Enable Docker: Yes
      One or more paths do not exist (view)
      Docker vDisk location: /mnt/user/system/docker/docker.img - Path does not exist
      Default appdata storage location: /mnt/user/appdata/ - Path does not exist
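      A couple of terminal checks narrow down whether the pools themselves failed to mount, which would explain both the missing shares and the "path does not exist" errors (a sketch):
          # If a pool never mounted or went read-only, it shows here
          df -h /mnt/cache_nvme /mnt/cache_ssd
          ls -lah /mnt/user/system/ /mnt/user/appdata/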
  18. [Version: 6.11.5] Unraid, containers, and plugins. Seems to have had a BTRFS error on sdd1 this morning. Log attached. Gotta run to work now, so I'm keeping this description short. Happy new year, and thanks for any help. homer-diagnostics-20230101-0456.zip
  19. Chrome user here. Same error in a regular Chrome window. No error (it works!) when using a Chrome incognito window. The error seems to be cached?
  20. Following up. I did not get a good extended test log via Unraid. I instead swapped the suspect drive out and got its replacement going first. Attempts at testing it as an external drive kept failing, but without a log as far as I could tell. Scanning it with SeaTools on Windows ended with a failure in the long test, warning that the drive is... failing. Thanks again, all of you, for your help. Unraid, and its community, are awesome!
      --------------- SeaTools for Windows v1.4.0.7 ---------------
      5/24/2019 3:17:39 PM
      Model Number: Seagate Backup+ Hub BK
      Serial Number: NA8TGN5Z
      Firmware Revision: D781
      SMART - Started 5/24/2019 3:17:39 PM
      SMART - Pass 5/24/2019 3:17:45 PM
      Short DST - Started 5/24/2019 3:17:52 PM
      Short DST - Pass 5/24/2019 3:18:58 PM
      Identify - Started 5/24/2019 3:19:03 PM
      Short Generic - Started 5/24/2019 3:25:19 PM
      Short Generic - Pass 5/24/2019 3:26:31 PM
      Identify - Started 5/3/2022 4:36:31 PM
      Short DST - Started 5/3/2022 4:36:46 PM
      Short DST - Pass 5/3/2022 4:37:58 PM
      Short Generic - Started 5/3/2022 4:38:45 PM
      Short Generic - Pass 5/3/2022 4:40:27 PM
      Long Generic - Started 5/3/2022 4:41:42 PM
      Long Generic - FAIL 5/4/2022 1:18:40 AM
      SeaTools Test Code: E896A6D4
  21. Thanks for the tips and insights. Making progress now that spin down is disabled. So simple. . . I'll remember this next time. Your cautions spurred me to quit new disk intensive activity I started today. Now most everything is stopped, disk activity at a minimum. The new replacement 8TB drive's preclear is going to finish soon, 2% post-read left. To get the array to normal sooner, I'll play it safe by adding the new drive in first, rebuilding, then testing the failing drive later while unassigned.
  22. Okay, I'm back. This should be my last edit. I do not think the extended test is completing, and I am not sure the downloaded SMART report will show that. This last, third time running the extended test, I sat and watched the progress; it appeared to stop on its own. I feel like the extended test should run for quite some time on an 8TB disk, not 10 minutes. In the drive capabilities section it states "Extended self-test routine recommended polling time: 937 minutes." Below is what I have observed, and I have also attached the last three SMART reports (download button). Even further below are the last data from the attributes table. Hope this helps diagnose.
      While I watch the progress of the SMART extended test (the short test button greys out), the most progress observed is:
          self-test in progress, 10% complete
      Then maybe 10 minutes later I notice it says:
          Last SMART test result: No self-tests logged on this disk
      Refreshing the page shows a new status:
          Last SMART test result: Aborted by host (text colored orange)
      Further details from the page follow. I did not capture a downloaded report from the first extended test, four times ago.
      SMART self-test history:
          Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
          # 1  Extended offline  Aborted by host          90%        47049            -
          # 2  Extended offline  Aborted by host          90%        47048            -
          # 3  Extended offline  Aborted by host          90%        47047            -
          # 4  Extended offline  Aborted by host          90%        47047            -
          # 5  Short offline     Completed without error  00%        21432            -
      SMART error log: No Errors Logged
      Attributes [before the last test; highlighted in gold are #197 and #198]:
          #    ATTRIBUTE NAME            FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED       RAW VALUE
          1    Raw read error rate       0x000f  105    099    006        Pre-fail  Always   Never        7920184
          3    Spin up time              0x0003  092    090    000        Pre-fail  Always   Never        0
          4    Start stop count          0x0032  100    100    020        Old age   Always   Never        805
          5    Reallocated sector count  0x0033  100    100    010        Pre-fail  Always   Never        0
          7    Seek error rate           0x000f  076    060    030        Pre-fail  Always   Never        91008406049
          9    Power on hours            0x0032  047    047    000        Old age   Always   Never        47048 (5y, 4m, 13d, 8h)
          10   Spin retry count          0x0013  100    100    097        Pre-fail  Always   Never        0
          12   Power cycle count         0x0032  100    100    020        Old age   Always   Never        293
          183  Runtime bad block         0x0032  100    100    000        Old age   Always   Never        0
          184  End-to-end error          0x0032  100    100    099        Old age   Always   Never        0
          187  Reported uncorrect        0x0032  100    100    000        Old age   Always   Never        0
          188  Command timeout           0x0032  100    100    000        Old age   Always   Never        1
          189  High fly writes           0x003a  100    100    000        Old age   Always   Never        0
          190  Airflow temperature cel   0x0022  069    033    045        Old age   Always   In the past  31 (255 255 36 27 0)
          191  G-sense error rate        0x0032  100    100    000        Old age   Always   Never        0
          192  Power-off retract count   0x0032  100    100    000        Old age   Always   Never        583
          193  Load cycle count          0x0032  085    085    000        Old age   Always   Never        31233
          194  Temperature celsius       0x0022  031    067    000        Old age   Always   Never        31 (0 19 0 0 0)
          195  Hardware ECC recovered    0x001a  105    099    000        Old age   Always   Never        7920184
          197  Current pending sector    0x0012  098    098    000        Old age   Always   Never        776
          198  Offline uncorrectable     0x0010  098    098    000        Old age   Offline  Never        776
          199  UDMA CRC error count      0x003e  200    200    000        Old age   Always   Never        0
          240  Head flying hours         0x0000  100    253    000        Old age   Offline  Never        17295 (178 106 0)
          241  Total lbas written        0x0000  100    253    000        Old age   Offline  Never        94608024951
          242  Total lbas read           0x0000  100    253    000        Old age   Offline  Never        3333974639875
      Attributes [after the last test; again, highlighted in gold are #197 and #198]:
          #    ATTRIBUTE NAME            FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED       RAW VALUE
          1    Raw read error rate       0x000f  105    099    006        Pre-fail  Always   Never        7920184
          3    Spin up time              0x0003  092    090    000        Pre-fail  Always   Never        0
          4    Start stop count          0x0032  100    100    020        Old age   Always   Never        807
          5    Reallocated sector count  0x0033  100    100    010        Pre-fail  Always   Never        0
          7    Seek error rate           0x000f  076    060    030        Pre-fail  Always   Never        91008485358
          9    Power on hours            0x0032  047    047    000        Old age   Always   Never        47049 (5y, 4m, 13d, 9h)
          10   Spin retry count          0x0013  100    100    097        Pre-fail  Always   Never        0
          12   Power cycle count         0x0032  100    100    020        Old age   Always   Never        293
          183  Runtime bad block         0x0032  100    100    000        Old age   Always   Never        0
          184  End-to-end error          0x0032  100    100    099        Old age   Always   Never        0
          187  Reported uncorrect        0x0032  100    100    000        Old age   Always   Never        0
          188  Command timeout           0x0032  100    100    000        Old age   Always   Never        1
          189  High fly writes           0x003a  100    100    000        Old age   Always   Never        0
          190  Airflow temperature cel   0x0022  066    033    045        Old age   Always   In the past  34 (255 255 36 27 0)
          191  G-sense error rate        0x0032  100    100    000        Old age   Always   Never        0
          192  Power-off retract count   0x0032  100    100    000        Old age   Always   Never        588
          193  Load cycle count          0x0032  085    085    000        Old age   Always   Never        31241
          194  Temperature celsius       0x0022  034    067    000        Old age   Always   Never        34 (0 19 0 0 0)
          195  Hardware ECC recovered    0x001a  105    099    000        Old age   Always   Never        7920184
          197  Current pending sector    0x0012  098    098    000        Old age   Always   Never        776
          198  Offline uncorrectable     0x0010  098    098    000        Old age   Offline  Never        776
          199  UDMA CRC error count      0x003e  200    200    000        Old age   Always   Never        0
          240  Head flying hours         0x0000  100    253    000        Old age   Offline  Never        17296 (164 225 0)
          241  Total lbas written        0x0000  100    253    000        Old age   Offline  Never        94608024951
          242  Total lbas read           0x0000  100    253    000        Old age   Offline  Never        3333974639875
      homer-smart-20220502-0943[1008].zip
      homer-smart-20220502-0943[0951].zip
      homer-smart-20220502-0821.zip
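      For reference, running the extended test from the command line makes the abort easier to watch, though it will still be killed if the drive spins down (a sketch; /dev/sdX is a placeholder for the suspect disk):
          # Start the extended (long) self-test; it runs in the drive's firmware
          smartctl -t long /dev/sdX
          # Poll progress; "Aborted by host" would show up in this log too
          smartctl -l selftest /dev/sdX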