elogg

Everything posted by elogg

  1. Disk rebuild successfully completed this morning. @trurl & @JorgeB thanks so much for your assistance!
  2. Not recently, no. Just looked in the share via the UI and only seeing an osx metadata file on disk8. Must be a bunch of empty folders… Not sure why it’s showing on all those disks, I definitely haven’t done that many repairs.
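     For completeness, here's roughly how I'd list what is actually sitting on each array disk for a share from the console; MyShare is a placeholder for the actual share name:
        # list every file under the share on each array disk, skipping macOS metadata files
        for d in /mnt/disk*; do
            echo "== $d =="
            find "$d/MyShare" -type f ! -name '.DS_Store' ! -name '._*' 2>/dev/null
        done
     Any disk that prints nothing but still shows the share folder is just holding empty directories.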
  3. OK, started the array. Not seeing any unmountable filesystem messages and the rebuild seems to be running smoothly now 🤞 diagnostics-20230116-1244.zip
  4. Appreciate the explanation. 👍 I am rebuilding to different disks (of the same sizes); I've gotten into the habit of holding the originals aside, just to be safe. Running xfs_repair without any flags appears to have cleared up the filesystem, and it didn't request a run with -L. How do you recommend I proceed? Despite having dual parity, I'm now thinking I might prefer to rebuild one disk at a time, if it's not too late… Since I canceled the first rebuild attempt, do I need to do the unassign, start array, stop, reassign, and start rebuild dance? I'm not sure whether just exiting maintenance mode and starting the array will trigger a rebuild. Output of a run with the -nv flags now shows:
        Phase 1 - find and verify superblock...
        - block cache size set to 1432488 entries
        Phase 2 - using internal log
        - zero log...
        zero_log: head block 1859593 tail block 1859593
        - scan filesystem freespace and inode maps...
        - found root inode chunk
        Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 6 - agno = 4 - agno = 10 - agno = 15 - agno = 9 - agno = 5 - agno = 11 - agno = 13 - agno = 12 - agno = 14 - agno = 7 - agno = 8 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
        Phase 7 - verify link counts...
        No modify flag set, skipping filesystem flush and exiting.
        XFS_REPAIR Summary    Sun Jan 15 22:06:40 2023
        Phase       Start           End             Duration
        Phase 1:    01/15 22:06:10  01/15 22:06:11  1 second
        Phase 2:    01/15 22:06:11  01/15 22:06:11
        Phase 3:    01/15 22:06:11  01/15 22:06:37  26 seconds
        Phase 4:    01/15 22:06:37  01/15 22:06:37
        Phase 5:    Skipped
        Phase 6:    01/15 22:06:37  01/15 22:06:40  3 seconds
        Phase 7:    01/15 22:06:40  01/15 22:06:40
        Total run time: 30 seconds
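     For anyone following along later: the check and repair correspond roughly to the console commands below, run in maintenance mode. Disk 1 is only an example here; point the md device at whichever disk is being repaired (on newer Unraid releases the device may be /dev/md1p1 rather than /dev/md1). Going through the md device, rather than the raw sdX1 partition, keeps parity in sync with the repair.
        xfs_repair -nv /dev/md1   # read-only check first; makes no changes
        xfs_repair -v /dev/md1    # actual repair; add -L only if xfs_repair explicitly asks for it,
                                  # since zeroing the log can discard recently written metadata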
  5. Yes, the redundant format was done prior to adding the disks to the array. I learned that lesson the hard way a few years back; now I only ever format in the array when expanding. Makes sense that the rebuild handles the overwriting too. Ah, so the original filesystem is preserved on the emulated disk (assuming this is how emulation works?) and, if something is off, it will carry over to the new disk? I assume the next step is performing a repair? The last check was run from the webUI in maintenance mode (xfs). Here's the full output from a run with the -nv flags:
        Phase 1 - find and verify superblock...
        - block cache size set to 1432488 entries
        Phase 2 - using internal log
        - zero log...
        zero_log: head block 1859593 tail block 1859593
        - scan filesystem freespace and inode maps...
        Metadata CRC error detected at 0x43d440, xfs_agf block 0x36949ffe1/0x200
        agf has bad CRC for ag 15
        Metadata CRC error detected at 0x44108d, xfs_bnobt block 0x36949ffe8/0x1000
        btree block 15/1 is suspect, error -74
        Metadata CRC error detected at 0x44108d, xfs_cntbt block 0x36949fff0/0x1000
        btree block 15/2 is suspect, error -74
        - found root inode chunk
        Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15
        data fork in ino 16106127516 claims free block 2027533582
        - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        free space (15,14267749-14267758) only seen by one free space btree
        - check for inodes claiming duplicate blocks...
        - agno = 0 - agno = 2 - agno = 9 - agno = 4 - agno = 15 - agno = 7 - agno = 1 - agno = 8 - agno = 10 - agno = 6 - agno = 12 - agno = 11 - agno = 5 - agno = 13 - agno = 14 - agno = 3 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
        Phase 7 - verify link counts...
        No modify flag set, skipping filesystem flush and exiting.
        XFS_REPAIR Summary    Sun Jan 15 19:29:52 2023
        Phase       Start           End             Duration
        Phase 1:    01/15 19:29:21  01/15 19:29:21
        Phase 2:    01/15 19:29:21  01/15 19:29:22  1 second
        Phase 3:    01/15 19:29:22  01/15 19:29:48  26 seconds
        Phase 4:    01/15 19:29:48  01/15 19:29:48
        Phase 5:    Skipped
        Phase 6:    01/15 19:29:48  01/15 19:29:52  4 seconds
        Phase 7:    01/15 19:29:52  01/15 19:29:52
        Total run time: 31 seconds
  6. Docker manager, VM manager, & mover logger have been disabled. 👍 Appreciate the advice on just going ahead with rebooting. The UI-initiated reboot was successful and the array stop/start functionality is now working! Unfortunately, the rabbit hole is getting deeper... I stopped the array, shut down the server, and swapped the disks out (just in case the rebuild has issues). I set both disk slots to "no disk" and started the array, then stopped it and formatted the two new disks via Unassigned Devices. I then set the disks to their appropriate slots and started the array. This is where things went sideways... disk 4 started rebuilding, but disk 1 showed as unmountable...? The odd thing is that the filesystem check in UD showed no issues after they were formatted. I canceled the rebuild, stopped the array, and restarted in maintenance mode to run a filesystem status check, which is reporting metadata issues. I'm not really sure what the best next step should be; I'm hoping to avoid any data loss at this point. Is it possible disk 1 is actually faulty, or could there be something else at play? diagnostics-20230115-1551.zip
  7. I believe this is from some SSH plugin I set up a few years ago… Any thoughts on why disks 1 & 4 re-registered differently? They were not touched at all during the cache swap. I do see now that some of the drive letters changed around, but I was under the impression that didn't really matter, since the disk identifier is used for matching. I would really like to get a rebuild started, since the array is unprotected with 2 disks disabled. If there is no way of getting insight into what is causing the UI not to stop the array, is there an alternative safe method of stopping it?
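     For reference, this is how I'd double-check from the console which device letter each disk identifier currently maps to (the letters can move around between boots; the assignment should follow the serial-based identifier, not the letter):
        ls -l /dev/disk/by-id/ | grep -v part    # identifiers (with serials) -> current sdX letters
        lsblk -o NAME,SIZE,MODEL,SERIAL          # same information in table form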
  8. I have 2 disabled disks but am unable to stop the array to address them. Clicking "Stop" + "Proceed" outputs the following message but seems to have no other effect:
        nginx: 2023/01/11 22:43:30 [error] 8673#8673: *4428546 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.20.0.86, server: , request: "POST /update.htm HTTP/2.0", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "10.20.0.10:4443", referrer: "https://10.20.0.10:4443/Main"
     So, leading up to this: I have 2 cache SSDs in btrfs single mode. I attempted to upgrade the smaller of the drives with a larger one, but it ended up being defective. I should have tested it first; it is going back for RMA now. I put the old SSD back in, and the cache pool was rebalanced when the array started back up. Sometime during that process, 2 disks from the array were disabled for read issues, which I'm assuming are cabling issues as they're basically new. I've stopped all Dockers and shut down the VMs. I'm unsure what could be preventing the array from stopping, as I'm not seeing any disk or share unmount failures or anything. diagnostics-20230111-2251.zip
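     In case it's useful to anyone debugging a similar hang, a rough sketch of what can be checked from the console while the stop request is stuck (paths are the standard Unraid mount points):
        lsof +D /mnt/user 2>/dev/null | head -n 20     # processes with files open on user shares
        fuser -mv /mnt/disk* /mnt/cache 2>/dev/null    # processes keeping any disk or cache mount busy
        tail -f /var/log/syslog                        # watch what emhttpd logs while the stop hangs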
  9. Rebuild just finished with no errors! I'm hoping it lasts this time lol. Thanks for the assistance!
        Aug 4 05:44:05 Superbob kernel: mdcmd (66): spindown 8
        Aug 4 05:44:05 Superbob kernel: mdcmd (67): spindown 9
        Aug 4 05:48:17 Superbob kernel: mdcmd (68): set md_write_method 0
        Aug 4 05:48:17 Superbob kernel:
        Aug 4 12:29:35 Superbob kernel: md: sync done. time=87679sec
        Aug 4 12:29:35 Superbob kernel: md: recovery thread: exit status: 0
        Aug 4 13:29:36 Superbob kernel: mdcmd (69): spindown 0
        Aug 4 13:29:39 Superbob kernel: mdcmd (70): spindown 4
        Aug 4 13:29:40 Superbob kernel: mdcmd (71): spindown 6
  10. Thanks, that's good to know. I switched it from the motherboard to a SATA controller. It's rebuilding now; fingers crossed that fixes it.
  11. I'm having repeated issues with a brand new disk being disabled. I'm adding a brand new 10TB disk to my array to replace a 4TB one that was failing. I transferred all data off the failing drive while the new one was being pre-cleared. I swapped the new one in and the initial rebuild/parity check passed OK. A couple of days later I noticed that the new disk was disabled due to write errors. I switched to maintenance mode (I didn't really see any SMART errors) and ran an xfs_repair. I restarted the array and parity passed. After a couple of days I see the drive is disabled again. I swapped out the SATA cable for good measure and repeated the repair. This time it failed with a long list of read errors.
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19331882160
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19331886744
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19331886752
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19331886760
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19331886768
        Aug 1 20:58:18 Superbob kernel: md: disk4 write error, sector=19398995632
     Perhaps I'm doing something wrong here, but I'm wondering if I might have picked up a new drive that's bad. Appreciate any assistance! diagnostics-20190802-2322.zip WDC_WD100EMAZ-00WJTA0_JEHLSVTM-20190802-2000.txt
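     For anyone curious, these are the SMART checks I'd run against the new drive; sdX is a placeholder for the actual device. Reallocated/pending sectors generally point at the drive itself, while UDMA CRC errors usually point at cabling:
        smartctl -a /dev/sdX           # full report: attribute table, error log, self-test log
        smartctl -t long /dev/sdX      # start an extended self-test (many hours on a 10TB drive)
        smartctl -l selftest /dev/sdX  # check the result once the test completes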
  12. Back up and running!! Decided to go with the reinstall and copied the config back in. Everything seems OK! Going to work next on scheduled backups :-) I really appreciate your patience and assistance in resolving this!
  13. A while back I temporarily added a disk to the array, but removed it due to disk errors. I did reset the disk configuration to remove it from the assigned devices list; does that count? Besides that, I added several plugins.
  14. I have one from about a month ago, when I updated Unraid versions. I've made some changes since then. Is my install toast?
  15. Hmm... Now it won't boot up.
        Loading /bzimage... ok
        Loading /bzroot... ok
        No Setup signature found...
     It stops on the last line.
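     One thing I was looking at, assuming the flash itself is still readable: rewriting the syslinux bootloader from another Linux machine. sdX1 is a placeholder for the flash's FAT partition, so double-check the device before touching it (there is also a make_bootable script shipped on the Unraid flash that does much the same job, if I remember right):
        syslinux --install /dev/sdX1   # rewrite the bootloader; older syslinux versions take just the device, without --install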
  16. Looks like it had some allocation errors. Got this message:
        Repairing file system.
        ** /dev/rdisk6s1
        ** Phase 1 - Preparing FAT
        ** Phase 2 - Checking Directories
        Item /logs does not appear to be a subdirectory  Correct? yes
        /logs has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        /.preclear has entries after end of directory  Truncate? yes
        Item /.Trashes/501 does not appear to be a subdirectory  Correct? yes
        /.Trashes/501 has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        /.Trashes has entries after end of directory  Truncate? yes
        /SYSLINUX has entries after end of directory  Truncate? yes
        /CONFIG has entries after end of directory  Truncate? yes
        Item /CONFIG/PLUGINS/tablesorter does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/tablesorter has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        Item /CONFIG/PLUGINS/ipmi does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/ipmi has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        Item /CONFIG/PLUGINS/preclear.disk does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/preclear.disk has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        Item /CONFIG/PLUGINS/dynamix.system.info does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/dynamix.system.info has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        Item /CONFIG/PLUGINS/dynamix.system.stats does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/dynamix.system.stats has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        /CONFIG/PLUGINS/dynamix.ssd.trim has entries after end of directory  Truncate? yes
        /CONFIG/PLUGINS/statistics.sender has entries after end of directory  Truncate? yes
        Item /CONFIG/PLUGINS/unassigned.devices/packages does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/unassigned.devices/packages has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        /CONFIG/PLUGINS/unassigned.devices has entries after end of directory  Truncate? yes
        /CONFIG/PLUGINS/ca.backup has entries after end of directory  Truncate? yes
        Item /CONFIG/PLUGINS/dockerMan/templates-user does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/dockerMan/templates-user has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        Item /CONFIG/PLUGINS/dockerMan/templates/limetech does not appear to be a subdirectory  Correct? yes
        /CONFIG/PLUGINS/dockerMan/templates/limetech has too many clusters allocated (logical=0, physical=16384)  Drop superfluous clusters? yes
        /CONFIG/PLUGINS/dockerMan/templates has entries after end of directory  Truncate? yes
        ** Phase 3 - Checking for Orphan Clusters
        Found orphan cluster(s)  Fix? yes
        Marked 483 clusters as free
        Free space in FSInfo block (1933989) not correct (1934482)  Fix? yes
        386 files, 30951712 KiB free (1934482 clusters)
        ***** FILE SYSTEM WAS MODIFIED *****
        File system check exit code is 0.
        Updating boot support partitions for the volume as required.
        Operation successful.
  17. Here it is: superbob-diagnostics-20170317-1532.zip
  18. An update to my Plex docker hung and I had to manually kill and rebuild the container. Plex is running fine and I haven't experienced any other issues, but I keep getting error messages. I'm worried that my flash device is becoming corrupted. Any advice on troubleshooting and correcting these errors would be much appreciated. The log is spitting out:
        Mar 17 14:11:53 SuperBob kernel: FAT-fs (sdb1): error, invalid access to FAT (entry 0x20924080)
        Mar 17 14:11:53 SuperBob kernel: FAT-fs (sdb1): error, invalid access to FAT (entry 0x06802322)
        Mar 17 14:11:53 SuperBob kernel: FAT-fs (sdb1): error, invalid access to FAT (entry 0x06802322)
        Mar 17 14:11:53 SuperBob kernel: FAT-fs (sdb1): error, invalid access to FAT (entry 0x46219208)
     On the Dashboard page, below the Apps heading, I'm getting this repeating error message:
        Warning: DOMDocument::load(): Start tag expected, '<' not found in /boot/config/plugins/dockerMan/templates/limetech/PlexMediaServer.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 224
        Warning: DOMDocument::load(): Start tag expected, '<' not found in /boot/config/plugins/dockerMan/templates/linuxserver.io/tvheadend.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 224
        Warning: DOMDocument::load(): Start tag expected, '<' not found in /boot/config/plugins/dockerMan/templates/linuxserver.io/ombi.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 224
        Warning: DOMDocument::load(): Start tag expected, '<' not found in /boot/config/plugins/dockerMan/templates/linuxserver.io/domoticz.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 224
        Warning:
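     For anyone hitting the same warnings: the template files named in them can be inspected directly to see whether they still contain XML or have been chewed up by the FAT corruption, and the flash can be given a read-only check. The paths are taken from the warnings above; a healthy dockerMan template normally starts with an XML declaration or a <Container> tag:
        head -c 200 /boot/config/plugins/dockerMan/templates/limetech/PlexMediaServer.xml | xxd   # dump the first bytes of a suspect template
        fsck.vfat -n /dev/sdb1   # read-only FAT check; ideally run while nothing is writing to /boot, or from another machine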
  19. I've followed the instructions here: https://www.linuxserver.io/2016/07/28/installing-nextcloud-on-unraid/ and keep running into an issue trying to create the admin account for Nextcloud. I'm getting an access denied message showing an internal Docker IP, despite entering my unRAID instance IP and DB port. I have tried logging into the DB via terminal and get the same message. Appreciate any advice on how to resolve this issue. Edit: Reinstalled the MariaDB docker and can now successfully log in to mysql via terminal and Sequel Pro. But I'm still getting the access denied message with a Docker IP on the admin account creation page.
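     In case anyone lands here with the same problem: access denied showing a Docker-internal IP usually means the DB user is only allowed to connect from localhost inside the MariaDB container. A sketch of creating a Nextcloud database and a user that can connect from any host; the container name, database, user, and password are all placeholders to adjust:
        docker exec -it mariadb mysql -uroot -p -e "
          CREATE DATABASE IF NOT EXISTS nextcloud;
          GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'%' IDENTIFIED BY 'choose-a-strong-password';
          FLUSH PRIVILEGES;"
     The Nextcloud admin setup page can then be pointed at the unRAID host IP and the MariaDB container's published port with that user.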