David Spivey

Everything posted by David Spivey

  1. I recently upgraded one of my 2TB drives to an 8TB. The process was successful, and before this, I was having no issues. A day or two afterwards, unraid is telling me one of my drives has write errors. I don't know what to blame for this write error. Can anyone provide recommendations? Here is the relevant error in the logs:

     Mar 24 03:00:02 DStorage kernel: sd 34:0:1:0: [sdk] tag#422 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
     Mar 24 03:00:02 DStorage kernel: sd 34:0:1:0: [sdk] tag#422 Sense Key : 0x3 [current]
     Mar 24 03:00:02 DStorage kernel: sd 34:0:1:0: [sdk] tag#422 ASC=0x11 ASCQ=0x0
     Mar 24 03:00:02 DStorage kernel: sd 34:0:1:0: [sdk] tag#422 CDB: opcode=0x88 88 00 00 00 00 00 00 23 c7 b0 00 00 00 08 00 00
     Mar 24 03:00:02 DStorage kernel: blk_update_request: critical medium error, dev sdk, sector 2344880 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
     Mar 24 03:00:02 DStorage kernel: md: disk9 read error, sector=2344816
     Mar 24 03:00:04 DStorage kernel: mpt2sas_cm1: log_info(0x31110610): originator(PL), code(0x11), sub_code(0x0610)
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#430 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=1s
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#430 Sense Key : 0x4 [current]
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#430 ASC=0x44 ASCQ=0x0
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#430 CDB: opcode=0x88 88 00 00 00 00 01 64 ed 79 a8 00 00 00 20 00 00
     Mar 24 03:00:04 DStorage kernel: blk_update_request: critical target error, dev sdk, sector 5988252072 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
     Mar 24 03:00:04 DStorage kernel: md: disk9 read error, sector=5988252008
     Mar 24 03:00:04 DStorage kernel: md: disk9 read error, sector=5988252016
     Mar 24 03:00:04 DStorage kernel: md: disk9 read error, sector=5988252024
     Mar 24 03:00:04 DStorage kernel: md: disk9 read error, sector=5988252032
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#436 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#436 Sense Key : 0x4 [current]
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#436 ASC=0x44 ASCQ=0x0
     Mar 24 03:00:04 DStorage kernel: sd 34:0:1:0: [sdk] tag#436 CDB: opcode=0x8a 8a 00 00 00 00 01 64 ed 79 a8 00 00 00 20 00 00
     Mar 24 03:00:04 DStorage kernel: blk_update_request: critical target error, dev sdk, sector 5988252072 op 0x1:(WRITE) flags 0x0 phys_seg 4 prio class 0
     Mar 24 03:00:04 DStorage kernel: md: disk9 write error, sector=5988252008
     Mar 24 03:00:04 DStorage kernel: md: disk9 write error, sector=5988252016
     Mar 24 03:00:04 DStorage kernel: md: disk9 write error, sector=5988252024
     Mar 24 03:00:04 DStorage kernel: md: disk9 write error, sector=5988252032
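     For anyone hitting the same errors: a reasonable first check here is the drive's SMART data, since "critical medium error" generally points at the disk itself rather than a cable or controller. A minimal console sketch (assuming sdk is still the same device; letters can change between boots):

         # Dump SMART health and attributes; Reallocated_Sector_Ct and
         # Current_Pending_Sector are the ones to watch
         smartctl -a /dev/sdk

         # Optionally queue a short self-test; results show up in -a output
         smartctl -t short /dev/sdk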
  2. Ok, thanks for your help, Jorge. Without your instruction, I would most likely have lost data.
  3. The array comes online, the disk is mounted, and there is no lost+found folder when examining /mnt/disk7. Is there anything else I should do now besides replacing the disk / cables? A parity check had started a while back, when the power problems were occurring, and was aborted. Do I need to run one now, or wait until the disk is replaced?
  4. Phase 1 - find and verify superblock...
             - block cache size set to 319640 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 674161 tail block 674157
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
             - agno = 9
             - agno = 10
             - agno = 11
             - agno = 12
             - agno = 13
             - agno = 14
             - agno = 15
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
             - agno = 9
             - agno = 10
             - agno = 11
             - agno = 12
             - agno = 13
             - agno = 14
             - agno = 15
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
             - agno = 9
             - agno = 10
             - agno = 11
             - agno = 12
             - agno = 13
             - agno = 14
             - agno = 15
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
             - agno = 9
             - agno = 10
             - agno = 11
             - agno = 12
             - agno = 13
             - agno = 14
             - agno = 15
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (2:674385) is ahead of log (1:2).
     Format log to cycle 5.

             XFS_REPAIR Summary    Tue Nov 23 11:09:15 2021

     Phase          Start           End             Duration
     Phase 1:       11/23 11:07:16  11/23 11:07:16
     Phase 2:       11/23 11:07:16  11/23 11:07:27  11 seconds
     Phase 3:       11/23 11:07:27  11/23 11:08:11  44 seconds
     Phase 4:       11/23 11:08:11  11/23 11:08:11
     Phase 5:       11/23 11:08:11  11/23 11:08:14  3 seconds
     Phase 6:       11/23 11:08:14  11/23 11:08:52  38 seconds
     Phase 7:       11/23 11:08:52  11/23 11:08:52

     Total run time: 1 minute, 36 seconds
     done
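     (For anyone finding this thread later: output like the above is what xfs_repair prints when run with -L. Here it was driven from the GUI's filesystem check in maintenance mode, but the console equivalent would be roughly the following, with /dev/md7 assumed for the disk7 slot:)

         # -L zeroes the metadata log before repairing; only use it after
         # a mount attempt to replay the log has already failed
         xfs_repair -L /dev/md7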
  5. Phase 1 - find and verify superblock...
             - block cache size set to 319640 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 674161 tail block 674157
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
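     What this error asks for, before resorting to -L, is a mount/unmount cycle so the log gets replayed. A rough sketch of the manual route (device and mount point are assumptions; in unraid the mount attempt normally happens just by starting the array normally):

         mkdir -p /tmp/disk7
         mount /dev/md7 /tmp/disk7   # mounting replays the XFS log if it can
         umount /tmp/disk7
         xfs_repair -v /dev/md7      # then re-run the repair without -L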
  6. I don't know what I'm doing wrong, but I started the array in maintenance mode, changed the options to -nv as recommended, and clicked Check. After hours of refreshing the page, it still looks like nothing is happening. How do I check whether the Check button actually started anything?
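     In case it helps anyone else stuck at this step: one way to tell whether a check actually kicked off is to look for the process from a console session:

         # Any output means xfs_repair is still running; the [x] in the
         # pattern keeps grep from matching its own process
         ps aux | grep '[x]fs_repair'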
  7. I'm not sure how to proceed, but I'm sure I should not format anything. I have attached a log and screenshot. The WDC_WD80EDAZ-11TA3A0_VGH5VVGG was Disk7. dstorage-diagnostics-20211122-1114.zip
  8. One of the HBA cards went down and unraid sent me an email saying many of my drives were offline. I started shutting down unraid, but it hung and would not shut down cleanly. The diags for that are attached, as well as the diags after rebooting with the new card installed. Now unraid says two of my drives are disabled. How should I proceed? Should I make unraid rebuild both drives? dstorage-diagnostics-20210824-1128.zip dstorage-diagnostics-20210824-1328.zip
  9. Yep. New config already done. All drives carefully assigned to their proper places, and it's rebuilding parity now.
  10. Thanks for the concern. It was reasonable, considering how I had already messed things up. Fortunately, I had been careful enough to go into Disk Settings and disable auto-start before I rebooted to put the drives in.
  11. @JorgeB I took the original drives and tested them in another system. After running diagnostics, they're fine. If I understand correctly, I should put the original drives back in, remove the formatted ones, and then create a new config. If I create a new config and make sure the parity drives are assigned as parity and the data drives as data, does the order of the slots the drives are in matter?
  12. As far as assigning the disks and rebuilding parity goes, wouldn't that assume that the drives unraid previously said were bad are actually good? If the drives really are going bad, won't the parity sync fail? It looks like I need to run some tests on these drives first to see whether they're bad.
  13. This is the disk from disk1's slot. It is mountable and readable. The other drive that was unassigned from disk7's slot is outside of the system.
  14. @JorgeB The "not installed" disk from my screenshot has been formatted. I had no idea that unraid could or would even format an emulated drive. @trurl Both disk1 and disk7 were almost fully filled disks in the array. The current disk7 is the replacement drive for that slot. My diags are attached. diagnostics-20210816-1520.zip
  15. This is a total horror story. I had 2 drives go down. I have dual parity. I followed the procedure to begin replacing the first drive that failed: I unassigned both drives, stopped the array, and added one (new) drive to the empty slot. While that drive was rebuilding, a power issue occurred and the system got powered off. When the system came back on, it reported that the rebuild was still in progress. However, it was also asking me to format that drive and the unassigned device. I left that request alone until the rebuild completed, assuming it would go away afterwards. After the rebuild finished on the first drive, I rebooted. The message was still there, and the drives said they were unmountable, so I made the mistake of using that format option. I now have an unassigned drive and a newly rebuilt drive that are both EMPTY. I am horrified. I need to know how to get this system back to normal.
  16. I actually have a full 40TB unraid / ESXi server at home, so this is temporary storage as I travel until I get back home and dump it there. I record 1-to-2-hour videos at 4K, which runs about 40-150 GB per recording. When I'm busy I'm doing that 7 days a week, and sometimes twice daily. I then need to edit those videos over gigabit ethernet (also leveraging an SSD on the editing laptop) in order to upload them to YouTube and Facebook. Since I need to ensure the source videos never get lost and eventually get placed in permanent storage, I use 2 parity drives.
  17. Yes, gigabit. However, I would assume that newer processors would be more efficient than a Pentium, saving power, right?
  18. The main reason I intend to use unraid is because of dual parity and flexibility for swapping drives / adding drives as necessary.
  19. Hey guys. I need hardware recommendations. I want to buy a low-power, small-form-factor unraid box to be used in an RV. I will place files on unraid over gigabit ethernet, and maybe occasionally USB3. I need it to be physically small and power-efficient, without hindering bandwidth over ethernet. I will be using 2 parity drives and plan to have 3 or so standard-size data drives, so 5 total. I won't be using VMs or dockers, so CPU / memory is unimportant unless it would limit ethernet throughput.
  20. @Helmonder I have heard that leaving the web interface open is the kiss of death for speed with syncthing. I don't have much experience using it, but I have read over on their forums that if you close the web UI for both the client and server, the transfer speed suddenly spikes and everything syncs quickly. One person had a transfer of about 11 GB going for days while he was watching it, complained on the forums, tried closing the webpage, and then everything finished syncing in an hour or two.
  21. Please explain if I'm missing something -- the things that might get broken only break if you use the New Permissions tool on the appdata share, right?
  22. @mbc0, @CommandLionInterface, @CHBMB I requested, and was granted, a change to the docker container. The solution to all your write-over-samba problems is now here. First, make sure the container is up to date. Then edit the docker container and go to "Add another Path, Port or Variable". The Config Type is "Variable", the Key is "UMASK_SET", and the Value is "000" (all without quotes). Click Add, then Apply. Afterwards, open the WebUI for the syncthing docker container and go to Actions > Advanced. On EACH of the folders you have synced to unRAID, check the "ignorePerms" box and click Save. Now, when new files are synced over to unRAID, they can be written to over the network share. If existing files still cannot be edited, use the "New Permissions" tool in unRAID or the "Docker Safe New Permissions Tool" plugin to fix the share that has the problem.
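      If you would rather set this up from the command line than the template editor, the same variable can be passed when creating the container. A sketch using the linuxserver/syncthing image; the PUID/PGID, ports, and host paths below are placeholders to adapt, not values from my setup:

          docker run -d --name=syncthing \
            -e UMASK_SET=000 \
            -e PUID=99 -e PGID=100 \
            -p 8384:8384 -p 22000:22000 \
            -v /mnt/user/appdata/syncthing:/config \
            -v /mnt/user/Sync:/sync \
            linuxserver/syncthing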