
JorgeB

Moderators
  • Posts: 67,067
  • Days Won: 703

Community Answers

  1. JorgeB's post in Disk read errors following a filesystem error was marked as the answer   
    Just to be clear, a disk goes red when it gets disabled; it doesn't mean filesystem corruption was detected.
     
    Disks are dropping offline. This can be a power/connection problem, but since both are on a Marvell controller, and these are known to sometimes drop disks for no reason, the first recommendation would be to replace it with a recommended controller.
     
  2. JorgeB's post in Unable to access anything after array start after upgrade to 6.1.2 RC2 was marked as the answer   
    Yes, you can create a new flash drive, then assign all the devices as they were and check "parity is already valid" before array start; all data/pool disks will be imported. If you don't know the assignments, they can be seen in the diags posted; disk0 is parity:
     
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk0: (sdm) WDC_WD160EMFZ-11AFXA0_2CG607NR size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk1: (sdk) TOSHIBA_MG08ACA16TE_91D0A0SUFWTG size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk2: (sdg) WDC_WD140EDFZ-11A0VA0_9LG79M3G size: 13672382412
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk3: (sdi) TOSHIBA_MG08ACA16TE_91L0A23XFWTG size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk4: (sdh) TOSHIBA_MG08ACA16TE_91D0A12BFWTG size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk5: (sdj) TOSHIBA_MG08ACA16TE_91L0A41XFWTG size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk6: (sdf) WDC_WD140EDFZ-11A0VA0_QBJW606T size: 13672382412
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk7: (sdl) TOSHIBA_MG08ACA16TE_91L0A30LFWTG size: 15625879500
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk8: (sdc) WDC_WD140EDFZ-11A0VA0_9KGVYJ9L size: 13672382412
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk9: (sdd) WDC_WD120EMFZ-11A6JA0_XJG0KLNM size: 11718885324
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk10: (sdr) WDC_WD120EMFZ-11A6JA0_9JHG38XT size: 11718885324
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk11: (sdq) WDC_WD120EMAZ-11BLFA0_8CK40JJE size: 11718885324
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk12: (sdn) WDC_WD120EMAZ-11BLFA0_8CJWDE6E size: 11718885324
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk13: (sdp) HGST_HUS728T8TALE6L4_VDGMW6LD size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk14: (sdac) HGST_HUS728T8TALE6L4_VDGLS39D size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk15: (sdae) HGST_HUS728T8TALE6L4_VDKZ3DUM size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk16: (sdad) HGST_HUS728T8TALE6L4_VDGMZGJD size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk17: (sdag) HGST_HUS728T8TALE6L4_VDKZS7MM size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk18: (sdaf) HGST_HUS728T8TALE6L4_VDGN090D size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk19: (sdo) WDC_WD60EFRX-68L0BN1_WD-WX11D38E2D7P size: 5860522532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk20: (sdv) ST8000AS0002-1NA17Z_Z840P04T size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk21: (sdw) ST8000AS0002-1NA17Z_Z840P8W8 size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk22: (sdx) ST8000AS0002-1NA17Z_Z8410NWK size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk23: (sdy) ST8000AS0002-1NA17Z_Z8410P11 size: 7814026532
    Apr 1 13:55:46 REPOSITORY kernel: md: import disk24: (sdz) ST8000AS0002-1NA17Z_Z8410Q6E size: 7814026532
    Apr 1 13:55:47 REPOSITORY kernel: md: import disk25: (sdaa) ST8000AS0002-1NA17Z_Z840P8XR size: 7814026532
    Apr 1 13:55:47 REPOSITORY kernel: md: import disk26: (sdab) ST8000AS0002-1NA17Z_Z840P0LC size: 7814026532
    Apr 1 13:55:47 REPOSITORY kernel: md: import disk27: (sdu) ST8000AS0002-1NA17Z_Z840Q6YA size: 7814026532
    Apr 1 13:55:47 REPOSITORY kernel: md: import disk28: (sdt) ST8000NM000A-2KE101_WKD151FK size: 7814026532

    Cache pool:
     
    Apr 1 13:55:47 REPOSITORY emhttpd: import 30 cache device: (sdb) 512GB_QLC_SATA_SSD_AF20061200152
    Apr 1 13:55:47 REPOSITORY emhttpd: import 31 cache device: (nvme1n1) PM951_NVMe_SAMSUNG_512GB_S29PNX0H801455
    Apr 1 13:55:47 REPOSITORY emhttpd: import 32 cache device: (nvme0n1) PM951_NVMe_SAMSUNG_512GB_S29PNX0H617859
    Apr 1 13:55:47 REPOSITORY emhttpd: import 33 cache device: (sde) INTEL_SSDSC2BB240G4_PHWL4493025V240NGN
  3. JorgeB's post in (SOLVED)new NVME not visible -- globally duplicate IDs was marked as the answer   
    Update to v6.12-rc2; it should already include a 'quirk' for this device.
  4. JorgeB's post in (SOLVED) Files inaccesible after reboot was marked as the answer   
    Apr 4 07:32:47 NAS kernel: shfs[5708]: segfault at 14e145c0fdc0 ip 000014e167624e79 sp 000014e1667b6b10 error 4 in libfuse3.so.3.10.5[14e167623000+19000]

    shfs segfaulted, so you'll need to reboot, but there are several different apps segfaulting the same way, so it might be a good idea to run memtest.
  5. JorgeB's post in Disk 2 down, disk 7 with various error messages and no physical access to the server: what to do? was marked as the answer   
    It's not logged as a disk problem, and SMART looks fine, so it's most likely a power/connection problem. If you have been having multiple issues, the PSU would be suspect, or some cable that's common to all affected disks.
  6. JorgeB's post in Wrong or no file system was marked as the answer   
    Run it again without -n or nothing will be done, and if it asks for it, use -L.
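    A minimal sketch of the sequence, assuming this is the standard xfs_repair check run from the console with the array in maintenance mode (/dev/md1 is just an example; use the md device for the affected disk):

    # check only, no changes are made
    xfs_repair -n /dev/md1
    # actual repair (no -n)
    xfs_repair /dev/md1
    # only if the repair asks for it: zero the log, then repair
    xfs_repair -L /dev/md1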
  7. JorgeB's post in Device is Disabled Contents Emulated - what to do was marked as the answer   
    Diags are from after rebooting, so we can't see what happened, but the disk looks healthy, so it's possibly a power/connection problem. Assuming the emulated disk is mounting and contents look correct, you can rebuild on top; check/replace cables first to rule that out:
     
    https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
     
  8. JorgeB's post in Slow parity check - Help updating LSI firmware was marked as the answer   
    Look for the 9211-8i under legacy HBAs.
  9. JorgeB's post in two drives both with red Xs - Device disabled, contents emulated was marked as the answer   
    Diags are from after rebooting, so we can't see what happened, but the disks look OK, so it's most likely a power/connection problem. Assuming the emulated disks are still mounting and contents look correct, you can rebuild on top; I suggest first replacing/swapping the cables to rule them out.
  10. JorgeB's post in UNC Errors Reported on Parity Drive was marked as the answer   
    Sorry, didn't notice; since it passed, the disk is OK for now. Keep monitoring, and if there are any more similar errors I would replace it.
  11. JorgeB's post in Help fixing a zfs pool was marked as the answer   
    It's a known issue; you can wait for rc3, which should be out very soon. If you need to access the data, you should also be able to import the pool manually with:
    zpool import hdd
    It will be mounted at /mnt/hdd, but it won't be available over SMB.
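    A couple of follow-up commands, assuming the manual import works (pool name taken from the post; both are standard zfs utilities):

    zpool status hdd   # verify the pool imported and check for errors
    zpool export hdd   # release the manual import when you're done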
  12. JorgeB's post in The total copying speed is very slow when using commands to copy data simultaneously on multiple disks in Unraid, and cannot find the reason was marked as the answer   
    Using dd with /dev/mdX was just to confirm whether the issue really is the md driver, as I suspect (see the sketch below), but the destination devices would not be usable in the array if they are different capacities. You would be reading and writing at speeds much higher than usual; since there's no parity, only reading, or only writing, should be much faster, like the result you had when transferring over LAN.
     
    Another way you can test, and this way the copy will be valid, is to create multiple single-device pools and temporarily assign the destination disks as single pool members, then copy from disk1 to poolone, from disk2 to pooltwo, etc. If the md driver was the problem, performance should be much better; when the copy is done you can reassign the pool disks to the array.
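    A minimal sketch of the dd read test, assuming disk1 is one of the sources (device names are examples; reading to /dev/null avoids writing anything):

    # sequential read through the md driver
    dd if=/dev/md1 of=/dev/null bs=1M count=10000 status=progress
    # same read from the raw device for comparison (sdX is hypothetical)
    dd if=/dev/sdX of=/dev/null bs=1M count=10000 status=progress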
  13. JorgeB's post in Question about /config restore in my working unraid was marked as the answer   
    You can restore, but you will lose any changes made since; you need to be especially careful if there were array assignment changes. It's also a good idea to back up the current /config folder in case you later need something from it.
  14. JorgeB's post in PCIe Bus Error in LOG was marked as the answer   
    Try this:
    https://forums.unraid.net/topic/118286-nvme-drives-throwing-errors-filling-logs-instantly-how-to-resolve/?do=findComment&comment=1165009
     
  15. JorgeB's post in Using DD command to copy XFS disk, Unraid system does not recognize it was marked as the answer   
    Unraid requires a specific partition layout and signature; you can format the new disk with Unraid, then copy the data from the old one.
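    A minimal sketch, assuming the old disk is mounted with Unassigned Devices and the new disk was formatted as disk2 (both paths are hypothetical):

    # preserve attributes; the trailing slashes copy the folder contents
    rsync -avh --progress /mnt/disks/olddisk/ /mnt/disk2/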
  16. JorgeB's post in Unraid failing to boot past ../bzroot. No changes or updates made was marked as the answer   
    Yes, but if the array is started when super.dat is copied, it will be considered an unclean shutdown and a parity check will be started on the next boot.
  17. JorgeB's post in v6.11.5 Out Of memory Errors caught by 'common problems' error itself was silent was marked as the answer   
    If it's a one-time thing you can ignore it; if it keeps happening, try limiting the RAM for VMs and/or docker containers further. The problem is usually not just about not enough RAM but more about fragmented RAM; alternatively, a small swap file on disk might help, and you can use the swapfile plugin:
     
    https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
  18. JorgeB's post in /var/log is full was marked as the answer   
    Most of the log spam is from the mover; reboot and disable mover logging.
  19. JorgeB's post in New Build - Unraid not able to see more than 2 disks from my 4 bay DAS was marked as the answer   
    You cannot replace two drives with single parity. If disk2 and parity failed, you can do a new config (any data on disk1 will be kept), then assign a new disk2 and a new parity, and parity will be synced after array start.
  20. JorgeB's post in Multiple new shares that I have not created. was marked as the answer   
    Any top-level folder on a data disk or pool will be a share.
  21. JorgeB's post in Extremely slow transfer from NTFS HDD to Array was marked as the answer   
    This disk is dead:
     
    197 Current_Pending_Sector  -O--C- 001 001 000 - 20320
    198 Offline_Uncorrectable   ----C- 001 001 000 - 20320
  22. JorgeB's post in Web UI mess up was marked as the answer   
    Try updating to v6.12-rc2; there are some optimizations that can help with those errors.
  23. JorgeB's post in HELP - Docker Service failed to start was marked as the answer   
    Run a scrub on the pool and make sure there are no uncorrectable errors; after that, recreate the docker image.
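    A minimal sketch, assuming a btrfs cache pool mounted at /mnt/cache (adjust to your pool name; the scrub can also be started from the pool's page in the GUI):

    btrfs scrub start /mnt/cache    # runs in the background
    btrfs scrub status /mnt/cache   # progress and error counters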
  24. JorgeB's post in Weekly crash debugging help was marked as the answer   
    Macvlan call traces are usually the result of having dockers with a custom IP address, and they will end up crashing the server. Upgrading to v6.10 or later and switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
  25. JorgeB's post in i have sbSyncErrs=2047002 on btrfs, but what's next? was marked as the answer   
    Those are sync errors, unrelated to the filesystem. You had a 3-device array, then did a new config without the SSD and checked "parity is already valid"; that's not true unless the device was cleared before removal, so some sync errors after that are expected.