JorgeB

Moderators
  • Posts

    67,662
  • Joined

  • Last visited

  • Days Won

    707

Everything posted by JorgeB

  1. The only way would be to use HPA to limit the 18TB disk to 16TB; not all controllers/devices support this, but it won't hurt to try.
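The HPA approach above can be sketched with hdparm. This is a sketch only: /dev/sdX is a placeholder device, and the sector math assumes decimal terabytes and 512-byte logical sectors; whether the clipped size sticks depends on the drive and controller.

```shell
# Sketch only: clipping an 18TB drive to 16TB with an HPA via hdparm.
# /dev/sdX is a placeholder; sizes use decimal TB (16 TB =
# 16,000,000,000,000 bytes) and 512-byte logical sectors.
target_sectors=$((16 * 1000 * 1000 * 1000 * 1000 / 512))
echo "$target_sectors"    # 31250000000

# Check the current native max and whether the drive honors HPA at all:
#   hdparm -N /dev/sdX
# Set the visible size; the leading 'p' asks the drive to keep the
# setting across power cycles, if it supports that:
#   hdparm -N p$target_sectors /dev/sdX
```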
  2. Always, you can use the new larger drive as parity and the old parity as data.
  3. No, it needs to be the same size or larger than the largest data disk in the array.
  4. Make sure the emulated disk is mounting after an array restart; if it is, you can rebuild on top. Since the disk dropped, it's not a bad idea to replace/swap the cables/slot first to rule that out. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  5. Disk looks mostly OK, but it's still not a bad idea to run an extended SMART test before using it again. If it passes, and only if the emulated disk is mounting correctly, you should replace/swap the cables to rule those out, then you can rebuild on top: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
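A quick sketch of running that extended test from the console; /dev/sdX is a placeholder for the actual device name:

```shell
# Sketch only: run an extended (long) SMART self-test on a placeholder
# device and check the result afterwards.
smartctl -t long /dev/sdX   # starts the test; it runs inside the drive
smartctl -c /dev/sdX        # shows the estimated self-test duration
smartctl -a /dev/sdX        # once done, the self-test log should show
                            # "Completed without error"
```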
  6. I took a look before and config-wise everything looks good to me. It's also strange that using user0 works correctly, since that's what the mover used before; no idea what it could be.
  7. I don't dispute that; in fact, as mentioned, I've gotten similar results in the past, but it's not always possible. Curiously, user shares are currently still faster for me when using SMB than when doing an internal server transfer. This is my other main server, better but still not great. My point was that this has been a known issue for many years now and affects some users particularly badly (here's just one example), and if it could be improved it would be much better than a setting that only helps SMB. Also, last time I checked, Samba's SMB multichannel was still considered experimental, though it should be mostly OK by now. Of course, if it's currently not possible to fix the base user share performance, then any other improvements are welcome, even if they don't help every type of transfer.
  8. Possibly related to this:
  9. Next time please post the complete diagnostics instead. The disk dropped offline and reconnected with a different letter; post the output of: smartctl -x /dev/sdi
  10. As long as they show up in Unraid it's not a problem; I have the same expander also connected to a 9308-8i (without a BIOS installed) and have no issues with 28 devices connected.
  11. There's nothing logged about the crash, which suggests a hardware issue. One more thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, and if it doesn't, start turning on the other services one by one.
  12. There are some recovery options here; the btrfs restore option is the most likely to work in this case. You should also run memtest, since data corruption was being detected in the pool.
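A minimal sketch of the btrfs restore path: it reads files off the damaged pool without writing to it. /dev/sdX1 and /mnt/recovery are placeholders; the destination needs enough free space on another disk.

```shell
# Sketch only: read-only recovery of files from an unmountable btrfs pool.
# /dev/sdX1 and /mnt/recovery are placeholders for this example.
mkdir -p /mnt/recovery

# Dry run first: -D lists what would be restored without writing anything.
btrfs restore -D -v /dev/sdX1 /mnt/recovery

# Actual restore; -i ignores errors so it keeps going past damaged files.
btrfs restore -i -v /dev/sdX1 /mnt/recovery
```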
  13. Disk1 is disabled and disk2 is failing. There might be a few things you can do to try and recover, but since the disks are small, if you have backups that would be the easiest way out of this.
  14. There's nothing about the crash logged, which suggests a hardware problem; unfortunately it's difficult to say exactly what without starting to swap some parts around.
  15. Yeah, user shares performance can vary a lot. I also had some servers/configs in the past where I could get around 600-800MB/s, but lately this is what I get on one of my main servers. There have been a lot of posts from other users with the same issue: between 300-500MB/s when using user shares, 1GB/s+ with disk shares.
  16. Looks like a corrected RAM error:
      Jul 3 20:10:55 OASIS kernel: [Hardware Error]: Corrected error, no action required.
      Jul 3 20:10:55 OASIS kernel: [Hardware Error]: CPU:0 (17:1:1) MC15_STATUS[Over|CE|MiscV|AddrV|-|-|SyndV|CECC|-|-|-]: 0xdc2040000000011b
      Jul 3 20:10:55 OASIS kernel: [Hardware Error]: Error Addr: 0x0000000034f18740
      Jul 3 20:10:55 OASIS kernel: [Hardware Error]: IPID: 0x0000009600050f00, Syndrome: 0x00008b100a400200
      Jul 3 20:10:55 OASIS kernel: [Hardware Error]: Unified Memory Controller Ext. Error Code: 0, DRAM ECC error.
      Since it's not a server board there won't be more info in the BIOS on which DIMM it was, so I suggest you try one DIMM at a time, though it could have been a one-time thing or not a frequent issue.
  17. Run xfs_repair like I typed above, without -n, or nothing will be done. The disk looks fine; replace the SATA cable, then copy all the data to the new disk in the array, replacing existing files; this will fix any corrupt files. Alternatively, you could run a binary file compare utility to detect the corrupt files, but it will take about the same time.
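The binary-compare route could look something like this; /mnt/old and /mnt/new are placeholder mount points, and this assumes both copies are reachable as plain directories:

```shell
# Sketch only: compare every file under /mnt/old against the copy in
# /mnt/new byte-for-byte and print the names that differ (the likely
# corrupt ones). /mnt/old and /mnt/new are placeholders for this example.
cd /mnt/old
find . -type f | while read -r f; do
    cmp -s "$f" "/mnt/new/$f" || echo "differs: $f"
done
```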
  18. You need to specify the partition at the end:
      xfs_repair -v /dev/sdc1
      To change the UUID you can use:
      xfs_admin -U generate /dev/sdc1
      These errors are usually a bad SATA cable; please post a SMART report for that disk. That's good, but don't forget that because of the read errors during the rebuild there can be more corrupt files, unless those sectors were unused.
  19. Yeah, some SAS devices are known to not spin down/up correctly; you can post in the plugin support thread for more info.
  20. That is because of the filesystem corruption on disk5; see below how to fix it, though naturally that won't fix the corrupt files. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui Run it without -n or nothing will be done; if it asks for -L, use it.
  21. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  22. There are several ATA errors on disk3; check/replace the cables. Disks 1 and 5 are still showing filesystem corruption; run xfs_repair without -n.
  23. Enjoyed the podcast, but IMHO more important than using other things to get around the performance penalty introduced by user shares would be to try and improve that. For example, I still need to use disk shares for internal transfers if I want good performance, and any SMB improvements won't help with that. E.g., a transfer of the same folder contents (16 large files) from one pool to another done with pv, first using disk shares, then using user shares:
      46.4GiB 0:00:42 [1.09GiB/s] [==============================================>] 100%
      46.4GiB 0:02:45 [ 286MiB/s] [==============================================>] 100%
      If the base user shares performance could be improved it would also benefit SMB and any other transfers.
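For reference, a transfer like the one above can be timed with pv; the pool paths below are placeholders, not the actual shares from the test:

```shell
# Sketch only: copy a folder between two pools while pv reports
# throughput. /mnt/pool1/data and /mnt/pool2/data are placeholder paths;
# for the user-share comparison, substitute /mnt/user/... paths.
cd /mnt/pool1/data && tar -cf - . | pv | tar -xf - -C /mnt/pool2/data
```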
  24. You have 2 HBAs, mpt2sas_cm0 and mpt2sas_cm1, and cm1 is the one with the problem; you can see which one it is by the disks connected to it. Then check that it's well seated and sufficiently cooled; you can also try a different slot if available.
  25. Yeah, but there's nothing logged before the docker image got corrupted; another thing you should do is run memtest.