Everything posted by JorgeB

  1. That was true once, but not for some time, it should be updated.
  2. Device is dropping offline:

     Jun 24 16:13:40 DMS kernel: ata6.00: status: { DRDY }
     Jun 24 16:13:40 DMS kernel: ata6: hard resetting link
     Jun 24 16:13:41 DMS kernel: ata6: SATA link down (SStatus 0 SControl 320)
     Jun 24 16:13:46 DMS kernel: ata6: hard resetting link
     Jun 24 16:13:47 DMS kernel: ata6: SATA link down (SStatus 0 SControl 320)
     Jun 24 16:13:52 DMS kernel: ata6: hard resetting link
     Jun 24 16:13:52 DMS kernel: ata6: SATA link down (SStatus 0 SControl 320)
     Jun 24 16:13:52 DMS kernel: ata6.00: disabled

     This is usually a connection problem; replace both the power and SATA cables.
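A quick way to see how often, and on which ata port, links are dropping is to grep the syslog for these messages. A sketch, assuming Unraid's standard live log path (the SYSLOG variable is mine, for pointing at a saved copy instead):

```shell
# Sketch: count link-reset events per ata port. /var/log/syslog is Unraid's
# live log; set SYSLOG to a saved copy to analyze old logs instead.
SYSLOG="${SYSLOG:-/var/log/syslog}"
if [ -r "$SYSLOG" ]; then
  grep -oE 'ata[0-9.]+: (hard resetting link|SATA link down|disabled)' "$SYSLOG" \
    | sort | uniq -c | sort -rn
fi
```

A port that shows up with many resets while the others show none points at that drive's cabling rather than the controller.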
  3. Yes. Check "parity is already valid" before starting the array, next to the start array button.
  4. It's a problem with the onboard SATA controller:

     Jun 24 23:08:02 QuantumBreak kernel: ahci 0000:02:00.1: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x44c2b16d89f81e00 flags=0x0010]
     Jun 24 23:08:02 QuantumBreak kernel: ahci 0000:02:00.1: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x44c2b16d89f81e80 flags=0x0010]

     This is quite common with Ryzen servers and there are reports that upgrading to v6.9-beta helps because of the newer kernel.
  5. Because it's read-only it can't be easily moved with the mover, but you should still be able to use your favorite utility to copy it, e.g., see here for a way to do it.
  6. Order doesn't matter, but all pool members must be present and assigned.
  7. Iperf performance is usually directly linked to the hardware used. If you can, try a Mellanox ConnectX-2/3 or an Intel NIC; those are known to work well and usually give close to line speed. I use Mellanox NICs on various Unraid servers and get around 9Gbits.
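For reference, a basic throughput test with iperf3 looks like the sketch below; SERVER_IP is a placeholder, and the -P 4 choice (four parallel streams, which often helps saturate 10GbE) is my assumption, not from the original post:

```shell
# Sketch of a basic iperf3 throughput test; SERVER_IP is a placeholder.
# On the server:
#   iperf3 -s
# On the client; -P 4 runs four parallel streams:
#   iperf3 -c SERVER_IP -P 4
# The summary lines at the end of the client output carry the result; the
# same filter applied to a sample summary line:
printf '%s\n' '[SUM]   0.00-10.00  sec  10.9 GBytes  9.36 Gbits/sec    receiver' \
  | grep -E '(sender|receiver)$'
```

The "receiver" line is the one to look at: it is what actually arrived at the other end.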
  8. That is usually a bad SATA cable and likely the reason the disk dropped offline earlier, but as long as it doesn't increase anymore it's fine, and it shouldn't if you replaced the cables as suggested.
  9. If they were unassigned after changing the controller, the device identifications might have changed. If that's the only change you can re-assign them, but only after starting the array without them so Unraid can forget the old pool config. Then, before re-starting the array, re-assign all the pool members; there can't be any "data on device will be deleted at array start" warning. Note, however, that some RAID controllers make changes to the MBR and/or don't use the full partition size, in which case the pool might be unmountable.
  10. 1.7GB is very little RAM to run Unraid v6, even as a basic NAS; it should be fine with 4GB.
  11. Best bet is to back up the cache, re-format the pool, and restore the data.
  12. Last call trace is related to the Mellanox NIC, see if you don't get them by temporarily running without it.
  13. With single parity, data drive order is not important and parity will remain valid. It's not the same with dual parity: parity1 would still be valid, but not parity2.
  14. Yep. No problem restoring that but those are just old preclear reports.
  15. This is likely a general support issue, please start a thread in the general support forum and don't forget to include the diagnostics: Tools -> Diagnostics
  16. Backup current flash, re-do it manually or using the USB utility and then restore the config folder from the backup.
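The config-folder part of this can be sketched as below; /boot is where Unraid mounts the flash drive on a running server, and the BACKUP path is an example of mine, not from the original post:

```shell
# Sketch: back up the config folder from the flash drive before re-doing it.
# /boot is Unraid's flash mount point; BACKUP is an example path.
BACKUP="${BACKUP:-/root/flash_config_backup}"
if [ -d /boot/config ]; then
  cp -a /boot/config "$BACKUP"    # keeps array assignments, key file, shares, etc.
fi
# After re-creating the USB drive (manually or with the USB Creator):
#   cp -a "$BACKUP"/. /boot/config/
```

Keeping the backup off the flash drive itself (e.g. on another machine) is the safer choice, since the stick is the thing being re-done.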
  17. You can enable this and post the syslog after a crash, but if it's a hardware problem most likely there won't be anything logged before crashing.
  18. Best bet is to use the dedicated docker support thread:
  19. GUI SMART tests/results don't currently work for SAS devices; you can still run the tests manually and check the SMART report for the result, e.g.:

      SMART Self-test log
      Num  Test              Status     segment  LifeTime  LBA_first_err  [SK ASC ASQ]
           Description                  number   (hours)
      # 1  Background short  Completed  -        23297     -              [-  -  -]

      This one completed successfully without errors, though for a full drive test you want to run the long test (or do a parity check, which accomplishes basically the same thing for all drives at the same time).
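Running the test manually can be sketched as follows; smartctl comes from smartmontools, /dev/sdX is a placeholder for the SAS device, and the commands need root:

```shell
# Sketch; /dev/sdX is a placeholder for the SAS device, run as root.
# 1) Start the long (extended) self-test; it runs inside the drive and can
#    take hours on large disks:
#      smartctl -t long /dev/sdX
# 2) When it's done, read the result from the self-test log:
#      smartctl -l selftest /dev/sdX
# The check in step 2, applied to a sample of the log format quoted above:
printf '%s\n' 'SMART Self-test log' '# 1  Background long  Completed  -  23297  - [-  -  -]' \
  | grep -A3 'Self-test log'
```

A "Completed" status with no sense-key codes means the test passed.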
  20. This suggests a flash drive problem, try re-doing it or using a different one.
  21. Disk5 looks healthy. Since disk1 suffered some corruption during the rebuild when disk5 dropped offline, the easiest way forward would be:

      - Re-connect old disk1 (you can disconnect the new one if needed).
      - I would recommend replacing the cables on disk5, just to rule them out if there are any more issues with it.
      - Tools -> New Config -> Retain current configuration: All -> Apply
      - Check all assignments and assign any missing disk(s), including old disk1, so that you have the array as it was before the disk upgrade.
      - Start the array to begin the parity sync.
      - All disks should mount correctly; if they don't, post new diags (never format).

      When the sync is done you can try the upgrade again.
  22. If a different cable/port doesn't help it's likely a disk issue.
  23. You might have a rogue docker or other process, start by running in safe mode with all dockers/VMs disabled, then if stable start enabling one thing at a time.
  24. IIRC it should work with v6.8.2, but the long test still needs to be run manually because it's a SAS device.