Everything posted by JorgeB

  1. Update manually: download the zip and extract the bz* files to the flash drive, overwriting the existing ones.
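     A minimal sketch of the copy step, assuming the zip was extracted to /tmp/unraid and the flash drive is mounted at /boot (the extraction path is just an example):
        cp /tmp/unraid/bz* /boot/   # overwrite the existing bz* files on the flash drive
        sync                        # make sure the writes reach the flash before rebooting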
  2. Disk is not showing a valid SMART report, please reboot and post new diags.
  3. Are you sure? I'm pretty sure I remember seeing the device being wiped, but it won't hurt to try. If it doesn't mount, follow the instructions above; just make sure you don't do anything else to that device (just trying to mount it is not a problem).
  4. I believe the main issue is if you want to use a Mellanox NIC as eth0, or if you add a Mellanox NIC while running v6.10.x; if it is working with v6.9.x and it's not set as eth0, it should keep working after updating to v6.10.2. Having said that, I suspect v6.10.3 stable is going to be released very soon, so it's probably best to just wait, or if you want to update now, go to v6.10.3-rc1, which should be basically the same as the v6.10.3 final.
  5. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  6. It's reporting the controller link speed; PCIe 1.0 x4 is correct for a SASLP.
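     If you want to confirm the link speed yourself, a hedged example using lspci (the bus address is a placeholder, find the real one with the first command):
        lspci | grep -i sas                              # locate the controller's bus address
        lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'   # PCIe 1.0 x4 shows as Speed 2.5GT/s, Width x4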
  7. Since you only have the NVMe assigned it's quite easy: back up the current flash, create a new Unraid install, restore only the key, then re-assign the NVMe as disk1 as it is now. Don't touch the zfs pool for now or install anything zfs related, and see how the VM service behaves after that.
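     For the flash backup step, a minimal sketch assuming you just want a dated copy on an existing share (the destination path is only an example):
        mkdir -p /mnt/user/backups/flash-$(date +%F)
        cp -r /boot/* /mnt/user/backups/flash-$(date +%F)/   # includes the key file in /boot/config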
  8. If there's no old parity2, and assuming the old disk2 is also dead, your best bet is to try to copy everything you can from the emulated disk2; this way you'll know which data fails to copy. You could also clone the old parity with ddrescue and then rebuild, but that way there's no way of knowing the affected files on the rebuilt disk, unless you have pre-existing checksums for all files.
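     For the cloning route, a hedged ddrescue example (sdX/sdY and the map file path are placeholders, double-check which device is the source and which is the destination before running):
        ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map   # -f is needed to write to a block device; the map file lets you resume the run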
  9. SMART looks 100% healthy, and the errors in the syslog from when it was disabled don't indicate a media error.
  10. Don't worry about it, the number of writes is basically meaningless; you can for example run a parity check with identical disks and some will end up with many more reads than others, like double or more. The actual disk looks fine, this is likely a power/connection problem.
  11. In that case I recommend updating to v6.10.3-rc1; multiple NIC discovery doesn't always work with v6.10.2.
  12. To disable that, add 'setterm -blank 0' to /boot/config/go.
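     A sketch of what the file would look like afterwards, assuming an otherwise default go file:
        #!/bin/bash
        # disable console screen blanking
        setterm -blank 0
        # Start the Management Utility
        /usr/local/sbin/emhttp &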
  13. Both are showing some issues on SMART; I suggest running an extended SMART test on both.
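     The extended tests can also be started from the console, for example (replace sdX with each of the two disks; the test runs in the background and can take several hours):
        smartctl -t long /dev/sdX       # start the extended self-test
        smartctl -l selftest /dev/sdX   # check progress and the result later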
  14. It's needed if you want, for example, to use the myservers plugin for remote access; it's not needed for local access on a secure LAN.
  15. That's the problem with these cheap SATA cards, sometimes you get a good one, other times not.
  16. You can use for example btrfs (or zfs in the near future) or have checksums for all files, though checksums are not as practical for this since they won't warn of corruption on read; you'd need to manually scan the files after transferring them. As for ECC, IMHO anyone who cares about data integrity should definitely use it.
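     A minimal sketch of the manual checksum approach using sha256sum (the share paths are just examples):
        cd /mnt/user/photos && find . -type f -exec sha256sum {} + > /tmp/photos.sha256   # on the source, before the transfer
        cd /mnt/backup/photos && sha256sum -c /tmp/photos.sha256                          # on the destination, after the transfer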
  17. I believe it's planned, but don't know an ETA.
  18. Hmm, I will need to investigate, possibly it's not working again. For now do this:
     - unassign all cache devices
     - start array
     - stop array
     - in the console type: btrfs-select-super -s 1 /dev/sdc1 (if you rebooted since the diags, check that the old SSD is still sdc)
     - if the command completes without an error, re-assign the old cache device and start the array
     - cache should now mount
     To replace it, for now you can add the new device as cache2; once the balance finishes, stop the array, unassign cache1 and start the array again to remove it from the pool.
  19. I don't have SAS devices, but I'm pretty sure there won't be a SMART status thumbs up/down. I do know that GUI SMART tests don't work for SAS devices, but you can still run them manually.
  20. I don't know which method he mentions in that video, but you can do a direct replacement for a single device "pool" as explained in the link below; note that you need to be on v6.10.x, as this was broken on v6.9.x: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480419