Everything posted by JorgeB

  1. Yes, but you should still keep backups of anything important; any type of RAID adds redundancy, but it's not a backup.
  2. Yes, those have been known to cause some issues in the past, and the driver is likely not as stable and widely used as the LSI driver.
  3. You can thank @NDDan, he had the same problem and remembered changing that file; I would never have thought of looking at the go file for that. Also keep in mind that if something storage-related stops working after a future Unraid update, you might need to do the same.
  4. After the parity copy completes you must start the array, without changing anything else, to begin the rebuild, or you will need to start over.
  5. Copy 60-persistent-storage.rules from the newer release and modify that file again; that should work (see the sketch after this list). The problem is likely because you are using a modified file from an older release.
  6. Firefox is known to sometimes cause this stale GUI issue, using another browser after it happens won't work, you need to reboot first.
  7. First set the SATA controller to AHCI, it's set to IDE, then try again.
  8. Did you start the array to begin the rebuild after the copy finished?
  9. Lots of errors with the controller, though I can't say for sure the controller is the problem; do you have an LSI HBA you could try?
  10. Check if the docker image or appdata are using it (see the sketch after this list); those are best moved to a cache device, if one exists.
  11. Server load is extremely high; reboot, then monitor it to see if you can find the service causing that if it happens again. It's also a good idea to disable the mover log so it won't spam the syslog.
  12. It's logged as a disk problem, run an extended SMART test (see the sketch after this list).
  13. Try booting with a new flash drive, with a new stock Unraid install, no key needed, just to see if it boots.
  14. Try using ddrescue on it (see the sketch after this list).
  15. This suggests a problem with the NVMe device:
      Oct 23 11:02:46 BTV kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
      And the SMART report confirms it:
      === START OF SMART DATA SECTION ===
      SMART overall-health self-assessment test result: FAILED!
      - available spare has fallen below threshold
      - media has been placed in read only mode
      You need to replace it.
  16. Do a new config without them and sync parity.
  17. Yes, this is normal; you should schedule the trim inside the VM OS, or run it manually when needed (see the sketch after this list).
  18. Yes:
      Oct 9 05:00:31 Helios kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start e22d2585)
      Oct 9 05:00:31 Helios kernel: FAT-fs (sda1): Filesystem has been set read-only
      Oct 9 05:00:31 Helios kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start e22d2585)
      Oct 9 05:00:31 Helios kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start c1dc2174)
      ### [PREVIOUS LINE REPEATED 1 TIMES] ###
      Oct 9 05:00:31 Helios kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start bbb5ad69)
      Run chkdsk on the flash drive (see the sketch after this list).
  19. Great news, good that you remembered about changing the udev rules; I would never have thought of checking the go file for that.
  20. @Traumfaenger please try again after removing or commenting out these lines from your go file:
      # Copy and apply udev rules for white label drives
      cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
      chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
      udevadm control --reload-rules
      udevadm trigger --attr-match=subsystem=block
      # Copy and apply udev rules for white label drives
      cp /boot/config/rules.d/60-whitelabel.rules /etc/udev/rules.d/
      chmod 644 /etc/udev/rules.d/60-whitelabel.rules
      udevadm control --reload-rules
      udevadm trigger --attr-match=subsystem=block
  21. Try commenting out these lines, i.e. add a # to the beginning of each one (see the sketch after this list).
  22. Please don't do multiple posts about the same thing:
  23. Vdisks can grow over time if not trimmed/unmapped, see here; it's for a Windows VM but the same principle applies. There are also reports that defragmenting the filesystem helps, but don't do that if you use snapshots. Another option is to move the vdisk elsewhere and then copy it back with cp --sparse=always (see the sketch after this list).
  24. You are having the same issue described here; unfortunately it's not yet clear what causes it, but you can read that thread for some ideas, and this way everyone affected can discuss it in the same place.
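
Command sketches for some of the posts above follow; all paths, device names and file names in them are illustrative placeholders, not taken from the original posts.

For post 5, a minimal sketch of refreshing the custom rules file, assuming the stock copy on the newer release lives under /lib/udev/rules.d/ (the exact path may differ) and that the modified copy is staged on the flash drive as in the go file shown in post 20:
      # grab the stock rules file from the newer release as a starting point
      cp /lib/udev/rules.d/60-persistent-storage.rules /boot/config/rules.d/60-persistent-storage.rules
      # re-apply your custom edits to /boot/config/rules.d/60-persistent-storage.rules,
      # then let the go file copy it into /etc/udev/rules.d/ at boot as before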
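For post 10, a quick console check of whether the docker image or appdata sit on array disks, assuming the default system and appdata share names (adjust if yours differ):
      # /mnt/user0 is the user share view excluding the cache, so a hit here means the file is on the array
      ls -lh /mnt/user0/system/docker/docker.img 2>/dev/null
      # show how much appdata each array disk is holding
      du -sh /mnt/disk*/appdata 2>/dev/null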
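For post 12, the extended test can be started from the disk's GUI page or from the console; /dev/sdX is a placeholder for the disk in question:
      smartctl -t long /dev/sdX   # start the extended (long) self-test
      smartctl -a /dev/sdX        # check progress, and the result once it finishes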
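For post 14, a sketch of cloning the failing disk to a same-size or larger replacement with ddrescue; device names and the mapfile location are placeholders:
      # first pass: copy everything it can, keeping a mapfile so the run can be resumed
      ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map
      # optional second pass: retry the bad areas a few more times
      ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map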
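For post 17, a manual trim from inside a Linux guest (Windows guests would use the built-in drive optimizer instead); note the trim only reaches the host if the vdisk is attached with discard/unmap enabled:
      fstrim -av   # trim all mounted filesystems that support it and report how much was trimmed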
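For post 18, the check would be run from a Windows machine with the flash drive plugged in; X: is a placeholder for the drive letter it gets:
      chkdsk X: /f   # check the FAT filesystem and fix any errors found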
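For post 21, this is what the go file section from post 20 looks like with the command lines commented out:
      # Copy and apply udev rules for white label drives
      #cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
      #chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
      #udevadm control --reload-rules
      #udevadm trigger --attr-match=subsystem=block
      # Copy and apply udev rules for white label drives
      #cp /boot/config/rules.d/60-whitelabel.rules /etc/udev/rules.d/
      #chmod 644 /etc/udev/rules.d/60-whitelabel.rules
      #udevadm control --reload-rules
      #udevadm trigger --attr-match=subsystem=block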
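For post 23, a sketch of re-sparsifying a vdisk with the VM shut down; the disk and share paths are placeholders:
      # set the original aside, then copy it back with holes punched in the zeroed regions
      mv /mnt/disk1/domains/MyVM/vdisk1.img /mnt/disk1/domains/MyVM/vdisk1.img.orig
      cp --sparse=always /mnt/disk1/domains/MyVM/vdisk1.img.orig /mnt/disk1/domains/MyVM/vdisk1.img
      # after confirming the VM still boots, delete vdisk1.img.orig to reclaim the space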