JorgeB · Moderators · 67,600 posts · 707 days won
Everything posted by JorgeB

  1. Note that you can still mount those disks inside Unraid using the UD plugin if you prefer to do a local transfer to the array.
  2. I still don't get what the issue is here. Are these vdisks? I have no issues adding multiple vdisks.
  3. Just need to check "parity is already valid" after the new config and before array start.
  4. Yes, Unraid requires a specific partition layout so you'll need to re-format any disk to be used in the array or pool.
  5. There were read errors on disk2, but it doesn't look like a disk problem; replace the cables and run reiserfsck again. P.S. In case you're not aware, parity is disabled, so the array is unprotected.
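A minimal sketch of the read-only check step above. The device path /dev/md2 (disk2's md device) is an assumption; confirm the device in the webGUI and run the command from the console with the array started in maintenance mode. The command is printed here rather than executed, so nothing touches the disk by accident:

```shell
# Hedged sketch: read-only reiserfsck pass on disk2 after swapping cables.
# /dev/md2 is an assumption -- verify disk2's md device in the webGUI first.
DEV=/dev/md2
cmd="reiserfsck --check $DEV"
# --check only reports problems; review its output before any repair run.
echo "$cmd"
```

Only move on to an actual repair option if the check report calls for it.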
  6. To log out click here: I don't understand what you mean.
  7. This is difficult to diagnose with the diags, stop all your dockers/VMs and then enable one by one and let it run for a day or so to see if you find the culprit.
  8. Please post the diagnostics: Tools -> Diagnostics
  9. Yes, it was a problem with the LSI, it kept resetting:
     Apr 17 13:38:11 NAS-NG kernel: mpt2sas_cm0: fault_state(0x2622)!
     Apr 17 13:38:11 NAS-NG kernel: mpt2sas_cm0: sending diag reset !!
     Apr 17 13:38:12 NAS-NG kernel: mpt2sas_cm0: diag reset: SUCCESS
     But it's strange that booting UEFI would cause this; I have no issues with that, so it might not be the reason, unless it's some BIOS bug.
  10. Both checks were correcting and both found errors, and only on parity1, which is kind of strange. What was done between the two checks?
  11. Reboot to clear the log, create a new share with a name that never existed before, and post new diags if it fails.
  12. This should be in the general support forum. Try recreating the flash drive using the same device; any more questions, please continue below:
  13. TBW is partly the expected life but mostly the limit for the device to remain within warranty; it doesn't mean the SSD is going to fail when you reach it. For example, the cache device for one of my servers has a TBW rating of 500TB, is currently at 847TB, and is still going strong. It's still a good idea to monitor that; on NVMe devices you just need to monitor the estimated life used percentage.
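As a sketch of monitoring that value, the snippet below parses the "Percentage Used" field that smartctl reports for NVMe devices. The sample line is a stand-in mimicking `smartctl -a` output; on a live system you would pipe the real command instead (a device name like /dev/nvme0 would be an assumption to verify):

```shell
# Parse the NVMe "Percentage Used" (estimated life used) field.
# The sample line stands in for real `smartctl -a /dev/nvmeX` output.
line="Percentage Used:                    12%"
used=$(printf '%s\n' "$line" | awk -F': *' '/Percentage Used/ {gsub(/%/, "", $2); print $2}')
echo "Estimated SSD life used: ${used}%"
```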
  14. Apr 17 10:10:35 Tower root: mkdir: cannot create directory '/mnt/user/tmimacnew': No space left on device
      Was it this share?
  15. Some are OEM, but I also have IBM and HP; I don't believe I have any Dell.
  16. If the emulated disk is fixed after running xfs_repair and the contents look correct, and only if that's true, you can rebuild on top by doing the following:
      - stop array
      - unassign disk1
      - start array
      - stop array
      - re-assign disk1
      - start array to begin rebuild
  17. Check filesystem on disk1: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui Remove the -n flag or nothing will be done, and if it asks for it, use -L.
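A hypothetical dry-run sketch of that flag sequence, for the console equivalent of the webGUI check described in the linked wiki page. The device /dev/md1 (disk1's md device) is an assumption, and the array must be started in maintenance mode before any real run; the wrapper prints each command instead of executing it until DRY_RUN is disabled:

```shell
# Dry-run wrapper: prints commands while DRY_RUN=1, runs them when DRY_RUN=0.
DRY_RUN=1   # flip to 0 only once the device and mode are confirmed
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run xfs_repair -n /dev/md1   # -n: check only, reports problems but changes nothing
run xfs_repair /dev/md1      # actual repair: note the -n flag is removed
run xfs_repair -L /dev/md1   # only if the previous run asks to zero the log
```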
  18. Please post the diagnostics: Tools -> Diagnostics
  19. It will be formatted with the default filesystem set: Settings -> Disk Settings
  20. That suggests the disk is failing, post new diags after running that (and before rebooting).
  21. It is, only the 32GB model, 16GB and smaller were discontinued.
  22. Also note that I misread the previous diags when I said Unraid was forcing the shutdown before the set timer; it wasn't, hence why changing the timeout from 100 to 150 solved the issue, it just needed more time.