
JorgeB

Moderators

Posts: 61,692 · Days Won: 650
Everything posted by JorgeB

  1. FYI it's this issue: https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/
  2. You might still be able to recover at least some data; it will depend on the amount of damage done to parity and disk3. You can try to recover data from both the old disk3 and the emulated disk3 and see if one is better than the other. When you are done with the backups, first restore the array:
     - Tools -> New Config -> Retain current configuration: All -> Apply
     - Check all assignments and re-assign parity to the correct slot
     - IMPORTANT - check both "parity is already valid" and "maintenance mode", then start the array (note that the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, as it doesn't account for the checkbox, but parity won't be overwritten as long as it's checked)
     - Stop the array
     - Unassign disk3
     - Start the array (in normal mode now) and post the diagnostics.
  3. Just boot in CSM/legacy mode; if for some reason that's not possible, create a flash drive with PassMark memtest and boot from that one instead, as it supports UEFI boot only.
  4. No, that's not what I'm saying. The spin down command is sent once every time it reaches the set spin down time (Settings -> Disk Settings), but some disks don't recover full performance after that until being spun up by a spin up command or normal user access. I don't remember if it's possible, but if the parity copy is still going and you can click "spin up all", do it, wait 5 minutes, and post new diags.
  5. Yes, and how much it will affect you depends on what was there; for example, if you had docker and VMs on cache you'll lose those, and obviously any other data there will also be lost.
  6. Settings depend on where you want the data in those shares to end up after the mover runs: cache=prefer will stay on cache, cache=yes will be moved to the array (as long as the files are not in use, don't already exist on the array, etc.); the mover log will show the problem.
  7. Dual parity can't help with more than 2 missing disks; since you have 4, there's nothing you can do with Unraid except keep the data from the remaining data disks, if there are any.
  8. All non parity disks that are still working.
  9. You can run it on the other shares to make sure everything is correct, but don't do it on the appdata share.
  10. You are getting nginx out of memory errors; that could be caused by having multiple browser windows open on the webGUI, especially from Android devices like Squid mentioned, but it could also be a plugin issue.
  11. Are you using different subnets like mentioned? See the video above.
  12. Most likely this, look for a BIOS update for the server, if possible also test the GPU in a different PC just to make sure it's working.
  13. Still crashing, try with v6.10.3 to see if it's kernel related.
  14. Start by running the new permission tool on that share only.
  15. Try booting in safe mode, also make sure you don't leave any stale browser windows opened on the webGUI.
  16. Yep. Unraid is not RAID, you can have one drive as parity and the remaining as data drives.
  17. Link is currently on eth1, change the cable to eth0, or swap the NIC in Settings -> Network Settings -> Interface rules (Reboot required)
  18. If you swapped cables and the problem stayed with the same disk, it could be the disk.
  19. Try switching to ipvlan (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right))
  20. Device does appear to be failing, you can try cloning it with ddrescue.
  21. Not a good sign but you can also run an extended SMART test to see if the device is still OK for now.
  22. One thing you can try is to boot the server in safe mode with all docker/VMs disabled, let it run as a basic NAS for a few days, if it still crashes it's likely a hardware problem, if it doesn't start turning on the other services one by one.
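For the mover troubleshooting in post 6, the relevant entries end up in the syslog when mover logging is enabled. A minimal sketch of pulling them out, assuming a standard Unraid syslog location (the share name "Media" is just a placeholder):

```shell
# With mover logging enabled, mover activity is written to the syslog.
# Filter the recent mover lines to see which files were moved or skipped.
grep -i "move" /var/log/syslog | tail -n 50

# Optionally narrow it to one share (share name is a placeholder):
grep -i "move" /var/log/syslog | grep "/mnt/cache/Media" | tail -n 50
```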
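The ddrescue cloning suggested in post 20 can be sketched as follows. This is a hypothetical example, not from the original posts: /dev/sdX (failing source) and /dev/sdY (destination) are placeholder device names, and rescue.map is an arbitrary map-file name. Double-check device names with lsblk first, since writing to the wrong disk destroys its data, and the destination must be at least as large as the source.

```shell
# First pass: copy the readable areas quickly, skipping bad sectors (-n),
# and record progress in a map file so the run can be resumed.
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Second pass: go back and retry the bad areas up to 3 times (-r3),
# using the same map file to pick up where the first pass left off.
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
```

The map file is what makes the two-pass approach safe: ddrescue never re-reads areas already recovered, so repeated runs only hammer the remaining bad spots.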
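The extended SMART test mentioned in post 21 can be started from the webGUI (disk page) or from the command line with smartctl; a minimal sketch, with /dev/sdX as a placeholder for the suspect drive:

```shell
# Start the extended (long) self-test; it runs inside the drive firmware
# and the drive stays usable while it runs.
smartctl -t long /dev/sdX

# Show drive capabilities, including the estimated test duration.
smartctl -c /dev/sdX

# After the estimated time has passed, check the self-test log and
# overall attributes; look for "Completed without error".
smartctl -a /dev/sdX
```

Note the test will abort if the drive is spun down, so keep it spun up until the test completes.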