Everything posted by JorgeB

  1. You can try this, but it will only work if parity is still valid:
     - Tools -> New Config -> Retain current configuration: All -> Apply
     - Check all assignments and assign any missing disk(s) if needed, including the new disk you want to rebuild; the replacement disk should be the same size or larger than the old one
     - IMPORTANT - check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, as it doesn't account for the checkbox, and nothing will be overwritten as long as it's checked)
     - Stop the array
     - Unassign disk6
     - Start the array (in normal mode now); ideally the emulated disk will now mount and its contents will look correct, but if it doesn't you should run a filesystem check on the emulated disk
     - If the emulated disk mounts and the contents look correct, stop the array
     - Re-assign disk6 and start the array to begin the rebuild.
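The filesystem check mentioned above can be run from the console while the array is in maintenance mode. A minimal sketch for an XFS disk, assuming the emulated disk6 appears as /dev/md6 (the device name varies by slot and Unraid release, so adjust it to your system):

```
# Dry run first: report problems without changing anything
xfs_repair -n /dev/md6

# If problems are reported, run the actual repair
xfs_repair /dev/md6

# Only if xfs_repair refuses to run and asks for log zeroing,
# and as a last resort (it can lose recent metadata changes):
# xfs_repair -L /dev/md6
```

Always start with the -n dry run so you can see what would be changed before committing to a repair.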
  2. If disk2 is the unassigned 4TB Seagate, you should be able to mount it outside the array; if it looks fine, you can do a new config with it. Note that disk3 will likely have some corruption due to the read errors during the rebuild.
  3. https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/ Some workarounds are discussed there; mostly, disable NFS if it's not needed, or change everything to SMB.
  4. User share performance with small files has noticeably decreased with each new release. I posted a comparison about that somewhere but can't find it right now. Since you've gone from v6.3 directly to v6.9, I would expect it to be more noticeable.
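A quick way to see the user-share (shfs/FUSE) overhead yourself is to time creating many small files through /mnt/user versus directly on a disk share. This is only a rough sketch; the paths in the comments are examples, and the helper name `bench` is mine:

```shell
# Time creating N empty files in a directory, printing elapsed milliseconds.
# Requires GNU date (for %N nanoseconds), which Unraid ships.
bench() {
  local dir=$1 n=${2:-1000}
  mkdir -p "$dir"
  local start=$(date +%s%N)
  for i in $(seq 1 "$n"); do : > "$dir/f$i"; done
  local end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Example comparison (paths are illustrative, use a scratch share):
# bench /mnt/user/test 1000    # goes through shfs (FUSE)
# bench /mnt/disk1/test 1000   # bypasses shfs, direct to the disk
```

Expect the /mnt/user run to be noticeably slower; the gap is the FUSE overhead the post is talking about.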
  5. Looks more like a cable/connection problem; Samsung SSDs are notoriously picky about SATA cable quality. Try replacing or swapping the cable.
  6. It doesn't; it has to do with the RAM cache for writes. Those are the defaults.
  7. The diags you posted are from after rebooting, so we can't see exactly what happened, but the rebuild is not doing fine. You only have one parity drive, so if there were errors on another disk during the rebuild, the rebuilt disk will be corrupt; by your description it looks like disk2 dropped offline, so there would be a lot of corruption.
  8. Start by upgrading to the latest release, and also check this if you haven't yet.
  9. No one has complained of the same before, and I just tried on a new VM using the newest VirtIO driver ISO and it still works, so you must be doing something wrong: double-check the procedure.
  10. It does for me using the virtio-scsi driver, so try that one; the driver is inside the amd64 folder.
  11. Sorry if this was already suggested, I don't remember, but there have been some reports where it helps on servers with lots of RAM: install the Tips and Tweaks plugin and set "vm.dirty_background_ratio" to 1 and "vm.dirty_ratio" to 2, then test to see if it makes things any better.
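For reference, the two plugin settings above correspond to these kernel sysctl knobs; a minimal fragment in sysctl.conf style (the Tips and Tweaks plugin sets the same values through its GUI and reapplies them at boot):

```
# Start background writeback when dirty pages reach 1% of RAM
vm.dirty_background_ratio = 1
# Block writers and force writeback at 2% of RAM
vm.dirty_ratio = 2
```

Lowering these limits how much write data can pile up in RAM, which is why it mainly helps on machines with a lot of memory.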
  12. This was caused by the quite common controller issue with some Ryzen boards. Look for a BIOS update, which has been reported to help; if not, consider getting an add-on controller.
  13. Not for disks, unless with many of them on some 12Gb HBAs using expanders.
  14. For writes yes, for reads they would still be SSD fast.
  15. Nov 7 10:54:15 Home-Server kernel: macvlan_broadcast+0x116/0x144 [macvlan]
      Nov 7 10:54:15 Home-Server kernel: macvlan_process_broadcast+0xc7/0x10b [macvlan]
      Macvlan call traces are usually the result of having dockers with a custom IP address. Upgrading to v6.10 and switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
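If you are unsure whether your syslog contains these traces, a quick grep will tell you. A small self-contained sketch (the /tmp path and sample file are just for illustration; on a real server you would grep /var/log/syslog or a saved syslog-server file instead):

```shell
# Write a small sample mirroring the lines quoted above
cat > /tmp/syslog.sample <<'EOF'
Nov  7 10:54:15 Home-Server kernel: macvlan_broadcast+0x116/0x144 [macvlan]
Nov  7 10:54:15 Home-Server kernel: macvlan_process_broadcast+0xc7/0x10b [macvlan]
EOF

# Count macvlan-related lines; any hits suggest the known macvlan issue
grep -c 'macvlan' /tmp/syslog.sample
# prints 2
```

On a live system the equivalent would be something like `grep -c macvlan /var/log/syslog`; a non-zero count is a hint to switch the Docker custom network type to ipvlan.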
  16. Your previous post about the same thing was moved to the lounge, since it's the more appropriate place for that; please continue the discussion there:
  17. Can't open the diags; you should also enable the syslog server and post that after a crash, together with new diags.
  18. Unraid is expected to support ZFS as of the next release (v6.11); for now you would get an unmountable disk if you tried to use ZFS in the array.
  19. If it's a hardware problem, there's not much you can do other than changing the hardware. One thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days; if it still crashes, it's more likely a hardware problem, and if it doesn't, start turning the other services back on one by one.
  20. There are a few port-multiplier-related errors, but nothing crash-relevant is logged, which usually indicates a hardware problem.
  21. That's weird; if it's not filesystem corruption, I have no idea.