
JorgeB

Moderators
  • Posts: 67,852
  • Joined
  • Last visited
  • Days Won: 708

Everything posted by JorgeB

  1. A pool with one or more SSDs, or just a single fast NVMe device. You are virtualizing Unraid; depending on how you are doing it, there might be a performance impact, or there is a controller bottleneck. Post the output of: lspci -d 1000: -vv
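For reference, the thing to look for in that output is the link status line; a sketch of spotting a downgraded PCIe link (the sample line below is illustrative, not from this system):

```shell
# Sample 'LnkSta' line as printed by lspci -vv (illustrative text only)
sample='LnkSta: Speed 2.5GT/s (downgraded), Width x4 (downgraded)'

# "(downgraded)" on Speed or Width means the controller is not running
# at its full link rate, i.e. a likely bottleneck
echo "$sample" | grep -o '(downgraded)' | wc -l   # 2: both Speed and Width are degraded
```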
  2. Pretty sure it doesn't; it only fixes the shares under /mnt/user, not /mnt/user itself.
  3. You need to rebuild it. First start the array with the missing disk to see if the emulated disk is mounting and the contents look correct. P.S. I'm seeing some issues with disk1 in the last diags; looks like a power/connection issue.
  4. Forgot to mention: you must run it without -n or nothing will be done.
  5. Strange, there's clearly more data on disk1; run a filesystem check on disk1 to see if anything changes.
  6. Go to Shares, click on "compute all" and please post a screenshot of the results.
  7. I'm starting to think the same; worth checking when it happens. The big clue here was that the flash drive share was still accessible, and that's obviously not under /mnt/user, so it reminded me of a previous case where the permissions were incorrect, though that was with v6.9.
  8. Not sure how the usage could be off by that much; df is reporting the same. What kind of files do you have there? Media files or more compressible ones?
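Sparse files are one way usage can look off by a lot; du shows the difference between its apparent-size and allocated views (scratch file, illustrative only):

```shell
# A sparse file: 100 MiB apparent size, but almost no blocks allocated
truncate -s 100M sparse.bin

du -h --apparent-size sparse.bin   # ~100M logical size
du -h sparse.bin                   # ~0 actually allocated on disk
```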
  9. Possibly, though detecting it sometimes and not other times is strange.
  10. NIC problems:

      Sep 22 02:16:58 Tower kernel: e1000 0000:03:01.0 eth0: Detected Tx Unit Hang
      Sep 22 02:16:58 Tower kernel: Tx Queue <0>
      Sep 22 02:16:58 Tower kernel: TDH <1000001>
      Sep 22 02:16:58 Tower kernel: TDT <1000001>
      Sep 22 02:16:58 Tower kernel: next_to_use <1>
      Sep 22 02:16:58 Tower kernel: next_to_clean <0>
      Sep 22 02:16:58 Tower kernel: buffer_info[next_to_clean]
      Sep 22 02:16:58 Tower kernel: time_stamp <fffbd99a>
      Sep 22 02:16:58 Tower kernel: next_to_watch <0>
      Sep 22 02:16:58 Tower kernel: jiffies <fffbe940>
      Sep 22 02:16:58 Tower kernel: next_to_watch.status <0>
      Sep 22 02:17:00 Tower kernel: e1000 0000:03:01.0 eth0: Detected Tx Unit Hang
      Sep 22 02:17:00 Tower kernel: Tx Queue <0>
      Sep 22 02:17:00 Tower kernel: TDH <1000001>
      Sep 22 02:17:00 Tower kernel: TDT <1000001>
      Sep 22 02:17:00 Tower kernel: next_to_use <1>
      Sep 22 02:17:00 Tower kernel: next_to_clean <0>
      Sep 22 02:17:00 Tower kernel: buffer_info[next_to_clean]
      Sep 22 02:17:00 Tower kernel: time_stamp <fffbd99a>
      Sep 22 02:17:00 Tower kernel: next_to_watch <0>
      Sep 22 02:17:00 Tower kernel: jiffies <fffbf140>
      Sep 22 02:17:00 Tower kernel: next_to_watch.status <0>
      Sep 22 02:17:01 Tower dhcpcd[1478]: br0: probing for an IPv4LL address
      Sep 22 02:17:02 Tower kernel: e1000 0000:03:01.0 eth0: Detected Tx Unit Hang
      Sep 22 02:17:02 Tower kernel: Tx Queue <0>
      Sep 22 02:17:02 Tower kernel: TDH <1000001>
      Sep 22 02:17:02 Tower kernel: TDT <1000001>
      Sep 22 02:17:02 Tower kernel: next_to_use <1>
      Sep 22 02:17:02 Tower kernel: next_to_clean <0>
      Sep 22 02:17:02 Tower kernel: buffer_info[next_to_clean]
      Sep 22 02:17:02 Tower kernel: time_stamp <fffbd99a>
      Sep 22 02:17:02 Tower kernel: next_to_watch <0>
      Sep 22 02:17:02 Tower kernel: jiffies <fffbf940>
      Sep 22 02:17:02 Tower kernel: next_to_watch.status <0>
      Sep 22 02:17:04 Tower kernel: ------------[ cut here ]------------
      Sep 22 02:17:04 Tower kernel: NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out

      Do you have a different NIC you could try?
  11. Yes, and I'm not sure exactly why it happens, possibly more with btrfs: image-type files can grow over time and report less space than they are actually using. This doesn't happen, or at least is much less obvious, if the images are regularly trimmed/unmapped. One way to confirm would be to move the images off the pool, then move them back using cp --sparse=always. There are also reports that defragmenting the filesystem helps, as long as you don't use snapshots; in that case defragging would be a bad idea.
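A minimal sketch of what cp --sparse=always does, using a throwaway file (the image name is made up):

```shell
# Make a 10 MiB file of explicitly written zeros (all blocks allocated)
dd if=/dev/zero of=vm.img bs=1M count=10 status=none

# Copy with --sparse=always: runs of zeros are punched out as holes
cp --sparse=always vm.img vm-sparse.img

# Same logical size, but the sparse copy allocates far fewer blocks
stat -c '%s %n' vm.img vm-sparse.img
du -k vm.img vm-sparse.img
```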
  12. Unraid is not RAID; you'll never get the same speeds as solutions that stripe drives. You can have fast pools, but the array will always be limited by single-disk speed.
  13. Remove all the RAM from CPU0 and use half of that RAM on the other CPU; if it boots, start re-adding the removed DIMMs.
  14. Disabled and unmountable are two different things; where is the old disk9? It's not in the diags posted.
  15. Syslog in the diags starts over after every boot, enable the syslog server and post that if it crashes again.
  16. Looks more like a power/connection issue:

      Sep 21 16:26:33 NetPlex kernel: pm80xx0:: mpi_sata_completion 2455:IO failed device_id 18439 status 0xf tag 95
      Sep 21 16:26:33 NetPlex kernel: pm80xx0:: mpi_sata_completion 2490:SAS Address of IO Failure Drive:50000d1109b27a9e
      Sep 21 16:26:33 NetPlex kernel: sas: sas_to_ata_err: Saw error 135. What to do?

      But I'm not familiar with these controllers and their errors.
  17. Since disk4 is disabled unassign it, start the array and post new diags.
  18. libvirt appears to be mounting correctly; there's a crash though. Reboot in safe mode and post new diags; you should also update to -rc5.
  19. If it's in IT mode it's plug and play.
  20. Yeah, just to check the firmware the backplane needs to be detected, and to update it you will likely need a supported HP controller. Diagnostics would show the firmware if the expander is detected, this might also show the firmware, but again unlikely you'll be able to update it with the LSI.
  21. Vdisks can grow with time if not trimmed, e.g. for Windows VMs see here.
  22. /mnt/user permissions are wrong; don't ask me why they work with v6.9, but they are still wrong. Type: chmod 777 /mnt/user then reboot and try again.
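What the command changes, shown on a scratch directory (example path only; on the server the target is /mnt/user):

```shell
# Scratch directory standing in for /mnt/user (hypothetical path)
mkdir -p /tmp/user-demo

chmod 500 /tmp/user-demo      # simulate a too-restrictive mode
stat -c '%a' /tmp/user-demo   # 500

chmod 777 /tmp/user-demo      # the fix: full read/write/execute for everyone
stat -c '%a' /tmp/user-demo   # 777
```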