Everything posted by JorgeB

  1. Doing a parity sync should not crash the server; please post the diagnostics.
  2. He formatted the disk as btrfs and it mounted, then he changed the fs back to xfs; that's why it didn't mount while parity was still reflecting btrfs.
  3. Yes, the problem is having a single btrfs array disk: parity will hold part of the superblock info, so during a btrfs scan parity looks like it has an invalid btrfs filesystem and pools get confused by that. This should be fixed soon by parity no longer using a partition.
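     If you want to see what the btrfs scan is picking up, one illustrative check (device names will differ per system) is:
         btrfs filesystem show
     Parity showing up in that list would match the stale superblock issue described above.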
  4. It's happening because you had a single btrfs array drive; it's a known issue since it makes parity appear to have an invalid btrfs filesystem. To fix:
     - unassign both pool devices
     - start the array
     - format disk3 using xfs
     - stop the array
     - re-assign both cache devices
     - start the array
     All should be good; if it isn't, post new diags.
  5. The diags you posted don't show the problem; please reboot and post new diags after array start.
  6. It's only linking at x4, that means a theoretical max speed of 2GB/s; about 75% of that is usable, so 1.5GB/s seems about right.
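     Rough math, assuming a PCIe 2.0 link (500MB/s per lane, which is what the 2GB/s figure implies):
         4 lanes x 500MB/s = 2GB/s theoretical
         2GB/s x ~0.75 usable after overhead ≈ 1.5GB/s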
  7. This should give 2.2GB/s; check the LSI link speed and post the output of: lspci -d 1000: -vv
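     For illustration, the lines to look at in that output are LnkCap/LnkSta for the LSI card; something like the following (values made up, exact wording varies with the lspci version):
         LnkCap: Port #0, Speed 8GT/s, Width x8
         LnkSta: Speed 8GT/s (ok), Width x8 (ok)
     If LnkSta reports a lower speed or width than LnkCap, the card isn't linking at full speed.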
  8. I've been asking for this for pools for a long time, and the last info I have is that it should be implemented soon, hopefully when zfs support is added, so it can be used for both filesystems.
     Basically, monitor the output of "btrfs device stats /mountpoint" (or "zpool status poolname" for zfs) and use the existing GUI errors column for pools, which is currently just for show since it's always 0, to display the sum of all the possible device errors (there are 5 for btrfs and 3 for zfs). If it's non-zero for any pool device, generate a system notification; optionally, hovering the mouse over the errors would show the type and number of errors, like we have now when hovering over the SMART thumbs-down icon in the dash. Additionally you'd need a button or check mark to reset the errors; it would run "btrfs dev stats -z /mountpoint" or "zpool clear poolname".
     I never asked for array support for this, but it would also be a good idea, again for both zfs and btrfs, especially since we know that data corruption with btrfs can go undetected when transferring data using user shares, and it will likely be the same with zfs. We could also use the existing errors column; in the array it is used, but it could show errors for both the md driver and the fs.
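     A minimal sketch of the pool-side check, assuming a btrfs pool mounted at /mnt/cache and the stock Unraid notify script location (both are just examples here):
         #!/bin/bash
         # Sum every per-device error counter reported for the pool
         POOL=/mnt/cache
         ERRORS=$(btrfs device stats "$POOL" | awk '{sum += $NF} END {print sum+0}')
         if [ "$ERRORS" -gt 0 ]; then
             # Raise a system notification (script path assumed)
             /usr/local/emhttp/webGui/scripts/notify -i warning \
                 -s "Pool device errors" -d "btrfs device stats reports $ERRORS error(s) on $POOL"
         fi
         # After investigating, the counters can be reset with:
         # btrfs dev stats -z "$POOL"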
  9. It still looks like a power/connection problem, but if the cables were replaced it could be the drive; I would replace it, or remove it if you don't need the space.
  10. Each of these ports can be split in 2, so 12 total just here.
  11. It should mount now; then look for a lost+found folder and any data inside.
  12. If you transfer using SMB with a user or disk share there's no error; there is an error if you do it internally using a disk share. So disk share copying without error over SMB is new.
  13. Run it again without -n; if it asks for -L, use it.
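     For example, assuming this is xfs_repair on an array disk (the device name is just a placeholder):
         xfs_repair -v /dev/mdX     # run without -n (the no-modify check flag)
         xfs_repair -vL /dev/mdX    # only if it asks for -L (zeroes the log)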
  14. You can use splitters if there's no other way, but a SATA plug should only be split for 2 devices; you can also use molex to SATA, those can handle more than 2 SATA disks, but if possible still limit all splitters to 2.
  15. You can try reverting to the previous release or upgrading to v6.11.0-rc3; it might not like the current kernel.
  16. These are a very bad idea. In the syslog, search for "fwv". Use this one; the latest release is 20.00.07.00.
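     A quick way to pull that from the log, assuming the default Unraid syslog location:
         grep -i fwv /var/log/syslog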
  17. Curiously with v6.11.0-rc3 there's no i/o error even when copying using a disk share over SMB; looks like a Samba issue, it stumbles but still copies a known corrupt file without error. It still works as it should if copying locally with a disk share:
     root@Tower15:~# pv /mnt/disk2/test/1.iso > /mnt/disks/temp/1.iso
     395MiB 0:00:07 [75.3MiB/s] [========> ] 9% ETA 0:01:04
     pv: /mnt/disk2/test/1.iso: read failed: Input/output error
     root@Tower15:~# pv /mnt/user/test/1.iso > /mnt/disks/temp/1.iso
     3.93GiB 0:01:06 [60.5MiB/s] [===========================================>] 100%
  18. I don't have enough experience with Helium disks to say if it's worth monitoring for now; it won't hurt. I only have a couple of WDs using Helium, and the attribute is different there. In this case, since there's a SMART attribute "failing now", the user gets a notification anyway, assuming notifications are enabled.
  19. Disk7 appears to be failing:
     ID# ATTRIBUTE_NAME           FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
      24 Helium_Condition_Upper   PO---K  075   043   075    NOW  0
     Helium leak? Also check/replace cables on parity2 and disk5.
  20. Problem with the onboard SATA controller, quite common with some Ryzen servers, especially under load; the best bet is to use an add-on controller.
  21. Try switching to ipvlan (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right). If just the OS crashed you would still have IPMI access; that makes me think there might also be other issues.
  22. Looks more like a power/connection problem; check if the affected devices share anything, like a splitter or miniSAS cable. It's also a good idea to update the LSI firmware, since it's quite old.