Everything posted by JorgeB

  1. Filesystem      Size  Used Avail Use% Mounted on
     /dev/sdc1       224G  224G  168K 100% /mnt/disks/SSDSC2KB240G7R_BTYS83220MDP240AGN
     Disk containing the vdisk is full.
  2. That's no concern for an SSD; don't forget to run memtest.
  3. Looks like it is, an extended SMART test will confirm.
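     For example, from the console (replace sdX with the actual device):

     smartctl -t long /dev/sdX    # start the extended self-test
     smartctl -a /dev/sdX         # once it finishes, check the self-test log for errors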
  4. Aug 19 13:05:50 Server emhttpd: unclean shutdown detected
     You likely need to increase the timeout even more.
  5. Btrfs is detecting a lot of corruption; you should run memtest, then back up the pool and re-format. Also take a look here for better pool monitoring.
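     As a starting point, assuming the pool is mounted at /mnt/cache, the per-device error counters can be checked with:

     btrfs dev stats /mnt/cache    # any non-zero counter means errors were detected on that device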
  6. Sometimes user shares are disabled after updating to v6.10-rc1; go to Settings -> Global Share Settings and re-enable them.
  7. Looks more like a connection/power problem, but since the disk dropped offline there's no SMART report; reboot/power cycle the server to see if it comes back online and post new diags.
  8. Previous syslog shows many call traces; I can't see the reason for them, though. It could be hardware or kernel related. Try upgrading to v6.10 to see if the newer kernel helps; if it doesn't, it could be hardware.
  9. I would recommend using multiple pools for this: create as many as needed with 3 or 4 disks max. They can all have the same share, and Chia will see all the plots with a single entry if you use, for example, /mnt/user/plots. Smaller pools are much easier to manage, and if you lose a disk you only lose that pool, not everything.
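     A small illustration, assuming two pools named pool1 and pool2: a top-level folder with the same name on each pool is merged into one user share, e.g.

     mkdir -p /mnt/pool1/plots /mnt/pool2/plots
     ls /mnt/user/plots    # shows the combined contents of the plots folders from both pools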
  10. I'll add a note that you can never go below the minimum number of devices for the selected profile, i.e., raid1c3 and raid5 require a minimum of 3 devices, raid10 requires 4 devices, etc.
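      For example, converting to raid1c3 only succeeds with at least 3 devices in the pool (mount point is just an example):

      btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache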
  11. Next step would be trying a different board/CPU, if available.
  12. Currently the mover doesn't move data from pool to pool, only pool to array or array to pool.
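      As a manual workaround, assuming pools mounted at /mnt/cache and /mnt/pool2 and a share called "data", you can copy it yourself:

      rsync -avX /mnt/cache/data/ /mnt/pool2/data/    # copy preserving permissions and extended attributes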
  13. Best bet for now is to back up, re-format and restore the data to the pool.
  14. There's nothing logged about the crash, which suggests a hardware issue.
  15. Check the syslog for the name of the file(s), then delete them or restore from backup.
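      For example (log location may vary):

      grep -i csum /var/log/syslog    # the btrfs checksum errors include the affected file/inode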
  16. That can take weeks, you can check the progress by clicking on the pool in the GUI, balance section.
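      It can also be checked from the console, assuming the pool is mounted at /mnt/cache:

      btrfs balance status /mnt/cache    # shows how many chunks are left to balance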
  17. Try with another computer if available, or a clean OS. This is normal; that's why you should test with a single stream for best results.
  18. That suggests it's not the controller.
  19. This is normal, you first needed to convert to raid1 then remove the device.
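      For reference, the equivalent console sequence, assuming the pool is at /mnt/cache and the device being removed is /dev/sdX1:

      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # convert data and metadata to raid1 first
      btrfs device remove /dev/sdX1 /mnt/cache                         # then the device can be removed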
  20. Please post the complete diagnostics.
  21. See here for the checksum errors; you should run a scrub and monitor the pool going forward, but the main issue appears to be the constant call traces. I can't see what's causing them; it looks more hardware related (or your hardware doesn't like that kernel). You can try upgrading to v6.10 to see if the newer kernel helps; if it's the same, it's likely hardware.
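      Assuming the pool is mounted at /mnt/cache, the scrub can be run and checked with:

      btrfs scrub start /mnt/cache
      btrfs scrub status /mnt/cache    # reports any checksum errors found and corrected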