JorgeB last won the day on February 28

JorgeB had the most liked content!


  • Member Title: On vacation, back in a week or so



JorgeB's Achievements

Grand Master (14/14)

Community Answers

  1. Not really, it's behaving as most-free instead of high-water. Try changing the allocation method to a different one and then back to high-water, in case there's some glitch.
  2. No, but if you still have the old disk you can check whether there's any data on it. Tools - New Config - keep array and pool assignments - Apply. Then go back to Main and start the array.
  3. The problem still appears to be parity, but you also have a disabled disk. To remove parity you will need to do a new config, and you will lose any data that was on disk5, but it's probably the only option now.
  4. This usually means bad RAM or other kernel memory corruption; start by running memtest.
  5. See if you can get the syslog, so we can check whether the issue is still with parity or whether there's also a problem with the disk1 filesystem: run cp /var/log/syslog /boot/syslog.txt and then attach the file here.
  6. I didn't notice before that you have a disabled disk. With a disabled disk you cannot remove parity, at least not without losing the data on that disk. Has it been disabled for long? Do you know if there's any data there? Do you still have the old disk?
  7. Not doing it can degrade write performance. It's an option.
  8. If auto-start is enabled and you cannot disable it using the GUI, edit disk.cfg on your flash drive (config/disk.cfg) and change startArray="yes" to startArray="no", then type reboot in the CLI.
  9. Correct. It should.
  10. Are they connected to an LSI HBA? Those only support TRIM for devices with deterministic TRIM support.
  11. Check to see if there's a new BIOS setting to enable the iGPU.
  12. Depends on whether the data is mostly compressible, but the default compression level doesn't usually affect performance, so it should never hurt; note that compression will only apply to new writes. That's expected with ZFS; you can use a docker image instead if you prefer to avoid it. I would still schedule a trim to run daily or weekly, since auto-trim sometimes does not trim everything it could.
  13. The array should start if you unassign/disconnect parity. Did you try that?
  14. Unfortunately there's nothing relevant logged, which usually points to a hardware issue. One thing you can try is to boot the server in safe mode with all docker containers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning the other services back on one by one.
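One quick way to triage the syslog from answer 5 before attaching it is to grep it for error keywords. This is only a sketch: the sample log below is fabricated for illustration, and on a real server you would point grep at /boot/syslog.txt instead.

```shell
# Fabricated sample syslog standing in for /var/log/syslog
cat > /tmp/syslog.txt <<'EOF'
kernel: md: disk1 read error, sector 12345
kernel: XFS (md1): Metadata corruption detected
kernel: mdcmd (36): check correct
EOF

# Count lines mentioning errors or corruption; a non-zero count
# means there is something worth looking at in the full log
grep -ciE 'error|corrupt' /tmp/syslog.txt
```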
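The disk.cfg edit described in answer 8 can also be done from the command line with sed. A minimal sketch, run against a throwaway copy at /tmp/disk.cfg rather than the real config/disk.cfg on the flash drive (the spindownDelay line is just illustrative padding):

```shell
# Throwaway stand-in for config/disk.cfg
cat > /tmp/disk.cfg <<'EOF'
startArray="yes"
spindownDelay="0"
EOF

# Flip auto-start off; sed -i edits the file in place
sed -i 's/^startArray="yes"/startArray="no"/' /tmp/disk.cfg

# Confirm the change before rebooting
grep startArray /tmp/disk.cfg
```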