
JorgeB
Moderators
  • Posts: 67,519
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. Don't overclock the RAM, see here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  2. In that case space is correctly reported, and as mentioned, snapshots are immediately deleted and free space reclaimed, so that's not your problem.
  3. This is normal with btrfs: it allocates 1GB chunks as needed, both for data and metadata. This is not a problem; in fact the opposite can be, if the filesystem is fully allocated when it shouldn't be.
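As an illustration, you can compare allocated vs. actually used space yourself from the console (the mount point /mnt/cache is just a placeholder; use your pool's mount point):

```shell
# Show space allocated to chunks vs. space actually used,
# broken down by Data / Metadata / System.
btrfs filesystem usage /mnt/cache

# A more compact per-chunk-type view of the same information:
btrfs filesystem df /mnt/cache
```

If "allocated" is close to the device size while "used" is much lower, the filesystem is fully allocated, which is the situation worth investigating.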
  4. Try this: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta25-available-r990/?do=findComment&comment=10090
  5. Data from deleted snapshots (not referenced anywhere else) is deleted immediately, though the cleanup can take a few seconds, or even minutes for very large filesystems. Note that depending on the btrfs pool config used and the Unraid version, Unraid can show incorrect used space, free space, or both for some btrfs pools, depending on profile and number of devices.
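A minimal sketch of deleting a snapshot and waiting for the cleanup to finish before re-checking free space (all paths are hypothetical placeholders):

```shell
# Delete a snapshot; the space is reclaimed asynchronously.
btrfs subvolume delete /mnt/pool/.snapshots/backup1

# Block until all deleted subvolumes on the pool are fully cleaned up.
btrfs subvolume sync /mnt/pool

# Free space reported here should now reflect the deletion.
btrfs filesystem usage /mnt/pool
```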
  6. We usually recommend once a month. Your parity devices are still invalid: Unraid is doing a parity sync, not a check, and only after parity is valid can you run a check. If you cancel or shut down the array before the sync is done, it will start over from the beginning the next time the array is started.
  7. Wait for the parity sync to finish, trying to use the array while it's not done will result in very bad performance.
  8. Same thing as any other JMB585 controller, depends if you prefer to use a PCIe slot or an M.2 slot.
  9. Does the same happen after booting in safe mode?
  10. USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
      root 2 0.0 0.0 0 0 ? S Sep12 0:00 [kthreadd]
      root 76 95.6 0.0 0 0 ? R Sep12 950:33 \_ [kworker/1:1+pm]
      root 31 95.3 0.0 0 0 ? R Sep12 947:35 \_ [kworker/4:0+usb_hub_wq]
      Yeah, not sure what these are, but it's strange, especially the USB hub part. Does it go away if you reboot?
  11. Please post the diagnostics (Tools -> Diagnostics); they might show what the problem is.
  12. Because it was only found and reported on the latest beta; it should be fixed in the next one.
  13. Yes, you can use, for example, midnight commander (mc on the console).
  14. Turbo write helps with speed, but parity will still be overused compared to the data devices: if for example you have 10 data devices, over time parity will have 10 times the number of writes of any single data device. So you want a device with better endurance, and since it can't be trimmed and will be much more heavily used, faster performance will also help.
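The arithmetic above can be sketched as follows (hypothetical numbers; single parity, and assuming writes are spread evenly across the data devices):

```python
# Every write to any data device also updates parity,
# so parity sees the sum of all data-device writes.
def parity_writes(writes_per_data_device, num_data_devices):
    return writes_per_data_device * num_data_devices

# 10 data devices, each receiving 1 TB of writes over time:
total = parity_writes(1, 10)
print(total)  # prints 10: parity absorbs 10x the writes of any single data device
```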
  15. Memtest won't find any errors unless ECC can be disabled in the BIOS, check the event log in the board BIOS/IPMI, there might be some more info, e.g.:
  16. Please use the existing docker support thread; you can find it by clicking on it:
  17. Yeah, it's a known bug, for now it needs to be done manually.
  18. It's fine, the sync finished successfully:
      Sep 12 10:10:05 unraid kernel: md: sync done. time=88414sec
      Sep 12 10:10:05 unraid kernel: md: recovery thread: exit status: 0
      Strange that the GUI was stuck. You do have something spamming your logs; no idea what these are about:
      Sep 10 20:47:40 unraid kernel: traps: node[14845] trap invalid opcode ip:560d6ce0270f sp:7ffc308bd0e8 error:0 in node[560d6cdfe000+77b000]
      Sep 10 20:47:41 unraid kernel: traps: node[15235] trap invalid opcode ip:5564cbf2c70f sp:7ffccd709a38 error:0 in node[5564cbf28000+77b000]
      Sep 10 20:47:42 unraid kernel: traps: node[15587] trap invalid opcode ip:5646aceda70f sp:7ffe15854878 error:0 in node[5646aced6000+77b000]
      Sep 10 20:47:44 unraid kernel: traps: node[16105] trap invalid opcode ip:55f17924770f sp:7fff2b8b0ea8 error:0 in node[55f179243000+77b000]
  19. It's not the superblock; the first error lines are the important ones. It's extent tree corruption, and it looks like part of it was wiped/trimmed. If it's just the extent tree, btrfs restore should be able to recover most of your data.
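A minimal sketch of a btrfs restore attempt (device and destination paths are placeholders; the destination must be a separate, writable location with enough free space):

```shell
# Read-only recovery: copies whatever files btrfs restore can
# reach from the damaged device into the destination directory.
btrfs restore -v /dev/sdX1 /mnt/recovery/

# If the default root is damaged, listing tree roots first can help
# pick an alternative root to restore from.
btrfs restore -l /dev/sdX1
```

btrfs restore never writes to the source device, so it's safe to try before more invasive repair options.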
  20. Those ATA errors suggest the SSD also isn't very happy there, so it's probably a good idea to replace that controller ASAP. Since the SSD shouldn't normally be as stressed as an array device during a rebuild, it might be OK to use it there for some time.