Everything posted by JorgeB

  1. Sorry, I'm not in the US. It's in the link above: ASMedia 1166-based controllers.
  2. du isn't reliable with btrfs; the GUI will show the correct used/free space. Note that if you have vdisks, they will grow if not trimmed; see here for Windows VMs.
  3. You have to reset the errors; how to do that is explained in the linked FAQ entry.
  4. Try without the extra RAM you recently added.
  5. Aug 20 01:04:41 Prism kernel: BUG: unable to handle page fault for address: ffffe80005fd7c34
     This looks more like a hardware issue; start by running memtest.
  6. No, just up to 6, but a SAS HBA can use SATA drives, and they are cheap used on eBay.
  7. It shouldn't, but it might make things worse or only fix it temporarily; I recommend re-formatting it once it's backed up.
  8. Kingston DTSE9H (not the DTSE9G2); avoid any USB 3.0 flash drive, they fail much more often.
  9. It will, you can try booting using the GUI mode, or accessing the server by the IP address.
  10. That syslog might give a clue, post it if it crashes again.
  11. It will depend on the use cache setting for that share. If it's set, for example, to "cache only" and using the pool you want to write to, it should even work correctly for writes; you just need to remember to set it to the next pool once that one is full.
  12. There's a problem with the pool: it's only using one device, and it looks like the 2nd one was never successfully added, though that's unlikely to be related to your issue. To fix it you can try this:
      - Stop array
      - Unassign cache1 (currently sdk)
      - Start array
      - Stop array
      - Re-assign cache1
      - Start array
      Then post new diags.
  13. Possibly. No, but also doesn't look like a controller related issue.
  14. Please don't double post; you can bump the original thread a couple of times to see if anyone has any ideas:
  15. You can have multiple pools with the same share; you just need to move the data there manually. Since in this case data is only written once and then left alone, it wouldn't be that complicated: in the docker plots entry you'd just specify /mnt/user/plots, and it would access all the pools containing that share (folder).
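As a minimal sketch of the manual move described above (the pool directories here are local stand-ins for real pool mount points such as /mnt/poolname; names are illustrative, not from the original post):

```shell
# Stand-ins for two Unraid pools, each holding a copy of the "plots" share folder.
mkdir -p pool1/plots pool2/plots

# A plot written to the first pool's copy of the share:
touch pool1/plots/plot-001.dat

# Once the first pool fills up, move data to the share's folder on another pool.
# A user share path like /mnt/user/plots would see the file either way, since
# user shares merge the same top-level folder across all pools.
mv pool1/plots/plot-001.dat pool2/plots/
ls pool2/plots
```

The key point is that the share is just a top-level folder name; any pool containing a folder with that name contributes its contents to the merged user share view.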
  16. SMART Extended Self-test Log Version: 1 (1 sectors)
      Num  Test_Description   Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline   Completed: read failure  70%        41               1477270990
      The SMART test failed; the disk should be replaced.
  17. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  18. Yes, that's what you should do, some recovery options here if needed.
  19. It doesn't, since parity will be re-synced; just make sure you don't assign a previous data drive to a parity slot.
  20. You can "re-sparsify" the vdisk: move it somewhere else, then copy it back with cp --sparse=always /source /dest
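A quick way to see the effect (file names are illustrative; the dd-created file of allocated zero blocks stands in for a vdisk whose guest deleted data without TRIM):

```shell
# Create a 10 MiB file of real, allocated zero blocks.
dd if=/dev/zero of=vdisk.img bs=1M count=10 status=none

# cp detects runs of zeros and writes them as holes in the copy.
cp --sparse=always vdisk.img vdisk-sparse.img

# Apparent sizes match, but the sparse copy allocates almost no blocks.
du -h --apparent-size vdisk.img vdisk-sparse.img
du -h vdisk.img vdisk-sparse.img
```

This is why the copy has to go to a different location first: cp can't rewrite the file in place, so you move (or copy) it away, then copy it back sparsely.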
  21. Also this can help with that not happening in the future: https://forums.unraid.net/topic/51703-vm-faq/?do=findComment&comment=557606