JorgeB (Moderators)
Everything posted by JorgeB

  1. The log snippet you posted is perfectly normal. If the FAQ suggestions are already being followed, try this; it might catch something.
  2. It works for any profile, as long as the old device remains connected during the replacement.
  3. Best bet is to back up then reformat; some recovery options here if needed.
  4. If you haven't yet, see here.
  5. Only the first one will be accessible when using the user share; both can be accessed when using the disk shares.
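A toy illustration of the point above (not Unraid code; the paths are made up for the demo): when the same file exists on two data disks, the merged user share only exposes the copy on the first disk, while the individual disk shares expose both.

```shell
# Simulate two data disks holding the same relative path.
mkdir -p /tmp/demo/disk1/share /tmp/demo/disk2/share
echo "copy on disk1" > /tmp/demo/disk1/share/file.txt
echo "copy on disk2" > /tmp/demo/disk2/share/file.txt

# What /mnt/user/share/file.txt would return (first disk wins):
cat /tmp/demo/disk1/share/file.txt
# The second copy is only reachable through its disk share,
# i.e. the equivalent of /mnt/disk2/share/file.txt:
cat /tmp/demo/disk2/share/file.txt
```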
  6. That confirms there wasn't a valid filesystem there, at least not XFS, and that's not surprising. We can do whatever you want with disks 15 and 16. The easiest option would be to leave disk 15 as is and rebuild disk 16; that would also be the quickest way to get the array back to a protected state (assuming the new 8TB disk hasn't arrived yet). But as mentioned, you decide what you want to do and I'll post the instructions. As long as you follow them and there are no other failures, you won't lose anything else except the data that was already lost on the old disk 15.
  7. Just so we're clear, you want to use the new 8TB drive for disk15 and remove disk16, is that correct? Also please post a screenshot of the main GUI page to see current array status and best way forward.
  8. Correct to both. No need to resize; Unraid will do it on the next array start.
  9. Best bet is to ask on that docker's support thread:
  10. Also, xfs_repair is not always clear about whether it found problems; the only way to know for sure is to check the exit status. Alternatively, always run it without -n, since with -n nothing will actually be repaired if issues are found.
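A minimal sketch of the exit-status check described above. Here `check_fs` is a stand-in for `xfs_repair -n /dev/mdX` (the real command needs an actual array device); it returns 1 to simulate corruption being reported, since xfs_repair exits non-zero in dry-run mode when it finds problems.

```shell
# Stand-in for 'xfs_repair -n /dev/mdX'; returns 1 = corruption found.
check_fs() { return 1; }

# Branch on the exit status instead of trying to parse the output.
if check_fs; then
    echo "filesystem clean"
else
    echo "corruption found - re-run xfs_repair without -n to actually repair"
fi
```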
  11. Scrub is safe but won't fix those; --repair is not safe. You can try it, but make a backup first.
  12. The filesystem is corrupt and the best bet is to re-format the pool; if needed, see here for some recovery options.
  13. The system share, which contains the docker image, exists on both cache and disk1, so I can't see where the image actually is. There have been some reports that being on the array might cause issues, so confirm where it is; post the output of: find /mnt -name docker.img
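To illustrate how to read that find output: paths under /mnt/disk* are array disks and /mnt/cache is the pool. The paths below are hypothetical examples, not from the poster's system.

```shell
# Hypothetical hits from 'find /mnt -name docker.img'.
hits="/mnt/cache/system/docker/docker.img /mnt/disk1/system/docker/docker.img"

# Classify each hit by where it lives.
for f in $hits; do
    case "$f" in
        /mnt/cache/*) echo "$f -> cache pool" ;;
        /mnt/disk*)   echo "$f -> array disk (reported to cause issues)" ;;
    esac
done
```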
  14. That won't work; you can't remove two drives with a single parity. You can do that, but the invalid partition error can't be fixed by a filesystem check, which is why I suggested unassigning disk 15: Unraid would recreate the partition and we could see whether a valid filesystem exists on the emulated disk (I suspect it doesn't). Only that drive's data would be lost, assuming the old disk is really dead. My best guess of what happened here:
     - disk 15 failed (assuming it got disabled; even if it didn't, it wouldn't change the below)
     - you replaced disk 15 with a new disk and at the same time added disk 16
     - Unraid wouldn't let you start the array; it's impossible to replace one disk (disabled or not) and add another disk and start the array. You'd get the error "Invalid expansion. - You may not add new disk(s) and also remove existing disk(s)." That error isn't very helpful; it's an old bug.
     - you did a new config, maintained all assignments, including the new disks, and started the array; a parity sync/data rebuild would begin (does this ring any bells?)
     - the parity sync finished, and disks 15 and 16 are both unmountable with an invalid partition, since they were never formatted or rebuilt.
     The only other explanation I can think of, though it seems less likely to me, would be some flash drive trouble that allowed you to do something that is not possible on a normally functioning Unraid server.
  15. https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive Recommend connecting it to one of the onboard SATA ports first, to see if it happens again.
  16. I'm not necessarily saying the hardware is bad; it might just not work well with Unraid. The PERC H710 is a good example of that: not sure how good that driver is on Linux, but it's certainly not a recommended controller; LSI-based HBAs are recommended.
  17. And for a more normal example, this is another Unraid server with standard disks as devices, in fact they are SMR, though that's usually fine with Unraid:
  18. Yes, but I wasn't writing to RAM; in the example above only the first few GB are cached to RAM, as seen in the transfer graph. Here's an example of a much larger transfer: My point was that Unraid can be fast; the speed you're experiencing suggests some issue, either hardware or config.
  19. The allocated chunks are created and deleted as needed; as long as there is unallocated space on the device(s), you should never get out-of-space errors.
  20. The domains and system shares are still on the array, but that's OK if that's what you want. Cache still has plenty of space; the problem is likely with the VM itself, possibly a wrong path somewhere. See if you can copy a large file to cache.
  21. Yes, all indications are that controllers based on the JMB585 chipset (and the 2-port version, the JMB582) work reliably with Unraid; I've had one for a few months.
  22. With turbo write you should be able to write to the array as fast as your slowest device. Unraid is not made for performance, but it can be fast as long as the hardware used allows it, e.g.: this is writing directly to the array, not cache, but all array members are a couple of disks in RAID0.