Everything posted by JorgeB

  1. You can try testdisk, it might be able to recover the deleted partition.
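     A rough example, assuming the affected drive shows up as /dev/sdX (a hypothetical name, run it from the console with the array stopped so nothing else is using the disk):
        testdisk /dev/sdX
     testdisk is interactive, select the disk and the partition table type, then use the Analyse option to search for the lost partition.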
  2. It will depend on the current data tree, split level always overrides allocation method.
  3. Yes, the LUKS errors start immediately, no idea why, please try booting in safe mode.
  4. Cache pool is completely full, errors are caused by the docker image running out of usable space, free up some space.
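     To see what is actually left on the pool, something like this (assuming the pool is mounted at the usual /mnt/cache) shows the real btrfs allocation, which is more reliable than df for btrfs:
        btrfs filesystem usage /mnt/cache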
  5. Best bet is to use the appropriate docker support thread:
  6. This really won't work, since I forgot an important detail for this case: when the beginning of the data disk was wiped, parity was also synced without that disk, so there's no way to recover using the invalid slot command. Best bet is to use a deleted file recovery utility like UFS Explorer.
  7. Recommended options would be a 16 port LSI HBA, or two 8 port LSIs, or one 8 port LSI plus a SAS expander.
  8. Did some Samba aio enable/disable tests. No time to do them with many different controllers, but I wanted to at least use a couple, so I tested on an xfs formatted array with reconstruct write enabled connected to an LSI 3008, and on a 3 SSD btrfs raid0 pool connected to the Intel SATA ports. I used robocopy to copy two folders to/from an NVMe device in my Win10 desktop, first a folder with 6 large files totaling around 26GB, second one with 25k small to medium files totaling 25GB, and tried to remove the RAM cache from the equation as much as possible. I only ran each test once, an average of 3 runs would be more accurate but I didn't have the time; these are the speeds reported by robocopy after the transfers were done:
     I was only going to use user shares for testing, but because of the very low write speed for small files I decided to repeat each test using disk shares:
     Not what I was testing here, but still interesting results: shfs has a very large overhead with small files, especially for writes. Not something I usually do in my normal usage, but perhaps one of the reasons people with Time Machine backups are seeing very low speeds? I believe those use lots of small files.
     As for Samba aio, I don't see any advantage in having aio enabled, if anything it appears to be generally a little slower. Add to that that it apparently performs much worse for some, that I still don't trust that the btrfs issue is really fixed, and that it might come back in future releases, so I would leave it disabled by default. Of course different hardware/workloads can return different results, but if anyone wants to enable it, it's really easy using the Samba extra options (see the sketch below).
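     A sketch of what enabling it could look like via the Samba extra options (these are the standard Samba aio parameters, applied in the global section; 0 disables aio, 1 enables it for requests of any size):
        aio read size = 1
        aio write size = 1
     Setting both back to 0 disables aio again.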
  9. Then that might be the problem.
  10. Try connecting that disk to the LSI instead to see if it makes any difference, swap with another if needed.
  11. Does a manual SMART report look correct? smartctl -x /dev/sde
  12. Previous users solved the problem by using an LSI HBA instead, you'll need a new cable though.
  13. Test one of those disks using a molex to SATA adapter, to rule out the 3.3v issue. FYI, the 30 device limit applies to array devices, if you only have 8 you're still very far from the limit.
  14. Don't overclock RAM with Ryzen, it's a known issue. Usually yes, any VMs might need tweaking.
  15. Would need the diags, did you save them before rebooting?
  16. This usually suggests a connection/power problem.
  17. Might still be able to recover with the invalid slot command if the disks are using xfs (or reiserfs), but note that parity won't be 100% in sync due to mounting the other disks. Still worth trying; I need to know what Unraid release you have and the disk # you want to rebuild (rough example below).
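     As a hypothetical example only, on recent 6.x releases the general form, with the array stopped and the disk to rebuild assigned, would be something like this from the console for disk1 (the exact syntax differs between releases, which is why I need to know yours):
        mdcmd set invalidslot 1 29
     then start the array to begin the rebuild. Don't run anything until the release and disk number are confirmed.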
  18. Any hardware/software changes in the past week?
  19. Check/replace cables, if it happens again get the diagnostics before rebooting.
  20. See here and make sure you're using the correct "power supply idle control" setting, also run memtest.
  21. Like mentioned, the snapshots can be a user share, or a folder inside a user share, wherever you want, and you can access them the same way you access any other data on Unraid. They don't take any space until you change the original data. Yeah, also a btrfs single profile filesystem, like it is with array devices, is much less likely to cause issues than pools, since many of the btrfs issues on Unraid are caused by pool members having errors or dropping without the user noticing.
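     A minimal sketch, assuming the share folder on that disk is already a btrfs subvolume and the snapshots go into a snapshots folder on the same disk (all names hypothetical):
        btrfs subvolume snapshot -r /mnt/disk1/TV /mnt/disk1/snapshots/TV@2020-10-10-11-58
     The -r makes the snapshot read-only, which is also what send/receive needs later.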
  22. Yes, something strange going on there, preclear seems to be getting the correct data, do you have a custom SMART controller set? Full diags might also help.
  23. NIC is not being detected, that's quite common with that NIC and recent boards, it mostly only works on boards with PCIe 1.1 slots. If you google "intel pro 1000 pt not recognized" you'll get a lot of hits.
  24. Yep, I have a script snapshotting a share on all disks with the same name, e.g. TV@2020-10-10-11-58, that snapshot is then sent to the backup server using send/receive (zfs also has this) disk by disk, and the data in that snapshot can then be accessed together as a user share. For the OP, he just needed to have btrfs on the backup server and snapshot the share(s) on all disks they occupy with the same name, then they can be accessed the same way.
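     Roughly what the per-disk send looks like, assuming the read-only snapshot from the previous example and a backup server reachable as backupserver with a matching /mnt/disk1/snapshots path (all names hypothetical):
        btrfs send /mnt/disk1/snapshots/TV@2020-10-10-11-58 | ssh backupserver "btrfs receive /mnt/disk1/snapshots"
     Repeat per disk; incremental sends with -p and a previous snapshot keep the transfers small.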
  25. It can be done with Unraid, I do it for all my backups, you need to use btrfs and take snapshots disk by disk, since each array device is an independent file system.