Everything posted by JorgeB

  1. As mentioned, disk1 will likely be unmountable because you mounted the array without all devices. You can cancel the rebuild and run xfs_repair to see if it's fixable; if it is, start the rebuild again (a command-line sketch follows below): https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
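     A minimal command-line sketch of the same check, assuming the array is started in maintenance mode and disk1 maps to /dev/md1 on this release (adjust the device to your setup):

         xfs_repair -n /dev/md1    # dry run: reports problems without writing anything
         xfs_repair /dev/md1       # actual repair, only if the dry run looks sane

     If it complains about a dirty log and asks for -L, be aware that zeroing the log can lose the last few transactions.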
  2. No, but you can use a spare disk if available, same size or larger.
  3. Try the latest v6.9-beta; if it's still not supported, the best bet is to ask LT to include the driver. They usually do on the next release, assuming one is available that works with the current kernel.
  4. As already mentioned, you should respect the maximum RAM speeds officially supported by AMD for your configuration, at least while you're troubleshooting, to rule that out. There have been several cases in the forum of instability and even data corruption with Ryzen and overclocked RAM.
  5. Follow the instructions below carefully and ask if there's any doubt.

     -Tools -> New Config -> Retain current configuration: All -> Apply
     -Check all assignments and assign any missing disk(s) if needed, like old disk1
     -Important - After checking the assignments leave the browser on that page, the "Main" page.
     -Open an SSH session/use the console and type (don't copy/paste directly from the forum, as sometimes it can insert extra characters):

         mdcmd set invalidslot 1 29

     -Back on the GUI and without refreshing the page, just start the array. Do not check the "parity is already valid" box (the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, since it doesn't account for the invalid slot command, and it won't happen as long as the procedure was done correctly). Disk1 will start rebuilding; the disk should mount immediately (probably not in this case), but if it's unmountable don't format, wait for the rebuild to finish (or cancel it) and then run a filesystem check.
  6. Yes, but if you were also getting errors with Windows it's likely a device problem.
  7. Very strange then, I've never seen that before and I've been using snapshots for years; I also don't remember any bug related to that on the mailing list. Please report back if rebooting fixes it, though if it does, stopping/starting the array should accomplish the same.
  8. I'm also using the beta and I think the most likely explanation is that everything is working correctly and there's just some mistake or confusion; Krusader showing the wrong used space is likely a Krusader issue, since if btrfs weren't freeing the space I don't see how Krusader could calculate it differently. Please try this:

         btrfs sub create /mnt/disk4/test
         fallocate -l 50G /mnt/disk4/test/file

     Wait a couple of seconds and confirm used space changed by 50G, then:

         btrfs sub snap -r /mnt/disk4/test /mnt/disk4/test/snap
         rm /mnt/disk4/test/file

     Now delete the snapshot and used space should go down by 50G after 30 seconds or so:

         btrfs sub del /mnt/disk4/test/snap

     Afterwards don't forget to delete the test subvolume:

         btrfs sub del /mnt/disk4/test
  9. I just used my test server for a quick test, and with an encrypted filesystem the space was also recovered immediately (it took a few seconds like it usually does) after deleting a snapshot, so it appears not to be that.
  10. You can try the invalid slot command, but since it looks like you started the array twice with a different config, parity won't be 100% in sync, so some corruption is expected; if you're using xfs it should be recoverable with little to no data loss. Before posting the instructions I need to know which Unraid release you're using; also confirm that you only have one parity disk and the number of the disk you want to rebuild.
  11. The syslog starts over after every reboot; you can try this and then post that syslog.
  12. Did you set the correct "power supply idle control" option as described here? If so, try disabling C-states completely.
  13. That suggests it's restoring to an invalid path, e.g. you created a folder called /mnt/disk1/Restore but are trying to restore to /mnt/disk1/restore.
  14. Try this and post that syslog after it crashes.
  15. Don't forget that Unraid uses an independent filesystem for each device, so if you want to do that for a share that spans multiple disks you have to snapshot every disk, not the share (see the sketch below).
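     A minimal sketch of what that looks like, assuming a share called Media that lives on disk1 through disk3, with each share folder already being a btrfs subvolume and a .snapshots folder existing on each disk (all names here are placeholders):

         for d in disk1 disk2 disk3; do
             # each array disk is its own btrfs filesystem, so each one needs its own snapshot
             btrfs subvolume snapshot -r /mnt/$d/Media /mnt/$d/.snapshots/Media_$(date +%Y%m%d)
         done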
  16. https://forum.openmediavault.org/index.php?thread/7331-guide-windows-previous-versions-and-samba-btrfs-atomic-cow-volume-shadow-copy/ These instructions are not for Unraid but the principle is the same; the samba options can be added in Settings -> SMB -> SMB Extras (see the example below).
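     A rough sketch of what the SMB Extras entry could look like, assuming a share called Media on disk1 with read-only snapshots stored in a .snapshots folder inside the share and named with a timestamp (share name, path and snapshot naming are examples only, adjust to your own snapshot scheme):

         [Media]
             path = /mnt/disk1/Media
             vfs objects = shadow_copy2
             shadow:snapdir = .snapshots
             shadow:sort = desc
             shadow:format = %Y-%m-%d-%H%M%S
             shadow:localtime = yes

     The shadow:format string has to match how the snapshots are actually named, or Windows won't list any previous versions.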
  17. Looks like a controller issue. Make sure it's well seated and sufficiently cooled; you should also upgrade it to the latest firmware (20.00.07.00), and you can also try a different slot if available (see the check below).
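     A quick way to confirm the current firmware from the console, assuming an LSI SAS2 controller and that the sas2flash utility is available (both are assumptions, adjust for your model):

         sas2flash -listall      # lists each controller with its firmware and BIOS versions
         sas2flash -list -c 0    # detailed information for controller 0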
  18. I don't use encryption, so I wonder if that can make a difference. See if rebooting really reclaims the space; if it does, my guess is that it's related to LUKS.
  19. Assuming you mean to the cache, since the cache floor (minimum free space) is set to 10TB.
  20. Sorry, no idea, try booting in safe mode just to make sure it's not plugin related.
  21. This is likely a general support issue; basically you'd need to start your services one at a time to try to find the culprit.
  22. I've never seen that, and I delete snapshots daily. Post the output of:

          btrfs sub list /mnt/disk#

      and

          btrfs fi usage -T /mnt/disk#

      where # is the number of the actual disk.