WEHA

Members
  • Content Count: 83
  • Joined

  • Last visited

Community Reputation

1 Neutral

About WEHA

  • Rank: Newbie

  1. I had a share that was set to cache "prefer" on cache pool 2. I want to get rid of cache 2 to replace HDDs with SSDs, so I changed the setting to cache "yes" on cache pool 1. When I started mover it wanted to move the files on cache 2, but it failed with "File exists":
     move: move_object: /mnt/cache2/xxx.yyy File exists
     When I set it to cache "yes" on cache pool 2 and restarted mover, it started working again. (A way to check for colliding paths between pools is sketched after this list.)
  2. That should not be necessary at all; exceptions are for when all else fails. Anyway, it does not really matter, it's "fixed" in beta35.
  3. Very well, thanks for your input.
  4. Well yes, not via btrfs, but I have no issues with the VM: no errors in the event log and full backups are working. That's why I believe the vdisk is fine. It's just weird to me that only the docker image is affected, and it was on a COW share. But if you're confident that there is no issue with this scenario, then OK.
  5. I mean COW, by enabling it. So the system share had COW and the vdisk had NOCOW, but the docker image was corrupt and the vdisk image was not.
  6. It's strange that it's only the docker file and not the VM file... could it be related to NOCOW / COW? I enabled COW for the system share and thus the docker image; the vdisk has NOCOW. Thank you for assisting.
  7. I moved everything off; 2 files remained, a vdisk file and the docker image. The docker image could not be moved due to an I/O error, so I removed it and recreated it on another pool. I reran scrub and now no errors are detected. Is this related to the docker image being set to xfs on a btrfs pool? I set it to xfs to be sure the bug that causes heavy disk I/O was gone. SMART does not show any errors on the disk, so can I be sure this was software corruption and not caused by a hardware (HDD) defect? (A quick re-check routine is sketched after this list.)
  8. Same story, I see "callbacks suppressed" though:
     [203355.213783] BTRFS error (device sde1): unable to fixup (regular) error at logical 1342354677760 on dev /dev/sde1
     [203436.360164] scrub_handle_errored_block: 8 callbacks suppressed
     [203436.360209] btrfs_dev_stat_print_on_error: 8 callbacks suppressed
     [203436.360212] BTRFS error (device sde1): bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 93, gen 0
     [203436.360214] scrub_handle_errored_block: 8 callbacks suppressed
     [203436.360215] BTRFS error (device sde1): unable to fixup (regular) error at logical 1348826648576 on dev /dev/sde1
     [2
  9. tower-diagnostics-20201124-1452.zip
  10. It's copied from the syslog file in nano, so I would think that is the full syslog? There are warnings from before the scrub though:
      root@Tower:/var/log# cat syslog | grep "BTRFS warning"
      Nov 23 03:59:25 Tower kernel: BTRFS warning (device sde1): csum failed root 5 ino 182291 off 1765621760 csum 0xd488241c expected csum 0xdbe78a4e mirror 1
      Nov 23 03:59:25 Tower kernel: BTRFS warning (device sde1): csum failed root 5 ino 182291 off 1765621760 csum 0xd488241c expected csum 0xdbe78a4e mirror 1
      Nov 23 20:40:23 Tower kernel: BTRFS warning (device sde1): csum failed root -9 ino 281 of
  11. Syslog does not show files:
      Nov 24 13:01:51 Tower kernel: BTRFS info (device sde1): scrub: started on devid 1
      Nov 24 13:01:51 Tower kernel: BTRFS info (device sde1): scrub: started on devid 2
      Nov 24 13:03:22 Tower kernel: BTRFS error (device sde1): bdev /dev/sdk1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
      Nov 24 13:03:22 Tower kernel: BTRFS error (device sde1): bdev /dev/sdk1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
      Nov 24 13:03:22 Tower kernel: BTRFS error (device sde1): bdev /dev/sdk1 errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
      Nov 24 13:03:22 Tower kernel: BTRFS error (device sde1):
  12. *sigh* ... how do I get a list of files? I'm running scrub and this is the status already:
      Error summary: csum=35
        Corrected: 4
        Uncorrectable: 31
        Unverified: 0
      These are software errors, correct? SMART does not indicate a problem, and this is also a new disk. (One way to map the reported inodes and logical addresses to file names is sketched after this list.)
  13. Attached. Seems like this is the culprit?
      Nov 24 09:22:05 Tower kernel: BTRFS warning (device sde1): csum failed root -9 ino 283 off 951992320 csum 0x47d58bec expected csum 0x56997f79 mirror 1
      tower-diagnostics-20201124-1149.zip
  14. Tried converting twice; it remains in the same state as posted earlier. It starts, and after about 30 seconds or so it goes back to no balance.
  15. Could you just confirm to me that converting from single to RAID1 does not lose data? (It's not stated in the FAQ nor in the Unraid GUI; a conversion sketch follows the list.) I just added a disk to a cache pool, going from 1 to 2 devices, and Unraid made it single. (I believe this is the default according to the FAQ.) So this is the current state (two data profiles; related to the btrfs bug?):
      Data, RAID1: total=42.00GiB, used=24.68GiB
      Data, single: total=1.18TiB, used=1.16TiB
      System, DUP: total=8.00MiB, used=176.00KiB
      Metadata, DUP: total=2.00GiB, used=1.69GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B
      I have enough space available so nothi
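
On the "File exists" error in post 1: mover will not overwrite a file that already exists at the destination, so the usual cause is the same relative path being present on both pools. A minimal check, assuming the pools are mounted at /mnt/cache and /mnt/cache2 (adjust to the actual pool names):

    # List relative paths that exist on both pools at once (sorted input required by comm).
    comm -12 \
      <(cd /mnt/cache  && find . -type f | sort) \
      <(cd /mnt/cache2 && find . -type f | sort)

Any path printed is one mover will refuse to move until one of the two copies is removed or renamed.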
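
After recreating the docker image (post 7), a quick way to confirm the pool and the disk are clean again; the mount point and device name below are assumptions, substitute your own:

    btrfs scrub start -B /mnt/cache2   # -B stays in the foreground and prints the error summary when done
    btrfs dev stats /mnt/cache2        # per-device error counters (reset with -z once reviewed)
    smartctl -a /dev/sde               # check the underlying disk for reallocated or pending sectors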
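
On "how do I get a list of files" (post 12): the csum warnings in the syslog carry an inode number ("ino") and the scrub errors carry a logical address, and both can often be mapped back to a path with btrfs inspect-internal. A sketch using numbers taken from the posted logs, with /mnt/cache2 as an assumed mount point:

    # Resolve an inode from a "csum failed root 5 ino ..." warning to a path:
    btrfs inspect-internal inode-resolve 182291 /mnt/cache2
    # Resolve the logical address from an "unable to fixup ... at logical ..." scrub error:
    btrfs inspect-internal logical-resolve 1342354677760 /mnt/cache2

Warnings against negative roots (e.g. "root -9") refer to internal btrfs trees and will generally not resolve to a user-visible file.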
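
On the single-to-RAID1 conversion in post 15: a convert balance rewrites existing chunks across both devices in place rather than deleting anything, which is why the profile listing can legitimately show both "Data, RAID1" and "Data, single" while it is still running. A sketch, assuming the pool is mounted at /mnt/cache:

    # Convert data and metadata to RAID1 on a two-device pool:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
    # Check whether the balance is still running or has finished:
    btrfs balance status /mnt/cache
    # When it completes, only one Data line (RAID1) should remain:
    btrfs filesystem df /mnt/cache

If the balance keeps stopping after about 30 seconds, as in post 14, checking the syslog around that moment may show why it aborted.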