JorgeB

Moderators · Posts: 67,726 · Days Won: 708

Everything posted by JorgeB

  1. It doesn't, you just need to adjust the units, the GUI is in TB, not TiB: Used = 1.13 TiB = 1.24 TB / 2 = 620 GB; Free = 348.5 GiB = 370 GB
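     For reference, a quick way to do the binary-to-decimal conversion from the console, using the used value from the post above (awk is just one option):
        # 1 TiB = 1024^4 bytes, 1 TB = 1000^4 bytes
        awk 'BEGIN { printf "%.2f TB\n", 1.13 * 1024^4 / 1000^4 }'   # prints 1.24 TB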
  2. Nice work, seems to be working well enough for a beta, a couple of suggestions so far:
     • Would like at least an option without seconds, or make scheduled snaps show 00 in the seconds, or they won't be OCD appropriate.
     • If possible I would like a retention option based on available space, that's how I currently clean up some of my send/receive snapshots: I have an hourly script that checks the pool for available space and, when it's below a certain threshold, deletes the oldest snapshot (see the sketch after this post). If that option is difficult to implement or there's not enough interest from others it's not a big deal, I can keep using my cleanup script.
     • I did a test that I knew was going to fail, since I made a manual snapshot between scheduled incremental sends, so the parent snapshot didn't exist on the target; the send failed but I'm not seeing the error in the syslog:
       Nov 30 11:00:01 Tower15 snapshots: Start snapping process /mnt/disk1/temp Slot:0
       Nov 30 11:00:01 Tower15 snapshots: btrfs subvolume snapshot -r '/mnt/disk1/temp' '/mnt/disk1/snaps/temp-202111301900'
       Nov 30 11:00:01 Tower15 snapshots: btrfs snapshot send -p /mnt/disk1/snaps/temp-20211130180418 /mnt/disk1/snaps/temp-202111301900 To /mnt/disk2/ 1 At snapshot temp-202111301900
       Nov 30 11:00:11 Tower15 snapshots: End snapping process /mnt/disk1/temp
     • Besides some of the things not working yet, I also miss the option to delete several snapshots together, like a check-mark where I could select multiple snapshots for deletion.
     That's all I can think of so far, keep up the good work.
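     A minimal sketch of the kind of space-based cleanup script mentioned above, assuming the snapshots are read-only subvolumes with sortable timestamp names; the paths and threshold are placeholders, not something from the plugin:
        #!/bin/bash
        # Delete the oldest snapshot when the pool drops below a free-space threshold
        POOL=/mnt/disk1            # pool mount point (placeholder)
        SNAP_DIR=/mnt/disk1/snaps  # where the snapshots live (placeholder)
        MIN_FREE_GB=100            # free-space threshold (placeholder)

        # free space on the pool, in whole GB
        free_gb=$(df -BG --output=avail "$POOL" | tail -1 | tr -dc '0-9')

        if [ "$free_gb" -lt "$MIN_FREE_GB" ]; then
            # names end in a timestamp, so a plain sort gives the oldest first
            oldest=$(ls "$SNAP_DIR" | sort | head -1)
            [ -n "$oldest" ] && btrfs subvolume delete "$SNAP_DIR/$oldest"
        fi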
  3. du isn't reliable with btrfs; the GUI will show the correct stats for that pool, same as the btrfs command: btrfs fi usage -T /mnt/systempool
  4. Depends on the filesystem used; the UD plugin lists the supported ones.
  5. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  6. Parity swap is only needed when there's a disabled disk (or two with dual parity) and you want to use the current parity to replace that data disk; is that the case? You just mentioned upgrading parity.
  7. Yes, first it will balance the partially filled chunks; after that's done it should start moving the data.
  8. The device is not part of the pool, despite what the GUI shows. Try this: stop the array, unassign cache3, start the array, stop the array, re-assign cache3, start the array, then post new diags.
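     If you want to confirm from the console which devices btrfs currently sees in that pool, you can run the command below; the mount point is just an example:
        btrfs filesystem show /mnt/cache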
  9. The automatic parity check after an unclean shutdown is always non-correcting.
  10. This is usually the result of bad RAM; you're running Ryzen with overclocked RAM, and that has been known to corrupt data even when the RAM itself is OK.
  11. You can't do a direct replacement to smaller devices. You can do that; when done, the cache reset procedure is the same as the one posted in the link above. P.S.: the metadata profile is DUP, you should change it to RAID1 (see the example below).
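      For reference, converting the metadata profile from DUP to RAID1 is done with a balance; the mount point below is just an example:
         btrfs balance start -mconvert=raid1 /mnt/cache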
  12. The disk looks OK, most likely a power/connection problem; replace the cables to rule those out and rebuild on top.
  13. https://forums.unraid.net/topic/115767-solved-unraid-v692-upgrading-cache-pool-drive/?do=findComment&comment=1052576
  14. Then it looks like an xfs_repair issue, and if that's the case there's not much we can do; please post new diags after array start (in normal mode).
  15. Pool device replacement is broken on v6.9+. If you're comfortable using the console you can do it manually (see the example below).
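      A minimal sketch of a manual replacement from the console, assuming the new device is already installed and the pool is mounted; the device names and mount point are placeholders:
         btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache   # old device, new device, pool mount point
         btrfs replace status /mnt/cache                         # check progress until it shows finished
         # only needed if the new device is larger, to use the extra space
         # (the devid here is a placeholder, check it with btrfs filesystem show):
         btrfs filesystem resize 1:max /mnt/cache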
  16. You need forward breakout cables, sometimes called SFF-8087 host to SATA target.
  17. If I understood correctly you need 3 extra ports; if so, I would go with an Asmedia 1164.
  18. It's normal with WD SMR disks that haven't been used yet: the reads come from the controller, and since it knows the disk is all zeros it doesn't actually read it, so a pre-read is also pointless; the post-read will be normal.
  19. These errors suggest device sdc dropped offline in the past; you should run a correcting scrub and make sure all errors were corrected (see the commands below), also see here for better pool monitoring.
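      For reference, the scrub and the per-device error counters can be run and checked from the console; the mount point is just an example:
         btrfs scrub start /mnt/cache     # runs in the background
         btrfs scrub status /mnt/cache    # check until it shows finished and no uncorrectable errors
         btrfs device stats /mnt/cache    # per-device error counters (reset with -z once everything is fixed)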
  20. That can help, and they should be off if you don't need them; it's this issue:
  21. To expand: there would be times when writes would go to both SSDs, and others where they would go to one SSD and the HDD; it would alternate with every new chunk, and when the HDD is used writes would always be limited by the HDD speed.