Everything posted by JorgeB

  1. One of them is likely generating these:

         Jul 24 14:55:59 FluxHubCentral kernel: ata4: softreset failed (1st FIS failed)
         Jul 24 14:55:59 FluxHubCentral kernel: ata4: limiting SATA link speed to 3.0 Gbps
         Jul 24 14:56:04 FluxHubCentral kernel: ata4: softreset failed (device not ready)
         Jul 24 14:56:04 FluxHubCentral kernel: ata4: reset failed, giving up
         Jul 24 14:56:14 FluxHubCentral kernel: ata4: softreset failed (1st FIS failed)
         ### [PREVIOUS LINE REPEATED 2 TIMES] ###
         Jul 24 14:56:59 FluxHubCentral kernel: ata4: limiting SATA link speed to 3.0 Gbps
         Jul 24 14:57:04 FluxHubCentral kernel: ata4: softreset failed (device not ready)
         Jul 24 14:57:04 FluxHubCentral kernel: ata4: reset failed, giving up
         Jul 24 14:57:14 FluxHubCentral kernel: ata4: softreset failed (1st FIS failed)
         ### [PREVIOUS LINE REPEATED 2 TIMES] ###
         Jul 24 14:57:59 FluxHubCentral kernel: ata4: limiting SATA link speed to 3.0 Gbps
         Jul 24 14:58:04 FluxHubCentral kernel: ata4: softreset failed (device not ready)
         Jul 24 14:58:04 FluxHubCentral kernel: ata4: reset failed, giving up
         Jul 24 14:58:14 FluxHubCentral kernel: ata4: softreset failed (1st FIS failed)

     This is a hardware issue, try different cables/ports.
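     A quick way to narrow it down from the console (a sketch: "ata4" comes from the log above, the sysfs path layout varies by kernel, and the syslog path assumes a stock Unraid setup):

         # count how often that port has failed a reset in the current log
         grep -c 'ata4: softreset failed' /var/log/syslog

         # map the ata4 port to a block device, so you know which disk/cable to check
         ls -l /sys/block | grep ata4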
  2. There's one: Toshiba P300 4TB and 6TB models are SMR.
  3. Yes, after the array starts you can change the page or close it.
  4. You can just restore super.dat (disk assignments) and your key file; that will get your array running. Then either restore a few files at a time until you find the culprit, or reconfigure the server.
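     A minimal sketch of that restore, assuming the flash backup was extracted to /mnt/user/backup/flash and the licence file is named Pro.key (both are placeholders, adjust to your backup location and licence type):

         # restore the disk assignments
         cp /mnt/user/backup/flash/config/super.dat /boot/config/

         # restore the licence key file (Basic.key / Plus.key / Pro.key, depending on the licence)
         cp /mnt/user/backup/flash/config/Pro.key /boot/config/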
  5. Oh yeah, don't do that, as parity will need to be updated for all disks at the same time, making it slower (and in the current release it also disables turbo write).
  6. For reference here's the raid1 with odd device bug created as a result of my report: https://github.com/kdave/btrfs-progs/issues/277
  7. That's about right for default writing mode, see turbo write.
  8. Thanks for giving it another shot, I appreciate that getting the btrfs stats to work correctly gets frustrating; hopefully once it's good it stays good for the future, including this configuration, which should start working correctly once it gets fixed in a future kernel release.
  9. Use this after restoring only the config folder from the flash backup.
  10. This will only work if parity was valid. Also make sure you follow the instructions carefully; any doubt, ask.
      - Assign all disks (including new disk3) and check all assignments, especially make sure parity is correctly assigned.
      - Important - after checking the assignments leave the browser on that page, the "Main" page.
      - Open an SSH session/use the console and type (don't copy/paste directly from the forum, as sometimes it can insert extra characters):

            mdcmd set invalidslot 3 29

      - Back on the GUI and without refreshing the page, just start the array; do not check the "parity is already valid" box (the GUI will still show that data on the parity disk(s) will be overwritten, this is normal as it doesn't account for the invalid slot command, but they won't be as long as the procedure was done correctly). Disk3 will start rebuilding; the disk should mount immediately, but if it's unmountable don't format, wait for the rebuild to finish and then run a filesystem check.
  11. With a flash backup; if you don't have one you need to use the invalid config command. I need more info for the instructions: the Unraid version, single or dual parity, and the disk# you want to rebuild.
  12. Not sure, you can post the diagnostics from the latest beta and we can check.
  13. Yep, note that I mentioned correct used and free space. Total size includes everything, including non-usable space for a raid1 pool with different-size devices and including parity for raid5/6; whether total size should include that is, I think, debatable, but IMHO the most important stats are used and free space, and those are always reported correctly by df (except free space for the above mentioned scenario, an odd number of devices raid1 pool).

      Empty 250 + 500GB raid1 pool:
      Unraid pre-beta25 - used is correct, free is wrong
      Unraid beta25 - used is wrong, free is correct
      df -hH - both used and free are correct:

          Filesystem      Size  Used Avail Use% Mounted on
          /dev/sdg1       376G  3.6M  249G   1% /mnt/cache

      stat -f:

          File: "/mnt/cache"
          ID: 4270a881f170f3d  Namelen: 255  Type: btrfs
          Block size: 4096  Fundamental block size: 4096
          Blocks: Total: 91573138  Free: 91572270  Available: 60779040
          Inodes: Total: 0  Free: 0

      Empty 5 x 500GB raid5 pool:
      Unraid pre-beta25 - used is correct, free is wrong
      Unraid beta25 - used is wrong, free is correct
      df -hH - both used and free are correct:

          Filesystem      Size  Used Avail Use% Mounted on
          /dev/sdd1       2.6T  3.8M  2.0T   1% /mnt/cache

      stat -f:

          File: "/mnt/cache"
          ID: 64ae90de57db7af6  Namelen: 255  Type: btrfs
          Block size: 4096  Fundamental block size: 4096
          Blocks: Total: 610483190  Free: 610482286  Available: 487844800
          Inodes: Total: 0  Free: 0
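      For anyone who wants to cross-check these numbers themselves, btrfs has its own report that breaks allocation down per profile and per device (a sketch; /mnt/cache is just the mount point from the examples above):

          # per-profile / per-device breakdown, including unallocatable space
          btrfs filesystem usage /mnt/cache

          # the raw figures df and stat -f read, in 1K blocks
          df -B1K /mnt/cache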
  14. Yes, but because there was a problem with that partition, chkdsk might fix it.
  15. That's from an unassigned NTFS device; note that until rebooting those errors can still appear even if the device was already disconnected.
  16. It is, but like the previously linked study found, some write amplification is unavoidable. As long as it's not ridiculous like before I'm fine with it, but anyone who doesn't need a pool or the other btrfs features might as well stick with xfs.
  17. Based on some quick earlier tests xfs would still write much less, I would estimate at least 5 times less in my case; still, I can live with 190GB instead of 30/40GB a day so I can have checksums and snapshots.
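      One way to measure that kind of daily write volume yourself (a sketch: nvme0n1 is a placeholder device name, and /proc/diskstats counts 512-byte sectors):

          # sectors written since boot (field 10 of /proc/diskstats), converted to GB
          awk '$3 == "nvme0n1" {printf "%.1f GB written\n", $10 * 512 / 1e9}' /proc/diskstats

          # run it again 24 hours later and take the difference to get GB/day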
  18. All SSDs, my cache is NVMe but earlier I tested on a regular SSD and the difference was similar, though it can vary with brand/model.
  19. Oops, misread it as 8125; not sure about this one, did you try beta25?
  20. They are already available in the latest beta (v6.9-beta25), and should be on any future ones, assuming they compile.
  21. SMART attributes look OK but not being able to complete a SMART test is a red flag.
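      If you want to retry from the console, something like this (a sketch; /dev/sdX is a placeholder for the disk in question, and an extended test can take several hours on a large disk):

          # start an extended (long) SMART self-test in the background
          smartctl -t long /dev/sdX

          # once it's had time to finish, check the self-test log and attributes
          smartctl -a /dev/sdX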
  22. Just WAN slow, or LAN transfers also? Did you run an iperf test?
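      A basic run in case it helps (a sketch using iperf3; 192.168.1.10 is a placeholder for the server's LAN IP):

          # on the server
          iperf3 -s

          # on the client, measure throughput to the server
          iperf3 -c 192.168.1.10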
  23. A new install won't permit rebuilding a failed drive (without going through the invalid slot procedure); you should use the old install and just do a standard disk replacement.