Everything posted by JorgeB

  1. SSH into the server or use the console and type: mover stop
  2. Never seen it, and I'd guess it's not very common, but I did have a couple of situations where checksums were very valuable, and that's why I want them: once when there were read errors on a second disk during the rebuild of a failed disk (this was before dual parity), and another time when a disk red-balled during a disk-to-disk move. Checksums allowed me to quickly find the affected files and replace them from backups.
  3. A scrub can only fix checksum errors on a redundant btrfs filesystem, e.g. a raid1 cache pool.
  4. Yes, the file name will appear in the syslog; any time you want, you can also run a scrub to check that everything is OK.
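     For example, assuming the pool is mounted at /mnt/cache (the usual unRAID cache mount point), a scrub can be started and monitored from the console:
     btrfs scrub start /mnt/cache    # start a scrub in the background
     btrfs scrub status /mnt/cache   # check progress and any error counts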
  5. No, btrfs will error out if it finds a checksum error, i.e., you won't be able to copy or play that file; you just need to check the log, where you'll find the checksum error.
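     One rough way to look for btrfs checksum messages in the log (the exact message wording varies between kernel versions, so take this as a sketch):
     grep -i csum /var/log/syslog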
  6. Correct, since each data disk is a separate filesystem using the default profile, btrfs will detect a checksum error but won't be able to fix it; that is what backups are for. You could use the dup profile on one or more disks, so data would be duplicated and any checksum error would be fixable, but obviously you'd lose half the capacity on those disks.
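     As a sketch of how a disk could be converted to the dup data profile (the /mnt/disk1 mount point is just an example):
     btrfs balance start -dconvert=dup /mnt/disk1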
  7. Did you click the link? Decrease the amount of RAM used for cache and it should help with the OOM errors; it's mostly a kernel problem.
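     Not sure what the linked post recommends, but one common way to lower the RAM used for write caching on Linux is the vm dirty sysctls; the values below are purely illustrative:
     sysctl -w vm.dirty_ratio=10              # max % of RAM dirty pages can use before writers block
     sysctl -w vm.dirty_background_ratio=5    # % of RAM at which background writeback starts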
  8. Are you using a 10GbE switch or direct connect? If using a switch you need to change the MTU there also.
  9. These may help if not already in use:
     1 - Change the NIC MTU to 9000 (on the unRAID server and any other computer with a 10GbE NIC)
     2 - Go to Settings -> Global Share Settings -> Tunable (enable direct IO): set to Yes
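     For example, the MTU can be set from the console like this (assuming the interface is named eth0), and jumbo frames verified end to end with a non-fragmenting ping:
     ip link set eth0 mtu 9000
     ping -M do -s 8972 <server-ip>   # 8972 = 9000 minus 28 bytes of IP/ICMP headers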
  10. Disk has seen better days; if you're determined to use it, at least run it through a couple more preclear cycles.
  11. It *may* work with the latest unRAID, but IIRC I've never seen a post of someone using one. This one works for sure.
  12. Correct, but since most SSDs don't support that it won't work for most users.
  13. They both support trim. Are they on the Perc H310? If so, move them to the onboard controller; LSI SAS2008-based controllers don't support trim.
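     A quick way to check whether discard/trim is usable through the current controller (replace sdX with the actual device):
     lsblk --discard /dev/sdX   # non-zero DISC-GRAN and DISC-MAX mean the device supports discard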
  14. Curious myself; current Intel CPUs already use so little power at idle. Does undervolting also help at full load?
  15. You'd lose that bet, they're completely different disks; just some obvious differences between the ST8000DM004 and the ST8000VN0002:
      Maximum sustained data transfer rate: 190MB/s vs 230MB/s
      Number of disks/heads: 4/8 vs 6/12
      Idle power: 3.4W vs 7.6W
  16. 7200rpm they are not, for sure; I suspect they're around 5000rpm, as it's the only way the performance makes sense. They also use very little power, which is also consistent with lower rpm.
  17. These new disks are reportedly shingled as well. The platter count is in Seagate's specs; the rpm is not specified, but if they really are 2TB/platter, it has to be lower than 5400rpm for them to only have the same performance as the old 1.33TB/platter drives. http://www.seagate.com/www-content/product-content/barracuda-fam/barracuda-new/en-us/docs/100805918d.pdf
  18. You should; your filesystem was practically 100% allocated, 232.88 out of 232.89GiB, so only 0.01GiB was being trimmed.
  19. Typical cache usage, i.e., constantly filling up and emptying the cache, exacerbates the large slack issue. This is supposed to improve once we get to kernel 4.14, as there are some modifications to deal with it, but until then it's a good idea to monitor this and/or do a periodic balance, not only because of the trim issues but also because in extreme cases you can run into another issue: btrfs reporting the device full when it's not, because it's fully allocated and can't create any new chunks. If doing a periodic balance, a partial balance should be enough; it will recover most of the free allocated space, but it will be much faster and cause much less wear on the SSD, e.g.:
      btrfs balance start -dusage=75 /mnt/cache
      This will only re-allocate chunks that are at most 75% used.
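     As a sketch, the partial balance could be scheduled with a cron entry like this one (the monthly schedule and the /mnt/cache path are just examples):
     0 5 1 * * btrfs balance start -dusage=75 /mnt/cache   # 05:00 on the 1st of every month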
  20. I have a theory on why some users may be having this issue; if anyone wants to try it, please post whether there was an improvement. Currently fstrim on btrfs only trims the unallocated space. This is apparently a bug, but it's been like this for some time; for users with a large slack on the filesystem this will be a very small area of the SSD, leaving all unused but allocated space untrimmed, and this can lead to very poor performance. So first check for slack on the filesystem, i.e., the difference between the allocated and used space: on the main page click on the cache device and look at the "btrfs filesystem show" section, e.g.:
      Label: none uuid: cea535d2-33f9-4cf2-9ff0-0b51826d48a1
      Total devices 1 FS bytes used 265.61GiB
      devid 1 size 476.94GiB used 427.03GiB path /dev/nvme0n1p1
      In this case there's about 161GiB of slack: 476.94GiB is the total device size, 427.03GiB is allocated, but only 265.61GiB is in use. Since only unallocated space is trimmed, fstrim will only trim 49.9GiB (476.94 - 427.03), so most free space will remain untrimmed. To fix this, run a full balance to reclaim all allocated but unused space; on the console type:
      btrfs balance start --full-balance /mnt/cache
      This will take some time; in the end it should look like this:
      Label: none uuid: cea535d2-33f9-4cf2-9ff0-0b51826d48a1
      Total devices 1 FS bytes used 265.68GiB
      devid 1 size 476.94GiB used 266.03GiB path /dev/nvme0n1p1
      Now the slack space is less than 1GiB, so fstrim will work on practically all unused space. Trim your pool:
      fstrim -v /mnt/cache
      And check if performance improves.
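     Another way to see the allocated vs. used split at a glance, assuming the pool is mounted at /mnt/cache:
     btrfs filesystem usage /mnt/cache   # the gap between 'Device allocated' and 'Used' is the slack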
  21. Yeah, but it should be noticeably faster, not similar, and that, together with the WLR dropping from 180TB to 55TB/year, makes me think the older models are superior. Hell, if a user does one parity check a month, just that is enough to go well above the recommended 55TB/year: reading an 8TB drive in full once a month already adds up to 96TB/year.
  22. That's disappointing; I was expecting it to be significantly faster than the old one, since it has 4 platters instead of 6.