gfjardim

Community Developer
Everything posted by gfjardim

  1. You're welcome, thanks for reporting.
  2. Preclear Plugin usually tries to update your CSRF token if it's invalid. I've just released a new version that changes this behavior and shows an alert instead. Please update it to 2020.03.14a and see if it helps.
  3. Did that resolve your problem?
  4. That's what the syslog is telling us.
  5. Your login session is no longer valid; just close your web browser and log in to the webGUI again.
  6. Please send me your diagnostics file.
  7. Unraid itself writes very little to the flash drive. You will see writes mainly during settings changes, OS updates and plugin updates. Plugins are a whole different matter. Many plugins use the flash drive only to store their installation files and their configuration, but some store status as well, like the Preclear Plugin, which stores reports and resume status. The resume status is written every minute and is less than 1KB in size, so every instance of preclear will write at most 1.44MB every 24 hours, which isn't much (see the quick calculation after this list). It also writes two 4KB reports at the end; again, not much. NAND wear isn't the only thing that kills flash drives. Some controllers have a low MTBF and will render your flash drive useless even if the memory isn't worn out. I do recommend Sandisk flash drives, since they all have wear leveling, but it's good to make a flash backup now and then.
  8. I've looked at the code and I see no obvious way this could happen. It filters the reports in /boot/preclear_reports using the disk's short serial number, and the value you see in the link is the file base name without the extension (a rough sketch of that kind of filtering appears after this list). I've tried and couldn't replicate it either...
  9. @wepee, please close all your browser windows and tabs and try it again. Let's try to replicate it.
  10. If you still have this problem, you can try changing the SHUTDOWNCMD argument in upsmon.conf to "/sbin/init 0" (see the example line after this list).
  11. Please send me your diagnostics file.
  12. Glad it helped both ways! Pre-read will abort if there are any read errors, including those generated by pending sectors. The problem here is documentation, IMO, because the pending sector will show in the log but the user doesn't have any indication of what to do (a quick SMART check is sketched after this list). Maybe more Q/A entries in the OP would help. Thanks for pointing that out.
  13. There are no incompatibilities between the script and your hard drives; either the data stream became corrupted before it was written, or it became corrupted after it was read, or your system produces zero-filled data streams that aren't composed only of zeros. Many things can corrupt data: bad memory, a bad PSU, bad cables, a bad HBA. The script relies on applications widely used around the world, like dd for reading and cmp for comparison (a sketch of that kind of check appears after this list). And no incompatibility can lead to hard drives being dropped by the HBA; only a driver or a hardware problem can drop drives like that. I'm not defending the script for its own sake, I'm alerting you that your system may have more serious issues than it appears to have.
  14. You have to go to Tools > New Config, set Preserve current assignments to None, check Yes I want to do this and then click Apply.
  15. So you will preclear an empty drive and then add it to the array? Then empty the contents of another drive onto the array, preclear that one and add it too?
  16. Have you already created your parity?
  17. Array drives aren't supposed to be removed like that; either you replace the drive with another of the same or larger capacity, or you have to invalidate the parity to do it. What do you want to accomplish?
  18. For that many disks failing preclear, my guess is indeed memory errors, but he got disks dropping out too, like "/dev/sde" and "/dev/sdg".
  19. After removing the drive from the array, you have to start the array again to remove any references to it.
  20. Direct bypasses both the data cache and I/O queueing; nocache bypasses only the data cache. The performance can be dramatically different depending on the amount of I/O the device supports and how it manages its I/O requests without the kernel I/O queue:

      No caches, I/O queue, 512-byte read block:
      root@Servidor:~# echo 3 > /proc/sys/vm/drop_caches
      root@Servidor:~# dd if=/dev/sdh of=/dev/null count=1048576 iflag=nocache
      1048576+0 records in
      1048576+0 records out
      536870912 bytes (537 MB, 512 MiB) copied, 2.14517 s, 250 MB/s

      No caches, I/O queue, 1M read block:
      root@Servidor:~# echo 3 > /proc/sys/vm/drop_caches
      root@Servidor:~# dd if=/dev/sdh of=/dev/null bs=1M count=512 iflag=nocache
      512+0 records in
      512+0 records out
      536870912 bytes (537 MB, 512 MiB) copied, 1.98191 s, 271 MB/s

      No caches, no I/O queue, 512-byte read block:
      root@Servidor:~# echo 3 > /proc/sys/vm/drop_caches
      root@Servidor:~# dd if=/dev/sdh of=/dev/null count=1048576 iflag=direct
      1048576+0 records in
      1048576+0 records out
      536870912 bytes (537 MB, 512 MiB) copied, 217.961 s, 2.5 MB/s

      No caches, no I/O queue, 1M read block:
      root@Servidor:~# echo 3 > /proc/sys/vm/drop_caches
      root@Servidor:~# dd if=/dev/sdh of=/dev/null bs=1M count=512 iflag=direct
      512+0 records in
      512+0 records out
      536870912 bytes (537 MB, 512 MiB) copied, 2.12653 s, 252 MB/s
  21. I didn't take a screenshot, but the controller benchmark showed greater speed values when all disks were read together, probably because they had already been read one at a time. When I dropped the cache, it returned to normal behavior. I had better luck setting the iflag=nocache parameter rather than iflag=direct as before. I think it's worth trying.
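
A quick back-of-the-envelope check of the write volume mentioned in item 7 (a sketch, assuming one resume-status write of roughly 1KB per minute, as described there):

    # 1KB per minute, 60 minutes, 24 hours
    echo $((1 * 60 * 24))    # prints 1440, i.e. about 1.44MB per day per preclear instance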
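
A minimal sketch of the report filtering mentioned in item 8, for illustration only (this is not the plugin's actual code; the serial number and file layout are assumptions):

    # List preclear reports whose file name contains the disk's short serial number
    SERIAL="WD-ABC123"                       # hypothetical short serial number
    for f in /boot/preclear_reports/*"$SERIAL"*; do
        basename "$f" | sed 's/\.[^.]*$//'   # base name without the extension, as seen in the link
    done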
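
The upsmon.conf change mentioned in item 10 would look something like this (a sketch; the location of upsmon.conf depends on your NUT installation):

    # in upsmon.conf: command upsmon runs when a shutdown is needed
    SHUTDOWNCMD "/sbin/init 0"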
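
One way to check for the pending sectors mentioned in item 12 is to read the SMART attributes directly (a sketch; /dev/sdX is a placeholder for your disk):

    # Attribute 197 (Current_Pending_Sector) should have a raw value of 0 on a healthy disk
    smartctl -A /dev/sdX | grep -i pending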
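
A rough sketch of the kind of zero check involved in item 13 (my own illustration, not the script itself; /dev/sdX is a placeholder and the 1MB size is arbitrary):

    # Read-only check: compare the first 1MB of an already-zeroed disk against zeros
    cmp -n 1048576 /dev/sdX /dev/zero && echo "first 1MB is zeroed" || echo "mismatch found"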