pwm

Members
  • Posts: 1682
  • Joined
  • Last visited
  • Days Won: 10

Everything posted by pwm

  1. Is the machine overclocked? And do you get the same events if you reboot, i.e. is it just a single transfer failure when loading the microcode or a persistent error?
  2. In that case you have to keep replacing memory modules, as long as you are sure the parity errors aren't caused by overclocking, overtemperature or unstable supply voltages. When run within specification, you should not see these errors. It might be acceptable to see maybe one correctable ECC error per year, knowing that the same specific address needs two or more bit errors before incorrect data is actually read. Only an unhealthy system produces this many ECC errors.
  3. I like your ordering. Order matters a lot, since lots of users refuse to read help texts. So ordering the options as a "temperature scale" means a user who takes a guess and ends up off by one will at least be close to their intended goal.
  4. But are any of the errors from after you pulled one memory module? 556 03/10/2018-15:40:20 Memory, Mmry ECC Sensor (#0x2) Warning event: Mmry ECC Sensor reports correctable error. There has been a correctable ECC or other correctable memory error for the memory module RANK_0, CPU_1, Channel = A, DIMM_1. BIOS SMI Handler - LUN#0 (Channel#0) The above is likely to be from before the 24-hour memory test.
  5. "Never" and "Temporary" is definitely better than "No" and "Yes".
  6. The only time software produces ECC/CRC errors is when overclocking is involved. And since there is no overclocking of the SATA transfers, there is no way unRAID can be the cause. I'm pretty sure you have had these transfer errors for a while, but with 6.4.1 you got informed about them.
  7. Without TRIM, unused sectors keep their previous contents after files have been erased. After TRIM, they do not. So I'm not sure any SSD can safely handle TRIM in a parity-protected array unless the system knows which sectors have been trimmed and recomputes the parity for those sectors (see the parity sketch after this list). The safe way is to use an SSD that has overprovisioning and does not need TRIM to maintain good write speeds. This functionality is normal for enterprise-grade SSDs. When used outside of parity-protected arrays, they can still make good use of TRIM to reduce wear - TRIM tells them which unused parts of a flash block don't need to be copied when the drive rotates in fresh blocks from the overprovisioning pool.
  8. The number of parity drives is all about how many drives can fail without any data loss.
  9. Consider mounting the unRAID disks read-only - a read-write mount of a data disk will have the OS perform a number of writes to the disk, and those writes will invalidate the parity.
  10. I'm not sure any forum in existence gets better search results than Google. Which is why this forum's search button should create a Google search scoped to this site instead of wasting people's time with failed searches.
  11. A larger capacity supply may have a much larger initial cold current draw - the DC/DC converters normally have special logic to reduce the inrush current by implementing a slow-start function. And it really isn't much of an additional cost, because this logic is part of the DC/DC controller chip. It just doesn't start with full PWM modulation from a cold start but instead ramps up the output voltage. Another thing here is that PFC isn't just about compensating phase. The main point of PFC in a computer PSU is to make the consumed current sine-shaped instead of spike-shaped. With an old PSU without PFC, the power supply will only draw current when the voltage reaches the maxima - it's only then that the rectified input voltage is above the voltage of the input capacitors and able to top up the capacitors with more energy. I measured an older Chieftec PSU and it had a crest factor of 10, compared to 1.41 for a pure sine wave (see the crest factor sketch after this list). And this is why the EU has laws that only small power consumers may be without PFC - the current spikes produce a huge amount of noise in the local cables but also flatten the top of the voltage curve, introducing large amounts of overtones on the whole power grid. Look at the first image in this PDF for an example of what an older PSU without PFC looks like. http://www.programmablepower.com/support/FAQs/DF_Crest_Factor.pdf
  12. I would think people posting +1 or using the like button really are voting. It's a clear indication to LT that multiple users like a suggested feature/change. It isn't as if there are ten thousand users ready to vote, such that you'd need machine-processed counters to keep track of the voting process.
  13. He meant that when switching from one processor to a different processor, he had to readjust the core allocations for the already existing VM, since those allocations had been optimized for the older processor.
  14. It has the read-only bit, so it isn't 100% without permission support.
  15. Quite a lot of support threads get started in the General Support forum even when they relate to a specific plugin or Docker. So a question is how to better inform people where and how to get support. Maybe the web interface should get large "Support" buttons directly in the list of installed plugins, just as there are "Update" buttons.
  16. Cached RAM just means that the OS keeps recently used file data in memory in case some program requests the same data again. The OS always strives to use as much of the available (but unused) RAM as cache, so as not to waste excess RAM in the machine. So adding 4 GB more RAM will not increase the free RAM - it will just increase the amount of RAM used for caching file data. Your 600 MB of free RAM is just the result of the actual memory consumption jumping up and down a bit as different processes run. And when memory is released it ends up as free - until some other program needs it or until the OS repurposes it as cache. If a program needs more than your 600 MB of memory, the OS will instantly throw away some cached data and hand the memory over to the program (see the meminfo sketch after this list). So you aren't lacking RAM in the system.
  17. It's not the rating of the PSU, but the inner design of the PSU, that affects how much inrush current the PSU might draw when powering on. And really big PSUs (if of good quality) normally have better slow-start functionality than small ones. A high inrush current is bad for the components in the PSU and bad for the components in the disks and motherboard. So a 1000 W PSU doesn't mean the UPS needs to be able to supply 1200-1500 W. What matters way more in selecting the UPS is the startup power needed by the equipment connected to the PSU. The next thing is that a low-end UPS often cuts the output directly on overload, while a high-end UPS will continue to feed power but will current-limit the output to help slow-start the connected equipment.
  18. A number of people have glued a cap over buttons (most often the power button) they want to child-proof. Add a tiny hole in the cap that lets you press the button with a pin, but keep the hole too small for a child's fingers.
  19. When you "give" disks to a disk array, it's the array that owns the disks. Never play with such a disk outside the control of the array management system unless you really, really know what you are doing - it's only safe to do a read-only mount of an array disk outside of the array. A normal mount will introduce writes to the disk - there are multiple reasons, but one important one is to leave a marker on the disk so an incorrect power-down can be detected, indicating that the next time the disk is mounted the file system layer has to look at pending writes in the journal on the disk and either replay them or roll them back. So taking an unRAID array disk and mounting it in normal read/write mode outside of the array means the write commands issued when mounting the disk will create parity mismatches. Which means the system can't do a clean disk rebuild if a drive fails. And the writes introduced when mounting one file system can result in very damaging rebuild errors for file systems on other disks.
  20. Or maybe better formulated: "after any two drives have failed, the loss of another drive will cause data loss." Yes, the number of drives affects the probability of having a drive failure. But two parity drives greatly reduce the probability of data loss from fatal disk failures (the probability of data loss from user errors stays the same). Going from one to two parity drives is quite a big improvement. Going from two to three is an extremely small additional improvement in availability/survivability (see the probability sketch after this list).
  21. I have a huge PSU tester at home - designed to handle over 800 W of load so it can stress-test PSUs with known loads and measure efficiency and the difference between true and apparent power. But it probably weighs 10 kg, compared to the 50 gram testers that just check the voltages.
  22. The parity check itself doesn't normally load the CPU very much - but the unRAID system regularly does other things too, and the actual CPU load is bursty on top of the constant power consumption of the disks during the parity check. The intention with loading both CPU and disks at the same time is to push the PSU by keeping a constantly high load on it. First off - if the PSU gives unstable voltages, then you have a much higher probability of hanging the machine when you push the PSU. More so if you push it for a longer time, making the PSU hotter. But load tests involving read operations on the disks are best done with the array stopped, so that any system crash doesn't require a parity scan.
  23. No - a normal PSU tester just validates the voltages. You don't get any loading of the PSU, so you never run it hot.
  24. Remember that older SATA cables weren't designed for 6 Gbps transfer rates. The first generation of cables was intended for at most 1.5 Gbps and the second generation for 3 Gbps.
  25. There are function calls that can be used to pre-allocate disk space when copying data (fallocate() or posix_fallocate()). Great for reserving the required space up front, so a file copy doesn't fail because some other disk writes consume part of the free space mid-copy. Also great for reducing disk fragmentation, since two concurrent file transfers will not interleave their data streams on the disk. It's just that most programs do not make use of pre-allocation, in which case neither the copying program nor the file system layer knows whether the file will fit until the disk suddenly runs out of space or the program finally closes the file handle. The disadvantage of pre-allocation is that the user will see a file of full size even if only a small fraction of the file has actually been copied (a pre-allocation sketch follows after this list).
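
A minimal Python sketch of the parity point in post 7 (this is not unRAID's code, and the block contents are invented): single parity is the XOR of the data sectors, so if an SSD starts returning zeros for a trimmed sector without the parity being recomputed, a later rebuild of some other disk produces garbage.

```python
# Sketch: why trimmed sectors invalidate XOR parity if parity isn't updated.

def xor_blocks(*blocks):
    """XOR equally sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data "sectors" and the parity computed over them.
d1 = bytes([0xAA] * 8)
d2 = bytes([0x5B] * 8)
d3 = bytes([0x17] * 8)          # deleted file data, still physically present
parity = xor_blocks(d1, d2, d3)

# Rebuilding d1 from parity and the other data sectors works:
assert xor_blocks(parity, d2, d3) == d1

# Now the SSD holding d3 gets a TRIM for that sector and starts returning
# zeros, but the parity was never recomputed:
d3_after_trim = bytes(8)
rebuilt_d1 = xor_blocks(parity, d2, d3_after_trim)
print(rebuilt_d1 == d1)         # False: the rebuilt sector is garbage
```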
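
A small sketch of the crest factor mentioned in post 11, using the textbook definition peak divided by RMS; the spiky waveform below is invented purely to illustrate the shape of a non-PFC current draw, it is not a measurement.

```python
import math

def crest_factor(samples):
    """Crest factor = peak magnitude / RMS of the waveform."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

n = 10000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]

# Crude stand-in for a non-PFC PSU: current only flows in short spikes
# around the voltage maxima (threshold chosen only for illustration).
spiky = [s if abs(s) > 0.995 else 0.0 for s in sine]

print(round(crest_factor(sine), 2))   # ~1.41 for a pure sine wave
print(round(crest_factor(spiky), 1))  # several times higher for the spiky draw
```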
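
A minimal sketch of the caching point in post 16, assuming a Linux host with a reasonably recent kernel: /proc/meminfo reports both MemFree and MemAvailable, and the gap between them is largely cache that the kernel reclaims on demand.

```python
# Sketch: low "free" RAM is mostly cache the kernel will hand back on demand.

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # values are reported in kB
    return info

mem = read_meminfo()
# MemAvailable requires kernel 3.14 or newer.
for key in ("MemTotal", "MemFree", "Cached", "MemAvailable"):
    print(f"{key:>12}: {mem[key] / 1024 / 1024:6.1f} GiB")
```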
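
A back-of-envelope sketch for post 20, with invented numbers (12 drives, a 3% per-drive failure probability, independent failures): each extra parity drive removes a much smaller absolute slice of risk than the previous one, which is why one-to-two is a big step and two-to-three adds comparatively little.

```python
# Sketch with hypothetical numbers: probability that more disks fail than
# the parity drives can cover, assuming independent failures.
from math import comb

def p_data_loss(n_drives, n_parity, p_fail):
    """P(more than n_parity failures among n_drives)."""
    return sum(comb(n_drives, k) * p_fail**k * (1 - p_fail)**(n_drives - k)
               for k in range(n_parity + 1, n_drives + 1))

drives = 12   # hypothetical array size, parity drives included
p = 0.03      # hypothetical per-drive failure probability

# The absolute reduction in risk shrinks with every added parity drive.
for parity in (1, 2, 3):
    print(f"{parity} parity drive(s): P(data loss) = {p_data_loss(drives, parity, p):.2e}")
```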
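
A minimal sketch of the pre-allocation idea in post 25, assuming Linux and Python 3, where os.posix_fallocate() wraps posix_fallocate(): reserve the full destination size before copying, so the copy fails up front instead of halfway through when free space runs out.

```python
# Sketch: copy a file with the destination space reserved up front.
import os
import shutil
import sys

def copy_with_preallocation(src, dst, chunk_size=1 << 20):
    """Copy src to dst, reserving the full destination size before writing."""
    size = os.stat(src).st_size
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        if size:
            # Reserve all blocks now; raises immediately if they won't fit.
            os.posix_fallocate(fdst.fileno(), 0, size)
        shutil.copyfileobj(fsrc, fdst, chunk_size)

if __name__ == "__main__":
    copy_with_preallocation(sys.argv[1], sys.argv[2])
```

As noted in the post, the destination shows its full size as soon as the space is reserved, even though only a fraction of the data has been copied.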