Vr2Io

Everything posted by Vr2Io

  1. Yes, I think both may have the same function, so I didn't try it; I don't want the system to hang again.
  2. Do you think NOHALT in the Syslinux boot loader would help? http://www.syslinux.org/wiki/index.php?title=Config#NOHALT I haven't tried NOHALT, but I can confirm that globally disabling C-states is still needed when the system is under light load. I have no idea how much power that saves, but I think only a little. BTW, I'm running an R7 1700 with the stock cooler, and I configured one of my case fans to stop when the CPU is below 45°C (the CPU fan runs in a silent profile at ~1200 rpm). I found the case fan is almost always stopped; I set it up this way so I can easily read the CPU temperature status. From the Syslinux wiki: "NOHALT flag_val: If flag_val is 1, do not halt the processor while idle. Halting the processor while idle significantly reduces the power consumption, but can cause poor responsiveness to the serial console, especially when using scripts to drive the serial console, as opposed to human interaction."
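     If you want to try it, here is a minimal sketch of where the directive would sit, assuming the stock Unraid /boot/syslinux/syslinux.cfg layout; note the wiki text implies NOHALT governs Syslinux itself while it waits at the boot prompt, not the booted kernel:

        default menu.c32
        menu title Lime Technology, Inc.
        prompt 0
        # do not halt the CPU while Syslinux idles at the boot menu
        NOHALT 1
        timeout 50
        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot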
  3. A parity check should be an easy way to check, and you can stop it at any time. If you don't like that, you may try accessing a file over any kind of network share to see whether the speed is normal or not.
  4. Does the slowness only happen during writes? How about read speed, i.e. file reads, array checks? (A quick local read test is sketched below.)
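     If you want to measure raw read speed directly, here is a minimal sketch using dd on the unRAID console; the file path is hypothetical, and status=progress assumes GNU dd:

        # optional: drop the page cache first so the test hits the disks, not RAM
        sync; echo 3 > /proc/sys/vm/drop_caches
        # sequentially read a large file and discard the output; shows live throughput
        dd if=/mnt/disk1/some_large_file of=/dev/null bs=1M status=progress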
  5. How could a 1500 VA / 900 W UPS suitably protect dual 750 W PSUs? Two 750 W supplies could in principle draw up to 1500 W combined, well over the UPS's 900 W rating. I use a sine-wave APC UPS; it is rock solid. As for the fans cycling wildly when mains power dies, that is normal, since the heat generated on battery is serious. I also have another 550 VA / 390 W APC Back-UPS; it doesn't have any fan, so the firmware limits it to working ~30 minutes max even if the battery has enough power left; this is a safety design. If you are in China now, I suggest buying a cheap second-hand APC sine-wave UPS on TAOBAO.
  6. I don't have any experience with SAS backplanes, but I think using one with SATA HDDs should be OK; just the LEDs / management features may not work. For the M1115 (LSI 2008 chip), you may need to crossflash it to LSI firmware (a rough outline is sketched below). I use those 2008 / 2308 cards without any extra fan for cooling. Anyway, I think you could try booting into unRAID; no need to make any change to any hard disk, and the GUI (don't assign / mount any disk) will let you know the status / result.
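     For reference, the commonly posted crossflash outline, run from a DOS boot stick; this is a sketch only, the firmware file names (sbrempty.bin, 2118it.bin, mptsas2.rom) come from the usual M1015/M1115 guides and may differ in yours, and an interrupted flash can brick the card:

        REM wipe the IBM SBR and the flash region first (megarec tool)
        megarec -writesbr 0 sbrempty.bin
        megarec -cleanflash 0
        REM reboot, then flash the LSI 9211-8i IT firmware and, optionally, the boot ROM
        sas2flsh -o -f 2118it.bin -b mptsas2.rom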
  7. Thanks, I am using the same MB too; keeping this for reference.
  8. From the apcupsd manual: "apcsmart: The 'apcsmart' protocol uses an RS232 serial connection to pass commands back and forth in a primitive language resembling modem-control codes. APC calls this language 'UPS-Link'. Originally introduced for Smart-UPS models (thus the name 'apcsmart'), this class of UPS is in decline, rapidly being replaced in APC's product line by USB and MODBUS UPSes." http://www.apcupsd.org/manual/manual.html#supported-upses-and-cables I don't think "APC Smart-UPS" means every status item will be the same or even present; mine shows a "Load%" item but no "Load" item, and it's the same under "PowerChute Business". In unRAID, most of the items do show up. I also have two Back-UPS units, and they each show a different set of items.
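     If you want to see exactly which items your unit reports, apcupsd ships the apcaccess tool; for example, on the unRAID console:

        # dump every status item the UPS actually reports
        apcaccess status
        # e.g. filter for just the load percentage
        apcaccess status | grep LOADPCT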
  9. For me, every system must first prove stable under Windows before it goes into unRAID; Windows has many tools for checking.
  10. I don't think this is an HBA / ports issue. If SMART looks good, I suggest: stop the array -> "New Config" -> retain the data disk assignments, then re-assign the parity drive (keeping parity marked as valid) and try again. If everything looks alright, then run a full sync once.
  11. Maybe. I suggest the OP check whether "Interrupt 19 capture" is enabled in the motherboard BIOS, and whether the add-on card's BIOS screen shows up or not. Usually I disable it; this can reduce the boot time.
  12. I have tried an ASM1061 before; it was just plug-and-work. Not sure whether it's required or not, but I enabled the motherboard's AHCI function in the BIOS even with no SATA drives connected to the motherboard. The ASM1061 works stable and cool.
  13. One important step I suggest: sync the array once before the change, because you don't know whether parity is valid or not. That's what I do.
  14. Yes, the better one is:
      - Preclear the new 8TB disks (already done)
      - Stop the array
      - Shut down the server
      - Swap the current parity disks with the new ones
      - Boot up the server
      - Re-assign the parity slots to the new drives
      - Start the array in maintenance mode and just let it rebuild
  15. If you want the array to stay valid on the old parity drives (6TB x2), you should run in maintenance mode and rebuild both new drives at the same time. The drawback is that the whole file system will be unavailable during the rebuild, but you keep two-drive failure protection.
  16. Another question: what makes you need bonding for 10Gb? I've found that unRAID itself never delivers extreme performance anyway.
  17. I have a Ubiquiti US-16-XG and connect over SFP+ fiber without problems. BTW, my fiber NIC is single-port, so I haven't tried bonding. As we know, those switches have issues with 10GBASE-T, so I don't think it is an unRAID issue.
  18. "Set the drive to FFD"? I think you mean FDD (emulating USB-FDD). I don't think emulating USB-FDD is correct. In general, I always leave it at the default AUTO and never touch this setting.
  19. For me, I built a Ryzen system. It also has 8 onboard SATA ports and 1 NVMe slot; the re-used parts are an LSI 2308, a 10Gb SFP+ network card, etc. It is the most cost-effective way, and I got everything I wanted. (You won't find an AIO on the market for this setup.) For a new build with no parts to reuse in future, an AIO will be fine. I don't have an LSI 3008, but unRAID has no problem with the 2008 / 2308.
  20. Just joined the Ryzen party with an R7 1700 and an Asus Prime X370-Pro. The build and testing are already complete. Thanks @Pauven for the C-state tweak; this saved a lot of troubleshooting time.
      - Whole-system power draw has increased a bit, but I checked and it's not a big deal. (I have a UPS, a socket meter, and a clamp meter for quick measurements.)
      - The RGB LED mode config is retained by the motherboard; it isn't lost on power-off. (Set it up under Windows first.)
      - Quite disappointed with the storage throughput; it tops out at ~700MB/s (LSI 2308, 6 data + 2 parity disks) with CPU usage at ~20%, so at the moment it looks like a SW bottleneck rather than a HW issue.
  21. It seems you are not running apcupsd on the RPi. Your config in unRAID is correct; I have no problem connecting both a PC and unRAID to an RPi apcupsd server. (A sketch of both sides of that config is below.)
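     For reference, a minimal sketch of that setup; the IP address here is hypothetical, and 3551 is apcupsd's default NIS port. On the RPi (server) side, in /etc/apcupsd/apcupsd.conf:

        # expose the Network Information Server so clients can poll this UPS
        NETSERVER on
        NISIP 0.0.0.0
        NISPORT 3551

     And on each client (PC / unRAID), point apcupsd at the RPi instead of local hardware:

        # poll the networked apcupsd instance on the RPi
        UPSCABLE ether
        UPSTYPE net
        DEVICE 192.168.1.50:3551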