Rudder2


Everything posted by Rudder2

  1. I can confirm this is the problem. Issuing "echo active > /sys/devices/system/cpu/intel_pstate/status" also fixed the parity check speed problem. I've also noticed my server idles at 10% instead of 35% after issuing the command. Guess this command goes into the GO file.
  2. A preliminary check makes me think this might have been the problem. I will report back Friday when my parity check starts. I just created a user script to issue "echo active > /sys/devices/system/cpu/intel_pstate/status" every Wednesday at 0500, 30 minutes after my automated reboot, to see how it works; I don't like messing with the GO file unless I have to. My CPU was locked at 1200 MHz instead of being able to go to 3000 MHz as needed. I noticed my gaming VM, web UI, and SMB shares were sluggish but didn't think much of it till I read that thread you posted. After running "echo active > /sys/devices/system/cpu/intel_pstate/status" from the SSH terminal, my web UI, gaming VM, and SMB shares are snappy again. I have the Tips and Tricks plugin installed and my only CPU governor options are Power Save, Performance, or Scheduled.
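For anyone wanting to persist the workaround from the two posts above, here is a minimal sketch of what such a scheduled user script (or GO file addition) could look like. The `ensure_active` helper is illustrative, not an Unraid-provided function; the sysfs path is the standard intel_pstate driver interface.

```shell
#!/bin/bash
# Illustrative helper: write "active" to an intel_pstate status file only
# when it isn't already active. On a live Unraid box the file is
# /sys/devices/system/cpu/intel_pstate/status.
ensure_active() {
    local status_file=$1
    [ -r "$status_file" ] || return 1          # driver interface not present
    [ "$(cat "$status_file")" = "active" ] && return 0
    echo active > "$status_file"
}
```

On the server you would call `ensure_active /sys/devices/system/cpu/intel_pstate/status` from the weekly user script or from /boot/config/go.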
  3. It's a problem with saving the file. I can copy and paste the screen print... Never mind... it's truncated.
  4. This is what I got:

     # lspci -vv > /boot/lscpi-vv.txt
     pcilib: sysfs_read_vpd: read failed: Input/output error
     pcilib: sysfs_read_vpd: read failed: Input/output error
     pcilib: sysfs_read_vpd: read failed: Input/output error
  5. Here's the diagnostics 10 hours into the parity check running on 6.9.1. On 6.8.3 the parity check would be complete in 4 more hours... It won't be... It's estimating 13 more hours to go, when the whole check used to take only 14 hours and 30 minutes. rudder2-server-diagnostics-20210326-0913.zip
  6. I can confirm that this worked for fixing the parity check after a clean shutdown. I will gather the diagnostic data Friday AM and upload it to see if we can figure out why my parity check is now ~50% of the speed after updating from 6.8.3 to 6.9.1.
  7. unRAID usually just works. When it doesn't, I know Linux, so I can usually fix it myself. I've very rarely had a problem since I started running unRAID in July of 2014, so I'm not well versed in getting help... Sorry about that... I'd say that's a testament to the quality LimeTech offers in unRAID as a server OS.
  8. Figured it happened during the update, so y'all would know what you did that could have caused it... The parity check will start Thursday evening if it doesn't start with the automated reboot like it has been. I will grab the diagnostics Friday AM so the parity check has been running 12 hours or so. I wouldn't have EVER downloaded diagnostics with a parity check running till you asked for it. It never would have crossed my mind, because I try not to do anything with my server other than the automated stuff, and I schedule the non-critical jobs to not run during a parity check or rebuild.
  9. I just tried this. Will see how it works when I have my automated restart Wednesday AM. So what about the Parity Check being 50% of the speed since the update?
  10. Ever since I upgraded from 6.8.3 to 6.9.1, if I shut down the server with /usr/local/sbin/powerdown or reboot it with /usr/local/sbin/powerdown -r, the system always performs a parity check on boot as if it didn't cleanly shut down. Also, my parity check averages 66 MB/s instead of the 116 MB/s it was on 6.8.3.
  11. Squid assisted me with this years ago... I do a weekly reboot... It's been working flawlessly for years now!
  12. I too want to upvote this. I have a SuperMicro server with SAS drives in it. Auto spin-down and spin-up would be nice.
  13. Been using unRAID since the summer of 2014. It's the best server solution I've used. Here's to many more years! Keep up the good work!
  14. +1. I know the throughput would be slower, and there are lots of other arguments against using WiFi for a server, but for some the benefit of placing the server where Ethernet is not practical or not possible would be nice. My little brother hasn't set up his unRAID server because of the WiFi problem. I know there are ways around it, WiFi-to-Ethernet adapters and such, but the ability to just put WiFi in the case would be nice.
  15. Please implement the ability for unRAID to spin down SAS drives automatically when idle. I understand there is an argument that spinning down lowers drive life, but there is already a feature to disable spin-down for those who don't want it. I like the drives spinning down to save power. Also, the parity drives are spun down most of their life on my unRAID system, as my cache drive is huge and only syncs at 0300 or when it hits 70% usage. I've searched the forums and see others who want this feature, but couldn't find it as a formal feature request. It's already a feature of FreeNAS. I'm not going to switch from unRAID; you guys rock. I'd just like it on the road map to become a feature one day, preferably sooner rather than later. I do understand you're busy, and I'm not a programmer, so I have no idea what I'm asking you to do. Thank you for such a wonderful server product!
  16. Thank you for the information. I have SATA enterprise drives from Seagate and they came with it enabled by default. I picked up these SAS Seagate drives and it was disabled. Thank you for the correction; it makes sense. I have a UPS, so no issues. I've never had a problem even with power outages on my computers over many years, so it sounds like the risk is small, but I can understand that in an enterprise environment even a small risk is too much.
  17. You are the man! This was my problem. Man, it would have saved me 24 hours if I could have found this. LOL. I wonder why unRAID is set to have the write cache off by default on SAS drives. It's a drive setting, not an unRAID setting. (Corrected by jomathanm below.) It's not a problem now that I know. Thank you so much for your help!
  18. I'm experiencing what I consider a slow parity disk rebuild: 50 MB/s. I replaced my parity 1 disk with a bigger disk, and I plan on replacing the parity 2 disk with a bigger one in the next month or two so I can start using bigger disks. I upgraded my computer recently to a Supermicro 24-bay system.

     Primary specs:
     CPU: 2x Intel Xeon E5-2690 V2 Deca (10) Core 3GHz
     RAM: 64GB DDR3 (4 x 16GB DDR3 PC3-10600R REG ECC)
     Storage controller: 24 ports via 3x LSI 9210-8i HBA controllers
     NIC: Integrated onboard 4x 1GB Ethernet

     Secondary specs:
     Chassis/case: Supermicro 4U with 24x 3.5" drive bays, CSE-846BA-R920B (upgraded to 920W-SQ quiet power supplies)
     Motherboard: X9DRi-LN4F+ Rev 1.20
     Backplane: BPN-SAS-846A 24-port 4U SAS 6Gbps direct-attached backplane, supports up to 24x 3.5-inch SAS2/SATA3 HDD/SSD
     PCI-E expansion slots: Full height, 4x x16 PCI-E 3.0, 1x x8 PCI-E 3.0, 1x x4 PCI-E 3.0 (in x8)
     Integrated IPMI 2.0 management
     24x 3.5" Supermicro caddies

     I don't see a reason for the parity rebuild to be only 50 MB/s. I'm including my diagnostics file. I did the: and verified that all my cards were running in PCIe x8 mode. My LSI cards are PCIe 2.0; I want to upgrade to 3.0 someday, but that's not high on the priority list. My BIOS says all the PCIe slots are in PCIe Gen3 x16 mode, with the exception of the x8 slot which is in x8 mode, but they are definitely over the required standard for the LSI cards. I had been reading and researching for 24 hours before posting here for an extra set of eyes. Thank you for your help in advance. rudder2-server-diagnostics-20190407-1333.tar.gz
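One way to sanity-check the negotiated link on lines like the LnkSta output that `lspci -vv` prints. The `link_status` helper and the sample line below are illustrative; on a live system you would feed it real output, e.g. from `lspci -d 1000: -vv` (1000 is the LSI/Broadcom PCI vendor ID). A PCIe 2.0 x8 HBA should negotiate Speed 5GT/s, Width x8.

```shell
# Illustrative: extract the negotiated speed and width from an lspci
# LnkSta line so it's easy to compare against what the card should get.
link_status() {
    grep -oE 'Speed [0-9.]+GT/s|Width x[0-9]+' <<< "$1" | paste -sd' ' -
}

# Sample of the kind of line lspci prints for an LSI 9210-8i:
link_status "LnkSta: Speed 5GT/s (ok), Width x8 (ok), TrErr- Train-"
# → Speed 5GT/s Width x8
```

A downtrained link (e.g. Width x4) would cap a three-HBA parity sweep well below the drives' combined throughput, which is why it's worth ruling out first.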
  19. Port 8112 is in use, so I have port 8112 mapped to 8114 through unRAID.
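A quick way to check whether a host port is already taken before choosing the host side of a mapping like that. This is a bash-only sketch (it relies on bash's /dev/tcp pseudo-device); the port numbers are just the ones from this post.

```shell
# Illustrative check: succeeds when something on localhost is already
# listening on the given TCP port. Runs entirely in bash, no extra tools.
port_in_use() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# 8112 was busy here, so the container's 8112 got mapped to host port 8114.
if port_in_use 8112; then
    echo "8112 is busy - map the container port to a free host port like 8114"
fi
```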
  20. I have my LAN network set to 192.168.2.0/24 and Network Type set to Bridge, not any of the others; I tried them just for shits and giggles and they give errors. My LAN uses 192.168.2.1 as the first IP and 192.168.2.254 as the last IP, AKA 192.168.2.1 with a subnet mask of 255.255.255.0.