SMLLR

Members
  • Posts: 12
Everything posted by SMLLR

  1. Hello, I recently updated from 6.11.5 to 6.12.1 and have been experiencing some stability issues. The server has become unreachable twice in the last few days, forcing me to power it off hard each time. It loses all network connectivity and, since I am running it headless, that leaves me with no way to interact with it. I plan on connecting a monitor, but for some reason one has to be connected at power-on for it to work, so I have yet to do that.

     I did, however, enable mirroring the syslog to the flash drive. Unfortunately, this did not shed any light on the issue, as the last entry in the syslog file before the reboot is from a few hours prior to the failure. Looking at the syslog after the reboot, one thing did stand out to me a bit, though I am not sure it is related: I have attached the diagnostics that I pulled immediately after powering back on. Thanks gibson-diagnostics-20230626-1944.zip
  2. So this may not be the correct place for this, but since this is the thread for the Coral drivers, I thought I would give this a shot... Does anybody have an idea what I need to do to get the dual Edge TPU to even show up in the OS? I would expect to see one device listed in lspci, but unfortunately I see nothing. I have verified that the WiFi E-key slot is enabled in the BIOS. Any ideas on where to go from here would be helpful. The motherboard is an ASRock H570 Steel Legend with an Intel 10700K CPU (don't mind the K; it was part of a bundle and I had it lying around). Thanks
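     In case it helps anyone in the same spot, a first sanity check (a sketch, assuming the PCIe/M.2 Coral Edge TPU, which enumerates with vendor:device ID 1ac1:089a, "Global Unichip Corp.") is to grep raw lspci -nn output for that ID. No match means the card is invisible at the PCI level entirely, which points at the slot or BIOS rather than at drivers:

```shell
# Sketch, assuming the PCIe/M.2 Coral Edge TPU, which enumerates as
# vendor:device 1ac1:089a ("Global Unichip Corp."). If this filter
# matches nothing, the device is not enumerating at the PCI level at
# all (slot/BIOS problem), so drivers are not yet the issue.
find_edgetpu() {
  grep -i '1ac1:089a'
}

# On the server you would run:  lspci -nn | find_edgetpu
# Demo against a sample line of the kind lspci -nn prints:
printf '03:00.0 System peripheral [0880]: Global Unichip Corp. Device [1ac1:089a]\n' | find_edgetpu
```

     Note that on many boards an E-key slot only wires a single PCIe lane, so a dual Edge TPU may show up as one device even when everything is working.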
  3. I have an older HP G8 server that I am thinking about replacing with a newer build. The main use of the server is approximately 30 docker containers, mainly for home automation (Home Assistant, Node-RED, etc.), media (Plex, Sonarr, Radarr, etc.), and game servers (Valheim, Minecraft, etc.). I would like to eventually add a Coral at some point, but I am still debating between PCIe, M.2, or USB. My current build is very much overkill, mainly because I got the hardware for free from work:

     Model: HP ML350e G8
     CPU: 2x Xeon E5-2440 v2 @ 1.90GHz
     Memory: 192GB ECC DDR3

     The new build I am looking at is:

     Motherboard: ASRock H570 Steel Legend
     CPU: Intel Core i7-10700K (I have this on hand as it's currently unused from a bundle purchase)
     Memory: 2x G.Skill Ripjaws V Series 16GB

     I'm mainly looking for any glaring issues I might be missing, plus IOMMU grouping info. I don't really have plans to add a GPU for transcoding, as I'm planning to use Quick Sync for that. I'd also consider selling the CPU and buying a newer 11th-gen non-K CPU if that would make more sense. Thanks
  4. So I believe this may finally be resolved. I did make a large number of changes, but I believe a BIOS update and switching the server's power management to OS-controlled made the most impact. I am even using the default tunable settings, which I am now going to tweak in an effort to improve performance. I just find it odd that it was working perfectly fine until earlier this month...
  5. All drives show udma6 as the mode currently in use. The only difference between them is that the older 4TB drive does not have the "Advanced power management level" line.

     Capabilities:
             LBA, IORDY(can be disabled)
             Queue depth: 32
             Standby timer values: spec'd by Standard, no device specific minimum
             R/W multiple sector transfer: Max = 16  Current = 0
             Advanced power management level: 164
             DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
                  Cycle time: min=120ns recommended=120ns
             PIO: pio0 pio1 pio2 pio3 pio4
                  Cycle time: no flow control=120ns  IORDY flow control=120ns
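     For reference, that "Advanced power management level" line comes from `hdparm -I`, and per the hdparm man page levels 1-127 permit spin-down while 128-254 do not, so 164 allows power management without aggressive spin-down. A small sketch (assuming hdparm -I's output format) for pulling the level out of captured output:

```shell
# Sketch, assuming hdparm -I's "Advanced power management level: N" line.
# Per hdparm(8): APM 1-127 permits spin-down, 128-254 does not.
apm_level() {
  awk -F': ' '/Advanced power management level/ {print $2}'
}

# On a live system:  hdparm -I /dev/sdb | apm_level
# Demo against a captured line:
printf '\tAdvanced power management level: 164\n' | apm_level
```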
  6. Finished up the rebuild, which averaged 130MB/s; however, the parity check still ran at 10MB/s with the CPU pegged at ~80%. I had to reduce the tunable settings to about a quarter of their original values to get back to where I was before the most recent parity check. It just seems odd to me that the rebuild is so fast without changing any settings, yet the parity check runs so slowly.
  7. I took the opportunity to reinstall the OS after backing up the existing configuration. As of right now, my parity rebuild is running at around 120MB/s and is 25% complete (it was running upwards of 150MB/s at the beginning). The rebuild is being done with all four disks in place (the three previously existing and the one new one). I fully believe all hardware is working as expected right now, but I will not know whether a parity sync works as expected until the rebuild finishes tomorrow. If the parity sync works as expected, it may be worth digging into the configs to compare my old config with the near-stock config to see what may have caused the issues. I believe reinstalling the OS should return parity sync runtimes to normal, as the parity sync was working without issue until about two weeks ago. At this rate, the rebuild will probably complete around noon EST tomorrow. I will hopefully have a positive update at that time.
  8. I have an HP ML350e G8 and have started to experience parity check issues similar to what you were experiencing. I went through multiple parity check cycles with the default tunables and an HP H220 HBA without any issues, but started to see the same problems once I added an additional drive, and the issues persist even after removing the drive. I will attempt to rebuild the server from scratch when I get a chance, to rule out any configuration issues.
  9. Found the config and wiped it out. Rebooted to verify the settings actually stuck this time and re-ran a parity check with the same results. I have attached a new diagnostics report. prefect-diagnostics-20180808-1855.zip EDIT: It is worth mentioning that this is while in safe mode.
  10. Those settings were still set as the default when I first started experiencing this issue. I believe I started playing around with the value and even put a config in place to boot up with those options. I changed them in the disk settings before running the most recent parity check as shown below: I can kill that config and reboot the server to see if that helps at all.
  11. So I have been battling with this for a week now and am just about at my wits' end. I have had unRaid set up on my server for a few months now and everything had been mostly fine until the most recent parity check on Aug 1st. When I awoke that morning, the parity check was only 5% complete and had run for over 8 hours. Normally this check completes in about 20 to 24 hours with my 8TB drives. I have tried a lot of troubleshooting and have even gone so far as to roll everything back to a single 8TB storage drive and an 8TB parity drive, yet the issue persists. The issue was first observed while running the first parity check after adding an 8TB drive. I am not sure if I ran a parity check after upgrading to 6.5.3. Additionally, the issue also persists with the VM manager and docker services stopped, so zero disk activity outside of the parity check should be occurring.

      System specifications:
      HP ProLiant ML350e G8 v2
      2x Xeon E5-2440 v2 @ 1.90GHz (8 physical cores each, for a total of 32 threads)
      96GB ECC memory (12 x 8GB)
      HP H220 HBA card in IT mode (currently at firmware 15, but may be able to upgrade it to 20)

      A few things I have yet to try:
      Revert back to an older version of unRaid
      Restart the server in no-plugins mode

      I have uploaded diagnostics below, though the log is a bit small as I canceled the parity check after only a few minutes. prefect-diagnostics-20180808-1319.zip
  12. EDIT: This is apparently solved, for reasons unknown. I installed unRaid 6.3.5 and it booted without issue. I then upgraded to 6.4.1 and it still booted with no problem. Not sure what the issue was.

      Hello all, I picked up an ML350e G8 and am working on getting it set up, however I am having a few issues even getting started. This may just be a hardware compatibility issue, but I would like to confirm that before going out and purchasing more hardware. I was hoping the hardware had been around long enough that it would be supported. The issue is that I simply cannot get the network up on the server. I have tried setting up the USB both with DHCP and a static IP, however the br0 interface still comes up with a 169.254 address. Additionally, when booting into GUI mode, I cannot connect to the web interface when browsing to localhost or 127.0.0.1, despite being able to ping the IP address from the server. I tried to check the network configuration under /boot/config, however the only folder there is ssh. I have tried to build the USB stick on two different computers using two different USB sticks, so I'm not sure why I'm only seeing this one folder.

      I ran the diagnostics and found the following lines in lspci.txt:

      06:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
              Subsystem: Hewlett-Packard Company Ethernet 1Gb 2-port 361i Adapter [103c:337f]
      06:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
              Subsystem: Hewlett-Packard Company Ethernet 1Gb 2-port 361i Adapter [103c:337f]

      I'm not seeing a kernel driver in use for those two items, so I'm assuming the issue is unsupported NICs. Any input would be appreciated. Thanks
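      For anyone hitting the same symptom: whether a driver actually bound to a device can be checked with `lspci -k`, which prints a "Kernel driver in use:" line per device when one is attached (the Intel I350 normally binds to igb). A small sketch, assuming lspci -k's output format:

```shell
# Sketch, assuming lspci -k's "Kernel driver in use: <name>" line.
# If a NIC shows no such line, it enumerated on the bus but no driver
# attached, which matches the br0 169.254 (link-local) symptom above.
driver_in_use() {
  awk -F': ' '/Kernel driver in use/ {print $2}'
}

# On the server:  lspci -k -s 06:00.0 | driver_in_use
# Demo against captured output (an I350 normally binds to igb):
printf '06:00.0 Ethernet controller: Intel Corporation I350\n\tKernel driver in use: igb\n' | driver_in_use
```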