squirrelslikenuts

Everything posted by squirrelslikenuts

  1. To anyone else experiencing this problem... my issue was related to a TRENDnet 2.5GbE card that I installed. After the update, the network card driver was borked. You need to pull your USB boot drive, open "network.cfg", and replace all the MTU entries that are set to 9000 with 1500. Reboot the server and you will be OK.
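     If it helps anyone, a scripted version of that edit might look like the sketch below. The mount path is hypothetical and the exact KEY="VALUE" layout of network.cfg is an assumption, so check your own file (and keep a backup of the flash drive) first:

     ```python
     # Sketch: reset every MTU entry in Unraid's network.cfg from 9000 back to 1500.
     # Assumptions: the flash drive is mounted at the path below and MTU values are
     # stored as KEY="VALUE" pairs; adjust both if your layout differs.
     import re
     from pathlib import Path

     cfg = Path("/path/to/flash/config/network.cfg")  # hypothetical mount point

     text = cfg.read_text()
     fixed = re.sub(r'(MTU[^=\n]*=")9000(")', r'\g<1>1500\g<2>', text)

     if fixed != text:
         cfg.write_text(fixed)
         print("MTU entries reset to 1500")
     else:
         print("No MTU=9000 entries found")
     ```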
  2. I just pulled my USB stick out and edited all the MTUs back to 1500. I had an uptime of over 200 days on server-grade hardware (HP ProLiant), and I am using a cheapo TRENDnet 2.5GbE card. I pulled the card and the boot still hung at "triggering udev events". My 200-day uptime was running on a TRIAL (yes, I have a UPS and good power), and I just purchased the unlimited license for this machine (it will be my 3rd unRAID). After purchase I did the upgrade (I think I was on 6.12 and went to 6.12.4). Just booting now with the config file changed, will advise. Man, if this was the cause I'm gonna shit a brick. Edit: Boot now gets past "triggering udev events" but hangs at "device vhost3 doesn't exist". I'll reinstall the 2.5GbE card and see... Edit 2: Now there is a kernel panic: "not syncing: VFS: unable to mount root fs on unknown-block(0,0)". I will reinstall the GTX 1050 that I left out while trying to solve this problem. Edit 3: Yes, changing the MTU, reinstalling the TRENDnet 2.5GbE card, and reinstalling the GTX 1050 has allowed the system to boot. Fuck me sideways.
  3. I have the exact same problem. HP Xeon server running unRAID on trial with 200 days of uptime. Decided to finally buy it for this server (I have 2 other HP servers with unRAID) and did an upgrade. Upon reboot, the system is bricked at the same spot. How does one run diagnostics if the system doesn't boot?
  4. Mods, don't delete: this is EXACTLY what my problem was and I never would have figured it out. Thank you J05u!
  5. I'm not too good at interpreting log files, but I'll tell you what worked for me. After activating a VPN in Deluge, my WebUI wouldn't start. I banged my head against the wall all morning; I was ready to chop my dick off I was so frustrated. What ended up working for me was this: I had been using a static IP for the specific Docker container Deluge was running in, i.e., my unRAID server was at, say, 20.20.20.10 and the Deluge container pulled its own IP of 20.20.20.11 from my router, since I find it easier to split some containers away from my unRAID root IP. The fix: stop Deluge-VPN, edit the container, change the network type to Bridge, leave the VPN on, apply, and start the container. This is what worked for me. Good luck!
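     For reference only, the same network change could be made outside the unRAID GUI; this is a rough sketch using the Docker Python SDK, and the container name ("delugevpn") and custom network name ("br0") are assumptions, so substitute whatever yours are called:

     ```python
     # Sketch: move a container off a custom network (own static IP) onto the
     # default bridge network, then restart it. Names below are assumptions.
     import docker

     client = docker.from_env()
     container = client.containers.get("delugevpn")    # hypothetical container name

     client.networks.get("br0").disconnect(container)  # leave the custom/static-IP network
     client.networks.get("bridge").connect(container)  # join the default bridge
     container.restart()
     ```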
  6. Can you be a bit more specific about how you fixed it? I have the same problem: after activating the VPN I can't connect to the WebUI. Thanks!
  7. Array disk to array disk, 8TB Red to 8TB Red, no parity, turbo write enabled. HP H220 HBA (SAS2308) -> single SFF-8087 cable -> HP 12-drive backplane with integrated expander. Interestingly, while using unBALANCE to move files from drive to drive at ~81 MB/s (lots of smaller files), I am able to speed-test one of the other drives in the array at 200+ MB/s with only slight impact on performance. Slight speed hit to the drive being tested, but the 2 drives doing the transfer aren't really affected. Edit: My model shows this in the specs: "12HDD Models: HP Smart Array P212/256MB Controller (RAID 0/1/1+0/5/5+0). NOTE: Available upgrades: P410 with FBWC, 256MB with BBWC, 512MB with FBWC, battery kit upgrade (for the 256MB cache), and Smart Array Advanced Pack (SAAP). NOTE: Supports transfer rate up to 3Gb/s SAS or 3Gb/s SATA." I am using an HP H220 HBA capable of 6Gb/s SAS or 3Gb/s SATA, but it appears the 12-drive backplane will only negotiate 3Gb/s SAS. The P212 supports 6Gb/s SAS, so I assume it's the expander/backplane that does not. Edit 2: HBA connected at single link, not dual.
  8. I am using a PCIe 3.0 HBA in a PCIe 2.0 server, connected to a SAS1 expander (HP DL180 G6). I can speed-test a single drive at ~205-210 MB/s, but transferring disk to disk it's limited to 85-90 MB/s. Does this make sense?
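     For anyone reading along, here's the raw link arithmetic (a sketch under the stated assumptions, not a measurement):

     ```python
     # Back-of-envelope SAS link budget for the setup above.
     # Assumptions: SAS1 signalling at 3 Gb/s per lane, 8b/10b encoding,
     # and a single SFF-8087 x4 link between HBA and expander/backplane.
     lanes = 4
     line_rate_bps = 3_000_000_000                                     # 3 Gb/s per SAS1 lane
     payload_MBps_per_lane = line_rate_bps * (8 / 10) / 8 / 1_000_000  # after 8b/10b

     link_MBps = payload_MBps_per_lane * lanes
     print(f"per lane: {payload_MBps_per_lane:.0f} MB/s")  # ~300 MB/s
     print(f"x4 link : {link_MBps:.0f} MB/s")              # ~1200 MB/s
     # Two drives moving ~85-90 MB/s each, plus a ~205 MB/s read test, sit well
     # under that, so the 3 Gb/s link itself doesn't look like the ceiling here.
     ```

     Under those assumptions the per-drive mechanics (and a small-file workload, as in the unBALANCE post above) look like more plausible limiters than the single 3 Gb/s link.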
  9. I just fell for the honeypot! Damn I wish this was still a thing.
  10. Example: WD RED WD40EFRX, slows to below 100 MB/s after 3600 GB
  11. 22 Dell R710s for $180... man, the USA is one batshit-crazy fire sale. I feel bad for those businesses, as a single R710 can easily fetch $200+ on any open market.
  12. CMOS battery completely dead on my server - the date was Dec 2000 - changing the date resolved my issue!
  13. Haha, first-world internet problems... lol! So they "took" the data, deleted it, and requested a ransom? What was the fee?
  14. Thanks for the response! Questions 1-3/4 were pretty much answered; I was just recounting my interest. Definitely look at pfSense and known blocklists... almost anything in China/Russia isn't needed for daily use. Was the data encrypted, or uploaded as you speculated earlier?
  15. I read this entire thread for 4 reasons: (1) to see if the OP recovered the data; (2) to ask why unRAID was facing the internet (appears to be an accident); (3) to learn how/why the hack happened, which answers #2; (4) to make sure someone tells the OP to run pfSense. LOL. Also, OP: the unRAID box was facing the internet, but how did they guess the password and actually SSH into the box? Was it an easy password? Does it appear in the logs that they just brute-forced it (thousands of logins)? Shouldn't unRAID have locked down after several failed attempts?
  16. Also, I find it interesting that I can squeeze a few more Mbit/s out if I use FTP instead of SMB. I would assume this is due to Samba/SMB protocol overhead?
  17. Unfortunately no. And it pisses me off. The client (i7-3820, Asus Maximus MB, 32GB RAM, Intel SSD boot drive, all WD Black drives) was a Windows 7 system. It has served me well for 6 years (since the last re-install), and can WRITE to various servers (Ubuntu, FreeNAS, and unRAID) at over 100 MB/s. When reading from the arrays, it would max out at 65 MB/s like clockwork, across 3 different server OSes (with a slight bump in speed reading from Ubuntu). I changed 4 variables at once (yes, I know that's bad, lol) to get a solid 112 MB/s R/W speed: different hardware (lower-power Acer prebuilt, i5/8GB/120GB SSD); different OS, Windows 10 (albeit fully reinstalled and "fresh"); different network card; different port/cable on the switch. I will not dedicate more than 1 more hour to tracking down what went wrong, as I was looking for an excuse to upgrade to Windows 10 and take advantage of installing natively (without kajiggering) on an NVMe boot drive. The offending client that was capped at 65 MB/s read speeds from unRAID (network RX) was using an onboard Intel 82579V Gigabit network adapter. I'm unaware if this has known issues with unRAID, but the server shouldn't care what chipset is on the other end as long as it can handle GbE. My goal was to get full speed from unRAID, and I have. If that requires a different network card or a different OS, so be it.
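      As a sanity check on what "full speed" means for these numbers, here's a rough back-of-envelope (a sketch assuming standard 1500-byte frames, not a measurement on this hardware):

      ```python
      # Why ~112 MB/s is roughly "full speed" for a single transfer over gigabit Ethernet.
      # Assumptions: 1 Gb/s line rate, 1500-byte MTU, 20 B IP + 20 B TCP headers,
      # and 38 B of Ethernet framing/preamble/inter-frame gap per packet.
      raw_MBps = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s line rate

      payload = 1500 - 20 - 20                   # TCP payload per full-size packet
      on_wire = 1500 + 38                        # bytes actually occupying the wire
      efficiency = payload / on_wire

      print(f"raw line rate : {raw_MBps:.0f} MB/s")
      print(f"efficiency    : {efficiency:.3f}")
      print(f"practical max : {raw_MBps * efficiency:.0f} MB/s")   # ~119 MB/s
      # SMB adds its own protocol overhead on top of TCP, so a sustained ~112 MB/s
      # is effectively a saturated gigabit link.
      ```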
  18. Sorry, that was the last test I had done; my main original post was tested from bare-metal Windows 7.
  19. SOLVED. I've made progress. Found an Acer i5-650 system in the basement with 8GB RAM and threw Windows 10 on an SSD into it. After applying all the Windows updates and throwing in an Intel PCIe network card, I was able to achieve this... with no changes to the unRAID server. Tested on unRAID 6.6.6. Previous tests were with higher-end hardware, but a Windows 7 system with the onboard NIC. No magic config: a fresh install of Windows 10 and an Intel PCIe network card, that's it. Will test with the onboard NIC in that system and report back. First pic is WRITES TO the unRAID server; second pic is READS FROM the unRAID server. Kinks worked out, I'm ready to buy.
  20. I've isolated 2 unRAID servers on a separate switch that has no internet or any other devices. Static IP addresses assigned, Cat6 cable all around. Each server is running dual Xeon X5570s and 32 or 64 GB RAM, with onboard Broadcom NICs. One server is running 6.6.6, the other 6.7.0. iperf3, run as client and server on each machine, reports 112 MB/s sustained transfers. A Windows VM on one machine (on an NVMe cache drive) writing to the other server's SMB NVMe share (or SSD share) reports over 100 MB/s. The same Windows VM copying data from the other server's SMB NVMe share maxes out at 65 MB/s, and the Windows 10 file transfer graph looks like a rollercoaster (for reading). I'm at a loss. The last 2 things I will try are using PCIe Intel NICs and attempting a previous version of unRAID, like 6.5. Can anyone else try reading data from an unRAID cache or SSD and report back the speed over GbE?
  21. I've just recreated the issue in 6.6.6 on completely different hardware (IBM x3550 M3, dual Xeon, 32GB RAM, etc.). Exact same symptoms: 101 MB/s write, 65 MB/s read through SMB; ~106 MB/s write and ~90-95 MB/s read using the unRAID FTP server. The last thing I will try is isolating the unRAID box and the Windows machine onto their own switch.
  22. No change in read speed after upgrading to 6.7.0. Tried reading from the cache and from an unassigned disk: ~65 MB/s read max.