srfsean

Members
  • Posts: 15
  • Joined
  • Last visited

  1. I've attached the diagnostics file, since I was able to gather it even while I didn't have access. I've now enabled the syslog server in case it happens again. tower-diagnostics-20221130-1814.zip
  2. For months I've been dealing with my server 'crashing' every 1-3 weeks. I travel for work a lot and would just get home and force a reboot. I think this started after 6.9.3 (maybe). Today I had the same problem but was local and had time to dig into it: no GUI, no Plex access, no VM access through VNC. Using the locally attached monitor, I used the CLI to determine the array, Docker, and VMs were all still running. ifconfig showed a massive number of dropped receive packets. I tried '/etc/rc.d/rc.inet1 restart' and immediately regained access to everything. No idea if this is a permanent fix, but I set up a user script to do it every night (a sketch follows this list). If anyone has insight I would greatly appreciate it.
  3. Same, I had to stop using it entirely. Dual 2690 v4s and over 100GB of RAM with almost zero issues in years, but this crushed it.
  4. My T600 also isn't showing power draw, just 0W no matter what is running. The RTX 4000 I had in before showed everything fine, and I've cycled all of the settings and reinstalled the plugin. (A quick driver-level check is sketched after this list.)
  5. Had the same issue and clearing the cookies fixed it for me.
  6. I'll have to see if this can be run in the VM and offer the same convenience. I tried the Splashtop Anywhere access and it won't connect. I'm fairly certain all UDP is blocked out here.
  7. Looking into how to do this in OpenVPN-AS now, and also researching stunnel. Thank you. While looking for info on stunnel I found Streisand. Has anyone implemented any of its more secure options in Unraid?
  8. I'm running out of ideas for ways to access my server while away from home for work. I leave the country regularly for weeks to months at a time and like to maintain access to the server. I'm very often out of cell service and rely on the provided wifi (typically over a satellite connection on ships), which has proven difficult for VPN connections. I'm currently using the following, which covers almost every use case:
       • Wireguard - fantastic when it works.
       • OpenVPN - I keep the docker running as a backup in case Wireguard has an issue (which it hasn't).
       • ZeroTier - worked really well on one ship that blocked VPN connections, but I'm now on one where even it isn't working.
       • Windows VM w/ Splashtop - works great once a VPN connection is made, but I've not had consistent success signing into the service when remote, so I'm reluctant to pay for the Anywhere access.
     I'm running out of secure ideas and am currently researching SSH tunneling or using a VPS (see the sketch after this list). I'm willing to buy a domain and point it at the duckdns I already have set up if that will work (although I think duckdns is blocked out here as well), and willing to pay for a cheap VPS service. Does anyone have an idea of a combination that provides secure access while presenting a normal web request that would not be blocked? My Unraid setup is 6.8.3 on a Dell R730 behind an AmpliFi HD router (which does not support VPN); I intend to replace it with either UniFi or pfSense once I get home in a couple of months. I typically do have access to router settings like port forwarding via the AmpliFi app (not sure if that helps). Thanks in advance.
  9. In this new release, could we expect to see full RAID 10 speeds from 4 NVMe drives in a separate cache pool, or will the SMB overhead still affect it? As it stands, with the original cache pool design, there have been no significant speed differences using the Cache: Yes, No, or Prefer settings on 10GbE, across NVMe-to-NVMe, RAM-disk-to-NVMe, and SSD-to-NVMe transfers. The 10GbE connection was verified using iperf3. If this is working as intended and not an expected feature of Unraid, let me know and I'll move this to a feature request in the correct forum.
  10. Have there been any developments in this use case? I have 4 970 Pro NVMe drives in a RAID 6 (I believe) cache pool on a 10GbE network and am seeing zero difference in read/write speeds compared to the array, no matter the cache-use setting. Occasionally the array even has better transfer speeds than the NVMe cache (only a few MB/s more). I'm assuming this is the SMB user-share overhead. Everything seems to be right around 500Mb/s no matter the settings used. iperf confirms the 10GbE connection is capable of being fully saturated. I've tried transferring from RAM disk, SSD, and NVMe to the array (cache set to Yes, No, and Prefer); no matter the setting, it's like the cache does nothing for speed. I understand I could enable disk shares and probably see much better speeds (a direct fio test is sketched after this list), but is there any intention or ability to have Unraid function as intended through user shares? Faster transfer to cache, then move to the array later.
  11. It was disabled within the VM. When I went in under VNC to try just that, it was already enabled; then, switching back to GPU passthrough, no go. Not sure why it was set separately. Still learning all of this. What I was able to do was connect a monitor to the GPU directly, bypassing the remote connection over the network, and re-enable it that way. Not sure why I didn't think of that initially. VMs are new to me, so I don't think my brain even considered it an option. Thank you for the assistance!
  12. I accidentally disabled the VirtIO NIC while trying to diagnose slow connection speed. This now prevents me from accessing the VM. If I switch to VNC (for the graphics card), the NIC appears properly and I can access the VM; it's only an issue when I'm passing through my graphics card. I'd rather not rebuild the whole VM. Is there a way to turn this back on through XML or something? (See the virsh sketch after this list.)
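
A sketch of the nightly user script from post 2, assuming the User Scripts plugin on a custom cron schedule, eth0 as the primary interface, and a log directory on the flash drive (adjust all three for your setup):

    #!/bin/bash
    # Nightly workaround for the dropped-RX-packets lockup.
    # Schedule via the User Scripts plugin, custom cron, e.g. 0 4 * * *.
    mkdir -p /boot/logs
    # Record the dropped-packet counters so there is a history to review.
    echo "$(date) $(ifconfig eth0 | grep -o 'dropped [0-9]*')" >> /boot/logs/net-restart.log
    # The same command that restored access manually.
    /etc/rc.d/rc.inet1 restart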
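
For the T600 showing 0W in post 4, a quick way to check whether the driver exposes a power reading at all; if nvidia-smi itself prints "[N/A]" here, the plugin has nothing to display and it's a driver/card limitation rather than a plugin bug:

    # Ask the driver directly for the field the dashboard relies on.
    nvidia-smi --query-gpu=name,power.draw --format=csv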
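
On the SSH-tunnel-plus-VPS idea in post 8, a minimal sketch. Assumptions: a cheap VPS at vps.example.com whose sshd also listens on 443/tcp (restrictive networks usually pass 443), and a user named tunnel on it; the hostname, ports, and user are placeholders:

    # On the Unraid box (kept alive, e.g. via a user script or autossh):
    # publish the local web GUI on the VPS loopback as port 8080.
    ssh -p 443 -N -R 8080:localhost:80 tunnel@vps.example.com

    # From the remote laptop: pull that port back over SSH,
    # then browse to http://localhost:8080
    ssh -p 443 -N -L 8080:localhost:8080 tunnel@vps.example.com

To the ship's network both legs look like ordinary outbound traffic to port 443, though deep packet inspection can still flag it as SSH; wrapping the tunnel in stunnel (post 7) addresses that.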
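
To separate raw pool speed from SMB/user-share overhead (posts 9 and 10), a direct write test on the server itself helps, assuming fio is installed (e.g. via NerdTools); the pool path and test size are placeholders:

    # Sequential 1M writes straight to the cache pool, no SMB involved.
    fio --name=cachetest --filename=/mnt/cache/fio.tmp \
        --rw=write --bs=1M --size=8G --direct=1 --ioengine=libaio
    rm /mnt/cache/fio.tmp

If this reports far more than the ~500 seen over the network, the pool itself is fine and the bottleneck is the share/SMB layer.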
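
For the disabled-NIC question in post 12, the host-side link state can be inspected and forced up with virsh without rebuilding the VM ("Windows 10" and vnet0 are placeholders here). Note that, as post 11 found, an adapter disabled inside the guest has to be re-enabled in the guest; virsh only controls the host side:

    # List the VM's virtual interfaces (type, source, model, MAC).
    virsh domiflist "Windows 10"
    # Force the virtual link up from the host side.
    virsh domif-setlink "Windows 10" vnet0 up
    # Or inspect/edit the <interface> block in the XML directly.
    virsh edit "Windows 10"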