BillR

Members · 32 posts

  1. That's just one of the deprecated options. Look for any of the options listed here: https://community.openvpn.net/openvpn/wiki/DeprecatedOptions (thanks to binhex for the link). Remove anything from your .ovpn file that appears in the list of removed options.
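     If you want to check your config against that list quickly, here's a rough convenience one-liner (not an official tool; "deprecated.txt" is a file you make yourself by copying the option names from that wiki page, one per line, and the config path is just an example - point it at wherever your .ovpn actually lives):

        # Print every line of your config that names a deprecated option.
        # deprecated.txt = option names from the wiki page, one per line.
        grep -w -f deprecated.txt /config/openvpn/your-provider.ovpn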
  2. First, a banana would know exactly how to fix this 😁 Second, what is showing in your logs?
  3. Yes, exactly like that! 😆 Thank you - that will be most helpful to anyone else looking for the offending option in their .ovpn file. It would have been nice if the OpenVPN devs had coded their app to ignore deprecated options rather than throw a king-sized spanner in the works. But then, I admit to being a pleb who knows diddly squat about how these things work 😜
  4. Thanks mate, you too. I can't help but think that if there was a list of the deprecated .ovpn entries somewhere, we all might have found our solutions a bit quicker/easier. It's very interesting to read through this thread and see the different entries that have affected different people, presumably depending on which VPN provider they're using. There must have been quite a large number of deprecated options in this particular update to have affected so many of us.
  5. Yes, I figured this out too, as per my earlier post.
  6. I only saw the same error in the supervisord.log file, without the detail I needed. However, I ended up finding it by a process of elimination, commenting out one line at a time. I use ExpressVPN, and for me it was the entry "keysize 256". Commenting that out with # got me up and running again. Seems like an unlikely culprit, but there you go.
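     For anyone else on ExpressVPN, the edit literally looks like this (a snippet of my config, which is posted in full further down; only the one line changes):

        cipher AES-256-CBC
        # keysize 256    <- disabled; this is the option the new build chokes on
        auth SHA512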
  7. Same issue here since the latest update. Can someone guess which option might be unrecognised in my .ovpn file? Or is there a list of deprecated options in this latest version somewhere so I can figure it out for myself? Here's the config in my .ovpn file:

        dev tun
        fast-io
        persist-key
        nobind
        remote (Hidden)
        remote-random
        pull
        comp-lzo no
        tls-client
        verify-x509-name Server name-prefix
        ns-cert-type server
        key-direction 1
        route-delay 2
        tun-mtu 1500
        fragment 1300
        mssfix 1200
        verb 3
        cipher AES-256-CBC
        keysize 256
        auth SHA512
        sndbuf 524288
        rcvbuf 524288
        auth-user-pass credentials.conf

     Any help would be most appreciated.
  8. Yep, just found this out the hard way after installing it myself. Removing it for a better option...
  9. Yep, that's the same thing I tried (see above). Unfortunately, I need btrfs because I want redundant disks and it didn't seem to help.
  10. Here you go. This is over VPN from my workplace to my Unraid server at home, so take that into account. Performance seems fine to me.
  11. I'm actually a Limetech shill and they pay me $50 to tell people the Web GUI is fine 😆 No, but seriously, I'm at work and am logged into my Unraid server over VPN and the GUI is honestly snappy. I have always used a good quality, speedy USB3 flash drive plugged into a USB3 port. I wonder if that has anything to do with it? I have honestly never had a complaint about the Web GUI performance. I have never actually stopped to think about where the data is loading from to build the GUI - I can't tell you if it comes from cache or flash. All I can tell you is that the Web GUI has never been sluggish for me. Tell me if you want me to record a screen grab example and I'll post it somewhere.
  12. Do you run a cache drive/pool? If not, and your disks are spinning down, that might be the reason you are seeing this. If some data needs to be pulled from disk to update the UI, you are going to be waiting for a spin-up. I have run a cache pool ever since I first built my Unraid server a number of years ago and I've never experienced a sluggish web UI.
  13. Will you be migrating data off your old drives or just starting from scratch? Either way, Unraid is surprisingly tolerant of moving to new hardware. You can add new drives and migrate the data, or just delete your array and start all over again. I would suggest backing up your USB drive first - on the Main screen, click on the flash drive and then click on the "flash backup" button - in case you want to roll back. But other than that, as John said, just boot from the USB drive - the license is tied to the USB drive alone, not to any of the other hardware (drives, motherboard, CPU). So unless you want to move your license to a new USB key (there are instructions on the wiki for that), you don't need to do anything special.
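      If you'd rather script the backup than click the button, here's a minimal sketch (this assumes the flash drive is mounted at /boot, which is the Unraid default; the destination path is just an example):

         # Copy the entire flash drive to a dated folder on the array.
         rsync -a /boot/ "/mnt/user/backups/flash-$(date +%Y%m%d)/"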
  14. Wow, that's a big sacrifice to make. If I couldn't use dockers at all, I would have to move away from Unraid, which would be a real shame, as I really do love the functionality and ease of use of Unraid. May I suggest doing what I've done and caching on a couple of hard drives instead? I do notice that it takes much longer for all my dockers to start up on spinning rust, but once they are up, the performance isn't too bad. The disks I'm using are a couple of 2 TB WD Reds I happened to have lying about, but I'm tempted to buy a couple of newer drives with higher performance, as the 2 TB Reds are quite slow.
  15. I've been using Unraid for a few years now and have burnt through so many SSDs that I've stopped using them for cache and now just use a pair of spinning rust drives. I want redundancy in my cache, so I pool, and that means I have to use btrfs, which I understand is more susceptible to the problem.

      I know a few versions ago (6.9.0, I believe) there was a supposed fix to stop excessive writes to cache. At the time, I blew away my cache pool and re-created it, but this has been an ongoing problem for me regardless. After burning out my last pair of NVMe drives in just a few months, I ordered and installed a new pair. After about a week they showed hundreds of GB of writes - and this was without caching any data shares, just the default used for Docker containers. I tried the suggested fix to send log files to a RAM disk here: But this didn't seem to stop the incessant writes to cache that occur if I have pretty much any Docker container running - and not just the known troublesome ones like Plex.

      We get cool new features with new Unraid builds, but show-stopping issues like this seem to persist for years. I would love to see more effort put into fixing problems like this, which actually cost me significant money in having to replace SSDs. As I said, I've now given up on using SSDs for cache and have a pair of hard drives on cache duty, drawing more power and not performing as well. And yes, they are still constantly being written to just by having Docker running, but fortunately they don't wear out and die like SSDs do.
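      If you want to watch how quickly your drives are being chewed through, this is roughly how I'd check (smartctl ships with Unraid; the device name is just an example and the attribute wording varies by drive and vendor):

         # NVMe drives report lifetime writes as "Data Units Written"
         # (1 unit = 512,000 bytes). Check it a day apart and compare.
         smartctl -A /dev/nvme0n1 | grep -i 'data units written'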