BillR

Everything posted by BillR

  1. That's just one of the deprecated options. Look for any of the options listed here: https://community.openvpn.net/openvpn/wiki/DeprecatedOptions (thanks to binhex for the link). Remove anything from your .ovpn file that is in the list of removed options.
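     If you want to scan your config quickly, something like this should do it - a rough sketch, assuming your file is called client.ovpn (swap in your own filename) and checking just a handful of options from that wiki page, so verify against the full list yourself:

       # flag a few options known to be deprecated/removed; extend the list
       # from the wiki page above ("client.ovpn" is a placeholder filename)
       for opt in keysize ns-cert-type comp-lzo key-method; do
         grep -n "^${opt}" client.ovpn
       done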
  2. First, a banana would know exactly how to fix this 😁 Second, what is showing in your logs?
  3. Yes, exactly like that! 😆 Thank you - that will be most helpful to anyone else looking for the offending option in their .OVPN file. Would have been nice if the OVPN devs had coded their app to ignore deprecated options rather than throw a king-sized spanner in the works. But then, I admit to being a pleb who knows diddly squat about how these things work 😜
  4. Thanks mate, you too. I can't help but think that if there was a list of the deprecated .ovpn entries somewhere, we all might have found our solutions a bit quicker/easier. It's very interesting to read through this thread and see the different entries that have affected different people, presumably depending on which VPN provider they're using. There must have been quite a large number of deprecated options in this particular update to have affected so many of us like it has.
  5. Yes, I figured this out too, as per my earlier post:
  6. I only saw the same error in the supervisord.log file without the detail I needed. However, I ended up finding it by a process of elimination, commenting out one line at a time. I use ExpressVPN and for me it was the entry keysize 256. Commented that out with # and I'm up and running again. Seems like an unlikely culprit, but there you go.
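     For anyone else on ExpressVPN hitting the same thing, commenting it out can even be scripted - a minimal sketch, assuming your config is called client.ovpn (use your own filename):

       # back up the config, then comment out the offending line in place
       cp client.ovpn client.ovpn.bak
       sed -i 's/^keysize 256/#keysize 256/' client.ovpn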
  7. Same issue here since the latest update. Can someone guess which option might be unrecognised in my .ovpn file? Or is there a list of deprecated options in this latest version somewhere so I can figure it out for myself? Here's the config in my .ovpn file:

         dev tun
         fast-io
         persist-key
         nobind
         remote (Hidden)
         remote-random
         pull
         comp-lzo no
         tls-client
         verify-x509-name Server name-prefix
         ns-cert-type server
         key-direction 1
         route-delay 2
         tun-mtu 1500
         fragment 1300
         mssfix 1200
         verb 3
         cipher AES-256-CBC
         keysize 256
         auth SHA512
         sndbuf 524288
         rcvbuf 524288
         auth-user-pass credentials.conf

     Any help would be most appreciated.
  8. Yep, just found this out the hard way after installing it myself. Removing it for a better option...
  9. Yep, that's the same thing I tried (see above). Unfortunately, I need btrfs because I want redundant disks, and it didn't seem to help anyway.
  10. Here you go. This is over VPN from my work place to my Unraid server at home, so take that into account. Performance seems fine to me.
  11. I'm actually a Limetech shill and they pay me $50 to tell people the Web GUI is fine 😆 No, but seriously, I'm at work and am logged into my Unraid server over VPN and the GUI is honestly snappy. I have always used a good quality, speedy USB3 flash drive plugged into a USB3 port. I wonder if that has anything to do with it? I have honestly never had a complaint about the Web GUI performance. I have never actually stopped to think about where the data is loading from to build the GUI - I can't tell you if it comes from cache or flash. All I can tell you is that the Web GUI has never been sluggish for me. Tell me if you want me to record a screen grab example and I'll post it somewhere.
  12. Do you run a cache drive/pool? If not, and your disks are spinning down, that might be the reason you are seeing this. If some data needs to be pulled from disk to update the UI, you are going to be waiting for a spin-up. I have run a cache pool ever since I first built my Unraid server a number of years ago and I've never experienced a sluggish web UI.
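      If you want to test the spin-down theory, you can check a drive's power state from the Unraid terminal - a quick sketch, assuming one of your array disks is /dev/sdb (substitute your own device):

        # reports "active/idle" when spinning, "standby" when spun down
        hdparm -C /dev/sdb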
  13. Will you be migrating data off your old drives or just starting from scratch? Either way, Unraid is surprisingly tolerant of moving to new hardware. You can add new drives and migrate the data, or just delete your array and start all over again. I would suggest backing up your USB drive first - on the Main screen, click on the flash drive and then click the "Flash Backup" button, in case you want to roll back. But other than that, as John said, just boot from the USB drive - the license is tied to the USB drive alone, not to any of the other hardware (drives, motherboard, CPU). So unless you want to move your license to a new USB key (there are instructions on the wiki for that), you don't need to do anything special.
  14. Wow, that's a big sacrifice to make. If I couldn't use dockers at all, I would have to move away from Unraid, which would be a real shame, as I really do love the functionality and ease of use of Unraid. May I suggest doing what I've done and caching on a couple of hard drives instead? I do notice that it takes much longer for all my dockers to start up on spinning rust, but once they are up, the performance isn't too bad. The disks I'm using are a couple of 2 TB WD Reds I happened to have lying about, but I'm tempted to buy a couple of newer drives with higher performance, as the 2 TB Reds are quite slow.
  15. I've been using Unraid for a few years now and I have burnt through so many SSDs that I have stopped using them for cache and now just use a pair of spinning rust drives. I want redundancy in my cache, so I pool, and that means I have to use btrfs, which I understand is more susceptible to the problem. I know a few versions ago (6.9.0, I believe) there was a supposed fix to stop excessive writes to cache. At the time, I blew away my cache pool and re-created it, but this has been an ongoing problem for me regardless. After burning out my last pair of NVMe drives in just a few months, I ordered and installed a new pair. After about a week they showed hundreds of GB of writes - and this was without caching any data shares, just the default use for Docker containers. I tried the suggested fix of sending log files to a RAM disk, but this didn't seem to stop the incessant writes to cache that occur if I have pretty much any Docker container running - and not just the known troublesome ones like Plex. We get cool new features with new Unraid builds, but show-stopping issues like this seem to persist for years. I would love to see more effort put into fixing problems like this, which actually cost me significant money in replacement SSDs. As I said, I've now given up on using SSDs for cache and have a pair of hard drives on cache duty, drawing more power and not performing as well. And yes, they are still constantly being written to just by having Docker running, but fortunately they don't wear out and die like SSDs do.
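      If anyone wants to watch how fast their cache drives are being chewed through, the SMART counters give cumulative writes - a sketch, assuming an NVMe cache device at /dev/nvme0n1 (each "data unit" is 512,000 bytes, so multiply by roughly 0.512 to get MB):

        # cumulative writes since the drive was new; re-run a week later and compare
        smartctl -a /dev/nvme0n1 | grep -i "data units written"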
  16. If I'm reading this right, you have two NICs on the new server? In that case, the likely problem is that the order of the NICs is the opposite of what you think. Boot into GUI mode and swap them under Settings > Network Settings > Interface Rules.
  17. I think you are right. I have chosen to remove that missing disk from the pool and it has allowed me to start the array. The dead disk does not show as unassigned, so I think it really is dead. As mentioned, I will test on another motherboard to confirm and report my findings. Fortunately the place I bought the drives from is usually very quick to process RMAs.
  18. Do you have more than one NIC? Can you check that the MAC address shown for the NIC assigned to interface eth0 actually matches the MAC address of the physical adaptor? Under Interface Rules, you can choose the NIC order.
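      A quick way to see what the kernel itself reports, so you can compare against the GUI - a sketch, assuming a stock Unraid shell:

        # brief listing of every NIC with its MAC address; compare against
        # the MAC assigned to eth0 under Settings > Network Settings
        ip -br link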
  19. Sorry to sound like a broken record here, but I still don't know which drive has failed, as the pop-up message blames one drive whilst the web GUI blames the other - there has got to be some kind of bug in Unraid for this to be happening. I think my next move will be to test each NVMe drive separately on a test motherboard so that I can figure out which physical disk is actually faulty, as I cannot trust Unraid to report it correctly. As mentioned before, these are new drives and I think one has just physically died after 3 or 4 weeks of use. I will report back my findings once I've been through the fault-finding process. Thanks for your suggestions.
  20. Thanks for the suggestion. I have already tried that - I shut down the server, removed the power cable and let everything completely discharge before powering back up, but it didn't help. Then I shut down again, removed and re-seated the NVMe drives, but again, no dice. Because the web GUI and the pop-up warning disagree with each other, I still don't know which physical drive is bad, so I am going to have to remove one at a time to find out. These NVMe disks are only about 3 or 4 weeks old, but it does seem that one has died - I just don't know which one, due to the conflicting error messages in Unraid. Cheers, Bill.
  21. Hey there, my Unraid server was showing pop-up messages about a missing cache disk (I have 2 x NVMe), but the UI showed both disks intact, so I rebooted my server. Now the server shows a disk missing, but the pop-up warning says it's the other disk that's missing. The pop-up message tells me that the disk with the serial number TEAM_TM8FP6001T_112203280020020 (nvme0n1) is missing, but the UI tells me that the disk with the serial number TEAM_TM8FP6001T_112203280020220 (nvme1n1) is the one that's missing. Why do the pop-up message and the UI each report that the opposite disk is missing? I don't know which one I should replace. Do my logs show which is correct? unr1-diagnostics-20220912-1403.zip
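      In case it's useful for diagnosis, I understand you can map serials to device nodes directly from the terminal - a sketch (lsblk ships with Unraid, but double-check the column names on your version):

        # list each physical disk with its serial number; whichever serial
        # is absent from the output is the drive that has dropped out
        lsblk -d -o NAME,SERIAL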
  22. Thanks Johnnie. It seems that the same bug was also stopping update notifications coming up. I've updated to 6.6.6 and will see what happens.
  23. I have my download share cached. I had my Mover schedule set to hourly and time of day set to every three hours. I noticed yesterday that my cache drive was full, so I manually ran Mover and this freed up all the space on my cache drive. I then set the time of day to every hour to see if that would help. After some testing, I am finding that nothing is cleared off the cache drive unless I run mover manually - the scheduled Mover task is doing nothing. I enabled mover logging a few hours ago, which I hope will help with diagnostics. Any idea what I can do to fix this? unr1-diagnostics-20190123-2152.zip
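      For reference, this is how I've been running it manually - a sketch, assuming the standard Unraid paths (worth double-checking on your version):

        # kick off mover by hand, then follow its entries in the syslog
        # (requires mover logging to be enabled in Settings > Scheduler)
        /usr/local/sbin/mover &
        tail -f /var/log/syslog | grep -i mover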
  24. Are we reading the same forum? Lots of posts, very few issues as far as I can see.
  25. The fans built into the drive cages seem to keep the disks nice and cool. It's winter here, but when I shut the unit down after hours of use, I can put my finger on the CPU heat-sink and it is barely warm. I will re-assess when the weather gets warmer, but right now everything seems cool. The little 80 mm side fan is currently set to intake, and between it and the drive cage fans there's plenty of airflow. The CPU's TDP is 65 watts, including the GPU, so there isn't a lot of heat generated. I think it will be fine.