mattz

Everything posted by mattz

  1. Thanks, @JorgeB. It's running with what looks like all my original data! This is a huge relief. I restarted the server in Safe Mode before I ran that check. The checkbox to "rebuild from parity" was available, but I ignored it and started the array in maintenance mode. I think at that point it was already OK, since it wasn't asking me to format the drive. I ran the check via the WebGUI, and it didn't have any suggestions. Started the array in regular mode and it worked! Thank you.
  2. Background in my previous post: I had a failed drive, purchased a new, bigger one, and ran a parity swap and rebuild. Everything was going according to instructions until I opened the WebGUI to see how the Data-Rebuild went and saw the message (screenshot attached): "Data-Rebuilding finished (21 errors) Canceled". The [previously parity] disk that was being rebuilt is still listed as "Unmountable: Unsupported or no file system", despite xfs being listed as the FS (after the rebuild). A couple of questions: Did I lose the ability to rebuild that drive from the parity + other data drives (and lose a large portion of my data)? How do I get that drive working in my array? If I lost the data, is it just a matter of formatting (the option is at the bottom of Main)? Attached the latest diagnostics and a couple of screenshots. Edit: I ran a quick SMART check on that 0SW disk after the canceled rebuild; it passed (attached). untower-diagnostics-20231024-0619.zip untower-smart-20231024-0611.zip
  3. Sorry, sorry, sorry. Clearly spelled out as a "3 drive parity shuffle" in the FAQ: https://docs.unraid.net/legacy/FAQ/parity-swap-procedure/
  4. I think I painted myself into a corner. Unraid 6.12.2. Short question: Can I replace a failed data disk in a single-parity array with a disk bigger than the parity disk? e.g., rebuild the array using only 2 TB of a 4 TB drive? (I think the answer is that the parity disk ALWAYS needs to be the biggest.) Background: I have a 6 TB array with single parity; i.e., 3 x 2 TB data disks + 1 x 2 TB parity disk. One data disk died on me. I purchased a 4 TB disk because the WD Reds are nearly the same price for a 2 TB as a 4 TB. However, the GUI is prohibiting me from hitting "start" after I removed the failed 2 TB drive and replaced it with the 4 TB: "If this is a new array, move the largest disk into the parity slot. If you are adding a new disk or replacing a disabled disk, try Parity-Swap." How can I get up and running at this point? Is this the best path: copy the current 2 TB parity disk onto the new 4 TB drive (how do I do this??), THEN use the 2 TB parity disk as a data disk, keeping the 4 TB disk as the new parity, and rebuild from there (now with a 4 TB parity in case of future upgrades)?
  5. Bear with me - forgive my ignorance and forgetful brain: About a year ago I set up a Docker Airflow deployment using docker-compose to automatically restart on server reboot. I did this with a docker-compose.yml file (I don't remember where I got the code) and not an App or Plugin. It has been working and bulletproof. Now I no longer need it, but I cannot figure out where I set it up to restart on every reboot! How can I stop Airflow from launching on reboot? Where would this normally be scheduled? Any log entries I should be looking for to discover this? A few things I've tried: I had a user.scripts plugin entry to start (pip install) docker-compose on every reboot. I disabled that, but on restart the Airflow containers are still launching (HOW??). I renamed the dockerized-airflow folder (/mnt/user/code/dockerized-airflow) that contained the docker-compose.yml, but it is still launching and creating the folder for the postgresql db! The code must be somewhere else, but I can't, for the life of me, figure out where. appdata/airflow-home only contains logs and dags, but I have deleted it to no avail. Attached screenshot of the docker containers and mounted volumes. Thank you!
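     A likely explanation, for anyone who hits the same thing: docker-compose only sets the restart policy when the containers are created; after that, the Docker daemon stores the policy on each container and re-applies it at every boot, no matter where the compose file lives (or whether it still exists). That would be why renaming the folder changed nothing. A fragment showing the relevant compose key (service and image names here are just examples, not necessarily the originals):

```yaml
services:
  airflow-webserver:            # example service name
    image: apache/airflow       # example image
    restart: "no"               # "always" or "unless-stopped" is what keeps relaunching it
```

     To deal with an already-created container without recreating it: `docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <name>` shows what the daemon has stored, and `docker update --restart=no <name>` followed by `docker stop <name>` stops the loop.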
  6. @francishefeng59 I will message you the public key. I think they are OK to share on a forum, but I'll err on the side of caution.
  7. @Roudy I made that change, started it up, and it's working. WTF? I mean, it ran for YEARS problem-free with my old DNS setup. Whether coincidence or your suggestion: I thank you! 🍻
     @francishefeng59 - Check my log for the environment variables, but I'll also copy them here. Some of my settings may be out of date because I set this up years ago, but it does work. Check https://github.com/binhex/arch-sabnzbdvpn for the latest settings. Set these environment variables via your Docker config, changing the placeholders to fit your setup:
       -e VPN_ENABLED=yes \
       -e VPN_USER=<vpn username> \
       -e VPN_PASS=<vpn password> \
       -e VPN_PROV=custom \
       -e VPN_CLIENT=openvpn \
       -e VPN_OPTIONS=<additional openvpn cli options> \
       -e STRICT_PORT_FORWARD=yes \
       -e ENABLE_PRIVOXY=yes \
       -e LAN_NETWORK=<lan ipv4 network>/<cidr notation> \
       -e NAME_SERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1 \
       -e DEBUG=true \
     Copy the OpenVPN config files into the appdata folder: appdata/binhex-sabnzbdvpn/openvpn. There should be 3 of them:
       - credentials.conf (with your VPN username and password)
       - the single .ovpn config file that you want to use
       - vpn.crt with the public key (I found it on the Privado support pages)
     Side note: Privado now has SOCKS5 support, so you can connect directly from Radarr and Sonarr, but SABnzbd still does not have it, so you will still need a VPN solution like this for the time being, though I do see a new feature may be coming for that.
  8. I might be having the same issues as @francishefeng59. I am also using Privado VPN with the OpenVPN client (Privado is the VPN that comes with the newshosting.com subscription). Since November 4 I have been unable to connect to news.newshosting.com, AND Radarr and Sonarr fail to connect to any indexer using the sabnzbdvpn Privoxy container proxy. The most maddening part is that watchdog-script in the log shows the hostname resolves:
       2021-11-14 21:07:05,613 DEBG 'watchdog-script' stdout output:
       [debug] DNS operational, we can resolve name 'www.google.com' to address '142.250.69.196'
     However, running `docker exec -ti binhex-sabnzbdvpn curl -L http://www.google.com` results in:
       curl: (6) Could not resolve host: www.google.com
     And in the SABnzbd WebGUI I get a DNS error for news.newshosting.com:
       [Errno 99] Address not available - Check for internet or DNS problems
     I tried a different Privado VPN server with new ovpn config files to no avail. Connecting to it directly with my PC's OpenVPN client has no problems with DNS or anything. Attached my log with sensitive info redacted. I am not sure where to go from here and would appreciate the help. log-2021-11-14-binhex-sabnzbd-vpn-redact.log
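     One way to narrow this down (a sketch; it assumes the container is the glibc-based Arch image the name suggests): check what resolver the container is actually pointed at, since the watchdog script and curl can end up resolving through different paths.

```shell
# What nameservers is the container actually configured with?
docker exec binhex-sabnzbdvpn cat /etc/resolv.conf

# curl error (6) means the lookup itself failed inside the container;
# getent goes through the same glibc resolver path that curl uses:
docker exec binhex-sabnzbdvpn getent hosts www.google.com
```

     If resolv.conf lists a nameserver that is only reachable outside the tunnel, lookups will fail once the VPN routes come up, which could explain the [Errno 99] symptom.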
  9. I am now running 6.9.2 without issues for either the GPU or the USB controller. That is, 6.9.0 fixed the FLR issues that were causing my system to hang, without having to use the custom kernel with 6.8.3. If you are updated past 6.9.0, you should be good to use the VFIO-PCI Config interface under Settings in your Unraid admin WebGUI to isolate USB controllers. The only caveat is that I had to be careful about which USB controller I isolated: one had the Unraid USB key drive, and that needs to stay. Another USB controller had my network port on it, so I couldn't isolate that. Otherwise, I isolated one controller for pass-through to my Windows VM.
  10. Redacted... In my experience: currently on CenturyLink Fiber in Portland, OR, and I can't crack the nut.
  11. That's a classic use case. It has not gone well for me personally, due to my ISP blocking incoming port 443 requests, so I need to use a different port to forward requests via a reverse proxy. For example, I need to enter nextcloud.example.com:1443 (notice the port number). Most ISPs lock down port 443 (every ISP I have been with across the country did), so you won't be able to use a bare domain without appending the substitute port number you are using (e.g. 1443 or something). It works, but it is not the super-clean option I wanted. Another consideration is static vs. dynamic IP address - you can use a service like DuckDNS.org to get around that, and link to it with an ALIAS or A record from your domain. LDAP (or any single sign-on) is not necessary and would be overkill for a family. You will not be managing users on a regular basis (you add them and they stay, right?). You would use Nextcloud's built-in user management and separate logins for each of Sonarr and Radarr. If you want remote access to YOUR server from outside, you would need to set up a VPN server to access your network. Check out OpenVPN Server to do that. WireGuard VPN would be more about securing your server's outgoing connection.
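     To make the alternate-port idea concrete, a minimal sketch of a reverse-proxy server block (assumptions, all examples: nginx as the proxy, Nextcloud reachable internally at 192.168.1.10:8080, and certificates already issued):

```nginx
# Listen on 1443 because the ISP blocks inbound 443; users browse to
# https://nextcloud.example.com:1443 instead.
server {
    listen 1443 ssl;
    server_name nextcloud.example.com;

    ssl_certificate     /etc/letsencrypt/live/nextcloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:8080;   # internal Nextcloud address (example)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

     On the router, forward external 1443 to the proxy host; everything else works the same as a standard 443 setup.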
  12. I am not an expert in this space, but the first thing we would need to help you is your use case: what are you trying to solve? If you want to access your server or assets remotely, consider setting up a VPN on your server and connecting exclusively with that. The reverse proxy makes me think you want to be able to route requests from subdomains to specific applications, most likely for publicly hosting access to your server. In that case you do need to open ports 80 and 443 to the reverse proxy app. LDAP makes me think you want to set up user management for one or more apps... maybe to have users register for your blog and also allow a user to use some completely separate front-end app? It's a complicated process, for sure.
  13. You said it, @MsDarkDiva! Thank you for the GitHub link and FAQ quote. I guess it was just lazy/taking for granted the "Ignore IP" for me. Took me 2 weeks to finally sit down and try to solve this.
  14. Just did my upgrade to Unraid 6.9.1, and it is all running smoothly! I have not yet removed the kernel VFIO definitions from the boot flash, but I will switch over to the new, integrated menu in Settings when I get a chance, because I should be able to remove both pcie_no_flr (no longer needed) and vfio-pci.ids (now in Settings > VFIO-PCI Config). 🍻
  15. Wanted to close the loop on this. I *think* this issue has been fully resolved with the release of UnRaid 6.9.0, since they are using the Linux Kernel 5.10.x branch: https://wiki.unraid.net/Unraid_OS_6.9.0 The original Linux Kernel fix for the AMD 3xxx/Xen CPUs was implemented in 5.8.x, so we should be good now: https://github.com/torvalds/linux/commit/39a1af76195086349c4302f01e498a5fcbcb11d6 I have not yet tried it, but I will when I have potentially a few days to feel the frustration if I have to revert.
  16. I like this Q - my take: Best bet is a dedicated NAS - keep it up 24/7 with a stable, no-fuss OS, and it should be independent of your main desktop environment. (This is why I like using a VM as my main gaming desktop on top of Unraid.) Like @trurl said, you can only do RAID when you have multiple disks (you need at least 3 drives for RAID 5). You only need a RAID setup for disaster recovery, so if you are not worried about a disk failure wiping your content, you wouldn't need it at all. Unraid lets you use a few different methods that are "like" RAID, using a "parity drive". You don't need to install all disks at the same time no matter what route you take. I like Unraid because adding/upgrading disks is EASY and SAFE. Plex is a common use case for Unraid, and it's as easy as searching for "plex" in the app store, which literally sets it up in a Docker container and lets you access it via a browser. Installing it on your own desktop should be as easy as any other application, too. I also see a Roon server, but I've never heard of it. (screenshot) @trurl answered the pool and cache questions, but, in short, they represent potentially faster and better ways for your NAS to operate.
  17. Has the latest Unraid (6.9.0) included all the fixes this kernel includes? I would like to upgrade to the latest version (or have a timeline for it), but not if it involves compiling a custom kernel to accommodate x470 motherboards with Ryzen 3000-series CPUs and pass-through. TY! Note, I am specifically referencing the FLR fix in the kernel.
  18. @RaidBoi1904 You are a champ for jumping head-first into this issue with a new UnRaid setup. And, sorry to hear about the problems all at once... they are not so bad when they pop up once every 2 years after a major hardware upgrade, but your first time out can be rough. So, to pass through Audio and USB (or anything), you will need to isolate them (in addition to the no_flr hack right now for this mobo/CPU combo). It looks like you know where you're going - Main > Flash > Syslinux Configuration to add these lines. My setup looks like this for just the USB -- notice the vfio-pci.ids for isolation -- I don't know if I need all of them, but I do them as a group and it works:
       pcie_no_flr=1022:149c,1022:1487,1022:1485
       vfio-pci.ids=1022:149c,1022:1487,1022:1485
     You will also need to isolate the Audio device to pass it through. On my mobo it looks like it's 10de:10f0, so you would add that to vfio-pci.ids. I use the Arctis Pro Wireless headset, which has an external USB driver, so I don't need the audio controller.
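     For orientation, those parameters end up on the `append` line of the boot label in /boot/syslinux/syslinux.cfg, which is the file the Main > Flash > Syslinux Configuration page edits. A sketch with my device IDs (yours will differ, and your file will have more labels):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_no_flr=1022:149c,1022:1487,1022:1485 vfio-pci.ids=1022:149c,1022:1487,1022:1485 initrd=/bzroot
```

     Everything goes on the one `append` line, space-separated, before a reboot for it to take effect.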
  19. Good idea adding limetech. They may defer until it is included in the Linux kernel, which should come based on that commit I referenced. However, with the Ryzen 3600 and others SO CHEAP and performant, I am sure there are quite a few people moving to them. BTW - Those steps you had to take are good points. It's super annoying; the VM image will "remember" devices that are "removed". You can also edit the XML directly to remove the reference so you don't need the checkbox; however, it's a bit of guesswork to figure out which XML element(s) it is.
  20. Wanted to confirm this. First install of OPNsense (v20.1 DVD ISO), and I was unable to see the default UnRaid network interface with Q35. Reinstalled with i440fx-4.2 and it worked without a hitch. I see the same on the OPNsense forum - https://forum.opnsense.org/index.php?topic=13607.0 I should be getting my quad-port NIC this week. Excited to get running.
  21. Wanted to follow up. The cause of my issue [with the Ryzen 3900x hanging while trying to pass through a USB 3.0 controller] was totally that FLR issue posted above. Luckily, someone on this forum had already compiled a kernel with a temporary fix, and I used that. Find that custom kernel for Unraid 6.8.3 here: Note that I tried Unraid 6.9.0-beta1 and it did not yet have the FLR fix in the Linux kernel. It will eventually make it into the Linux kernel, but probably not until 5.8... So, it might be a while before it makes it into Unraid; read more about the commit - https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?h=pci/virtualization&id=0d14f06cd665 @killeriq - not sure how you got it to work with the Unraid 6.9.0 beta 1, but if it works, I would say that's the important part.
  22. Wanted to follow up. The cause was totally that FLR issue posted above. Luckily, someone on this forum had already compiled a kernel with a temporary fix, and I used that. Note that I tried Unraid 6.9.0-beta1 and it did not yet have the FLR fix in the Linux kernel. Find that custom kernel for Unraid 6.8.3 here:
  23. @JoeBloggs - I just used the kernel. Yes, just copy it to your flash drive (the /boot/ directory). Save the stock kernels as .bak or something in case you need them. Everything will boot like normal. Just make sure you match the version. Leoyzen has a version attached earlier on this page for Unraid 6.8.3, the same one I am using. @Leoyzen - Wanted to say thank you for the FLR fix; I used your kernel, added the parameters I needed, and am up and running with the USB controller in my VM! Added `append pcie_no_flr=1022:149c,1022:1487,1022:1485` and vfio-pci.ids with those same IDs. Thanks again!
  24. @Leoyzen - I've never used a custom Kernel before, but I am running into a Ryzen 3000 FLR error. Does the Kernel you provide cover this issue? I found the commit that's [going to be??] included: https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?h=pci/virtualization&id=0d14f06cd6657ba3446a5eb780672da487b068e7 What version of Linux kernel will that make it into? 5.7?
  25. So, I think this is the resolution to the problem "PCI: Avoid FLR for AMD Matisse HD Audio & USB 3.0": https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?h=pci/virtualization&id=efaa35873d66bf4a4903f757333692766e34e448 It should be brought into some new version of Linux... Does anyone know what version and when Unraid will get it?? My first time looking through these commits.