danioj


Posts posted by danioj

  1. Hi All,

     

    I have been using the Linuxserver.io swag container for many years now. I use this to generate a wildcard certificate for my custom domain via LetsEncrypt (LE) and it works great.

     

    Since Apple recently amended the Safari autofill feature on iOS to only work when you are accessing a website via HTTPS, I am being forced to enable HTTPS on my LAN services. Basically, I use unbound within pfSense to host-override my app.domain.etc addresses (within my LAN) and point them to the internal IPs of my LAN services.
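
    For anyone wanting to replicate that part, the override can be expressed in raw unbound syntax (pfSense generates this for you if you add it under the DNS Resolver host overrides; the hostname and IP below are examples only, not my real ones):

```text
server:
    # Map the public hostname to the service's internal LAN IP
    # (example name/address - substitute your own domain and hosts)
    local-data: "app.example.com. IN A 192.168.1.50"
    local-data-ptr: "192.168.1.50 app.example.com"
```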

     

    Anyway, I digress. I have been able to get this working and install my wildcard certificate for every LAN service except unRAID. That's because what unRAID is asking for is slightly different to everything else: unRAID is asking for a bundle file.

     

    Swag only produces these files (I believe it used to create a bundle file but it doesn't seem to anymore):

     

    cert.pem

    chain.pem

    fullchain.pem

    privkey.pem

    README

     

    None of these appear to be the bundle file that unRAID is looking for. When I open the file that unRAID already has, there are more certs in it than in any single file generated above. My Google-fu suggests that a bundle file is some combination of these files, but there is so much LE information out there that I'm finding it hard to figure out. It also doesn't help that there are so many different names for the same thing, things change over time, and even file names don't seem to match what they are commonly referred to as.

     

    So I turn to you, good community: can you help me figure out how to generate the bundle file that unRAID is after from the files that LE certbot generates when producing/renewing certificates?
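
    From what I've gathered so far (unconfirmed, so treat this as an assumption), the "bundle" may just be fullchain.pem followed by privkey.pem concatenated into one PEM file. A sketch of that idea, using stand-in files in place of the real SWAG/certbot output so the ordering is visible:

```shell
# Assumption: the bundle unRAID wants is fullchain.pem + privkey.pem
# concatenated into a single PEM. Stand-in files are used here in place
# of the real SWAG/certbot output directory.
certdir=$(mktemp -d)
printf 'FULLCHAIN\n' > "$certdir/fullchain.pem"   # stand-in for the real chain
printf 'PRIVKEY\n'   > "$certdir/privkey.pem"     # stand-in for the real key

# The actual operation: chain first, key second, one output file.
cat "$certdir/fullchain.pem" "$certdir/privkey.pem" > "$certdir/certificate_bundle.pem"
```

    If that assumption holds, the same cat run against the real cert directory after each renewal (e.g. from a certbot deploy hook) would keep the bundle current.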

     

    Thank you!

     

    D

  2. Well, I think I will call this thread a wrap.

     

    There has been no material change since upgrading to v6.12.4. I obviously jumped the gun and "fixed" the issues myself prior to the unRAID "fix" for the macvlan issues.

     

    All that said, my server is back to its usual rock solid self. All of the other things that were plaguing me have also gone.

     

    Starting fresh really did help and I am glad I did it. I can go back to forgetting that the server is there and just using it when needed.

    • Like 2
  3. 1 hour ago, itimpi said:

    You can only be sure that it was not unclean if you successfully stopped the array before shutdown.


    I know what you're saying, but honestly the reset happened in a matter of seconds, i.e. the timeouts for unmounting / shutting down services that drive an unclean shutdown were not exceeded. I'm not aware of a scenario where services shut down and drives are unmounted so quickly that it still causes unRAID to think a parity check is required on reboot. Happy to be educated though.

  4. Upgraded without major issue. Not a big deal as I don't use bonds or bridges anymore; all my dockers are on my secondary network interface and unRAID is on the primary. I moved away when things were unstable, and now that they are stable I CBF reverting back. Don't really notice the speed loss from losing the bond.
     

    My only gripe is that the upgrade caused a Parity Check. Shutdown wasn’t unclean as I was observing it. Shame, as my scheduled monthly only finished yesterday. 

  5. I'm now at the beginning of September and things are still rock solid.

     

    Re my previous issue, the author of UAD confirmed that there are no components of that plugin installed into the OS by default. He also stated that the UAD error could only occur IF UAD was installed, which it is not. Unresolved but hasn't caused any problems so I have moved on.

     

    I see v6.12.4 has been released. I am going to go for it. I have no wish to give up my stability, but it's a personal decision to keep with the stable branch for security fixes, upgrades etc.

     

    Tune in next week! LOL!

  6. On 8/24/2023 at 1:18 AM, rheumatoid-programme6086 said:

    For anyone who comes here by Google, I've looked into this myself a little and I thought I'd share my conclusions.

     

    IF you are going to do this, the way to go is definitely to virtualize unraid and run both it and pfSense on Proxmox. Having said that, doing so may be pointless for those users who are trying to reduce the energy consumption of their homelab (read on).

     

    The biggest issue that I've found with running pfSense on Unraid is that it's impossible to have any VMs running if the array is not started, even if those VMs and their data are entirely hosted on non-array disks. This has a few implications: if the array is down, pfSense is down, and as a result:

     

    - Your DHCP server is down. Fine if you have configured static IPs to important LAN devices (as I have), but if you've instead allocated static IPs in the pfSense DHCP service, everything on your network will be inaccessible.

    - Your VPN server and router are down. This is the big one for me. If the Unraid box loses the array for some reason (or loses power and fails to re-start the array), both your Internet connection and your VPN server are gone. No remote troubleshooting for you.

    - You will be unable to set any of this up in an Unraid trial -- Unraid trials will not start the array until Internet access is available. Conversely, Internet access will not be available until the array is started. Therefore, this setup is completely untestable without first buying a license.

     

    None of these issues exist if you virtualize Unraid on Proxmox and pass through your SATA controller(s) and/or HBA(s). Doing this actually works pretty well, but results in significantly higher idle CPU usage and idle power consumption (~+8-10 watts, on a server that otherwise takes only 15 watts at idle) on my server vs. just running Unraid on the metal. A side benefit of this is that (at least on my system) Proxmox boots and starts pfSense very quickly, whereas the Unraid boot is glacially slow (and it would require still more time to start pfSense after that!).

     

    Since the power consumption hit is so huge for virtualizing Unraid, at this point I'm planning to run Unraid on the HW and run pfSense on a separate system that only takes ~10 watts idle total (same as the hit to my Unraid server). Yes, this situation is extraordinarily frustrating, but I guess I just have to keep in mind that Unraid is primarily a storage server and it isn't reasonable to expect its VM management capabilities to be comparable to a dedicated virtualization product. I understand that "most" applications of virtualization would need the array up anyway ... but it's still disappointing that there's not a way to run a router VM independent of the array status.


    Welcome. It’s a shame someone didn’t see your OP earlier and save you the journey many of us have already taken. 
     

    I’ve been asking for the “feature” to start virtualisation services independently of the array for years now. It doesn’t appear to be something LT want to do. The answer from LT has long just been that they can’t. 

  7. 11 hours ago, dlandon said:

    No.  Nothing in UD is installed in the core Unraid.  The only way that would happen is if UD was installed.

    I can assure you, Dan, it’s not installed. I did a USB wipe and a clean install just a couple of weeks ago and only installed 5 plugins - UAD was not one of them. 

  8. In this version, or a previous version(s), was a decision taken to install the third-party plugin Unassigned Devices (or components of it) by default?

     

    I ask as I got this in the syslog this week:

     

    Aug 17 07:02:38 unraid nginx: 2023/08/17 07:02:38 [error] 5368#5368: *1417305 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: X.X.X.X, server: , request: "POST /plugins/unassigned.devices/UnassignedDevices.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "host.local"

     

    Note the "/plugins/unassigned.devices/UnassignedDevices.php" bit.

     

    I do not have this plugin installed, nor have I installed it since installing unRAID clean a couple of weeks ago. I had a look in /boot/config/plugins/ and it's not there, as I would expect having not installed it.

  9. Another week goes by and the server is still solid as a rock.

     

    Only one odd message to speak of in the syslog this week:

     

    Aug 17 07:02:38 unraid nginx: 2023/08/17 07:02:38 [error] 5368#5368: *1417305 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: X.X.X.X, server: , request: "POST /plugins/unassigned.devices/UnassignedDevices.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "host.local"

     

    Seems to be something to do with Unassigned Devices. The only weird thing is that I don't have the plugin installed. So a bit of a WTAF there. It didn't seem to impact stability though.

     

    Also noticing that by reverting to the new build defaults, vs. whatever settings had evolved over so many years, my server is drawing less power. I can only imagine it being down to the default spin-down settings. Can't remember what my old setup was set to.

     

     

  10. Almost a week on since my start from scratch "rebuild". I am now so glad I did this.

     

    Not a single entry in the log except the nightly trim and backup entries. Outside of that just the smart events as the array spins up and down as it is accessed throughout the day.

     

    One user-driven restart mid-week following installation of the latest Nvidia driver.

     

    Solid as a rock.

    • Like 1
    • Upvote 1
  11. .... my journey to unraid stability!

     

    I am a long-time user of unRAID - since v5, actually. I have always purchased half-decent server hardware (drives aside) and my use case for the OS has never been that far outside the box. For my main server, I have always followed the recommended upgrade pathway without a need to rebuild (sans a USB failure, which was an easy restore from backup).

     

    In recent times, I have been having stability issues, for a number of reasons: bonded dual NICs, VLANs, macvlan, btrfs errors, host access, many plugins, and of course whatever was left over from my years of tinkering and learning as I upgraded. Here is what I have done to try and regain stability.

     

    - started fresh - maintained parity and drive assignments

    - redesigned my network configuration and switched from bonded dual NICs to a single (non-bridged) eth1 for unRAID and a single (non-bridged) eth2 for docker (4 VLANs, including the VLAN for the same network unRAID is on, so I don't use br0 on eth1)

    - as a result of above, switched to ipvlan from macvlan

    - uploaded (via the file manager plugin) docker templates from my old USB backup and reinstalled docker containers

    - switched docker image to xfs

    - assigned dockers to the network I wanted and started them

    - sorted out my shares (which had been auto discovered) as their config was not there

    - created users again

    - installed only "what I need" plugins (9 from 21)

    - disabled VM service  (don't use them)

    - checked parity
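
    For anyone wanting to copy the layout, a per-VLAN docker network from the steps above looks roughly like this (the interface name, VLAN ID, subnet and gateway are illustrative examples, not a prescription - substitute your own):

```shell
# Hypothetical sketch: one ipvlan docker network per VLAN sub-interface
# on the dedicated docker NIC (eth2 here). Adjust IDs/subnets to suit.
docker network create -d ipvlan \
  -o parent=eth2.20 \
  --subnet 192.168.20.0/24 \
  --gateway 192.168.20.1 \
  vlan20
```

    With ipvlan the containers share the parent interface's MAC but get their own IPs, which, as I understand it, is what sidesteps the macvlan call traces.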

     

    Now I am on my way. Everything is working as I need it to. 29 hours down and not a single event in the log to bat an eyelid over. The server is as idle as I have seen it for a long time. No network issues. The GUI is snappy. Just the drives spinning up and down. A nice clean fresh install.

     

    I have this feeling that this is going to work and it only took me 90 minutes (tops) to go from where I was to where I am now.

     

    I intend on checking in to this post regularly (i.e. monthly) with my stability notes and uptime. 

     

    • Upvote 1
  12. 7 hours ago, thatonethur said:

    Trying to rack my brain as to what changed, as SWAG has worked perfectly for years. 

     

    My scenario:

    Multiple dockers (primarily used for Plex and Emby) proxied using SWAG (latest version).  

    Local SWAG docker is on separate "proxynet" network per SpaceInvaderOne's video and points to local port of 3553. 

    From my router/firewall, I port forward inbound traffic to 3553 docker IP (which is same as Unraid box)

    NOTE: I have another docker container that connects directly to a specific port and while 443 is timing out, the other specific port is working just fine.  This narrows down the issue to SWAG specifically for me.

     

    Problem is that I can check my port 443 on home IP address and it works, and then 3 minutes later the connection is refused for another 2-3 minutes and then it's back up accepting connections again for the next 5 minutes and this pattern repeats.  It alternates from connection succeeded to connection refused.  I've seen nothing in the SWAG logs or Firewall logs for that matter, that is jumping out at me.

     

    I think my next step will be to delete the container and start from scratch. I had a failure a few weeks ago on the SSD that the config was stored on, and while every single other docker worked just fine when I restored data to it, perhaps something was missed.

     

    Any ideas?


    Your post made me check my instance and the result was the same as yours. Swag not working. I don’t run anything fancy with mine, just host a basic text landing page. Anyway, immediately disabled my port forwarding and firewall rules while I investigated.

     

    Turns out, my custom network “proxynet” (we must have followed the same guide) has disappeared. 

    Now, like you, I have had a docker config issue in the last few weeks which caused me to delete and recreate my docker image and subsequently reinstall my containers. Figured all would be OK as the config was intact. All I can think is that this process meant the custom network (created via the command line, if I remember correctly) did not survive the docker image getting recreated. 

    I’m pretty sure that once I recreate the custom proxynet network again and re-enable the port forwarding and firewall rules I’ll be back up and running. Will try that tomorrow and report back. 
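
    For reference, recreating it should just be the one command again - as far as I can tell, user-defined networks live inside docker.img and vanish with it. "proxynet" is the name from the guide; any extra options originally used when creating it would need repeating:

```shell
# Recreate the user-defined bridge network the proxied containers attach to.
docker network create proxynet

# Verify it is back before re-enabling the port forwards:
docker network ls --filter name=proxynet
```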

  13. I’m getting a little frustrated with the company and community conversation regarding the macvlan issues.


    I see so many posts where users are taken through the same set of questions as others: post your logs, switch to ipvlan (when things don't work for the user as they did with macvlan), why do you need this, try that, etc. 

     

    This seems like a waste of time and/or a diversionary tactic, as either the outcome ends up being the same or the answers from those providing advice imply they already know the issue. Therefore, why keep repeating it?

    I run docker containers, all through a dedicated interface separate to the main unRAID interface, many with their own IP address on various VLANs on my network. I have all Unifi networking equipment, except pfSense as my edge device. Host access to guest networks is enabled. I get hit by kernel panics very regularly. Switching to ipvlan does not deliver the same outcome as macvlan in my case, so it is not the answer. 

    I’d really like a comprehensive explanation as to what can and cannot be expected to be achieved with either macvlan and/or ipvlan. An honest, warts-and-all answer would be nice, as I can see my use case (which is my own) is not unique. 

    My ask: create an honest, comprehensive post containing the issue and the current explanation, and what can be expected to be achieved from each option. Don't leave out anything you know (i.e. even that which relates to why you might consider the "edge" use cases); even add what you "think" might be happening or the answer, and then make the post a sticky. When a support request gets asked that fits the issue, link to it. Let people discuss it in that thread. Provide updates on progress of a fix, or accept the limitations of the product. Please. 

    • Like 1
    • Upvote 9
  14. 14 minutes ago, spants said:

     

    I had the same problems - I spent hours trying to fix and ordered a replacement drive. It is the potential loss of online data and TIME that caused me headaches.

    That makes sense. Loss of data availability and the time to fix an issue are, I agree, things you would prefer not to have to deal with. Hope all is back up and running for you. 

    • Like 1
    • Upvote 1
  15. 3 hours ago, skyfox77 said:

    This was a CRITICAL bug and I almost lost 10TB of data because of it; do not hasten to add newer versions unless they are checked. This should really only happen on a BETA version.

    Edit: Well, we will see in 5 hours if I have lost anything or not; I am not out of the woods yet. I am referring to the over-2TB disk format in the old version, not this one.


    I’ve stepped back from the forum in recent times but felt compelled to login and respond. 
     

    (Un)Raid is not for backup, it’s for availability. If you are sensitive to “loss” of data, you should take appropriate steps to back it up based on your risk appetite for the unexpected. 

    • Like 4
    • Thanks 1
    • Upvote 1
  16. 10 hours ago, dlandon said:

    Ok, well I asked for your diagnostics because I wanted to see what version of Unraid and UD you were running, not because I don't believe you.  If you are running pre 6.9, UD handles the spin down and doesn't respect SMART test activity.  On 6.9 and 6.10 it shouldn't be a problem because Unraid is managing spin down on UD disks.


    Well, I’ll post the diags when I get home in the morning. But in the meantime, I can assure you my software version is:

     

    unRAID version 6.10.3 Pro

    Plugin version 2022.06.19 (with UAD Plus 2022.05.17 and UAD Preclear 2022.06.10 also installed)

     

    3 hours ago, JorgeB said:

    With 6.10 Unraid makes you disable spindown before being able to run a SMART test to avoid this exact issue. 


    So does this mean that - irrespective of whether it’s a UAD disk or an array disk - I have to disable spin down globally on a disk to do a SMART test?

     

    Seems weird - almost a bug, I’d say - that the spin-down function doesn’t recognise a SMART test as disk activity and prevent the spin down. 
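
    For anyone hitting the same thing, the workaround sequence looks something like this from the command line (the device path is a placeholder; the flags are standard smartmontools ones):

```shell
# After disabling spin-down for the disk in the unRAID GUI:
smartctl -t long /dev/sdX      # kick off the extended (long) self-test
smartctl -c /dev/sdX           # shows the estimated test duration
smartctl -l selftest /dev/sdX  # poll this later for progress/results
```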

  17. I've observed an issue with the plugin that is repeatable.

     

    When trying to execute an extended SMART test (as part of my disk preparation methodology), the test will time out (and never complete) as the disk is spun down after ~30 minutes (which I think is the normal, unchangeable spin-down delay for this plugin).

     

    For some reason the long SMART test does not constitute activity, so it does not prevent the spin down. I don't think this is expected behaviour.

     

    unRAID version 6.10.3 Pro

    Plugin version 2022.06.19 (with UAD Plus 2022.05.17 and UAD Preclear 2022.06.10 also installed)

  18. On 2/1/2022 at 1:57 AM, ich777 said:

    You can use it in combination with Authelia, SWAG and Redis and simply Reverse Proxy the WebGUI if you really want to so that you have Authelia (with DUO 2FA) in front of the unRAID WebGUI.


    While “do-able”, I think this is really poor advice.

     

    To @JK252: please see the formal security recommendations from @limetech

     

    https://unraid.net/blog/unraid-server-security-best-practices

     

    TL;DR: don’t expose your unRAID server to the internet, ESPECIALLY the maintenance GUI. If someone gets access, a web-based command prompt with root permissions is a click away.