danioj

Members
  • Posts: 1530
  • Days Won: 1

Everything posted by danioj

  1. Understood. I will set some time aside at the weekend to do as you ask.
  2. I had a hard crash today. Local GUI, remote GUI, SSH, network all unresponsive. Had to hard reset. Not very helpful in terms of debugging BUT it does add to the narrative of the RC especially if others have experienced similar. After reboot there was nothing obvious that would have caused it and of course I had just turned off mirroring of the syslog. I have turned it back on again to try and capture what happened if it happens again.
  3. @ljm42 just checked through Safari on my iMac (OSX 11.5.1) and everything works fine. Seems it is only Safari for mobile.
  4. No problem. I have unRAID set to open on the “Main” tab instead of the “Dashboard” tab. When I try to navigate from “Main” to any other unRAID tab (Dashboard, Apps, Settings, Docker, VMs etc.), a plain click does nothing. I can, however, open those tabs using the browser’s “Open link in new tab” feature. This behaviour only occurs from the “Main” tab: from any other tab (e.g. “Dashboard”) I can navigate anywhere with a single click, including back to “Main”, but once I’m on “Main” I’m stuck again. @ljm42 hopefully that is clearer. If not I can do a screen recording. The easiest way to reproduce this is to grab an iPhone, but I’d suggest there is something different behind the “Main” tab compared with the others.
  5. I’ve found a small issue with the new version. When using the GUI on my iPhone (Safari on iOS 14.7.1) none of the tabs work on click. They work fine if I open them in a new window/tab though. The problem seems to be localised to Safari, as the behaviour is as expected in an alternative browser app (e.g. DuckDuckGo). I was able to replicate it on my GF’s iPhone too, which is one iOS version lower; she hasn’t upgraded yet.
  6. You spurred me on to do something I have been meaning to do @ezhik. You're absolutely right, the stock Intel cooler is a POS. It always annoyed me how the system would reach unstable temperatures in some conditions (e.g. a 40 degree day under load), so I have been meaning to switch it out. Well, now I have. I ordered a Noctua NH-L9i immediately after reading your post and it came today. It's now installed and stress tested, and temps are visibly lower both idle and under load. EDIT: I have my server in a dedicated comms cupboard in my home office and I've just noticed (as I ponder the end of the day) how much quieter it is too.
  7. I know this is an old topic but I felt compelled given recent activity. Since I built my server back in 2016 it has served me so well and has hardly skipped a beat. Temps are still low and it has allowed me to scale up tremendously. Today... well, it makes me glad of the planning and time I put into choosing the right parts. When I built it I speculated that I would be able to get capacity up to 56TB with 8TB drives. Today my array has a usable size of 87TB and there are still some of the original 3TB WD Reds in there too (see my sig). Heaps of headroom, especially with the sizes of disks today. When I upgrade, well, I don't think I will. I think I will keep this bad boy going, retire it to be the backup server and build a new rig. I don't think I could part with it now! LOL! Some photos - then and now!
  8. Haha. Gotchya. I'll keep an eye out. Thanks for the update.
  9. @ljm42 I read some of your comments regarding this Plugin on the 6.10.0-rc1 thread. While I am going to maintain my distance from the remote access component of the Plugin, I do see value and convenience in the functionality to backup/restore the OS USB. The only thing that makes me pause is that, AFAIK, the backup functionality still doesn't use encryption. Are there any plans to introduce encryption?
  10. @ljm42 I really appreciate your responses, thank you. I have now been able to restore the behaviour I was happy with. I have also been able to chase down a Windows 10 VM that was not shutting down so that is now fixed. I understand re SSO between UPC and forum. I've been using RC 6.10.0-rc1 for a couple of days now. Nothing significant to report.
  11. Installed v6.10.0-rc1 from v6.9.2 very easily and without fuss. I have observed three very noticeable things:
- In v6.9.2, when accessing my local DNS address, the server would redirect me to <personalhash>.unraid.net. This doesn't happen anymore. I note the LE certificate provisioning changes BUT I have my Use SSL/TLS setting set to Yes and didn't see a reason why this behaviour would change.
- Even though I don't use MyServers, I decided to log into my account (mainly because I prefer how it looks while logged in, as opposed to the ugly orange icon) and noticed that I was not asked to log in using 2FA; also, when I click to access the Forums through the menu link, it did not auto log me in.
- After reboot of the server, following the upgrade, the server started a Parity Check.
Are these intended behaviours or is there a bug here to be reported?
  12. Interesting reply. I think I will leave it there with remote access using this feature. Given the ease with which an OpenVPN or WireGuard connection can be established (using either your router or unRAID itself - natively or via a Docker container), I see no reason to change that. It was good to experiment though. Thank you @ljm42 for taking the time to make such a thorough reply, it is appreciated.
  13. Interesting feature that I have just set up. I was interested in remote access and got it up and running fine. For kicks, I went to https://<MYDOMAINNAME>:MYSERVERS_WAN_PORT and it brought up the unRAID GUI. Is this expected? Thinking about it, I guess it is, given I've forwarded the random port number I chose to 443 of the unRAID box. For some reason I had it in my head that a remote connection not coming from the MyServers server would just be dropped.
I'm not sure how I feel about it though. Given we bang on (quite rightly) about how unRAID isn't hardened for Internet access, isn't this an exploit waiting to happen? Isn't this ultimately relying on the unRAID standard login page being able to withstand an attack? Technically - given everyone knows that the login for unRAID is 'root', and the port unRAID is serving on via my WAN address is just a simple port scan of my WAN IP away - an attacker is one guess of my root password away from full access to my server. Yes, my root password is very, very complex - I don't even know it - but it's still technically a possibility.
Given the above, and if the unRAID GUI is always exposed anyway, why bother with MyServers for remote access at all? Why wouldn't I just access unRAID via that random port and use my complex root password? It feels a bit like creating a fancy way to access my house via next door (which has great security) when I still have a gate to the house hidden behind a bush somewhere on the boundary (largely non-visible and hard to find, but not impossible), protected by a complex code that someone can just bang away at for as long as they like until they guess it (or get bored).
Maybe I'm thinking about this all wrong, and/or the GUI shouldn't come up when I access the port directly. *Shrug*. I think I'll turn this off for now and go back to using a VPN for remote access.
EDIT: Maybe this could be hardened (if it is expected) with a firewall rule that drops any connection made to that WAN port by anything other than the MyServers server? hmmm.
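The hardening idea in that EDIT could look something like the iptables sketch below. Everything in it is an assumption: `43210` and `relay.example.com` are placeholders (not real MyServers values), and since the connection arrives via your router's port forward, the equivalent rule may belong on the router rather than on the unRAID box itself.

```shell
# Hypothetical sketch only: the WAN port and relay hostname are
# placeholders, not real MyServers values.
WAN_PORT=43210
RELAY_HOST=relay.example.com   # placeholder for the MyServers endpoint

# Allow traffic on the forwarded port only from the relay's address(es)...
for ip in $(getent ahostsv4 "$RELAY_HOST" | awk '{print $1}' | sort -u); do
    iptables -A INPUT -p tcp --dport "$WAN_PORT" -s "$ip" -j ACCEPT
done
# ...and drop anything else that reaches the forwarded port directly.
iptables -A INPUT -p tcp --dport "$WAN_PORT" -j DROP
```

One caveat with this approach: if the relay's IP addresses rotate, the ACCEPT rules go stale, so the lookup would need to be re-run periodically.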
  14. I woke this morning to SWAG not working. In the log I get this:
nginx: [emerg] "proxy_redirect" directive is duplicate in /config/nginx/proxy-confs/youtube-dl.subfolder.conf:22
youtube-dl.subfolder.conf is there in proxy-confs without a .sample at the end. I did not change this.
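Assuming an update dropped the `.sample` suffix from the bundled conf (so nginx started loading a duplicate copy), one fix is to rename the active file out of nginx's way. A sketch - the appdata path is an assumption, so point it at wherever your SWAG config actually lives:

```shell
#!/bin/bash
# Hypothetical sketch: disables an active proxy conf when the matching
# .sample also exists, so nginx stops loading the duplicate.
disable_dup_conf() {
    local dir="$1" name="$2"
    if [ -f "$dir/$name" ] && [ -f "$dir/$name.sample" ]; then
        # nginx only includes *.conf, so renaming removes the duplicate
        mv "$dir/$name" "$dir/$name.disabled"
    fi
}

# Default path is an assumption; adjust to your appdata layout.
disable_dup_conf "${CONF_DIR:-/mnt/user/appdata/swag/nginx/proxy-confs}" \
                 "youtube-dl.subfolder.conf"
```

Restart the SWAG container afterwards so nginx re-reads the proxy-confs directory.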
  15. Thanks for the guidance. For now I’ve just switched to using the graphics card and have switched the BIOS to prioritise the card. All is well. If my use case changes and I need to use the card in a VM then I might try your solution to allow me to use the on board GPU for maintenance. Until then, the easy solution works given I only have unRAID and no VMs using the card.
  16. I know this behaviour (or similar) has been raised recently but I cannot find the definitive solution, so I am reaching out for some help. I cannot get to the graphical logon prompt on a screen connected directly to the server in unRAID OS GUI Mode; all I get is a black screen following the text boot sequence (which is displayed on screen as you would expect). Once at the black screen I can Ctrl-F1, which takes me to the command prompt where I can log in as if I were in normal unRAID OS mode. Everything else (including access to the Web GUI from a browser on a separate machine) operates just fine.
I run a Supermicro X10SL7-F motherboard which has ASpeed AST2400 onboard video. I have the Nvidia Plugin installed so I can leverage the power of my installed Nvidia GeForce GTX 1050 Ti in Docker containers. I have done some troubleshooting and run the server in unRAID OS GUI Safe Mode, where the graphical login prompt comes up just fine. This helped me identify that it was likely a plugin causing the behaviour, and through a process of elimination I got to this plugin: when it is installed I can't access the graphical login prompt; when it is removed, I can.
I am running unRAID version 6.9.2, Nvidia Plugin version 2021.05.19, and Nvidia driver version 470.42.01 (latest as of time of writing). I do not boot UEFI. My BIOS is set to prioritise the onboard ASpeed video and my screen is plugged into the VGA port on the motherboard. Diagnostics are attached. unraid-diagnostics-20210623-1323.zip
  17. I find this interesting. Should LT decide not to implement code requiring ALL arrays to start before Docker and VM services can start, then you essentially bypass the current restriction. Pondering this for two minutes only, I guess you would have to define some sort of array hierarchy: which comes online first, in what order, and which one is responsible for allowing other services to start (#1?). If it was #1, you would just create a simple small array (even a RAM disk) just to get services running. Like I said... interesting.
  18. No idea what happened. Had nothing to do with 6.9.2 as it was running fine after the upgrade. Plex just stopped responding and was dropping any connection being made to it. Nothing obvious in the Plex logs either. Life is too short to troubleshoot sometimes. I just deleted the whole thing, reinstalled, scanned the library again. All is up and working again. Easy. I seem to recall that sometimes Plex 'Databases'? can get corrupt? Who knows. Anyway, all is well again and family have their tv.
  19. Evening all, Not sure if this issue is localised to me, but this evening Plex became suddenly unresponsive. No changes or amendments. I'm on unRAID OS 6.9.2, version tag set to latest. Normal practice for me is to restart the container. Safari now complains that the connection to the server was "dropped". I get this in the log:
[linuxserver.io ASCII banner]
Brought to you by linuxserver.io
To support LSIO projects visit: https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 40-chown-files: executing...
[cont-init.d] 40-chown-files: exited 0.
[cont-init.d] 45-plex-claim: executing...
[cont-init.d] 45-plex-claim: exited 0.
[cont-init.d] 50-gid-video: executing...
[cont-init.d] 50-gid-video: exited 0.
[cont-init.d] 60-plex-update: executing...
Atempting to upgrade to:
wget: unable to resolve host address ‘downloads.plex.tv’
########################################################
# Upgrade attempt failed, this could be because either #
# plex update site is down, local network issues, or   #
# you were trying to get a version that simply doesn't #
# exist, check over the VERSION variable thoroughly &  #
# correct it or try again later.                       #
########################################################
[cont-init.d] 60-plex-update: exited 0.
[cont-init.d] 99-custom-scripts: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Starting Plex Media Server.
Trying to access downloads.plex.tv takes me to a redirect to https://www.plex.tv/media-server-downloads/. Plex's uptime page shows that all services are running. Not sure how to troubleshoot this one?
Is anyone else having issues? D
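For a `wget: unable to resolve host address` error like the one in that log, a reasonable first step is comparing DNS resolution on the host versus inside the container. A hedged sketch - the container name `plex` and the presence of `nslookup` inside the image are assumptions:

```shell
#!/bin/bash
# Hypothetical troubleshooting sketch: compares name resolution on the
# host and inside a container assumed to be named "plex".
STATUS_FILE="${STATUS_FILE:-/tmp/dns-check.done}"

# Host-side lookup
if getent hosts downloads.plex.tv >/dev/null 2>&1; then
    echo "host: resolves"
else
    echo "host: FAILED to resolve downloads.plex.tv"
fi

# Container-side lookup, only if docker is available on this machine
if command -v docker >/dev/null 2>&1; then
    if docker exec plex nslookup downloads.plex.tv >/dev/null 2>&1; then
        echo "container: resolves"
    else
        echo "container: FAILED (or container not running)"
    fi
fi

touch "$STATUS_FILE"   # marker that the checks completed
```

If the host resolves but the container doesn't, recreating the container usually refreshes its resolver configuration.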
  20. Hi All, I am experiencing an unknown issue with my server. Since I upgraded to 6.9.1 I have experienced 2 "freezes". The server (and dockers) is unreachable via any means over the network or via the console; the only solution is to hard reset. By that time the log is empty and diagnostics are useless. Nothing in an initial view of the log following the reset indicates an issue, nor is there any obvious disk issue following review of SMART data. It would be really helpful if I could configure unRAID to capture diagnostic data periodically, in the hope of catching something meaningful that would indicate what is causing the crashes. Does anyone have a script to do this that I could use, beyond just running the command in the User Scripts plugin - ideally something that also manages how many versions of diagnostics are kept, so I don't fill the USB drive with diagnostics files? Thanks D
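Something along these lines could be dropped into the User Scripts plugin on a cron schedule. It is only a sketch: the output directory and retention count are assumptions, and on a real server you would likely point `OUT_DIR` at the array rather than `/tmp` so captures survive a reboot.

```shell
#!/bin/bash
# Hypothetical periodic-diagnostics sketch for the User Scripts plugin.
# OUT_DIR and KEEP are assumptions; on unRAID, point OUT_DIR at the
# array (or /boot if you accept the flash-drive wear).
OUT_DIR="${OUT_DIR:-/tmp/periodic-diags}"
KEEP="${KEEP:-10}"   # how many diagnostics zips to retain

prune_diags() {
    # Keep only the newest $2 zips in directory $1; delete the rest.
    ls -1t "$1"/*.zip 2>/dev/null | tail -n +"$(( $2 + 1 ))" | \
        while read -r f; do rm -f "$f"; done
}

mkdir -p "$OUT_DIR"
# unRAID ships a `diagnostics` CLI; guard it so the sketch is harmless
# on systems without it.
if command -v diagnostics >/dev/null 2>&1; then
    diagnostics "$OUT_DIR"
fi
prune_diags "$OUT_DIR" "$KEEP"
```

In the User Scripts plugin this would be set to a custom cron schedule (e.g. hourly), so that after a freeze the newest retained zip shows the last healthy state.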
  21. I noticed after upgrading from 6.8.3 to 6.9.1 that one of my docker containers (which runs on the host) lost its connectivity to a docker container which runs on a user-defined network with its own IP. I could not understand why, as all the settings - including the "Host access to custom networks" option - were checked. After a little playing, I stopped the array, turned the option off and saved, then turned the option back on and saved. I started the array and communication between the docker containers was restored. I am not sure I can replicate this now that it is fixed and I have obviously already upgraded, but I hope this serves to help someone else who finds themselves in this situation.
  22. @comet424 here is my 2c. Home Assistant, in its purest sense, is Home Assistant Core. This can typically be installed on any Linux OS (although the HA devs would love to stop supporting that method of installation, AFAIK), but it seems to me that the most suggested installation is via Docker. Hass.io is an OS that comes pre-packed with Home Assistant Core AND with Docker running, as well as a Docker “assistant” that can be accessed within Home Assistant Core to manage the containers. This allows you to install “Add-ons” like MQTT etc., but in essence, as it is a Docker implementation, you can pretty much install any Docker container on it, and people do. This is why Hass.io is either run on a machine (like a Pi or NUC) or in a VM on unRAID.
Most people who use unRAID that I know of, myself included, see running a VM - which has an OS and ANOTHER version of Docker (however simple that Docker version might be to use via the Hass.io assistant) - as an unneeded extra layer of virtualisation. Therefore I install my HA-related Docker containers (Hass.io Add-ons) on unRAID directly and point Home Assistant at them. Yes, I don’t have the fancy Hass.io supervisor, but I like the control and don’t mind the extra layer of “complexity” of doing so. By also running on unRAID I get all the network goodies (like running a Docker container on its own IP etc.).
There is an argument that running Hass.io in a VM keeps the whole thing contained, and all you have to do to keep your automation hub safe is back up the VM file, but I back up everything anyway. Hope this helps.
  23. This is a wonderful post, written with insight, sensitivity and purpose. Thank you for taking the time to write it. I read it a few times, I hope others do the same.
  24. This post represents my own personal musings and opinion. This thread (and the broader situation) interests me on a number of levels. We (royal we) bang on (quite rightly) about our community and how supportive, inclusive, helpful and respectful it is - values, really, that any organisation in the world would be lucky for its members to behave consistently with. Saying that, this situation has shown that there is an undercurrent of what I can only call bitterness, and to some extent entitlement, in some community members. I don’t feel this is across the board by any means. However, for some, there seems to be a propensity to believe the worst of every word posted (almost as if we are waiting to jump on any poor, ill-considered or rushed post) rather than the positive - which, given how together we are supposed to be, is very surprising. There could be any number of reasons for this - the whole keyboard warrior thing, immaturity, the mixture of ages of people talking to each other - I just don’t know. I think we also have to acknowledge that we are all living in unprecedented times. We are very geographically spread, and some are copping it harder than others for sure, but we are all in a very abnormal place.
I have also observed that some individuals (whether due to their contribution to this forum or their development work etc.) appear to think they should be subject to different treatment from others. I always felt that when doing something in the open source / community space, the only reasonable expectation was appreciation from the community for that work, and that was enough. It’s volunteer work that plays second fiddle to real life (a fact that many are rightly quick to throw out when the demands of the community get too high). Irrespective of how much those developments have added value to the core product, I don’t think those expectations could or should change.
Saying that, the community includes the company too, and those expectations of appreciation for work done (especially where commercial gain is attained from that work) carry to them too. The thing that surprised me the most, though (and again this could be due to the reasons above - or others), is how quick some have been to react negatively (or even just walk), yet how slow some have been to react in a more positive way. Perhaps that’s human nature. As I write this I am drawing to the conclusion that we as a community perhaps need to manage our own expectations of what is reasonably expected of a community member, developer or company. This might (or might not) help situations like this moving forward.
  25. Unhelpful, inflammatory, provoking and downright unnecessary. Also, if I was to define the set of values that makes this community group so strong I would say there isn’t a word in your post that would align with them.