Leaderboard

Popular Content

Showing content with the highest reputation on 11/22/20 in all areas

  1. This post represents my own personal musings and opinion. This thread (and the broader situation) interests me on a number of levels. We (royal we) bang on (quite rightly) about our community and how supportive, inclusive, helpful and respectful it is. These are values that any organisation in the world would be lucky for its members to live by. Saying that, this situation has shown that there is an undercurrent of what I can only call bitterness, and to some extent entitlement, in some community members. I don't feel that this is across the board by any means. However, for some, there seems to be a propensity to believe the worst of every word posted by default rather than the positive (almost as if we are waiting to jump on any poorly worded, ill-considered or rushed post), which given how together we are supposed to be is very surprising. There could be any number of reasons for this, whether it be the whole keyboard warrior thing, immaturity, or the mixture of ages of people talking to each other; I just don't know. I think we also have to acknowledge that we are all living in unprecedented times. We are very geographically spread and some are copping it harder than others for sure, but we are all in a very abnormal place.

     I have also observed that some individuals (whether that be due to their contribution to this forum or their development work etc.) appear to think that they should be subject to different treatment from others. I always felt that when doing something in the open source / community space, the only reasonable expectation was appreciation from the community for that work, and that was enough. It's volunteer work that plays second fiddle to real life (a fact that many are rightly quick to throw out when the demands of the community get too high). Irrespective of how much those developments have added value to the core product, I don't think those expectations could or should change. Saying that, the community includes the company too, and those expectations of appreciation for work done (especially where commercial gain is attained from that work) carry to them too.

     The thing that surprised me the most, though (and again this could be due to the reasons above, or others), is how quick some have been to react negatively (or even just walk) but how slow some have been to react in a more positive way. Perhaps that's human nature. As I write this I am drawing to a conclusion that we as a community perhaps need to manage our own expectations of what is reasonably expected of a community, developer or company member. This might (or might not) help situations like this moving forward.
    9 points
  2. I'm not ephigenie, but I liked the idea, so I'll give my two cents on it, for what it's worth: maybe something like a broad-strokes roadmap. Nothing too concrete, but even a post with headings like: features being worked on for the next major/minor release (multiple cache pools); bugs being squashed for the next major/minor release (SSD write amplification); future possibilities currently under investigation (ZFS). You could make the forum visible to community developers only. Or, if you're feeling particularly transparent, to forum members with enough posts that they've at least been semi-active and around for a while. The understanding would be that anything mentioned is subject to change or reprioritisation, and that obviously certain things can't be talked about for market/competitive reasons or whatever (or just because you don't like spilling all the beans on shiny new things). This would allow you to gauge community interest (at least, the portion of the community active on the forums) around given features, which might factor into prioritisation. As well, it gives us members a peek at the direction unRAID is heading, and an appreciation for why so-and-so wasn't added to the latest patch, or why such-and-such a bug is still around.
    3 points
  3. The issue appears to be that if you have any VM running that uses VNC, starting it would mess up the displays on the dashboard. Either way, this issue is fixed in the next rev.
    3 points
  4. Hello all, I've been running Unraid on my QNAP TS-853A for about a month now. I intend to use Unraid full-time on this box, with this as the primary file server in my home. Before switching to Unraid, I Googled whether it was possible to run Unraid on a QNAP and the consensus was: "Probably, just try it." As long as you can access the BIOS and change the boot order, you should be able to boot into Unraid. Some people report not all of their drive bays working depending on which SATA controller was being used (that forum thread is from 2016, so an update may have added compatibility for those controllers). All 8 drive bays are working on my TS-853A. The rest of the hardware is working just fine. The LCD screen on the front of the QNAP constantly says "SYSTEM BOOTING >>>>>>>", but I can overlook that. One day I may try to see if I can modify that.

     I just want to document that it's possible, and easy, to run Unraid on a QNAP. My only experience with QNAP boxes is with my own, so your mileage may vary. This is my first Unraid installation, but everything was incredibly straightforward. I had no complications getting things set up. As others have stated in the past, these NAS boxes come with 500 MB-ish of DOM flash storage. I didn't try to install Unraid on the DOM because I didn't know if it had a UID. It would be nice to use, but I don't see 500 MB being suitable in the long term. I actually removed the DOM because my BIOS would ignore my boot order and prioritize the DOM over my Unraid flash drive. I'll report back if I ever get around to booting Unraid from the DOM.

     The Celeron N3160 and 8 GB of DDR3 prevent me from going wild with Dockers and VMs, but I had that expectation going in; my primary purpose for this box is a file server for my business. But it's running a Minecraft server for my nephews, Bitwarden, and a few other services that I use for work/play. I let my 2015 Nvidia Shield act as my Plex server, pulling content from my Unraid SMB shares. Plus, boot times went from 10+ minutes to just a couple. Unraid has breathed new life into my QNAP. This was a hand-me-down of sorts, but I've always been a Synology guy. This QNAP was a clear upgrade from the hardware of the Synology, but the QNAP software experience was subjectively much worse. So far, it's been more than suitable for full-time use. If anyone else uses a QNAP for Unraid, I've attached the case model design I've made. ts-853a.svg
    2 points
  5. I am using linuxserver.io Plex docker with unraid nvidia plugin and switched to this plugin and hardware transcoding is fine and no other issues. Sent from my iPhone using Tapatalk
    2 points
  6. Nvidia-Driver (only Unraid 6.9.0beta35 and up)

     This plugin is only necessary if you are planning to make use of your Nvidia graphics card inside Docker containers. If you only want to use your Nvidia graphics card for a VM, then don't install this plugin!

     Discussions about modifications and/or patches that violate the EULA of the driver are not supported by me or anyone here; this could also lead to a take-down of the plugin itself! Please remember that this also violates the forum rules and will be removed!

     Installation of the Nvidia drivers (this is only necessary for the first installation of the plugin):

     1. Go to the Community Applications App, search for 'Nvidia-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg
     2. Wait for the plugin to install successfully (don't close the window; wait for the 'DONE' button to appear). The installation can take some time depending on your internet connection, since the plugin downloads the Nvidia driver package (~150MB) and then installs it on your Unraid server.
     3. Click on 'DONE' and continue with Step 4 (don't close this window for now; if you already closed it, don't worry, continue reading).
     4. Check that everything is installed correctly and recognized: go to PLUGINS -> Nvidia-Driver (if you don't see a driver version at 'Nvidia Driver Version', or you get another error, please scroll down to the Troubleshooting section).
     5. If everything shows up correctly, click on the red alert notification from Step 3 (not on the 'X'); this will bring you to the Docker settings (if you already closed that window, go to Settings -> Docker). On the Docker page change 'Enable Docker' from 'Yes' to 'No' and hit 'Apply' (you can now close the message from Step 2).
     6. Then change 'Enable Docker' back from 'No' to 'Yes' and hit 'Apply' again. This step is only necessary for the first plugin installation, and you can skip it if you are going to reboot the server. The background to this is that when the Nvidia driver package is installed, a file that interacts directly with the Docker daemon is installed as well, and the Docker daemon needs to be reloaded in order to load that file.

     After that, you should be able to utilize your Nvidia graphics card in your Docker containers; for how to do that, see Post 2 in this thread.

     IMPORTANT: If you don't plan or want to use acceleration within Docker containers through your Nvidia graphics card, then don't install this plugin! Please be sure to never use one card for a VM and in Docker containers at the same time (your server will hard lock if the card is used in a VM and then something wants to use it in a container). You can use one card for more than one container at the same time, depending on the capabilities of your card.

     Troubleshooting (this section will be updated as soon as someone reports an issue and will grow over time):

     "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.": This means that the installed driver can't find a supported Nvidia graphics card in your server (it may also be that there is a problem with your hardware: riser cables, ...). Check whether you accidentally bound all your cards to VFIO; you need at least one card that is supported by the installed driver (you can find a list of all drivers here; click on the corresponding driver at 'Linux x86_64/AMD64/EM64T' and on the next page click on 'Supported products', where you will find all cards that are supported by the driver). If you accidentally bound all cards to VFIO, unbind the card you want to use for the Docker container(s) and reboot the server (TOOLS -> System devices -> unselect the card -> BIND SELECTED TO VFIO AT BOOT -> restart your server). A quick verification sketch follows below this post.

     docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd: unknown device\\n\""": unknown.: Please check the 'NVIDIA_VISIBLE_DEVICES' variable inside your Docker template; it may be that you accidentally have what looks like a space at the end or in front of your UUID, like: ' GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' (it's hard to see in this example, but it's there).

     If you have problems with your card being recognized in 'nvidia-smi', please also check your 'Syslinux configuration' to see whether you earlier prevented Unraid from using the card during the boot process.

     Reporting problems: If you have a problem, please always include a screenshot of the plugin page, a screenshot of the output of the command 'nvidia-smi' (simply open an Unraid terminal with the button on the top right of Unraid and type in 'nvidia-smi' without quotes), and the error from the startup of the container/app if there is any.
    1 point
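     A minimal verification sketch for the troubleshooting steps above, run from an Unraid terminal. nvidia-smi and lspci are standard tools; the grep pattern is just an illustrative assumption:

       # List the GPUs the driver can see, with their UUIDs (used later for NVIDIA_VISIBLE_DEVICES)
       nvidia-smi -L

       # Show overall driver status; if this fails, the driver can't reach a supported card
       nvidia-smi

       # Check which kernel driver each Nvidia device is bound to; "vfio-pci" means the card
       # is reserved for VM passthrough and is invisible to the Nvidia driver
       lspci -nnk | grep -iA3 nvidia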
  7. Those following the 6.9-beta releases have been witness to an unfolding schism, entirely of my own making, between myself and certain key Community Developers. To wit: in the last release, I built in some functionality that supplants a feature provided by, and long supported with a great deal of effort by, @CHBMB with assistance from @bass_rock and probably others. Not only did I release this functionality without acknowledging those developers' previous contributions, I didn't even give them notification such functionality was forthcoming. To top it off, I worked with another talented developer who assisted with integration of this feature into Unraid OS, but who was not involved in the original functionality spearheaded by @CHBMB. Right, this was pretty egregious and unthinking of me to do, and for that I deeply apologize for the offense. The developers involved may or may not accept my apology, but in either case, I hope they believe me when I say this offense was unintentional on my part. I was excited to finally get a feature built into the core product with what I thought was a fairly elegant solution. A classic case of leaping before looking. I have always said that the true utility and value of Unraid OS lies with our great Community. We have tried very hard over the years to keep this a friendly and helpful place where users of all technical ability can get help and add value to the product. There are many other places on the Internet where people can argue and fight and get belittled; we've always wanted our Community to be different. To the extent that I myself have betrayed this basic tenet of the Community, again, I apologize and commit to making every effort to ensure our Developers are kept in the loop regarding the future technical direction of Unraid OS. Sincerely, Tom Mortensen, aka @limetech
    1 point
  8. 6.9.0-beta35 - Samba issues. Anyone else having issues with Samba since upgrading? Also, about 100000000000 errors in the event log: it keeps looping these errors in the syslog until, it seems, RAM fills up and the server dies, or you stop the Samba service over SSH, which stops it. Using Windows Server 2019 AD; tried with two profiles which both have permission to join the domain.
    1 point
  9. It was really easy. I took a backup of the container just in case. Then, after it was stopped, I changed the repository to 'linuxserver/sonarr:preview' and started it up. It all works perfectly, no issues at all - love the new interface. No need to change any ports/mappings or whatever - it just worked. It may look a little strange as the container name and the folders etc. under appdata still carry binhex's name. I then used the same process to change binhex-jackett, i.e. the repo to 'linuxserver/jackett:latest', as binhex's was hardly being updated. (A short CLI illustration follows below this post.)
    1 point
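     For context, changing the 'Repository' field in the Unraid Docker template simply points the container at a different image while reusing the existing appdata; a rough CLI illustration of the same idea (the tags are the ones named in the post and may have changed since):

       # Pull the images referenced above
       docker pull linuxserver/sonarr:preview
       docker pull linuxserver/jackett:latest

       # On Unraid this is done by editing the container template's 'Repository' field and
       # restarting the container; the existing /config appdata volume is kept as-is.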
  10. 1. A built-in WiFi option would be nice, so you could set up a wireless NIC and use that instead of a WiFi -> Ethernet converter. 2. More statistics, such as how much CPU a docker container/VM is using, similar to Windows Task Manager.
    1 point
  11. https://forums.unraid.net/topic/76501-google-is-better-than-the-internal-search-function/ Pretty sure Unraid tech support uses google to search the forums. The forum search function has been broken for years.
    1 point
  12. 🙂 Relying on an external service for viewing my local media just seems wrong to me. Especially with the latest snafu. https://forums.plex.tv/t/plex-server-web-client-displays-content-not-mine-prior-to-login/650199
    1 point
  13. Only a free version for one month... Oh man, I feel much more comfortable using Emby or Jellyfin...
    1 point
  14. 1 point
  15. It appears this issue has been resolved; both IP and hostname work as expected once again. Unsure of what changed, but after an unrelated whole-system reboot this functionality is back.
    1 point
  16. I now got a Plex Pass and tried it with the linuxserver.io version and it works flawlessly. Just be sure that under Settings -> Transcoder you tick the boxes that say 'Use hardware acceleration when available' and 'Use hardware-accelerated video encoding'. If you don't have those two boxes, hardware transcoding cannot work. I tried it with a 4K 2160p HEVC (h265) HDR file and also a 1080p h264 file.
    1 point
  17. I wouldn’t wait for another hard crash. Much nicer to avoid file system errors with a clean boot than to risk issues. Looks most likely like bad memory. Do the memtest now; if there is bad RAM it often shows up pretty quickly, and you can get on with a warranty RMA. You should also do a file system check on your array and cache as well. Sent from my iPhone using Tapatalk
    1 point
  18. 1 point
  19. You have your VPN provider set to PIA; not sure if this is the only issue: 2020-11-21 12:02:16.199654 [info] VPN_PROV defined as 'pia'
    1 point
  20. I am getting hardware transcoding with Linuxserver's Plex container. I came from beta 30(linuxserver nvidia plugin) to 35 using this plugin. Sent from my iPhone using Tapatalk
    1 point
  21. Thanks! I do believe I had this working where you mentioned, actually; however, I ended up going with an actual database instead of SQLite for mine.
    1 point
  22. So after removing the drive with errors and doing a new config, parity sync completed, so I guess the bad drive was just causing sync to totally freeze up once it got to a bad sector every time.
    1 point
  23. Any random container out there may have its own specific requirements for permissions and ownership. My folders in appdata, for instance, are all owned by nobody:users, whereas yours are 1000:1000. (See the ownership sketch below this post.)
    1 point
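     A minimal sketch of lining up appdata ownership from an Unraid terminal, following the nobody:users example above. The path is a placeholder, and the 99:100 uid/gid mapping is an assumption you should verify; always use whatever ownership the container actually documents:

       # Inspect current ownership (numeric) of a container's appdata folder (path is hypothetical)
       ls -ln /mnt/user/appdata/some-container

       # On Unraid, nobody:users is typically uid 99 / gid 100 (verify with: id nobody)
       chown -R 99:100 /mnt/user/appdata/some-container

       # If the container instead documents 1000:1000, use that mapping
       # chown -R 1000:1000 /mnt/user/appdata/some-container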
  24. The issue seems to have been resolved. Stable for 48 hours. I shut it down to move it back to its usual place, and it has been up for 26 hrs now. Thank you all.
    1 point
  25. I hate to ask, but I've had a devil of a time getting this container to work. I can never get the webgui to load ("UniFi Controller is starting up..." sometimes, other times it won't connect). Can anyone provide any guidance? Trying to accomplish: a clean install (no previous config info/folders/etc), running the container on a separate vlan/subnet (e.g. br0.XX with its own net stack/IP). The IP address has access to the WAN, so I don't think it's the container downloading from the 'net that's the issue. Also tried 'all' the net options, host, bridge; no joy with any of these either. Tried various versions (e.g. unifi-controller:LTS, 5.9, 5.10.24-ls21, etc). Tried the competing unifi controller container, which does work, but the latest 6 controller doesn't support wifi vlans yet (that I can tell), and that's a cornerstone of my network, so I need LTS or something else that still includes vlan support. Still no joy. Any ideas would be greatly appreciated. EDIT: Found it. Turns out unifi-controller, for whatever reason, really, really doesn't like Docker running from remote storage (I have docker.img and /appdata stashed on another server via NFS). My guess is that mongoDB doesn't like the NFS bottleneck? Although I've had zoneminder (you would think plenty DB-heavy) running over NFS like this for ages with no issues, dunno.
    1 point
  26. A couple of options I found: It is as simple as changing the parity drive to "no device" and then starting the array. If you do that, you have to rebuild parity when you reassign it again. Or: http://lime-technology.com/forum/index.php?topic=47711.msg457342#msg457342 You can't simply disable parity. What you can do is a New Config and simply not assign a parity drive. Make sure you note which drive is the parity drive. Once that is done, just assign your data drives to the new array and leave the parity unassigned. Then you can copy data to the array and it will copy much faster, as I describe in my posts above. When you're ready to protect everything, just stop the array, assign the parity disk, and let it do a parity sync. Then you're all done and ready to do a parity check to confirm all went well. Remember, for any existing data on the array (if any), you will be running unprotected while you're copying all your data to the array, so make sure you have good backups of any of that data! Check out that thread and see what you think.
    1 point
  27. Yes, for the initial data transfer, it is advised not to use Parity and to turn off cache.
    1 point
  28. 1 point
  29. It was my older gaming computer. How it all began....... ---------------------------------------------------------------------------------------------- Let me know what you guys think.
    1 point
  30. @crazykidguy So I hadn't set up any extensions yet, but since you asked, I dug into it. You will need to create an appdata folder and link it somewhere (anywhere) in the container (the guacamole frontend, not guacd). Then you will need to create a variable GUACAMOLE_HOME set to the directory you linked inside the container. See attached screenshot. Then create a folder in that appdata directory named extensions, and download and extract guacamole-auth-totp-1.2.0.jar into that folder. Restart the container and you're done (a shell sketch of these steps follows below this post). Sources: https://guacamole.apache.org/doc/gug/guacamole-docker.html#guacamole-docker-guacamole-home https://guacamole.apache.org/doc/gug/totp-auth.html#installing-totp-auth TOTP Jar https://guacamole.apache.org/releases/1.2.0/ Edit - You will be prompted to set up TOTP at login. Out of the box it doesn't appear to be optional; all users will have to set up TOTP.
    1 point
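     A minimal shell sketch of the steps above. The host appdata path, the in-container mapping and the container name are placeholders; the jar itself comes from the 1.2.0 release page linked in the post, and GUACAMOLE_HOME is the variable the guacamole image reads:

       # Create the extensions folder inside the appdata directory that is mapped into the container
       mkdir -p /mnt/user/appdata/guacamole/extensions

       # Copy in the TOTP extension jar, downloaded and extracted from the 1.2.0 release linked above
       cp guacamole-auth-totp-1.2.0.jar /mnt/user/appdata/guacamole/extensions/

       # In the Docker template: map /mnt/user/appdata/guacamole to e.g. /config inside the container,
       # add the variable GUACAMOLE_HOME=/config, then restart the container
       docker restart guacamole   # container name is an assumption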
  31. It's in the help text that the path must end with a trailing slash. But, I did fix the placeholder
    1 point
  32. You're passing through your Intel Graphics P630 to the VM. When you're passing through a video card to a VM, VNC is automatically disabled and you use either the attached monitor on the VM, or RDP.
    1 point
  33. I guess I'll need to do some research on this. If these are kernel modifications that fix bugs with those devices, why not just submit them upstream for merger into the Linux kernel itself? Not saying we wouldn't consider adding this or finding a way to make it supportable, but need to understand all the ramifications before we make any commitments.
    1 point
  34. After 27 days with the new kit of RAM, it hasn't crashed or shown any errors yet. I call this closed.
    1 point
  35. I have the same memory problem, but not because of the usual syslog errors. It's only because I enabled the mover logs and use xfs defragmentation, which adds a massive amount of debug lines to the log. And this bug is not related to the size of /var/log. It's related to the PHP memory limit, which is absolutely fine, but in /usr/local/emhttp/plugins/dynamix/include/Syslog.php this line is a problem:

       foreach (file($log) as $line) {

     This is a RAM killer, as file() reads the complete log file into RAM before executing further commands. It could be easily solved by replacing it with:

       $fh = fopen($log, "r");
       while (($line = fgets($fh)) !== false) {

     Or even better (which limits the output to the last 1000 lines):

       $i = 0;
       $line_count = intval(exec("wc -l '$log'"));
       $fh = fopen($log, "r");
       while (($line = fgets($fh)) !== false) {
         $i++;
         if ($i < $line_count - 1000) { continue; }

     I tested this, and now the RAM usage of my browser dropped by 1.6GB while viewing the syslog page, and this is the first time I was able to open the syslogs through my smartphone, which took ages before. The fixed file: Syslog.zip

     EDIT: Ok, I found the file in the repository: https://github.com/limetech/webgui/blob/master/plugins/dynamix/include/Syslog.php Will try to fix it ^^

     EDIT2: Yeah, my very first github pull request 😅 https://github.com/limetech/webgui/pull/770 Syslog.zip
    1 point
  36. This is true. Unfortunately, communicating development plans like this (ad hoc, not centralized) is probably not the best way to go about it. Rest assured that Tom's post here was only the first step. He and I have been having conversations about how we can better engage with the community beyond just this issue. Expect to see another post from me sometime this week with more details on this. At the end of the day we all realize how vital our community is to our continued success. I just hope that everyone can appreciate how big this community really is now, because managing expectations for such a large audience is a far more daunting task than it was for us back in 2014 (when I started with Lime Technology).
    1 point
  37. Yes, eventually this will fade into the past. However, the fact that many of us are not surprised this happened is indicative of a larger problem with the way limetech governs unraid in relation to the community. Perhaps limetech should create and fill a communications/community liaison role. Overall communication and particularly communication with the community must improve to prevent future bridges being burnt to a crisp.
    1 point
  38. To utilize your Nvidia graphics card in your Docker container(s), the basic steps are:

     1. Add '--runtime=nvidia' in your Docker template under 'Extra Parameters' (you have to enable 'Advanced view' in the template to see this option).
     2. Add a variable to your Docker template with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID' (like 'GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd').
     3. Add a variable to your Docker template with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all'.
     4. Make sure to enable hardware transcoding in the application/container itself.

     See the detailed instructions below for Emby, Jellyfin & Plex (alphabetical order), and a docker-cli sketch of the same settings after this post.

     UUID: You can get the UUID of your graphics card in the Nvidia-Driver plugin itself, under PLUGINS -> Nvidia-Driver (please make sure there is no leading space!).

     NOTE: You can use one card for more than one container at the same time, depending on the capabilities of your card.

     Emby: Note: To enable hardware encoding you need a valid Premium subscription, otherwise hardware encoding will not work! Add '--runtime=nvidia' to the 'Extra Parameters', add a variable with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID', add a variable with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all', and make sure to enable hardware transcoding in the application itself. After starting the container and playing some movie that needs to be transcoded and that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (the text NVENC/DEC indicates exactly that).

     Jellyfin: Add '--runtime=nvidia' to the 'Extra Parameters', add a variable with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID', add a variable with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all', and make sure to enable hardware transcoding in the application itself. After starting the container and playing some movie that needs to be transcoded and that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (Jellyfin doesn't display whether it's actually transcoding with the graphics card at the time of writing, but you can open an Unraid terminal and type in 'watch nvidia-smi'; you will then see at the bottom that Jellyfin is using your card).

     Plex (thanks to @cybrnook & @satchafunkilus, who granted permission to use their screenshots): Note: To enable hardware encoding you need a valid Plex Pass, otherwise hardware encoding will not work! Add '--runtime=nvidia' to the 'Extra Parameters', add a variable with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID', add a variable with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all', and make sure to enable hardware transcoding in the application itself. After starting the container and playing some movie that needs to be transcoded and that your graphics card is capable of, you should see that you can now successfully transcode using your Nvidia graphics card (the text '(hw)' at Video indicates exactly that).
    1 point
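     For reference, a minimal docker-cli sketch of the same settings as the template steps above. The image, container name, paths and port are placeholders, and the UUID is just the example value from the post; on Unraid these fields are normally set through the Docker template rather than on the command line:

       docker run -d \
         --name=jellyfin-example \
         --runtime=nvidia \
         -e NVIDIA_VISIBLE_DEVICES='GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' \
         -e NVIDIA_DRIVER_CAPABILITIES=all \
         -v /mnt/user/appdata/jellyfin:/config \
         -v /mnt/user/Media:/data \
         -p 8096:8096 \
         linuxserver/jellyfin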
  39. Normal humans often can't see all the ways messaging can be interpreted (this is why we have comms people). The most offensive-sounding things are often not intended in that fashion at all. Written text makes communication harder because there are no facial or audio cues to support the language. I expected our community developers (given that they've clearly communicated in text behind screens for many years) would understand that things aren't always intended as they sound. In this regard, I support @limetech wholeheartedly. Nevertheless, the only way to fix this is probably for @limetech to privately offer an apology and discuss as a group how to fix it, then publish back together that it's resolved. (30 years managing technical teams - I've seen this a few times and it's usually sorted out with an apology and a conversation.)
    1 point
  40. @limetech Not wanting to dogpile on to you, but you would do well to go above and beyond in rectifying the situation with any community developers that have issues. The community plugins and features that supply so much usability and functionality to Unraid that is lacking in the base product actually make Unraid worth paying for. If you start losing community support, you will start to lose it all, and with that I am sure I and others will not recommend that people purchase and use your software. @CHBMB @aptalca @linuxserver.io Maybe a cool-down period is needed, but don't just pack up your things and go home. It's all too common for companies to ignore users and communities; a large part of the community is often invisible to them, hidden behind keeping the business running and profits. It makes it easy for customer-facing employees to make statements that might be taken as insensitive.
    1 point
  41. You can use this as the frontend. See the screenshot for an idea of what you will need to configure. Note that you will also need a database and will need to import some SQL for it to work. I added a path to my appdata folder to copy the SQL that needs to be imported into the database. I used MariaDB as the DB and used phpMyAdmin to create the database and user, and to import the 2 SQL files, which can be found in /opt/guacamole/mysql/schema. Hope this helps and good luck!
    1 point
  42. Yes, you need to use copy/delete if you want the file to end up on a different drive. It is a quirk of the Linux implementation of ‘mv’: if source and target appear to be on the same mount point (/mnt/user), a rename is issued (which leaves the file on the same drive) rather than a copy/delete action. (A short shell sketch follows below this post.)
    1 point
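     A minimal sketch of the difference, with hypothetical share and disk paths. A copy followed by a delete (or rsync with --remove-source-files) rewrites the data so it lands wherever the target path resolves, whereas mv within /mnt/user may just rename in place on the same disk:

       # mv within the user share mount point may be satisfied by a rename on the same disk
       mv /mnt/user/share/file.bin /mnt/user/share/subdir/file.bin

       # Copy then delete forces the data to be rewritten onto the target drive
       cp -a /mnt/disk1/share/file.bin /mnt/disk2/share/file.bin && rm /mnt/disk1/share/file.bin

       # rsync can do the copy+delete in one step
       rsync -a --remove-source-files /mnt/disk1/share/file.bin /mnt/disk2/share/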
  43. Big thanks as usual to the team! For those having adoption issues, I vaguely remember having issues like that before. In case you don't already have a custom IP in the controller settings, you might want to make sure that you have set the IP under Settings. I had an issue where the controller was trying to set-inform with my private Docker IP (172.17.x.x) instead. See the screenshot below: check "Override inform host with controller hostname/IP" and enter the IP of your server above that. (A manual set-inform fallback sketch follows below this post.)
    1 point
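     If a device still won't adopt after that, a commonly used fallback (an assumption here, not something from the post) is to point it at the controller manually over SSH; 8080 is the default inform port:

       # SSH into the access point / switch (factory-reset UniFi devices use ubnt/ubnt; device IP is a placeholder)
       ssh ubnt@192.168.1.20

       # Tell the device where the controller lives (replace with your Unraid server's IP)
       set-inform http://192.168.1.10:8080/inform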
  44. Have you ever used Virtualbox? Or Windows server Hyper-V? In 2009 or so I built a server from parts. I installed Windows 2008-R2. I clicked a button to slide Hyper-V under the install. I started up the app that Windows provides to build virtual machines. I created 6 virtual machines. The entire thing was GUI. The entire process of creating a VM was intuitive. Yes, I had to read how to do it, then I did it. In a matter of hours I had a whole set of VMs running, which run to this day. So how am I supposed to make suggestions about improving this mess? It is a mess. Nothing whatsoever is intuitive. It is a ton of properties that I have no clue what they are doing. How do I suggest improving an entire page of mess that I have no clue what it is doing? How about somebody that knows what this mess is doing clean up the mess and make it point and click? I know, I know, Linux is not windows. I get it. But getting my self worth from a thousand hours of figuring out mess is not my idea of a way to spend my life. I have kids to play with, meat to BBQ, places to go, people to see. I have a very powerful 6 core 12 thread machine (that I built) that I handed over to Unraid. I like what Unraid does, but I hate the mess it makes me figure out to get anything done. Unraid doesn't need those 6 cores / 12 threads so I want to put them to use running VMs, docker containers etc. Holy CRAP batman, hours and hours of reading threads clear back a half dozen years and I STILL don't have a clue what this mess does. So I watch a video where this very nice man guides me step by step through the mess. He doesn't explain the mess, he just says "click here, do this". If ANYTHING goes wrong (and it did, multiple times) I am back out in dozens of threads trying to figure out what the part that went wrong does. Eventually I did get a VM working. After HOURS and HOURS of struggle. And NO, my self worth was not increased by the struggle. And NO, if I wanted to do another I absolutely could not do so without watching the nice man guide me through it again. Now I am dead sure that there are hundreds of Linux Gurus who are going to read this thread and scoff at me. Oh well. I have built computers clear back in the 70s with soldering iron, chips bought from the ads in Popular Electronics and schematics. I have installed CPM on the systems I have built. I have learned Turbo Pascal, Turbo C, Fortran, C++, C# and I keep learning. I have made a good living building SQL Server databases automated with C# which I wrote. I am neither a nubee nor am I incompetent. And this stuff is a friggin MESS!!! And No, I can't recommend how to clean it up since I haven't a clue how it works, nor am I willing to spend hundreds of hours learning it. Unraid is supposed to be a tool to allow me to get on with more important things (to me). Sorry for the rant.
    1 point
  45. I like Total Commander (on a Windows station), but any commander-like tool with a full tree synchronization feature would work. There's a Krusader Docker that probably can (I haven't verified that). They don't provide a report (although perhaps another similar tool does), but this lets us make the changes immediately, instead of manually from a report. It quickly compares complete trees against each other, isolates the differences, allows immediate compares of text files, lets you customize the results, then quickly lets you synchronize the trees. Total Commander and others even have built-in FTP support, if that's needed.
    1 point