Everything posted by HNGamingUK

  1. The plugin would work, yes, however it automatically stops the sessions without any user input or knowledge. My feature request is for a prompt and a list, so the user has full knowledge of the process(es) blocking shutdown of the array and can action it all within the unraid UI.
  2. It was mentioned today on the unofficial Discord that when you shut down the array it can hang, and it does not inform you of any open processes; it expects you to find and close them yourself before it will finish the shutdown. This request is therefore to add a prompt in the GUI (maybe also the CLI) so that when a user issues a command that will shut down the array, a message is displayed. This message would be something to the effect of "The following processes are stopping shutdown of the array", at which point it lists the processes and PIDs. (It will likely need to auto-refresh, as other processes could start to use the system.) In addition to the simple message, it could include a prompt asking the user if they would like to kill the processes and force the array shutdown. I believe adding such a feature would be a very good addition and improve the overall user experience.
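     For context, you can approximate what such a prompt would show from the CLI today (a minimal sketch, assuming the user shares are mounted at /mnt/user and that fuser/lsof are present, as they are on stock unraid):

        # Processes with files open under the user-share mount; these are
        # the ones that would block an array stop.
        fuser -vm /mnt/user

        # Same idea but with the full paths of the open files
        # (can be slow on large trees):
        lsof +D /mnt/user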
  3. Yeah, I saw that. The only thing is that, due to this being a new setup, there is no wireguard .conf file in the wireguard directory for me to be able to change the endpoint...
  4. Hello, I am trying to start a container from fresh using wireguard and PIA, but I am getting the following in the docker logs:

        2021-04-15 00:15:40,475 DEBG 'start-script' stderr output: parse error: Invalid numeric literal at line 4, column 0
        2021-04-15 00:15:40,600 DEBG 'start-script' stderr output: parse error: Invalid numeric literal at line 1, column 7
        2021-04-15 00:15:40,600 DEBG 'start-script' stdout output: [warn] Unable to successfully download PIA json to generate token from URL 'https://143.244.41.129/authv3/generateToken' [info] Retrying in 10 secs...

     I assume this just means it's an issue with PIA and I just have to wait? But I was also concerned about the two parse errors before it...
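     For what it's worth, "Invalid numeric literal" is the error jq prints when asked to parse something that isn't JSON, so the two parse errors are most likely just jq choking on whatever error body PIA returned, not a separate problem. A quick manual check (a sketch, assuming the start script does roughly curl piped into jq; the URL is the one from the log, and a bare GET without credentials will probably return an error body, which is still enough to see whether it is JSON):

        # Fetch the raw response first so you can see what PIA actually returned
        curl -sk https://143.244.41.129/authv3/generateToken -o /tmp/pia_response
        cat /tmp/pia_response

        # If this prints "Invalid numeric literal", the body isn't JSON
        # (e.g. an HTML error page), matching the log output above.
        jq . /tmp/pia_response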
  5. Hello, so this is the 3rd time this has happened now: I keep finding that my shares are no longer available, and when checking on the server the /mnt/user filesystem is gone (notably, however, /mnt/user0 is not gone). This originally happened once after 28 days of uptime; I found the segfault in the logs and rebooted. It then happened again ~5 days later, and again I rebooted. Then it happened another time just a couple of days ago, at which point I decided to run a memtest, and across 3 runs it had no errors. Finally, today it has happened again and I am having difficulty figuring out what is causing it... Please find my diags attached. I am having to reboot now since I need my server to be in a usable state. diagnostics-20210203-0757.zip
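     (For anyone hitting the same thing: the segfault referred to above can be found in the syslog, e.g.)

        # Look for the crash that takes /mnt/user away
        grep -i segfault /var/log/syslog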
  6. However, the main thing is that SSH users other than root currently aren't supported on unraid. As such, adding non-root users that can SSH goes outside unraid's standard convention, in turn meaning that they likely don't test SSH access and usage with users other than root. I do agree, however, that allowing other SSH users would be good, and to add to that, proper user management with maybe even RBAC?
  7. Apologies for sounding like an annoying teenager, but do you have an ETA, or maybe a roadmap of features? Equally, is there a GitHub page for this so that people can help develop the feature?
  8. Hello Limetech, Talking on the unofficial Discord, we noticed that when you add a new container and click apply it runs a docker run, however when you edit a stopped container it does a docker create. Now this method is okay, however some people would like the option to not have a container start on creation. Essentially the feature request is to add a tickbox called something like "Start container on creation/edit" on the container creation and edit pages. By default this would be ticked, so that it does not affect the current usage of containers and does not cause any confusion. Potentially, in addition, an option could be added within the Docker settings page to change how the tickbox works by default. Hopefully this makes sense.
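     For clarity, the CLI difference the tickbox would map onto is just this (container and image names below are placeholders):

        # What "add container + apply" effectively does today: create AND start
        docker run -d --name mycontainer myimage

        # What the unticked box could do instead: create only, no start
        docker create --name mycontainer myimage

        # ...the container can then be started later on demand
        docker start mycontainer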
  9. This was requested at least once before by @aptalca back in 2018, however it still doesn't seem to have been implemented? Basically, instead of one tarball of all the appdata directories, please could we have the option to have it separated per directory? This would make a restore much easier for people who only want to restore one application; currently they would have to manually untar the file and move the directory. So the feature request is: Backup: in the backup config page, have an option called "Backup Directories Separately". If this option is selected, it will back up each directory in the provided appdata directory into a time- and date-stamped folder within the provided backup directory. In addition, if the "Compression" option is ticked, compress each directory. Restore: when someone chooses the restore option, along with the date/time selection, provide a directory selection to restore (using checkboxes to allow multiple selections, and an "all" option to restore everything). Hopefully this is enough detail for the feature request to be actioned... @Squid
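     To illustrate the backup half, a minimal sketch in shell (the paths are examples only; the real plugin would use its configured appdata source and backup destination):

        #!/bin/bash
        # Hypothetical per-directory appdata backup: one tarball per app
        # instead of one archive of everything.
        SRC=/mnt/user/appdata
        DEST="/mnt/user/backups/appdata/$(date +%Y-%m-%d_%H%M)"
        mkdir -p "$DEST"

        for dir in "$SRC"/*/; do
            app=$(basename "$dir")
            # One archive per application, so restoring a single app
            # is just extracting its own tarball.
            tar -czf "$DEST/$app.tar.gz" -C "$SRC" "$app"
        done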
  10. So I seem to be having a strange issue with the above-mentioned plugin... I am not sure if @dlandon just needs to provide an update so it will work with 6.9? Basically, what it means is I can't attach or detach any USB devices... When I click attach or detach I get a response that seems to just be the HTML of the whole page I am on... I have tried reinstalling the plugin but it still shows the same when trying to attach or detach a USB device 🙁
  11. Reading the SMART data suggests the disk is fine. I would first check the cables to the disk (power and SATA). If everything looks fine you should be able to:
      1. Stop Array
      2. Un-assign drive
      3. Start Array
      4. Stop Array
      5. Assign drive
      6. Start Array
      Once this is complete the drive should be added back to the array, and any data changes that have happened to the emulated drive in the meantime will be written back to it.
  12. +1 This feature is definitely required from a UX perspective!
  13. I could be thinking about this wrongly, but my understanding is that @CHBMB is stopping unraid development, so no plugins for unraid from them... I am unsure how this affects dockers, since those can be installed manually, and always have been, by pulling directly from DockerHub instead of using CommunityApps. It would be nice (if not already provided somewhere) to have official word from the whole @linuxserver.io team. But I agree with multiple replies that if there had been better communication from @limetech, then @CHBMB quitting development likely would not have happened...
  14. For reference to anyone in the future, I have fixed this by doing the following:
      1. Removed br0.20 from the unraid network settings
      2. Set the Windows VM network to br0
      3. Set VLAN tagging inside the Windows VM for VLAN 20
      This has allowed my Windows VM to connect to the correct subnet, and also allows me to reach my unraid webUI and docker UIs.
  15. I would check the "Cache" setting of the share in question and confirm that it hasn't been set to "No".
  16. Hello, so as the title suggests, I am unable to access my dockers that are in default bridge mode from a Windows VM (on unraid) on a VLAN (br0.20). My unraid network config and docker list were attached as screenshots. While on my Windows VM (which has its network as br0.20), I am unable to access the docker containers on the standard bridge (e.g. Grafana), however I am able to access the dockers using custom networks (e.g. UNMS). My network setup is as follows:
      - br0 (standard bridge) is on the 10.0.1.0/24 subnet
      - br0.20 (VM assigned) is on the 10.0.2.0/24 subnet
      - br0.5 (2 of the dockers) is on the 10.0.0.0/24 subnet
      I am able to access any of the dockers' webUIs from a mobile device on the same subnet as my Windows VM. I believe this has something to do with the network isolation between dockers and VMs, am I right? How would I go about allowing access from the Windows VM to the dockers on the default bridge?
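     For anyone else reading: containers on the default bridge don't get their own LAN IP; they sit NATed behind the unraid host, so from another subnet they are reached via the host IP plus the published port, whereas custom-network containers have their own addresses. A sketch with example addresses (Grafana's default port is 3000; the IPs below are placeholders):

        # Bridge-mode container: reach it via the unraid host IP + published port
        curl http://10.0.1.10:3000

        # Custom-network container (e.g. on br0.5): reach it via its own IP
        curl http://10.0.0.50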
  17. Okay, brilliant. So setting the schedule shown in Auto Update, and then the one shown in Backup (screenshots attached), will back up every Sunday at 3:05 and, once complete, trigger an update.
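     In cron terms that schedule is (assuming the plugins' custom schedule fields accept standard cron syntax):

        # min hour day-of-month month day-of-week
        5 3 * * 0    # 03:05 every Sunday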
  18. Could someone explain how the "Update Applications On Restart?" setting works on this plugin? If I have this set to yes, what should I set in the "CA Auto Update Applications" settings?
  19. Yeah, I didn't want my VM to die randomly as I use it for gaming. I will have a look at working out some better docker memory limits. Thanks for helping me understand how it works.
  20. Yes, I set the MySQL and other docker limits before MySQL was getting killed. I assumed that if the container got close to the limit it would just not use more, but reading your messages suggests that if it goes over the limit I set, it will just kill the container?
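     For anyone finding this later, that is indeed how it works: the Docker memory limit is a hard cap enforced by the kernel, not throttling, so exceeding it triggers the OOM killer inside the container. A sketch with example values (the container name, password, and 2g cap are placeholders):

        # Hard-cap the container at 2 GB; if its processes exceed that,
        # the kernel OOM-killer kills them rather than allowing more memory.
        docker run -d --name mysql-example \
            -e MYSQL_ROOT_PASSWORD=example \
            --memory=2g mysql:8

        # Inspect the configured limit (in bytes):
        docker inspect -f '{{.HostConfig.Memory}}' mysql-example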
  21. Hello, for reference my server has 32GB of RAM. I wonder if anyone is able to help me; I am unsure what is causing this and what I can do to stop it. For the past couple of nights, Fix Common Problems has sent an alert about out-of-memory errors, and when I log in I see that the MySQL docker has stopped. (Only this one stops, so I am guessing the OOM killer decides to kill that one for whatever reason.) However, my memory on the dashboard hovers around 70% max, so I am unsure what is causing it. I have set each docker to have a max memory limit so that they can't randomly use more memory, however it still keeps happening. The only other item that I run constantly is a Windows VM (16GB assigned to it), which should leave 16GB for the dockers and the system. I have also attached my diagnostics; if anyone can help me find out what is causing this it would be wonderful. I don't have the capacity to just upgrade my RAM currently, so I need to fix this issue. diagnostics.zip
  22. Okay, great, that information helped. As I had the secondary cache working and all the data was there, I just copied it from the single-drive cache pool to the array. I then stopped the array, assigned the primary cache back, and started the array. I then ticked the format box and started the format of the primary cache. As expected, this formatted the cache pool, so all I had to do was restore the files to the cache, and everything was back to normal. Just so I know for the future, can someone explain, from the errors shown, what actually happened?
  23. Yeah, it seems that way; I'm unsure if it is possible with the current setup unraid/Limetech use. But it would be a nice feature to allow live disk expansion, as most newer operating systems support it.