Capt.Insano (Community Developer)
Everything posted by Capt.Insano

  1. I have a HomeAssistant VM that often gets put into a "Paused" state, meaning that all of my home automation ceases to work. I cannot figure out the cause of the pausing, as my cache drive seems to have plenty of room. It started happening about 3 weeks ago and I have made a big effort to watch cache usage, but it is still happening. Attached is my diagnostics zip, and below is some info I think may be relevant. I would really appreciate any help!!

     On my unRAID server:

     root@Tower:/mnt/cache/appdata/Emby# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           16G  1.1G   15G   7% /
     tmpfs            32M  316K   32M   1% /run
     devtmpfs         16G     0   16G   0% /dev
     tmpfs            16G     0   16G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           512M   67M  446M  14% /var/log
     /dev/sda1       7.5G  973M  6.5G  13% /boot
     /dev/loop0      8.2M  8.2M     0 100% /lib/modules
     /dev/loop1      4.9M  4.9M     0 100% /lib/firmware
     /dev/md1        2.8T  2.0T  788G  72% /mnt/disk1
     /dev/md2        2.8T  1.8T  975G  66% /mnt/disk2
     /dev/md3        2.8T  1.6T  1.2T  58% /mnt/disk3
     /dev/md4        2.8T  1.5T  1.4T  52% /mnt/disk4
     /dev/md5        2.8T  1.4T  1.4T  51% /mnt/disk5
     /dev/md6        2.8T  1.2T  1.6T  42% /mnt/disk6
     /dev/sdg1       224G   80G  143G  36% /mnt/cache
     shfs             17T  9.3T  7.2T  57% /mnt/user0
     shfs             17T  9.4T  7.3T  57% /mnt/user
     /dev/loop2       30G   15G   13G  54% /var/lib/docker
     /dev/loop3      1.0G   18M  904M   2% /etc/libvirt
     shm              64M     0   64M   0% /var/lib/docker/containers/5d3a91297475dc76e4452f5274219116b50cf8acc1e114d23408361ced25dfa3/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/986c16755882f961f47fca87257bcc956001fac4566246d0787af80d7c03ed6b/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/4db3070196a30a5fd4fd8640d6219253192dad50100df1e5024388a84d6d1b02/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/e014692a6bf6f5ca5f95520a96b531efa46f5393e26454220900428125c6184d/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/3fa248fbccfd5c167afd499f0bc7ac8ec8d95264db69d1ab3dd323411059fd9e/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/0ef4dd4ce03e13bc06ab64ef606286867d9a7af9416da1a8f05f96bd544dc7c9/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/c456e84c6473518d7f35e82d0c6b6d9e5e983e529b9128856f475fadd5582e07/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/478db1159d0542a224092517686a9631b5c98f5113c9aee1fdc5b284945da10c/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/ad0c3033a45e734d63239a1b90a6d669471e55a210a7bee27bc209196c3e95b9/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/0a371e1da920fe34d6ac978efdb919cbe09505d807c319c7ad1d9243da60bf4c/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/75a880a8c4aca6806df9c89aa39efad385e5421b39201d54d3b97bd3f33edff9/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/41d90b2ea7011c9edc0eaf5272311639123bfe9a5219c5b3bfc801da0806f8e1/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/63b844d285a7fa29edc697a92104bc7524f90f12586064bf5da62b5e0a7e32e5/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/076dc4418d8fec1b68d1fe4360d051085b202a0abf65890c6804726d4b2a834b/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/67d6957eb2cfc064653ce0a405addc74b3653ef5284dfa76c151991ebdc07588/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/4ab10bafc8140e7cf16eff894d7bde93ba9c21bd9153ed3cd824619ba6f0e036/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/938340d5cc2adb21249d1666fda047319244f65cfbc7ba35c55992a1c4333380/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/30f338122358484799e49218f6cbfaaef243394154af6fea6080ddd4b3384798/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/89a1ed26ad5a1036607c68648bdb1307cd7bedbd74b6da4af2fb6f2ac6f77f47/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/2000963f7424c204931d8e45b85e12156c86abce77866ca38bcd6b3246ef802e/mounts/shm
     shm              64M  4.0K   64M   1% /var/lib/docker/containers/106a9ec9ba5793b782f6d685acf7f9af4a22903e4971db4fc945317c07021196/mounts/shm
     shm              64M  4.0K   64M   1% /var/lib/docker/containers/e1545ec7c79ce36656bea2cacb9860322929eda7cd612dcd6b5efcb61a4a1539/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/a6ae209936e5cc61237b87d094ca03549d93a3a654a49f9cbbf5e01cd8c463db/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/62b46bfa70678933d4e8e48cd41c8fbdef6c4b59d43a8ef541100027546b81c7/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/1119f358d9848e41a824cb216806244495771f0c64cbfb2b1935a6a8fc2fe297/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/535b72b6aefceb7a918a2a4977134102bd3fd3570c43e24ce0cfb0ad87cd000a/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/c6854038d0874bd4f659c95fa4d2063f5305b2994b546c347fdb801c260f90fc/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/3b266e0276f44f13e272f683bbd7714de08dbcd4c20f2f44a10d3a59f6079658/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/92c01e87c1e5a0a9bc2419b10ac40f786d8fda5805bcc42a48226f3a1c29c09c/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/7a3f861d6a2645cb83834fe4e4c2c422761c7d6da86b7ce24f675e5fd323c923/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/b0505370a3b3e10f921000a0caa165b215481464ce4922c39b0a299cd5c2e0ef/mounts/shm

     Inside the HomeAssistant VM:

     USER@HomeAssistant:~$ df -h
     Filesystem      Size  Used Avail Use% Mounted on
     udev            486M     0  486M   0% /dev
     tmpfs           100M   13M   88M  13% /run
     /dev/vda2        14G  2.5G   11G  20% /
     tmpfs           497M     0  497M   0% /dev/shm
     tmpfs           5.0M     0  5.0M   0% /run/lock
     tmpfs           497M     0  497M   0% /sys/fs/cgroup
     /dev/vda1       511M  132K  511M   1% /boot/efi
     tmpfs           100M     0  100M   0% /run/user/1000

     I will gladly provide any other information that may help. Thanks again for any help! tower-diagnostics-20181221-1810.zip
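     For anyone else chasing this: libvirt records why it paused a domain, so the pause reason is worth checking before anything else. A minimal sketch, assuming the VM's libvirt domain is named "HomeAssistant" (adjust to your own domain name):

     virsh domstate HomeAssistant --reason                 # e.g. "paused (ioerror)" would point at the vdisk's backing storage
     tail -n 50 /var/log/libvirt/qemu/HomeAssistant.log    # QEMU's own log for the domain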
  2. Scratch that: the container is now updated to the latest Tonido version! (Tested on my server and running well here.) The Capt.
  3. Hi lads, so sorry for the delay on this; I have not been around and am only seeing this now. I am away this weekend, but I have set a reminder to update this docker next week. Sorry again, The Capt.
  4. Apparently, I have been running it hourly for the last 315+ days!! You cannot blame me; it is a great plugin!! +1, this would be a great feature!
  5. Great plugin, dmacias! I have been using it pretty much since launch, and therein lies the problem: I recently moved house and moved ISP, and I was eager to compare speeds and make sure there were no problems with my line. When I fired up the speedtest UI I realised that I have a total of 7579 results!! This stalls my browser (Firefox and Chrome), which prompts me to kill the script; if I allow the script to continue, the webUI does load but it is impossible to navigate. I realise that I could just delete the speedtest.xml from /boot/config/plugins/speedtest, but that would mean losing all of my past data. Would it be possible to set an option in the plugin to only keep X weeks/months/years of results and then start deleting older ones, or even just keep X results in total? As an intermediate solution, I went through the xml and realised that the earlier a result appears in the file, the earlier it is in time, although none of the xml entries have a date stamp on them, so I just deleted the first half of the xml entries (a rough command-line version of that trim is sketched below). As always, thanks a million for your work on unRAID. The Capt.
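     For anyone wanting to do the same trim from the command line, here is a rough sketch. I have NOT verified the internal layout of speedtest.xml; this assumes one <result .../> element per line with the oldest entries first, so back up the file before trying it:

     cd /boot/config/plugins/speedtest
     cp speedtest.xml speedtest.xml.bak                                 # keep a backup in case the layout differs
     head -n 1 speedtest.xml > speedtest.trimmed                        # assumed: first line is the opening root tag
     grep '<result' speedtest.xml | tail -n 500 >> speedtest.trimmed    # keep only the newest 500 entries
     tail -n 1 speedtest.xml >> speedtest.trimmed                       # assumed: last line is the closing root tag
     mv speedtest.trimmed speedtest.xml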
  6. Good suggestion! Probably the best solution, but TBH a little overkill for my original plans; I thought it would be easy to just put an RDP password on it! Thanks
  7. I have been looking into a Docker-based WebUI file management solution for my server and I have started using Krusader by Sparklyballs.

     My question: is it possible to password protect an RDP-based Docker app? I have looked around online and changed some of the xrdp.ini settings with no success.

     Attempt 1: I changed the xrdp.ini to specify a username and password and restarted the container, but guacamole still does not prompt me for a password and connects straight to the container.

     [xrdp1]
     name=Krusader
     lib=libxup.so
     username=<SomeUsername>
     password=<SomePassword>
     ip=127.0.0.1
     port=/tmp/.xrdp/xrdp_display_1
     chansrvport=/tmp/.xrdp/xrdp_chansrv_socket_1
     xserverbpp=16
     code=10

     Attempt 2: I changed the xrdp.ini to ask for a username and password and restarted the container. This time guacamole asks me for a password, but any entry in username and password allows entry into the container (even leaving username/password blank allowed entry).

     [xrdp1]
     name=Krusader
     lib=libxup.so
     username=ask
     password=ask
     ip=127.0.0.1
     port=/tmp/.xrdp/xrdp_display_1
     chansrvport=/tmp/.xrdp/xrdp_chansrv_socket_1
     xserverbpp=16
     code=10

     Is there any way to secure it? Thanks for any help!
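     One avenue I have not tried yet: guacamole itself supports simple file-based authentication via a user-mapping.xml, so the password prompt could live in guacamole rather than in xrdp. This is only a sketch; I have not verified whether (or where) this container reads such a file, and the path below is a guess:

     cat > /config/guacamole/user-mapping.xml <<'EOF'    # path is an assumption for this container
     <user-mapping>
         <authorize username="someuser" password="somepassword">
             <protocol>rdp</protocol>
             <param name="hostname">127.0.0.1</param>
             <param name="port">3389</param>
         </authorize>
     </user-mapping>
     EOF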
  8. It is defined in the template as container port 8080 mapped to host port 9876. I was getting "permission denied" issues when trying to run it on port 80 (ports below 1024 are privileged, so an unprivileged process inside the container cannot bind to them).
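     For reference, the same mapping outside the template would look like the line below (the image name is only illustrative):

     docker run -d -p 9876:8080 some/koel-image    # host 9876 -> container 8080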
  9. OK, first and foremost this is very much a proof of concept. It is buggy as hell and probably the worst way of implementing Koel as a Docker container. I would gladly have someone show me how it's done and release a MUCH better version.

     Steps to achieve this bugginess for yourself:

     1. Download the MariaDB/MySQL container and set up a database (the default name in my container template is "koel").
     2. Add my docker container repo to your "Docker Repositories" list: https://github.com/CaptInsano/docker-containers/tree/templates
     3. Add Container and select Koel from the list under Capt.Insano.
     4. Populate your info as relevant to yourself:
        DB_HOST: IP address and port of your MariaDB/MySQL install (eg xxx.xxx.xxx.xxx:3306)
        DB_DATABASE: Database name (default "koel")
        DB_USERNAME: Database username (default "root")
        DB_PASSWORD: Database password
        ADMIN_EMAIL: Needed for koel login; does not need to be a real email
        ADMIN_NAME: Admin name
        ADMIN_PASSWORD: Admin password for koel login
     5. After the koel container has downloaded, the koel install needs to be initialised; issue the following command on the unRAID command line:
        docker exec Koel su nginx -c "cd /DATA/htdocs && php artisan koel:init"
     6. You then need to issue the following command so that koel runs on the correct port and also allows connections from outside localhost:
        docker exec Koel su nginx -c "cd /DATA/htdocs && php artisan serve --port=8080 --host 0.0.0.0"
     7. Visit unRAIDIPADDRESS:9876

     ***Known problems***
     The last command (#6) needs to be run interactively (closing the ssh instance may stop koel; needs to be tested, see the possible workaround below). For some reason loads of my song titles start with "??"; not a clue why.

     ***Conclusion***
     By all means report any problems back, but I am unsure if I will be able to help; I am decidedly a noob when it comes to this stuff!! Hopefully some of the bigger players on the forums will swoop in and be able to offer help!!
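     A possible workaround for the interactive-only problem in step 6: docker exec has a -d (detached) flag, which should leave the serve process running after the SSH session closes. Untested on my end, so treat it as a sketch:

     docker exec -d Koel su nginx -c "cd /DATA/htdocs && php artisan serve --port=8080 --host 0.0.0.0"    # -d detaches, so closing ssh should not kill it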
  10. I am aware that the docker container we are using was last built 9 months ago and does not pull the newest release from git on startup, meaning I think it is running a koel build from 9 months ago; the problems we are facing may have been fixed in the master branch since then. We need to build a new container to check if the errors still exist. If no one can, I will try to get to it, maybe later in the week.
  11. +1 for this request; Koel looks great. @mgworek: thanks for your template. I have tidied up a few of the variables from your template below, if you don't mind:

      <?xml version="1.0"?>
      <Container version="2">
        <Name>Koel</Name>
        <Repository>etopian/docker-koel</Repository>
        <Registry>https://hub.docker.com/r/etopian/docker-koel/</Registry>
        <Network>bridge</Network>
        <Privileged>false</Privileged>
        <Support/>
        <Overview/>
        <Category/>
        <TemplateURL/>
        <Icon>https://raw.githubusercontent.com/phanan/koel/master/resources/assets/img/logo.png</Icon>
        <WebUI>http://[IP]:[PORT:9876]</WebUI>
        <ExtraParams/>
        <Description/>
        <Networking>
          <Mode>bridge</Mode>
          <Publish>
            <Port>
              <HostPort>9876</HostPort>
              <ContainerPort>80</ContainerPort>
              <Protocol>tcp</Protocol>
            </Port>
          </Publish>
        </Networking>
        <Data>
          <Volume>
            <HostDir></HostDir>
            <ContainerDir>/music</ContainerDir>
            <Mode>rw</Mode>
          </Volume>
        </Data>
        <Environment>
          <Variable>
            <Value></Value>
            <Name>DB_HOST</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value>forge</Value>
            <Name>DB_DATABASE</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value>root</Value>
            <Name>DB_USERNAME</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value></Value>
            <Name>DB_PASSWORD</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value></Value>
            <Name>ADMIN_EMAIL</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value></Value>
            <Name>ADMIN_NAME</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value></Value>
            <Name>ADMIN_PASSWORD</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value>False</Value>
            <Name>APP_DEBUG</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value>production</Value>
            <Name>AP_ENV</Name>
            <Mode/>
          </Variable>
          <Variable>
            <Value>99</Value>
            <Name/>
            <Mode/>
          </Variable>
          <Variable>
            <Value>100</Value>
            <Name/>
            <Mode/>
          </Variable>
        </Environment>
        <Config Name="Host Port" Target="80" Default="" Mode="tcp" Description="Container Port: 80" Type="Port" Display="always" Required="false" Mask="false">9876</Config>
        <Config Name="DB_HOST" Target="DB_HOST" Default="" Mode="" Description="Container Variable: DB_HOST" Type="Variable" Display="always" Required="false" Mask="false"></Config>
        <Config Name="DB_DATABASE" Target="DB_DATABASE" Default="" Mode="" Description="Container Variable: DB_DATABASE" Type="Variable" Display="always" Required="false" Mask="false">forge</Config>
        <Config Name="DB_USERNAME" Target="DB_USERNAME" Default="" Mode="" Description="Container Variable: DB_USERNAME" Type="Variable" Display="always" Required="false" Mask="false">root</Config>
        <Config Name="DB_PASSWORD" Target="DB_PASSWORD" Default="" Mode="" Description="Container Variable: DB_PASSWORD" Type="Variable" Display="always" Required="false" Mask="false"></Config>
        <Config Name="ADMIN_EMAIL" Target="ADMIN_EMAIL" Default="" Mode="" Description="Container Variable: ADMIN_EMAIL" Type="Variable" Display="always" Required="false" Mask="false"></Config>
        <Config Name="ADMIN_NAME" Target="ADMIN_NAME" Default="" Mode="" Description="Container Variable: ADMIN_NAME" Type="Variable" Display="always" Required="false" Mask="false"></Config>
        <Config Name="ADMIN_PASSWORD" Target="ADMIN_PASSWORD" Default="" Mode="" Description="Container Variable: ADMIN_PASSWORD" Type="Variable" Display="always" Required="false" Mask="false"></Config>
        <Config Name="APP_DEBUG" Target="APP_DEBUG" Default="" Mode="" Description="Container Variable: APP_DEBUG" Type="Variable" Display="always" Required="false" Mask="false">False</Config>
        <Config Name="AP_ENV" Target="AP_ENV" Default="" Mode="" Description="Container Variable: AP_ENV" Type="Variable" Display="always" Required="false" Mask="false">production</Config>
        <Config Name="Key 1" Target="" Default="" Mode="" Description="Container Variable: " Type="Variable" Display="always" Required="false" Mask="false">99</Config>
        <Config Name="Key 2" Target="" Default="" Mode="" Description="Container Variable: " Type="Variable" Display="always" Required="false" Mask="false">100</Config>
        <Config Name="music" Target="/music" Default="" Mode="rw" Description="Container Path: music" Type="Path" Display="always" Required="false" Mask="false"></Config>
      </Container>

      However, even with this template above and with MariaDB set up with the database "forge", I am still not able to initiate a scan. I would love to see a correctly implemented Koel container with auto start etc. Hopefully someone with the knowledge would be interested in setting one up!
  12. Not really one single container, but one application: I have been having problems with linuxserver:deluge since its move to Alpine, so I was trialing needo:deluge and also binhex:deluge to see which would be better while also offering the newest build of deluge, 1.3.13. The problem happened with both binhex:deluge (x2) and linuxserver:deluge (x1), after returning to linuxserver:deluge to see if binhex:deluge could possibly be contributing to the problem. It has happened 3 times in total. I have now removed my deluge container outright for the night; unusual, I know.
  13. It has now happened a third time this evening. As stated earlier, this has never happened before, and the only solution I have is to restart my server. Every time it happens I am interacting with a Docker container that I am trying to fix (during this process the container has been completely removed and re-added). Attached is another set of diagnostics. tower-diagnostics-20160908-2305.zip
  14. Not 100% sure if this issue resides with 6.2-RC5, but I have never had this problem before. I moved from 6.1.9 to 6.2 at RC3 and had no problems until now. I have been trying to get a new docker set up this evening, and twice my array disappeared and I got "Transport endpoint is not connected" errors. The only solution I could find was to restart my server. Diagnostics attached. Many thanks for the continued work! tower-diagnostics-20160908-2209.zip
  15. Is it possible to request modules for lm-sensors? sensors-detect has advised me to load the module i5500_temp, but that module is not included in unRAID. I put in a feature request here: https://lime-technology.com/forum/index.php?topic=46439.0 back in Feb, but no reply from lime-tech. Would anyone mind having a look and seeing if it is possible for me to add the module to unRAID myself? (Currently running 6.2-RC5.) I think I have found 2 places from which to compile the module: http://jdelvare.nerim.net/devel/lm-sensors/drivers/i5500_temp/ and http://lxr.free-electrons.com/source/drivers/hwmon/i5500_temp.c, but I am honestly a little out of my depth compiling modules!
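     For the record, the usual shape of an out-of-tree module build is sketched below. It assumes a build box with kernel source/headers matching the running unRAID kernel (unRAID itself does not ship a toolchain) and i5500_temp.c saved in the current directory; entirely untested against unRAID on my end:

     printf 'obj-m := i5500_temp.o\n' > Makefile                # minimal kbuild Makefile
     make -C /lib/modules/$(uname -r)/build M=$PWD modules      # produces i5500_temp.ko
     # copy i5500_temp.ko over to the server, then:
     insmod i5500_temp.ko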
  16. @bonienl Just an FYI: while trying to update this plugin I get the following:

      plugin: updating: dynamix.disk.io.plg
      plugin: downloading: https://raw.githubusercontent.com/bergware/dynamix/master/archive/dynamix.disk.io.txz ... done
      plugin: bad file MD5: /boot/config/plugins/dynamix.disk.io/dynamix.disk.io.txz
  17. I am just about to upgrade my server with a Dell Perc H200; currently my motherboard (Intel S5500WB) only supports SATA 300, and the Perc H200 should give me SATA 600 while also allowing expansion options down the road. I am running unRAID 6.2 RC4 at the moment, and I know the script currently does not support 6.2.x. I am wondering: is it worth my while waiting on an update to this script in the near future to test the before/after difference in drive speed, or will you be waiting until unRAID 6.2 final before releasing an updated script? Thanks a million for your work on this.
  18. Sorry to resurrect an old thread, but I found it while wondering the same thing: is it possible to set KVM/libvirt to hibernate a VM rather than shut it down when triggered through the host (eg. Stop Array), but still shut down if given an explicit shutdown command through the host or from within the VM itself? Probably not, but I would love to preserve my Windows VM in hibernation if the array needs to be stopped for some reason.
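     Manually, libvirt can already do something close to this; whether unRAID can be told to use it on array stop is the open question. A sketch, assuming a domain named "Windows10":

     virsh managedsave Windows10                  # saves the VM state to disk and stops it
     virsh start Windows10                        # the next start restores from the saved state
     # or, with the qemu guest agent installed inside the guest:
     virsh dompmsuspend Windows10 --target disk   # asks the guest itself to hibernate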
  19. Is it possible to hide a device from Unassigned Devices? I have an SSD that I have passed through to a VM, and I would prefer if it was not listed under Unassigned Devices, in case someone tries to mount it while it is attached to the VM and causes corruption. Thanks for all your work!
  20. Should this work with unRAID 6.1.9? I have not updated to the 6.2 RCs yet, but I would like to retire my Linux VM whose sole purpose is running virt-manager!
  21. Thanks a million lads! Always appreciate the support on these boards!
  22. So this 0 value is indicative of the drive not reporting the temp, rather than the PCI-E SATA card failing to pass through/report the value, yeah?
  23. Shite, sorry, I was looking in the wrong place. Attached are the attributes of the cache drive NOT reporting temp; attribute 194 (Temperature_Celsius) is listed.
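     For comparison, this is how I am checking what the drive itself reports on the command line; a sketch assuming the cache drive is /dev/sdg, as in my earlier df output:

     smartctl -A /dev/sdg | grep -i -e '^ID' -e temp    # shows attribute 194 if the drive exposes it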