Posts posted by MulletBoy
-
I'm seeing similar symptoms: extremely high CPU usage in the unRAID dashboard, but very normal CPU usage when I run 'top' or 'htop' in a terminal.
Restarts didn't fix it for me, and upgrading to the latest unRAID didn't fix it either (I went from 6.6.3 to 6.6.6).
I am running standard stuff: Plex, Sonarr, CouchPotato, SABnzbd, letsencrypt (with some WordPress sites), MySQL, MariaDB, Nextcloud, ruTorrent, UniFi, Muximux.
I have found the culprit in my case to be CouchPotato: it is erroneously running the mover on my completed torrents folder on repeat. It copies completed torrent xyz to the library location, leaving a copy in the completed folder (as it should, for seeding), but then doesn't remember that it has already processed that file and just does it again and again, cycling through all the files in the folder...
I haven't figured out what is wrong with CouchPotato or how to fix it yet, but at least I know what the issue is, and disabling the CouchPotato docker for now has stopped the super high CPU usage.
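The failure mode above comes down to the mover having no record of what it has already copied. This is just my own illustration (not CouchPotato's actual code, and the paths are hypothetical): a minimal sketch of the "process each file exactly once" logic that would stop the loop.

```python
import json
import shutil
from pathlib import Path

def copy_once(src: Path, library: Path, ledger: Path) -> bool:
    """Copy src into library exactly once; `ledger` is a JSON file
    remembering which completed files were already processed.

    Returns True if a copy happened, False if src was seen before.
    """
    done = set(json.loads(ledger.read_text())) if ledger.exists() else set()
    key = str(src.resolve())
    if key in done:
        return False                           # already processed: skip
    library.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, library / src.name)      # copy; original stays for seeding
    done.add(key)
    ledger.write_text(json.dumps(sorted(done)))
    return True
```

With a ledger like this, re-running the mover over the completed folder is a no-op instead of an endless re-copy.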
-
Hey All
So basically, I have a fish tank that I want to live stream to a website that I host.
I have an unRAID server which is running many dockers, including the LinuxServer LetsEncrypt docker, which is my main webserver for the various websites I host.
I have two potential locations for the camera:
- The best location has power, but would need to use Wi-Fi for connectivity.
- The alternative location has Ethernet (PoE) and direct power, so it is very flexible... but I would only use this if a Wi-Fi-enabled camera is going to be too difficult/expensive/bad for other reasons.
I am looking for advice on a good way to set this up... ideally with minimal cost, utilizing my unRAID server plus some dockers for the entire solution.
Any ideas? Looking for software and hardware recommendations!
-
In the last week or so, this docker has stopped connecting to all of my trackers that require HTTPS for the tracker announce URLs.
I have tried blowing it away entirely and rebuilding, with no luck; the same torrents added to the LinuxServer Transmission or Deluge dockers work perfectly.
Any ideas?
-
Hey All
So i successfully configured this docker to run reverse proxies for about 8 other dockers, they are all exposed (with a .htpasswd for access) over https on my duckdns domain pointing to my server. Super happy with it, thanks to many within this thread for various code snippets that helped me, and my working code is pasted below
What I want to do now is host a public wordpress blog, can anyone give me some insight as to how to do this? So far I have
- installed the Apache docker (linuxserver.io)
- copied (unzipped) WordPress into a folder in the www folder
- installed the MySQL docker (linuxserver.io)
- hit the WordPress config page and configured it to use the MySQL db
- WordPress is kinda working locally now at 192.168.1.100:89 (via Apache)
So now I have started some rough config in the nginx site-confs/default file, which I have pasted below.
Edit: I solved my problem; here is how I did it:
- unzipped WordPress into a folder called insertmydomain2.com in the www folder of nginx
- installed the MySQL docker (linuxserver.io)
- used the MySQL Workbench tool to configure a 'wordpress' schema and create a user that can access it
- configured my nginx default config file as per below
- pointed the A record of the domain I am using for this WordPress site at my IP address
- hit my domain and boom: WordPress config page. Easy mode from there.
I really have no idea what I am doing; I figured this out with brute force, some copy-paste, and lots of reading through this forum.
# wordpress server running without https
server {
    listen 80;
    server_name insertmydomain2.com;
    root /config/www/insertmydomain2.com;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php5-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}

# main server with most of my dockers running on https
server {
    listen 443 ssl;
    root /config/www;
    index index.html index.htm index.php;
    server_name mydomain1.com;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    #muximux
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:99;
    }

    #sonarr
    location ^~ /sonarr {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:8989/sonarr;
    }

    #couchpotato
    location ^~ /couch {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:5050/couch;
    }

    #transmission
    location ^~ /transmission {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:9091/transmission;
    }

    #sabnzbd
    location ^~ /sabnzbd {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass https://192.168.1.100:9090/sabnzbd;
    }

    #plexpy
    location ^~ /plexpy {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:8181/plexpy;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
    }

    #rutorrent
    location ^~ /rutorrent {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:82/;
    }

    #ombi - non-restricted access
    location ^~ /ombi {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:86/ombi;
    }

    #pi-hole
    location ^~ /pihole {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.4/admin/;
    }
}
TODO (any help would be appreciated):
Figure out how to host the WordPress site on port 443 (HTTPS) on its own unique domain, without breaking everything else currently running in the 443 server block on its separate duckdns.org domain.
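For what it's worth, nginx selects a server block by server_name (via SNI), so the usual approach is a second `listen 443 ssl` block scoped to the WordPress domain, sitting alongside the existing one. A sketch only, assuming a certificate covering insertmydomain2.com exists; the certificate paths below are my guess based on the existing block and would need to point at the right cert:

```nginx
# wordpress on https, coexisting with the main 443 block (selected by server_name)
server {
    listen 443 ssl;
    server_name insertmydomain2.com;
    root /config/www/insertmydomain2.com;
    index index.php index.html;

    # assumption: these cert files cover insertmydomain2.com
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
```

Requests for mydomain1.com would still hit the original block; only insertmydomain2.com traffic lands here.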
-
-
17 hours ago, memphisto said:
After the last update, I can not add a torrent via webui addon, what happened?
Same here: I can't manually add torrents anymore, and CouchPotato and Sonarr can't add torrents either... It seems to have started after the last update; I haven't changed anything else.
-
I GOT IT WORKING!!!
I forked a different Terraria docker and tinkered a bit...
But it works!
Add this repo to your docker template repositories
https://github.com/mulletboy/docker-template
Then add the container from the option that will appear in your template list, and configure it like my screenshot.
Once it builds successfully, give it another 5 minutes to generate the world.
Then join using your Terraria client at [unraidserverip]:7777.
For some reason it's laggy as hell and plays in slow motion until you turn frame skipping on in the video settings menu, but then it plays fine.
*Note*: I have absolutely no idea what I am doing and I probably did everything wrong:
- I forked another Terraria docker which worked but was way out of date, and tinkered with it
- created a docker XML template and tinkered with it some more
- created a hub.docker.com account (I'm not sure what this is for... but I noticed everyone else had one, so I did it too), somehow got them linked, and did *stuff* until eventually it built successfully
- and after that it loaded in unRAID properly
If someone wants to tell me how to make this better, fix any issues, and generally set up a docker template properly, please give me simple instructions.
-
I have been messing around with getting a Terraria server running as a docker in unRAID.
So far, the closest I have come to success is with this docker:
https://hub.docker.com/r/sixarm/terraria/
Add that using Community Apps, set a path for it to save its data in your appdata folder, and wait a minute or two for it to generate the world.
On my PC I join at [unraidserverip]:7777.
It will briefly run, then it crashes with some exception that is beyond me... if anyone who knows what they are doing could look into this and see if it's an easy fix to get it working, that would be so awesome!
-
That's what I thought too; updating to 6.2 is what prompted me to give this docker another go.
However, it still didn't work until I tried the above.
-
I had the same problem with this Docker as a few other people have had, but that I've not seen a definitive solution for, and I thought I'd document the best solution I've come up with. I was having an issue where the Docker would start, but the web UI would fail to become accessible. Looking at the log files, the issue was with MongoDB, which was giving the error:
Fri Aug 12 18:22:34.017 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x35d4000 errno:22 Invalid argument
Fri Aug 12 18:22:34.017 [initandlisten] Fatal Assertion 13515
This also resulted in the following errors appearing in the unRAID system log:
Aug 12 18:22:34 fileserver shfs/user: shfs_write: write: (22) Invalid argument
As best as I can tell, the issue is that the MongoDB service is trying to write data to the mapped folder, but the operation it's trying to perform is unsupported by the unRAID file system. (This may be a completely incorrect analysis!)
I believe some people have mitigated this by having the docker data stored on an SSD instead of on the main array, but I haven't yet added an SSD to my setup, so I can only store the data on the main array. However, by changing the config path mapping from /mnt/user/appdata/unifi to /mnt/disk1/appdata/unifi, I was able to successfully start the docker. I guess that because MongoDB is now directly accessing the ext4 filesystem on the drive, it can perform whatever operation it's trying to do successfully.
This might be worth documenting for others who don't have an SSD and map the config path directly to the main array. I've seen a few reports of this error in this thread and in some of the other Unifi docker support threads (it doesn't appear to be specific to the LinuxServer image, but to any Docker image that runs MongoDB and maps its folders).
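If that analysis is right, the operation being rejected is likely MongoDB's direct (O_DIRECT-style) journal append, which FUSE-backed /mnt/user paths refuse with errno 22 (EINVAL), while a raw disk mount accepts it. A Linux-only sketch of my own (not part of any docker) that probes whether a directory's filesystem accepts such writes:

```python
import mmap
import os

def supports_direct_io(directory: str) -> bool:
    """Return True if `directory` accepts O_DIRECT writes, the kind of
    synchronous journal append MongoDB attempts. Linux-only sketch."""
    probe = os.path.join(directory, ".direct_io_probe")
    buf = mmap.mmap(-1, 4096)          # mmap gives a page-aligned buffer,
    buf.write(b"\0" * 4096)            # which O_DIRECT requires
    try:
        fd = os.open(probe, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    except OSError:
        if os.path.exists(probe):
            os.unlink(probe)
        return False                   # filesystem refuses O_DIRECT opens
    try:
        os.write(fd, buf)              # EINVAL here mirrors the mongod error
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(probe)
```

Running it against /mnt/user/appdata versus /mnt/disk1/appdata should show the difference the remapping works around.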
I was still having this exact issue; however, your solution of mapping appdata directly to a disk (e.g. /mnt/disk1/appdata/unifi instead of /user/) did work for me...
I would add, however, that simply remapping the user share and restarting the docker didn't fix it; I had to fully delete the docker image AND delete all traces of the old /appdata/unifi folders from my /user/ share, then reinstall it mapped to /disk1/ before it would work.
-
The update went mostly fine, and all my things are currently live and working. However, I have noticed that all my docker apps have the "update ready" message next to them... but every one of them results in the error "layers from manifest don't match image configuration" when I attempt to update.
-
I want to use ruTorrent to create torrent files based on files I have moved out of the downloads directory and into another share location.
As such, I have exposed more directories, as per the screenshot below.
However, in ruTorrent, navigating to any location other than /downloads/... errors with "Incorrect directory ()".
It is as if it is hardcoded to only allow me to access the folder mapped to /downloads/.
Any ideas?
-
I got it working!
My problem was that I was an idiot... I was entering a username and password when I had not configured a username and password in ruTorrent. Duuuur, herp derp.
For anyone else's reference:
host: IP of the unRAID server (192.168.1.100 for me)
port: 8080 is the default (I remapped mine to 82)
urlpath: RPC2
category: TV
everything else: default
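Those settings just describe the XML-RPC endpoint Sonarr talks to. A small stdlib-only sketch for sanity-checking them (the IP/port are from my setup above; `system.client_version` is a standard rTorrent XML-RPC method):

```python
import xmlrpc.client

def rtorrent_rpc_url(host: str, port: int, urlpath: str = "RPC2") -> str:
    """Build the XML-RPC endpoint URL from the host/port/urlpath settings."""
    return f"http://{host}:{port}/{urlpath.strip('/')}"

# With a reachable ruTorrent container, this would confirm the endpoint works
# (commented out here since it needs the live server):
# proxy = xmlrpc.client.ServerProxy(rtorrent_rpc_url("192.168.1.100", 82))
# print(proxy.system.client_version())
```

If the ServerProxy call answers with a version string, the host/port/urlpath combination is correct and any remaining problem is credentials or category config.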
-
Has anyone got this Sonarr working with LinuxServer.io's ruTorrent docker?
http://lime-technology.com/forum/index.php?topic=47299.0
For the life of me, I can't figure out what config is required on this screen to make it work. If someone could post a screenshot of their working Sonarr config for ruTorrent, I would really appreciate it.
-
Stop --> Start did not update it for me.
I had to Stop --> Edit --> Save.
That triggered a rebuild and downloaded the updated version.
Then it was updated to 0.10.5 fine.
-
How do I change the URL base for ruTorrent?
Currently I access the web GUI through http://192.168.1.100:82/
I need to be able to access it from http://192.168.1.100:82/rutorrent/
I have already done this for Sonarr, CouchPotato, PlexPy, Transmission, and other things... but can't for the life of me find out how to do it for ruTorrent's web GUI...
This is so I can easily configure it with a reverse proxy under Apache and then have it exposed through Muximux.
Mulletboy, you don't need to change anything in rtorrent; add this to your Apache default.conf:
RewriteRule ^/rtorrent$ /rtorrent/ [R]
<Location /rtorrent>
    ProxyPass http://192.168.0.100:82/
    ProxyPassReverse http://192.168.0.100:82/
</Location>
This worked perfectly for me (once I found my typo in the IP address, lol)!
Thanks
-
How do I change the URL base for ruTorrent?
Currently I access the web GUI through http://192.168.1.100:82/
I need to be able to access it from http://192.168.1.100:82/rutorrent/
I have already done this for Sonarr, CouchPotato, PlexPy, Transmission, and other things... but can't for the life of me find out how to do it for ruTorrent's web GUI...
This is so I can easily configure it with a reverse proxy under Apache and then have it exposed through Muximux.
My Apache sites config currently has the following settings:
<VirtualHost *:443>
    ServerName mydomain.com
    ServerAlias mydomain.com
    ServerAdmin [email protected]
    DocumentRoot /config/www

    SSLEngine on
    SSLProxyEngine On
    RewriteEngine On
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
    SSLProxyVerify none
    SSLProtocol -ALL +TLSv1 +TLSv1.1 +TLSv1.2
    SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    SSLHonorCipherOrder on
    SSLCertificateFile "/config/keys/cert.crt"
    SSLCertificateKeyFile "/config/keys/cert.key"
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    ProxyPreserveHost Off

    <Location />
        ProxyPass http://192.168.1.100:99/
        ProxyPassReverse http://192.168.1.100:99/
        AuthUserFile /config/keys/.htpasswd
        AuthType Basic
        AuthName "Muximux"
        Require user someusernamegoeshere
    </Location>
    <Location /couch>
        ProxyPass http://192.168.1.100:5050/couch
        ProxyPassReverse http://192.168.1.100:5050/couch
    </Location>
    <Location /sonarr>
        ProxyPass http://192.168.1.100:8989/sonarr
        ProxyPassReverse http://192.168.1.100:8989/sonarr
    </Location>
    <Location /transmission>
        ProxyPass http://192.168.1.100:9091/transmission
        ProxyPassReverse http://192.168.1.100:9091/transmission
    </Location>
    <Location /sabnzbd>
        ProxyPass https://192.168.1.100:9090/sabnzbd/
        ProxyPassReverse https://192.168.1.100:9090/sabnzbd/
    </Location>
    <Location /pihole>
        ProxyPass http://192.168.1.4/admin/
        ProxyPassReverse http://192.168.1.4/admin/
    </Location>
    <Location /plexpy>
        ProxyPass https://192.168.1.100:8181/plexpy/
        ProxyPassReverse https://192.168.1.100:8181/plexpy/
    </Location>
</VirtualHost>
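Following the same pattern as the other apps, a /rutorrent block for this vhost might look like the sketch below; this is untested on my side and assumes rTorrent's web GUI is at :82 with no URL base change (the same approach as the RewriteRule answer given earlier in the thread):

```apache
# serve ruTorrent under /rutorrent without changing its URL base
RewriteRule ^/rutorrent$ /rutorrent/ [R]
<Location /rutorrent/>
    ProxyPass http://192.168.1.100:82/
    ProxyPassReverse http://192.168.1.100:82/
</Location>
```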
-
So I kinda threw in the towel here...
I installed the UniFi docker from pducharme's repository, with the same settings as I had before (remapped all ports to 7xxx from the originals) and the same appdata config location.
It "just worked"... straight away: no errors in the logs, and the web GUI came straight up from the URL given by the app icon.
I would still be keen to get LinuxServer.io's docker working, because I am currently using 8 of their other dockers perfectly, and now this is the only docker I use that isn't one of theirs...
But for now I am up and running and managing my UniFi APs with no problems.
-
Aaaand no dice.
I deleted the docker image and container, deleted the unifi appdata folder and config, and reinstalled from scratch.
Exact same result: the docker web GUI loads "page cannot be displayed".
The log files are full of the same errors as above...
Here are 3 screenshots of my setup... do they help? Can anyone see anything wrong with my setup?
-
Looks like it should be up and running (it takes a minute or two before you can access the web GUI).
I was afraid you had changed your ports backwards (container port / host port), but you remapped them just fine.
This is a super simple container to get running: just specify a location for the volume mapping and change any conflicting ports if required. You have done all of that just fine.
Sorry I don't have much more for you, as this was a trouble-free one for me.
Maybe try the generic "completely delete the image and reinstall"?
Let us know how you make out.
Yeah, I can see the docker build works fine... but I can see in the UniFi server logs (the first logs I posted) that the app is crashing:
Sat Apr 30 00:08:03.515 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x2b26545fe000 errno:22 Invalid argument
Sat Apr 30 00:08:03.515 [initandlisten] Fatal Assertion 13515
Sat Apr 30 00:08:03.517 [initandlisten]
***aborting after fassert() failure
Sat Apr 30 00:08:03.517 Got signal: 6 (Aborted).
Unfortunately, I am not very good at interpreting this... the stack trace keeps repeating and it keeps trying to restart... if I leave the docker running, the log files get pretty big...
I will totally delete and re-add the docker like you suggest, and report back.
-
I did a force update to simulate re-installing (am I right to do that?)
Results are quoted below.
The 8080 port was already in use (SABnzbd), so I changed the port to 7080 for UniFi.
Preparing to update: linuxserver/unifi
Pulling image: linuxserver/unifi:latest
IMAGE ID [latest]: Pulling from linuxserver/unifi.
IMAGE ID [f7eef3e8d2a5]: Already exists.
IMAGE ID [f504bc93666d]: Already exists.
IMAGE ID [9e58b20b58ab]: Already exists.
IMAGE ID [97b492236f76]: Already exists.
IMAGE ID [1e4ce1496f97]: Already exists.
IMAGE ID [b3260a41280d]: Already exists.
IMAGE ID [a4e1f7d8ab53]: Already exists.
IMAGE ID [35100966404c]: Already exists.
IMAGE ID [f6953bdb8339]: Already exists.
IMAGE ID [613b1fd6af34]: Already exists.
IMAGE ID [4bdf01eea2ad]: Already exists.
IMAGE ID [f8cff8ab4885]: Already exists.
IMAGE ID [f2a32351f2ae]: Already exists.
IMAGE ID [d0353cbf4516]: Already exists.
IMAGE ID [c9c6b86be268]: Already exists.
IMAGE ID [ce044d251a65]: Already exists.
IMAGE ID [a1b6b146e3c6]: Already exists.
IMAGE ID [5f76b89093c3]: Already exists.
IMAGE ID [328b150725c5]: Already exists.
IMAGE ID [961648b93697]: Already exists.
IMAGE ID [1370654e6055]: Already exists.
IMAGE ID [0e9241f88932]: Already exists.
Digest: sha256:4d96ea3702f03e5ae1ced38b9d54c682005927e1953fd6fa3b3dd5f9e726bf58.
Status: Image is up to date for linuxserver/unifi:latest.
TOTAL DATA PULLED: 0 B
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker rm -f unifi
unifi
The command finished successfully!
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="unifi" --net="bridge" -e PUID="99" -e PGID="100" -e TZ="Australia/Sydney" -p 7080:8080/tcp -p 8081:8081/tcp -p 8443:8443/tcp -p 8843:8843/tcp -p 8880:8880/tcp -v "/mnt/user/appdata/unifi/":"/config":rw linuxserver/unifi
a03b95b29fa9bacd6ca6f4fef84cce8da9b5d59507900c6e7b682d64206b4026
The command finished successfully!
-
Hey, I am having trouble getting this docker working; I can't access the web UI, and the UniFi logs are quoted below. Can anyone help me?
Sat Apr 30 00:08:03.506 [initandlisten] MongoDB starting : pid=253 port=27117 dbpath=/usr/lib/unifi/data/db 64-bit host=f7435e009913
Sat Apr 30 00:08:03.506 [initandlisten] db version v2.4.14
Sat Apr 30 00:08:03.506 [initandlisten] git version: 05bebf9ab15511a71bfbded684bb226014c0a553
Sat Apr 30 00:08:03.506 [initandlisten] build info: Linux ip-10-154-253-119 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Sat Apr 30 00:08:03.506 [initandlisten] allocator: tcmalloc
Sat Apr 30 00:08:03.507 [initandlisten] options: { bind_ip: "127.0.0.1", dbpath: "/usr/lib/unifi/data/db", logappend: true, logpath: "logs/mongod.log", nohttpinterface: true, port: 27117 }
Sat Apr 30 00:08:03.514 [initandlisten] journal dir=/usr/lib/unifi/data/db/journal
Sat Apr 30 00:08:03.514 [initandlisten] recover : no journal files present, no recovery needed
Sat Apr 30 00:08:03.515 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x2b26545fe000 errno:22 Invalid argument
Sat Apr 30 00:08:03.515 [initandlisten] Fatal Assertion 13515
Sat Apr 30 00:08:03.517 [initandlisten]
***aborting after fassert() failure
Sat Apr 30 00:08:03.517 Got signal: 6 (Aborted).
Sat Apr 30 00:08:03.520 Backtrace:
0xdea831 0x6d1179 0x2b2652698d40 0x2b2652698cc9 0x2b265269c0d8 0xdaa6ee 0xdc6b3f 0x92eda2 0x92f24f 0x9321a5 0x924a35 0x6d802c 0x6d885d 0x6df570 0x6e1319 0x2b2652683ec5 0x6cfde9
bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdea831]
bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x6d1179]
/lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x2b2652698d40]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x2b2652698cc9]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x2b265269c0d8]
bin/mongod(_ZN5mongo13fassertFailedEi+0xde) [0xdaa6ee]
bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x14f) [0xdc6b3f]
bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x122) [0x92eda2]
bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2f) [0x92f24f]
bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x135) [0x9321a5]
bin/mongod(_ZN5mongo3dur7startupEv+0x85) [0x924a35]
bin/mongod(_ZN5mongo14_initAndListenEi+0x3ec) [0x6d802c]
bin/mongod(_ZN5mongo13initAndListenEi+0x1d) [0x6d885d]
bin/mongod() [0x6df570]
bin/mongod(main+0x9) [0x6e1319]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x2b2652683ec5]
bin/mongod(__gxx_personality_v0+0x499) [0x6cfde9]
***** SERVER RESTARTED *****
[2016-04-30 00:07:29,483] <launcher> INFO system - *** Running for the first time, creating identity ***
[2016-04-30 00:07:29,485] <launcher> INFO system - UUID: 89369afc-319c-4635-8228-5acee5cffdcb
[2016-04-30 00:07:29,485] <launcher> INFO system - ======================================================================
[2016-04-30 00:07:29,485] <launcher> INFO system - UniFi 4.8.15 (build atag_4.8.15_7440 - release) is started
[2016-04-30 00:07:29,485] <launcher> INFO system - ======================================================================
[2016-04-30 00:07:29,488] <launcher> INFO system - BASE dir:/usr/lib/unifi
[2016-04-30 00:07:29,491] <launcher> INFO system - Current System IP: 172.17.0.7
[2016-04-30 00:07:29,491] <launcher> INFO system - Hostname: f7435e009913
[2016-04-30 00:07:32,000] <db-server> ERROR system - [exec] error, rc=14
[2016-04-30 00:07:32,000] <db-server> INFO db - DbServer stopped
[2016-04-30 00:07:37,267] <db-server> ERROR system - [exec] error, rc=14
[2016-04-30 00:07:37,267] <db-server> INFO db - DbServer stopped
[2016-04-30 00:07:42,534] <db-server> ERROR system - [exec] error, rc=14
[Support] Linuxserver.io - Quassel-Web
in Docker Containers
I am having a similar issue to the person above me: I have the quassel-core docker running and can access it fine from the desktop client, but when using the quassel-web docker I input the login credentials and it just spins at "connecting" forever.
In the quassel-core logs I can see the following:
2024-03-30 18:47:14 [Info ] Client connected from 172.18.0.1
If I connect from my desktop client the logs show
2024-03-30 18:47:58 [Info ] Client connected from 192.168.1.10
2024-03-30 18:47:58 [Info ] Client 192.168.1.10 initialized and authenticated successfully as "username" (UserId: 1).
I'm using identical credentials for both, but the web client never triggers the second authentication line in the logs (success or fail)... it's like it just isn't pushing the full request through?
Any support on this would be appreciated; I suspect this is a pretty rarely used docker, given how quiet this thread is...