taifleh

Members
  • Posts: 2
  • Joined
  • Last visited

  1. Edit: solved it, posting the solution.

     Hi there, some sort of issue seems to have found its way into my system since my last login to NPM in January. My list of proxy hosts and SSL certificates is not displayed; the UI shows me the error "The owner is null".

     The reason behind this error is the deletion of two user accounts in NPM. Those users happened to be the owners of those proxy host entries. To fix the issue, I went inside the container and edited the SQLite database, resetting the owner to the ID of the remaining user (1). (A non-interactive variant of these commands is sketched after this post.)

     ```
     unraid# docker exec -it NginxProxyManager bash
     bash-5.0# sqlite3 /data/database.sqlite
     sqlite> UPDATE proxy_host SET owner_user_id = 1 WHERE owner_user_id != 1;
     sqlite> UPDATE certificate SET owner_user_id = 1 WHERE owner_user_id != 1;
     ```

     The stack trace from the browser console:

     ```
     TypeError: owner is null
         exports https://nginx.x.duckdns.org/js/7.bundle.7.js:1
         render https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:306
         <anonymous> https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:299
         _renderTemplate https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         render https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         ae https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         _getBuffer https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         H https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         _getBuffer https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         _renderChildren https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         filter https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         sort https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         render https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         ae https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         show https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         showChildView https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         onRender https://nginx.x.duckdns.org/js/7.bundle.7.js:1
         O https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         render https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         ae https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         show https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         showChildView https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         onRender https://nginx.x.duckdns.org/js/7.bundle.7.js:1
         promise callback*onRender https://nginx.x.duckdns.org/js/7.bundle.7.js:1
         O https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         render https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         ae https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         show https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         showChildView https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         showAppContent https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:306
         showNginxProxy https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         showNginxProxy https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         promise callback*showNginxProxy https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         xe https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         i https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         Y https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         execute https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         route https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         loadUrl https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         F https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         loadUrl https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         navigate https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         navigate https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         click ui.links@https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:306
         dispatch https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         handle https://nginx.x.duckdns.org/js/main.bundle.js?v=2.8.1:27
         7.bundle.7.js:1:1293
     ```
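     A minimal sketch of the same fix as one-shot commands, for anyone who wants to skip the interactive shell. The container name, database path, and SQL statements are taken from the post above; the backup copy (`database.sqlite.bak`) is an extra precaution added here, not part of the original steps:

     ```
     # Back up the database first (precaution; the filename is my own choice).
     docker exec NginxProxyManager cp /data/database.sqlite /data/database.sqlite.bak

     # Reassign all proxy hosts and certificates to the remaining user (id 1).
     docker exec NginxProxyManager sqlite3 /data/database.sqlite \
       "UPDATE proxy_host SET owner_user_id = 1 WHERE owner_user_id != 1;
        UPDATE certificate SET owner_user_id = 1 WHERE owner_user_id != 1;"
     ```

     Afterwards, reload the NPM UI; the proxy host and certificate lists should render again.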
  2. Hi knex666,

     I'm a fairly experienced user, but I still had trouble getting this to work. This post has gotten longer than I intended, but I think we can both learn something from it. Here are the problems I experienced when I set up this container (see the combined mopidy.conf sketch at the end of this post):

     • In the Unraid setup, the description of the field "config" is misleading, because the path is actually mounted to (container: /etc/mopidy.conf) instead of (container: /mopidy.conf).
     • The path mentioned in the field "config" right now is (container: /mopidy.conf), which, in the container, is a directory. If that's intended, you should call it "/mopidy.conf.d".
     • Running `ps aux` in the container reveals that Mopidy is run with two config files: (container: /etc/mopidy.conf) and (container: /mopidy.conf), which, again, is a directory. I'm not sure that really works, but if it does, see the point above.
     • The description of the field "Host Path 2" is also misleading, because the path is mounted to a directory (container: /tmp/) instead of (container: /tmp/snapfifo).
     • Mounting the FIFO for Snapcast on a share results in an open file handle on the spinning hard drives if appdata is located on one; as a consequence, they can't spin down. If appdata is located on an SSD cache, the SSD will wear out because data is written to it continuously. The appropriate place to mount a FIFO is /tmp, which should result in a FIFO in RAM. I set the host path to (host: /tmp/snapcast/) for this container and for the Snapcast container.
     • If I then try to enable Snapcast in the Iris UI, it tells me that I have to modify the config file. Maybe you could hint at this in the XML: Snapcast has to be enabled as an option to Iris. Or you could just enable it in the image; as long as no host and port are set, one can still disable Snapcast from the UI:

       [iris]
       snapcast_enabled = true

     • Running the library scan from the Iris UI fails, because some script there seems to depend on sudo, which isn't installed in this container image.
     • Running the library scan from within the container (`mopidy --config /etc/mopidy.conf local scan`) scans my library fine, but none of the frontends notice, because the default library for this plugin is a JSON file that isn't persisted and is only read when Mopidy starts. Let's use `[local] library = sqlite`, so library updates are reflected immediately. Since the image's default config activates local-sqlite anyway, I don't see a reason not to make this the default too.
     • Of course, that SQLite DB isn't mounted to any Unraid share, so all changes are dropped when Mopidy restarts. Mount (container: /var/lib/mopidy/.local/share/mopidy/local-sqlite/) to /mnt/user/appdata/mopidy/sqlite/ and the local DB finally works.
     • But then, of course, something always seems to delete the SQLite database when the container is redeployed from Unraid. That isn't so bad if the collection is small enough, but for big collections it could be bothersome.
     • The MPD plugin is set to listen on 127.0.0.1 only by default. I think it should be set to 0.0.0.0 by default; one can still choose not to expose the port within Docker.

     Regarding the Snapcast docker: snapcast-docker always throws these two errors when starting:

     ```
     2020-12-22 10-16-58 [Err] Error reading config: [json.exception.parse_error.101] parse error at 1: syntax error - unexpected end of input; expected '[', '{', or a literal
     2020-12-22 10-16-58 [Err] Failed to create client: Daemon not running
     ```

     While debugging all this, I also noticed that there are some old logs left over from before image creation. You might want to get rid of these:

     ```
     root@9cedf0bbf4a4:/# ls -l /var/log/mopidy/
     total 24
     -rw-r--r-- 1 root root 21987 Feb 25 2019 mopidy.log
     ```
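     To tie the Mopidy points above together, here is a minimal mopidy.conf sketch. The `[local]`, `[mpd]`, and `[iris]` values come straight from the post; the `[audio]` output pipeline is the one commonly documented for feeding a Snapcast FIFO and is an assumption on my part, not verified against this particular image:

     ```
     [local]
     # SQLite-backed library, so scan results are picked up without a restart.
     library = sqlite

     [mpd]
     # Listen on all interfaces; port exposure can still be restricted in Docker.
     hostname = 0.0.0.0

     [iris]
     # Must be set before Snapcast can be toggled in the Iris UI.
     snapcast_enabled = true

     [audio]
     # Assumed: GStreamer pipeline commonly documented for Snapcast setups,
     # writing 48 kHz stereo PCM to the FIFO at /tmp/snapfifo inside the container.
     output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/tmp/snapfifo
     ```

     For persistence, this assumes /var/lib/mopidy/.local/share/mopidy/local-sqlite/ is mounted to an appdata path as described above.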