[Support] selfhosters.net's Template Repository



So with Duplicacy, this has happened twice lately: once with the last update (Jan 21), and again just last night when my weekly CA appdata backup/restore ran (with no Duplicacy update this time). It asks for the master password (not the admin WebUI password), and if I do NOT give it the master password, backup schedules fail to run.

 

This was never an issue until Jan 21. I'm trying to find out how to avoid it, because having to open the container's web UI on a regular basis to enter a password is killing my automated backup plan.

 

Any advice on what I can do to try to find out what's going on? Thanks in advance! 

 

PS - when I start the container I see the following lines; not sure if this is related.

 

Duplicacy Web Edition 1.5.0 (BAFF49)
Starting the web server at http://[::]:3875
2021/01/31 09:11:33 The license may have been issued to a different computer
2021/01/31 09:11:33 Failed to get the value from the keyring: keyring/dbus: Error connecting to dbus session, not registering SecretService provider: dbus: DBUS_SESSION_BUS_ADDRESS not set

 


In addition to my keyring issue above, I'm also having issues where prune commands lock up, but ONLY for Backblaze. When I run them on local backups they run fine and snappy, but against Backblaze they just hang.

 

One run had its last prune log line from 5:30 AM, and 10 hours later it hadn't moved.

I cancelled and restarted it, and now it gets stuck again almost immediately, with no movement for about 45 minutes.

 

Running prune command from /cache/localhost/all
Options: [-log prune -storage Backblaze -keep 0:15 -all -exhaustive]
2021-02-01 15:49:02.228 INFO STORAGE_SET Storage set to b2://************
2021-02-01 15:49:02.652 INFO BACKBLAZE_URL download URL is: https://f002.backblazeb2.com
2021-02-01 15:49:04.782 INFO RETENTION_POLICY Keep no snapshots older than 15 days
2021-02-01 15:49:11.795 INFO SNAPSHOT_DELETE Deleting snapshot Flash at revision 74
2021-02-01 15:49:11.847 INFO SNAPSHOT_DELETE Deleting snapshot Nextcloud at revision 70
2021-02-01 15:49:12.318 INFO SNAPSHOT_DELETE Deleting snapshot Paperless at revision 25
2021-02-01 15:49:12.337 INFO SNAPSHOT_DELETE Deleting snapshot Photos at revision 70
2021-02-01 15:49:12.865 INFO SNAPSHOT_DELETE Deleting snapshot AppData at revision 71
2021-02-01 15:49:13.297 INFO SNAPSHOT_DELETE Deleting snapshot Backups at revision 70

 

I've also opened a thread on their forums here: https://forum.duplicacy.com/t/backblaze-prune-failing-to-process/4787

I'm not getting many answers there that help, though, so I thought I'd post here as well.
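One way to narrow down where the prune hangs is to re-run it by hand with Duplicacy's global debug flags (a sketch using the Duplicacy CLI; run it from the same repository directory the web UI uses, and "Backblaze" is the storage name from the log above):

```shell
# Re-run the same prune with debug logging to see the last operation before it stalls
duplicacy -log -d prune -storage Backblaze -keep 0:15 -all -exhaustive

# A dry run lists what would be deleted without touching B2 at all
duplicacy -log -d prune -storage Backblaze -keep 0:15 -all -exhaustive -dry-run
```

The `-d` flag logs every storage request, so if it's a single stuck B2 API call, the last debug line should show which one.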


Hi guys. Has anyone been able to get NVIDIA HW acceleration working with the Shinobi docker? I modified the template to include --nvidia=all and added the appropriate GPU ID and 'all' to the capabilities variables. However, Shinobi throws errors in the log:

 

Unknown decoder 'h264_cuvid'

 

Shelling into the container and running nvidia-smi shows the GPU status. However, ffmpeg does not show "--enable=nvidia" as a compiled-in configuration option. I searched apt for an ffmpeg build with NVIDIA support, but it doesn't exist; it seems you need to compile it yourself. Is this correct, or is there an easier way to make this work?

 

BTW, Plex and Tdarr both work with my GPU, so the issue is not Unraid or the passthrough.
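For what it's worth, compiling ffmpeg with cuvid support roughly follows NVIDIA's usual recipe: install the nv-codec-headers, then configure ffmpeg with the NVIDIA flags. A sketch (paths are examples, flag names vary a bit by ffmpeg version, and this assumes the CUDA toolkit is already installed):

```shell
# Headers that ffmpeg needs to talk to NVDEC/NVENC
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
make -C nv-codec-headers install

# Build ffmpeg with the NVIDIA decoders/encoders enabled
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --enable-nonfree --enable-cuvid --enable-nvenc --enable-libnpp \
    --extra-cflags=-I/usr/local/cuda/include \
    --extra-ldflags=-L/usr/local/cuda/lib64
make -j"$(nproc)"

# Afterwards this should list h264_cuvid among the decoders
./ffmpeg -decoders | grep cuvid
```

Doing this inside the Shinobi container would mean rebuilding the image, which is probably why most people wait for the image maintainer to ship an NVIDIA-enabled ffmpeg instead.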

On 1/24/2021 at 8:49 PM, Banuseka said:

I removed and installed it several times, and then suddenly it worked again... Dunno why, but it has been working for a couple of weeks now.

 

The links that this docker repository uses to download the required binaries no longer exist, so unless the owner of the docker repository updates the links to deemix, this docker is dead.

 


Hello,

I've been testing this docker and I've run into an issue: when I delete files in the Shinobi interface, the files on Unraid just get renamed to .fuse files, and they don't disappear until I shut down the Shinobi docker completely. It's as if the container is keeping the files locked "in use" even though they shouldn't be.

 

My configuration is as follows:

Cache only share called "CCTV Files"

Stored Video Location = /mnt/user/CCTV Files

 

I added 3 variables to make the file permissions work properly with unraid

PUID 99

PGID 100

UMASK 000

 

Could this be causing the issue I'm seeing, where files become .fuse files rather than disappearing when deleted via the Shinobi GUI?
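For context, those .fuse files are usually `.fuse_hidden*` placeholders that the FUSE layer creates when a file is deleted while some process still has it open; they vanish once the last handle closes. A quick way to see them and find out who is holding them (a sketch; the share path is the Stored Video Location from the post, and `lsof` must be available on the host):

```shell
# Use the share path from the template; override with SHARE=... for testing
share="${SHARE:-/mnt/user/CCTV Files}"

# List leftover .fuse_hidden placeholder files on the share
find "$share" -name '.fuse_hidden*' -ls 2>/dev/null

# Show which process still has a deleted file open
lsof +D "$share" 2>/dev/null | grep -i fuse_hidden || true
```

If the holder is a Shinobi process (e.g. an ffmpeg recorder), that would confirm the container keeps the recording open after the GUI delete.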


Hi, I am the owner/supplier of the deemix container.

The container is working, the links are not dead or anything.

A bit of history and a little background:
Originally, this was hosted on Docker Hub. But Docker Hub killed all deemix-related images a while ago, so I moved to GitLab (even though it has slightly less functionality in regards to stats and so on).

Since the code for deemix-pyweb does not follow any release versioning (the API behind it, deemix, does, but pyweb, the web client, doesn't), the strategy is to always pull the latest code from the repo and run it in the container, downloading it when the container starts. So, unlike other containers, the code is not already baked into the built image. This posed some problems when the source was moved around a few times, so I've moved to a different approach:
A second project downloads the source every night and provides it as a zip file, a so-called "artifact". If a nightly run fails, the last working artifact is still available, so there should never be a situation with no source available, even if the source repo is down or gets nuked (unless my project itself gets nuked, of course).

If you have issues with the content of the container, please open an issue in the repo (I'm not using Unraid, so I'm not really on this forum).

I understand there is an issue with displaying a version. Tell me where I would have to put versioning information (I would come up with a repo-download-date scheme or similar) and I can do that. As the GitLab registry works differently from Docker Hub, I don't know if that's possible at all.

 

Greetings

PS: The links in the first post are old. Check https://faq.deemix.app/ for all current links to the repo and so on.

Edited by Bocki

Hey,

a question about Cloudflare-DDNS.

 

How can I update an A and an AAAA record at the same time?

Just A or just AAAA works fine, but not both at the same time. I tried this in many combinations:

ZONE=example.com;example.com
SUBDOMAIN=subdomain;subdomain
RRTYPE=A;AAAA

 

 

14 hours ago, DirtyLew said:

I can't reach the webui.

I have docker set to pass 6595 to the container


Hi,

 

please post on Reddit or open an issue in the repo, I really don't watch this forum :)

On 2/13/2021 at 2:02 AM, sonic6 said:

How can i update an A and AAAA record at the same time? [...]

I don't know the actual answer here, but in the meantime you can just run two instances of the same container; it's super lightweight.
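The two-instance approach could look something like this (a sketch assuming the oznu/cloudflare-ddns image this template wraps; the token, zone, and names are placeholders, and AAAA updates also require the container to see an IPv6 address):

```shell
# One container updates the A record...
docker run -d --name cf-ddns-a \
  -e API_KEY=your_cloudflare_api_token \
  -e ZONE=example.com \
  -e SUBDOMAIN=subdomain \
  -e RRTYPE=A \
  oznu/cloudflare-ddns

# ...and a second one updates the AAAA record
docker run -d --name cf-ddns-aaaa \
  -e API_KEY=your_cloudflare_api_token \
  -e ZONE=example.com \
  -e SUBDOMAIN=subdomain \
  -e RRTYPE=AAAA \
  oznu/cloudflare-ddns
```

In the Unraid template this would just mean installing the app twice with different names and changing RRTYPE on the second copy.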


Another question about the Cloudflare-DDNS app (it's fantastic, BTW).

Noob question: this app doesn't seem to update the www A record. Is this an issue?

The domain.com A record updates perfectly :)

If I don't have both A records, Cloudflare complains...

On 2/14/2021 at 7:14 PM, DirtyLew said:

I can't reach the webui.

I have docker set to pass 6595 to the container


 

Hi, I had the same issue and I solved it by:

- removing the deemix folder in appdata

- editing the container and updating the ARL

Give Deemix a few minutes to reinstall, and I hope it will work again for you!

Edited by Fefepaille
On 2/14/2021 at 6:14 PM, DirtyLew said:

I can't reach the webui.

I have docker set to pass 6595 to the container


This happens if the ARL is missing from the template, or if it is expired/incorrect.

 


Hi all, 

 

I am new here and only recently installed youtube-dl-material. It worked great for a while, but then I added too many channels and filled my cache drive. The app stalled and since then will not start. Here is the log:

 

SyntaxError: Malformed JSON in file: ./appdata/db.json
Unexpected end of JSON input
at FileSync.parse [as deserialize] (<anonymous>)
at FileSync.read (/app/node_modules/lowdb/adapters/FileSync.js:37:30)
at LodashWrapper.db.read (/app/node_modules/lowdb/lib/main.js:32:21)
at module.exports (/app/node_modules/lowdb/lib/main.js:51:13)
at Object.<anonymous> (/app/app.js:45:12)
at Module._compile (internal/modules/cjs/loader.js:999:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
at Module.load (internal/modules/cjs/loader.js:863:32)
at Function.Module._load (internal/modules/cjs/loader.js:708:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
at internal/main/run_main_module.js:17:47
/app/node_modules/lowdb/adapters/FileSync.js:42
throw e;
^

 

If you have any suggestions for a fix please let me know. Thanks in advance. 


So I recently switched to this docker from the SpaceInvader one, simply because it's more up to date.

I've been running it for 1-2 weeks now.

However, since yesterday it seems to have gone haywire.

 

I log my events to SQL so I can pinpoint motion more precisely.

The timestamps of the event logging are now 9 hours off, and I am on UTC+1.

The wall clock shows the proper time, and I've set 'use wallclock' to yes.

 

What is happening here??
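A 9-hour offset with a correct wall clock often means the container and the host disagree about the time zone (processes that log in UTC versus local time). You can see the effect of the TZ variable directly, and, assuming the template exposes it, pass the same variable to the container:

```shell
# The same instant rendered under two time zones
TZ=UTC date
TZ=Europe/Amsterdam date   # UTC+1 in winter, UTC+2 in summer

# Hypothetical: give the container an explicit zone in its template / run command
# docker run -e TZ=Europe/Amsterdam ...
```

If the container lacks a TZ setting, its processes typically default to UTC regardless of what the host clock shows.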

 

Besides this, the motion detection seems to have gone haywire too.

All of a sudden it is making 2-24 hour videos, while the timer is set to record only 5 minutes.

And since I am recording 5 MP at 25 fps, these files are... quite large, especially since I'm using RTMP and not a variable bitrate.

 

Does anyone know where I can even start analyzing these issues?

 

Here is a screenshot of literally the last moments. The detected motion also seems to be outside the motion area, as if the area is being completely ignored.

Edited by Caennanu
added screenshot
10 hours ago, Caennanu said:

So recently i switched to this docker from the spaceinvader one, simply cause its more up to date. [...] Any1 know where i can even start to analyze these issues?

I looked back at some of the other posts, and it looks like this docker is dead.


Does anyone know how to speed up Duplicacy?

 

I have 1 Gbps fiber, so I was hoping to get closer to that speed when uploading my backup. I have threads set to 1 in Duplicacy since I have mechanical drives, but I am only getting 9 MB/s. Is there any way to get faster uploads, or am I bottlenecked by B2 or something else?
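B2 tends to be slow per connection, so the usual fix is more upload threads rather than one; the drives are rarely the bottleneck because chunks are read, hashed, and uploaded in parallel. A sketch with the Duplicacy CLI (the storage name is a placeholder; in the Web Edition the same thing is done by adding `-threads N` to the backup schedule's options):

```shell
# Try more upload threads against B2; 4-16 is a common range to experiment with
duplicacy backup -storage Backblaze -stats -threads 8
```

Raising the thread count until throughput stops improving is the usual way to find the sweet spot for a given provider.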

 


I cannot get Deemix to work at all. Here is my log; can anyone help?

 

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 15-checks: executing...
[cont-init.d] Testing Access
[cont-init.d] Container Builddate : 2020-12-03T11:46:19UTC
[cont-init.d] Download Folder Write Access : Success
[cont-init.d] Config Folder Write Access : Success
[cont-init.d] Internet Access : Success
[cont-init.d] 15-checks: exited 0.
[cont-init.d] 20-download: executing...
[cont-init.d] Downloading and unpacking
[cont-init.d] First start, downloading repo
[cont-init.d] Using Deemix UI
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
rm: cannot remove '/deezui': No such file or directory
[cont-init.d] 20-download: exited 0.
[cont-init.d] 30-config: executing...
[cont-init.d] Setting permissions this may take some time
[cont-init.d] 30-config: exited 0.
[cont-init.d] 40-install: executing...
[cont-init.d] Installing
ERROR: spotipy 2.17.1 has requirement requests>=2.25.0, but you'll have requests 2.23.0 which is incompatible.
ERROR: spotipy 2.17.1 has requirement urllib3>=1.26.0, but you'll have urllib3 1.25.9 which is incompatible.
[cont-init.d] Installation done
[cont-init.d] !!! ARL from environment variable does not match current ARL. Using environment variable. !!!
[cont-init.d] !!! Please update your ARL in the variable instead of the webinterface or remove the ARL from the container call. !!!
[cont-init.d] 40-install: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] Starting with ARL
[services.d] done.
[cont-init.d] Installation done
[cont-init.d] !!! ARL from environment variable does not match current ARL. Using environment variable. !!!
[cont-init.d] !!! Please update your ARL in the variable instead of the webinterface or remove the ARL from the container call. !!!
[cont-init.d] 40-install: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] Starting with ARL
[services.d] done.
INFO:deemix:Linux-5.10.1-Unraid-x86_64-with - Python 3.8.5, deemix 2.0.16
Server-wide ARL enabled.
Saved ARL mistyped or expired, please enter a new one
Traceback (most recent call last):
File "/deemix/server.py", line 376, in <module>
run_server(host, port, portable, server_arl=serverwide_arl)
File "/deemix/server.py", line 345, in run_server
arl = app.getConfigArl()
File "/deemix/app.py", line 134, in getConfigArl
return self.getArl(tempDz)
File "/deemix/app.py", line 117, in getArl
arl = input("Paste here your arl: ")
EOFError: EOF when reading a line
Paste here your arl: [services.d] Starting with ARL
INFO:deemix:Linux-5.10.1-Unraid-x86_64-with - Python 3.8.5, deemix 2.0.16
Server-wide ARL enabled.
Saved ARL mistyped or expired, please enter a new one
Traceback (most recent call last):
File "/deemix/server.py", line 376, in <module>
run_server(host, port, portable, server_arl=serverwide_arl)
File "/deemix/server.py", line 345, in run_server
arl = app.getConfigArl()
File "/deemix/app.py", line 134, in getConfigArl
return self.getArl(tempDz)
File "/deemix/app.py", line 117, in getArl
arl = input("Paste here your arl: ")
EOFError: EOF when reading a line
Paste here your arl: [services.d] Starting with ARL
Server-wide ARL enabled.
Saved ARL mistyped or expired, please enter a new one
Traceback (most recent call last):

 


Hey Guys, 

 

I am hoping someone here can give me some guidance on an issue I am having. I installed the Cloudflare-DDNS docker because I didn't want to keep using No-IP; having to verify my account about twice a month was getting annoying.

 

My domain is registered with GoDaddy. I don't have an actual web page, so GoDaddy had my A and WWW records pointing to one of their parked IPs. But I had created a few CNAME records for my subdomains (NextCloud, OnlyOffice, and Remote) that pointed to mydomain.ddns.net from No-IP. This was all working correctly with SWAG (I upgraded from LetsEncrypt).

 

I found IBRACORP's YouTube video on Cloudflare-DDNS and figured I would give it a try, since it sounded like a nice solution and I would no longer need to deal with No-IP :)

 

I went ahead and created an account on Cloudflare and moved my DNS from GoDaddy to Cloudflare. I made sure that everything was pointing in the right direction and that my CNAMEs were still in place. I removed the old A record and the www record that were holdovers from GoDaddy, just like the video said to do.

 

I then installed the Cloudflare-DDNS docker and configured it with my email, API key, and domain zone. I left subdomains empty because my subdomains use CNAMEs that point to my domain address, and I left Cloudflare proxied set to true. Once the docker downloaded and ran, I checked the Cloudflare DNS page and saw the updated IP address in a new A record. I verified that the IP was indeed my assigned IP, which it was.

 

At this point I lost connection to the NextCloud, OnlyOffice, and Remote subdomains. I restarted SWAG and the corresponding docker apps as well, just in case. After about 20 minutes I noticed that if I did an NSLookup, the domain and subdomains were pointing to Cloudflare IPs (not the direct IP, but what I can only assume are the proxy IPs). I couldn't get any of my sites to come up correctly. The only way I could get them to load was to disable the proxy feature on the Cloudflare DNS page and in the docker. Once I did this and waited a few minutes, I was able to pull up my applications with no issues.
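That NSLookup behavior is actually expected: when a record is proxied (orange cloud), the name resolves to Cloudflare edge IPs instead of your WAN IP, and traffic flows through Cloudflare's proxy. A quick way to check which mode a name is in (a sketch with placeholder names; `ifconfig.co` is just one of several "what is my IP" services):

```shell
# Proxied: returns Cloudflare edge addresses; DNS-only: returns your WAN IP
dig +short nextcloud.example.com

# Your actual WAN address, for comparison
curl -s https://ifconfig.co
```

If the two differ, the record is proxied, and any breakage is happening inside Cloudflare's proxy layer rather than in your DNS.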

 

So I am not sure what is up with that. It sucks that I can't use the proxy feature, but if anyone has any ideas as to why it's doing that, please let me know. Sorry for the long post; I hope this makes sense.

 

4 hours ago, Roxedus said:

Define this

 

I could no longer access my nextcloud.domain.com page; it would just time out. After looking at the SWAG documentation, I saw they recommend turning off the proxy when using Cloudflare:

 

Quote

On Cloudflare, we'll click on the orange cloud to turn it grey so that it is dns only and not cached/proxied by Cloudflare, which would add more complexities.

 

I haven't tried going back to the proxy setup to see if it breaks again, but right now, with the proxy turned off, everything is working correctly.

Edited by SiRMarlon
7 minutes ago, SiRMarlon said:

time out

This was the missing part.
 

 

9 minutes ago, SiRMarlon said:

SWAG documentation they recommend to turn off proxy when using CloudFlare

We do indeed recommend that. 

