Gog
Posts posted by Gog
-
-
3 hours ago, vast said:
[info] qBittorrent process listening on port 8080
Do you have a passthrough for port 8080 and is WEBUI_PORT set to 8080?
-
Yes, I went back through recent posts, but I don't have keysize or ncp-disable in my config.
I see a few complaints about deprecated parameters, but those are in the working build too.
-
I just did an image update and started to get: AUTH: Received control message: AUTH_FAILED
I rolled back to test a few images: 4.5.0-1-01 connects and 4.5.1-1-01 doesn't.
I have saved a set of debug-level supervisord logs if that can help.
edit: no keysize or ncp-disable in my config
-
I must be missing something obvious, but is there a way to save the selections in the drive dropdowns? My selections reset every time.
-
Is anybody using a web-based form builder with MariaDB?
-
On 12/29/2020 at 1:48 PM, HybridNoodle said:
That is using hard links
I'm not familiar with the way files are copied with a seedbox. Could you set the files to a different owner until they are complete, or is everything handled by Sonarr?
-
Any way you can use hardlinks?

Running Radarr version 3.0.1.4259, I'm having a strange issue where files are imported before they have fully downloaded to the server. I think the problem stems from the files being downloaded to a seedbox first and then copied down to a local server. As soon as Radarr sees the file appear, it believes the download has finished and begins the import, which fails and corrupts the file; once the file has fully arrived, it is imported "successfully" as a corrupted file (part of the show is just gone). This is not something I had an issue with in previous versions, and if I manually import the files everything works as it should.
Is there a setting or item I need to tweak so the import waits until the file is complete? I have tried raising the "Check for finished download interval" with the same results.
(This issue is also occurring with the 3.0.4 version of Sonarr)
Sent from my SM-G930W8 using Tapatalk
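For context on the hardlink question: a hardlink is complete the instant it is created, so an importer that sees one never sees a half-written file. Note this only works when source and destination are on the same filesystem, so it wouldn't help across a seedbox and a local server. A minimal sketch (the temp paths and file names are illustrative only):

```shell
# Create a file, then hardlink it into a second directory.
tmp=$(mktemp -d)
printf 'movie data' > "$tmp/film.mkv"
mkdir "$tmp/library"
ln "$tmp/film.mkv" "$tmp/library/film.mkv"

# Both names point at the same inode, so the "copy" is complete
# the instant the link exists; there is no partial file to import.
[ "$tmp/film.mkv" -ef "$tmp/library/film.mkv" ] && echo "same inode"
# prints: same inode

rm -rf "$tmp"
```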
-
Thanks @spants, I did not have any issues with the container.
@mrm, I pointed the import at the 2020 directory, which has about 2k pictures, but only 3 photos were imported. I'm following up by email.
-
On 11/10/2020 at 12:05 PM, casperse said:
Hi All
No reason to create a new thread... so, in 2020, what are people using for their photo management and backup to Unraid across devices?
I have 150K+ old pictures from devices, uploaded by app to my Synology server, and I have been trying to come up with a replacement so I can sell it!
Piwigo - http://piwigo.org/ - 25 Mio downloads! I've been playing with this for 3-4 days and it has crashed or timed out every day! (Even after changing the PHP timeout to 5 min and the memory to 1024 MB, and adding the recommended mgmt. plugins.) Way too unstable, sync is not automated, and when you try to sync, it crashes... (I think it just corrupted the MariaDB; this last time the UI disappeared and even after a reboot it's dead! Good thing I have auto appdata backups!)
So what to use instead? I found the following:
Lychee - https://lychee.electerious.com/ - App docker on Unraid - 10 Mio downloads! (Looks like Piwigo?) The author gave it to the community...
Photostructure - https://photostructure.com/server/photostructure-for-docker-compose/ - demo available, could be promising? Not an Unraid docker.
Photoprism - https://docs.photoprism.org/ - App docker on Unraid! - 2.5 Mio downloads
PhotoShow - App docker on Unraid - 5 Mio downloads (development looks dead!)
DigiKam - https://www.digikam.org/ - not a lot of people using this, only 100K downloads from our App community - looks more like an editor?
Koken - http://koken.me/ - sold and not supported anymore (dead)
Does anyone have other candidates or suggestions on what to use on one's Unraid server?
This is THE hole in my home setup. Nothing really does what I want. I use the Nextcloud app to sync my devices, and that works well enough, but it's not good enough for the presentation part.
I want this:
- Don't copy the data; use a path to the server. A DB for metadata is fine, but don't put 2 TB of data in the DB
- sync with changes in the photo path
- picture AND video support
- user and group management
- Shareable tags between users, an album built from a tag, or some other way to have an album that isn't just a directory
- maybe face recognition
I agree with your list; I tried them all except Photostructure and Koken, and nothing is close, nothing is stable with tens of thousands of files. Photoprism looks promising, but no user management is a deal breaker. I think the appdata directory was pretty big too.
Looks like I need to test photostructure now...
-
16 hours ago, LSL1337 said:
Hello
Is it possible to change the paths of the backup/cache/log directory?
it says "Controlled by Docker Container" which doesn't say much to me.
I'd like to put some of these to outside of /config
Looks like it's in /config/config.ini.
But the easiest thing is probably to add a mapping from /config/backups to /mnt/user/backup/whatever in the container parameters.
Back up your backups before poking around.
-
Subtitles in the /Subs subdirectory are not copied by Radarr. The devs are going back and forth on what to do, and they don't seem interested in fixing it properly, so I use this script. Maybe it can be of use to someone else. I create a symlink back at the original location to keep seeding across different drives; remove that part if it's not your thing.
edit: I'm a stupid boy who should do better QA before posting. Script will be back soon.
edit2: $%?#$%? white spaces. This works:
#!/bin/bash
LOGFILE="/config/logs/subtitle.log"
SUBSSOURCEPATH="$radarr_moviefile_sourcefolder/Subs"
DESTINATION="$radarr_movie_path"
MOVIENAME="$radarr_moviefile_relativepath"

echo "$(date "+%Y%m%d %T") : Starting subtitle subfolder work" >> "$LOGFILE" 2>&1

if [ -d "$SUBSSOURCEPATH" ]; then
    echo "Subs directory exists" >> "$LOGFILE" 2>&1
else
    echo "No ${SUBSSOURCEPATH} directory, exiting" >> "$LOGFILE" 2>&1
    exit 0
fi

# Expand only .srt files; nullglob makes the loop skip cleanly when there are none
SUBSSOURCEPATH="$SUBSSOURCEPATH/*.srt"
shopt -s nullglob

# Split only on newlines so file names with spaces survive the loop
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

for file in $SUBSSOURCEPATH
do
    BASENAME=$(basename "$file")
    # Move the subtitle next to the movie, named after the movie file,
    # then leave a symlink behind so the torrent keeps seeding
    mv "${file}" "${DESTINATION}/${MOVIENAME}.${BASENAME}"
    ln -s "${DESTINATION}/${MOVIENAME}.${BASENAME}" "$file"
    echo "Created symlink from source location: ${DESTINATION}/${MOVIENAME}.${BASENAME} to symlink location $file" >> "$LOGFILE" 2>&1
done

# restore $IFS
IFS=$SAVEIFS
shopt -u nullglob

echo "$(date "+%Y%m%d %T") : Ending subtitle subfolder work" >> "$LOGFILE" 2>&1
-
6 minutes ago, afsilver said:
I did edit my first post, explaining that I "fixed" MariaDB by deleting the mariadb folder in appdata. You must have missed it 😃
It now works, but on to Nextcloud, which is also f'd. Thanks for the help anyway.
Better that than the other way! Good luck
-
4 minutes ago, afsilver said:
...now I cannot find any appdata folders on any disks.
But I later read in another thread that I should manually move from /mnt/disk'X' -> /mnt/cache, not from /mnt/user -> /mnt/cache.
But to answer your question: my mapping for MariaDB was /mnt/user/appdata/mariadb, and now it is /mnt/cache/appdata/mariadb.
So appdata is only present on your cache; that's good, and your MariaDB path is OK. And you are right, moving from a share (/mnt/user) directly to a disk is a bad idea. Always copy share to share or disk to disk.
Have you tried installing MariaDB to a new appdata path like /mnt/cache/appdata/mariadb2, verifying that it works, stopping the container, copying the contents of the mariadb dir into the mariadb2 dir, praying a bit, and starting it back up? Without a real backup, that's the only thing I can think of.
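Those steps, as shell commands (the container name mariadb2 and the paths come from this thread; adjust them to your template, and keep a copy of the original directory first):

```shell
# Stop the freshly-created container before touching its data
docker stop mariadb2

# Copy the old database files into the new appdata directory,
# preserving ownership and permissions
cp -a /mnt/cache/appdata/mariadb/. /mnt/cache/appdata/mariadb2/

# Start it back up and watch the log for recovery errors
docker start mariadb2
docker logs -f mariadb2
```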
-
33 minutes ago, afsilver said:
Hi, yesterday I added a cache drive to my Unraid server, which I believe somehow messed up the appdata share for the docker containers.
All the containers need to be set up again.
Currently I am stuck fixing MariaDB, as "mysqld.sock" is missing and the SQL server won't start. It is supposed to be located in "/var/run/mysqld/".
Trying to log in to the SQL server gives me this error:
# mysql -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I have removed and re-added the MariaDB docker from the Community Applications plugin, but the file is still missing.
edit:
Turns out I am missing a lot of files in the appdata folder after my screwup.
Deleting the mariadb folder from appdata and setting up MariaDB from scratch solved my issue.
What are your mappings for the MariaDB container? What do you use for your appdata directory? Is it possible that your dockers are configured to use only the cache path, like /mnt/cache/appdata/..., while your application data sits on the array and is only reachable through /mnt/user/appdata/...?
Make sure your appdata is a cache-only share so that data in /mnt/cache/appdata never gets moved to the array and becomes unavailable for your containers.
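A quick way to check for that split (a sketch only; /mnt/cache and /mnt/disk* are the standard unraid mount points, but your share and container names may differ):

```shell
# Files listed here are served from the cache pool:
ls /mnt/cache/appdata/mariadb 2>/dev/null

# Anything that shows up here lives on an array disk instead,
# and a container mapped to /mnt/cache/... will not see it:
ls -d /mnt/disk*/appdata/mariadb 2>/dev/null
```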
-
4 hours ago, Amorbellum said:
But the WebUI shortcut takes me to SAB? How do I fix that?
Flip the Basic View/Advanced View toggle at the top right of the docker config. The WebUI field is configured to use the variables declared in the form, but instead of http://[IP]:[PORT:8080]/ you should be able to enter http://[IP]:9090/
-
remove it and create a new one
Well, I thought so, but it's greyed out and won't let me edit.
If I create a new port entry, is it the same?
Sent from my SM-G930W8 using Tapatalk
-
48 minutes ago, Amorbellum said:
i edited the WEBUI port?
do you mind looking at some screenshots of my configs?
Almost there. Click on the Edit button for host port 3. Container port and host port BOTH need to be 9090.
-
3 hours ago, Amorbellum said:
I changed host port 3 to 9090 and the webui is just blank, not even an error
As documented in the container's readme, the WEBUI_PORT variable needs to match the port setting, and you can't remap the port:
map 9090:9090 and set the variable WEBUI_PORT=9090.
-
2 hours ago, Amorbellum said:
For the love of god and all that is holy, why did you share port 8080 between SAB and qBittorrent? This is brutal, man. I can't get qBittorrent to run.
There is a paragraph for that at the end of the readme:
Due to issues with CSRF and port mapping, should you require to alter the port for the webui you need to change both sides of the -p 8080 switch AND set the WEBUI_PORT variable to the new port. For example, to set the port to 8090 you need to set -p 8090:8090 and -e WEBUI_PORT=8090
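Putting the readme's two requirements together as a full docker run sketch (the image name and volume paths are illustrative, not from this thread; only the -p and -e lines are what the readme mandates):

```shell
docker run -d --name qbittorrent \
  -p 8090:8090 \
  -e WEBUI_PORT=8090 \
  -v /mnt/user/appdata/qbittorrent:/config \
  -v /mnt/user/downloads:/downloads \
  linuxserver/qbittorrent
```

In the unraid template, the same change is the port mapping set to 8090:8090 plus the WEBUI_PORT variable set to 8090.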
-
8 minutes ago, J05u said:
So basically I need to do everything from scratch, or stay with binhex? I've seen that you can use Sonarr v3 as a preview with linuxserver...
Does V3 support multiple instances?
-
30 minutes ago, J05u said:
Can Sonarr settings from one container be moved to another? I mean the series which I added.
I've seen a few discussions on that, but no clean way of doing it. You could start the second Sonarr from a backup of the first, but series added later to the first won't show up in the second...
-
45 minutes ago, mihcox said:
I think the remote path mapping is not applied because nothing is mapped to /home11/mihcox/files.
You mapped /home11/mihcox/files/Completed, but "Completed" is not part of the path in your art of killing example.
Can you try renaming /downloads/Torrents/Completed - SeedBox to /downloads/Torrents/Completed and changing your remote path mapping to /home11/mihcox/files -> /downloads/Torrents/?
-
On 1/28/2020 at 11:31 AM, whauk said:
Now I have something: the downloads are still in Sonarr's activity list, but with a red X (physical removal from the downloader) and a manikin beside it (for manual moving), and the logs show "...path \downloads not exitant or not accessible." 😣
It seems I still do not understand the principle.
I was at a point where I thought you could basically do anything you like, and it must work as long as both variables (i.e. container paths) have the same name in both containers - in one container I achieved this by renaming the respective path from "/data" to "/downloads" - and both point to the same host path ("/mnt/user/Downloads").
OT: Does it make any difference (and if so, what) whether the path is "/mnt/user/Downloads" or "/mnt/cache/Downloads"?
Thanks for your help.
Looks like for some reason Sonarr doesn't see your downloaded file.
And yes, there is a difference. Say you have a file /mnt/cache/Downloads/1.txt: it also exists as /mnt/user/Downloads/1.txt. BUT /mnt/disk1/Downloads/1.txt, which is visible in the user path, will NOT be visible in the cache path. So once the mover script has run, the /mnt/cache path will not see those files.
-
40 minutes ago, mihcox said:
Maybe I don't understand what you're suggesting. That is for "(AKA Linking between containers)", which is not my case: I have one local container and one external container on a seedbox. I used the following guide from spaceinvaderone for setup:
Again, apologies if that applies to me too.
OK, I understand what you're trying to do now.
Can you screenshot your sonarr's download client remote path settings and your sonarr docker container's path mapping?
Dashboard & Main tabs not showing any data
in General Support
Posted
Happens to me now. Same behavior in all browsers and on Unraid Connect.
Full disclaimer: my cache drive filled to 100% at about the same time this started. I haven't rebooted yet.