Posts posted by strike
-
Have you had any luck figuring this out? If not, does your download client(s)' speed fluctuate from a few kB/s to full speed almost every minute?
-
TLDR; Having a lot of Docker issues after updating beyond 6.12.6. Is it safe to downgrade to Unraid 6.12.6, given the Docker vulnerability that was patched after 6.12.6? I have multiple containers exposed to the internet through a reverse proxy.
Thinking about downgrading from 6.12.10 to 6.12.6, as Docker and the Unraid webUI have become almost unusable after updating beyond 6.12.6. It's like Docker itself has become very slow: updating or restarting a container now takes about 15-30 minutes, and the containers seem to run more slowly as well. Now I'm having problems with Emby loading slowly and movies stopping to buffer, and I have never had playback issues before. Emby is on the same version as it was on 6.12.6.
My Audiobookshelf container was literally unusable on their latest version on 6.12.10, and I had to downgrade it to an older version, which works. Everything was super slow and most times wasn't able to load at all. The newest version of Audiobookshelf should run just fine, though slower than version 2.3.3, which I had to downgrade to, and even that is slower than it was before.
And updating or restarting a container takes up to 15-30 min as I said, and while doing that the Unraid webUI is unreachable; it just keeps loading. More often than not, restarting a container comes back with an execution error ("server error"), but it does restart anyway when it finally decides to.
Uploading my diags here from a couple of hours ago, when my Emby and Audiobookshelf containers were unreachable again; they were just loading. Tried to restart Emby, but I gave up after about 30 min and decided to stop the array and start it again, which also took about 15-20 more min. I guess restarting the Docker service would have done the same thing and gotten my containers up again.
And when starting the array, weren't all containers started simultaneously before? Now they start one by one from the top, and that takes about 30 min too before all are started. Before, everything was up in about 5 min after starting the array. Can this have something to do with the appdata backup plugin? I have that set to stop, backup and start each container.
I can also add that I was using a docker directory, but tried to switch back to an image just to see if that would solve the issues; it didn't. I've also changed all the hardware, but that didn't help. (I have two servers, but one isn't in use right now, so I just swapped all the drives to the other server.)
My next step before downgrading is to reboot in safe mode to rule out any plugins. Docker runs in safe mode, right?
Hoping to get some help diagnosing this, and to hear that it's safe to downgrade to 6.12.6 if I can't figure this out.
-
Following this, as I have the same issue and I have seen at least two more users with it. Updating containers sometimes takes more than 15 min, and while updating, the Unraid GUI is unreachable. Restarting a container takes up to 10 min sometimes and usually comes back with an execution error, but it does restart anyway. And several of my containers run very slowly compared to before updating beyond 6.12.6.
I'll create my own thread here when I have more time to pursue this issue. Just following this in case solutions pop up.
Edit: I was previously using a docker folder but changed back to an image just to see if that would solve it; it didn't.
Edit2: Not to hijack this thread, but adding my diags as well if someone can spot something we have in common.
-
I'm seeing this as well since updating to 6.12.8. At least some of my containers are considerably slower/more unresponsive as well. And like you, the Unraid webUI is unresponsive whenever I update a container, and it takes longer than usual.
-
You can change it back by SSHing into your server and running this command:
use_ssl no
-
Just wanted to say thanks and to follow. Just migrated and all went smooth.
-
Tired of adding movies/shows manually in Ombi/Sonarr/Radarr? Here is a script that fetches movies/shows from the top trakt.tv lists ("popular", "recommended", "anticipated", "trending", "boxoffice") and adds them to Ombi. I'm by no means a coding expert; this was all done with the help of ChatGPT.
I know about traktarr, but I wanted something integrated with ombi so I thought why not make it myself and maybe learn some things along the way.
You can set the limit for how many movies/shows it should pull in the .env file (if not set, it defaults to 5). The script checks your already requested items in Ombi and ignores them if they're present in the lists it pulls from Trakt.
Some of the lists stay fairly static, the "popular" list for example, while the "anticipated" list changes more frequently. That's why you can set your limits per list in the .env file. I've not tested the script with anything over 22 as a limit for each list. The script only pulls the first page of each list as of right now, so it should be about 30-ish items max per list. This suits my needs, but it's not a hard task to get it to check more pages if you want to.
I run my script once a week. I thought about making the script push a notification to Unraid saying how many movies/shows were requested, but that's for the next version. I have email notifications set up in Ombi, so I get all the requested items in my inbox anyway, but a total request count would be nice.
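The duplicate check described above can be sketched in a few lines; this mirrors the logic in the full script below, and the IDs used are made-up example values, not real requests:

```python
# Sketch of the duplicate check: an item is skipped when either its TVDb ID
# or its TMDb ID already appears among the existing Ombi requests.
def already_requested(tmdb_id, tvdb_id, request_list):
    return any(
        req.get('tvDbId') == tvdb_id or req.get('theMovieDbId') == tmdb_id
        for req in request_list
    )

# Made-up example IDs, for illustration only
existing = [{'tvDbId': 121361, 'theMovieDbId': 1399}]
print(already_requested(1399, None, existing))   # True: TMDb ID matches
print(already_requested(603, 81189, existing))   # False: neither ID present
```

Matching on either ID is deliberate: movies are requested by TMDb ID, while TV requests may carry a TVDb ID instead.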
You need a few things to run this:
1. If you plan to run the script directly on unraid you need Nerd tools from CA to install python3 and pip.
2. The user script plugin from CA
3. A free trakt.tv account, to get the API key for trakt.
4. A reverse proxy and a domain.
5. Create a .env file with the following content and save it in a folder on the flash drive (or wherever you're running the script from). Maybe the Extras folder if you have it, or simply create a folder called scripts. Place the .env file and the script in the same folder. Remember to replace the placeholders with your info.
TRAKT_ENDPOINT=https://api.trakt.tv
TRAKT_API_KEY=YOUR_TRAKT_API_KEY
OMBI_MOVIE_ENDPOINT=https://YOUR_DOMAIN/api/v1/Request/movie
OMBI_MOVIE_API_KEY=OMBI_API_KEY
OMBI_TV_ENDPOINT=https://YOUR_DOMAIN/api/v2/Requests/tv
OMBI_TV_REQUESTS_ENDPOINT=https://YOUR_DOMAIN/api/v1/Request/tv?status=Available&status=Processing
OMBI_TV_API_KEY=OMBI_API_KEY
OMBI_USER=YOUR_OMBI_USERNAME
MOVIE_POPULAR_LIMIT=
MOVIE_RECOMMENDED_LIMIT=
MOVIE_ANTICIPATED_LIMIT=
MOVIE_TRENDING_LIMIT=
MOVIE_BOXOFFICE_LIMIT=
SHOW_POPULAR_LIMIT=
SHOW_RECOMMENDED_LIMIT=
SHOW_ANTICIPATED_LIMIT=
SHOW_TRENDING_LIMIT=
6. You need the following Python dependencies, installed with pip. (Note: logging, os, datetime and time are part of the Python standard library and don't need to be installed; only requests and python-dotenv do.) If running directly on Unraid, create a user script with the following and set it to run at array start.
#!/bin/bash
pip install requests
pip install python-dotenv
7. And the script itself; name it what_you_want.py
import requests
import logging
import os
import time
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Set up logging with a specific format and file handler
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
log_file = "ombi_requests.log"
if not os.path.exists(log_file):
    open(log_file, 'w').close()
file_handler = logging.FileHandler(log_file)
file_handler.setLevel(logging.DEBUG)
logging.getLogger().addHandler(file_handler)

# Helper function to get limit values from environment variables
def get_limit_from_env(var_name, default):
    try:
        return int(os.getenv(var_name, default))
    except ValueError:
        logging.error(f"Environment variable {var_name} must be an integer. Using default value: {default}")
        return int(default)

# Define the list names to pull from Trakt.tv for movies and shows separately
movie_list_names = ["popular", "recommended", "anticipated", "trending", "boxoffice"]
show_list_names = ["popular", "recommended", "anticipated", "trending"]

# Load limit values from environment variables
movie_limits = {name: get_limit_from_env(f"MOVIE_{name.upper()}_LIMIT", '5') for name in movie_list_names}
show_limits = {name: get_limit_from_env(f"SHOW_{name.upper()}_LIMIT", '5') for name in show_list_names}

requested_movies = []
requested_shows = []

# Create a retry strategy
retry_strategy = Retry(
    total=3,
    status_forcelist=[400, 401, 403, 404, 500, 502, 503, 504],
    allowed_methods=["GET", "POST"],
    backoff_factor=2
)
adapter = HTTPAdapter(max_retries=retry_strategy)
http = requests.Session()
http.mount("https://", adapter)
http.mount("http://", adapter)

# Delay time between requests
delay_time = 15

# Set up the Trakt.tv API endpoint and headers
trakt_endpoint = os.getenv('TRAKT_ENDPOINT')
trakt_headers = {
    "Content-Type": "application/json",
    "trakt-api-key": os.getenv('TRAKT_API_KEY'),
    "trakt-api-version": "2",
}

# Set up the Ombi API endpoint and headers
ombi_movie_endpoint = os.getenv('OMBI_MOVIE_ENDPOINT')
ombi_movie_headers = {
    "Content-Type": "application/json",
    "ApiKey": os.getenv('OMBI_MOVIE_API_KEY'),
    "UserName": os.getenv('OMBI_USER'),
}
ombi_tv_endpoint = os.getenv('OMBI_TV_ENDPOINT')
ombi_tv_requests_endpoint = os.getenv('OMBI_TV_REQUESTS_ENDPOINT')
ombi_tv_headers = {
    "Content-Type": "application/json",
    "ApiKey": os.getenv('OMBI_TV_API_KEY'),
    "UserName": os.getenv('OMBI_USER'),
    "OmbiVersion": "3",
}

# Function to fetch existing requests from Ombi
def fetch_requests(endpoint, headers, request_list):
    try:
        response = http.get(endpoint, headers=headers)
        response.raise_for_status()
        if "json" in response.headers.get("Content-Type", ""):
            requests_data = response.json()
            for request in requests_data:
                ids = {
                    'tvDbId': request.get('tvDbId'),
                    'theMovieDbId': request.get('theMovieDbId')
                }
                request_list.append(ids)
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to get current requests from Ombi: %s", err)
    except ValueError as err:
        logging.error("Failed to decode response as JSON: %s", err)

# Function to request an item from Ombi
def request_item(endpoint, headers, item, item_type, request_list):
    tmdb_id = item[item_type]["ids"].get("tmdb")
    tvdb_id = item[item_type]["ids"].get("tvdb")
    # Check if the item has already been requested by either ID
    already_requested = any(
        (req.get('tvDbId') == tvdb_id or req.get('theMovieDbId') == tmdb_id)
        for req in request_list
    )
    if already_requested:
        logging.debug("%s '%s' already requested", item_type.capitalize(), item[item_type]['title'])
        return
    # Prepare the data payload for the request
    data = {"theMovieDbId": tmdb_id} if item_type == "movie" else {"theMovieDbId": tmdb_id, "requestAll": True}
    # Make the POST request to Ombi
    try:
        response = http.post(endpoint, headers=headers, json=data)
        response.raise_for_status()
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to request %s '%s' from Ombi: %s", item_type, item[item_type]['title'], err)
        return
    # Log the successful request and append the requested ID to the list
    logging.info("Requested %s: %s", item_type, item[item_type]['title'])
    # Append the ID used for requesting to the list to prevent future duplicates
    request_list.append({'tvDbId': tvdb_id, 'theMovieDbId': tmdb_id})
    time.sleep(delay_time)

# Fetch existing movie and TV show requests
fetch_requests(ombi_movie_endpoint, ombi_movie_headers, requested_movies)
fetch_requests(ombi_tv_requests_endpoint, ombi_tv_headers, requested_shows)

# Process movie lists
for list_name in movie_list_names:
    limit = movie_limits[list_name]
    list_endpoint = f"{trakt_endpoint}/movies/{list_name}?extended=full&page=1&limit={limit}"
    try:
        response = http.get(list_endpoint, headers=trakt_headers)
        response.raise_for_status()
        results = response.json()
        for item in results:
            if "movie" in item:
                request_item(ombi_movie_endpoint, ombi_movie_headers, item, "movie", requested_movies)
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to get list of movies from Trakt.tv: %s", err)

# Process show lists
for list_name in show_list_names:
    limit = show_limits[list_name]
    list_endpoint = f"{trakt_endpoint}/shows/{list_name}?extended=full&page=1&limit={limit}"
    try:
        response = http.get(list_endpoint, headers=trakt_headers)
        response.raise_for_status()
        results = response.json()
        for item in results:
            if "show" in item:
                request_item(ombi_tv_endpoint, ombi_tv_headers, item, "show", requested_shows)
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to get list of shows from Trakt.tv: %s", err)
8. Create a user script and run it once a week, or as often as you like. The content of the user script should just be:
python3 /boot/scripts/name_of_your_script.py
Replace the path with your own, obviously.
-
Then I don't really know what your issue is. The symptom you're seeing is usually due to a misconfigured volume mapping or a permissions error, but it looks like that's not it in your case.
-
3 minutes ago, Globe89 said:
download to: /data/incomplete
I haven't changed any default Deluge settings.
Looks OK. Do you see any permissions errors in the log? If you bash into the container and cd to that dir, are you able to write to it?
-
9 minutes ago, Globe89 said:
However, zero torrents are downloading. It is showing seeds and peers are online, but 0 bytes are downloaded. Not sure where to go from here to debug?
What path do you have in the downloads section in the deluge settings?
-
You have to read the 6.12.0 release notes, the answer is in there under "Network Improvements" https://docs.unraid.net/unraid-os/release-notes/6.12.0/
-
4 hours ago, Lons said:
Hi,
I can not wake my backup server with this plugin.
Neither with "etherwake xx:xx:xx:xx:xx:xx" or "etherwake -D -i br0 -b xx:xx:xx:xx:xx:xx" in cmd
With my windows machine and the "Wake on Lan" tool (https://www.gammadyne.com/cmdline.htm#wol) it works, as well with etherwake from my OpenWRT router.
But not through my unraid server.
When I use the scan function in the plugin, it detect no other machines in my network...Any ideas?
Try "etherwake -b xx:xx:xx:xx:xx:xx" Where xx:xx:xx:xx:xx:xx is the MAC address.
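For context, etherwake, the Windows tool and the OpenWRT version all send the same thing on the wire: a Wake-on-LAN "magic packet" of 6 bytes of 0xFF followed by the target MAC repeated 16 times, normally broadcast on UDP port 9. A minimal Python sketch (the MAC shown is a placeholder):

```python
import socket

# Build a WOL magic packet: 6 x 0xFF, then the MAC address repeated 16 times.
def magic_packet(mac):
    mac_bytes = bytes.fromhex(mac.replace(':', ''))
    return b'\xff' * 6 + mac_bytes * 16

# Broadcast the packet on UDP port 9 (requires a network, so not run here).
def send_wol(mac, broadcast='255.255.255.255', port=9):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet('aa:bb:cc:dd:ee:ff')
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

If etherwake from Unraid fails while other senders work, the packet itself is rarely the problem; it's usually which interface the broadcast leaves on, which is why forcing -i br0 (or dropping it) changes behavior.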
-
3 minutes ago, OrdinaryButt said:
What if older version is removed from repo? I have trust issues with "the cloud"
Not very likely, but it could happen, I guess. In that case you can use a docker directory instead of the default docker image, and then back that up too in addition to appdata.
-
41 minutes ago, OrdinaryButt said:
So, to match the topic's subject line, how can I go about backing up a docker container entirely, say before update of container, and if the updates breaks something (often for me with my luck), restore the previous version?
A backup of appdata is the only thing you need. If you need to install the container again from scratch, you can do that from the Previous Apps section on the Apps tab. You can reinstall all of your containers from there in about 2 minutes, in a single click, and you'll be up and running with the same settings as before, provided you have backups of your appdata, since all your container template settings are saved on the flash drive.
And with Docker, rolling back to a previous version is so much easier than on Windows, or a normal app install I should say (you can run Docker on Windows too). You just go to the Docker template, switch to advanced view and put the tag (version) you want in the repository field. You can find all the tags for a container on Docker Hub. No need to uninstall; just put in the tag, hit apply, and you're up in about 10 sec.
-
3 hours ago, Drogon said:
I just changed the existing Sonarr path wording from data to downloads and this fixed it.
Yeah, that's what I meant you should do. You're probably using containers from different maintainers; that's why there are different mappings. But as you figured out, you can change anything you want. The only thing you must never change is the container-side ports of an app. If you need to change ports, use the bridge network and only change the port on the host side.
-
36 minutes ago, ericswpark said:
I was hoping there would be some sort of Docker thing available,
There is: SFTPGo. I use it myself. You can set up as many users as you want, and you don't need root access. A VM just for this is very much overkill. Just remember to set the correct permissions, using the extra parameters IIRC.
-
Your volume mapping should be:
Sonarr - /data /mnt/user/Downloads/
Transmission - /data /mnt/user/Downloads/
Or
Sonarr - /downloads /mnt/user/Downloads/
Transmission - /downloads /mnt/user/Downloads/
See how they match? You can use either of them, or any other path, as long as they both match. Your last change doesn't work because the paths on the container side don't match.
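To illustrate why the container-side paths must match (this sketch is mine, not part of either app): Sonarr takes the path Transmission reports and resolves it through its own volume mapping, so an unmatched prefix resolves to nothing.

```python
# Resolve a container-side path to a host path through a volume mapping.
def to_host_path(container_path, mapping):
    for container_side, host_side in mapping.items():
        if container_path.startswith(container_side + '/'):
            return container_path.replace(container_side, host_side, 1)
    return None  # unknown prefix -> the app effectively sees "file not found"

matching   = {'/data': '/mnt/user/Downloads'}      # same as Transmission
mismatched = {'/downloads': '/mnt/user/Downloads'}  # different container path

reported = '/data/movie.mkv'  # path Transmission hands to Sonarr
print(to_host_path(reported, matching))    # /mnt/user/Downloads/movie.mkv
print(to_host_path(reported, mismatched))  # None
```

Both mappings point at the same host folder, but only the one whose container-side path matches Transmission's can translate the reported path.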
-
On 9/22/2023 at 11:29 AM, Rkpaxam said:
Hi im looking for some advice on my torrent setup as im sure im overcomplicating things.
i have 4 versions of Qbittorrent
1) Series
2) Films
3) music
4) Other
i have it running this was as at the time it was the only way i was able to separate where the files were stored for plex and i didnt want it pulling everything through.
is there a way a single instance can do this if so how?
I don't use qBittorrent, but you need to set up categories. That way you can sort which types of torrents go where; you don't need to run 4 instances.
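Conceptually, categories are just a mapping from a label to a save path inside one client instance; a tiny sketch with hypothetical paths (not real qBittorrent config):

```python
# Hypothetical category -> save-path mapping; one client instance routes
# torrents to different folders, replacing the four separate instances.
categories = {
    'series': '/data/series',
    'films': '/data/films',
    'music': '/data/music',
}

def save_path(category):
    # Uncategorised torrents fall back to a default folder
    return categories.get(category, '/data/other')

print(save_path('films'))    # /data/films
print(save_path('unknown'))  # /data/other
```

Sonarr/Radarr can each be pointed at their own category, so the sorting happens automatically at download time.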
-
2 minutes ago, dlandon said:
There are several ways:
- Set the array to never spin down disks and then set your array disks to spin down on the timer you want.
- Set the array to never spin down disks and manually spin down the array disks
I forgot I could change each disk setting by clicking the disk... Thanks!
-
Maybe this has been answered before, but is there a way to disable disk spin-down for UD disks without going to Unraid's disk settings and disabling it there? I have a bunch of disks I want to run extended SMART tests on, two at a time. And to run a SMART test the disks can't spin down, as you probably know. I don't want to have all my array disks spinning if I don't have to.
-
24 minutes ago, CiscoCoreX said:
Hi, did you upgrade?
Not yet, gonna upgrade during next week.
-
1 hour ago, CiscoCoreX said:
Hi,
Is this update gonna make some problems for me when I'm running like this?
I do have ab firewall that don't like when I use IPVLAN... when you have over 30 containers and all of the came up with same MAC address with different IP, I had problems to access my containers. That's why I use MACVLAN. Never hade any CALL TRACES error before.Almost all my containers are using br0 network.
Correct me if I'm wrong, anyone, but as I understand it you can still use macvlan, but instead of br0 it will now be eth0, and everything should work as before. That's what I read when I skimmed through the release notes, anyway. I'm also using macvlan and will continue to do so if I can.
-
I can't see anything obvious in your last syslog either. I have no clue what the issue might be. But have you tried running in safe mode?
-
2 minutes ago, SinoBreizh said:
On a side note, does Unraid have an issue tracker of some sort? A place where I can follow a specific issue to see if it's resolved, rather than reading every single release note?
You can follow the bug report discussions here: https://forums.unraid.net/bug-reports/ But the release notes are a good read anyway.
Unraid dockers running and updating slowly
in General Support
Posted
I can confirm that my docker image file and folder (when I used that) have always been, and still are, on the cache drive and no other drives. I don't know why, but yes, the system folder was on multiple drives; there were no files there, just the folder. Looks like at some point last year my system share was set to use the array or move files there. I have now deleted the system folder on all drives except cache, to not confuse anyone when I post my next diags.
I changed back to a docker image about two weeks ago and couldn't remember what size I used earlier when I was using an image, so I just put 60GB.
I had a thought yesterday that my issue might be network-related, since the speed on my download clients fluctuates from almost 0 to full speed almost every minute; it has never done that before. So I will try to change the ethernet cable after work today if I can find a new one. Maybe it was a coincidence that all the issues started right after updating from 6.12.6. Since I saw at least two users with the exact same issues, I didn't think the issue could be on my side. Will report back.