
strike

Members
  • Posts: 848
  • Joined
  • Last visited


strike's Achievements

Collaborator (7/14)

Reputation: 115

Community Answers (11)

  1. I don't see any issue with this. The cache drive is not part of the main array and should not be affected by the user share copy bug. Please correct me if I'm wrong, anyone. Also, as the docs say, it has to be the same "path" to the file for the "bug" to happen. /mnt/user/media/tv/file and /mnt/cache/media/download/file are not the same path to the file; they're two different locations. What would happen if you copied a file from /mnt/cache/media/download/file to /mnt/user/media/download/file I don't know, since I've never tried that, so somebody else will have to answer that.
  2. Well, I would say it's that too, but outside the array. You can write directly to the pool drives, bypassing FUSE, if you want. So I would say that makes it a disk share.
  3. Are you referring to the "user share copy bug"? If so, I think that applies only to disk shares and user shares in the array, not to and from pools. I copy/move files between my cache drives and user shares on the array all the time, and I've never had any issues. But yeah, never transfer files from a disk share to a user share in the array, or vice versa. That can lead to data loss.
  4. Nice. I can't think of any downside to doing it that way, if it works.
  5. Try it. I don't think it'll work, since hardlinks only work within the same share. Technically /mnt/user/media/ and /mnt/cache/media/ are the same share, but not the same path, even though they lead to the same files. But try it and check in the terminal whether it's working. I assume you have read the TRaSH guide? He explains the command for checking whether hardlinks are working there.
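For anyone who wants to check from a script rather than the guide's terminal one-liner, here is a minimal illustrative sketch of the same idea: two paths are hardlinks of each other exactly when they share an inode, and the link count goes above 1. The temp directory and the .mkv filenames below are made up and just stand in for paths like /mnt/user/media/... and /mnt/cache/media/... on a real server.

```python
import os
import tempfile

# Create a file and a hardlink to it, then verify: hardlinked paths share
# the same inode and show a link count > 1. (Illustrative stand-in for
# checking files under /mnt/user/ vs /mnt/cache/ on a real server.)
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "original.mkv")
    dst = os.path.join(tmp, "hardlink.mkv")
    with open(src, "w") as f:
        f.write("data")
    os.link(src, dst)  # the same syscall the *arr apps rely on

    s1, s2 = os.stat(src), os.stat(dst)
    print(s1.st_ino == s2.st_ino)  # True -> it's a hardlink, not a copy
    print(s1.st_nlink)             # 2 -> both names point at one inode
```

If the second file had been a copy instead of a hardlink, the inodes would differ and both link counts would stay at 1.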
  6. So I've been using this script since I posted it and it's been working great for shows, but I noticed over the last few months that it was requesting very few movies. So I finally got around to fixing that. It turns out the script was using both the TMDB ID and the TVDB ID for both movies and shows, which caused a mismatch: some movies were treated as already requested when they were not. I can't remember why I did that, but I think there was a reason behind it. I hope I didn't break the show requests now. The script already requested shows earlier this evening, before I updated it, so I guess I'll know next week when it runs whether I broke it or not. I just tested it once now and it did request 2 shows after I changed the show limit just to test, so I think I'm good. And it requested a whole bunch of movies, so I know that part is working. I also did a small update about 6 months ago: I mentioned in my first post adding a notification to Unraid, so I did that. Since I never got any response here I thought no one was interested, so I didn't bother posting it. But I see now that the thread has had some views at least, so maybe someone is using it. So here is the new version for anyone who wants it.
import requests
import logging
import os
import time
import subprocess
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Set up logging with a specific format and file handler
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
log_file = "ombi_requests.log"
if not os.path.exists(log_file):
    open(log_file, 'w').close()
file_handler = logging.FileHandler(log_file)
file_handler.setLevel(logging.DEBUG)
logging.getLogger().addHandler(file_handler)

# Initialize counters for new requests
new_movie_requests_count = 0
new_tv_show_requests_count = 0

# Helper function to get limit values from environment variables
def get_limit_from_env(var_name, default):
    try:
        return int(os.getenv(var_name, default))
    except ValueError:
        logging.error(f"Environment variable {var_name} must be an integer. Using default value: {default}")
        return int(default)

# Define the list names to pull from Trakt.tv for movies and shows separately
movie_list_names = ["popular", "recommended", "anticipated", "trending", "boxoffice"]
show_list_names = ["popular", "recommended", "anticipated", "trending"]

# Load limit values from environment variables
movie_limits = {name: get_limit_from_env(f"MOVIE_{name.upper()}_LIMIT", '5') for name in movie_list_names}
show_limits = {name: get_limit_from_env(f"SHOW_{name.upper()}_LIMIT", '5') for name in show_list_names}

requested_movies = set()
requested_shows = set()

# Create a retry strategy
retry_strategy = Retry(
    total=3,
    status_forcelist=[400, 401, 403, 404, 500, 502, 503, 504],
    allowed_methods=["GET", "POST"],
    backoff_factor=2
)
adapter = HTTPAdapter(max_retries=retry_strategy)
http = requests.Session()
http.mount("https://", adapter)
http.mount("http://", adapter)

# Delay time between requests
delay_time = 15

# Set up the Trakt.tv API endpoint and headers
trakt_endpoint = os.getenv('TRAKT_ENDPOINT')
trakt_headers = {
    "Content-Type": "application/json",
    "trakt-api-key": os.getenv('TRAKT_API_KEY'),
    "trakt-api-version": "2",
}

# Set up the Ombi API endpoints and headers
ombi_movie_endpoint = os.getenv('OMBI_MOVIE_ENDPOINT')
ombi_movie_headers = {
    "Content-Type": "application/json",
    "ApiKey": os.getenv('OMBI_MOVIE_API_KEY'),
    "UserName": os.getenv('OMBI_USER'),
}
ombi_tv_endpoint = os.getenv('OMBI_TV_ENDPOINT')
ombi_tv_requests_endpoint = os.getenv('OMBI_TV_REQUESTS_ENDPOINT')
ombi_tv_headers = {
    "Content-Type": "application/json",
    "ApiKey": os.getenv('OMBI_TV_API_KEY'),
    "UserName": os.getenv('OMBI_USER'),
    "OmbiVersion": "3",
}

# Function to fetch existing movie requests from Ombi
def fetch_movie_requests(endpoint, headers, request_list):
    try:
        response = http.get(endpoint, headers=headers)
        response.raise_for_status()
        if "json" in response.headers.get("Content-Type", ""):
            requests_data = response.json()
            for request in requests_data:
                request_list.add(request.get('theMovieDbId'))
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to get current movie requests from Ombi: %s", err)
    except ValueError as err:
        logging.error("Failed to decode movie response as JSON: %s", err)

# Function to fetch existing TV show requests from Ombi
def fetch_tv_requests(endpoint, headers, request_list):
    try:
        response = http.get(endpoint, headers=headers)
        response.raise_for_status()
        if "json" in response.headers.get("Content-Type", ""):
            requests_data = response.json()
            for request in requests_data:
                tvdb_id = request.get('tvDbId')
                tmdb_id = request.get('theMovieDbId')
                # Log ids for better debugging
                logging.debug(f"Fetched TV show request - TVDB ID: {tvdb_id}, TMDB ID: {tmdb_id}")
                if tvdb_id:
                    request_list.add(tvdb_id)
                if tmdb_id:
                    request_list.add(tmdb_id)
    except requests.exceptions.HTTPError as err:
        logging.error("Failed to get current TV requests from Ombi: %s", err)
    except ValueError as err:
        logging.error("Failed to decode TV response as JSON: %s", err)

# Function to request a movie from Ombi
def request_movie(endpoint, headers, item, request_list):
    global new_movie_requests_count
    tmdb_id = item["movie"]["ids"].get("tmdb")
    # Check if the movie has already been requested by its TMDB ID
    if tmdb_id in request_list:
        logging.debug(f"Movie '{item['movie']['title']}' already requested with TMDB ID {tmdb_id}")
        return
    # Prepare the data payload for the movie request
    data = {"theMovieDbId": tmdb_id}
    # Make the POST request to Ombi
    try:
        response = http.post(endpoint, headers=headers, json=data)
        response.raise_for_status()
    except requests.exceptions.HTTPError as err:
        logging.error(f"Failed to request movie '{item['movie']['title']}' from Ombi: {err}")
        return
    # Log the successful request and append the TMDB ID to the list
    logging.info(f"Requested movie: {item['movie']['title']} (TMDB ID {tmdb_id})")
    request_list.add(tmdb_id)
    new_movie_requests_count += 1
    time.sleep(delay_time)

# Function to request a TV show from Ombi
def request_tv_show(endpoint, headers, item, request_list):
    global new_tv_show_requests_count
    tmdb_id = item["show"]["ids"].get("tmdb")
    tvdb_id = item["show"]["ids"].get("tvdb")
    # Check if the TV show has already been requested by TMDB ID or TVDB ID
    if tvdb_id in request_list or (tmdb_id and tmdb_id in request_list):
        logging.debug(f"Show '{item['show']['title']}' already requested with TVDB ID {tvdb_id} or TMDB ID {tmdb_id}")
        return
    # Prepare the data payload for the TV show request
    data = {"theMovieDbId": tmdb_id, "requestAll": True}
    # Make the POST request to Ombi
    try:
        response = http.post(endpoint, headers=headers, json=data)
        response.raise_for_status()
    except requests.exceptions.HTTPError as err:
        logging.error(f"Failed to request show '{item['show']['title']}' from Ombi: {err}")
        return
    # Log the successful request and append both the IDs to the list
    logging.info(f"Requested show: {item['show']['title']} (TVDB ID {tvdb_id}, TMDB ID {tmdb_id})")
    if tvdb_id:
        request_list.add(tvdb_id)
    if tmdb_id:
        request_list.add(tmdb_id)
    new_tv_show_requests_count += 1
    time.sleep(delay_time)

# Fetch existing movie and TV show requests
fetch_movie_requests(ombi_movie_endpoint, ombi_movie_headers, requested_movies)
fetch_tv_requests(ombi_tv_requests_endpoint, ombi_tv_headers, requested_shows)

# Function to process lists from Trakt.tv for movies and shows
def process_lists(list_names, trakt_type, limit_dict, endpoint, headers, request_list, request_func):
    for list_name in list_names:
        limit = limit_dict.get(list_name, 5)
        list_endpoint = f"{trakt_endpoint}/{trakt_type}/{list_name}?extended=full&page=1&limit={limit}"
        try:
            response = http.get(list_endpoint, headers=trakt_headers)
            response.raise_for_status()
            results = response.json()
            for item in results:
                if trakt_type in item:
                    request_func(endpoint, headers, item, request_list)
        except requests.exceptions.HTTPError as err:
            logging.error(f"Failed to get list of {trakt_type}s from Trakt.tv: {err}")

# Process movie lists
process_lists(movie_list_names, "movies", movie_limits, ombi_movie_endpoint, ombi_movie_headers, requested_movies, request_movie)

# Process show lists
process_lists(show_list_names, "shows", show_limits, ombi_tv_endpoint, ombi_tv_headers, requested_shows, request_tv_show)

# Send Unraid notification of how many movies and shows were requested
def send_unraid_notification(movies_count, shows_count):
    notification_command = [
        '/usr/local/emhttp/webGui/scripts/notify',
        '-s', 'Ombi requests',
        '-i', 'normal',
        '-d', f'number of movies: {movies_count} and tv shows: {shows_count} requested'
    ]
    subprocess.run(notification_command, check=True)

# Call the function with the counters at the very end
send_unraid_notification(new_movie_requests_count, new_tv_show_requests_count)

Key Changes:
  1. Separated fetch_movie_requests and fetch_tv_requests: separate functions now handle fetching movie and TV show requests, adding only theMovieDbId for movies and both tvDbId and theMovieDbId for shows.
  2. Separated request_movie and request_tv_show functions: separate functions handle requesting movies and TV shows, ensuring only the relevant IDs are used for each.
  3. Added code to send a notification to Unraid with how many movies/shows were requested.

There are still a few things I'd like to fix, but I don't think it's the script's fault. For some reason a few shows get requested which are already requested and available. I think the problem is that my Ombi database is missing the tvDbId for those shows. I've confirmed that some of the show IDs are 0 (I just don't know which ones) when fetching the list from Ombi. I will add code to print the show names to the log as well, not just the IDs, so I can confirm my suspicion. I should really learn to use git and GitHub so I can post the code there.
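The script loads all its settings from a .env file via python-dotenv, but the post doesn't include one. Here is a hedged example: the variable names match what the script passes to os.getenv(), but every value is a placeholder. The Trakt base URL shown is Trakt's real public API host; the Ombi URLs, IP, port, keys, and username are made up and must be replaced with the paths and credentials from your own Ombi install.

```shell
# Trakt.tv API (key from your Trakt application settings)
TRAKT_ENDPOINT=https://api.trakt.tv
TRAKT_API_KEY=your-trakt-client-id

# Ombi endpoints and keys -- example URLs only, adjust to your install
OMBI_MOVIE_ENDPOINT=http://192.168.1.10:3579/api/v1/Request/movie
OMBI_MOVIE_API_KEY=your-ombi-api-key
OMBI_TV_ENDPOINT=http://192.168.1.10:3579/api/v1/Request/tv
OMBI_TV_REQUESTS_ENDPOINT=http://192.168.1.10:3579/api/v1/Request/tv
OMBI_TV_API_KEY=your-ombi-api-key
OMBI_USER=your-ombi-username

# Optional per-list request limits (the script defaults each to 5)
MOVIE_POPULAR_LIMIT=5
MOVIE_BOXOFFICE_LIMIT=3
SHOW_TRENDING_LIMIT=5
```

Any MOVIE_<LIST>_LIMIT or SHOW_<LIST>_LIMIT you leave out simply falls back to the default of 5 via get_limit_from_env.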
  7. If you use MC you shouldn't have any issues copying the files. I didn't look at your diags, but the drive is probably good, or I think @JorgeB would have suggested replacing it. File system corruption can happen sometimes, and several things can cause it. One of them is bad RAM. A power outage could also cause corruption, so if you don't have a UPS, get one. If you encounter this issue again I would suggest running memtest. But for now I would just get the data off it and re-format.
  8. Just use the terminal and Midnight Commander (a file manager in the terminal): type mc in the terminal and hit enter. Then copy all your files to the array. All your shares and drives (if disk shares are enabled) will show up under /mnt/user/, /mnt/diskX, or the name of your pool. Edit: You can also use the built-in file manager in the Unraid webui.
  9. Post your docker run command or a screenshot of your delugevpn docker template settings, and also a screenshot of the download section in the Deluge webui.
  10. Do you have the Fix Common Problems plugin installed? If not, try installing it and check if it pops up.
  11. You don't have this one? It might be part of the Fix Common Problems plugin, I can't remember. I don't use rclone myself; I just picked up that part about adding the uid/gid somewhere on the forum when somebody had permission issues. But yeah, you put that in your rclone mounting script.
  12. I can understand that would be a hard sell, yes. It's been some years since I used ZeroTier, but Tailscale's subnet router and MagicDNS features are SO nice. With MagicDNS you can use hostnames instead of IP addresses. And with a subnet router set up, you can connect to devices which don't even have Tailscale installed, like printers and other devices it's hard to install clients on. I can't remember if ZeroTier has those features, but that's what made me install Tailscale.
  13. What IP are you trying to reach Unraid on, the ZeroTier IP or the Unraid IP? Not a solution to your issue, but set up Tailscale instead; it's very easy and just works. If you decide to try that, I recommend the plugin rather than the docker container, as it can run without the array started as well. So you can still connect to your server even if the array and the docker service are down.
  14. Found the post: https://forums.unraid.net/topic/53807-support-binhex-radarr/?do=findComment&comment=1134701 Run the Docker Safe New Permissions tool on the shares you mentioned first, then add the lines I mention in that post to your script and try again.
  15. Yes, you probably have the wrong permissions set up in rclone. I'm on mobile right now, but if you search for rclone in my posts you will find a post that can help you. I can find it for you later tonight when I'm home from work.