Cessquill

Everything posted by Cessquill

  1. Is there any way around this? I need to move Plex onto its own pool, but still need it backed up. Or might there be a different setup that puts Plex onto its own drive but still in the same pool?
  2. They may well have changed the files again - I'll take a look shortly.
  3. The downloads have changed since I first wrote it - they had numbers in. At the top of part 3 there is the text... It may not be clear, but they are intended to be run as two separate commands (hence the line break). I'll edit the post to clarify that.
  4. Are you sure? All working fine for me
  5. If you mouse over "What Is Cron" at the top of the User Scripts page you get a popup of what this implementation of cron accepts.
  6. With you - makes sense. Thanks for the info.
  7. Have you tried enabling/installing the video player app from within Nextcloud? (or have I misunderstood)
  8. The original issue that this is for was that drives were being disabled by Unraid and had to be rebuilt. Is one of your drives exhibiting this behavior, or are you getting different errors? And if so, what? Also, parity drives can spin down if there is no other activity on the array, no?
  9. This script was used for a while when Plex encoding needed patching from a github script. Whilst it's not exactly what you're after, it might get you going...

     #!/bin/bash
     ############################### DISCLAIMER ################################
     # This script now uses someone else's work!                               #
     # Please visit https://github.com/revr3nd/plex-nvdec/                     #
     # for the author of the new transcode wrapper, and show them your support!#
     # Any issues using this script should be reported at:                     #
     # https://github.com/Xaero252/unraid-plex-nvdec/issues/                   #
     ###########################################################################

     # This is the download location for the raw script off github.
     # If the location changes, change it here.
     plex_nvdec_url="https://raw.githubusercontent.com/revr3nd/plex-nvdec/master/plex-nvdec-patch.sh"
     patch_container_path="/usr/lib/plexmediaserver/plex-nvdec-patch.sh"

     # This should always return the name of the docker container running Plex
     # - assuming a single Plex docker on the system.
     con="$(docker ps --format "{{.Names}}" | grep -i plex)"

     # Verify the Plex container is running
     if [ -z "$con" ]; then
       echo -n "<font color='red'><b>Error: Cannot find Plex container. Make sure it's running and has \"plex\" in the name.</b></font>"
       exit 1
     fi

     # Uncomment and change the variable below if you wish to edit which codecs are decoded:
     #CODECS=("h264" "hevc" "mpeg2video" "mpeg4" "vc1" "vp8" "vp9")

     # Turn the CODECS array into a string of arguments for the wrapper script:
     if [ "$CODECS" ]; then
       codec_arguments=""
       for format in "${CODECS[@]}"; do
         codec_arguments+=" -c ${format}"
       done
     fi

     echo "Applying hardware decode patch..."

     # Grab the latest version of plex-nvdec-patch.sh from github:
     echo -n 'Downloading patch script...'
     wget -qO- --show-progress --progress=bar:force:noscroll "${plex_nvdec_url}" | docker exec -i "$con" /bin/sh -c "cat > ${patch_container_path}"

     # Verify that the download/copy was successful
     if [[ $? -ne 0 ]]; then
       echo -n "<font color='red'><b>Error: wget download failed, non-zero exit code.</b></font>"
       exit 1
     fi

     # Make the patch script executable.
     docker exec -i "$con" chmod +x "${patch_container_path}"

     # Run the script, with arguments for codecs, if present.
     if [ "$codec_arguments" ]; then
       docker exec -i "$con" /bin/sh -c "${patch_container_path}${codec_arguments}"
     else
       docker exec -i "$con" /bin/sh -c "${patch_container_path}"
     fi
  10. If all of the videos from one day were in the same folder, you'd be better off setting the split level of the share to make sure they all stayed together. That way you're only spinning up one disk. More economical.
  11. I'd concentrate on solving the real problem first, but your cron schedule has WED in it. Use a number to signify the day of the week.
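For reference, a numeric crontab entry looks like this (the script path and times are placeholders - adjust to your own schedule; the day-of-week field runs 0-7, where 0 and 7 both mean Sunday and Wednesday is 3):

```
# min hour day-of-month month day-of-week  command
# Run at 02:30 every Wednesday:
30 2 * * 3 /path/to/script.sh
```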
  12. Maybe somebody else could help with this part - I just collated other people's findings. Personally I don't think there's anything else needed. Whilst the instructions look long, it's actually a quick and simple job. I didn't find a way to reliably trigger the symptoms, so I don't know whether there's any way of testing the Pro drive prior to adding it to the array.
  13. Cheers. Some of my IronWolfs are from there - just depends who's cheaper on the day. That's a good price for the Pro - could swap my parity with it. If you've got a spare machine you could use the standalone USB version. Haven't tried Windows for this.
  14. Out of interest, where are you looking at getting the Pro drive from? I'm running low on space again, will need to upgrade a drive soon.
  15. To be fair, it's a system-wide thing. If you had several plugins using Perl (say, Speedtest), and you uninstalled one of them, would you expect Perl to still be there?
  16. The redacted conf file that NPM creates is below. Not sure about the ssl and proxy files - will need to dig around...

     server {
       set $forward_scheme http;
       set $server "192.168.1.10";
       set $port 8088;

       listen 8080;
       listen [::]:8080;

       listen 4443 ssl http2;
       listen [::]:4443;

       server_name my.domain.com;

       # Let's Encrypt SSL
       include conf.d/include/letsencrypt-acme-challenge.conf;
       include conf.d/include/ssl-ciphers.conf;
       ssl_certificate /etc/letsencrypt/live/npm-8/fullchain.pem;
       ssl_certificate_key /etc/letsencrypt/live/npm-8/privkey.pem;

       # Asset Caching
       include conf.d/include/assets.conf;

       # Block Exploits
       include conf.d/include/block-exploits.conf;

       # HSTS (ngx_http_headers_module is required) (31536000 seconds = 1 year)
       add_header Strict-Transport-Security "max-age=31536000;includeSubDomains; preload" always;

       access_log /config/log/proxy_host-18.log proxy;

       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_max_temp_file_size 16384m;
       client_max_body_size 0;

       location / {
         # Force SSL
         include conf.d/include/force-ssl.conf;

         # HSTS (ngx_http_headers_module is required) (31536000 seconds = 1 year)
         add_header Strict-Transport-Security "max-age=31536000;includeSubDomains; preload" always;

         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $http_connection;
         proxy_http_version 1.1;

         # Proxy!
         include conf.d/include/proxy.conf;
       }

       # Custom
       include /data/nginx/custom/server_proxy[.]conf;
     }
  17. Just had a friend check my URL remotely and it took just under a second to load. I'm using Nginx Proxy Manager with my advanced settings at the top of this page. Using the URL locally is quick too. I occasionally use it to deliver/transfer work files and have had no issues.
  18. Have been too busy this week to look into it, but have been OK since reboot and parity sync. Why do you think you've got the same issue?
  19. Interesting, thank you. Will investigate.
  20. Hi - woke up this morning to a "500 Internal Server Error" on the Unraid web UI. I was able to access my file shares and some of my dockers (didn't try all, but Plex was not available). My dockers are backed up Monday nights - not sure if that went awry. Could putty onto the server and get a diagnostics file (attached). Could not run a powerdown though. After 10 minutes tried an orderly shutdown via IPMI, which failed. Performed an unclean shutdown, powered back up and all appears OK and it's parity checking now. Any idea what caused it? unraid1-diagnostics-20210511-0919.zip
  21. Just saw your post in the unbalance thread, and was about to suggest you check here. No need now
  22. Looks good to me. Would "docker start <name of container>" work, rather than docker run...? I was thinking of writing a generic user script which ran every 5 minutes (say). At the top you fill in an array of all the start/stop events you want to happen, each entry having container name, start time and stop time (or array of time ranges). Then when the script runs it decides whether it needs to do anything by comparing the current state of a docker against its schedule. That puts it all into a single script, but prevents the user from overriding the schedule by manual start/stop. Parked it until I could think of better logic.
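The idea above could be sketched roughly like this - a hypothetical script (container names and times are placeholders, and the docker commands are just the standard `docker ps`/`start`/`stop`), intended to be run every 5 minutes from User Scripts:

```shell
#!/bin/bash
# Sketch of a generic docker scheduler. Each entry is "container|start|stop"
# in 24h HH:MM. These entries are examples only - edit to suit.
schedule=(
  "plex|08:00|23:00"
  "handbrake|01:00|06:00"
)

# Succeed (return 0) if $1 (HH:MM) falls inside [start, stop), handling
# windows that cross midnight (e.g. 23:00-06:00). Zero-padded HH:MM strings
# compare correctly with lexicographic ordering.
in_window() {
  local now=$1 start=$2 stop=$3
  if [[ "$start" < "$stop" ]]; then
    [[ ! "$now" < "$start" ]] && [[ "$now" < "$stop" ]]
  else
    [[ ! "$now" < "$start" ]] || [[ "$now" < "$stop" ]]
  fi
}

# Compare each container's current state against its schedule and
# start/stop it as needed.
apply_schedule() {
  local now entry name start stop running
  now=$(date +%H:%M)
  for entry in "${schedule[@]}"; do
    IFS='|' read -r name start stop <<< "$entry"
    running=$(docker ps --format '{{.Names}}' | grep -cx "$name")
    if in_window "$now" "$start" "$stop"; then
      [[ "$running" -eq 0 ]] && docker start "$name"
    else
      [[ "$running" -gt 0 ]] && docker stop "$name"
    fi
  done
}

# Only touch docker when it's actually available.
if command -v docker >/dev/null 2>&1; then
  apply_schedule
fi
```

The midnight-crossing case is the "better logic" bit: a window like 23:00-06:00 is "now >= start OR now < stop", whereas a same-day window is "now >= start AND now < stop".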
  23. Because it was a fault with either the drive or the controller, and the fix was a change to the drive's settings. I understand that other systems have also had problems with this drive/controller combo. Any future upgrade could theoretically break a previously unfound issue with anything. If the manufacturers don't step up then I'd reconsider whether to use their hardware for server work in future, just as I wouldn't set up a pfSense box using Realtek NICs. If it helps, I've had zero issues since reining in the drive's settings.
  24. I only had issues with one model of IronWolf drives (mentioned in the first post). All others were fine. Trouble is, out of 16 IronWolfs, 4 were that model.
  25. I haven't tried the bootable Seagate utility, but I was assuming it would just load to a command prompt with the tools preinstalled. For me it was easier to go via Unraid (plus no downtime). In other news, I haven't had a single issue since applying the above (before I had three issues in about a week).