Leaderboard

Popular Content

Showing content with the highest reputation on 03/19/19 in all areas

  1. There are no problems with a 9211 (or an H200) flashed to IT mode.
    2 points
  2. How do I replace/upgrade my single cache device? (unRAID v6.2 and above only)

     This procedure assumes that there are at least some Docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.

     - Stop all running Dockers/VMs.
     - Settings -> VM Manager: disable VMs and click apply.
     - Settings -> Docker: disable Docker and click apply.
     - For v6.11.5 or older: click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer".
     - For v6.12.0 or newer: click on all shares that are using the pool you want to empty and change them to have the pool as primary storage, the array as secondary storage, and the mover action set to move from pool to array.
     - Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page.
     - When the mover finishes, check that your cache is empty (any files on the cache root will not be moved, as they are not part of any share). A shell-level sanity check for this step is sketched below this entry.
     - Stop the array, replace the cache device, assign it, start the array, and format the new cache device (if needed); check that it's using the filesystem you want.
     - For v6.11.5 or older: click on Shares and change to "Prefer" all shares that you want moved back to cache.
     - For v6.12.0 or newer: click on Shares and change the mover action to move from array to pool for all shares that you want moved back to cache.
     - On the Main page click "Move Now".
     - When the mover finishes, re-enable Docker and VMs.
    2 points
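     A minimal sketch of that "cache is empty" check and the mover invocation from a shell; the /mnt/cache mount point and the /usr/local/sbin/mover path are assumptions based on a stock single-pool setup and may differ on your system:

         # Anything still at the top of the pool is not part of a share and will not be moved.
         ls -la /mnt/cache

         # Confirm the array has room before emptying the pool onto it.
         df -h /mnt/user

         # Roughly equivalent to clicking "Move Now" on the Main page (path assumed).
         /usr/local/sbin/mover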
  3. There are several things you need to check in your Unraid setup to help prevent the dreaded unclean shutdown, starting with a couple of timers that you need to adjust for your specific needs.

     The first timer is in Settings -> VM Manager -> VM Shutdown time-out; it needs to be set to a high enough value to allow your VMs time to completely shut down (switch to the Advanced View to see it). Windows 10 VMs will sometimes have an update that requires a shutdown to complete, and these can take quite a while; the default setting of 60 seconds in the VM Manager is not long enough. If the VM Manager timer is exceeded on a shutdown, your VMs will be forced to shut down, which is just like pulling the plug on a PC. I recommend setting this value to 300 seconds (5 minutes) to ensure your Windows 10 VMs have time to completely shut down.

     The other timer used for shutdowns is Settings -> Disk Settings -> Shutdown time-out. This is the overall shutdown timer, and when it is exceeded, an unclean shutdown can occur. This timer has to be longer than the VM shutdown timer; I recommend setting it to 420 seconds (7 minutes) to give the system time to completely shut down all VMs, Dockers, and plugins. These timer settings do not extend the normal overall shutdown time; they just allow Unraid the time needed to do a graceful shutdown and prevent an unclean one.

     One of the most common reasons for an unclean shutdown is having a terminal session open. Unraid will not force open sessions to shut down; instead it waits for them to be terminated while the shutdown timer is running, and after the overall shutdown timer runs out, the server is forced to shut down. If you have the Tips and Tweaks plugin installed, you can specify that any bash or ssh sessions be terminated so Unraid can shut down gracefully rather than hang waiting for them (which they won't do without human intervention). A small console check for lingering sessions is sketched below this entry.

     If your server seems hung and nothing responds, try a quick press of the power button; this initiates a shutdown that will attempt a graceful stop of the server. If you have to hold the power button to do a hard power-off, you will get an unclean shutdown.

     If an unclean shutdown does occur because the overall "Shutdown time-out" was exceeded, Unraid will attempt to write diagnostics to the /log/ folder on the flash drive. When you ask for help with an unclean shutdown, post the /log/diagnostics.zip file; there is information in the log that shows why the unclean shutdown occurred.
    1 point
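     That lingering-session check, as a minimal, hypothetical sketch to run from the console before a shutdown; nothing here is Unraid-specific:

         # List open console/SSH logins that a graceful shutdown would wait on.
         who

         # Show lingering interactive shells (the bracket trick keeps grep out of its own results).
         ps -ef | grep -E '[b]ash|[s]shd'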
  4. Ordered a shot glass today (March 19, 2019); those always come in handy.
    1 point
  5. Love the coffee mug. Had to buy one. Been using unraid for about 3-4 years and couldn't be happier. Ordered today. It's about time you guys had some swag!
    1 point
  6. Got myself a coffee mug for the growing collection of beverage receptacles today. :)
    1 point
  7. That was the beauty of this card: it is unRAID-ready, flashed to IT mode. You do have to buy a forward breakout cable; here is the one I purchased: https://www.amazon.com/gp/product/B01KFEVQ4E/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&psc=1

     And yes, this card handles 8 drives: it has 2 Mini-SAS (SFF-8087) ports, each handling 4 drives, and no config is needed with the parts I have shown here.

     One word of note: if you are planning on SSDs for a cache pool, you will need a PCIe daughter board, as the drives will not TRIM on the HBA (see the TRIM check sketched below this entry). There are 2 discrete SATA ports on my mobo, so if yours is the same you should also have them; 1 is for the CD drive and the other is unpopulated, at least on mine. I have not done a lot of research, but I do think I read that they are SATA 2, so I don't know how performance would be if you put SSDs on them.

     You should also have 4 PCIe x8 slots (wired x8 and x4) plus the dedicated slot for the PERC6i; I'm not sure of the config of those, but you should have plenty of options to make something work. There is also an option card, on riser 2 I think, for 1 PCIe x16 slot wired x16 in replacement of 2 x8 slots. Lots of options for this Dell R710 to make a nice little server / VM machine.
    1 point
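     That TRIM limitation can be verified per device with standard Linux tools; a minimal sketch, assuming your SSD shows up as /dev/sdX (substitute the real device node):

         # Non-zero DISC-GRAN/DISC-MAX columns mean the kernel can issue discard (TRIM) to the device.
         lsblk --discard /dev/sdX

         # Drive-level capability report; look for "Data Set Management TRIM supported".
         hdparm -I /dev/sdX | grep -i trim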
  8. Yep... 😞 Downgrade rutorrent. Use repository: linuxserver/rutorrent:1fcd6618-ls15 (a pull example is sketched below this entry).
    1 point
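     On Unraid you would normally paste that tag into the container's Repository field; as a sketch of the same downgrade from a shell, assuming the Docker CLI is available:

         # Pull the known-good older image by its explicit tag instead of :latest.
         docker pull linuxserver/rutorrent:1fcd6618-ls15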
  9. Sorry, after reading my response I realize I went right into the downside, but that was for the PERC6i. Here is the HBA I ordered; I have 2 of them and they work great right outta the box: https://www.ebay.com/itm/Dell-H310-6Gbps-SAS-HBA-w-LSI-9211-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/162834659601?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649
    1 point
  10. Yes.
    1 point
  11. No, you don't need to copy them out of that folder. You just need to edit the file and put your website name where it is in the template, e.g. plex.thisismyrandomexamplewebpage.edu. Make sure you enable viewing file extensions, because the templates are all inactive by default. To activate the file, rename it from subdomain.radarr.config.sample to subdomain.radarr.config (a one-liner for this is sketched below this entry).
    1 point
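      That rename, as a minimal sketch; the proxy-confs path is an assumption based on a typical letsencrypt container appdata layout, and the file names follow the post:

          # Activate the template by dropping the .sample suffix (path assumed; adjust to your appdata share).
          cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
          mv subdomain.radarr.config.sample subdomain.radarr.config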
  12. TL;DR: I've been a Kodi (local, synchronized databases) and Plex (remote) user for my media for almost a decade, until recently, when most of my devices became Fire TV Sticks (4K) because Netflix gets heavy usage. Plex has been used for local media on the sticks for some reason. I finally loaded Kodi onto a stick and then read about Emby maintaining the library rather than using the Kodi-Headless docker or running a Kodi VM again. I didn't know anything about Emby until reading about it today. Curious as to why people use Emby, what it has replaced, and how it integrates into their media setup.

      ******************************************************************

      I have always seen Emby in the app section, but never looked into its uses until very recently. I was a long-time Kodi user for close to a decade. My setup until recently was all Kodi devices throughout the house with a synchronized database. One of the clients was a VM which was on 24/7, so library updates were pushed to this device. I had issues in the past with the Kodi Headless Docker not updating movies because of a scraping issue, so directing updates to the VM was a fine solution.

      In the past year, with the increase in Netflix usage and the accumulation of a few Fire TV (4K) sticks, 95% of media is being consumed through Netflix/Plex. I cut the cord with FiOS TV completely and sold my HDHR; no longer recording through MythTV was another reason my Kodi usage was no longer a high priority.

      With that said, I had some time today, so I looked into sideloading Kodi onto the Fire Stick. I was toying around with the headless Kodi docker again since I no longer use the Kodi VM, and read a post about how the Linuxserver.io guys were mostly on Emby these days as it related to Kodi. I wasn't sure what that meant, so I dug a bit further, and it appears people use Emby as the backend server that gets updated with media. This data can be pulled with the Emby/Kodi add-on and run natively on Kodi clients. Seems useful.

      So now I'm juggling between an Emby backend where I'm only using it to monitor my library, Kodi front ends to display my media library, and Plex for people outside of the household to connect to the library. Mostly curious here, but what are the primary uses people have for Emby as it relates to Kodi and Plex? I know this is certainly a use-case-dependent thing, but have there been any prevailing opinions on the most efficient ways of handling media libraries serving many clients locally and remotely?
    1 point
  13. Let me know when you get the cache drive and we can go through the steps needed to get your dockers running on cache.
    1 point
  14. Also know that the encode process is heavily impacted by read and write performance. I'm not sure how nvdec handles its buffer queueing, but if the buffer isn't filled with enough data, you will notice the video will stop playing. This is read-limited performance, and it would be heavily impacted by a parity check, especially for high-bitrate media. The nvenc side of the house is limited by how much data is being fed into it by the decoder, and by the write speed of the destination media. If you are transcoding to tmpfs (RAM), this will almost never be your bottleneck, as the encoded media is typically much smaller and lower bitrate than the source media (a tmpfs mount sketch follows below this entry).
    1 point
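      To illustrate the tmpfs point, a minimal, hypothetical docker run fragment that puts the transcode scratch directory on RAM; the container name, host paths, and size are placeholders, and you would still point the transcoder at /transcode in the app's own settings:

          # RAM-backed transcode scratch space: encode writes never touch the parity array.
          docker run -d --name=plex \
            --mount type=tmpfs,destination=/transcode,tmpfs-size=8g \
            -v /mnt/user/appdata/plex:/config \
            -v /mnt/user/Movies:/data:ro \
            plexinc/pms-docker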
  15. Parity check must read all disks in the parity array. If any of your docker files (docker.img, appdata) are on the array then your dockers will be impacted. Typically you want your system, domains, and appdata shares to be cache-prefer, and to have all of their files on cache. If any of these files are on the array then docker performance will be impacted due to the slower writes to the parity array, and your dockers will keep parity and array disk(s) spinning. And of course, parity check will be competing for the same disks. You can easily see which disks any user share is using by clicking Compute... for the share on the User Shares page.
    1 point
  16. Well, I decided to pull the trigger on this build. My current unraid box won't even boot anymore; couldn't wait any longer. I've done enough research that I'm confident this should work out well for my needs. Going to be a few weeks before I have all the parts. Will report back here when it's all up and running.
    1 point
  17. Below is the script I am using so you can get an idea how it works for me. I do not use an always-on Raspberry Pi in this scenario, but other users have done so on the remote side of an over-the-Internet VPN connection. I would not be the best person for giving you a step-by-step for something I have not done. Again, my servers are both on the local LAN. The source server is on 24x7, and the destination server has IPMI and is powered on and off as needed for backups:

      #!/bin/bash
      #description=This script backs up shares on MediaNAS to BackupNAS
      #arrayStarted=true

      echo "Starting Sync to BackupNAS"
      echo "Starting Sync $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

      # Power on BackupNAS
      ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxxx chassis power on

      # Wait for 3 minutes
      echo "Waiting for BackupNAS to power up..."
      sleep 3m
      echo "Host is up"
      sleep 10s

      # Set up email header
      echo To: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo From: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo Subject: MediaNAS to BackupNAS rsync summary >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo >> /boot/logs/cronlogs/BackupNAS_Summary.log

      # Back up Pictures share
      echo "Copying new files to Pictures share ===== $(date)"
      echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Pictures.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Pictures/ [email protected]:/mnt/user/Pictures/ >> /boot/logs/cronlogs/BackupNAS_Pictures.log

      # Back up Videos share
      echo "Copying new files to Videos share ===== $(date)"
      echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Videos.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Videos/ [email protected]:/mnt/user/Videos/ >> /boot/logs/cronlogs/BackupNAS_Videos.log

      # Back up Movies share
      echo "Copying new files to Movies share ===== $(date)"
      echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Movies.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Movies/ [email protected]:/mnt/user/Movies/ >> /boot/logs/cronlogs/BackupNAS_Movies.log

      # Back up TVShows share
      echo "Copying new files to TVShows share ===== $(date)"
      echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_TVShows.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/TVShows/ [email protected]:/mnt/user/TVShows/ >> /boot/logs/cronlogs/BackupNAS_TVShows.log

      # Back up OtherVids share
      echo "Copying new files to OtherVids share ===== $(date)"
      echo "Copying new files to OtherVids share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to OtherVids share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_OtherVids.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/OtherVids/ [email protected]:/mnt/user/OtherVids/ >> /boot/logs/cronlogs/BackupNAS_OtherVids.log

      # Back up Documents share
      echo "Copying new files to Documents share ===== $(date)"
      echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
      echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Documents.log
      rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Documents/ [email protected]:/mnt/user/Documents/ >> /boot/logs/cronlogs/BackupNAS_Documents.log

      echo "moving to end ===== $(date)"
      echo "moving to end ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

      # Add in the summaries
      cd /boot/logs/cronlogs/
      echo ===== > Pictures.log
      echo ===== > Videos.log
      echo ===== > Movies.log
      echo ===== > TVShows.log
      echo ===== > OtherVids.log
      echo ===== > Documents.log
      echo Pictures >> Pictures.log
      echo Videos >> Videos.log
      echo Movies >> Movies.log
      echo TVShows >> TVShows.log
      echo OtherVids >> OtherVids.log
      echo Documents >> Documents.log

      # Trim each per-share log down to the rsync summary section (everything from "Number of files:" on)
      tac BackupNAS_Pictures.log | sed '/^Number of files: /q' | tac >> Pictures.log
      tac BackupNAS_Videos.log | sed '/^Number of files: /q' | tac >> Videos.log
      tac BackupNAS_Movies.log | sed '/^Number of files: /q' | tac >> Movies.log
      tac BackupNAS_TVShows.log | sed '/^Number of files: /q' | tac >> TVShows.log
      tac BackupNAS_OtherVids.log | sed '/^Number of files: /q' | tac >> OtherVids.log
      tac BackupNAS_Documents.log | sed '/^Number of files: /q' | tac >> Documents.log

      # Now add all the other logs to the end of this email summary
      cat BackupNAS_Summary.log Pictures.log Videos.log Movies.log TVShows.log OtherVids.log Documents.log > allshares.log
      zip BackupNAS BackupNAS_*.log

      # Send email summary of results
      ssmtp [email protected] < /boot/logs/cronlogs/allshares.log
      cd /boot/logs/cronlogs
      mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
      rm *.log

      # Power off BackupNAS gracefully
      sleep 30s
      ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxx chassis power soft
    1 point
  18. server {
          listen 80;
          server_name calibre.<yourdomain>.com;
          return 301 https://$server_name$request_uri;
      }

      server {
          listen 443 ssl;
          listen [::]:443 ssl;

          server_name calibre.*;

          include /config/nginx/ssl.conf;

          client_max_body_size 0;

          # enable for ldap auth, fill in ldap details in ldap.conf
          #include /config/nginx/ldap.conf;

          location / {
              # enable the next two lines for http auth
              #auth_basic "Restricted";
              #auth_basic_user_file /config/nginx/.htpasswd;

              # enable the next two lines for ldap auth
              #auth_request /auth;
              #error_page 401 =200 /login;

              include /config/nginx/proxy.conf;
              resolver 127.0.0.11 valid=30s;
              set $upstream_calibre calibre-web;
              proxy_pass http://$upstream_calibre:8083;
          }
      }

      The first part just reroutes from port 80 to port 443, so if you type http://calibre.<yourdomain>.com it changes it to https://calibre.<yourdomain>.com (a quick curl check for this is sketched below this entry).
    1 point
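      That redirect can be confirmed from any machine with curl; a minimal sketch, substituting your real domain for the placeholder:

          # Expect a 301 response whose Location header starts with https://
          curl -I http://calibre.<yourdomain>.com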
  19. Thanks a lot for your posts! I got it working! For anyone else looking for a way to do this in the future, here you go:

      I am assuming you have a domain that you want to serve as an access point to a container on your server. Let's assume your domain is www.dexter.com and you want to access books.dexter.com. Your CNAMEs should be:

      Host Record    Points to               TTL
      books          yourname.duckdns.org    14400

      Feel free to add as many of these CNAMEs as you'd like. I am using DuckDNS because it has a container that I can run on my server. What I think it does is: when my IP changes, my unRAID server sends an update request to DuckDNS to make sure my URL (i.e. yourname.duckdns.org) is still pointing to my IP. If you don't have something similar with your DNS service, I think you will need to manually update it every time your ISP changes your IP (maybe someone can correct me here). A quick dig check for the CNAME is sketched after this post.

      Next, you go to letsencrypt's docker and you put this:

      Domain Name: dexter.com (don't put your dns here)
      Subdomain(s): books (if you ever want to add future subdomains, remember to add them here)
      Only Subdomains: true
      Validation: http (people of the future, refer to the documentation to see if this is still the correct way to do this)

      Now, navigate to appdata\letsencrypt\nginx\site-confs\ and open default. These are the configs that I am using, and they seem to be working perfectly. Obviously change dexter.com to your domain, and change the local IP and ports to whatever you are accessing. This was adapted from https://technicalramblings.com/blog/how-to-setup-organizr-with-letsencrypt-on-unraid/ so if you have anything more complicated you wish to do, go there; there are templates.

      default:

      ################################################################################################################
      #////////////////////////////////////////////////SERVER BLOCK\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
      ################################################################################################################

      # REDIRECT HTTP TRAFFIC TO https://
      server {
          listen 80;
          server_name dexter.com .dexter.com;
          return 301 https://$host$request_uri;
      }

      ################################################################################################################
      #////////////////////////////////////////////////MAIN SERVER BLOCK\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
      ################################################################################################################

      # MAIN SERVER BLOCK
      server {
          listen 443 ssl http2 default_server;
          server_name dexter.com;

          ## Certificates from LE container placement
          ssl_certificate /config/keys/letsencrypt/fullchain.pem;
          ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

          ## Strong Security recommended settings per cipherli.st
          ssl_dhparam /config/nginx/dhparams.pem;
          ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
          ssl_prefer_server_ciphers on;

          # Custom error pages
          error_page 400 401 402 403 404 405 408 500 502 503 504 $scheme://$server_name/error.php?error=$status;
          error_log /config/log/nginx/error.log;
      }

      ################################################################################################################
      #////////////////////////////////////////////////SUBDOMAINS\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
      ################################################################################################################

      # CalibreWeb SERVER, accessed by books.dexter.com
      server {
          listen 443 ssl http2;
          server_name books books.dexter.com;

          location /error/ {
              alias /www/errorpages/;
              internal;
          }

          location / {
              proxy_bind $server_addr;
              proxy_pass http://LOCAL-IP:PORT;
              proxy_set_header Host $http_host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Scheme $scheme;
          }
      }

      # Copy + paste the same "CalibreWeb SERVER" block if you want to add another domain such as plex. It may require a different setup though.

      Thank you everyone for your help!
    1 point
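      A minimal sanity check that the CNAME from the post above actually resolves before requesting certificates; dig ships in most distros' dnsutils/bind-tools package, and the names follow the post's example:

          # Should print the DuckDNS CNAME target followed by your current WAN IP.
          dig +short books.dexter.com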