Leaderboard

Popular Content

Showing content with the highest reputation on 03/05/19 in all areas

  1. If someone does want to help: in essence the problem is that the runc version included in Unraid v6.6.7 and v6.7.0rc5 doesn't currently have an Nvidia patch available for it, so one approach we've thought of is replacing the runc shipped in Unraid completely with a separate version that does have the patch available, although we're not sure whether this will cause problems. Bassrock and I have been working on it, but he's very busy at the moment and I'm on a family holiday with my wife and daughter, so I've been limited to working on this after everyone else has gone to bed. I'm not willing to spend any time in the day working on it, as I see little of my daughter/wife when we're working and at home; I'm cherishing every minute of the time I'm spending with them on holiday, and for me that is way more important than anything Unraid or Nvidia related. Sent from my Mi A1 using Tapatalk
    3 points
  2. OK, thanks for the above. It looks like you aren't low on space, so your issue must be down to one or both of the following: 1. corruption of the cache drive, 2. corruption of the docker img (this contains all docker images and containers). It's most probably docker image corruption: you will need to stop the docker service, delete your docker image, re-create it, then restore your containers. The steps to do this (stolen from a previous post) are sketched below:-
    2 points
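     (A minimal shell sketch of those steps, assuming the stock Unraid rc.docker service script and the common docker.img location - both are assumptions for your setup, and the same can be done from Settings > Docker in the GUI:)

       /etc/rc.d/rc.docker stop                # stop the docker service
       rm /mnt/user/system/docker/docker.img   # delete the corrupt image (path varies per setup)
       /etc/rc.d/rc.docker start               # starting the service re-creates an empty docker.img
       # then restore your containers from Apps > Previous Apps (templates are kept on the flash drive)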
  3. I have the same scenario, here's how I handle it:
     - pfSense VM set to autostart
     - user scripts at array start kicks off the pfSense monitor script
     - the pfSense monitor script waits until a successful ping of the internal gateway, signifying pfSense is online, then starts additional VMs using virsh start vmname
     - unraid docker delays stagger starts, with a docker that doesn't need internet given a 2 minute delay, which is normally plenty of time for pfSense to get rolling; other dockers are set for many seconds of delay, in a logical order
     If I couldn't rely on a quick connect to the internet, I'd add the dockers to my pfSense monitor script and remove them from auto start. As it is right now, I've had no issues with things firing off correctly. Then again, I hardly ever restart my server; it's generally up for several months at a time without a restart.
    2 points
  4. I'm on holiday with my family. I have tried to compile it several times but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, and I will outline it again: we're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign our full-time jobs, things happen at our place and our pace only. We have a discord channel that people can join, and if they want to get involved then just ask, but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
    2 points
  5. Has anybody run into this issue with the WD White Label drives? I thought I'd share my ordeal just in case anyone else stumbles upon this problem. Had a helluva time finding this issue online for some reason. I recently found the need for more storage space and, after researching a bit, chose the 8TB EasyStore from Best Buy for shucking. I drove on over, it was $200, great stuff. Came home, followed a visual guide to take it apart, also easy and quick! I noticed it had a white label, which seemed odd, but whatever, this was the drive, right? Made in Thailand with 256MB cache, that's the one!

     So I removed an older drive from my computer and stuck this one in, only to find it didn't spin up. Crap. I googled around for a while only to find that this model is the WD80EMAZ rather than the expected WD80EFZX. A lot of forums had people saying they were the same drive, same specs, everything... anyway, I thought maybe it was the PSU I was using (I had to RMA my newer one, so I was using a much older one), so I reassembled the WD and used DC power. Worked immediately. I noticed a couple of people on reddit and slickdeals complaining that these drives wouldn't power on with SATA and only worked externally, but tons of other people saying "no problems here, I have 3/6/51824 of these in my tower since [the big bang] and they're fine." I really didn't want to reassemble it and go back to Best Buy to exchange it for... what, maybe another of the same? Maybe a real Red? The box doesn't list which internal drive you're getting.

     So I kept searching and found this PDF from HGST (a company WD owns) detailing a "power disable feature" found on some of their drives. This feature uses the 3.3V pin to send a hard reset signal to the drive, a pin which I believe was never utilized by hard drives before this. Anyway, as the PDF states, my PSU was basically forcing the drive into a constant "reset" state, preventing it from spinning up at all. The external board it came with must just bypass this. My "solution" was to pull the 3.3V (orange) wire from that particular SATA connector and cap/tape it off (shown in linked pics). Look at that, immediately spun up and working. I've seen other people say to use a molex-to-SATA adapter, as that also ignores the 3.3V line, but I've also seen a lot of posts about these melting, so do with that what you will. Unfortunately, I can't figure out how to make it work in my NORCO SS-500's. Pics: https://imgur.com/a/PHSYH
    1 point
  6. Hi all, it would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use case, it's easier to configure things on a per-share basis or on a per-user basis. Would be nice to have the option; see wonderfully artistic rendering below:
    1 point
  7. I'd like to see rsync GUI-based; trying to figure out the command lines is a pain. It should also show the completion status of the rsync, as I don't see that in the command line... stating the overall progress of the rsync as it syncs. It would be nice to have it GUI-based like FreeNAS... check a box for this, etc... it's just a thought. (A CLI stopgap for progress is sketched below.)
    1 point
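     (Until a GUI exists, rsync itself can report overall progress on the command line; a minimal sketch with made-up paths, rsync 3.1+ only:)

       # --info=progress2 prints a single overall progress line instead of per-file output
       rsync -a --info=progress2 /mnt/user/source/ /mnt/user/destination/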
  8. I wouldn't trust an overclocked server with my files regardless of whether anything else worked or not.
    1 point
  9. I understand what you are saying, but that is why I included the link to the CPU benchmarks comparing the CPU you have to the min-spec CPU... While you might think that the processor clock has a lot to do with it, it really doesn't... For instance, AMD has always had a much lower clock speed than the Intel chips, and yet somehow usually kept up in real-world benchmarks... That is why I included this link: https://cpu.userbenchmark.com/Compare/Intel-Xeon-E5-2660-v2-vs-Intel-Core-i5-2400/m13068vs803 It shows that that CPU has about 2/3 of the single-core performance of the CPU that ARK requires as min spec, and as a whole about 50% of the minimum spec... It really is a limiting factor here... I get it, I actually used a Core 2 Quad Q9550 with 4 cores @ 2.83 GHz for WAY longer than I should have because I was still able to get it to do what I needed it to (even had my GTX 1070 plugged into it), and went straight from that to an i7-7700K... Bottom line: you are trying to use a processor that is somewhere between 2/3 and 1/2 of the minimum required processor for that game, and you are not running just that game on it, you are running UnRaid + KVM/QEMU + a storage server + a Windows VM + Ark... I really do believe that that is a large part of why the VMs are unstable; they are not meant to run like that... I am not going to stop you, and more power to you if this sort of thing is what floats your boat, but this will likely boil down to making that CPU perform like a much better CPU with extreme overclocking or something, and UnRaid is not really an ideal environment for trying to run a stable system while also pushing the envelope just a little too far...
    1 point
  10. 1 point
  11. Nope, it's done with array started. The key is advanced view, and understanding that the delay is seconds to wait AFTER starting that specific docker. I actually want delay BEFORE starting some dockers, so I compensate by putting a docker that doesn't need network to autostart at the top of the list, and set a significant delay after that one. Start order is literally top to bottom of the items with autostart set to yes, so drag the dockers into the order you need. I had some issues with delay times not saving, so be sure to go back to the page and check your work to make sure everything is as you expected. When you add a docker it will likely screw up your order and delays, so be sure to verify periodically.
    1 point
  12. I thought it would be cool to compare storage prices over time to see how trends are keeping up. I chose the WD Blue and Green lines of 3.5" 5400 rpm SATA drives because they've been around for several years. The numbers were digitized off the Camelizer price history graphs using Engauge Digitizer.
    1 point
  13. Yeah, possibly something got corrupted during the move over to the cache pool. If you have the space and want to save a bit of time pulling down all the images, there is nothing stopping you making a backup of that docker.img file: just stop the docker service, copy the file somewhere (the array would be fine), then start docker again. If you ever get into a state where things go bad, you can always just stop docker and overwrite the docker.img with the backup, and you will be up and running very quickly. The only downside is that you might be slightly out of date (depending on how frequently you back up), but that is a simple press of the Update All button to fix. (A quick sketch of the backup below.)
    1 point
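     (A quick sketch of that backup, assuming docker.img sits at /mnt/cache/docker.img and a backup share exists - adjust both paths to your setup:)

       /etc/rc.d/rc.docker stop                                    # stop the docker service first
       cp /mnt/cache/docker.img /mnt/user/backups/docker.img.bak   # copy the image to the array
       /etc/rc.d/rc.docker start                                   # start docker again
       # to restore: stop docker, copy the backup back over docker.img, start docker, press Update All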
  14. Set to "At startup of array" in user scripts #!/bin/bash echo "/boot/config/plugins/user.scripts/scripts/StartVMs/script" | at now Named StartVMs in user scripts #!/bin/bash printf "%s" "waiting for pfSense ..." # Change IP to match an address controlled by pfSense. # I recommend pfSense internal gateway or some address guaranteed to be up when pfSense is finished loading. # I don't use external IP's because I want my internal network and appliances to be fully available # whether the internet is actually connected or not. while ! ping -c 1 -n -w 1 192.168.1.1 &> /dev/null do printf "%c" "." done printf "\n%s\n" "pfSense is back online" virsh start VMName1 # Insert optional delay to stagger VM starts #sleep 30 virsh start VMName2
    1 point
  15. I guess you could specify just some folders of the backup set to be scattered. Depends on the size of your data mostly.
    1 point
  16. But why? It's incredibly inefficient, straining your server needlessly, and you have to configure 2 dockers. You can have both local and WAN access to the same docker; you just need to configure it well. Your DuckDNS doesn't need to be on the docker network, it can just be in host mode on your Unraid box. For your LE docker I would also give that docker its own IP and make sure you redirect your router to that IP (I assume this is what you also did for your current setup?). And then in your nginx config you use the IP of your Plex docker, and both WAN and LAN access should work.
    1 point
  17. Try giving Plex its own IP first by putting it on br0 or something. That will put it on your LAN. If you can access it locally then, you know that's the issue. (A CLI sketch of the idea below.)
    1 point
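     (A docker CLI sketch of the idea; the network name br0, the address, and the image are assumptions for your own setup - on Unraid this is normally done from the container template's Network Type field instead:)

       # attach the container to the custom br0 network with its own fixed LAN address
       docker run -d --name plex --network br0 --ip 192.168.1.50 plexinc/pms-docker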
  18. You can just # out that line in the Plex config. Then restart LE docker.
    1 point
  19. Well, you have an nginx config for your Plex set up already, right? The thing you made an image of? Can't you just copy-paste my code there? Make a backup of your own file before testing, though.
    1 point
  20. See here for a PR that apparently (not tried it) allows you to install the file manager plugin:- https://github.com/binhex/arch-rtorrentvpn/issues/96
    1 point
  21. Yep. Try my config if you want. My subdomain is plex.MYDOMAIN. So if that is the same for your case, you only need to change IPDOCKER to your Plex docker's IP.

      #Must be set in the global scope see: https://forum.nginx.org/read.php?2,152294,152294
      #Why this is important especially with Plex as it makes a lot of requests http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html / https://www.peterbe.com/plog/ssl_session_cache-ab
      ssl_session_cache shared:SSL:10m;
      ssl_session_timeout 10m;

      #Upstream to Plex
      upstream plex_backend {
          server IPDOCKER:32400;
          keepalive 32;
      }

      server {
          listen 80;
          #Enabling http2 can cause some issues with some devices, see #29 - Disable it if you experience issues
          listen 443 ssl http2; #http2 can provide a substantial improvement for streaming: https://blog.cloudflare.com/introducing-http2/
          server_name plex.*;

          send_timeout 100m; #Some players don't reopen a socket and playback stops totally instead of resuming after an extended pause (e.g. Chrome)

          #Faster resolving, improves stapling time. Timeout and nameservers may need to be adjusted for your location; Cloudflare's have been used here.
          resolver 1.1.1.1 1.0.0.1 valid=300s;
          resolver_timeout 10s;

          #Use letsencrypt.org to get a free and trusted ssl certificate
          ssl_certificate /config/keys/letsencrypt/fullchain.pem;
          ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

          ssl_protocols TLSv1.1 TLSv1.2;
          ssl_prefer_server_ciphers on;
          #Intentionally not hardened for security; player support matters, and encrypting video streams has a lot of overhead with something like AES-256-GCM-SHA384.
          ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

          #Why this is important: https://blog.cloudflare.com/ocsp-stapling-how-cloudflare-just-made-ssl-30/
          ssl_stapling on;
          ssl_stapling_verify on;
          #For letsencrypt.org you can get your chain like this: https://esham.io/2016/01/ocsp-stapling
          ssl_trusted_certificate /config/keys/letsencrypt/chain.pem;

          #Reuse ssl sessions, avoids unnecessary handshakes
          #Turning this on will increase performance, but at the cost of security. Read below before making a choice.
          #https://github.com/mozilla/server-side-tls/issues/135
          #https://wiki.mozilla.org/Security/Server_Side_TLS#TLS_tickets_.28RFC_5077.29
          #ssl_session_tickets on;
          ssl_session_tickets off;

          #Use: openssl dhparam -out dhparam.pem 2048 - 4096 is better but for overhead reasons 2048 is enough for Plex.
          ssl_dhparam /config/nginx/dhparams.pem;
          ssl_ecdh_curve secp384r1;

          #Will ensure https is always used by supported browsers, which prevents any server-side http > https redirects, as the browser will internally correct any request to https.
          #Recommended to submit your domain to https://hstspreload.org as well.
          #!WARNING! Only enable this if you intend to only serve Plex over https; until this rule expires in your browser it WON'T BE POSSIBLE to access Plex via http. Remove 'includeSubDomains;' if you only want it to affect your Plex (sub-)domain.
          #This is disabled by default as it could cause issues with some playback devices; it's advisable to test it with a small max-age and only enable it if you don't encounter issues. (Haven't encountered any yet)
          #add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

          #Plex has A LOT of javascript, xml and html. This helps a lot, but if it causes playback issues with devices turn it off. (Haven't encountered any yet)
          gzip on;
          gzip_vary on;
          gzip_min_length 1000;
          gzip_proxied any;
          gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript image/svg+xml;
          gzip_disable "MSIE [1-6]\.";

          #Nginx default client_max_body_size is 1MB, which breaks the Camera Upload feature from the phones.
          #Increasing the limit fixes the issue. If 4K videos are expected to be uploaded, the size might need to be increased even more.
          client_max_body_size 0;

          #Forward real ip and host to Plex
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          #When using ngx_http_realip_module change $proxy_add_x_forwarded_for to '$http_x_forwarded_for,$realip_remote_addr'
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;

          #Websockets
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";

          #Disables compression between Plex and Nginx, required if using sub_filter below.
          #May also improve loading time by a very marginal amount, as nginx will compress anyway.
          #proxy_set_header Accept-Encoding "";

          #Buffering off: send to the client as soon as the data is received from Plex.
          proxy_redirect off;
          proxy_buffering off;

          # add_header Content-Security-Policy "default-src https: 'unsafe-eval' 'unsafe-inline'; object-src 'none'";
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-Content-Type-Options nosniff;
          add_header Referrer-Policy "same-origin";
          add_header Cache-Control "max-age=2592000";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Robots-Tag none;
          add_header X-Download-Options noopen;
          add_header X-Permitted-Cross-Domain-Policies none;

          location / {
              #Example of using sub_filter to alter what Plex displays; this disables Plex News.
              sub_filter ',news,' ',';
              sub_filter_once on;
              sub_filter_types text/xml;
              proxy_pass http://plex_backend;
          }

          #PlexPy forward example, works the same for other services.
          #location /plexpy {
          #    proxy_pass http://127.0.0.1:8181;
          #}
      }
    1 point
  22. Moving over to target the master branch always makes me nervous, as a breaking change can happen at any point, and for me stability wins over features every time. I think for now at least I'm going to leave it targeting the GitHub release, just because a lot of changes have happened and I don't want to change too many things in one go. Sent from my EML-L29 using Tapatalk
    1 point
  23. You're missing a *. It should be */5 * * * * instead of */5 * * *. http://corntab.com/?c=*/5_*_*_*_*
    1 point
  24. Try rolling back. See posts linked below. https://forums.unraid.net/topic/44142-support-binhex-plex-pass/?do=findComment&comment=725645 https://forums.unraid.net/topic/44142-support-binhex-plex-pass/?do=findComment&comment=725170
    1 point
  25. I'd say that unraid is polished enough to run your main PC as a VM if you pass through your video card and a USB PCIe card to it - the latter for plug-n-play functionality. The big sticking point here, though, is getting a MB that'll pass through both without issues. Definitely check out the KVM section here. I plan on consolidating four PCs into two unraid boxes with 2+ Win10 VMs on each, each with their own Quadro card and possibly a USB3 PCIe card on one to ingest four Logitech Brios to NDI.
    1 point
  26. I would highly suggest getting a new connector instead (or fixing it if possible; it may just be a slightly bad connection with the pins - swapping the cable might help as well). I do not intend to fix something which is really not broken, and adding comments for empty trays might mess up the database; it would require a stupid amount of coding just to get that. Sorry I can't help you with this, at least not with this design.
    1 point
  27. I've been familiar and dabbling with UNRAID for several years but still consider myself new, as I've been taking my time rolling out my current UNRAID system. My use case is fairly similar to yours, though I'm very green on the PLEX side of things and that's my next tackle. I built two new PC systems a couple of years ago, one for Gaming (race sims) and one for UNRAID, primarily for NAS duty. With space issues as well, I recently looked at consolidating 2 boxes into 1.

     System hardware is a Z170 platform, i7-6700 CPU, 32GB RAM, 120GB SSD cache pool (2 drives), 4TB HDD storage array with parity (2TB x2 + 4TB x1), 500GB NVME and 1TB HDD unassigned for a Win10 VM (NVME) and games storage (HDD), GTX980 (passed through to the Win10 VM), GTX760 (passed through to a Mint VM) and a 4-port USB PCIe card (isolated to the Win10 VM). I'm currently running my Dockers on the NVME, though I'm looking into upgrading the cache pool to 500GB and possibly moving them back to running on cache.

     Currently I have a Win10 VM for gaming, a Mint 19.1 VM for daily driving, and a PLEX docker set up. The Mint VM is accessed remotely through the Win10 VM, with both running at once. I just started testing my PLEX server, as it was a previous docker setup that was moved, and I'm not convinced that I don't need to rebuild it yet. I've gotten a lot of good, helpful information from Spaceinvader One's excellent YouTube videos and great information in these forums. Still a ways to go but I'm getting closer. Once I get this setup stable with PLEX, I'm hoping to run as-is for a couple of years, and I'm going to start planning the replacement system, which I'm thinking will be about double this size in specs.
    1 point
  28. Is it correct that Emby already does this natively? - Would it be possible to run Emby & Plex in dockers side by side using the same P2000?
    1 point
  29. 1 point
  30. Hello, Nice work on steamcachebundle, makes LAN parties a breeze. Would it be possible to add Xbox download cache to the docker as well?
    1 point
  31. Logical for a single machine, but for docker use cases not as much. Yes, unless you don't auto-create the docker network for the first interface.
    1 point
  32. @bonienl You probably should make the docker custom network handling in the GUI more rounded, i.e. allow the user to disable auto-create (per NIC), and expose docker network inspect as part of the GUI. (CLI sketch below.)
    1 point
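     (What the GUI would surface is already available from the CLI; a sketch, assuming a custom network named br0:)

       docker network ls            # list all networks, including the auto-created ones
       docker network inspect br0   # show driver, subnet, gateway and attached containers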
  33. unRaid literally reinstalls on every boot, so you might say every boot is to a new motherboard; assuming everything was migrated correctly (mainly your drives and controllers), it should just work.

     Suggest running a 24 hour memtest on the new rig before booting unRaid. Also suggest turning off automatic array start before booting the first time, to give a chance to review the syslog for issues before starting the array. Also disable auto start of all Dockers and VMs.

     The main thing I'd be concerned about is the drive cabling. Double and triple check all is secure. Also, go through the BIOS settings and make sure everything is set properly: AHCI should be selected, SMART enabled, VT-x (if you are doing VMs) and VT-d (if you are doing pass through) turned on. USB settings sometimes require some tweaking to boot off the USB. Turn off any overclocking of CPU or RAM. If you are doing any pass through, those configurations will need to be redone on the new motherboard. More on this below.

     Make a backup of your unRaid flash. When you do start unRaid with the new motherboard, I suggest safe mode. Do a non-correcting parity check. Let it run for a while (30 mins) to make sure all is well. Monitor the syslog for drive issues, like link resets and speed downgrades. If you see any, stop the check, shut down and re-examine the cabling to the drives in question. Then repeat until you have a non-eventful parity check for at least 30 minutes. If a drive drops, you'll want to shut down and restore the backup of the config folder (most importantly the super.dat file) before rebooting.

     When all is good in safe mode, reboot in regular mode and start your Dockers one at a time and make sure they are working. Then the same for VMs. Relook at your CPU pinning and memory allocations in light of the new configuration. Like I said, if you're doing passthrough you'll have to go through the whole process of setting that up again. I suggest a full parity check the first night.

     Usually these motherboard swaps are surprisingly simple, but it's good to be prepared for issues. Good luck!
    1 point