Leaderboard

Popular Content

Showing content with the highest reputation on 07/05/19 in all areas

  1. Hey everyone, just thought I'd put this up here after reading a syslog from another forum member and noticing a repeating pattern I've seen here: folks let Plex create temporary files for transcoding on an array or cache device instead of in RAM.

Why should I move transcoding into RAM? What do I gain?
In short, transcoding is both CPU and IO intensive. Many write operations occur to the storage medium used for transcoding, and on an SSD specifically this causes unnecessary wear that will burn the SSD out more quickly than necessary. By moving transcoding to RAM, you take that burden off your non-volatile storage devices. RAM isn't subject to "burn out" from usage the way an SSD is, and transcoding doesn't need nearly as much space in memory as some would think.

How much RAM do I need for this?
A single stream of video content transcoded to 12 Mbps on my test system took up 430 MB on the root RAM filesystem. The quality of the source content shouldn't matter, only the bitrate to which you are transcoding. In addition, there are other transcoder settings that affect this number, including how many seconds of transcoding should occur in advance of playback. Bottom line: if you have 4 GB or less of total RAM on your system, you may have to tweak settings based on how many different streams you intend to transcode simultaneously. If you have 8 GB or more, you are probably in the safe zone, but obviously the more RAM you use in general, the less space will be available for transcoding.

How do I do this?
There are two tweaks to be made in order to move your transcoding into RAM. One is to the Docker container you are running and the other is a setting within the Plex web client itself.

Step 1: Changing your Plex container properties
From within the webGui, click on "Docker" and click on the name of the PlexMediaServer container. From here, add a new volume mapping: /transcode to /tmp. Click "Apply" and the container will be started with the new mapping.

Step 2: Changing the Plex Media Server to use the new transcode directory
Connect to the Plex web interface from a browser (e.g. http://tower:32400/web). From there, click the wrench in the top right corner of the interface to get to settings, then click the "Server" tab at the top of the page. On the left you should see a setting called "Transcoder." Clicking on that and then clicking the "Show Advanced" button reveals the magical setting that lets you redirect the transcoding directory. Type "/transcode" in there, click apply, and you're all set. You can tweak some of the other settings if desired to see if that improves your media streaming experience. Thanks for reading and enjoy!
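For anyone configuring the container outside the Unraid webGui, here is a minimal sketch of the equivalent mapping with docker run. The container name, image, and the config/media paths are illustrative assumptions, not taken from the post; the /tmp-to-/transcode mapping is the only point being shown:

#!/bin/bash
# Illustrative only: map the container's /transcode to the host's RAM-backed /tmp.
# Adjust the name, image, and your existing volume mappings to your own setup.
docker run -d \
  --name=plex \
  --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  -v /tmp:/transcode \
  plexinc/pms-docker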
    1 point
  2. Hi guys. I have written a couple of scripts. These download custom VM icons and GUI banners from a website repo to a chosen folder on the array, then install them from there into the VM manager by rsyncing to /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images. Before, we used to add them to the flash drive and copy them with the go file as the server started. Well, now with Squid's great User Scripts plugin we can copy them after boot from the array. My plan is for us all to be able to create custom icons and share them through these scripts with other users. Please create any icons and upload them somewhere, then post a link here in this thread and I will add them to the website repo, hopefully building a "community VM icons" collection between us. If you would like to see what is there so far, please go to:
Icons: www.spaceinvader.one/unraidvmicons/
Banners: www.spaceinvader.one/unraidbanners/
For example usage of an icon, I made this one (I know it's not great, but you get the idea!). Having the Nvidia logo in the Windows icon tells me this VM is the one with my 1070 passed through, not the VNC one.

There are 2 scripts that make this work.

1. Icon_banner downloader
This script has 2 settings. The first setting downloads all icons in category folders to a chosen location on the array. From there the user copies the icons he or she likes into a directory called "store"; from "store", script 2 syncs these icons into the VM manager on array startup or manually. With the second setting, the script downloads all icons without category folders straight to the store folder and then rsyncs them to the VM manager with no user control (i.e. you can't pre-select which icons are synced with this setting). This script also downloads custom banners to a separate folder. The script also has push notifications for either Pushover or Pushbullet, so when new icons or banners are downloaded the server will send a message telling you how many new ones have been downloaded. No message will be sent if nothing new was downloaded.

2. Icon sync
This script rsyncs the downloaded icons which the user has sorted into the store folder. This is done automatically on array start or manually.

I have created a video showing how to set up and use them below. I hope you guys like this idea. Scripts are also in the zip attached to this post, ready for User Scripts.
Custom VM icons automatically downloaded and installed to unraid

Script 1

#!/bin/bash
#downloads custom icons from online icon repository to array then copies them into vm manager.
#
#set below to [0 - first copies icons to category folders, so you can choose which icons to have copied to the system]
#set below to [1 - directly downloads all icons without categories then copies them to your system without user choice]
direct_copy_icons="0"
#set location on server for download of icons if above not set to direct
downloadlocation="/mnt/user/test"
#
#optional push notifications; leave all settings below alone if none required
#
#set whether to use push notification on download of new icons [0 - none] [1 - pushover] [2 - pushbullet]
pushnotifications="1"
#pushover api (only fill in if set to pushover above)
apitoken="token=put your pushover token here"
userkey="user=put your pushover user key here"
#pushbullet api (only fill in if set to pushbullet above)
API="put your push bullet api key here"

#dont change anything below here ***********************************************************************************

dirtemp=$downloadlocation"/icons/temp"
dirstore=$downloadlocation"/icons/store"
dirbanner=$downloadlocation"/icons/banners"

#set function "pushnotice" according to push type
if [[ "$pushnotifications" =~ ^(1|2)$ ]]; then
  if [ "$pushnotifications" -eq 1 ]; then
    function pushnotice {
      curl -s \
        --form-string $apitoken \
        --form-string $userkey \
        --form-string "message=$1" \
        https://api.pushover.net/1/messages.json
    }
    echo "set for pushover"
  elif [ "$pushnotifications" -eq 2 ]; then
    function pushnotice {
      curl -u $API: https://api.pushbullet.com/v2/pushes -d type=note -d title="unRAID vm icons" -d body="$1"
    }
    echo "set for pushbullet"
  fi
else
  function pushnotice {
    echo "$1"
  }
fi

#check the temp folder on the array exists
if [ ! -d $dirtemp ]; then
  echo "Setting up first folder $dirtemp"
  # make the temp directory as it doesn't exist
  mkdir -vp $dirtemp
else
  echo "continuing."
fi

#check the banner folder on the array exists
if [ ! -d $dirbanner ]; then
  echo "Setting up banner folder $dirbanner"
  # make the banner directory as it doesn't exist
  mkdir -vp $dirbanner
else
  echo "continuing."
fi

#check the store folder on the array exists
if [ ! -d $dirstore ]; then
  echo "Setting up second folder $dirstore"
  # make the store directory as it doesn't exist
  mkdir -vp $dirstore
else
  echo "All folders needed are already created, continuing."
fi

if [[ "$direct_copy_icons" =~ ^(0|1)$ ]]; then
  if [ "$direct_copy_icons" -eq 0 ]; then
    # set download location to temp folder for user to sort
    echo "information: direct_copy_icons flag is 0. Icons will be copied to array first for manual sorting."
    download=$dirtemp
    #set wget to download with folder structure for user sorting
    get="-r -c -S -N -nH -e robots=off -np -A png -R index.html* http://spaceinvader.one/unraidvmicons/"
    getbanner="-r -c -S -N -nH -e robots=off -np -A png -R index.html* http://spaceinvader.one/unraidbanners/"
    #set what to do at end of script
    end=0
  elif [ "$direct_copy_icons" -eq 1 ]; then
    # set download location to store folder then copy to system
    echo "information: direct_copy_icons flag is 1. Icons will be copied directly to system without user intervention."
    download=$dirstore
    #set wget to download without folder structure as direct to system
    get="-r -c -S -N -nH -e robots=off -nd -np -A png -R index.html* http://spaceinvader.one/unraidvmicons/"
    getbanner="-r -c -S -N -nH -e robots=off -np -A png -R index.html* http://spaceinvader.one/unraidbanners/"
    #set what to do at end of script
    end=1
  fi
else
  echo "failure: direct_copy_icons is $direct_copy_icons. this is not a valid format. expecting [0 - array first] or [1 - direct to system]. exiting."
  exit 1
fi

# (ASCII art banner reading "DOWNLOAD ICONS" echoed here; trimmed for readability)

firstcount=$(find $dirtemp -type f | wc -l)
firstcount2=$(find $dirstore -type f | wc -l)
bannercount=$(find $dirbanner -type f | wc -l)
sleep 10
wget $get -P $download
wget $getbanner -P $dirbanner
sleep 3
lastcount=$(find $dirtemp -type f | wc -l)
lastcount2=$(find $dirstore -type f | wc -l)
bannercount2=$(find $dirbanner -type f | wc -l)
totalnew=$(($lastcount - $firstcount))
totalnew2=$(($lastcount2 - $firstcount2))
bannernew=$(($bannercount2 - $bannercount))

if [[ "$direct_copy_icons" =~ ^(0|1)$ ]]; then
  if [ "$direct_copy_icons" -eq 0 ]; then
    # (ASCII art thumbs-up and message reading "ALL DONE NOW SORT YOUR ICONS" echoed here; trimmed for readability)
    echo "Sort your icons located at $dirtemp and put the ones you want into $dirstore"
    if [ "$lastcount" -gt "$firstcount" ]; then
      pushnotice "$totalnew new icons downloaded to $dirtemp ready for sorting"
    else
      echo "No new icons downloaded"
    fi
  elif [ "$direct_copy_icons" -eq 1 ]; then
    #rsync downloaded icons to dynamix.vm.manager/templates/images then display message
    rsync -a $dirstore/* /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images
    # (ASCII art thumbs-up and message reading "ALL DONE ICONS NOW READY" echoed here; trimmed for readability)
    echo " Icons are now ready to use and available in vm manager. "
    if [ "$lastcount2" -gt "$firstcount2" ]; then
      pushnotice "$totalnew2 new icons downloaded to your vm manager"
    else
      echo "No new icons downloaded"
    fi
  fi
else
  echo "."
fi

if [ "$bannercount2" -gt "$bannercount" ]; then
  pushnotice "$bannernew new banners downloaded to $dirbanner"
else
  echo "No new banners downloaded"
fi
exit 0

Script 2

#!/bin/bash
# this script works with icon_banner downloader
# It syncs the vm icon store folder on the array with /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images

#set location on server for download of icons; must be the same location as in icon_banner downloader or this script will not work
downloadlocation="/mnt/user/test"

# do not change anything below this line
dirstore=$downloadlocation"/icons/store"

#check the above location exists
if [ ! -d $dirstore ]; then
  echo "$dirstore does not exist. Please check that the icon_banner downloader script is installed and has run at least once, and that downloadlocation in this script is set the same."
  exit 1
else
  echo "Ok, everything looks how it should. Syncing vm icon store with dynamix.vm.manager."
fi
rsync -a $dirstore/* /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images
sleep 5
exit

icon_scripts.zip
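A quick way to confirm the sync worked is simply to list the target directory the scripts rsync into (the path is taken from the scripts above):

ls /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images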
    1 point
  3. I typically shut my system down carefully, by first stopping the VMs, then stopping the array, then powering down. Today I went straight for "stop array" and ended up with an unclean shutdown. Some research brought me to this helpful post by @dlandon. Basically, these settings must be in sync to get your system to shut down gracefully:
"Settings->VM Manager->VM Shutdown time-out": must be long enough to allow your VMs to shut down.
"Settings->Disk Settings->Shutdown time-out": must be long enough for the array to stop *after* the VMs have shut down, so 60 to 120 seconds longer than the VM timeout.
"Settings->UPS Settings->Time on battery": must be short enough that the system will have enough power to stay on for the entire disk settings shutdown time-out.
Additionally, "Settings->UPS Settings->Turn off UPS after shutdown" seems extremely risky and should only be used if you know your UPS will stay on longer than the disk settings shutdown time-out. I don't see any options to configure this. Currently the UI provides no help in tying these settings together. So the request is:
Adjust the UI so that it won't allow you to set the disk settings shutdown timeout less than the VM shutdown timeout (plus perhaps a minimum additional delay?).
Adjust the UI on the UPS Settings page to make it clear that you need to trigger the shutdown early enough to allow the system to run for at least as long as the disk settings shutdown time-out.
Add a warning telling users they probably don't want to enable the "Turn off UPS after shutdown" option.
It may be worth adding a new "Settings->Powerdown" page that gathers all of the relevant settings in one place.
A secondary request for @Squid: updating the UI to help people put in reasonable values is great, but will only help if you are looking at those screens. Perhaps FCP could alert the user if these settings are out of sync? (A rough sketch of such a check follows.)
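As a starting point for that FCP idea, here is a rough sketch of such a consistency check. The config file locations and key names are assumptions about where Unraid stores these settings, not verified, so treat this as pseudocode to adapt:

#!/bin/bash
# Assumed locations/keys (verify on your own system before relying on this):
#   /boot/config/domain.cfg : TIMEOUT="..."          (VM shutdown time-out)
#   /boot/config/disk.cfg   : shutdownTimeout="..."  (array shutdown time-out)
vm_timeout=$(grep -oP '^TIMEOUT="\K[0-9]+' /boot/config/domain.cfg)
disk_timeout=$(grep -oP '^shutdownTimeout="\K[0-9]+' /boot/config/disk.cfg)
if [ "$disk_timeout" -lt $((vm_timeout + 60)) ]; then
  echo "Warning: disk shutdown time-out ($disk_timeout s) should be 60-120 s longer than the VM time-out ($vm_timeout s)"
fi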
    1 point
  4. The error appears to have been a real disk error, especially if accompanied by a pending-sector warning. Still, the disk is good for now; I would keep it, but if there are any more errors in the near future, replace it. You can run a parity check (non-correcting), and since the SMART test passed it should complete without errors. Keep an eye on it for now; rebooting will clear the array error counts.
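For reference, a non-correcting check can also be started from the command line. This is a sketch assuming the stock mdcmd tool that the webGui drives; check the exact syntax on your release:

# Start a read-only parity check: sync errors are reported but not written.
/usr/local/sbin/mdcmd check NOCORRECT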
    1 point
  5. OK, I tinkered around the last few days with my Ryzen 1700 system and got it down to about 75 W idle with a GTX 1070 AND a GTX 960, but there is still some tinkering involved and it's not plug-in-and-go. That said, for a low-power system I would still suggest a system like my other server: a Celeron J4105. It is able to transcode up to 3-4 1080p streams, handles my 25 Docker containers pretty well, and idles at around 30 W. The Ryzen APUs are not yet able to transcode at all as far as I know. If you want to run this many Dockers I recommend 16 GB of RAM; if not, 8 GB will suffice. Cheers
    1 point
  6. You can also set FCP to "Avoid Disk Spinups", in which case any drive which is spun down won't get spun back up to run its tests. Doesn't help though if the drives aren't spinning down in the first place due to FCP running too often.
    1 point
  7. How often do you have Fix Common Problems set to do background scans?
    1 point
  8. A pending sector can also disappear if the next attempt to write to it succeeds. It only gets reallocated if that also fails.
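If you want to watch those two attributes from the console, smartctl shows them directly (replace sdX with the disk in question):

# Raw values: Current_Pending_Sector should return to 0 after a successful
# rewrite; Reallocated_Sector_Ct increases only if the rewrite fails.
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'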
    1 point
  9. You have to add it/uncomment it in the default site-config: nano /config/nginx/site-confs/default. Mine looks like the following; you should find the section if you scroll down a bit. Not sure if I added the line or if you only have to uncomment it.
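The screenshot from the post isn't reproduced here, so as a generic illustration only (this may not be the exact line in question): the linuxserver default site-conf typically carries commented-out include lines like the following, which you enable by removing the leading #:

# inside the server { } block of /config/nginx/site-confs/default
# enable subfolder-method reverse proxy configs
include /config/nginx/proxy-confs/*.subfolder.conf;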
    1 point
  10. Probably what you really want is "single", not raid0. Or just forget about the smaller SSD and save the port for another array disk later. The difference between single and raid0 is explained in that 2nd link I gave.
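For reference, converting an existing two-device pool to those profiles from the command line looks like this; /mnt/cache is the usual Unraid pool mount point, so adjust to your system:

# "single" fills each device independently (no striping), so differently
# sized SSDs contribute their full capacity; metadata stays mirrored.
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache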
    1 point
  11. Edit: This is not a problem with an easy solution at all. I can monitor the transcode processes and make sure that everything is killed, but the only solution is to kill Plex: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685/24

I can use fuser -vk /dev/nvidia* and it will immediately switch to a P8 state. The only process using the card when this is run is "Plex Media Server". It's not hard to write a script that will only do this if there are no processes using the card and the card is in a P0 state; I just don't know if there are any undesirable side effects of doing it this way. Here is such a script:

#!/bin/bash
while true; do
  cur_pstate=$(nvidia-smi --query-gpu=pstate --format=csv,noheader)
  running_processes=$(ps --no-headers "$(nvidia-smi | tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3)" 2>/dev/null | wc -l)
  if [[ $cur_pstate = "P0" && $running_processes -eq 0 ]]; then
    # if we got here, the card is only running the Xorg process and is in the P0 state; let's fix that.
    fuser -kv /dev/nvidia*
    echo "Reset Power State"
  fi
  # sleep so we aren't blocking a thread constantly.
  sleep 1
done

Starting the X server on Unraid does allow one to open nvidia-settings; to do this you can use a script like this to start the X server (note that since chvt and fgconsole aren't available, you will have to switch back to VT7 by pressing Ctrl+Alt+F7):

#!/bin/bash
## This will only work on single-GPU systems:
GPUID=$(nvidia-xconfig --query-gpu-info | grep BusID | sed 's/^[^:]*: //')
# Now that we know the PCI BusID of the card we can create the X server with a fake display:
nvidia-xconfig -s -a --allow-empty-initial-configuration --use-display-device=None --virtual=640x480 --busid "$GPUID" -o /dev/stdout | X :99 -config /dev/stdin &

Once you have that server running, you can return to the default unraid GUI and run:

nvidia-settings -c :99

to open nvidia-settings on the card. You could also store an xorg configuration file and use that for the virtual X display, and to set persistent nvidia settings.

The only way I can think of to fix this properly is to figure out why the Plex process is claiming the card and prevent that from happening. I'll look into it some more, but this needs to be fixed properly by Plex/nVidia. The linked thread at the Plex forums has more information. I may be able to detach the Plex Transcoder process with the wrapper script, making it its own entity, and then trap the SIGINT/SIGKILL in the wrapper and use it to kill the transcoder, effectively using the wrapper script to separate the Plex Media Server process from the Plex Transcoder process. It's pretty kludgy, but might work. Oh boy: we're in idle-P-state-while-transcoding territory!
    1 point
  12. 1. 5.4.3.1
2. iMac 19,1. Running AVID through a VM sounds herculean... what kind of box do you have? Still no sound passed through at all, though.
    1 point
  13. I've read posts in which people state that their Ryzen systems draw like 50-70 W from the wall idling. I can't get mine under 100 W, no matter what, and a nearly-as-powerful Intel system comes really close to the much lower wattage. Like I said, it is possible, but so much more difficult, or I didn't find the right way. And your calculations are a little bit off: 11 kWh/month adds up to 132 kWh per year, and multiplied by €0.29/kWh that is about €38 a year. That is quite a lot, if you ask me. Sent from my Pixel 2 XL using Tapatalk
    1 point
  14. I only use pinning to give unraid 1 core and my VM 3 cores; everything else is a Docker party of priorities. Also @Squid, I'm fairly sure it was one of your awesome posts that I got that info from.
    1 point
  15. That prioritizes other apps over this one. So if Plex needs all the CPU power you've got, running this app won't impede it. IMO, there aren't too many real-world use cases for pinning a container.
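For context, the prioritization being described corresponds to Docker's relative CPU weighting. A minimal illustration, with the image name and value as examples only:

# Default weight is 1024; a lower share only matters under contention,
# so this container yields CPU to busier containers like Plex.
docker run -d --name=unmanic --cpu-shares=512 josh5/unmanic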
    1 point
  16. Here is mine. Yours should look roughly the same, bar the excessive libraries I have, haha.
---------------------
I just realised this may be unhelpful, so here's what's in some of them. Make sure you have advanced view on.
A third addendum to this: the UI (for me at least) is slow as shit. I doubt that's the container; it's more that my server is almost pegged 24/7 because I'm not a nice sysadmin. Just give it some time to load things up. Don't forget this option either.
    1 point
  17. oh... put https://github.com/Josh5/unmanic/raw/master/webserver/assets/icon-clear_bg.png in the Icon URL field, with advanced view turned on.
    1 point
  18. I am running the ASRock Taichi X399 with 4 GPUs and have had no real issues. I started with 4x 960s but have since replaced the 2 bottom cards with RTX 2070s and couldn't be happier: I'm getting within 3% of bare-metal performance even with the 8x slots. The bottom card gets the best airflow, so I have a nice overclock on that card. I use the top card for unraid unless all 4 gaming workstations are in use; then the fourth VM takes over the top card on boot. It shouldn't matter where your favorite GPU sits, since 8x and 16x slots perform very close to the same. My preference is to have the GPU with the best airflow be the primary for my VM. I use the web GUI or ssh to manage the array after the 4th machine boots.
    1 point
  19. I don't use btrfs for VM vdisk storage, so I wrote a script with parts of the posted scripts that stops VMs and rsyncs directories rather than using btrfs send/receive. Hopefully this helps anyone else trying to achieve the same.

#!/bin/bash
#
# This script will stop all Unraid VMs and rsync the specified src directories to
# the specified dst directory. All src directories will be base64 encoded with
# hostname and directory path to eliminate potential naming collisions and
# the need for character escapes. This will complicate restoration of
# backup data. The following illustrates what will be written and how to decode
# the base64 string.
#
# # echo $SRC
# /mnt/disks/src/domains/
# # echo $DST
# /mnt/user0/Backup/domains
# # hostname -f
# localhost
# # pwd
# /mnt/user0/Backup/domains
# # ls
# bG9jYWxob3N0Oi9tbnQvZGlza3Mvc3JjL2RvbWFpbnMvCg==/
# # echo "bG9jYWxob3N0Oi9tbnQvZGlza3Mvc3JjL2RvbWFpbnMvCg==" | base64 --decode
# localhost:/mnt/disks/src/domains/
#

# Array of source directories with trailing forward slash
declare -a SRC=( "/mnt/disks/src/domains/" )

# Destination directory without trailing forward slash
DST="/mnt/user0/Backup/domains"

# Timeout in seconds to wait for VMs to shut down before failing
TIMEOUT=300

# Stop all VMs
STOP() {
  for i in `virsh list | grep running | awk '{print $2}'`; do
    virsh shutdown $i
  done
}

# Start all VMs flagged with autostart
START() {
  for i in `virsh list --all --autostart | awk '{print $2}' | grep -v Name`; do
    virsh start $i
  done
}

# Wait for VMs to shut down
WAIT() {
  TIME=$(date -d "$TIMEOUT seconds" +%s)
  while [ $(date +%s) -lt $TIME ]; do
    # Break the loop when no running domains are left.
    test -z "`virsh list | grep running | awk '{print $2}'`" && break
    # Wait a little; we don't want to DoS libvirt.
    sleep 1
  done
}

RSYNC() {
  rsync -avhrW "$1" "$2"
  STATUS=$?
}

QUIT() {
  exit
}

NOTIFY() {
  /usr/local/emhttp/webGui/scripts/notify \
    -i "$1" \
    -e "VM Backup" \
    -s "-- VM Backup --" \
    -d "$2"
}

NOTIFY "normal" "Beginning VM Backup"
STOP
WAIT
# If any VM is still running after the timeout, restart everything and bail out.
if [ -n "$(virsh list | grep running | awk '{print $2}')" ]; then
  NOTIFY "alert" "VMs Failed to Shutdown. Restarting VMs and Exiting."
  START
  QUIT
fi

for i in "${SRC[@]}"; do
  RSYNC "$i" "$DST/$(echo `hostname -f`:$i | base64)"
  if ! [[ $STATUS -eq 0 ]]; then
    NOTIFY "warning" "Rsync of $i returned exit code $STATUS."
  fi
done

START
NOTIFY "normal" "Completed VM Backup."
QUIT
    1 point