Leaderboard

Popular Content

Showing content with the highest reputation on 02/28/19 in Posts

  1. We highly recommend following the terms of the relevant EULA, for example: https://www.microsoft.com/en-us/Useterms/Retail/Windows/10/UseTerms_Retail_Windows_10_English.htm
    1 point
  2. See the previous post: https://forums.unraid.net/topic/46127-support-binhex-rtorrentvpn/?do=findComment&comment=716138
    1 point
  3. It stopped working because RC4 is no longer included in the container; openssl ciphers -v doesn't list any RC4 ciphers.
    1 point
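The cipher check mentioned above can be run as a one-liner; a minimal sketch (run it inside the container, e.g. via docker exec, to test that container's OpenSSL build):

```shell
# Report whether this OpenSSL build still offers any RC4 cipher suites.
# Modern OpenSSL builds disable RC4 by default, so the usual result is "no".
if openssl ciphers -v 2>/dev/null | grep -q RC4; then
    echo "RC4 ciphers available"
else
    echo "no RC4 ciphers available"
fi
```

Note that this checks the cipher list OpenSSL advertises, not what a particular application has configured.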
  4. #!/bin/bash
     # This should always return the name of the docker container running Plex,
     # assuming a single Plex docker on the system.
     con="`docker ps --format "{{.Names}}" | grep -i plex`"

     echo ""
     echo "<b>Applying hardware decode patch...</b>"
     echo "<hr>"

     # Check to see if Plex Transcoder2 exists first.
     exists=`docker exec -i $con stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?`
     if [ $exists -eq 1 ]; then
         # If it doesn't, we run the clause below
         docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
         docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
         docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
         docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
         docker restart $con
         echo ""
         echo '<font color="green"><b>Done!</b></font>' # Green means go!
     else
         echo ""
         echo '<font color="red"><b>Patch already applied or invalid container!</b></font>' # Red means stop!
     fi
     EDIT: Just corrected some flawed assumptions in the logic above. Using grep -i to grab the container name so that it matches without case sensitivity. Using a variable to capture the return value of the stat, since docker exec -it can't be used and docker exec -i always returns 0. Flipped -eq 0 to -eq 1 since that was the inverse of the intended behavior. The only weird thing is that something prints "plex" in lowercase and I don't know where.
     EDIT2: Figured that out: docker restart $con prints the name of the container once it's restarted. Could redirect the output to /dev/null, though.
    1 point
  5. If you just want a single proc board, it’s hard to beat the X9SRL-F. Ram for it is cheap, and you can always throw in a v2 processor. https://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm
    1 point
  6. I compiled it myself after deleting libgomp. Should work now.
    1 point
  7. I'm currently also looking into a similar setup. From what I gather, the E5-2670 should be a fine match for a casual gaming VM while running a few docker containers and so on. There are a lot of Supermicro boards on eBay that might tick off some boxes, though I don't currently know how easy they are to work with regarding IOMMU groupings. You could go down a CPU tier and put the savings towards a dual-socket mobo; that's what I'm hoping for. I've also pondered getting a Ryzen system, but I'd need PCIe lanes for multiple gaming VMs. For your current use case, though, a Ryzen build might not be a bad bet, but I'd only go that way if you can find it at a good price with rebates and whatnot. Here's a link for some further inspiration; you should especially check out the "CPU comparison sheet": https://www.serverbuilds.net/ JDM_WAAAT has made some cool videos and builds that you could use, if you haven't already looked into this community.
    1 point
  8. 1 point
  9. I'm seeing the same thing as well. Found this when searching around: https://forums.sabnzbd.org/viewtopic.php?t=23364 It started up normally after doing the following:
     1. Connect to the docker (my container name was "sabnzbd"). To connect to a running container, use docker ps to get the name of the existing container, then use docker exec -it <container name> /bin/bash to get a bash shell in the container.
     2. cd /config/admin
     3. mv server.cert server.cert.old (or delete it, but I was trying to play it safe)
     4. mv server.key server.key.old (or delete it, but again playing it safe)
     I did an ls -al afterwards and saw that server.cert was immediately recreated, but not server.key. I checked SAB and it was then running normally.
    1 point
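The steps above can be sketched as a small script, using the container name "sabnzbd" from the post (adjust to yours). docker is stubbed here so the snippet can be dry-run anywhere; delete the stub line to run it against a real container:

```shell
# Stub: prints the docker commands instead of executing them (dry run).
docker() { echo "docker $*"; }

CON=sabnzbd
# Rename (rather than delete) the stale certificate and key, to be safe;
# SABnzbd recreates server.cert on restart.
docker exec "$CON" sh -c 'cd /config/admin && mv server.cert server.cert.old && mv server.key server.key.old'
docker restart "$CON"
```

If the key is not regenerated after the restart, the renamed copies can simply be moved back.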
  10. Hi @kalfun I could not get any BIOS from TechPowerUp to work for me, even after custom modification. My card is a GV-N1070IXOC-8GD; there _are_ BIOSes uploaded to TPU for the card, but the only dump I could get to work was the one I pulled myself and modified myself. I have seen a few other people online report similar things, but I have no idea what the cause is. My VM configuration ends up containing:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </source>
        <rom file='/mnt/user/gpudumps/GeForce GTX 1070 Mini ITX OC 8G/modified bios/193322.rom'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      </hostdev>
      I don't think I have shared anything that was not listed above, but my configuration is close to yours, so hopefully something in here is helpful. Good luck!
    1 point
  11. Hey, the AMD Athlon 200GE is on its way. 4GB RAM and a mATX Mobo with an Arctic Alpine Passive cooler. Case is the Fractal Design Define Mini. Maximum silence! I will post the results.
    1 point
  12. At the moment this docker is still beta. There are too many features missing, and I'm afraid that if it is added to the Community Applications installer I may get inundated with feature requests. At the moment I'm tied up with work until possibly around April. I'm able to set aside small bursts of development effort to work on this app, but it is not easy to dedicate the time needed for the bigger features. Sorry for the inconvenience. Sent from my ONE E1003 using Tapatalk
    1 point
  13. 1 x Antec 1200 (Twelve Hundred) case
      1 x Supermicro X9SRL-F
      1 x Noctua NH-U12DX i4
      1 x Intel Xeon E5-2695v2
      1 x Antec 850W
      3 x LSI 9211-8i
      5 x Noctua NF-F12
      1 x Lexar 32GB USB
      4 x 16GB DDR3L-1600 ECC REG
      4 x 5-in-3 hard disk cage / enclosure
      20 x WD RE4 4TB
      2 x 400GB OCZ Deneva 2 SSD
      NOTE: Not super pretty, but gets the job done.
    1 point
  14. The price of Pro is so cheap I'm surprised anyone purchases less.
    1 point
  15. Ok, so for anyone following this or interested: this may be becoming more of a general virsh support issue than anything, as I have an idea of what may work for achieving at least some form of easier backups, and having had to trash a VM recently this is becoming a more prominent issue for me. Please note, I have not tried rolling back a VM to a snapshot yet, so don't rely on the above as serious backups until it's been tested more. Now, as I'm fairly new to virsh and there are no doubt some gurus on here, this is my current script:
      #!/bin/bash

      # Declare backup sources
      sources=( "/mnt/cache/VMImages/vm1/" "/mnt/disk1/Disk1Test/VM Image/vm2/" )
      targets=( "/mnt/disks/Disk_1/VMImages/vm1" "/mnt/disks/Disk_1/VMImages/vm2" )
      vmname=( "vm1" "vm2" )
      arraylength=${#sources[@]}

      # Declare backup drives
      deviceuuid=( "6ed5043c-14ee-41f2-903d-d201ec50d39f" )
      devicemount=( "/mnt/disks/Disk_1" )
      devicelength=${#deviceuuid[@]}

      # Mount drives
      for (( ii=1; ii<${devicelength}+1; ii++ )); do
          if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
              echo "${devicemount[$ii-1]}" " - mounted"
          else
              echo "${devicemount[$ii-1]}" " - not mounted"
              mkdir "${devicemount[$ii-1]}"
              echo "${devicemount[$ii-1]}" " - created mount path"
              mount -t xfs UUID="${deviceuuid[$ii-1]}" "${devicemount[$ii-1]}"
              if [ $? -eq 0 ]; then
                  echo "${devicemount[$ii-1]}" " - mount success!"
              else
                  echo "${devicemount[$ii-1]}" " - mount failed!"
                  exit 1;
              fi
          fi
      done

      # Handle backup
      for (( ii=1; ii<${arraylength}+1; ii++ )); do
          echo "Starting backup of" "${sources[$ii-1]}" " to " "${targets[$ii-1]}"
          mkdir -p "${targets[$ii-1]}"
          #virsh domblklist "${vmname[$ii-1]}"
          virsh snapshot-create-as --domain "${vmname[$ii-1]}" $(date '+%Y-%m-%d-%H-%M-%S') --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2" --disk-only --atomic
          files=( $(find "${targets[$ii-1]}" -name "*.qcow2") )
          if [ ${#files[@]} -gt 0 ]; then
              echo "Running incremental backup - setting as inplace"
              rsync -ahv --delete --progress --inplace "${sources[$ii-1]}" "${targets[$ii-1]}"
          else
              echo "Running first backup - setting as sparse"
              rsync -ahv --delete --progress --sparse "${sources[$ii-1]}" "${targets[$ii-1]}"
          fi
          virsh blockcommit "${vmname[$ii-1]}" hdc --active --verbose --pivot
          rm "/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2"
          #virsh snapshot-list "${vmname[$ii-1]}"
      done

      # Unmount drives
      for (( ii=1; ii<${devicelength}+1; ii++ )); do
          if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
              fuser -k "${devicemount[$ii-1]}"
              umount "${devicemount[$ii-1]}"
              echo "${devicemount[$ii-1]}" " - unmounted"
              rmdir "${devicemount[$ii-1]}"
              echo "${devicemount[$ii-1]}" " - removed mount path"
          fi
      done
      So, this mounts the drive I specify to a mount point of my choosing, then loops through the VMs as needed. It creates a snapshot for each VM and checks whether that VM has been backed up before; if it hasn't, rsync does a sparse image backup to preserve the space used rather than copying the file at the full image size. If a backup already exists, it does an in-place backup which just updates the sparse image, again keeping the files small.
      This seems to work fine so far: I have a 30GB qcow2 VM that's using 5.6GB, and 3 backup runs later it's still only 5.6GB, since nothing in the VM ever really changes. The VM is running the whole time and there is no noticeable difference to it while this is all going on.
      Once the image is copied, the system commits the overlay image back into the base, removes the overlay image, and moves on to the next VM. Once it's done, it unmounts all the drives as needed. This is done so I can back up my VMs to multiple drives if needed; I use this for important documents on my array as well.
      So that's all working fine, but there is still that dreaded rsync that copies the entire image over every time, which isn't ideal but is fine while I'm doing local backups.
      I had a thought earlier, but I can't figure out for the life of me whether it's even remotely possible. From what I have read, "virsh snapshot-create-as" is capable of creating a snapshot backing chain with multiple layers when the "--reuse-external" parameter is passed. An example is below.
      A brand new image looks like this: base.img
      After 1 snapshot it looks like this: base.img -> snapshot1
      After 2 snapshots it looks like this: base.img -> snapshot1 -> snapshot2
      So my thought was: what happens if, when we create snapshot1, we copy base.img in our backup script? When we create snapshot2, we copy snapshot1. We then commit snapshot1 back into base.img using blockcommit with "--shallow --keep-relative", which leaves: base.img -> snapshot2. The next backup run then creates snapshot1 based on snapshot2, then commits snapshot2 as above, and we loop like this: one run creates snapshot1, the next creates snapshot2, and so on. The file you then transfer would be the "middle" snapshot and would in essence be the difference between the snapshots, resulting in a much smaller file to be copied.
      In my mind that would work up to this point (although I know far too little to say whether it's plausible). Now the main issue I see comes in: in your backup destination you now have a load of snapshot overlays and a base file, but how on earth would you get those overlays to commit back into the main file? The overlay and base files would be aware of entirely different backing chains, so I'm not sure how this could possibly be maintained.
      My hope is that this thread may become more of a rambling of ideas to help someone else come up with a good solution. At work we use Windows Hyper-V with active replication (so if one server dies, server 2 is only so far behind and can be spun up in its place); I would love to be able to do something similar with my home unraid boxes and KVM.
      I'm aware this post is long and the discussion may be drifting off topic, so perhaps it's worth moving elsewhere? Either way, hopefully the scripts above may help someone. Regards, Jamie
    1 point
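The snapshot-then-copy-then-commit cycle the script above performs for each VM can be isolated into a minimal sketch, using the same names (vm1, disk target hdc). virsh is stubbed here so the sequence can be dry-run anywhere; delete the stub line to run it against a real libvirt domain:

```shell
# Stub: prints the virsh commands instead of executing them (dry run).
virsh() { echo "virsh $*"; }

VM=vm1
OVERLAY="/tmp/overlays/$VM.qcow2"
mkdir -p "$(dirname "$OVERLAY")"

# 1. Redirect new guest writes into an external overlay; the base qcow2
#    image stops changing and is safe to copy.
virsh snapshot-create-as --domain "$VM" "$(date '+%Y-%m-%d-%H-%M-%S')" \
    --diskspec hdc,file="$OVERLAY" --disk-only --atomic

# 2. Copy the now-quiescent base image here (rsync --sparse / --inplace
#    in the full script above).

# 3. Merge the overlay back into the base image and discard it.
virsh blockcommit "$VM" hdc --active --verbose --pivot
rm -f "$OVERLAY"
```

The --pivot flag makes the guest switch back to writing into the base image once the commit finishes, which is why the overlay can then be deleted.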
  16. While we wait for LT to add this feature, I've been playing with multiple btrfs pools with the help of the UD plugin for some time; while there are some limitations, it's been working great. I wrote a FAQ entry for anyone who wants to try it, but note it's a work in progress. It should be safe, but use at your own risk: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=462135
    1 point
  17. I recently added an Aeotec Z-Wave USB stick to my unraid server and attached this device to my Home Assistant docker. Randomly this USB Z-Wave stick stops working, and I am wondering if this is a general USB issue, as I am also having USB communication problems with my APC UPS (see below). For the Z-Wave stick, if I shut the docker down, unplug the USB Z-Wave stick, reinsert it, and restart the docker, I am back up and running. This is a major inconvenience (WAF) since the home automation randomly stops working. For reference, here is the additional information on the APC UPS issue: about a year ago I added an APC UPS to my system, and ever since it was added I get notifications about lost communications from the UPS. These are the messages in the syslog:
      Jan 27 06:34:28 Tower kernel: usb 5-2: USB disconnect, device number 5
      Jan 27 06:34:29 Tower kernel: usb 5-2: new full-speed USB device number 6 using xhci_hcd
      Jan 27 06:34:29 Tower kernel: hid-generic 0003:051D:0002.0004: hiddev96,hidraw0: USB HID v1.00 Device [American Power Conversion Back-UPS RS 700G FW:856.L4 -P.D USB FW:L4 -P ] on usb-0000:0a:00.0-2/input0
      Jan 27 06:34:35 Tower apcupsd[9482]: Communications with UPS restored.
      I have just been ignoring these messages... sometimes I get 3 notifications a day, some days none. Any suggestions would be greatly appreciated! Logs are attached. tower-diagnostics-20180203-2243.zip
    1 point
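To gauge how often the disconnect/reconnect cycle above actually happens, the syslog can be filtered for those two kernel messages. A sketch, shown here against the excerpt from the post via a here-doc; on the server itself you would grep /var/log/syslog instead:

```shell
# Keep only the USB disconnect/re-enumeration events; count them per day
# on a real syslog by piping through `wc -l` or grouping on the date field.
grep -E 'USB disconnect|new full-speed USB device' <<'EOF'
Jan 27 06:34:28 Tower kernel: usb 5-2: USB disconnect, device number 5
Jan 27 06:34:29 Tower kernel: usb 5-2: new full-speed USB device number 6 using xhci_hcd
Jan 27 06:34:35 Tower apcupsd[9482]: Communications with UPS restored.
EOF
```

If the events cluster on one root hub (the bus number after "usb"), that points at a controller or port problem rather than the individual devices.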
  18. For completeness, add this to /boot/config/go:
      echo "USERNAME ALL=(ALL) ALL" >> /etc/sudoers
    1 point
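A hedged variant of the go-file line above: validate the rule with visudo before installing it, so a typo can't lock you out of sudo. This sketch assumes the system's sudoers includes /etc/sudoers.d (not guaranteed on a stock unRAID sudoers; appending to /etc/sudoers as in the post works there too), and "USERNAME" is a placeholder as in the original. visudo and install are stubbed so the snippet can be dry-run; delete the stub lines on a real system:

```shell
# Stubs: print the commands instead of executing them (dry run).
visudo()  { echo "visudo $*"; }
install() { echo "install $*"; }

RULE='USERNAME ALL=(ALL) ALL'
TMP=$(mktemp)
printf '%s\n' "$RULE" > "$TMP"
# visudo -c -f checks the candidate file's syntax without touching /etc/sudoers
if visudo -c -f "$TMP"; then
    install -m 440 "$TMP" /etc/sudoers.d/USERNAME
fi
rm -f "$TMP"
```

The same visudo -c -f check can be used to verify /etc/sudoers itself after the go-file append runs.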