About bigjme

  1. bigjme

    Docker Groups

    So as I'm using more and more Dockers, I'm finding my Docker screen getting a little messy (with 23 containers). So my request is for some sort of Docker grouping or categorisation in the Docker GUI. While it won't affect many users, it may be useful to be able to group Dockers by what they're for and tidy up the interface.

    For example, with my setup I'd want these groups:

      • Home Media Services
      • Web Services
      • CCTV
      • Utils
      • Not Needed (I store old but useful Dockers here)

    It would be amazing to see something like this, potentially with a user-creatable grouping list so we can make as many as needed. Even if it just acted as a filter, that would be amazing.

    Jamie
  2. bigjme

    [Support] Linuxserver.io - Nginx

    OK, I did not think of this at all... Safe to say this would be a much easier approach than what I was thinking of, as it's more scalable. Looks like I'm going to need to learn some Docker, potentially! Thanks both for your replies. As I say, it was a bit of an odd one, and one I didn't really expect a response to.
  3. bigjme

    [Support] Linuxserver.io - Nginx

    This is a bit of an odd one, so forgive me; I know it may sound totally mad. For years I've hosted various servers and websites on an unRAID server. It's my job, so I host test sites and everything else when I'm working remotely.

    I have a custom reverse proxy coded and working in nginx, and a number of nginx backend servers. This allows me to route different websites to different instances of nginx entirely. Sometimes I get stuck with older websites, or ones that have bugs on certain PHP versions, and I need to change it.

    Right now the nginx Docker ships with PHP 7.1.17, which is great. My question is: is there any way to change the PHP version without manually removing and installing a new one each load? I was thinking perhaps a set of stable PHP versions, all installed at the same time but with FastCGI on different ports. You could then change the FastCGI port used by nginx and thereby change the PHP version used.

    The most commonly selectable options I've seen are: 4.4, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 7.0, 7.1 and 7.2.

    My idea is to use Dockers instead of something like MultiPHP so everything is truly isolated. I know the idea behind it is mad and likely not usable by many, but it's worth a shot.
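To make the port-per-version idea concrete, here is a rough nginx fragment showing what the switch could look like. This is only a sketch, not the linuxserver.io container's actual layout: the upstream names, ports (9056/9072) and server_name are all invented, and it assumes separate PHP-FPM containers are already listening on those ports.

```nginx
# Hypothetical: one PHP-FPM container per version, each publishing
# FastCGI on its own port (names and ports are made up).
upstream php56 { server 127.0.0.1:9056; }
upstream php72 { server 127.0.0.1:9072; }

server {
    listen 80;
    server_name legacy.example.com;
    root /config/www/legacy;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Point this at php56 or php72 to switch the PHP version per site
        fastcgi_pass php72;
    }
}
```

Switching a site between PHP versions would then be a one-line change to its fastcgi_pass.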
  4. bigjme

    [support] Spants - Kerberos.io template

    I managed to get it to run and work using these settings. I couldn't figure out how to map the config folder for Kerberos itself, however, so I'm assuming an update would wipe the settings within.

    One thing I did notice was that it seems very memory-heavy. A single 3MP camera was causing the Docker to pull at least 3GB of memory (climbing quickly) and two full system cores on my 5960X. Safe to say my Windows 10 VM running Blue Iris pulls around 2GB with a full Windows install and a tiny 3% of its four allocated cores in comparison.

    Regards, Jamie
  5. Hi All,

    So my unRAID server uses an Adaptec HBA, and sadly I'm unable to monitor it right now as I'm using unRAID. Primarily it's just to check on the HBA temperature, as I have no other way to check it.

    I was wondering if there was anyone with a built Docker file for this, or who would be able to create one? I know this topic was brought up around two years ago and there were issues with having the software compiled into the Docker; I believe it was decided the software would have to be downloaded and installed on docker run, which is fine.

    I've found the following two Dockers that do this, but they're all on GitHub so I've got no idea how to use them:

    https://github.com/Fish2/docker-storman
    https://github.com/nheinemans/docker-storman

    I'm not sure if there are still licensing issues I'm not aware of, or it may be the case that this simply won't work at all, but it would perhaps be useful for anyone on here running Adaptec, maybe now that they're a lot better supported?

    Jamie
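For what it's worth, both repos appear to be ordinary Dockerfile projects, so in principle docker can build them straight from the GitHub URL. A rough, untested sketch; the local tag "storman" is arbitrary, and --privileged is my assumption for giving the container access to the Adaptec HBA:

```
# Untested sketch: build the image directly from the repo, then run it.
# "storman" is an arbitrary local tag; --privileged is assumed to be
# needed so the container can reach the HBA hardware.
docker build -t storman https://github.com/Fish2/docker-storman.git
docker run -d --name storman --privileged storman
```

The repo's README should confirm which ports and devices actually need mapping.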
  6. bigjme

    KVM Live Backup qcow2

    OK, so for anyone following this or interested: this may be becoming more of a general virsh support issue than anything, as I have an idea of what may work for achieving at least some form of easier backups, and having had to trash a VM recently this is becoming a more prominent issue for me.

    Please note, I have not tried rolling back a VM to a snapshot yet, so don't use the above as serious backups until tested more.

    Now, as I'm fairly new to virsh and there are no doubt some gurus on here, this is my new current script:

        #!/bin/bash

        # Declare backup sources
        sources=( "/mnt/cache/VMImages/vm1/" "/mnt/disk1/Disk1Test/VM Image/vm2/" )
        targets=( "/mnt/disks/Disk_1/VMImages/vm1" "/mnt/disks/Disk_1/VMImages/vm2" )
        vmname=( "vm1" "vm2" )
        arraylength=${#sources[@]}

        # Declare backup drives
        deviceuuid=( "6ed5043c-14ee-41f2-903d-d201ec50d39f" )
        devicemount=( "/mnt/disks/Disk_1" )
        devicelength=${#deviceuuid[@]}

        # Mount drives
        for (( ii=1; ii<${devicelength}+1; ii++ )); do
            if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
                echo "${devicemount[$ii-1]}" " - mounted"
            else
                echo "${devicemount[$ii-1]}" " - not mounted"
                mkdir "${devicemount[$ii-1]}"
                echo "${devicemount[$ii-1]}" " - created mount path"
                mount -t xfs UUID="${deviceuuid[$ii-1]}" "${devicemount[$ii-1]}"
                if [ $? -eq 0 ]; then
                    echo "${devicemount[$ii-1]}" " - mount success!"
                else
                    echo "${devicemount[$ii-1]}" " - mount failed!"
                    exit 1
                fi
            fi
        done

        # Handle backup
        for (( ii=1; ii<${arraylength}+1; ii++ )); do
            echo "Starting backup of" "${sources[$ii-1]}" " to " "${targets[$ii-1]}"
            mkdir -p "${targets[$ii-1]}"
            #virsh domblklist "${vmname[$ii-1]}"
            virsh snapshot-create-as --domain "${vmname[$ii-1]}" $(date '+%Y-%m-%d-%H-%M-%S') \
                --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2" \
                --disk-only --atomic
            files=( $(find "${targets[$ii-1]}" -name "*.qcow2") )
            if [ ${#files[@]} -gt 0 ]; then
                echo "Running incremental backup - setting as inplace"
                rsync -ahv --delete --progress --inplace "${sources[$ii-1]}" "${targets[$ii-1]}"
            else
                echo "Running first backup - setting as sparse"
                rsync -ahv --delete --progress --sparse "${sources[$ii-1]}" "${targets[$ii-1]}"
            fi
            virsh blockcommit "${vmname[$ii-1]}" hdc --active --verbose --pivot
            rm "/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2"
            #virsh snapshot-list "${vmname[$ii-1]}"
        done

        # Unmount drives
        for (( ii=1; ii<${devicelength}+1; ii++ )); do
            if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
                fuser -k "${devicemount[$ii-1]}"
                umount "${devicemount[$ii-1]}"
                echo "${devicemount[$ii-1]}" " - unmounted"
                rmdir "${devicemount[$ii-1]}"
                echo "${devicemount[$ii-1]}" " - removed mount path"
            fi
        done

    So, this mounts the drive I specify to a mount point of my choosing, then loops through the VMs as needed. It creates a snapshot for each VM and checks whether that VM has been backed up before; if it hasn't, rsync does a sparse image copy to preserve the space used, rather than copying it at the full image size. If a backup exists, it does an in-place copy, which just updates the sparse image, again keeping the files small.

    This so far seems to work fine: I have a 30GB qcow2 VM that's using 5.6GB, and three backup runs later it's still only 5.6GB, as nothing in the VM ever really changes (although it is running, and there is no noticeable difference to the VM while all this is happening).

    Once the image is copied, the system commits the overlay image back into the base, removes the overlay image, and does the next VM. Once it's done, it unmounts all the drives as needed. This is done so I can actually back up my VMs to multiple drives if needed; I use this for important documents on my array as well.

    So that's all working fine, but there is still that dreaded rsync that copies the entire image over every time, which isn't ideal but is fine while I'm doing local backups.

    I had a thought earlier, but I can't figure out for the life of me if it's even remotely possible. From what I have read, "virsh snapshot-create-as" is capable of creating a snapshot backing chain with multiple layers when the "--reuse-external" parameter is passed. An example is below.

        A brand new image looks like this:   base.img
        After 1 snapshot it looks like this: base.img -> snapshot1
        After 2 snapshots it looks like this: base.img -> snapshot1 -> snapshot2

    So my thought was: what happens if, when we create snapshot1, we copy base.img in our backup script? When we create snapshot2, we copy snapshot1. We then commit snapshot1 back into base.img using blockcommit with "--shallow --keep-relative", which leaves: base.img -> snapshot2. The next backup run then creates snapshot1 based on snapshot2, then commits snapshot2 as above, and loops like this: one run creates snapshot1, the next creates snapshot2, and so on. The file you then transfer would be the "middle" snapshot, which would in essence be the difference between the snapshots, resulting in a much smaller file to copy.

    In my mind that would work up to this point (although I know far too little to say if it's plausible). Now the main issue I can see: in your backup destination you now have a load of snapshot overlays and a base file, but how on earth would you get those overlays to commit back into the main file? The overlay and base files would be aware of entirely different backing chains, so I'm not sure how to maintain this.

    My hope is that this thread may become more of a rambling of ideas to help someone else come up with a good one. At work we use Windows Hyper-V with active replication (so if one server dies, server 2 is only so far behind and can be spun up in its place); I would love to be able to do something similar with my home unRAID boxes and KVM.

    I'm aware this post is long and the topic may be drifting, so perhaps it's worth moving elsewhere? Either way, hopefully the scripts above help someone.

    Regards, Jamie
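To make the alternating-chain idea concrete, here is the rough command sequence I have in mind for one run. This is a sketch only and completely untested: it reuses the vm1/hdc names and overlay directory from the script above, but the snapA/snapB file names, the role swap between runs, and the exact blockcommit flags are my guesses.

```
# Run N: put a fresh overlay (snapA) on top; guest writes now land there.
virsh snapshot-create-as --domain vm1 snapA \
    --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/vm1-snapA.qcow2" \
    --disk-only --atomic --no-metadata

# The previous overlay (snapB) is now a read-only middle layer of the
# chain base.img -> snapB -> snapA, so it should be safe to copy off-box.
# This file is the "difference" and should be small.
rsync -ah "/mnt/user/ArrayVDisks/TempVms/overlays/vm1-snapB.qcow2" \
    "/mnt/disks/Disk_1/VMImages/vm1/"

# Fold the middle layer back into the base, leaving base.img -> snapA.
virsh blockcommit vm1 hdc \
    --top "/mnt/user/ArrayVDisks/TempVms/overlays/vm1-snapB.qcow2" \
    --keep-relative --verbose --wait

# Run N+1 would swap the roles of snapA and snapB.
```

The open question from the post still stands: the copies sitting in the backup destination reference backing files that no longer match, so reassembling them there is the unsolved part.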
  7. bigjme

    [support] Spants - Kerberos.io template

    What trouble is it having, spants? Permissions, or simply getting a mount point in the Docker?
  8. bigjme

    [Resolved] Primary GPU passthrough

    Hmm, I'm not entirely sure then. I followed a fix someone else found, so I don't really know how they worked out what to do. Jamie
  9. bigjme

    [Resolved] Primary GPU passthrough

    It may be something unsupported in the web terminal; try it from an actual SSH connection, as echo should always be available. For mine, I SSH'd in as the root user (same details as the GUI); you may find the web terminal user is different from root. Regards, Jamie
  10. bigjme

    KVM Live Backup qcow2

    Hi all,

    So I know this has been mentioned a million times, but I'm trying to back up my VMs live, without any shutdown or suspension, as part of my daily incremental rsync backup. The machines in question are for CCTV and VoIP systems, so they need to remain online.

    Right now I have this, which does work: it creates a temporary overlay file, allowing the base image to stay unchanged. You should then be able to clone the original file (which in itself should work as an isolated backup) and, once done, merge the overlay back into the main file as a snapshot point on the main system:

        virsh snapshot-create-as --domain "Windows Server 2016" $(date '+%Y-%m-%d-%H-%M-%S') \
            --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/Windows Server 2016.qcow2" \
            --disk-only --atomic
        virsh blockcommit "Windows Server 2016" hdc --active --verbose --pivot
        rm "/mnt/user/ArrayVDisks/TempVms/overlays/Windows Server 2016.qcow2"

    If I then run this command afterwards, I can see the snapshot is created, and I can watch the overlay file be created and then merged as needed:

        virsh snapshot-list "Windows Server 2016"

    So far, so good. But then I try to run it with my rsync command and it doesn't work properly. Below is a full copy of the code:

        sources=( "/mnt/cache/VMImages/Windows Server 2016/" )
        targets=( "/mnt/disks/Disk_1/VMImages/Windows Server 2016" )
        arraylength=${#sources[@]}

        virsh snapshot-create-as --domain "Windows Server 2016" $(date '+%Y-%m-%d-%H-%M-%S') \
            --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/Windows Server 2016.qcow2" \
            --disk-only --atomic

        for (( ii=1; ii<${arraylength}+1; ii++ )); do
            echo "Starting backup of" "${sources[$ii-1]}" " to " "${targets[$ii-1]}"
            mkdir -p "${targets[$ii-1]}" 2>/dev/null

            BACKUPS=30
            END=$((BACKUPS - 1))
            mv "${targets[$ii-1]}"/backup."$BACKUPS" "${targets[$ii-1]}"/backup.tmp 2>/dev/null
            for ((i=END;i>=0;i--)); do
                mv "${targets[$ii-1]}"/backup."$i" "${targets[$ii-1]}"/backup.$(($i + 1)) 2>/dev/null
            done
            mv "${targets[$ii-1]}"/backup.tmp "${targets[$ii-1]}"/backup.0 2>/dev/null

            cp -al "${targets[$ii-1]}"/backup.1/. "${targets[$ii-1]}"/backup.0
            rsync -ahv --delete --progress --exclude "docker.img" --exclude "Program Files" \
                "${sources[$ii-1]}" "${targets[$ii-1]}"/backup.0/
        done

        virsh blockcommit "Windows Server 2016" hdc --active --verbose --pivot
        rm "/mnt/user/ArrayVDisks/TempVms/overlays/Windows Server 2016.qcow2"

    Using the same internal backup loop on a normal set of folders gives me proper incremental backups that only copy file differences. For the VM image, it seems to take a full copy of the base image every time, in this instance creating a 50GB image every day rather than copying over the maybe 2GB of file differences.

    I'm really new to rsync, and I'm new to the KVM virsh command line, so I'm hoping I'm just misunderstanding something and there is an obvious issue. I keep seeing "--no-metadata" mentioned in some online virsh backup guides like the above, but I'm unsure exactly what it does or whether it would fix the issue.

    Help would be hugely appreciated, as I know a lot of people are looking for the same thing. My aim is to have this copy the differentials to an offsite SSH server via rsync, but I need to get it working locally first, as I can't afford to copy 500GB of VM images a night and have them offline during the process.

    As an aside, I'm using qcow2 because these run on an NVMe drive, so I need them to be sparse images that just use what they need.

    Regards, Jamie
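As a sanity check on the rotation itself (separate from the qcow2 question), here is a stripped-down toy version of the loop above using plain files in a made-up /tmp path. It only demonstrates the mechanics: cp -al seeds backup.0 with hard links to backup.1, which is what makes unchanged files cost no extra space before rsync rewrites the changed ones.

```shell
#!/bin/sh
# Toy model of the hard-link rotation above, with plain files so the
# behaviour of `cp -al` is easy to see. All paths are made up.
set -e
target=/tmp/rotation_demo            # stand-in for "${targets[$ii-1]}"
rm -rf "$target"
mkdir -p "$target/backup.0"
echo "v1" > "$target/backup.0/file.txt"

BACKUPS=3
END=$((BACKUPS - 1))
# Shift every backup.N up by one; missing generations are silently skipped
mv "$target/backup.$BACKUPS" "$target/backup.tmp" 2>/dev/null || true
for i in $(seq "$END" -1 0); do
    mv "$target/backup.$i" "$target/backup.$((i + 1))" 2>/dev/null || true
done
mv "$target/backup.tmp" "$target/backup.0" 2>/dev/null || true

# Re-seed backup.0 as hard links to backup.1: an unchanged file costs
# no extra space, and rsync would then rewrite only the changed files
cp -al "$target/backup.1/." "$target/backup.0"
ls "$target"
```

The catch with a qcow2 image is that the single big file always changes between runs, so the hard link is always broken and replaced in full, which is consistent with the 50GB-per-day behaviour described above.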
  11. Thanks johnnie.black, I will write something to use the UUID instead.
  12. bigjme

    unRAID OS version 6.5.3 available

    I'd put this update off for a while, but I've done it today and no issues so far. I had one VM stutter a few times after fresh server reboots, but as it's just the one VM I'm assuming it's a Windows problem, as a reboot fixed it. Other than that, all good so far.
  13. Hi all,

    Really sorry if this has been covered, but there is just so much about this plugin that I'm struggling to find what I need.

    I'm testing something for backups, so I have attached an old USB cradle to my machine. This cradle accepts two HDDs and presents them as a JBOD to the host. Unassigned Devices detects them as in the attached screenshot. The problem is, if I try to change the mount point names (as right now it would try to mount them both in the same place), it changes the path of both devices. Checking the config, it adds this:

        [ICY_BOX_IB-3620_PROLIFICMP000000B86]
        mountpoint.1 = "/mnt/disks/ICY_BOX_ICY_BOX_IB-362"

    So I can't actually give them separate names, as it's keying on the main device name. Does anyone know how to fix this issue or work around it? I'm trying to use rsync and want to keep the permissions, hence using XFS.

    Regards, Jamie
  14. This may help: I overcame this issue recently, and I put the answer in the first question.
  15. bigjme

    [Resolved] Primary GPU passthrough

    OK, so I tried the above on my main card, and on my other card which I currently pass to a VM. They are both 750 Tis, the same make and model, and both had been passed through to a VM in a secondary slot at the time. The exported ROMs both times were a tiny 62KB.

    Safe to say, booting the VM I get no error saying the device is in use, but the VM has no video output at all. Having read through the export, I noticed my GPU was on an older BIOS than the one I fetched from TechPowerUp, so I went and fetched an older version, edited it to remove the jump, and booted the VM.

    I had video output and the Windows startup recovery launched, so I restarted it to boot Windows. Again, like before, the Windows loading screen comes up, and the second Windows starts to initialise the NVIDIA drivers I get the same error:

        2018-05-03T20:11:39.036200Z qemu-system-x86_64: vfio_region_write(0000:04:00.0:region3+0x1088, 0x7ffe11,8) failed: Device or resource busy
        KVM internal error. Suberror: 1
        emulation failure
        RAX=ffffe3fca3011000 RBX=ffffe3fca3011000 RCX=ffffe3fca3011000 RDX=0000000000000000
        RSI=ffff8f8b58f44830 RDI=ffff8f8b58fb1000 RBP=ffff8f8b58efc000 RSP=ffffa30c4d3868f8
        R8 =0000000000001000 R9 =0101010101010101 R10=fffff80a6783c4ac R11=ffffa30c4d3866b0
        R12=ffff8f8b56a72ab0 R13=ffff8f8b58f43010 R14=0000000000000000 R15=0000000000100000
        RIP=fffff80a67abb038 RFL=00010216 [----AP-] CPL=0 II=0 A20=1 SMM=0 HLT=0
        ES =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
        CS =0010 0000000000000000 00000000 00209b00 DPL=0 CS64 [-RA]
        SS =0018 0000000000000000 00000000 00409300 DPL=0 DS   [-WA]
        DS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
        FS =0053 0000000000000000 00017c00 0040f300 DPL=3 DS   [-WA]
        GS =002b ffffd401e8712000 ffffffff 00c0f300 DPL=3 DS   [-WA]
        LDT=0000 0000000000000000 ffffffff 00c00000
        TR =0040 ffffd401e8721000 00000067 00008b00 DPL=0 TSS64-busy
        GDT=     ffffd401e8722fb0 00000057
        IDT=     ffffd401e8720000 00000fff
        CR0=80050033 CR2=ffffe40646de7000 CR3=000000026416e000 CR4=001506f8
        DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
        DR6=00000000ffff0ff0 DR7=0000000000000400
        EFER=0000000000000d01
        Code=66 66 66 66 0f 1f 84 00 00 00 00 00 66 48 0f 6e c2 0f 16 c0 <0f> 11 01 4c 03 c1 48 83 c1 10 48 83 e1 f0 4c 2b c1 4d 8b c8 49 c1 e9 07 74 2f 0f 29 01 0f
        2018-05-03T20:11:57.679635Z qemu-system-x86_64: terminating on signal 15 from pid 12367 (/usr/sbin/libvirtd)
        2018-05-03 20:11:58.880+0000: shutting down, reason=destroyed

    I've double-checked the device is in its own IOMMU group, and it is. I've also checked the GPU is not bound to the vfio-pci driver, and it's not. My next thought is that it's because I'm booting unRAID into GUI mode and that's using something, perhaps?

    -- Edit
    OK, so I just did a fresh reboot with unRAID in console mode and still get exactly the same behaviour.

    -- Edit 2
    OK, so I found this post elsewhere on the forum. It says to run these three lines:

        echo 0 > /sys/class/vtconsole/vtcon0/bind
        echo 0 > /sys/class/vtconsole/vtcon1/bind
        echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

    I've run them, and the VM has started up and the ROM errors have vanished. I'm going to run some tests, and I've added the lines to my user scripts to run on array start-up, to see if that works after a fresh restart.