MrAndyBurns

Members
  • Posts: 13
  • Joined
  • Last visited

MrAndyBurns's Achievements: Noob (1/14)
Reputation: 6

  1. Hi All, I've been enjoying this Docker for a while now, thanks! Running all the usual suspects: MLAT is working and FR24 and ADSB are all happy! I have had a recurring issue where the container will just stop. The logs say "send - failed to send 12 + 88 bytes in mode 0, code -1: Broken pipe" and "*** buffer overflow detected ***: terminated\n","stream":"stdout","time":"2023-07-17T12:16:03.9714032Z". Any suggestions? (A restart-policy sketch follows after this list.) Cheers Andy
  2. Anyhow, now with the NVMe drive passed through to the VM using /dev/disk/by-id/ ... interesting, as I was expecting it to be quicker ... now to see if I can pass the drive through more directly, but I'm not sure it's possible as it's on a Z97 mobo and connected to the PCI bus via a cheap M.2 adapter card. (A by-id passthrough sketch follows after this list.)
  3. Just taking you along with me if you're interested and reading this at some point in the future. I've updated to 6.12 and taken the opportunity to reformat my two-disk SSD pool (WD Blue 500GB) from BTRFS to a ZFS encrypted mirror, and have had some good results. No other changes to the VM setup. (A pool-check sketch follows after this list.)
  4. Hi, I've been looking but I can't find a solution. I'm trying to mount a folder on an off-site Synology. The Synology is live and ports 139 and 445 are forwarded, but there is no response to pings. (I can mount my test Synology to the off-site Syno with no problem ... UD SMB can see the folders when I do a search.) Is there a way to force it to mount and bypass the ping check? (A manual-mount sketch follows after this list.) Thanks Andy
  5. Hi All, I've been loving Unraid for a few years now, with lots of happy Dockers and a few VMs for fun. I have a Windows 10 VM which I use as a daily driver for email, web and general stuff. It's great, but I wonder if I can get it a little more snappy ... I have a couple of WD Blue SSDs set up as a BTRFS pool where the Dockers reside, along with my other VMs and the Win10 60GB Windows vdisk. My question is: is this the best way? The CrystalDiskMark figures are not the best, so I'm wondering if this is the issue. Don't get me wrong, it's working okay, and I know it's not going to be like my standalone gaming PC, but what do you think? (I have space for additional drives including NVMe, and a free PCIe slot.) System spec is an i7-4790K on an ASRock Z97 mobo with 32GB of DDR3 RAM. Windows is lucky enough to have 8GB RAM allocated to it and 4 threads (62% system RAM utilised in day-to-day use; built from lots of old/little bits, 4 HDDs for general storage, a VM for CCTV with an HDD passed through). (A fio benchmark sketch follows after this list.) Thanks for any feedback Andy
  6. Hello All, I too wanted a piece of Pi on my Unraid. After having a good read of this thread, and others, I got confused, so I went for a play. In the end I downloaded the latest "Raspberry Pi Desktop for PC and Mac" ISO from https://www.raspberrypi.com/software/raspberry-pi-desktop/ . I created a new VM using the Debian template with the downloaded ISO as the OS install ISO and created a 15GB drive. I started this VM with the console showing. A boot screen opened, where I selected Install. Don't just leave it at the boot menu: Pi will open and you will get excited, but then you will realise it's just in a boot loop. Work through the install; it's easy to do, and you can see more information on it here: https://projects.raspberrypi.org/en/projects/install-raspberry-pi-desktop/4. Once installed, I stopped the VM and edited the configuration in Unraid to delete the install ISO information so it boots to the 15GB drive. That was it! You start the VM, Debian pops up, but you can just leave it and Pi will open. Set up a user etc., it will restart, and you're at the Pi home page. You can enable SSH etc. as normal. In summary:
     - Download the "Raspberry Pi Desktop for PC and Mac" ISO
     - Create a new Debian VM and set the OS install ISO to the downloaded ISO
     - Start the VM
     - Select the Install option (either one)
     - Stop the VM
     - Edit the VM and remove the OS install ISO information
     - Start the VM and enjoy a piece of Pi
     (A virt-install sketch follows after this list.) Hope this helps ...
  7. So ... what did you do?? I have a 3TB WD Red parity drive and 2x older 3TB standard disks ... I need some more space and have an 8TB WD Green drive I would like to add to the array while saving up for a new 8TB 'NAS'-spec drive (IronWolf or Red). My initial hope is to add the 8TB standard drive to the array and get an extra 3TB of storage due to the size of my parity (3TB parity with 8TB (3TB usable) + 3TB + 3TB = 9TB of space), THEN in a few months add the new 8TB NAS drive as the parity drive, which would then allow me to use all 8TB of the standard drive in the array (8TB parity with 8TB + 3TB + 3TB, and retire the oldest drive). P.S. I say NAS drive for the parity on the principle that the parity does a lot more read/write, so I do not want to use an older standard 8TB shucked drive.
  8. Thank you for posting this ... I spent ages trying to resolve the error "dial unix /var/run/docker.sock: connect: permission denied" from Transmission accessing my Dockers to report and update InfluxDB and Grafana. I have even updated to the RC version of Unraid (which has been a nice bonus) ... anyhow, I rolled Transmission back to 1.20.2 and all is running again. (A tag-pinning sketch follows after this list.) Thanks Gnomuz
  9. Thank you @stor44!! ... after spending ages getting an Xpenology VM to run on the new 6.9.x, I merrily skipped over to my Windows VM and was gutted to find Windows running like it was on an old Atom notebook! It would have taken me ages to discover that the Tips and Tricks performance settings had changed! Thanks for the pointer; the initial testing looks good! (Simple VM, RDC into a Win11 developer build with 2 cores / 4 threads and 8GB allocated, hosted on an i7-4790.)
  10. Great work @HyperV and @ich777 and @ephdisk ... I still needed this fix as I am still having VM problems moving to 6.9.x (Xpenology - kernel is too new) ... but all 24 Dockers are now showing their status correctly.
  11. Thank you HyperV for the fix ... I had the same issue where all were 'not available' after a simple restart ... changed DNS, forced updates, etc., but no win ... this worked a treat. If it helps anybody else: in nano (the editor) you can jump straight to line 457 to add the magic 'i' above by adding the line number to the command; for example, in a terminal window type:
          nano +457 /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
      Thanks once again HyperV
  12. Probably just FYI ... I have been running Xpenology as a VM on my Unraid for about a year. Xpenology was my main NAS on a few bare-metal builds, but I liked the look of Unraid and have been very happy on 6.8.3 (mainly for the spun-down drives and an SSD cache pool for VMs and Dockers). I have set up Xpenology with a virtual disk and changed the virtio network adapter to e1000e, as per many of the forums, and it works well. HOWEVER, I have recently upgraded to 6.9.1 and found that I cannot get Xpenology to run; it appears to panic, so it does not even get a network address (just like the old days). I have to say I didn't spend too long trying to fix it and ended up rolling back to 6.8.3 (no data loss; I just needed to use the backup of my flash disk to get the cache pool back). If anybody has been playing and worked out a way around this, or spotted what's causing the problem, please post. (A NIC-model sketch follows after this list.)
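
Restart-policy sketch (for post 1): not a fix for the broken pipe itself, but a minimal workaround sketch while the crash is being debugged. The container name adsb below is hypothetical; substitute the real one.

    # hypothetical container name; substitute the ADS-B container's actual name
    docker update --restart=unless-stopped adsb
    # follow the logs to catch the "Broken pipe" / buffer overflow messages live
    docker logs -f --tail 100 adsb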
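
By-id passthrough sketch (for post 2): a minimal check of the stable device naming, plus a crude in-VM read test, under the assumption that the drive appears as /dev/vdb inside the VM (hypothetical).

    # on the host: list stable device IDs and pick the NVMe entry
    ls -l /dev/disk/by-id/ | grep -i nvme
    # inside the VM: rough sequential-read check (device node is hypothetical)
    dd if=/dev/vdb of=/dev/null bs=1M count=1024 iflag=direct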
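
Pool-check sketch (for post 3): two read-only commands to confirm the new pool really is an encrypted mirror; the pool name ssdpool is hypothetical.

    # topology should show a single mirror vdev of the two SSDs
    zpool status ssdpool
    # encryption and key status per dataset
    zfs get encryption,keystatus ssdpool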
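
Manual-mount sketch (for post 4): Unassigned Devices' ping check can be sidestepped by mounting the share by hand from a terminal; the hostname, share and credentials below are hypothetical, and vers may need adjusting for the Synology.

    mkdir -p /mnt/remotes/offsite
    # a direct CIFS mount involves no ping check
    mount -t cifs //offsite.example.com/backup /mnt/remotes/offsite \
        -o username=andy,password=secret,vers=3.0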
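
fio benchmark sketch (for post 5): a repeatable 4K random read/write test to compare the BTRFS cache pool against any NVMe candidate; the test directory is hypothetical.

    mkdir -p /mnt/cache/fio-test
    fio --name=vmtest --directory=/mnt/cache/fio-test --size=1G \
        --rw=randrw --bs=4k --iodepth=16 --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting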
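
virt-install sketch (for post 6): the same VM can be created from the command line instead of the Unraid template; the ISO path, sizes and names are all hypothetical, and the Unraid web UI remains the usual route.

    virt-install \
      --name pi-desktop \
      --memory 2048 --vcpus 2 \
      --cdrom /mnt/user/isos/rpd_x86_latest.iso \
      --disk size=15 \
      --os-variant debian10 \
      --graphics vnc
    # once the installer has finished, eject the ISO so the VM boots from the 15GB disk
    # (the CD-ROM target name sda is hypothetical):
    #   virsh change-media pi-desktop sda --eject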
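
Tag-pinning sketch (for post 8): the rollback amounts to pinning an explicit image tag instead of latest; the repository name below is hypothetical, with only the 1.20.2 version taken from the post.

    # in the Unraid template, change the Repository field from, e.g.,
    #   somerepo/transmission:latest  to  somerepo/transmission:1.20.2
    # the terminal equivalent:
    docker pull somerepo/transmission:1.20.2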
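
NIC-model sketch (for post 12): the virtio-to-e1000e change described in the post, as it appears in the VM's libvirt XML; the VM name is hypothetical.

    virsh edit Xpenology    # hypothetical VM name; opens the XML in an editor
    # in the <interface> block, change the model line:
    #   <model type='virtio'/>   ->   <model type='e1000e'/>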