jmbrnt

Everything posted by jmbrnt

  1. Hi @eds I have just had a look at this issue. The XML for my test CentOS 8 VM I just fired up (on Unraid 6.8.1)...

        <interface type='bridge'>
          <mac address='52:54:00:e0:a3:3c'/>
          <source bridge='br0'/>
          <target dev='vnet1'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>

     That is the default, generated by Unraid on my behalf (I selected br0 as the NIC to pass to the VM, as per usual). As itimpi said, there are a variety of NIC models you can use in KVM/QEMU. I have tested the following as working: e1000, virtio, vmxnet3. You just edit the model type='X' parameter above. In my experience, e1000 has the best compatibility and vmxnet3 has the best performance (only matters at 10G and above, again in my experience). Are you able to try making that change and see how it goes?

     Edit - note, it's worth mentioning my server is a Dell R730, using 1G Broadcom-based NICs. I am using the tg3 driver (as seen under Settings > Network Settings > Interface Rules). It might be worth checking which driver you have. I know a chap with bnx2 drivers and he has trouble with CentOS VMs too... Upgrading the variant of driver your NIC uses might be a solution in itself.
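     For example, swapping that VM over to e1000 is a one-line change in the interface stanza above (the MAC and PCI address stay whatever Unraid generated for you):

        <model type='e1000'/>

     Save the XML and restart the VM for the new model to take effect.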
  2. You beaut, that seems to have solved it. Thanks very much.
  3. I just changed it to point at /mnt/disk1, and Docker is all go again. Should I just wipe the cache out and re-add it, copying the stuff from /mnt/disk1 back to the cache? The logic here is that I did set it to /mnt/cache and it still didn't work.
  4. OK - I thought they were on the cache... I'm still not sure why anything changed though; it was all working before I fixed the XFS. I'll give moving to /mnt/disk1 a shot and see..
  5. For example:

        root@unraid-2:~# find /mnt -name docker.img
        /mnt/user/system/docker/docker.img
        /mnt/user0/system/docker/docker.img
        /mnt/cache/system/docker/docker.img
        /mnt/disk1/system/docker/docker.img

     With this in mind I went to the Docker settings, changed the location of appdata and docker.img to the 'cache' variant, but no dice - still no containers showing up when re-enabling Docker. Do I need to do some other reconfig? Thanks again
  6. Cool - and thanks for the quick reply - but just to clarify: what am I looking for a duplicate copy of? I can see, for example, that there is a copy of docker.img on both /mnt/cache and /mnt/user, but to be fair I am quite in the dark when it comes to the innards of Unraid
  7. After a migration to a new server, I had to run xfs_repair to see my shares which had disappeared. Once I did that, the shares returned (hooray) - but all my Docker images and VMs have disappeared. I think this is due to them living on the cache drives (BTRFS).. But whatever the reason, I'm stuck and can't get my VMs etc to show up. Attached the diagnostics. Thanks unraid-2-diagnostics-20191015-1028.zip
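     For completeness, the repair step I ran was roughly the below, with the array started in Maintenance mode. /dev/md1 here stands in for whichever array disk it was - I'm quoting the general form rather than my exact command:

        xfs_repair -v /dev/md1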
  8. Looks dead from the iDRAC. Welp - ran the new config, re-assigned drives and am rebuilding parity.. Now to save up for a new 5TB 2.5" drive :E Thanks for the help itimpi
  9. Thanks.. After a reboot... The parity disk has disappeared completely. Dead as a dinosaur probably.. I have another disk in the array that's the same size as the failed parity drive, currently empty. I will try re-assigning that as parity for the time being.
  10. Just logged in and my poor old parity drive has shut itself off, marked as faulty. It is showing a hilariously high number of reads/writes (140 trillion) and some 900 errors. Can't run a SMART test as the disk is dead (the report just shows "Smartctl open device: /dev/sdb failed: No such device")... I guess I'm boned and have to buy a new drive to replace this ASAP - but is anything else worth a shot? unraid-2-diagnostics-20190429-2016.zip
  11. The port settings in the delugevpn container are automatically set by binhex's code (check the supervisord log to see the magic happening) - you don't need to set them. Incoming port is the important one from a port forwarding perspective; you're likely to see this change when you restart the container. Outgoing doesn't matter so much (for some reason mine has just jumped from seeding around 0.1Kbit/s to 250Kbit/s, so I have no idea what's going on).
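      If memory serves, that log sits in the container's appdata share, so you can watch it from the Unraid shell with something like the below - the path assumes the default appdata layout:

        tail -f /mnt/user/appdata/binhex-delugevpn/supervisord.log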
  12. Yeah sorry, as I said that didn't work. You can probably spend time getting wget working in the container - but it might not be worth it. It ~might~ be a different type of encryption being used in the application versus the .ovpn file that the deluge container is using - binhex has a help thread where he mentions selecting a different algorithm if it's supported by the VPN endpoint, but I've not got any experience with that. When you change location (or even reload the container) you will get a new port and likely a new IP. Binhex is very clever and his container automatically negotiates and configures this port inside deluge for you (as long as you're using PIA) - so you don't need to touch it.
  13. Yeah that's kind of what I was thinking, if the performance of a raw HTTP download is just as crappy as the torrent, then the problem is narrowed down to the VPN, or as you say, the hub. L2TP tunnel to AA.net.uk for a tenner a month?
  14. One thing to try would be downloading a big binary file from a well hosted location, like an Ubuntu ISO (not a torrent). This one is hosted at Oxford uni, so should be pretty fast: http://mirror.ox.ac.uk/sites/releases.ubuntu.com/releases/bionic/ubuntu-18.04.2-desktop-amd64.iso I would wget it from inside binhex's container and check the speed of the PIA tunnel against something that _should_ blast on Virgin Media. You'll need to add wget first:

        pacman -S wget

      then, from inside the container's shell:

        wget http://mirror.ox.ac.uk/sites/releases.ubuntu.com/releases/bionic/ubuntu-18.04.2-desktop-amd64.iso

      ---- I just tried it and installing wget failed.. You could use a browser and the Privoxy function of the container, or the PIA application on your desktop etc. to do the same test.
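      Alternatively - and this is an assumption, I haven't checked the image - curl may already be present in the container, in which case no install is needed and it prints the average download speed as it goes:

        curl -o /dev/null http://mirror.ox.ac.uk/sites/releases.ubuntu.com/releases/bionic/ubuntu-18.04.2-desktop-amd64.iso

      The -o /dev/null just throws the file away, so you're purely measuring the tunnel.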
  15. So you're in modem mode, passing through to what?
  16. I have to say Sweden seems better than Switzerland, but my uploads are still messed up and my ratios are all taking a major hit compared to "no VPN". Nord was simple to set up, but as it doesn't support port forwarding at all you might find it not so great for uploads. Downloads, however, are fine. I have used both, but honestly I think PIA was a waste of money and I'll just risk it with no VPN.
  17. I seem to have absolutely terrible performance with PIA in Sweden/Switzerland. Rubbish compared to NordVPN for downloads, and despite ports being forwarded correctly (as reported by the tracker) - still just crap upload too. Miserable would describe it. FWIW I used a Virgin Superhub 3 as a passthrough device (PPPoE) to a Mikrotik router that cost about £100 and it absolutely flew - line rate on anything I liked, as long as I wasn't using PIA (obviously their infrastructure comes into play there).
  18. Hi I have a Debian 9.4 VM running (for Owncloud/some other stuff) and it by and large works perfectly. I have an Unraid share called 'owncloud' which is mounted at root with the tag oc-data. This shows up as I would expect as /oc-data and I can happily use the system.

      However, about once a week the permissions/ownership for the directory get reset to 99/nobody and 777... This isn't ideal, as it stops Owncloud from running (and has the obvious security flaw of a 777'd directory holding all my Owncloud stuff). I can't see a way to stop this from happening. I can cron the required commands to fix the issue (sketched below), but that leaves me with less than ideal uptime. What should I do to leave these permissions alone once set? Thanks
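      For reference, the stopgap cron I mean is along these lines, in /etc/crontab on the VM - a sketch only; the www-data user and the 750 mode are assumptions based on a typical Owncloud install:

        # re-assert ownership and permissions on the share every 5 minutes
        */5 * * * * root chown -R www-data:www-data /oc-data && chmod -R 750 /oc-data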
  19. Magic, and just like that the problem seems to be (slowly) resolving on the tracker. Thanks for the support, I'm making a donation
  20. OK cool - I was still having troubles with the tracker saying the port was actually open in that mode so I wasn't sure. With that out of the way - your container *should* just do the rest for me, right? 😉 Thanks again!
  21. Hi - I have a bit of a curly problem - apologies if it's been solved here before; I have searched and searched but have failed. I have a tracker that I connect to with binhex-delugevpn (and it's great!) - but this particular tracker only allows 4 IP addresses to connect, and changing IP requires a login from that IP over HTTPS (sigh), which I can do with Lynx from inside the binhex-delugevpn shell (after installing lynx).

      PIA allows for port forwarding from a set of their servers, but unfortunately they have 84 servers behind the DNS name for X.privateinternetaccess.com (where X is the port-forwarding enabled server(s)). When I restart the deluge container, I get a new IP - which breaks my tracker - not ideal! I have changed the .ovpn file I use so that "remote" points to a particular IP, rather than a DNS name, but this throws up errors in the supervisord.log file (saying I'm not using a recognised port-forwarding server). I then added a static entry in the deluge container's /etc/hosts file, pointing X.privateinternetaccess.com to a specific IP, which my tracker would accept - however this seems not to be as static as I had hoped - Arch seems to overwrite the IP with one that it resolves when the container restarts.

      Is there a nice way to keep using the port-forwarding PIA server, but only a specific IP? Thanks
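      One avenue I'm wondering about, noted here in case it helps anyone: Docker's own --add-host flag writes a static /etc/hosts entry at every container start, and in Unraid it can go in the container's 'Extra Parameters' field, e.g.:

        --add-host=X.privateinternetaccess.com:10.0.0.1

      (The IP there is a placeholder - you'd use whichever PIA address the tracker has whitelisted. Whether binhex's startup scripts leave that entry alone, I haven't verified.)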
  22. Thanks very much - this tricked me too. I was used to finding the XML editor by clicking on the VM and selecting it. For those who wonder: you now click the VM, hit 'Edit' and then toggle the XML view in the top right corner (where Basic/Advanced used to be). Confusing.
  23. Sorry for thread necromancy, but I have also done a successful (and very well performing, it seems) Docker instance of Terraria. I used ryshe's version - https://hub.docker.com/r/ryshe/terraria/ I added this Docker manually, with no template. It installed fine, I added a couple of things (such as a share for the /world) and ran it. To do this (Unraid 6.5.2):
      - Go to your Docker tab in the Unraid GUI and click 'Add Container'
      - Instead of selecting a template, leave it saying 'select a template'
      - Fill in the name. This is the name of your Docker container; you'll need it later, so keep it simple
      - In 'Repository' add "ryshe/terraria" (without the quotes). This tells Docker to pull down the build from ryshe's Docker Hub page
      - Leave network type as bridge
      - Click "Add another port/path/variable/label or device" and add the following 2 items: a port for UDP 7777 (host and container the same) + a port for TCP 7777 (host and container the same)
      - Add a path called 'world', container path = /world, host path = /mnt/user/appdata/terraria/world
      - On your first run of the server, you need an 'Extra Parameter' of "-dit", which is explained below

      What took me a long time to understand was the 'interactive' part, which is essentially hidden from us when using the Unraid GUI to launch containers. The way to make sure this option fires at all is to add "-dit" to the 'Extra Parameters' field. Once it was running (via the GUI, i.e. I clicked Apply and it ran) I went into my Unraid box's CLI, entered 'docker ps', found my Terraria container (called Terraria - the name I mentioned above) and entered 'docker attach Terraria'. This put me into that interactive shell, where I was able to generate a world - it starts to feel good at this point.

      To make the world persistent (and tolerant of reboots/container restarts), remove the '-dit' from Extra Parameters and replace it with "-world /world/your_world_name.wld" as a 'Post Argument'. The exact filename is up to you when you create the world, and you can find it by SSH'ing onto your Unraid server and cd'ing into /mnt/user/appdata/terraria/world - any worlds you create will be here.

      Overall, it works well and I'm happy this thread exists in the first place
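      For the curious, the GUI settings above boil down to roughly this docker run - a sketch matching the ports and path I described, not anything official from ryshe. First run (interactive, for world generation):

        docker run -dit --name Terraria \
          -p 7777:7777 -p 7777:7777/udp \
          -v /mnt/user/appdata/terraria/world:/world \
          ryshe/terraria

      and once your world exists, the Post Argument form appends the -world flag after the image name:

        docker run -d --name Terraria \
          -p 7777:7777 -p 7777:7777/udp \
          -v /mnt/user/appdata/terraria/world:/world \
          ryshe/terraria -world /world/your_world_name.wld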
  24. I just had a thought - would it be best to fire up the new server without a cache pool, move everything over and then finally move the SSDs, turning on cache at that point?
  25. Hi I have a nice Micro-ITX server running Unraid 6; it has been going happy as anything for nearly 2 years. However, this is really my gaming rig (<3 PCIe passthrough) - and I have bought a Dell server to push all my Unraid stuff onto. My current setup: 2x 3TB SATA drives, 2x 256GB SATA SSDs (cache), 1x NVMe drive (the Windows VM uses this via passthrough). The new server can take the SSDs, and has some 2.5" 10k SAS drives for storage. What I want to do is stop using the cache pool on my current machine, so that I can recover the SSDs and move them onto the new server. Then I can have it up and running Unraid, and simply migrate my non-gaming VMs across. Is this possible in a sensible way? Thanks