Everything posted by ken-ji

  1. It should be like this. You want the modules loaded (and permissions changed) before the array is started (which will then start the Docker containers and VMs):

     #!/bin/bash
     # enable iGPU for docker use
     /sbin/modprobe i915
     chmod -R 0777 /dev/dri
     # Start the Management Utility
     /usr/local/sbin/emhttp &

     These are discussed in the various Plex/Emby support threads.
  2. Containers that are in bridge network mode are connected to an internal bridge that cannot be accessed from outside (IPv4 or IPv6) unless ports are forwarded, and you cannot forward ports through Unraid to an IPv6 address unless Unraid itself is using IPv6. You need to put the container on a custom Docker network, which will be exposed to the LAN (as a first-class member of the LAN, responding to ARP etc.) and which would allow it to get an IPv6 address that the router can reach and forward packets to. I don't think Docker works with SLAAC, but the documentation points to making sure the Docker daemon (or the network, by extension) has an IPv6 prefix assigned to hand out addresses from; otherwise only link-local addresses get assigned.
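     For reference, on a stock Docker install that daemon-level IPv6 prefix goes in /etc/docker/daemon.json (Unraid manages the daemon settings itself, so the exact mechanism may differ there); a minimal sketch, using the documentation prefix as a stand-in for your real /64:

     {
       "ipv6": true,
       "fixed-cidr-v6": "2001:db8:1::/64"
     }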
  3. Disclaimer: I don't have IPv6, so my comments are how I would solve it (I'm probably missing some key info since we don't have IPv6 here).
     Do you have a /64 assigned to you by your ISP? Does your router allow you to route the /64 into your LAN? If not, you'll need to look into NAT6 (yuck).
     This requires you to assign the containers their own IPv4 and IPv6 addresses, not shared with Unraid (the IPv4 only, of course, as Unraid doesn't have IPv6).
     Make sure the Docker network (eth0/br0) has IPv4 and IPv6 enabled - you'll need to stop the Docker engine and the array to make these changes.
     Assign the Docker network the IPv6 /64 (and the necessary IP ranges), then restart the Docker engine.
     Modify the container to use the custom Docker network.
     Your containers should now have an IPv6 address. A command-line sketch of such a network is below.
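     If you want the manual equivalent of that custom network, something along these lines should work (the subnets, parent interface, and network name are placeholders for this example; Unraid normally creates these networks for you from the network settings page):

     docker network create -d macvlan \
       --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
       --ipv6 --subnet=2001:db8:1::/64 \
       -o parent=br0 br0v6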
  4. Running a Mikrotik hEX Router https://mikrotik.com/product/RB750Gr3 It's quite a learning curve for people coming from "point-n-click routers", but it should be fairly straightforward for most technical users. What I really like about it is the QoS capability (quite a challenge) and the support for VPN options (though it's still missing OpenVPN in UDP mode). There are still some rough spots, like the built-in DNS server only supporting A/AAAA records (but it has regex matching). It also has built-in AP management (the APs need to be Mikrotik though), so new APs just need to be plugged into the network and told to look for the head unit. The main feature I've loved about it, until my ISP started placing users on CGNAT, is how easy it is to create a site-to-site VPN between routers: just plug in the public IP on both ends and you are done.
  5. Been an Unraid user for 4+ years and counting. Convinced my brother to have one at his house to manage his stuff using old hand-me-down parts, without real issues (save for being unable to upgrade automatically to the latest versions with only 2GB of RAM). Never had major issues or surprise gotchas. Still have an unused license from the old Pro two-packs.
  6. They look like man pages, but I have no idea why they would be in the root directory.
  7. How many IP addresses does your Unraid server have? And how are your PCs on the 192.168.5.x network reaching Unraid? Do they access it directly, or is there another IP not mentioned here? As a quick general point: an OpenVPN-AS container can share an IP with the host (bridged or host network mode), so the router can just port forward those ports. However, if the ports you want to use are already in use (80 and 443 come to mind) or the app dynamically opens ports (thus needing its own IP), a single NIC and a switch without VLAN support will give you containers running on their own IPs, but they will be blocked from talking to the host.
  8. Your biggest mistake is assigning 8 IPs to Unraid on the same physical network. This will make networking behave in ways you will not predict or understand. What you probably want here is to have just two bridges, with only one of them holding an IP (10.23.0.11/24 - gateway 10.23.0.1). Put eth0, eth1, eth2, and eth3 together, bonded and bridged as br0, and assign the desired IP there. Then put eth4, eth5, eth6, and eth7 together, bonded and bridged as br4 (I think that is the correct name, else it would be br1). Configure the Docker network pool to custom, delete the default one on br0, and create one for 10.23.0.0/24 (or smaller) on br4. Point your containers to this network, and link your VMs to either bridge. That will simplify your life and make your network easy enough to understand: Unraid is reachable via the first bond/bridge, Docker containers live on the second bond/bridge, and VMs are on whichever bridge they are connected to. A command-line sketch of the br4 Docker network is below.
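     If you prefer to create that br4 network from the command line instead of the Docker settings page, a rough sketch would be the following (the network name and IP range here are placeholders, not something taken from your config):

     docker network create -d macvlan \
       --subnet=10.23.0.0/24 --gateway=10.23.0.1 \
       --ip-range=10.23.0.128/25 \
       -o parent=br4 br4net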
  9. I'm considering dropping this Dropbox image, given that I'm personally moving away from Dropbox because of their 3-device limit policy. I'm experimenting with rclone and checking how I can fit it into my workflow. That said, I'd like to look into a way to automate Dropbox plus a fixed-size loopback image for the Dropbox data directory; a rough sketch of the loopback part is below.
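     A minimal sketch of the fixed-size loopback image idea, assuming example sizes and paths (these are not what the container currently uses):

     # create a 50G image file on the array and format it
     truncate -s 50G /mnt/user/appdata/dropbox/dropbox.img
     mkfs.ext4 -F /mnt/user/appdata/dropbox/dropbox.img
     # loop-mount it where the Dropbox data directory should live
     mkdir -p /mnt/dropbox-data
     mount -o loop /mnt/user/appdata/dropbox/dropbox.img /mnt/dropbox-data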
  10. Well, if the additional disk for your VM will only keep Dropbox files, you can happily store the additional disk image on the array. There won't be too big an impact on your VM; only saving a file would be affected by the somewhat slow array write speeds. Otherwise, you might want to look at using rclone as a tool for syncing local files to Dropbox. rclone is a CLI tool, though, and it does have some realtime support for syncing directories.
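      For example, after setting up a Dropbox remote with rclone config (the remote name and paths here are just placeholders):

      # one-way sync of a local share up to Dropbox
      rclone sync /mnt/user/documents dropbox:backup/documents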
  11. @joesstuff Has your home network ever supported more than one device? Some brain-dead ISP tech teams in my neck of the woods actually think you should only have one device connected to the internet, and thus limit their router by default to hand out only one DHCP address - go figure. EDIT: Oops, you did mention other devices... One option is to use the settings from the DHCP server on the router and work out a static IP you can assign to the Unraid server.
  12. If you cannot access a certain site while the VM is on br0, can you get all the important network info from the VM's point of view?

      ipconfig / ip addr
      route
      ping <site that you can't reach>
  13. Hey, /mnt/user is a special filesystem containing the aggregated data from all the disks, so if you have 2 disks + cache, /mnt/user/appdata is the union of all the appdata directories (/mnt/disk1/appdata, /mnt/disk2/appdata, /mnt/cache/appdata) that exist. So this is perfectly normal and correct behaviour, and kudos to you for asking here before trying anything crazy.
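      You can see the aggregation for yourself from the console (directory names here are just an example):

      # the per-device copies...
      ls /mnt/disk1/appdata /mnt/disk2/appdata /mnt/cache/appdata
      # ...all show up merged into one view here
      ls /mnt/user/appdata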
  14. Oh, then we do have a similar config, just that my 2nd bridge does not have an IPv4 address to properly allow containers on the same VLAN as Unraid to talk to Unraid. Interesting that we do get the 1Gbps limit then on a Windows VM talking to Unraid on the same bridge. But unplugging a network should not have any effect. Also, my Linux test was actually RAM disk (Alpine Linux ISO) to RAM disk (Unraid /tmp).
  15. Can you elaborate on your two-bridge setup? I don't quite get what you meant by intra-bridge transfer between your VM and an Unraid share. I did some testing on my setup, and it seems the max I can get out of a Linux VM (with a VirtIO network) doing SCP against Unraid is 180MB/s. A Windows VM via SMB shares maxes out at 110MB/s, but these are all on the same bridge (my containers all live on a different bridge and VLAN).
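      If you want to reproduce that kind of SCP test, something along these lines works (the file size and hostname are placeholders; copying RAM disk to RAM disk keeps storage speed out of the measurement):

      # create a 4G test file in RAM so disk speed doesn't skew the result
      dd if=/dev/zero of=/dev/shm/testfile bs=1M count=4096
      # copy it to the Unraid RAM disk and note the rate scp reports
      scp /dev/shm/testfile root@tower:/tmp/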
  16. Create a user, and use that user to connect to the share. The root account is not allowed to connect to shares over the network, unless the share is in Guest mode
  17. Except the NVidia build is not a Limetech build but a community addon. Maybe if the guys added traps to the Unraid OS update mechanism as part of their plugin... but until then, user beware.
  18. Quick peek at your diags shows your network config is bonkered: you have 3 network interfaces with the same IP address and gateway assigned to all of them. In this case, nobody can expect the networking to be functional, as the routing table makes no sense. Your network config also declares an eth4/br4 - but that interface doesn't exist. Quick fix: shut down Unraid and pull the USB. Edit the file config/network.cfg:

      # Generated settings:
      IFNAME[0]="eth0"
      PROTOCOL[0]="ipv4"
      USE_DHCP[0]="no"
      IPADDR[0]="192.168.1.22"
      NETMASK[0]="255.255.255.0"
      GATEWAY[0]="192.168.1.1"
      DNS_SERVER1="192.168.1.1"
      USE_DHCP6[0]="yes"
      DHCP6_KEEPRESOLV="no"
      MTU[0]="1500"
      SYSNICS="1"

      This should also be doable in the GUI with GUI boot mode using the local browser, but I've never used GUI boot mode.
  19. @Aderalia You might want to make sure that the HP SAS RAID controller can work in true JBOD mode (or that the RAID controller can be set/flashed to IT mode). I'm not an expert, but RAID controllers can give you issues down the line when they die, as the JBOD/array might not be accessible without the same RAID controller/family.
  20. Would like to point out I'm not encountering issues with HW transcoding on my Pentium G4620 iGPU, but I'm using the official Emby docker.
  21. Upgraded from 6.7.0 remotely without issue.
  22. You can edit the first post and change the topic title to tag it as solved. That said, you didn't really provide much info, but were both your interfaces configured with a gateway? That is one of the causes of this behavior.
  23. The big difference I can see is that QNAP NASes are hardware with defined and clear-cut specs with regard to things like the number of network interfaces and the possible roles of those interfaces. Also, I think most commercial NAS units have a small reset button in case you bugger up the network config. We don't have that in Unraid, which leads to the WebUI having to figure out just what is on the machine and work with whatever it gets from the OS, and if it makes a mistake, there's no magic reset-network-config button. You either reconfigure locally (console or GUI mode), or you unplug the USB, pop it into another PC, and fix/reset the config files.

      Unraid has to make some general assumptions that will work for everybody out of the box, hence the initial bonding of all network interfaces together at startup, so that the network works out of the box regardless of how many interfaces there are, which one is eth0, and whether that interface has an actual cable plugged in. Unraid has no idea which of your on-board or add-on NICs will become eth0, and neither would you, unless you've already used the PC/server with a similar-generation Linux OS. The fact that it works for a lot of people out of the box, or with minimal difficulty, is already a great step forward, and it will probably get better, but let's face it, we are not there yet. Last I checked, the similar competition, FreeNAS or OMV, doesn't have a sparkling network configuration UI either.

      One should also remember that QNAP spends a lot on development and support, and makes the user pay for it by being way more expensive than a similarly specced NAS built out of off-the-shelf consumer or even server parts. I'm not knocking the way you think, just pointing out that features tend to cost money due to development and testing, and you would have been shocked at how we got to the current networking configuration over the last 3-4 years - we only got custom Docker IPs around 2 years ago, and we had back and forth between the community and the devs on which way to go and how to get there. The current system is amazing and flexible, though not super user friendly.