poshmick907ak

Members
  • Posts: 7
  • Joined
  • Last visited

Community Answers

  1. I'm having this same issue and tried posting in that thread, but couldn't, so I figured I'd post in a separate bug report thread since I don't see one posted for this issue: I upgraded to 6.12.8 and the Web UI became almost unusable with how slow it was. After finding the thread above, I was able to confirm I was in the same boat, with special characters in both the Description and Model fields for my server. After emptying those fields out, the Web UI became snappy and operated normally again. The special characters used included: ' - û Just to clarify, I am NOT running the Connect plugin. The creator of the original thread referenced above had theorized the Connect plugin was behind this, but like him, I was able to confirm it was simply the presence of special characters in these fields.
  2. OK, so making some progress on this: I read you should be able to override the DNS for a container by specifying "--dns X.X.X.X" as an extra parameter (passing your DNS server's IP) as part of the container setup, but I couldn't see where to enter parameters. Turns out I just had to enable Advanced View. Anyway, I tried entering "--dns X.X.X.X" with the IP of my DNS server in the Extra Parameters field, but the container still shows 127.0.0.11 as the DNS server on boot. Am I doing that right? I tested this with a second container and saw the same behavior.
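     For reference, my understanding of what Unraid does with the Extra Parameters field, sketched as the equivalent docker run command (the IPs, names, image, and network below are placeholders, not my exact setup):

     ```shell
     # Unraid appends Extra Parameters to the "docker run" it generates, so
     # "--dns 192.168.1.53" (placeholder IP) should end up roughly like this:
     docker run -d --name some-container --network br0.20 --ip 192.168.20.10 \
       --dns 192.168.1.53 some/image

     # What the container actually sees can then be checked with:
     docker exec some-container cat /etc/resolv.conf
     ```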
  3. I've configured custom bridge interfaces in my Docker settings that correspond to the local VLANs I want my Docker containers to live on. When installing a new container, selecting one of these VLAN-specific custom bridge interfaces requires me to set a static IP for the container (using IPvlan). All the containers I've set up install just fine this way and have full network functionality on their designated VLANs as intended. The only thing I can't figure out is how to set the DNS server configuration for my Docker containers. I have a local AdGuard Home instance running as a Docker container on this Unraid server that I'd like to be the primary DNS server for all other Docker containers as well as everything else on my network. The only exception is the Unraid server itself; I set that to use 9.9.9.9 since it will be booting prior to the AdGuard Docker container. Since I set Unraid to use 9.9.9.9 as its DNS, is it passing this through to Docker behind the scenes, so all installed Docker containers use 9.9.9.9 as a result? When I run "cat /etc/resolv.conf" through the terminal for one of my local containers (tested on the linuxserver build of Radarr in this example), it returns that the container is using 127.0.0.11 for its DNS server. Where can I override this to use my AdGuard IP? I couldn't find anything under Docker settings, and none of the additional parameters I tried passing to the container during setup honored the IP (I read you should be able to specify this using a "DNS" parameter in the container setup, but it didn't work).
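     For anyone wanting to reproduce the check, this is roughly how I looked at a container's DNS from the Unraid terminal (the container name is just my example; my understanding is that 127.0.0.11 is Docker's embedded stub resolver, which forwards queries to a real upstream server):

     ```shell
     # Show the resolver the container itself uses:
     docker exec radarr cat /etc/resolv.conf
     # In my case this prints "nameserver 127.0.0.11".

     # Show any explicit per-container DNS override, if one was set:
     docker inspect -f '{{.HostConfig.Dns}}' radarr
     ```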
  4. Thanks @trurl, that confirms my assumption. Looks like I'm stuck at the slower transfer rate for the time being unless I reconfigure the structure of my array (not something I really want to do right now). I'll just have to be patient, then.
  5. New user to Unraid with LOTS of data to move from my workstation to an Unraid server soon. I'm almost done reconfiguring my network so both the server and workstation are connected via 10Gb networking. My understanding is that when I copy a directory with lots of content over SMB to the Unraid server using robocopy or even just Windows Explorer, that process is going to move files sequentially. On the receiving end, Unraid will write those files sequentially as it receives them to the appropriate drive as determined by the high-water setting. Unlike a RAID 5 array that would split the incoming stream out among drives, Unraid will only ever write this incoming stream of content to one drive at a time, thus making the drive write speed, not my NIC speed, the bottleneck, correct? If I run parallel copy processes on my workstation, each moving a different portion of my data to the Unraid server at the same time, would Unraid potentially handle each incoming stream independently in a way that would allow writing to multiple disks simultaneously rather than one at a time, letting me take better advantage of my 10Gb connection?
  6. So, reading a little more into this, I think I have clarity on Macvlan vs. IPvlan: the difference being that each host gets a unique virtualized MAC with Macvlan vs. having to share one with IPvlan. I think the only consequences of IPvlan would be that I couldn't have hosts get IPs via DHCP, and maybe some layer 2 discovery protocols might be borked, but neither should be an issue in my use case, so I think I'll take the plunge and just switch to IPvlan.
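     If it helps anyone following along, here's my rough understanding of what an IPvlan-backed custom network looks like under the hood, written as plain Docker commands (the VLAN ID, subnet, interface, and names are placeholders; Unraid actually creates these networks from Settings > Docker rather than by hand):

     ```shell
     # IPvlan network on a VLAN sub-interface; every container shares the
     # parent interface's MAC, which is why per-container DHCP isn't possible.
     docker network create -d ipvlan \
       --subnet 192.168.20.0/24 --gateway 192.168.20.1 \
       -o parent=br0.20 vlan20

     # So each container joins with a static IP instead:
     docker run -d --name some-container --network vlan20 \
       --ip 192.168.20.10 some/image
     ```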
  7. So I'm new to Unraid and am trying to set up my server so that the single network interface is a trunk. I have configured my NIC in Unraid settings, so Unraid's management is working fine on the default native VLAN. I've also defined the additional VLANs that will be used by my Docker containers. In Docker settings I enabled Macvlan (I don't really know the difference from IPvlan, but went with Macvlan since I used that to expose my Docker containers on my Synology to the local network) and then defined the custom network interfaces for each VLAN I had defined in Unraid network settings. When setting up the Docker containers that needed to live on specific VLANs, I was able to assign them to the VLAN-specific custom Docker interfaces for those VLANs, and they worked great on those networks as I intended... until they didn't. I've had my server lock up and become non-responsive twice now (neither Unraid management nor any of the Docker containers responding to pings). I noticed that the Fix Common Problems plugin flagged me for "Macvlan and Bridging found". Looking online, I've found some references to problems with Macvlan and possible system stability. What's the proper way to fully extend my network to Docker containers? I'd like to have them operate as though they were directly on my network and receive their own individual IP addresses, not just be bridged behind the Unraid IP on different ports (some of them won't even be on the same VLAN as the Unraid management NIC). Do I need to add a second NIC to the server, so one carries un-tagged native VLAN traffic just for the Unraid management IP, and a separate NIC is dedicated to Docker with trunking for the various VLANs used by the containers?
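     For context, here's roughly what I believe Unraid builds behind the scenes for one of those VLAN-specific Macvlan interfaces, as a plain Docker command (the VLAN ID, subnet, and names are placeholders from my own setup, not anything Unraid-specific):

     ```shell
     # Macvlan network on a VLAN sub-interface of the trunked NIC; each
     # container gets its own virtual MAC and its own IP on VLAN 30.
     docker network create -d macvlan \
       --subnet 192.168.30.0/24 --gateway 192.168.30.1 \
       -o parent=br0.30 vlan30
     ```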