Xaero

Members
  • Posts: 367
  • Days Won: 2 (last won the day on July 19, 2019)


  1. IMHO, instead of using a PCI-E x1 slot for a 10GbE adapter, you should pick either an M.2-to-PCI-E x4 adapter, if your board has a spare M.2 (NVMe) slot, or a USB 3.1 10GbE adapter; both give higher usable bandwidth than an x1 slot in almost every case. If Thunderbolt is available, it is also a great choice. A PCI-E 2.0 x1 link tops out around 4 GBit usable, and even PCI-E 3.0 x1 is just under 8 GBit. USB 3.0 is 5 GBit signaling (about 4 GBit usable after 8b/10b encoding), while USB 3.1 Gen 2 is 10 GBit with very light encoding overhead, so you can expect near-10GBit performance. Thunderbolt 3 is a 40 GBit link, of which roughly 22 GBit is usable for PCI-E data, comfortably enough to run two 10GBit ports at full speed. Rough numbers are below. Just my 2c. I only use my x1 slot for powering a port expander.
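     For reference, the rough usable-bandwidth arithmetic behind that advice (signaling rate minus line-encoding overhead):

         Link             Signaling    Encoding     ~Usable
         PCI-E 1.1 x1     2.5 GT/s     8b/10b       2 GBit
         PCI-E 2.0 x1     5 GT/s       8b/10b       4 GBit
         PCI-E 3.0 x1     8 GT/s       128b/130b    ~7.9 GBit
         USB 3.0          5 GBit       8b/10b       ~4 GBit
         USB 3.1 Gen 2    10 GBit      128b/132b    ~9.7 GBit
         Thunderbolt 3    40 GBit      (PCI-E x4)   ~22 GBit for PCI-E data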
  2. Even if it is, enhancements to the functionality and security of core features (disabled by default or not) should be welcome. Thanks for taking the time to improve Unraid. Maybe someday I'll be able to submit some PRs myself.
  3. No worries - you could try this workaround, which is reported to fix many issues with that chipset -- but YMMV, as none of the reports mention this gigabit negotiation issue: https://wiki.archlinux.org/title/Network_configuration/Ethernet#Broadcom_BCM57780 If it works, you can also make the change permanent until a mainline kernel fix arrives.
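     If I remember the wiki section right, the workaround boils down to loading the broadcom PHY module before tg3. A minimal sketch of the permanent version (the file name is arbitrary):

         # /etc/modprobe.d/tg3.conf -- make modprobe pull in broadcom before tg3
         softdep tg3 pre: broadcom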
  4. Oh, that would definitely indicate that the switch itself is fine. It looks like something changed between Linux kernel 4.14 and 4.15 that probably caused an issue with every chip the TG3 driver covers: https://askubuntu.com/questions/1163256/broadcom-bcm5764m-gigabit-ethernet-wont-connect-at-gigabit-speed That chip is indeed covered by the TG3 driver: https://linux-hardware.org/index.php?id=pci:14e4-16b1-17aa-3975 And that would line up with your experience of "works in Windows, but not here."
  5. Here's your problem: the link partner it is plugged into is saying it only supports 10/100 for some reason. I would double-check that it is indeed a gigabit switch, and if it is, I would try a different switch in that position.
     I'm also working off some assumptions here. It's possible the switch is degrading all links to 10/100 because its upstream link is 10/100, though that isn't common these days. You've already confirmed that the card in the server and the cables aren't the problem (the server negotiates a 10/100/1000 link just fine when connected to a different endpoint), so the suspect is the switch, and that output confirms it: the switch is only advertising 10/100.
     EDIT: I see you say it also does this when plugged into the wall. From the wall termination, where does it go? Is the cable pulled through the wall good? How long is that run? Has that run been tested with a cable tester? There are lots of questions here. If it works with a short cable directly between two endpoints, then the endpoints are working properly.
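     For anyone following along at home, the negotiation details come straight from ethtool (the interface name will vary):

         # run on the server; compare the two "advertised link modes" sections
         ethtool eth0
         #   Advertised link modes:              what the server's NIC offers
         #   Link partner advertised link modes: what the far end offers
         # if the partner list stops at 100baseT/Full, the switch or cabling is the bottleneck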
  6. Would it be possible to add Mame-Tools? Packages exist for Slackware; the dependencies for a current version are SDL2 and glibc 2.33. It's mostly desired for CHDMAN. There is a Docker image for CHDMAN, but it has a static script that isn't very flexible for my needs. I'm trying to bulk-convert ~10TB+ of archived Bin/Cue and ISO images to CHD, since CHDs are directly playable, and this would be best done directly on the server (especially since CHDMAN is multithreaded and I have 48 threads to work with on my Unraid box). I was able to manually install Mame-Tools and SDL2 with upgradepkg --install-new; however, the glibc available through NerdPack/DevPack and Unraid is too old. I'll try manually installing a newer glibc and hope that doesn't break the runtime; if it does, a reboot should recover me. Thanks for the consideration. EDIT: For now I am rebuilding that container, editing the Dockerfile to execute a script that I bind-mount via -v. This lets me edit the script easily to suit my needs and run CHDMAN without worrying about dependency hell and such.
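     For the curious, the bulk conversion I'm scripting amounts to something like this; the paths are made up and the skip-if-exists check is just for resumability:

         # convert every .cue and .iso under the archive to a CHD next to the source
         find /mnt/user/archive -type f \( -iname '*.cue' -o -iname '*.iso' \) -print0 |
         while IFS= read -r -d '' f; do
             out="${f%.*}.chd"
             [ -e "$out" ] && continue           # already converted on a previous run
             chdman createcd -i "$f" -o "$out"   # createcd accepts both cue and iso input
         done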
  7. Linux systems administration. Sysadmins spend a lot of time getting intimately familiar with configuration files, packages, and their dependencies, and resolving conflicts between them. They'd be expected to have some familiarity with programming languages (generally C/C++) as well as to be comfortable with a terminal and scripting languages. Modern sysadmins also work a lot with automated deployment systems; things like Chef, Docker, and Kubernetes are going to be premium skills to acquire. This is the field I would like to get into, but I never feel confident enough in my skill level to dive right in.
  8. It would also be nice to address the color-blind accessibility concerns by implementing terminal color profile support. I had a topic on this previously, but it seems heavily related to this one, since we're talking about readability. I'm basically forced to use PuTTY/KiTTY/ssh in another terminal emulator just so I can have colors I can actually see (specifically, folders rendered light blue on bright green is awful).
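     As a stopgap on the shell side, you can override the offending dircolors entries yourself; the codes here are only an example (di = directories, ow = other-writable directories, which is the blue-on-green offender):

         # append to ~/.bash_profile or similar; 01;33 is bold yellow
         export LS_COLORS="$LS_COLORS:di=01;33:ow=01;33"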
  9. This would be a question better directed at Plex rather than Unraid; they would be able to confirm what those IPs are for, or whether they are foreign. Most likely, however, they belong to Plex's own servers, as the Plex application allows both direct and indirect connections through their own "proxy" (relay) service on their side.
  10. For the absolute bleeding edge, I'd probably suggest passing through a Thunderbolt PCI-E card and using a fiber-optic Thunderbolt cable with a Thunderbolt dock at the other end. This would give you video outputs and USB inputs at basically raw performance. It's a bit on the expensive side.
      For the budget-oriented, I would head in the direction of using Moonlight to access the VMs remotely. This would limit your I/O options a bit and introduce some latency (the Thunderbolt dock option would be effectively zero latency). Moonlight has clients on basically every OS imaginable at this point. The KVM/HDMI/USB-over-IP solutions are also going to be fairly low latency, but they are heavily limited in what resolutions they support and what I/O they enable.
      In all cases you will need a "client" box at the display end to handle the display output and I/O. I think some of the fanless Braswell units available from China would make attractive Moonlight thin clients, since they would be fairly low cost, silent, and capable of outputting and decoding 4K60 natively. In theory you could lower the cost further by buying them barebones with no RAM or storage, adding a single 2GB SODIMM, and setting up a PXE server on Unraid to hand out the thin-client image. There'd be a lot of legwork involved in that, but it would be cheaper and pretty slick.
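      The PXE half of that idea is mostly dnsmasq configuration. A minimal sketch, assuming the existing router keeps handing out leases and the boot files live on a made-up /mnt/user/pxe share:

          # proxy-DHCP PXE: answer PXE clients without issuing addresses
          port=0                          # disable dnsmasq's DNS side entirely
          dhcp-range=192.168.1.0,proxy    # proxy mode on the LAN subnet
          dhcp-boot=pxelinux.0            # boot program the thin client fetches first
          enable-tftp
          tftp-root=/mnt/user/pxe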
  11. Does anyone have any input on this? I'm not super familiar with iptables and such, but this seems like the only way to approach it?
  12. Reposting my response here so we don't keep bumping the other thread. From diag.zip -> system/ps.txt:

          nobody 31750 0.3 0.0 0 0 ? Zl May17 4:31 \_ [transmission-da] <defunct>

      The transmission-da process has become a zombie. Interestingly, the container is using dumb-init, which should be handling zombie cleanup with a wait() syscall, but doesn't seem to be. Typically this indicates the offending zombie process is stuck waiting on I/O of some kind.
      I also note in your syslog that the BTRFS docker image was corrupted (twice) and recreated. I'm assuming a write operation inside docker.img is hanging, and forcibly rebooting the server is what corrupts it. Double-check that none of the paths you have mapped to this container point into the docker image; if they do, that could be the culprit, with the image filling up and write operations repeatedly failing.
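      For anyone chasing something similar, enumerating zombies and the parent that failed to reap them is quick (nothing here is container-specific):

          # pid, parent pid, state, command; zombies show state Z
          ps -eo pid,ppid,stat,comm | awk 'NR==1 || $3 ~ /^Z/'
          # then see what the parent process is up to, substituting the PPID printed above
          cat /proc/<PPID>/status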
  13. The easiest way would be to just bridge all the cards together; a network bridge *is* software-level switching (see the sketch below). Don't expect good performance from this: even if your CPU has hardware acceleration for network loads (some Xeons do), it's going to be substantially slower at the task than the FPGAs/ASICs used in standard switching applications. It's why routers make poor switches and switches make poor routers. Servers can be pretty good firewalls and routers, but they are generally awful switches.
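      If you go that route anyway, the iproute2 incantation is short (interface names are examples):

          # create the bridge and enslave each NIC to it
          ip link add name br0 type bridge
          ip link set eth1 master br0
          ip link set eth2 master br0
          ip link set dev br0 up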
  14. You can try Chrome Remote Desktop as well. Video quality suffers from time to time in my experience, and depending on your corporate firewall, DPI might block CRD, though I doubt it if TeamViewer works for you.
      In my case TeamViewer did the same to me, which is unfortunate, because I used to recommend their product. Aggressively flagging power users as "corporate" usage is just an indicator that they want the cash grab at this point.
      I ended up breaking down and setting up Apache Guacamole, and I use RDP with NLA on the server side of things (i.e. you enable RDP and NLA in your VM, add that RDP connection to Guacamole, and expose Guacamole to the internet, preferably behind 2FA). Now I not only have secure access to all of my machines remotely, I'm also not using someone else's resources to do it, so I'm not reliant on them not changing the agreement.
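      For the curious, the core of that Guacamole setup is just two containers plus a database. A sketch with placeholder names and passwords, not my exact config (it assumes a MySQL container named guac-db already initialized with Guacamole's schema):

          # guacd: the proxy daemon that speaks RDP/VNC/SSH to the endpoints
          docker run -d --name guacd guacamole/guacd
          # the web front end; the UI ends up at http://host:8080/guacamole/
          docker run -d --name guacamole \
              --link guacd:guacd --link guac-db:mysql \
              -e MYSQL_DATABASE=guacamole \
              -e MYSQL_USER=guac -e MYSQL_PASSWORD=changeme \
              -p 8080:8080 guacamole/guacamole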