Xaero

Everything posted by Xaero

  1. A proper login page

     If Chrome has your password, Google has your password. If you have Google Home devices, Google has access to your local network. I think that was the point.
  2. I'm a solo, personal-use user with more than 100TB of storage now. My original server from ~2012 was built into a U-NAS NSC-800. That server runs Arch Linux and is still live with some of the original disks still spinning (over 6 years of power-on time; it's getting scary, so I'm cycling in new drives as I can afford to). That system had eight 2TB drives installed in RAID 5 on hardware RAID with a hot spare, and it's been rock solid. The problem is I severely underestimated my "hobbyist" usage. I figured 12TB would last me for years to come, yet somehow a year or two later the system was nearly full and I was resorting to no redundancy and no backups just to fit the bits on what I could afford. I was wrong. Even so, technically almost all of my data could live happily in the cloud, and gigabit internet is "pretty quick" and "pretty affordable" now. The problem? I work with 500GB+ disk images, and mounting them over gigabit takes an eon; moving a disk image back and forth constantly takes ages. 10GbE on a local server is the cure.
  3. The only feasible way I could see for connecting 8 disks to that box would require money. Not a ton, but enough that it would be reasonable to consider moving away from the box. It would involve a PCIe 1x SATA HBA and one of these: https://www.newegg.com/p/1DW-0044-00018?item=9SIAHEA7VP4273&source=region&nm_mc=knc-googlemkp-pc&cm_mmc=knc-googlemkp-pc-_-pla-silkroad-_-accessories+-+video+card-_-9SIAHEA7VP4273&gclid=EAIaIQobChMI96W76-W65AIVAp6fCh1LlgPtEAQYByABEgJAh_D_BwE&gclsrc=aw.ds The experience would... not be great, either. That mPCIe slot MIGHT be 4x, which would help a bit, but even then you would not have a great time.
  4. Wasn't meant to come off as an insult, I apologize if it did. Was just trying to be helpful, I'll go back in my hole now.
  5. FYI, cryptsetup supports piping the keyfile via stdin:

     echo "securestring" | cryptsetup --key-file=- ...
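
     For example, a minimal sketch of opening a LUKS device this way (device path and mapper name are placeholders; note that echo -n matters, because cryptsetup uses a keyfile byte-for-byte, including any trailing newline):

     $ echo -n "securestring" | cryptsetup luksOpen --key-file=- /dev/sdX1 encrypted_disk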
  6. I don't think there's an SSLH docker for Unraid, so I'm using https://github.com/shaddysignal/sslh-hub

     Configuration is as follows:
     Privileged
     Host networking
     Extra params: -e LISTEN_IP=0.0.0.0 -e HTTPS_PORT=18443 -e LISTEN_PORT=48443 -e SSH_HOST=192.168.1.74 -e HTTPS_HOST=192.168.1.74 -e OPENVPN_HOST=192.168.1.74

     192.168.1.74 is my Unraid box. Yes, I realize that makes the "SSH_HOST" my Unraid box. Don't worry, SSH is disabled in Unraid; it's actually going to a docker. Everything "works" right now, except that when users connect from outside my network the internal requests are seen as having come from the Unraid box's docker LAN:

     sshd[8232]: Accepted password for [user] from 172.17.0.7 port 36792 ssh2

     This is problematic as the docker has both fail2ban and denyhosts running within it. Eventually, malicious attempts come regardless of what you do. For example, an IP range (now blocked at the router) slammed me with a wave of invalid SSH attempts, which put 172.17.0.7 on the hosts.deny list for SSH. Now I can't get in until I manually clear that list. So I looked into it, and "transparent" mode seems to be what I need to use in SSLH. I opened the docker entrypoint script and added "--transparent" to the arguments list, before all the rest. This is when I had to switch from unprivileged to privileged, so I know the parameter is accepted and doing "something", but requests are still being seen as coming from within the internal docker network. Has anyone messed with this at all? Is it worth pursuing any further? Ideally I don't want to have to add another box just to do my SSLH and HTTPS forwarding with a custom Let's Encrypt + Nginx + sslh + ssh + denyhosts + fail2ban stack just to get remote access over 443 while sharing the port with other services...
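
     (For reference, sslh's transparent mode also needs packet marking and policy routing on the machine actually running sslh, roughly along these lines per the sslh documentation. This is a sketch only; the interface name, source ports, and routing table number are assumptions, and inside a container it would have to be applied in that container's network namespace:)

     $ iptables -t mangle -N SSLH
     $ iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 22 --jump SSLH
     $ iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 443 --jump SSLH
     $ iptables -t mangle -A SSLH --jump MARK --set-mark 0x1
     $ iptables -t mangle -A SSLH --jump ACCEPT
     $ ip rule add fwmark 0x1 lookup 100
     $ ip route add local 0.0.0.0/0 dev lo table 100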
  7. That I am not certain of. FreeBSD is built on completely different source from Linux, so FreeNAS is inherently completely different from Unraid. That said, a quick Google shows FreeNAS users having issues with the same card(s), probably for similar reasons. For example, this thread: https://forums.freebsd.org/threads/replacing-realtek-re-driver.55861/page-2
  8. Yes, it may (unfortunately) be that you have to compile the module to even attempt testing. That's a long and grueling process which I will not suggest for the faint of heart. A simpler solution is to acquire an Intel NIC, though it would be nice to prove that this driver is even the problem to begin with.
  9. From /etc/fstab:

     /boot/bzmodules /lib/modules squashfs ro,defaults 0 2

     The /lib/modules directory is actually a mounted squashfs filesystem, so you'd (in the end) need to rebuild that squashfs image with the new file. You may be able to get away with just using:

     $ insmod /path/to/r8168.ko.xz

     insmod accepts paths but does no dependency resolution; modprobe looks inside /lib/modules/... for the module and automatically probes any needed modules. So:

     $ rmmod r8169 && insmod /path/to/r8168.ko.xz

     may work. If not, you can get really convoluted for testing:

     $ mkdir /tmp/testing /tmp/testing/work /tmp/testing/upper
     $ mount -t overlay overlay -o lowerdir=/lib/modules,upperdir=/tmp/testing/upper,workdir=/tmp/testing/work /lib/modules

     You can then copy the module into the directory, and Linux should be none the wiser of its actual location. Also know that doing this wouldn't immediately result in modprobe knowing how to use the new module; you'd also need to run depmod -a to generate the module dependencies. At that point modprobe would almost certainly work. I know, it's a giant pain to troubleshoot this way. Normally it would be as simple as "yaourt -S r8168-dkms" or something similar and moving on with life. This sort of thing is the reason I wish Unraid would embrace package management, but I understand their reasons for not doing so.
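
     With the overlay in place, the remaining steps might look roughly like this (the module path and the exact subdirectory under /lib/modules are assumptions; adjust to wherever the module actually lives):

     $ cp /path/to/r8168.ko.xz /lib/modules/$(uname -r)/kernel/drivers/net/ethernet/realtek/
     $ depmod -a
     $ rmmod r8169
     $ modprobe r8168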
  10. You will likely have to build the module from scratch. However, you may be able to steal the r8168.ko.xz from this package: https://archlinux.pkgs.org/rolling/archlinux-community-x86_64/r8168-lts-8.047.02-3-x86_64.pkg.tar.xz.html as it was compiled for Linux 4.19.x, and that is the current kernel for Unraid. I would highly recommend against this, as you cannot guarantee stability with modules built for a mismatched kernel version. But it would be okay for testing, to see if the driver is the cause of the problem before proceeding to build a custom module for Unraid.
  11. I actually did a little bit of searching. In the Unraid 5.xx days, Limetech was shipping with the r8168 driver included, but they had to "pick" a default driver to configure. Since one driver or the other will work better depending on which exact Realtek chip you have, eventually they decided to ship only the Linux kernel team's r8169 driver. Also, in your diagnostics zip, lsmod shows that the current driver in use is r8169 - which suggests this is the case. If you have an Intel NIC laying around, it may be worth a shot to see if it resolves the speed issues. I would like to see both drivers included - but that requires manual intervention by the user if they have one of these (very common) chips. You may be able to install the Slackware package for this module on Unraid; you would then need to blacklist the r8169 module so that the r8168 module could load instead (see the sketch below). I'll need to research a bit. My other server, which runs Arch, had a similar network speed problem, and it currently uses the r8168 module to resolve that problem. Also know that this would come with the giant "YMMV" and "Not supported by LimeTech" badges of honor.
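
     A minimal sketch of the blacklist step (assuming the r8168 module is already installed somewhere modprobe can find it; on Unraid the modprobe.d change would also need to be reapplied on every boot, since the root filesystem lives in RAM):

     $ echo "blacklist r8169" > /etc/modprobe.d/r8169-blacklist.conf
     $ depmod -a
     $ modprobe r8168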
  12. @Tom21 I see that you are using a Realtek 8xxx chipset based network card. These devices and their open source r8169 driver do not perform well. I'm not certain, but I believe @limetech was shipping the closed source r8168 driver at one point - perhaps they can comment on whether or not this is the case. TL;DR: The r8168 driver performs as expected, on par with Windows. The r8169 driver performs at anywhere between 20 and 50% of the performance of the r8168 driver and Windows.
  13. This might not work since 1.16.7.1597 is a "forum only" preview.
  14. It does prevent compromising the entire array by discovering the key to one disk. If they implemented a system like this, they would obviously need to use a cipher system to generate the keys. This cipher system would need to have a "recovery" mode where, if you are able to provide the full set of data, it would provide you with a LUKS keyfile to recover the data from the disk. This wouldn't break compatibility with existing LUKS systems, but would prevent an attacker from compromising the entire array if a single key is somehow exposed.
  15. I wholly agree that you cannot fix stupid. But you can implement things in a way that takes stupidity into consideration. This specific instance is capable of being mitigated by implementation. If you can implement mitigation, you should.
  16. Aside from you actually being correct for many reasons, this keyfile is created so that you don't have to type your password in for every single LUKS disk in your array. Ideally, a few things should change with this part of the system.

     Allow me to point out an assumed security from people in the thread thus far that is incorrect: "It's in /root, so it's only available to the root user." This is wrong for a couple of reasons. For one, it relies on no dockers being configured in such a way that they can see /root; any docker that uses a root account would also have access to this file. Furthermore, any bootable distribution or hypervisor would also have access to this file (though it is destroyed every shutdown). The file is also both plaintext and stored in memory. Anything that manages to gain access to memory can see this password (we have several major vulnerabilities right now, affecting different hardware and software implementations, that allow access to the memory map, even to protected regions). All of this assumes that simple privilege escalation attacks aren't viable (which they have been, even in recent times).

     In 2019, the only time a key should be plaintext is at the moment it is used. After that, the variable it is contained in should be overwritten and destroyed. We must overwrite it because the kernel will simply mark that region "free" rather than writing zeros to it. Sure, it takes microseconds for it to be overwritten after being marked free on a system with high memory utilization, but that's time it's available. We want to destroy the variable so there is no reference that makes it identifiable after we use it.

     Furthermore, while it *is* convenient to have a single password for all of the disks, this is not ideal. Ideally (imho) Unraid should generate a salted and hashed key from the user-defined LUKS password for each drive. You have two sources of unique data for each disk: the Unraid license key and the disk serial number. Using the license key ensures that no two Unraid users can end up with the same salted key, even if they choose the same password and somehow have two disks with the same serial number. These salted keys should be generated from the user's input, and each disk should have its own unique key to mount it. This greatly increases the security of the system against brute force attacks. A rough sketch of what I mean follows.
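
     (This is purely hypothetical - not how Unraid works today - and the variable names, placeholder device, and the choice of the argon2 CLI as the KDF are my own assumptions:)

     # Derive a unique per-disk key from one user passphrase, salted with the
     # license key and the disk serial, then open the disk with it over stdin.
     USER_PASSPHRASE='the one password the user actually types'
     LICENSE_KEY='XXXX-XXXX'          # per-installation unique value
     DEV=/dev/sdX                     # placeholder disk

     SERIAL=$(lsblk -dno SERIAL "$DEV")
     SALT=$(printf '%s%s' "$LICENSE_KEY" "$SERIAL" | sha256sum | awk '{print $1}')

     printf '%s' "$USER_PASSPHRASE" | argon2 "$SALT" -id -l 32 -r \
       | cryptsetup luksOpen --key-file=- "$DEV" "crypt_$SERIAL"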
  17. Try copying from this link: https://raw.githubusercontent.com/Xaero252/unraid-plex-nvdec/master/Plex_nvdec_enable.sh And replacing all contents of the edit box for the script.
  18. In theory, yes. Containerization is different from virtualization in several ways, and on Linux this has some big benefits. For example, on a Linux OS devices are populated into a filesystem we call sysfs. Sysfs nodes exist for every single sensor, switch, and register of every device that is found and initialized. As such, your GPU also becomes a sysfs node, and all of its features become exposed through sysfs as well. With a virtual machine, we "remove" the card from the host OS and "install" that card into the guest OS. In a containerized environment, specifically in docker, sysfs nodes can exist on both "machines" simultaneously. The application, driver, et al. aren't any wiser about the existence of the other OS, and the card exists in both at the same time. As far as the card is concerned (and nvidia-smi, for that matter), two processes on the same system are using the same card at the same time, which is perfectly acceptable. In theory you could use it with both Plex and Emby at the same time on a card limited to 2 transcodes - but any more than one transcode at a time on either would result in broken transcoding on the other. Not a desirable situation. I have successfully had the card in use by 3 "operating systems" at the same time:
     - Unraid (I spawned a fake Xorg server on a 'virtual display' running at 640x480 so I could run nvidia-settings over SSH)
     - The LS.IO Plex docker
     - netdata
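
     (To illustrate what sharing looks like in practice - a sketch only; the image names and environment variable are examples of the NVIDIA container runtime conventions, not my exact setup:)

     $ docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all plexinc/pms-docker
     $ docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all emby/embyserver
     $ nvidia-smi    # on the host: one card, with processes from both containers listed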
  19. You'll want to post the diagnostics zip. Most likely you are stubbing a PCI-E bus address that is in the same IOMMU group as the other card. This will stub both cards, even though you only intended to stub the one. This can be worked around, but I'll leave that advice to people more experienced than I with these things.
  20. Dockers are not virtual machines and the nvidia container runtime should support multiple dockers using the same card's resources. Know that if both of these cards require access to the transcoding pipeline you may run into problems, especially if you have a card that isn't licensed for more than 2 simultaneous transcode processes.
  21. I think there's a substantial computational advantage showing its face here. 6400 is 80^2, and both 3200 and 6400 are divisible by large powers of two, which CPUs handle efficiently. 3200 and 6400 are also evenly divisible - exactly one half - whole integers with no floating point precision involved, meaning the CPU can take shortcuts. I'm not 100% certain how the stripe size and window size interact, but those two mathematical advantages compared to the rest of the table could make a massive difference on a lower-powered CPU.
  22. So this is probably less of an Unraid/KVM/QEMU issue and more of a Linux issue itself. From that perspective:
     - What distribution are you using?
     - What driver is being used (nvidia? nouveau?)

     Some relevant GPU information from within the Linux VMs would be useful:
     - dmesg | grep -e nouveau -e nvidia
     - the contents of /var/log/Xorg.0.log
     - lspci | grep -i vga
     - lsmod

     We need to see what the topology of the display server looks like from the ground up. I have extensive experience with Linux on bare metal for desktop use, and this should be effectively no different, assuming the KVM/VFIO passthrough is being handled properly (which, judging from the Windows performance, it is). If nouveau is mucking about, that would explain part of your choppy video playback experience: nouveau has very basic 2D and 3D support on most newer cards, and has no support for the NVENC/NVDEC pipelines. The proprietary nvidia driver has nearly complete feature parity with the Windows drivers; however, nvidia has made questionable decisions about the default driver settings for a modern desktop environment on Linux. For example, they choose not to enable the full composition pipeline, which leads to sync issues between layers on the desktop, resulting in full-screen tearing even with vsync enabled. These settings can obviously be manipulated - but first we need to figure out the "what" of your problem.
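
     (If it does turn out to be the composition pipeline, the usual runtime tweak looks like this - assuming the proprietary nvidia driver and a running X session; it can be made permanent in xorg.conf afterwards:)

     $ nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"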
  23. I am partially colorblind. I have difficulty with similar shades (such as greener blues next to greens, or bluer greens next to blues). The default Web Console color scheme makes reading the output of ls almost impossible for me: I either have to increase the font size to a point that drastically hinders productivity, or strain my eyes to make out the letters against that background. A high contrast option would be great. Or, even better, the option to select common themes like "Solarized" et al., or the ability to add shell color profiles for the web console. For now I use KiTTY when I can - and I've added a color profile to ~/.bash_profile via my previously suggested "persistent root" modification. Also worth mentioning here: https://github.com/Mayccoll/Gogh Gogh has a very extensive set of friendly, aesthetically pleasing, and well-contrasting color profiles ready to go.

     Edit: Also worth noting that currently the web terminal doesn't source ~/.bashrc or ~/.bash_profile, which results in the colors being "hardcoded" (source ~/.bashrc to the rescue).

     Edit2: Additionally, the font is hardcoded. If we are fixing the web terminal to be a capable, customizable platform, this would also be high on the list of things to just do.
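
     (For anyone curious, the sort of thing dropped into ~/.bash_profile is roughly this - a sketch; ~/.dircolors is a placeholder for whatever palette you prefer, e.g. one generated from a Gogh or Solarized scheme:)

     export TERM=xterm-256color
     eval "$(dircolors -b ~/.dircolors 2>/dev/null || dircolors -b)"
     alias ls='ls --color=auto'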
  24. The reason nohup doesn't work for this is that when you disconnect or log out of that terminal, the terminal and any child processes of the terminal are killed. This is just Linux process management doing its job. To prevent this you can simply disown the process; no need to nohup it. For example:

     $ processname &
     $ disown

     and "processname" will continue running after the terminal is killed. This is good because it means that "processname" will still respond to hangup, which may be needed. Of course, you could also call disown with nohup:

     $ nohup processname &
     $ disown

     You can also disown processes by using their PID, but calling disown immediately after spawning a process will automatically disown the last created child.
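
     (A slightly fuller usage sketch - "long_running_task" is a placeholder:)

     $ long_running_task &        # start it in the background
     $ jobs -l                    # note the job number and PID
     $ disown %1                  # or: disown <PID>
     $ exit                       # the task keeps running after logout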
  25. SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0       usb-storage
     [0:0:0:0]       flash     sda        62.7GB   Extreme

     [1] scsi1       megaraid_sas   MegaRAID SAS 2008 [Falcon]
     [1:0:11:0]      disk13    sdb        8.00TB   WDC WD80EFAX-68L
     [1:0:12:0]      disk5     sdc        8.00TB   WDC WD80EFAX-68L
     [1:0:13:0]      disk7     sdd        8.00TB   WDC WD80EFAX-68L
     [1:0:14:0]      disk2     sde        8.00TB   WDC WD80EFAX-68L
     [1:0:15:0]      disk3     sdf        8.00TB   WDC WD80EFAX-68L
     [1:0:16:0]      disk4     sdg        8.00TB   WDC WD80EFAX-68L
     [1:0:17:0]      disk10    sdh        8.00TB   WDC WD80EFAX-68L
     [1:0:18:0]      disk21    sdi        8.00TB   WDC WD80EFAX-68L
     [1:0:19:0]      disk8     sdj        8.00TB   WDC WD80EFAX-68L
     [1:0:20:0]      disk12    sdk        8.00TB   WDC WD80EFAX-68L
     [1:0:21:0]      disk11    sdl        8.00TB   WDC WD80EFAX-68L
     [1:0:22:0]      disk15    sdm        8.00TB   WDC WD80EFAX-68L
     [1:0:23:0]      disk16    sdn        8.00TB   WDC WD80EFAX-68L
     [1:0:24:0]      disk19    sdo        8.00TB   WDC WD80EFAX-68L
     [1:0:25:0]      disk22    sdp        8.00TB   WDC WD80EMAZ-00W
     [1:0:26:0]      disk17    sdq        8.00TB   WDC WD80EFAX-68L
     [1:0:27:0]      disk18    sdr        8.00TB   WDC WD80EFAX-68L
     [1:0:28:0]      disk20    sds        8.00TB   WDC WD80EFAX-68L
     [1:0:29:0]      disk6     sdt        8.00TB   WDC WD80EFAX-68L
     [1:0:30:0]      disk9     sdu        8.00TB   WDC WD80EFAX-68L
     [1:0:31:0]      disk14    sdv        8.00TB   WDC WD80EFAX-68L
     [1:0:32:0]      disk1     sdw        8.00TB   WDC WD80EFAX-68L
     [1:0:33:0]      parity2   sdx        8.00TB   WDC WD80EMAZ-00W
     [1:0:34:0]      parity    sdy        8.00TB   WDC WD80EMAZ-00W

     [N0] scsiN0     nvme0     NVMe
     [N:0:1:1]       cache     nvme0n1    1.02TB   INTEL SSDPEKNW01
     [N1] scsiN1     nvme1     NVMe
     [N:1:1:1]       cache2    nvme1n1    1.02TB   INTEL SSDPEKNW01

     Results from B3 look good!