Xaero

Members
  • Content Count: 347
  • Joined
  • Last visited
  • Days Won: 2

Xaero last won the day on July 19 2019

Xaero had the most liked content!

Community Reputation

95 Good

About Xaero

  • Rank: Advanced Member

Recent Profile Visitors

3484 profile views
  1. Any chance of adding support for the Deluge-RBB plugin for this docker? It would make it infinitely more usable (it adds a browse button to the WebUI and to remote desktop clients). Currently, if you make no changes, it errors out trying to import from common.py. I've tried adding the PYTHONPATH env variable, which gets me further, but I still get "cannot import name get_resource" on "from common import get_resource", and I don't see an actual common.py anywhere (I see the Deluge ones, but none for the actual Python package).
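     For anyone else poking at this, a rough sketch of the workaround I was attempting. The variable name is the real one I tried; the path is purely a placeholder, since I never found where the plugin expects common.py to live inside this container:

        # Added as an extra environment variable on the Unraid Docker template
        # (hypothetical path - adjust to wherever common.py actually lives in the container)
        PYTHONPATH=/config/plugins/deluge-rbb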
  2. Is it possible to stop the Unraid WebUI from listening on WireGuard interfaces? For one, since I use SSL, clients that don't have access to the LAN can't see the dashboard anyway; for two, I'd like to be able to bind a dashboard docker to the HTTP port for clients that are connected via WireGuard. Right now I believe the nginx server is bound to 0.0.0.0 - I'd like to change that to the fixed IP, if possible.
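     Roughly what I have in mind, as a generic nginx sketch (the address is a placeholder, and I realize Unraid generates its nginx config itself, so this is illustrative rather than something to hand-edit):

        server {
            # bind the WebUI to the LAN address only, instead of 0.0.0.0 (all interfaces)
            listen 192.168.1.10:443 ssl;
            ...
        }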
  3. You are correct, it is automatic. I went ahead and proceeded blind (never a great idea), and it worked just fine.
  4. I'm trying to format my cache drives with 1MiB alignment on the new Beta30. I have moved all the data off the cache drives and created a backup, I've stopped the array, and I'm ready to format, but my only options are BTRFS and BTRFS encrypted, and I don't see a way to adjust the alignment. P.S. I've searched as best I can with Google and the site's search, but I keep getting results from 2014 regarding alignment. Solution: just formatting them automatically resolves the issue.
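     In case it helps anyone searching later: a quick way to sanity-check the alignment after the format, assuming the drive shows up as /dev/nvme0n1 (substitute your own device):

        fdisk -l /dev/nvme0n1
        # a partition starting at sector 2048 (2048 x 512 bytes) is aligned to 1MiB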
  5. Since all of the errors are with AER and they are all corrected, it would be safe to disable AER - however, I would not recommend doing so. Instead, since this issue is being triggered when attempting to access the memory-mapped PCI configuration space, I would switch back to legacy PCI configuration access by adding the following kernel parameter: pci=nommconf This forces the machine to ask the device itself for its configuration parameters rather than mapping the device's configuration to a memory address. There's a completely negligible performance difference, and this keeps AER enabled, which can improve stability (for example, if an actual error occurs, AER might be able to correct it on the fly rather than resulting in a crash).
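     On Unraid the parameter goes on the append line of the boot config (Main -> Flash -> Syslinux Configuration, or /boot/syslinux/syslinux.cfg); something along these lines, with the rest of your existing append line left as-is:

        label Unraid OS
          menu default
          kernel /bzimage
          append pci=nommconf initrd=/bzroot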
  6. In this particular case, I would use either an Unassigned Device or a second cache pool exclusively for the ingest of these backups. There are a couple of reasons: 1. Wear leveling and NAND degradation. Repeatedly filling and dumping an SSD is going to kill it. For bulk-ingest disks that are constantly dumped to the array, I would rather it not be the system cache drives that hold my appdata and such. Even if it's mirrored and/or backed up, when the drive inevitably dies I'd rather it be something I can replace without having to mess with anything else. 2. You can keep this volume completely empty and buy a disk (or disks) that matches the needed capacity. If your needs expand down the road, you can simply increase the size of the disk(s) utilized and move on - no need to worry about transferring settings, applications, or data to the new disks.
  7. Also note that mixed MTU affects inbound (write) performance more than outbound (read) performance. The reason is pretty simple: inbound 9000-MTU packets must be split (fragmented) by the network appliance (switch) before they are transmitted to the client. This nets a rather substantial loss in throughput per packet and increases latency as well. Whereas packets transferred by the lower-MTU client are smaller than the frame size, and rather than having to combine or split them, they are just sent with zero padding on the right. Latency isn't increased at all, but there is a small (yet measurable) loss in throughput. The performance definitely improved substantially just from being able to take advantage of multichannel.
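     A quick way to see whether full-size jumbo frames actually make it across a given path without being fragmented (Linux ping shown; 8972 = 9000 minus 28 bytes of IP/ICMP headers, and the target address is a placeholder):

        ping -M do -s 8972 192.168.1.1
        # -M do sets Don't Fragment; "message too long" / "frag needed" replies mean
        # the interface or path can't carry a full 9000-byte frame unfragmented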
  8. You may also need to add: # ENABLE SMB MULTICHANNEL server multi channel support = yes I've not heard of issues using 9000 MTU with docker yet, but I also cannot run 9000 MTU with my current network configuration (my ISP-provided modem will not connect above 1500 MTU). There will be significant performance implications if you use mixed MTU. If everything is 1500 or everything is 9000, then things should be "more or less the same" outside of a large number (thousands) of large sequential transfers (gigabytes), where the larger MTU will start to pull ahead. With mixed MTU the problem is that any incoming packets must be fragmented when sent to a client that isn't using the larger MTU, which wastes a ton of resources on the switch or router. On the flip side, when the smaller-MTU client sends a packet, it will use the smaller MTU and the potential overhead savings of the large frame are lost, though this is not as bad of an impact. EDIT: Removed flawed testing, will update later with proper testing again. I'm not on a mixed-MTU network at the moment, so I can't actually test this haha.
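     For reference, on Unraid that goes under Settings -> SMB -> SMB Extras (which ends up in /boot/config/smb-extra.conf), e.g.:

        # ENABLE SMB MULTICHANNEL
        server multi channel support = yes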
  9. One important thing is that on Windows the jumbo frame setting is 9014, while on Linux the equivalent is 9000 MTU. Setting 9014 MTU on Linux may very well break network connectivity, so I would try setting 9000 MTU. Additionally, disable the SMBv1/2 support in Unraid - one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being utilized while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so is your laptop/desktop. Of course, iperf should not be impacted by this - but your SMB transfers will be.
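     On the Linux side, the interface MTU can be set (non-persistently) with iproute2 - assuming eth0 as the interface name:

        ip link set dev eth0 mtu 9000
        ip link show dev eth0    # verify the new MTU took effect

     On Unraid itself the persistent setting lives under Settings -> Network Settings.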
  10. This largely depends on your location. Most stores in my area that aren't technology-centric start at 16GB as the smallest size now. I also use my thumb drive to store a persistent home folder image, and some other stuff. Even with that, though, I've only used 9.7GB of 128GB. And I'm only using a 128GB drive for two reasons: it's faster than smaller drives, and it was free.
  11. That's not Linuxserver.io - Unraid Nvidia; that's a totally different project by a different creator. My point was to avoid the confusion that I've apparently created anyway. The point being: that driver version is not included in the latest Linuxserver.io Unraid Nvidia plugin release.
  12. The v6.9.0-beta25 release doesn't include that driver, either.
  13. A gentleman - and a scholar. Can't wait for power state switching to be reliable 🙂 EDIT: To be clear, this release doesn't have the beta Nvidia driver that fixes the power state issue. I realized after posting that this may cause confusion for some - sorry.
  14. Is there any particular reason we can't implement SPICE for video with OpenGL acceleration? (SPICE works when manually enabled, but the GL support isn't compiled in on Unraid - or at least virt-manager claims it isn't.) With guest drivers, SPICE supports dynamic resolution, audio, USB redirection, and clipboard integration, unlike the VNC implementation for KVM. And there are already SPICE web clients (though they do have some limitations, like not supporting CELT audio or USB redirection).
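     For anyone curious what that would look like, a rough libvirt XML sketch of SPICE with GL enabled (not something that works on stock Unraid today since the GL support isn't compiled in; the render node path is just an example):

        <graphics type='spice'>
          <listen type='none'/>
          <!-- example render node; adjust to the host GPU's DRI device -->
          <gl enable='yes' rendernode='/dev/dri/renderD128'/>
        </graphics>
        <video>
          <model type='virtio'>
            <acceleration accel3d='yes'/>
          </model>
        </video>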
  15. M.2 is just a physical interface for SSDs. M.2 PCIe SSDs are generally substantially faster than conventional SATA SSDs. I currently use a bifurcation riser with two M.2 slots for my cache pool, and performance is pretty great (though I do manage to bog down the less optimal 660p SSDs I have occasionally). The biggest concern is cooling. NVMe drives run quite a bit warmer than their SATA counterparts. They don't need to be cold (in fact, that's bad for them), but they also need to not overheat.
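     If you want to keep an eye on that, nvme-cli will report the drive temperature (assuming the device shows up as /dev/nvme0):

        nvme smart-log /dev/nvme0 | grep -i temperature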