Xaero

Members · Content Count: 342 · Days Won: 2

Everything posted by Xaero

  1. In this particular case, I would use either an Unassigned Device or a second cache pool exclusively for the ingest of these backups. There are a couple of reasons: 1. Wear leveling and NAND degradation. Repeatedly filling and dumping an SSD is going to kill it. For bulk ingest disks that are constantly dumped to the array, I would rather it not be the system cache drives that hold my appdata and such. Even if it's mirrored and/or backed up, when the drive inevitably dies I'd rather it be something I can replace without having to mess with anything else. 2. You can keep this volume completely empty and buy a disk (or disks) that match the needed capacity. If your needs expand down the road you can simply increase the size of the disk(s) utilized and move on; no need to worry about transferring settings, applications or data to the new disks.
  2. Also note that mixed MTU affects inbound (write) performance more than outbound (read) performance. The reason is pretty simple: inbound 9000 MTU packets must be split (fragmented) by the network appliance (switch or router) before they are transmitted to the client. This nets a rather substantial loss in throughput per packet, and results in increased latency as well. Whereas packets sent by the lower-MTU client already fit within the frame size, so rather than being combined or split they are simply sent as-is. Latency isn't increased at all, but the smaller frames carry proportionally more header overhead, so there is a small (yet measurable) loss in throughput. The performance definitely improved substantially just being able to take advantage of multichannel.
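     If you want to sanity-check this on your own network, something like the following works (the 10.0.0.2 address and interface names are placeholders, and it assumes iperf3 is already listening on the server via iperf3 -s):

         # Print the MTU configured on each local interface
         ip -o link show | awk '{print $2, $4, $5}'

         # Compare both directions; with mixed MTU the two numbers tend to diverge
         iperf3 -c 10.0.0.2        # client -> server (writes to the server)
         iperf3 -c 10.0.0.2 -R     # server -> client (reads from the server)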
  3. You may also need to add:

         # ENABLE SMB MULTICHANNEL
         server multi channel support = yes

     I've not heard of issues using 9000 MTU with docker yet, but I also cannot run 9000 MTU with my current network configuration (my ISP-provided modem will not connect above 1500 MTU). There will be significant performance implications if you use MIXED MTU. If everything is 1500, or everything is 9000, then things should be "more or less the same" outside of a large number (thousands) of large sequential transfers (gigabytes), where the larger MTU will start to pull ahead. With MIXED MTU the problem is that any incoming packets must be fragmented when sent to a client that isn't using the larger MTU. This wastes a ton of resources on the switch or router. On the flip side, when the smaller-MTU client sends a packet it will use the smaller MTU, so the potential overhead savings of the large frame are lost, though this is not as bad of an impact. EDIT: Removed flawed testing, will update later with proper testing again. I'm not on a mixed MTU network atm so I can't actually test this haha.
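     If multichannel still doesn't kick in, a further sketch of what I'd try in the SMB extra configuration (treat the address, speed and aio lines as assumptions to adapt to your own NIC):

         # Advertise NIC capabilities so clients are willing to open multiple channels
         interfaces = "10.0.0.2;capability=RSS,speed=10000000000"
         aio read size = 1
         aio write size = 1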
  4. One important thing: on Windows, jumbo frames are specified as 9014 (the 14-byte Ethernet header is included in the figure), while on Linux the same setting is 9000 MTU. Setting 9014 MTU on Linux may very well break network connectivity, so I would try setting 9000 MTU. Additionally, disable the SMBv1/2 support in Unraid; one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being utilized while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel and so is your laptop/desktop. Of course iperf should not be impacted by this, but your SMB transfers will be.
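     A minimal sketch of doing and verifying that on the Linux side (eth0 and the target address are assumptions for your setup, and the change won't persist unless you put it in your network config):

         # Linux does not count the 14-byte Ethernet header, so 9000 is the right value here
         ip link set dev eth0 mtu 9000
         ip link show eth0 | grep mtu

         # Verify the path end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
         # -M do sets the don't-fragment flag, so this fails if anything in the path is still at 1500
         ping -c 3 -M do -s 8972 192.168.1.10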
  5. This largely depends on your location. Most stores in my area that aren't technology-centric start at 16GB as the smallest size now. I also use my thumb drive to store a persistent home folder image, and some other stuff. Even with that, though, I've only used 9.7GB of the 128GB. And I'm only using a 128GB drive for two reasons: it's faster than smaller drives, and it was free.
  6. That's not Linuxserver.io - Unraid Nvidia; that's a totally different project, by a different creator. My point was to avoid the confusion that I've apparently created anyway. The point being: that driver version is not included in the driver release available with the latest Linuxserver.io Unraid Nvidia plugin release.
  7. The v6.9.0-beta25 release doesn't include that driver, either.
  8. A gentleman - and a scholar. Can't wait for Power State switching to be reliable 🙂 EDIT: to be clear, this release doesn't have the beta nvidia driver that fixes the power state. I realized after posting that this may cause confusion for some, sorry.
  9. Is there any particular reason we can't implement SPICE for video, with OpenGL acceleration? (SPICE works when manually enabled, but the GL support isn't compiled in on Unraid - or at least virt-manager claims it isn't.) SPICE supports dynamic resolution, audio, USB redirection and clipboard integration in the guest with drivers, unlike the VNC implementation for KVM. And there are already SPICE web clients (though they do have some limitations, like not supporting CELT audio or USB redirection).
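     For anyone curious, this is roughly the libvirt XML that would be involved if the GL bits were available - a sketch only, since it doesn't work on stock Unraid today, and the render node path is an assumption:

         <graphics type='spice'>
           <listen type='none'/>
           <gl enable='yes' rendernode='/dev/dri/renderD128'/>
         </graphics>
         <video>
           <model type='virtio'>
             <acceleration accel3d='yes'/>
           </model>
         </video>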
  10. M.2 is just a physical interface for SSDs. M.2 PCIe SSDs are generally substantially faster than conventional SATA SSDs. I currently use a bifurcation riser with two M.2 slots for my cache pool and performance is pretty great (though I do manage to bog down the less-than-optimal 660p SSDs I have occasionally). The biggest concern is cooling. NVMe drives run quite a bit warmer than their SATA counterparts. They don't need to be cold (in fact that's bad for them), but they also need to not overheat.
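     If you want to keep an eye on that from the console, something like this works (device names are assumptions; smartctl should already be on Unraid, while nvme-cli may need to be added separately):

         # SMART data, including temperature, for the first NVMe drive
         smartctl -a /dev/nvme0

         # Or, if nvme-cli is installed, the controller's own health log
         nvme smart-log /dev/nvme0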
  11. Looks like this has been an issue with Norco for a couple of years now, which is unfortunate. They may be more closed than usual due to COVID. What's wrong with your backplane? EDIT: read more of the OP - which revision of the backplane do you have? It might be as simple as replacing a fuse on the board if it's one of the later ones. EDIT 2: now I'm concerned, because I also have an RPC-4224 as my primary chassis; if this is the case I may need to look at migrating to a different chassis.
  12. Also worth noting that the Norcotek website appears to have a JS exploit injected into the page's header; my antivirus is blocking it. Try shooting an e-mail to service@norcotek.com
  13. Convert offending videos to a format your transcoding hardware supports. Keep in mind not all codecs have hardware transcoding support, and not all cards support all formats that do. Any time the card is not capable of decoding the source format - the decoding will fall back on the CPU. Similarly, if the target format is not supported by the GPU, the encoding will fall back on the CPU. We (the forum members) don't have enough information on the situation to identify why you are having this problem currently.
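     A quick way to see what a problem file actually contains before converting it (the path is a placeholder, and this assumes ffprobe/ffmpeg is available somewhere on your system):

         # Show the codec, profile and pixel format of the first video stream
         ffprobe -v error -select_streams v:0 \
           -show_entries stream=codec_name,profile,pix_fmt \
           -of default=noprint_wrappers=1 /path/to/video.mkv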
  14. Correct. Everyone using nvidia transcoding with Plex on Linux has this problem with Plex. There isn't really a simple workaround. I tried injecting a wrapper script to monitor the transcode process and then manually kill the offending process once the transcode has ended, but it isn't reliable, and can interfere when multiple users are streaming transcoded content simultaneously. I think I have some new logic that will work properly - but ultimately Plex or Nvidia needs to figure out why this is happening. According to a recent post on the thread regarding this issue, nvidia was able to replicate the issue in their lab environment, and they were able to use some internal testing drivers that no longer exhibit the issue. That driver release "should" fix the problem permanently for everyone. It's odd that this behavior only seems to happen with Plex, and not Emby - but they are different applications, so there could be a different interaction in software there. Here's the thread on the Plex forums: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685 By the way, fuser -kv /dev/nvidia* will kill the offending process (and any other processes using the GPU).
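     For reference, the wrapper idea looks roughly like this - a rough sketch only, with the transcoder path and the renamed ".orig" binary being assumptions about how you've set it up, and as noted it isn't reliable with several simultaneous transcodes:

         #!/bin/bash
         # Sketch: wrap the real Plex transcoder, then free the GPU when the last transcode ends
         REAL="/usr/lib/plexmediaserver/Plex Transcoder.orig"

         "$REAL" "$@"          # run the real transcoder with the original arguments
         status=$?

         # If no other transcodes are still running, kill whatever is still holding /dev/nvidia*
         if ! pgrep -f "Plex Transcoder.orig" > /dev/null; then
             fuser -kv /dev/nvidia* 2>/dev/null
         fi

         exit $status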
  15. To build on this, allow setting a title for the code tag spoilers; for example: [ code = "/etc/ssh/sshd_config" ]...[ /code ] could change the header of the spoiler to the name of the file or command that the code block contents come from.
  16. Would it be possible to implement the fix depicted here: https://github.com/rakshasa/rtorrent/issues/861 Currently, the bug presented in that thread is present in this container. I've been having to screen -r into my docker and manually stop and start torrents, because I still cannot seem to stop getting the errors regarding getplugins.php (my previously implemented resolution stopped working after a couple of months and incidental docker updates... this seems to be a constant struggle with rtorrent on unraid, regardless of the number of active torrents). EDIT: I'm at a loss. I'm back to square one, basically. I'm going to back up my rutorrent config folder and try starting over, but I'm back to the abysmal 20-30kb/s and an unresponsive (or not even loading) webui. Nothing in the logs seems relevant. I've read through the logs and even enabled advanced logging. There are no "errors" so to speak, just the rtorrent application sitting at 99.99% IOWAIT. I'm wondering if it doesn't like the merged filesystem that unraid uses? I'm downloading to an NVMe SSD and the only application with this issue is rtorrent. I'll post back with what I figure out.
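     If anyone else wants to watch the I/O directly while this happens, a sketch (assumes the sysstat tools are installed, which I don't believe are stock on Unraid, and that the process is simply named rtorrent):

         # Per-process disk I/O and I/O delay for rtorrent, sampled every 5 seconds
         pidstat -d -p "$(pidof rtorrent)" 5

         # Per-device utilisation, to see whether the NVMe itself is saturated
         iostat -x 5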
  17. Interesting. Admittedly I don't use the docker myself; the glibc dependency must have either changed or been removed, so it doesn't work anymore. Good that you got it working!
  18. nvidia-smi is provided by the nvidia docker runtime. On your netdata docker, are you using --runtime=nvidia in the additional parameters? If not, it won't have nvidia-smi.
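     Roughly what that looks like as a plain docker run, for reference (the "all" device selection is just an example - a specific GPU UUID works too; on Unraid these normally go in the template's Extra Parameters and environment variables):

         docker run -d --name=netdata \
           --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=all \
           -e NVIDIA_DRIVER_CAPABILITIES=all \
           netdata/netdata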
  19. That is what I had, and the fix in my case was to swap to the BFQ scheduler. I also don't know exactly why that change fixed the problem, so it may very well not be applicable in every case 😞 I think rutorrent might not like the layered FS that unraid uses. I've tried saving directly to cache and letting mover handle moving to the array (using /mnt/cache/sharename in the rutorrent configuration), but that hasn't alleviated the other problems I have seen ("The request to rutorrent has timed out"). I no longer get the 504 gateway errors, though.
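     For anyone wanting to try the same change, the gist is below (sdX is a placeholder for whichever device backs the download share, and the change doesn't survive a reboot unless you script it):

         # See which schedulers the kernel offers; the active one is shown in brackets
         cat /sys/block/sdX/queue/scheduler

         # Switch that device to BFQ
         echo bfq > /sys/block/sdX/queue/scheduler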
  20. This is fine, but it honestly has me reconsidering unraid as a viable platform for me. A bit too late in the game now, as it will require substantial financial resources to migrate to a different platform. I know neither you nor CA is affiliated with Limetech or Unraid, but CA is a pretty integral part of the user experience, and it's now tainted. And it's taken since last Wednesday for me to come to this decision - hence the long gap between the event and me posting about it. I realized it happened; I had to think long and hard about how I wanted to proceed.
  21. I'm not sure which application in the Community Apps library was responsible for the popup alert about COVID-19 support, but I will be uninstalling all CA packages as a result. It was invasive and I'm not okay with that. I get that it was a good gesture, and there are some serious circumstances behind it - but I like it when other people aren't touching my systems. I'm even okay with pinning the apps to the top of the list like they are - just not the invasive nature of the popup and warning banner. It felt like I was visiting a webpage with my adblocker off.
  22. For users of the r8168 devices that are testing this - can you comment on network performance? I'm interested in seeing if the in-tree drivers for the 816x devices have improved to usable status or not haha.
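     If you do test it, you can confirm which driver the NIC is actually bound to with (eth0 is an assumption):

         # "driver: r8169" means the in-tree driver; "driver: r8168" means the out-of-tree module
         ethtool -i eth0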
  23. I'll need to see if I can come up with how to replicate this once the currently running parity check completes. I'll update and reopen when I do. When I last rebooted the server it wanted to do a parity check, even though it was a clean reboot. I had to enter my key to mount the drives as usual, and it started the parity check. It also started my dockers - but the docker page is broken and loops like this (attached): 6455534e1b843378b87e06fc01c918e6.mp4
  24. If your issue was (500, [error,getsettings]) and not (50x, [error,getplugins]), it probably doesn't match the symptom I was describing. I outlined very specifically when the scheduler change would be applicable.
  25. Android running on emulated ARM hardware would probably run worse, FWIW. This would be a use case for it, then. You could, however, set up a PXE boot SD card and store your development/testing images on a PXE server for the Pi. This way you aren't constantly writing an SD card, and your changes to the image can happen quickly.
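     A rough sketch of the dnsmasq side of that, assuming it runs in proxy-DHCP mode alongside your existing DHCP server and serves the boot files over TFTP (the subnet, paths and service-string details are assumptions to adapt):

         # /etc/dnsmasq.d/rpi-netboot.conf
         port=0                              # DNS disabled; DHCP/TFTP only
         dhcp-range=192.168.1.0,proxy        # proxy mode, so the real DHCP server is untouched
         log-dhcp
         enable-tftp
         tftp-root=/mnt/user/tftpboot
         pxe-service=0,"Raspberry Pi Boot"   # string the Pi bootloader looks for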