
Xaero

Members
  • Content Count: 333
  • Joined
  • Last visited
  • Days Won: 2

Xaero last won the day on July 19, 2019

Xaero had the most liked content!

Community Reputation: 90 (Good)

About Xaero
  • Rank: Advanced Member

  1. M.2 is just a physical interface for SSDs. M.2 PCIe (NVMe) SSDs are generally substantially faster than conventional SATA SSDs. I currently use a bifurcation riser with two M.2 slots for my cache pool and performance is great (though I do occasionally manage to bog down the less-than-optimal 660p SSDs I have). The biggest concern is cooling: NVMe drives run quite a bit warmer than their SATA counterparts. They don't need to be cold (in fact, that's bad for them), but they also need to not overheat.
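Since cooling is the main concern, it's worth keeping an eye on drive temperature with nvme-cli's smart-log. A minimal sketch, assuming nvme-cli is installed and the drive is /dev/nvme0 (adjust both for your system); `parse_temp` is a hypothetical helper name:

```shell
# parse_temp pulls the Celsius value out of an "nvme smart-log"
# temperature line such as "temperature : 43 C".
parse_temp() {
    sed -n 's/^temperature[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p'
}

# On a live system, pipe the real tool through it (requires nvme-cli):
#   nvme smart-log /dev/nvme0 | grep '^temperature' | parse_temp
echo 'temperature : 43 C' | parse_temp   # -> 43
```

Most consumer NVMe drives start throttling somewhere around 70-80 °C, so if you see numbers in that range under load, a heatsink or more airflow is warranted.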
  2. Looks like this has been an issue with Norco for a couple of years now, which is unfortunate. They may be less responsive than usual due to COVID. What's wrong with your backplane? EDIT: Read more of the OP. Which revision of the backplane do you have? It might be as simple as replacing a fuse on the board if it's one of the later ones. EDIT 2: Now I'm concerned, because I also have an RPC-4224 as my primary chassis; if this is widespread I may need to look at migrating to a different chassis.
  3. Also worth noting that the Norcotek website appears to have a JS exploit injected into the page's header; my antivirus is blocking it. Try shooting an e-mail to service@norcotek.com.
  4. Convert the offending videos to a format your transcoding hardware supports. Keep in mind that not all codecs have hardware transcoding support, and not all cards support all of the formats that do. Any time the card is not capable of decoding the source format, decoding falls back on the CPU; similarly, if the target format is not supported by the GPU, encoding falls back on the CPU. We (the forum members) don't currently have enough information about your setup to identify why you are having this problem.
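One way to find the offending formats is to ask ffprobe for each file's video codec and compare it against your card's decode support matrix. A sketch, assuming ffprobe (part of ffmpeg) is installed; `video_codec` is a hypothetical helper name and the file path is a placeholder:

```shell
# video_codec prints the codec of a file's first video stream,
# e.g. "h264" or "hevc". Requires ffprobe (ships with ffmpeg).
video_codec() {
    ffprobe -v error -select_streams v:0 \
        -show_entries stream=codec_name -of csv=p=0 "$1"
}
# Usage (not run here): video_codec /path/to/video.mkv
```

Run that over your library and anything that prints a codec your GPU can't decode is a conversion candidate.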
  5. Correct. Everyone using NVIDIA transcoding with Plex on Linux has this problem. There isn't really a simple workaround. I tried injecting a wrapper script to monitor the transcode process and manually kill the offending process once the transcode has ended, but it isn't reliable and can interfere when multiple users are streaming transcoded content simultaneously. I think I have some new logic that will work properly, but ultimately Plex or NVIDIA needs to figure out why this is happening. According to a recent post on the thread regarding this issue, NVIDIA was able to replicate it in their lab environment, and an internal testing driver no longer exhibits the issue. That driver release "should" fix the problem permanently for everyone. It's odd that this behavior only seems to happen with Plex and not Emby, but they are different applications, so there could be a different software interaction there. Here's the thread on the Plex forums: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685 By the way, fuser -kv /dev/nvidia* will kill the offending process (and any other processes using the GPU).
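The wrapper-script idea above can be sketched roughly like this. The process name 'Plex Transcoder' and the 30-second interval are assumptions, and note the reliability caveat already mentioned: fuser -k kills *every* process holding the GPU, so this is unsafe while other users may be transcoding.

```shell
# gpu_watchdog: if no Plex transcoder is running but something still
# holds the nvidia device nodes, kill it so the GPU can leave the P0
# performance state. Run as root. NOT safe when multiple users may
# start a transcode at any moment (the interference described above).
gpu_watchdog() {
    while true; do
        if ! pgrep -f 'Plex Transcoder' >/dev/null 2>&1; then
            # -k kills every process with /dev/nvidia* open
            fuser -kv /dev/nvidia* 2>/dev/null
        fi
        sleep 30
    done
}
# Start it in the background with: gpu_watchdog &
```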
  6. To build on this: allow setting a title for the code-tag spoilers. For example, [ code = "/etc/ssh/sshd_config" ]...[ /code ] could change the header of the spoiler to the name of the file or command the code block's contents come from.
  7. Would it be possible to implement the fix described here: https://github.com/rakshasa/rtorrent/issues/861 Currently, the bug presented in that thread is present in this container. I've been having to screen -r into my docker and manually stop and start torrents, because I still cannot seem to stop getting the errors regarding getplugins.php (my previously implemented fix stopped working after a couple of months and incidental docker updates; this seems to be a constant struggle with rtorrent on Unraid, regardless of the number of active torrents). EDIT: I'm at a loss; I'm basically back to square one. I'm going to back up my ruTorrent config folder and try starting over, but I'm back to the abysmal 20-30 KB/s and an unresponsive (or not even loading) WebUI. Nothing in the logs seems relevant. I've read through them and even enabled advanced logging; there are no "errors" so to speak, just the rtorrent application sitting at 99.99% IOWAIT. I'm wondering if it doesn't like the merged filesystem that Unraid uses? I'm downloading to an NVMe SSD, and the only application with this issue is rtorrent. I'll post back with what I figure out.
  8. Interesting. Admittedly I don't use the docker myself; the glibc dependency must have either changed or been removed, so it no longer works. Good that you got it working!
  9. nvidia-smi is provided by the NVIDIA docker runtime. On your netdata docker, are you using --runtime=nvidia in the additional parameters? If not, it won't have nvidia-smi.
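As a sketch of what that looks like (the container name and image are placeholders; on Unraid these flags go in the template's "Extra Parameters" field):

```shell
# Flags that expose the NVIDIA runtime inside a container; without
# --runtime=nvidia the driver libraries and nvidia-smi are not mapped in.
EXTRA_PARAMS="--runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all"

# Full invocation (shown rather than run; requires docker and the
# nvidia container runtime on the host):
echo "docker run -d --name=netdata $EXTRA_PARAMS netdata/netdata"
```

NVIDIA_VISIBLE_DEVICES=all exposes every GPU; substitute a specific GPU UUID to restrict the container to one card.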
  10. That is what I had, and the fix in my case was to swap to the BFQ scheduler. I don't know exactly why that change fixed the problem, so it may very well not be applicable in every case 😞 I think ruTorrent might not like the layered FS that Unraid uses. I've tried saving directly to cache and letting the mover handle moving to the array (using /mnt/cache/sharename in the ruTorrent configuration), but that hasn't alleviated the other problems I've seen ("The request to rutorrent has timed out"). I no longer get the 504 gateway errors, though.
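For reference, switching a disk's I/O scheduler looks like the sketch below. The device name sdb is a placeholder, root is required, and the change only persists until reboot:

```shell
# set_scheduler writes a scheduler name (e.g. bfq, mq-deadline, none)
# into a block device's queue/scheduler file. When you cat that file,
# the currently active scheduler is shown in [brackets].
set_scheduler() {
    dev="$1"; sched="$2"
    echo "$sched" > "/sys/block/$dev/queue/scheduler"
}
# Usage (not run here):
#   cat /sys/block/sdb/queue/scheduler
#   set_scheduler sdb bfq
```

To make it survive reboots on Unraid, the same echo can go in the go file or a udev rule.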
  11. This is fine, but it honestly has me reconsidering Unraid as a viable platform for me. It's a bit too late in the game now, as migrating to a different platform would require substantial financial resources. I know neither you nor CA is affiliated with Limetech or Unraid, but CA is a pretty integral part of the user experience, and it's now tainted. It's taken since last Wednesday for me to come to this decision, hence the long gap between the event and my posting about it. I realized it happened, and I had to think long and hard about how I wanted to proceed.
  12. I'm not sure which application in the Community Apps library was responsible for the popup alert about COVID-19 support, but I will be uninstalling all CA packages as a result. It was invasive, and I'm not okay with that. I get that it was a good gesture with serious circumstances behind it, but I like it when other people aren't touching my systems. I'm even okay with pinning the apps to the top of the list like they are; it's the invasive nature of the popup and warning banner I object to. It felt like visiting a webpage with my adblocker off.
  13. For users of r8168 devices who are testing this: can you comment on network performance? I'm interested in seeing whether the in-tree drivers for the 816x devices have improved to usable status or not, haha.
  14. I'll need to see if I can come up with a way to replicate this once the currently running parity check completes; I'll update and reopen when I do. When I last rebooted the server it wanted to do a parity check, even though it was a clean reboot. I had to enter my key to mount the drives as usual, and it started the parity check. It also started my dockers, but the docker page is broken and loops like this: 6455534e1b843378b87e06fc01c918e6.mp4
  15. If your issue was (500, [error,getsettings]) and not (50x, [error,getplugins]), it probably doesn't match the symptom I was describing. I outlined very specifically when the scheduler change would be applicable.