Xaero

Everything posted by Xaero

  1. There's a secondary reason for not using devices with motors on a UPS. A UPS has brownout protection, which triggers any time the input line drops below nominal voltage (usually somewhere around 105-112v, varying with component age and the manufacturer's specifications). The UPS enters "buck mode" and uses the battery charge to supplement the input voltage. Normally this occurs infrequently, due to outside circumstances. A motor, on the other hand, constantly hammers both ends of the AC line with current spikes. This will frequently force the UPS into buck mode and WILL be detrimental to the lifespan of the UPS batteries and the buck mode circuitry. Source: I replace around 50 UPS per year due to external interference killing them (APC, TrippLite, CyberPower). In every single case, a refrigerator, fan, heater, or vacuum was connected to the UPS day-to-day. Another 50-100 UPS per year have premature battery failure (SLA batteries less than 1 year old) in the same circumstances. Nice stable loads are fine for a UPS. Also, that's just me; in my area alone there are 15 other technicians at our company with similar numbers.
  2. Interesting; if traceroute works, that would imply DNS resolution is working, which means ssh <hostname> should work. On that note, does ssh 37.48.xx.xx work? If so, ssh -vvv box.seedbox.eu may provide useful information. EDIT: Also, I feel like nslookup should be in the default package list? Am I wrong on this?
  3. What is the output of nslookup box.hostname.eu and traceroute box.hostname.eu? I have a feeling whatever local DNS Unraid is using is unable to resolve that hostname.
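When chasing name-resolution problems like this, the usual sequence is to check what the resolver returns and then confirm the route. A minimal sketch; box.hostname.eu is the placeholder from the post, and the awk helper is mine, not something the original thread used:

```shell
# Check what the local resolver returns, then confirm the route:
#   nslookup box.hostname.eu
#   traceroute box.hostname.eu
# If nslookup fails but connecting to the raw IP works, the problem is
# DNS, not the network path.

# Helper to pull the resolved address out of nslookup-style output
# (the Address line after "Name:", not the DNS server's own address):
resolved_ip() {
  awk '/^Name:/ { getline; sub(/^Address(es)?: */, ""); print; exit }'
}
```

Something like `nslookup box.hostname.eu | resolved_ip` then prints just the address, which is handy when scripting the check.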
  4. https://gitlab.freedesktop.org/bolt/bolt It's part of the `bolt` package. Not sure if there are any slackbuilds of it currently.
  5. Follow the instructions in the Unraid-Nvidia plugin thread's original post, and the instructions on the Linuxserver.io Plex docker's config page. It's pretty straightforward.
  6. Thanks; I knew it was possible, just couldn't recall it off-hand. I was not in front of a computer, so searching is cumbersome haha.
  7. Start with a diagnostics.zip from before the crash. If possible, capture one during/after the crash; you might be able to get it to capture to the flash drive automatically on crash. Also, make sure you aren't passing through the same USB controller your Unraid flash drive is on, for obvious reasons.
  8. Correct. nvdec and nvenc are now natively supported by Plex, rendering this script effectively useless.
  9. Ah, I see. I just got a Samsung Q60R and the HTPC box I was using for Plex on the old TV is... staying with the old TV, as it's not needed with the new one.
  10. I've finally solved my issue with the rtorrent process being deadlocked in IOWAIT. If this is your problem, you will see all of the symptoms below:
      - rutorrent will load, but you will get an error 500, 502, or 504 on getplugins.php and the queue will not load
      - when the queue does load, you will rarely get updates, and "the request to rtorrent timed out" will be your most common response
      - torrents will get stuck in checking status
      - torrents that are downloading/seeding will get abysmal performance
      - all of the above will be intermittent, and will usually occur after adding a new large torrent or several smaller torrents
      Checking iotop while this is occurring will show the rtorrent process at the top, with 99.99% IOWAIT and very low read/write speed. I had previously attempted many things to fix this problem: changing the nginx scgi buffer size, increasing rtorrent's memory allocation, changing the php-fpm workers and memory allocations, and raising the php-fpm and nginx timeouts to allow rtorrent more time to respond to requests. The final nail in the coffin was switching IO schedulers. I swapped from mq-deadline to BFQ and the problem has entirely gone away. I'm not entirely sure why this was the fix internally, but immediately upon switching to BFQ the problem is completely gone and I can actually watch checking progress on a 200gb torrent while data is moving on the other torrents in the queue.
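For anyone wanting to try the same fix: the active scheduler is exposed per block device under sysfs. A minimal sketch; sdX is a placeholder for your rtorrent data disk, and your kernel needs BFQ available:

```shell
# Show the available schedulers; the active one is in [brackets]:
#   cat /sys/block/sdX/queue/scheduler
# Switch to BFQ (as root; not persistent across reboots):
#   echo bfq > /sys/block/sdX/queue/scheduler

# Tiny helper to read the active scheduler out of that sysfs line:
active_sched() {
  printf '%s\n' "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}
```

To make it survive a reboot on Unraid you'd put the echo line somewhere that runs at boot (the go file, for instance), or use a udev rule on a stock distro.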
  11. The encoder can only do so much transcoding at once. Chances are the session count will hit the encoder's limit long before the PCI-E x8 slot's. For example, https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding shows that this card (P2000) should handle ~17 simultaneous 1080p-to-720p streams (real-world use shows it can handle a bit more, depending on other factors). Assuming a 10Mbit input stream and a 4Mbit output stream (their numbers, not mine), that's 14Mbit/s per stream; 17 streams at that rate comes out to 238Mbit/s. PCI-Express bandwidth is quoted in megabytes, not megabits, so we need to divide that by 8: 29.75MB/s. PCI-E x8 (gen 3) should have a bandwidth ceiling of around 7880MB/s, so you aren't even getting close to scratching the surface of the PCI-E lanes there. A dedicated transcoding card could probably live in a x1 slot and not be impacted for most consumer use cases where 4k isn't prevalent.
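The arithmetic above, spelled out (the 10Mbit/4Mbit figures are the linked site's assumptions, not measurements):

```shell
# Mbit/s -> MB/s (divide by 8)
mbit_to_mb() { awk -v m="$1" 'BEGIN { printf "%.2f", m / 8 }'; }

streams=17
per_stream=$((10 + 4))              # 10Mbit in + 4Mbit out = 14Mbit/s crossing the bus
total=$((streams * per_stream))     # 238 Mbit/s for all 17 streams
echo "$(mbit_to_mb "$total") MB/s"  # prints 29.75 MB/s, vs ~7880 MB/s for PCIe 3.0 x8
```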
  12. The GPU will still work in a x8 slot. Total bandwidth between the GPU and the CPU will be limited to x8 bandwidth; you will probably run into the encoder bottlenecking before x8 becomes the bottleneck.
  13. What TV did you get? A lot of the SmartTVs on the market have Plex available for them natively now.
  14. Note that the Quadro 4000 (Released 2010) and the Quadro P4000 (Released 2017) are two entirely different products several generations apart from one another.
  15. Well, for one, this script is now useless; Plex ships with nvdec and nvenc support on Linux now (at least for Plex Pass users; not sure if it's hit mainline Plex yet, but probably). That said, it's just a power management thing. By working around the bug I can get the card to stay in the "idle" P8 state, which means the card only consumes 3-5W of power. With the bug present (it still is), the card latches to the P0 state, which is the highest power state, and consumes 25-40W. The above power figures are observed on my P2000; other cards will consume more or less depending on their power requirements.
  16. +1. I'd also like to see the ability to start specific docker containers. The condition to allow this would be that the data lives entirely on the cache; i.e. anything mapped to /mnt/user* would immediately disable this capability. I think a RegEx could probably handle checking the VM XML and Docker config for any reference to a non-cache directory.
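As a sketch of that RegEx check (the pattern and helper name are mine, not an existing plugin feature): scan a Docker template or VM XML on stdin and print any host paths that point at the array rather than the cache:

```shell
# Print array-backed host paths (/mnt/user..., /mnt/diskN...) found on stdin.
# Cache-only paths (/mnt/cache/...) are ignored. Non-empty output would mean
# this container/VM can't safely start before the array is up.
non_cache_paths() {
  grep -Eo '/mnt/(user|disk[0-9]+)[^"< ]*' | sort -u
}
```

Usage would be something like `non_cache_paths < my-container-template.xml` (filename hypothetical); an empty result marks the container as eligible for the cache-only start.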
  17. (thread: 10gbit speeds?) It's probably one of the options in the Advanced settings for the NIC driver in Device Manager: Device Manager -> Network Adapters -> right-click your adapter, click Properties -> Advanced. Look for Recv Segment Coalescing (IPv4); if it's there, disable it. There are other settings that can cause substantially reduced performance depending on the chipset, so if you can, post a screenshot of that page (Alt+Prt Screen with that window in focus) and we can go from there.
  18. Yeah unfortunately you may need to invest in a symmetrical connection or rent rackspace for your server if you want to get the throughput for this many simultaneous streams. Once you get that throughput the next bottleneck will be spinning platters, and the transcoding hardware itself.
  19. (thread: 10gbit speeds?) What motherboard? Which slot on the board? If it's consumer hardware, most likely one or more of the slots is gimped, since consumer CPUs don't have enough PCIe lanes to provide full, current-gen lanes to each slot. Also, the reason we test with iperf is to more or less rule out OS tuning and configuration. If it works with iperf, the hardware is fine. If it DOESN'T work with iperf, something in the hardware is usually the bottleneck.
  20. Okay, so here's the deal: I have a few not-super-tech-savvy people who need a reliable way to upload several terabytes' worth of disk images to my server for off-site backup of assets. I've tried using WireGuard, but sometimes when I add or remove a user, the users before or after them become unusable and I have to do a lot of manual prodding to keep things in order. I want things to be less involved for me, while still being easy to use for them. So I set up Owncloud and added my shares as external storage. "This is awesome!" I thought; until I realized that with WebDAV we are limited to ~4gb no matter what, and with the client we have to sync the entire directory, with no option to mount the share as a local disk. The WebUI works but isn't really suitable for the workflow. Okay, so I did a bit of Googling and found out that Owncloud basically became Nextcloud, and Nextcloud has an experimental disk mount option! FANTASTIC! Wrong again. The disk mount is based on DokanFS and crashes when attempting to open external share folders. DokanFS has been a thorn in my side before; it's not pleasant to work with, and ends up resulting in me manually prodding things even more than WireGuard did. Back to square one. Seafile: it looks like it has a robust client application with the ability to mount to a drive letter on Windows machines! So I set it up... only to realize that it uses its own block-based storage system and requires all data to be added into Seafile itself. Is there any way to do what I'm trying to do without a ton of manual upkeep? In the past I used SSHFS and CIFS over SSH tunnels, and it worked perfectly, but I had to maintain keys and passwords manually. I also had to set up each and every client (I mean, I used a script to do it, but it still required ME to do the setup) and there was plenty of manual intervention.
I'd like something where users can be added and removed quickly and easily, forgotten passwords can be handled by the individuals, and the application is point-and-click for setup. End goals:
- WebUI
- Access to data on Unraid shares
- User management
- Mount as a local filesystem (drive letter on Windows, path on Linux) without syncing all contents
PS: For reference, I'm in a different state than the machines the images are being captured from. I need them backed up off-site for my own peace of mind. There are two backups currently (one a hot spare disk, the other the image that spare was created from) but nothing off-site. Prior to me becoming involved, there was no backup. Once it's backed up to my Unraid server, I will back up the important images to the cloud. (Some of them can be rebuilt; it's tedious but otherwise disposable data. Others are proprietary encrypted filesystems and must be byte-perfect.)
  21. (thread: 10gbit speeds?) The M.2 cache drive should be capable of delivering higher speeds than that. Use something more low-level than Samba for testing first: https://datapacket.com/blog/10gbps-network-bandwidth-test-iperf-tutorial/ Use iperf to test whether the link is actually capable of full 10gbit speed. If not, eliminating the switch is fairly easy (connect the two NICs directly and set a static IP on the PC in the same subnet), then rerun the iperf test. If the speed is rectified without the switch in place, several things can come up; I've heard that these MikroTik devices have a "dual OS" and that sometimes RouterOS works faster than SwitchOS, and vice-versa, depending on the internal hardware of the device itself. If the link itself tests fine, then you are looking at Samba performance tuning and ensuring both clients are using SMB3 and not SMB2.
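For reference, a minimal iperf3 run looks like this (the IP and flags are illustrative; the linked tutorial covers the details):

```shell
# On the Unraid box (server side):
#   iperf3 -s
# On the client (a Windows build of iperf3 works too):
#   iperf3 -c 192.168.1.10 -P 4 -t 30
# A healthy 10GbE link should report somewhere near line rate.

# Handy conversion when comparing link speed to disk throughput:
gbit_to_mb() { awk -v g="$1" 'BEGIN { printf "%.0f", g * 1000 / 8 }'; }
# 10 Gbit/s is only ~1250 MB/s, which a decent NVMe cache drive can absorb.
```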
  22. I'd wait for a response from Norco. I don't recall which backplane version I have - but mine is fully populated with 24 8tb drives and I have been able to hot swap without any damage to any drives or the backplanes. Mine use a single 4 pin molex power cable each. I know there were variants with SATA, and dual 4 pin molex prior to mine. I bought the case ~4 years ago now.
  23. +1 I have learned my lesson from this.
  24. Ah, I misunderstood the initial request. Yes, that makes sense and is approachable from a software engineering perspective: start at the source tree, level 0. If level 0 and all its subdirectories fit, just merge with the existing tree. If level 0 doesn't fit, descend to level 1 and iterate over the subdirectories in level 1 until something doesn't fit; once it doesn't fit, start a new tree on another drive. Simple enough. I was thinking this request was for some arbitrary content type; if it's just based on folders, it's not that hard.
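The placement loop in that description is essentially first-fit. A toy sketch under simplifying assumptions (a flat list of directory sizes and identical empty disks; the function name and interface are mine):

```shell
# assign_first_fit CAPACITY SIZE...
# Prints the disk index chosen for each size, opening a new disk whenever no
# existing disk has room. Sizes larger than CAPACITY are the unsolvable case
# described in the thread; real code would have to reject them.
assign_first_fit() {
  cap=$1; shift
  free=""      # space-separated remaining capacity per disk
  out=""
  ndisks=0
  for size in "$@"; do
    placed=0; i=0; newfree=""
    for f in $free; do
      if [ "$placed" -eq 0 ] && [ "$f" -ge "$size" ]; then
        f=$((f - size)); out="$out$i "; placed=1
      fi
      newfree="$newfree$f "
      i=$((i + 1))
    done
    free=$newfree
    if [ "$placed" -eq 0 ]; then
      free="$free$((cap - size)) "   # open a fresh disk for this directory
      out="$out$ndisks "
      ndisks=$((ndisks + 1))
    fi
  done
  printf '%s\n' "${out% }"
}
```

The recursive part of the request would sit on top of this: only when a whole level-N tree fails to place do you re-run the loop over that tree's level-(N+1) subdirectory sizes (e.g. from `du -s`).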
  25. What criteria would this be based on? There has to be the possibility of simple If-This-Then-That logic to decide when and where to split the data. This is a drawback of the merged-filesystem approach to bulk data storage that Unraid takes: if a file is larger than a single volume, it can't go on the array at all. Similarly, if a directory or set of directories is larger than a single volume, it must be split across multiple volumes. I can't think of a way any software engineer could approach this in a logical and universal fashion. Sure, we could write an implementation for one specific use case, but that's about the limit. The only thing I can think of would be a "keep together" list, allowing specific folders to be forced onto a singular volume; if those directories ever didn't fit in the free space of a disk, data would have to be stored on another disk. It's just not feasible imho.