paulrbeers

Members · 13 posts
Everything posted by paulrbeers

  1. I have several pools:
     Pool 1: 6 x 8TB spinning Seagates
     Pool 2: 2 x 1.92TB NVMe Samsungs
     Pool 3: 3 x 3.84TB NVMe Samsungs
     Pool 4: 5 x 2TB NVMe (mixed)
     All of them are in RAID 0, and I can't do better than about 600 MB/s when transferring from my Windows 10 machine to my Unraid server over a 10Gbps network. I've installed iperf on both the Unraid server and my desktop, and iperf3 consistently reports about 1 GB/s (roughly 9Gbps). I also have a single SSD in the Unraid server for my VMs; if I transfer files to that over Samba from the Windows 10 machine, I get 1 GB/s. So why do the pools consistently top out at 600 MB/s? I get that the spinners might make sense, but the NVMe drives? They aren't the best NVMe, but when tested individually before deploying them to the server they all pulled over 700 MB/s, and most pulled well over 1 GB/s. Is there some limitation in btrfs? I've tried RAID 0 and "single" mode, and nothing does better than 600 MB/s.
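     One way to separate network throughput from pool throughput is to benchmark each in isolation on the Unraid console. A minimal sketch, assuming iperf3 is installed on both ends; /mnt/nvmepool is a placeholder for the pool's actual mount point:

        # network only: run "iperf3 -s" on Unraid, then from the Windows box:
        iperf3 -c <unraid-ip> -P 4

        # pool only, run on the Unraid console (no network or Samba involved)
        dd if=/dev/zero of=/mnt/nvmepool/ddtest bs=1M count=16384 oflag=direct
        dd if=/mnt/nvmepool/ddtest of=/dev/null bs=1M iflag=direct
        rm /mnt/nvmepool/ddtest

     If the local dd numbers also sit near 600 MB/s, the ceiling is in the pool itself rather than in Samba or the NIC.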
  2. Please see your last question; I explained the basics of Unraid there. Unraid does NOT treat the array like a traditional file server where all drives are used for every read and write. Unlike traditional RAID 5, where blocks are striped across all disks, Unraid writes each file to a single drive (plus the parity drive, if you're using one). So you'll see no read performance, and no write performance (unless you use a cache for writes), beyond what a single drive can do.
  3. I guess I don't understand your question... Are you getting 200-400 MB/s on reads or writes? Hard drives? SSDs? What does your array look like? At a basic level, Unraid reads and writes each drive on its own: it presents the array as one large drive in a share, but reads and writes don't span multiple drives the way traditional RAID 5 does. So a 200 MB/s read would be about right, because that's all a single drive is capable of. The cache drives are simply used for writes, and files are then moved to the spinning array once a day (usually, unless you've configured something non-standard). Now, how are you getting 400 MB/s? I'm assuming that's reading from the cache drive? Is it a file you just copied over and then tried to read right back? In that case it's still on the cache drive and hasn't been through the nightly move yet. How fast is your cache drive? 400 MB/s could be about right for an SSD that small...
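     A quick way to check where a read is actually coming from; a sketch assuming the standard /mnt/cache and /mnt/diskN mount points (the share and file names are just examples):

        # see whether the file still lives on the cache or has been moved to the array
        ls -l /mnt/cache/Movies/test.mkv /mnt/disk*/Movies/test.mkv 2>/dev/null

        # time an uncached read directly from the device it turned out to be on
        dd if=/mnt/disk1/Movies/test.mkv of=/dev/null bs=1M iflag=direct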
  4. I've run two Unraid servers for several years now, and they've had plenty of upgrades over time. I've also been running a 10Gbps network between my servers and office desktops for years. My main Unraid server has an array of 6 x 8TB drives in single-parity mode (40TB usable). My other Unraid server just has 5 x 8TB drives with no parity; I use it purely as a backup to my main server, with an rsync script that copies changes over each night around 4AM. Dual 500GB SSDs in RAID 0 in my main server give me awesome write speed from any of my desktops, and since I never write more than 1TB of data in a given day, that works great for writes.
     But the read speeds... oh, the read speeds. I understand the Unraid philosophy of spinning down drives, and if you're running just 1Gb Ethernet, even a single hard drive can max that out. But with 10Gbps networking I'm leaving a lot of read capacity behind.
     So my thought: take 5 x 8TB drives and put them into RAID 0 as a pool. In the array I'd keep as many drives as I need to cover the data in the pool (right now I only have about 20TB of actual data, so 4 x 8TB drives in single parity should work). I'd then run rsync every 6 hours to copy from the pool to the array in case the pool becomes corrupt (see the sketch below). That should give me roughly 700-1000 MB/s reads and writes from the pool; still not saturating the 10Gbps network, but more than enough for what I do. Thoughts? Is there a better way to do this?
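     A minimal sketch of the 6-hour pool-to-array sync, assuming a root cron entry; the pool and share names below are placeholders:

        # crontab entry: run at minute 0, every 6 hours
        # 0 */6 * * * /usr/local/bin/pool_backup.sh

        #!/bin/bash
        # pool_backup.sh - mirror the RAID 0 pool onto the parity-protected array
        rsync -a --delete /mnt/fastpool/media/ /mnt/user/media_backup/

     The --delete flag keeps the array copy an exact mirror; drop it if you'd rather the backup keep files the pool has lost.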
  5. Just wanted to say this plugin is great! Although it makes me realize I need a better GPU than a P400 (transcoding just one 4K stream to 720p takes 70% of my GPU memory). The update to handle multiple GPUs is awesome: when I first installed it, it picked up my currently unused GPU (just an extra I keep in the box for testing VMs), but being able to switch to the one I use for Plex is great! Thank you!!
  6. Just tried 6.8.2 myself, but no dice. 6.8.1 seems to have no issues...
  7. Umm, I'm literally trying to do only 2 streams. 4K to 1080p requires about 1300MB of vRAM per stream, so two streams need roughly 2.6GB; having only 2GB means the second stream just spins. https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding So no, I'm not trying to go beyond 2 streams, I'm literally trying to solve a problem with only 2 streams. I have full 4K MKV rips. Edit: I was just going to purchase a 1050 Ti since it has 4GB, but Newegg had 1660s over the weekend for only $113, so it was a no-brainer...
  8. Really quick question: I'm planning to upgrade from a Quadro P400 to a GTX 1660 to get additional transcoding headroom (I can't successfully transcode two 4K streams at once because the P400 only has 2GB of vRAM). Right now the P400 is completely dedicated to the Docker container. The problem is that the 1660 is a dual-slot card, and to make room I'll have to pull both the P400 and the GT 710 I currently use for Unraid's video output. Based on the comment at the beginning of this thread, I assume that as long as I'm not using the Unraid GUI, just the command line console, I could use the 1660 both for the Unraid console and for the Plex Docker? TL;DR: if a GPU is only used for the Unraid command line, can it also be passed into the Plex docker for transcoding? I'm sure this has been discussed before, but at almost 70 pages it's hard to find the answer...
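     For reference, the container side of this is just the usual Unraid-Nvidia settings, keyed off the card's UUID; a sketch where the UUID is a placeholder:

        # on the Unraid console, grab the UUID of the 1660
        nvidia-smi -L
        # -> GPU 0: GeForce GTX 1660 (UUID: GPU-xxxxxxxx-...)

        # Plex docker template:
        #   Extra Parameters:           --runtime=nvidia
        #   NVIDIA_VISIBLE_DEVICES:     GPU-xxxxxxxx-...
        #   NVIDIA_DRIVER_CAPABILITIES: all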
  9. Up until a couple of weeks ago this setup worked flawlessly. System: Ryzen Threadripper 1950X, Gigabyte X399 board, 48GB of RAM, 1 x Nvidia GT 710 (solely for Unraid to use), 2 x Radeon RX 470s (for VMs).
     All of a sudden, I no longer get both of my RX 470s. If I look at System Devices, I only see one of them. If I run lspci -v in the terminal, I can see a bunch of devices in the 4x:xx.x range that aren't showing in System Devices, and that's where my second GPU appears (43:00.0, to be exact). Also in the 4x:xx.x range is my Mellanox ConnectX-2, and that is definitely working (and should show up), since it's the only connected network device in the server.
     I've tried turning ACS Override on just to see if that would somehow magically fix it (it shouldn't have, since the card wasn't grouped with anything). I even updated to 6.8.0 (I was on 6.8.0-rc7, I believe). Short of pulling the GPU and testing it in another rig, I'm out of ideas. But the fact that all the other 4x:xx.x devices are missing from the IOMMU groups makes me wonder if there's a setting I'm missing or an issue with my Unraid setup.
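     One way to see whether those 4x:xx.x devices are being assigned IOMMU groups at all is to read them straight out of sysfs from the console; a sketch using the standard Linux layout, nothing Unraid-specific:

        #!/bin/bash
        # print every PCI device together with its IOMMU group number
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#*/iommu_groups/}; g=${g%%/*}
            printf 'IOMMU group %3s: ' "$g"
            lspci -nns "${d##*/}"
        done | sort -V

     If the 43:xx.x devices don't appear in the output at all, the IOMMU isn't covering that part of the PCIe topology, which would point at a BIOS setting or firmware update rather than Unraid.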
  10. Yeah, that was a big failure. My hope that Unraid would somehow switch to the other GPU when boot hit my vfio-pci stub for the 1060, and fall back to the GT 710, did not pan out. Either it continued to load headless or it just froze; I can't be sure, as I didn't even have the server on my LAN for the test, but either way it didn't do what I wanted. So at this point it looks like a ROM dump is necessary and it'll run completely headless, unless someone else has an idea for forcing Unraid onto the 710 as soon as it begins loading.
  11. If I stub all the GPUs except the GT 710, would that work? Would Unraid default to the GT 710?
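     For reference, stubbing at boot is done with vfio-pci ids on the kernel line in /boot/syslinux/syslinux.cfg; a sketch where the vendor:device pairs are examples for a 1060 and its HDMI audio function, taken from lspci -nn on your own box:

        # lspci -nn | grep -i nvidia   -> e.g. [10de:1c03] and [10de:10f1]
        label Unraid OS
          kernel /bzimage
          append vfio-pci.ids=10de:1c03,10de:10f1 initrd=/bzroot

     Whether stubbing alone changes which card gets the boot console is a separate question, though: the firmware picks the primary display before the kernel ever loads, so this may only keep drivers off the 1060s rather than move the console to the 710.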
  12. I have a problem: I'm building a gaming server with a Threadripper 2970WX that will have 3 gaming GPUs (one RTX 2060 and two GTX 1060s), so that my children and I can play our games anywhere on our LAN using either Parsec or Moonlight. The other GPU, which I want Unraid itself to use, is a PCIe x1 GT 710; using the 710 leaves my x8 and x16 slots free for the gaming GPUs and my 10Gb Mellanox NIC.
     The problem is that my motherboard prioritizes the x8 and x16 slots over the x1 slots when choosing the boot GPU. So even though I have the 10Gb NIC in PCIe slot 1 and the GT 710 in slot 2, the BIOS picks slot 3 (a 1060) as the primary. If I pull the Mellanox card and put the 710 in slot 1, it boots using the 710, but then I lose my 10Gb NIC. Unfortunately the BIOS does not let me select which slot is the primary GPU (my other Threadripper motherboard does allow this, but I don't think that board can handle the power draw of 4 GPUs).
     So I was hoping recent advancements in Unraid might allow setting the primary GPU in some way at boot when multiple GPUs are detected. Yes, I could just scrap the 710 and use a BIOS ROM dump to pass the primary GPU into a VM, but then I'd lose the Unraid GUI when I'm sitting next to the server, and while I've passed a primary GPU into a VM before using SpaceInvaderOne's instructions, it was kind of a pain in the arse. I've thought about dropping Unraid on this server altogether and going straight Ubuntu and KVM, since I don't technically need the RAID storage on this machine, but I really do like the web interface rather than resorting to VNC for remote administration. Any help would be greatly appreciated!
  13. Here's my dilemma. Use case: I have a 10Gb network at my house using SFP+; all my "servers" and my main desktop are connected via the 10Gb network, and the rest of the network is on 1Gb ports. The reason is that I transfer a lot of video files between my main desktop and my servers (some being full extended Blu-ray rips of 70+GB).
     Issue: I currently use OpenMediaVault with 4 x 8TB drives in RAID 0 (striped) with a nightly rsync backup, so I get 500 MB/s reads. If I switch to Unraid, I'll only get around 150 MB/s reads, since it will only pull from one hard drive. Writes I'm not worried about, since I plan to use a couple of decent SSDs to keep my writes close to 500 MB/s.
     My thought is to put 6-7 x 8TB hard drives into my Unraid server and dedicate 4 of them to an OpenMediaVault virtual machine, so that OMV can create a striped array for media storage. One 8TB drive would be left as network storage for Unraid (for ISOs and other data stores), and at least one 8TB drive would still be used by Unraid for parity. So I'd still get the benefit of individual drives and parity calculations from Unraid, but I'd also end up with a network store that does 500 MB/s or faster. Thoughts? Is there a way to do this with Unraid directly, without having to run an OMV VM?
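     If you do go the OMV-VM route, one hedged sketch for handing whole disks to the VM is a raw block-device entry per drive in the VM's libvirt XML, using stable /dev/disk/by-id paths so the guest always sees the same devices (the disk id below is a placeholder):

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/ata-ST8000DM004-XXXXXXXX'/>
          <target dev='vdb' bus='virtio'/>
        </disk>

     Note that drives given to the VM this way sit outside the Unraid array, so Unraid's parity would only protect the drives that remain in the array.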