
notphilip

Members

  • Content Count: 16
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About notphilip

  • Rank: Member



  1. Cool thanks for the quick reply! I’ll go from 6.7 to the 6.8 beta for my next upgrade then.
  2. Does this update include drivers for the 9900K's iGPU?
  3. Interesting. That's a good idea. Is there a guide to formatting drives that are actively in use? For context, I have five 3 TB data drives with about 5 TB currently used across all of them. So should I move all the data to two drives, format three of them, then move the data to those three drives, and format the remaining two? Is there a guide or an easy way to do this? (I imagine the unbalance plugin is part of the answer, but I'm not terribly familiar with how it works.)
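As a back-of-the-envelope check on the shuffle plan above, you can verify that the data fits at each stage. A minimal sketch with the figures from the post (the `fits` helper and the 5% headroom are made up for illustration; real per-drive usage would come from `df -h`):

```python
# Sketch: sanity-check the two-stage consolidation plan.
# All numbers are illustrative, taken from the post above.

def fits(used_tb, drive_capacity_tb, n_drives, headroom=0.05):
    """True if `used_tb` of data fits on `n_drives` drives of
    `drive_capacity_tb` TB each, keeping `headroom` fraction free."""
    usable = n_drives * drive_capacity_tb * (1 - headroom)
    return used_tb <= usable

total_used = 5.0  # ~5 TB used across five 3 TB drives

# Stage 1: park everything on two 3 TB drives while three are
# reformatted. 2 * 3 TB * 0.95 = 5.7 TB usable, so 5 TB fits.
print(fits(total_used, 3.0, 2))  # True

# Stage 2: move the data onto the three fresh drives, then
# reformat the remaining two. 3 * 3 TB * 0.95 = 8.55 TB usable.
print(fits(total_used, 3.0, 3))  # True
```

Note the first stage is tight: 5 TB on 5.7 TB usable leaves little slack, so it's worth checking actual usage before starting.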
  4. That's my big question. How important is ECC RAM, particularly if my array is formatted as ReiserFS? I've had ECC in my old rig for the last five years, and no data corruption failures happened - but was that due to the RAM or the general reliability of my drives? I can't tell if it was worth the cost and compatibility hassle of finding ECC RAM. If ECC RAM doesn't matter much, then I'm probably better off with the 9900K option. If it does matter, then I'm better off with the 1920X. Right now, I'm feeling a bit conservative, so I'm leaning towards the 1920X for ECC RAM, and the hope that one day, GPU offloading in Plex will be good enough that a 1060/1920X vastly outperforms the 9900K. I also have the option to add more RAM and NVMe drives with the 1920X, while the 9900K is maxed out at two NVMe drives and four sticks of RAM.
  5. Interesting. But presumably the Unraid kernel will update over time and eventually gain support for the 9900K iGPU?
  6. I need help deciding between two setups.

     Setup 1: i9 9900K, 32 GB non-ECC RAM
     Setup 2: Threadripper 1920X, 32 GB ECC RAM, 1060 GPU

     Primarily, I want to optimize for Unraid, Plex transcoding (and conversion for sync), and being able to dedicate 2-4 threads to a Roon VM.

     Pros of setup 1:
     • Significantly lower power consumption (the Threadripper I'm currently testing idles around 150-160 W, while the 8700K I'm also testing idles around 70-80 W)
     • Better performance at conversion for sync, and presumably transcoding (the 8700K converts a 1080p movie to 720p about 17% faster, so I assume the 9900K would be faster still)
     • Much easier software configuration for Plex hardware acceleration

     Pros of setup 2:
     • ECC RAM - which I've had in my build for the last 5 years, though I'm not sure how much it 'saved' me
     • 8 more threads (4 more cores), so dedicating 2-4 of them to Roon won't affect the rest of the system too much
     • Potentially better transcoding/conversion performance in the future as support for offloading to the GPU improves
     • More PCIe lanes, so my NVMe drives get their full 3 GB/s throughput (they're bottlenecked at 2.5 GB/s with the 8700K I'm testing)

     What would you all do?
  7. Good idea. I've ordered replacement cables for power and data, and I'll swap them in tomorrow when they arrive. Now that I'm looking at the critical Amazon reviews of the data cables I'm currently using (Cable Matters mini-SAS to SATA), it appears other people are having issues with them as well. Turns out cheap cables are usually too good to be true. I ordered some higher-end StarTech cables; I've usually had good luck with their products.
  8. I got some read errors on this drive last night. I ran an extended SMART test today, and it seems to have no issues besides a raw value of 1 for UDMA CRC Error Count - which appeared about two months ago; I replaced the cable, and it hasn't risen since. I've purchased a new WD Red 3 TB drive to use as a replacement data drive, but I'm considering pre-clearing this 'failed' drive and then deploying it as a second parity drive. Anyone have any reasons why I shouldn't do this? I've attached the SMART report. It seems the log has been cleared since this morning, or else I'd post the errors from last night. In any case, they were "print_req_error: I/O error" read errors. WDC_WD30EFRX-68EUZN0_WD-WCC4N5SL1FH1-20190612-1708.txt
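For anyone watching the same attribute, a quick way to pull the raw UDMA CRC count out of a SMART report is to scan the attribute table. A sketch against a trimmed sample report (the `crc_error_count` helper is made up for illustration; on a live system the text would come from `smartctl -a /dev/sdX`):

```python
# Sketch: extract the raw UDMA_CRC_Error_Count from smartctl output.
# `report` is a trimmed, illustrative sample of the attribute table.

report = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   1
"""

def crc_error_count(smart_text):
    """Return the raw UDMA CRC error count, or None if the
    attribute line is not present in the report text."""
    for line in smart_text.splitlines():
        if "UDMA_CRC_Error_Count" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None

print(crc_error_count(report))  # 1 - matches the value in the post
```

Since CRC errors count link-level problems (cables, connectors) rather than platter defects, a value that stays flat after a cable swap is consistent with the drive itself being healthy.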
  9. Usenet on the desktop is getting about 100 MB/s; on the Unraid server, it is getting about 25 MB/s. Both are reasonably consistent with the speed test results.
  10. I don’t think so. The desktop on the same network can pull 1 Gbps without any issues, and WiFi devices are consistently getting 600 Mbps. So it’s specific to this server.
  11. Update: I just tried an Ubuntu live image on the same machine, and it also had slow speedtest results (yet somehow fast Xfinity speed test results). I tried changing Ethernet cables and ports on the router, but nothing changed. So it doesn't look like this is an Unraid issue after all; it's a hardware issue of some sort. I'm not sure exactly what's wrong, since the speeds are constrained on both the integrated and PCIe gigabit network ports. I guess it's time for a new mobo, RAM, and CPU? This setup is about 6 years old, after all.
  12. I just tried rolling back to 6.6.7 to see if speeds were better on that version, but they were not. I also upgraded to a 1.25Gb NIC, and that got my speed up to 900-1050 Mbps inside Firefox using Xfinity's speed test. With speedtest.net inside Firefox, I get 365 Mbps. But the new NIC had no effect on the host speeds.
  13. Interesting. I hadn't thought to try speedtest in a Firefox container. I just tried it and got about 850 Mbps. Close, but not quite there. It's also not clear why a subsequent speedtest to the same server using the Unraid speedtest plugin got 790 Mbps (and one right after that got 450).
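Single speedtest runs scatter a lot (790 then 450 Mbps above), so it helps to take several samples and compare the median and range rather than any one run. A minimal sketch, where the `summarize` helper and the sample values are illustrative, mirroring the numbers quoted in these posts:

```python
# Sketch: summarize several speed-test samples instead of trusting one run.
from statistics import median

def summarize(samples_mbps):
    """Return (median, min, max) for throughput samples in Mbps."""
    return median(samples_mbps), min(samples_mbps), max(samples_mbps)

# Hypothetical runs, matching the spread reported in the posts above:
samples = [790, 450, 850, 365]
mid, lo, hi = summarize(samples)
print(f"median {mid} Mbps, range {lo}-{hi} Mbps")
```

With a spread this wide, the median is the number worth comparing against the desktop's consistent 940 Mbps.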
  14. Hoping someone can help me figure this out. I recently upgraded to 1 Gbps / 40 Mbps internet. On my desktop, which is wired to the router, I'm getting consistent speedtest results of about 940 Mbps. Even WiFi devices are getting about 600 Mbps. On my Unraid server, wired to the same router, I'm getting speedtest results (via the plugin) that vary from 400-780 Mbps (and Usenet maxes out at about 250 Mbps). If I run "wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test1000.zip", I consistently max out at about 720 Mbps with an average of about 580 Mbps. I've disabled all bonding/bridging, and I've tried another 1 Gb Intel PCIe Ethernet card with no change in results. Any thoughts on what's causing the performance hit on my Unraid server? Should I try a 10 Gb PCIe card to see if it resolves the issue? I'd like to get a consistent 900+ Mbps, and at least 500 Mbps on Usenet.
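One easy trap when comparing these numbers: wget reports MB/s while speedtest reports Mbps, and the factor between them is 8 (1 MB/s = 8 Mbps). A quick sketch of the conversion using the figures from the post above (the helper names are made up for illustration):

```python
# Sketch: convert between wget's MB/s and speedtest's Mbps.
# 1 byte = 8 bits, so 1 MB/s = 8 Mbps.

def mbytes_to_mbits(mb_per_s):
    return mb_per_s * 8

def mbits_to_mbytes(mbit_per_s):
    return mbit_per_s / 8

# A wget peak of 90 MB/s corresponds to the 720 Mbps quoted above:
print(mbytes_to_mbits(90))    # 720

# The desktop's 940 Mbps would show in wget as roughly 117.5 MB/s:
print(mbits_to_mbytes(940))   # 117.5
```

Keeping both tools' numbers in the same unit makes it easier to see that the server's ~580 Mbps average really is well below the desktop's ~940 Mbps, not just a reporting difference.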