Stokkes's Achievements


  1. Hey, not sure if I'm the only one, but on an Intel 10th Gen (i9-10900), running `intel_gpu_top` results in an error:

     ```
     root@Tower:~# intel_gpu_top
     Failed to detect engines! (No such file or directory)
     (Kernel 4.16 or newer is required for i915 PMU support.)
     ```

     I'm running 6.9.1. The iGPU itself works: Plex can HW transcode (so can ffmpeg), and `intel_gpu_top` runs fine if I reboot the same machine with my Ubuntu SSD plugged in, which rules out a BIOS issue. Any ideas? It's definitely Unraid-related, but who knows with what.
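     For anyone else debugging this: `intel_gpu_top` reads engine stats from the kernel's i915 PMU, so a quick sanity check is whether that event source is exposed in sysfs at all. A minimal sketch (the path is the standard one on recent kernels; a working `/dev/dri` device does not guarantee the PMU side is present):

     ```shell
     #!/bin/sh
     # Check whether the kernel exposes the i915 perf/PMU event source
     # that intel_gpu_top relies on. Transcoding via /dev/dri can still
     # work even when this is missing.
     if [ -d /sys/bus/event_source/devices/i915 ]; then
         echo "i915 PMU present"
     else
         echo "i915 PMU missing"
     fi
     ```

     If it prints "missing" while transcoding works, the problem is in how the kernel/PMU side was built or loaded, not in the GPU itself.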
  2. I think the point is that most people would be upset at needing an extra 400MB of RAM for a driver they'll never use. Sure, it's 100MB on the USB stick, but once it's extracted at boot and stored directly in RAM, it uses 400MB. That's a lot.
  3. The 42% used isn't 42% of 31.4GB - I'm guessing that's how you're calculating to get to 13GB used (42% of 31.4 = 13). The 31.4GB is the usable RAM in your server (32GB). By default, the docker.img size is 20GB - you can check what it's set to in Settings -> Docker. If your image is 42% full, then it's probably the default (20GB), at least based on the container sizes you pasted. Cheers,
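     To make the arithmetic concrete (assuming the default 20GB docker.img):

     ```shell
     # 42% of the 20GB docker.img, not 42% of the 31.4GB of usable RAM.
     IMG_GB=20
     PCT=42
     USED=$((IMG_GB * PCT / 100))     # integer math: 8 (really 8.4GB)
     echo "docker.img used: ~${USED}GB"
     RAM_USED=$((314 * PCT / 1000))   # 42% of 31.4GB = ~13GB (the mix-up)
     echo "42% of RAM would be: ~${RAM_USED}GB"
     ```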
  4. I'm looking on eBay for these cases, but there's a price difference between the SAS2 and SAS3 versions, at least from reputable sellers - about 450USD. SAS3 seems like it would be the ideal choice for long-term use and would definitely be quicker for parity checks, etc.
  5. First, thank you so much for your time and replies today, very much appreciated! I haven't built a server in 6-7 years. You're right, I would be using 2x NVMe (1TB) where the writes would occur, and the mover would just move the data 1-2 times a day. The drives I plan to put in are all Seagate Exos X16 (16TB), so about 384-576TB of space, with 2 drives dedicated to parity. I guess I'm concerned about parity checks/rebuilds/etc. As long as I can hit 100MB/s across all drives during a parity check, we're looking at about 48 hours for a parity rebuild. My use cases are pretty simple - nothing high-throughput, about 1-2TB of transfer per day being moved to the array from the NVMe, plus regularly scheduled parity checks. I was also looking at this thread, specifically the image with 1 LSI 2008 connected to 24 drives, which shows a max of 95MB/s I think (if that's correct). Cheers,
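     That 48-hour figure can be sanity-checked with a quick back-of-envelope (decimal units, sustained 100MB/s on every drive in parallel):

     ```shell
     # Parity check/rebuild time is driven by the largest drive, since
     # all drives are read in parallel: 16TB at a sustained 100MB/s.
     DRIVE_MB=$((16 * 1000 * 1000))   # 16TB in MB (decimal)
     SPEED_MB_S=100
     SECS=$((DRIVE_MB / SPEED_MB_S))
     echo "~$((SECS / 3600)) hours"   # ~44h, so "about 48 hours" has margin
     ```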
  6. I actually haven't bought the case yet, but there are 2 I could buy: an older, now-discontinued chassis with the SAS2 backplane, and a newer one with SAS3. The SAS3 one is obviously significantly more expensive. I guess I'm worried about 1 HBA @ 6Gbps being a bottleneck for 24-30 drives.
  7. Hmm, it may be worth investing in one of those HP 12Gbps HBAs then? I worry about putting 24 drives on 1 HBA. On the Supermicro 847, the rear drives are on a different backplane, so I could use the second HBA for those.
  8. Actually, since the chassis has a built-in expander (it's the BPN-SAS2-846EL1), I think the 2x cards I have now should be able to sustain 30 drives on 4 ports.
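     A rough bandwidth sanity check for that setup, assuming ~600MB/s usable per 6Gbps SAS2 lane after 8b/10b encoding (the usual rule of thumb):

     ```shell
     # One x4 SAS2 link from HBA to expander, shared by the 24 front drives.
     LANES=4
     MB_PER_LANE=600       # ~usable per 6Gbps lane after 8b/10b overhead
     DRIVES=24
     LINK_MB=$((LANES * MB_PER_LANE))
     echo "~$((LINK_MB / DRIVES)) MB/s per drive during a parity check"
     ```

     So a single x4 link works out to roughly 100MB/s per drive across 24 drives; a second HBA (or the rear backplane on its own card) adds headroom on top of that.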
  9. Hey all, I'm building a new server on 10th gen Intel (chassis: Supermicro CSE-847, mobo: Supermicro X12SCA-F), and due to the chassis/motherboard I have the following constraints: the card must be low-profile, and the mobo only has 2x PCIe x8 slots and 1x PCIe x4. Which HBA would people recommend for this build that would work in Unraid? I currently have older M1015s (9240-8i), but I'd be maxed out at 16 drives with no PCIe slots left on the motherboard. So I'd like to buy 1 new HBA (or 2, if speeds would improve) to support up to 30 drives in this new build. Thanks!
  10. Wondering if someone can help. I built a docker for Plex Sync, which syncs your watch lists between multiple Plex servers. I'm doing this because I'm slowly moving my Plex server to a Linode VPS backed by Amazon Drive (encrypted with EncFS), and I can't seem to get this running with User Scripts. The plex-sync tool is a bit finicky and requires I run the Docker command this way:

      ```
      docker run -ti --rm plexsync plex-sync TOKEN@source_ip/1 TOKEN@destination_ip/1
      ```

      The `-ti` runs it with a pseudo-TTY in interactive mode. Without these, the plex-sync tool (built on NodeJS) simply won't run for some reason. Trying to run this via User Scripts, I get this in the logs:

      ```
      Script Starting Mon, 24 Oct 2016 18:47:01 -0400
      Full logs for this script are available at /tmp/user.scripts/tmpScripts/PlexSyncWatched/log.txt
      cannot enable tty mode on non tty input
      ```

      This is Docker throwing the error, I'm guessing due to the environment User Scripts uses to run. Any idea how I can get around this? I'd like to run this script on an hourly basis as I transition my library to my VPS/ACD. Thanks!
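      One workaround sketch (a hypothetical wrapper, not tested against plex-sync itself): `-t` needs a real terminal, while `-i` alone just keeps stdin open, which is often all a NodeJS tool actually needs, so the wrapper only asks for a TTY when one exists:

      ```shell
      #!/bin/sh
      # Choose docker flags based on whether stdin is a real terminal.
      # Interactive shells get -ti; cron/User Scripts runs drop -t,
      # which avoids "cannot enable tty mode on non tty input".
      if [ -t 0 ]; then
          FLAGS="-ti"
      else
          FLAGS="-i"
      fi
      # TOKEN/IP placeholders as in the original command.
      echo docker run "$FLAGS" --rm plexsync plex-sync TOKEN@source_ip/1 TOKEN@destination_ip/1
      ```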
  11. Yes, I've just done a guide for a fresh install of Sierra on unRAID. Hope you find it useful. What are the chances we can upgrade an existing install?
  12. This looks good, but there's a bit of a bug if your unRaid is on a non-standard port (like mine). I can add the unRaid server on port 8080, but this seems to break the mount function, see image:
  13. Does this work on 6.2? I noticed "screen" has stopped working since upgrading to 6.2. I tried uninstalling and reinstalling the plugin. Any ideas?