Stokkes

Members
  • Content Count

    335
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Stokkes

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Ottawa

  1. I'm looking on eBay for these cases, but there is a price difference between the SAS2 and SAS3 versions, at least from reputable sellers: about 450 USD. SAS3 seems like it would be the ideal choice for long-term use and would definitely be quicker for parity checks, rebuilds, etc.
  2. First, thank you so much for your time and replies today, very much appreciated! I haven't built a server in 6-7 years. You're right, I would be using 2x NVMe (1TB) where the writes would occur, and the mover would just move the data 1-2 times a day. The drives I plan to put in are all Seagate Exos X16 (16TB), so about 384-576TB of space, with 2 drives dedicated to parity. I guess I'm concerned about parity checks, rebuilds, etc. As long as I can hit 100MB/s across all drives during a parity check, we're looking at roughly 44-48 hours for a parity rebuild (rough math is sketched after this list). My use cases are pretty simple: nothing high throughput, about 1-2TB per day being moved to the array from the NVMe, plus regular scheduled parity checks. I was also looking at this thread, specifically the image that has 1 LSI 2008 connected to 24 drives, which shows a max of about 95MB/s I think (if that's correct). Cheers,
  3. I actually haven't bought the case yet, but there are two I could buy: an older chassis (now discontinued) with the SAS2 backplane and a newer one with SAS3. The SAS3 one is obviously significantly more expensive. I guess I'm worried about 1 HBA @ 6Gbps being a bottleneck for 24-30 drives.
  4. Hmm, it may be worth investing in one of those HP 12Gbps HBAs then? I worry about putting 24 drives on 1 HBA. On the Supermicro 847, the rear drives are on a different backplane, so I could use the second HBA for the back.
  5. Actually, since the chassis has a built-in expander (it's the BPN-SAS2-846EL1), I think the 2x cards I have now should be able to sustain 30 drives on 4 ports (see the bandwidth sketch after this list).
  6. Hey all, I'm building a new server on 10th-gen Intel (chassis: Supermicro CSE-847, mobo: Supermicro X12SCA-F) and, due to the chassis / motherboard, I have the following constraints: the card must be low-profile, and the mobo only has 2x PCIe x8 slots and 1x PCIe x4. Which HBA would people recommend for this build that would work in Unraid? I currently have older M1015 (9240-8i) cards, but I'd be maxed at 16 drives with no PCIe slots left on the motherboard. So I'd like to buy 1 (or 2, if speeds would improve) new HBAs to support up to 30 drives in this new build. Thanks!
  7. Wondering if someone can help. I built a Docker container for Plex Sync (https://github.com/jacobwgillespie/plex-sync), which syncs your watch lists between multiple Plex servers. I'm doing this because I'm slowly moving my Plex server to a Linode VPS backed by Amazon Drive (encrypted with EncFS), and I can't seem to get it running with User Scripts. The plex-sync tool is a bit finicky and requires that I run the Docker command this way:
     docker run -ti --rm plexsync plex-sync TOKEN@source_ip/1 TOKEN@destination_ip/1
     The -t allocates a pseudo-TTY and -i runs it in interactive mode; without these, the plex-sync tool (built on NodeJS) simply will not run for some reason. Trying to run this via User Scripts, I get this in the logs:
     Script Starting Mon, 24 Oct 2016 18:47:01 -0400
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/PlexSyncWatched/log.txt
     cannot enable tty mode on non tty input
     This is Docker throwing the error, I'm guessing due to the environment User Scripts runs under (a possible workaround is sketched after this list). Any idea how I can get around this? I'd like to run it on an hourly basis as I transition my library to my VPS/ACD. Thanks!
  8. Yes just done a guide to do a fresh install of Sierra on unRAID. Hope you find it useful. What are the chances we can upgrade an existing install?
  9. This looks good, but there's a bit of a bug if your unRAID is on a non-standard port (like mine). I can add the unRAID server on port 8080 by adding it as 10.0.1.2:8080. However, this seems to break the mount function; see image:
  10. Does this work on 6.2? I noticed "screen" has stopped working since upgrading to 6.2. I've tried uninstalling and reinstalling the plugin. Ideas?
  11. Hey all, I had an El Cap VM running for a while, but I recently upgraded to 6.2 rc3 and the VM just sits at the grey startup screen with the white status bar and never boots into OS X. Any ideas how I can fix this?
  12. Yep, this container completely broke for me. Sent from my iPad using Tapatalk
  13. So, an update on my crashing (I'm still having it). The Desktop app continually loses connection to the backup engine; however, the backup engine is still running. Looking at the logs, the backup occurs every day, as it's supposed to. Here's the behaviour I'm experiencing:
      1. Start the CrashPlan docker container.
      2. Everything works fine.
      3. After a few days (I've seen anywhere from 3 up to 7-8 days), attempting to view the WebUI (Desktop) results in a "Connection to backup engine lost." error when connecting to the interface.
      4. Clicking the "OK" button in the WebUI dismisses the error but crashes the Desktop client, so the WebUI is a black screen.
      5. Looking at the logs on a nightly basis, the backup is still occurring; I simply can't access the WebUI since it's crashed.
      The only way to revive the WebUI is to restart the Docker container. This is not an engine out-of-memory error; the logs indicate that the Java heap is fine after a backup has completed. This has obviously only been happening since the Desktop client / server were merged into a single container. Any ideas?
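
Rough rebuild-time math referenced in post 2 above. This is only a back-of-the-envelope sketch using the numbers from that post (16TB drives, a sustained 100MB/s across the array); real-world rebuild speed varies with drive position and controller overhead:

    # parity rebuild estimate, assuming a steady 100 MB/s for the whole pass
    # 16 TB = 16,000,000 MB (decimal); time = capacity / speed, converted to hours
    echo $(( 16000000 / 100 / 3600 ))   # prints 44 -> roughly 44 hours, so "about 48 hours" is the right ballpark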
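
Bandwidth sketch referenced in post 5 above. The assumption here is that the BPN-SAS2-846EL1's expander is uplinked to one HBA over a single x4 SAS2 wide port (4 lanes at 6Gbps each, with 8b/10b encoding leaving roughly 600MB/s usable per lane), shared by the 24 front-backplane drives; exact numbers depend on cabling and how the rear backplane is connected:

    # x4 SAS2 wide port: 4 lanes * ~600 MB/s usable per lane, shared by 24 drives
    echo $(( 4 * 600 / 24 ))   # prints 100 -> ~100 MB/s per drive during a full-array check

That lines up with the ~95MB/s figure mentioned in post 2, and since the rear drives sit on a separate backplane (per post 4), they could hang off the second HBA without eating into this budget.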
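
Possible workaround for the TTY error in post 7 above, sketched rather than verified: the "cannot enable tty mode on non tty input" message comes from the -t flag, which needs a real terminal on stdin, and User Scripts (like cron) does not provide one. Two things worth trying, reusing the image name and placeholders from that post:

    #!/bin/bash
    # Option 1: drop -t and keep -i; many CLI tools only need stdin held open, not a terminal
    docker run -i --rm plexsync plex-sync TOKEN@source_ip/1 TOKEN@destination_ip/1

    # Option 2: if plex-sync truly insists on a terminal, wrap the call in util-linux's `script`,
    # which allocates a pseudo-terminal even when there is no controlling TTY
    script -q -c "docker run -ti --rm plexsync plex-sync TOKEN@source_ip/1 TOKEN@destination_ip/1" /dev/null

The TOKEN and IP placeholders are the ones from the original command; whether plex-sync accepts option 1 is untested here.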