doubleohwhatever

Members
  • Posts: 83

Everything posted by doubleohwhatever

  1. @limetech Thanks for taking steps to right this situation.
  2. Yes, eventually this will fade into the past. However, the fact that many of us are not surprised this happened is indicative of a larger problem with the way limetech governs unraid in relation to the community. Perhaps limetech should create and fill a communications/community liaison role. Overall communication, and particularly communication with the community, must improve to prevent future bridges from being burnt to a crisp.
  3. I know this likely won't matter to anyone but I've been using unraid for just over ten years now and I'm very sad to see how the nvidia driver situation has been handled. While I am very glad that custom builds are no longer needed to add the nvidia driver, I am very disappointed in the apparent lack of communication and appreciation from Limetech toward the community members who provided us with a solution for all the time Limetech would not. If this kind of corporate-esque "we don't care" attitude is going to be adopted, then that removes an important differentiating factor between Unraid and Synology, etc. For some of us, cost isn't a barrier. Unraid has an indie appeal with good leaders and a strong community. Please understand how important the unraid community is and appreciate those in the community that put in the time to make unraid better for all of us.
  4. We just caught them on sale. We're about to start switching them out for HGST He12 drives. They appear to perform the same as the IronWolf drives but run cooler.
  5. Another +1. It would be quite nice to have a cache pool just for VMs and docker containers.
  6. The wear on the drives is indeed around 50%. Given how well they have performed for me, I'm fine with that.
  7. Sure. I've attached two of them. I *think* these are the two oldest drives. SanDisk_SDSSDHII960G_151740411432-20180507-0825.txt SanDisk_SDSSDHII960G_151855401818-20180507-0823.txt
  8. I have one of the 8TB pro variant and 22 of the 8TB non-pro variant. I've had zero issues with them.
  9. I've been using these since late 2015: https://www.amazon.com/gp/product/B00M8ABHVQ/ I have had zero problems with them and they haven't been used lightly. My cache array runs eleven dockers and two VMs. One VM is a security camera server. 16 4k cameras are writing to the cache array 24/7 (not motion triggered but 16 24/7 4k h.265 streams). On top of that the cache drive is still used for storage array writes (mover runs nightly). They also survived 140F temps when a fan failed. They aren't the fastest but they just freaking work.
  10. +1 as well. I was going nuts trying to figure out what changed.
  11. Check out this plugin: https://lime-technology.com/forums/topic/43651-plug-in-unbalance/ That said, I'd really like to see such a feature built into unraid. This feels like it should be a core feature.
  12. Generally, if I don't need dual CPUs, over 64GB of memory or IPMI, I'll go with a gaming board. Otherwise, I'll go with a previous generation xeon platform (saves $). As for gaming and server motherboards, it's not the motherboards that you have to watch out for. It's the fact that xeons tend to have more cores/threads at slower clock speeds. Some games will be fine with a ton of slower cores and others will run better with fewer but faster cores. For what you're after, I'd take a close look at AMD Threadripper. Basically it provides a large number of cores/threads at higher clock speeds and 128GB of ram is doable. Well, that and a ton of PCIe lanes.
  13. I was dealing with a similar decision this past week (Ryzen 7 vs i7-8700). I ended up going with the 8700 since I could pass the integrated GPU to the plex docker for transcoding. After getting it running and seeing how well plex runs with hardware acceleration (leaving the CPU for other tasks), I feel like I made the right choice.
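      Roughly what that looks like, for anyone curious - a minimal sketch assuming the linuxserver/plex image (the --device line is the important part; on Unraid it goes in the container template's extra parameters rather than a hand-typed docker run):

        # Expose the Intel iGPU to the container by passing through /dev/dri.
        # Image name and host paths here are examples, not my exact setup.
        docker run -d --name=plex \
          --net=host \
          --device=/dev/dri:/dev/dri \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/media:/media \
          linuxserver/plex
        # Then turn on "Use hardware acceleration when available" under
        # Settings > Transcoder in Plex (hardware transcoding needs Plex Pass).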
  14. Hi All, I'm having trouble with the transmission docker:

      ===============================
      STARTING TRANSMISSION
      CONFIGURING PORT FORWARDING
      Transmission startup script complete.
      Generating new client id for PIA
      Fri Dec 29 19:05:58 2017 Initialization Sequence Completed
      Got new port 51897 from PIA
      transmission auth required
      localhost:9091/transmission/rpc/ responded: "success"
      Checking port...
      Error: portTested: http error 0: No Response
      ===============================

      Any idea what I'm doing wrong?
  15. If 10GbE is out of reach due to costs or other reasons, you could look at LACP. However, you'll still have to have a capable switch and nics. That said, if gigabit speeds are fine for your client machines but you have a bottleneck at the server then a good switch and nic bonding on the server might be enough.
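      For reference, a rough sketch of what an 802.3ad (LACP) bond looks like on a generic Linux box, assuming interfaces eth0/eth1 and a switch with an LACP group already configured. Unraid sets this up for you from its network settings GUI; the commands below are just the iproute2 equivalent:

        # Create the bond and enslave both NICs (they must be down to be enslaved).
        ip link add bond0 type bond mode 802.3ad
        ip link set eth0 down && ip link set eth0 master bond0
        ip link set eth1 down && ip link set eth1 master bond0
        ip link set bond0 up
        ip addr add 192.168.1.10/24 dev bond0   # example address
        # Keep in mind LACP won't make a single client transfer faster than
        # 1Gb/s; it helps when several clients hit the server at once.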
  16. Getting this error:

      -v "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/":"/logs":ro -v "/mnt/user/appdata/plexpy":"/config":rw linuxserver/plexpy

      Unable to find image 'Support/Plex:latest' locally
      Invalid namespace name (Support). Only [a-z0-9-_] are allowed.

      It seems to be complaining about the spaces in the path to /logs/. Not quite sure what to do about this since I can't control the plex logs directory.
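      Presumably the space in "Application Support" is splitting the volume argument, which is why docker reads "Support/Plex" as the image name. From a plain shell the whole -v argument has to stay quoted as one token - something like the sketch below (untested; if the Unraid template is assembling the command, the quotes may be getting stripped there instead):

        # Keep each -v argument as a single quoted token so the space in
        # "Application Support" doesn't split it into separate arguments.
        docker run -d --name=plexpy \
          -v "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/:/logs:ro" \
          -v "/mnt/user/appdata/plexpy:/config:rw" \
          linuxserver/plexpy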
  17. Careful with those WD green drives getting too hot in an attic. We've started moving away from them because they just can't take the heat long-term. We keep the climate control at 80F (with good air turnover and server air flow) to keep costs down and the WD green drives just can't take it more than a year or two. Over the past year we've lost nine drives and eight were WD green drives. I used to sell the WD green drives on eBay when we upgraded to larger disks but now I just toss the damn things.
  18. For anyone still looking for these cases: http://lime-technology.com/forum/index.php?topic=46019.0
  19. We're in the process of rebuilding all of our servers with a standardized set of hardware and coming up for rebuild next week I have two servers using D-316M/AVS-4/10 cases. They're both in good shape with only minor scratches from being moved around. All keys, etc would be included. No other hardware. Pictures would be included if I list them. The reason for this post is that we don't necessarily need to replace these two cases with our rebuild. We have a stack of our new custom cases but the D-316M cases technically meet our requirements for these two servers. So I'd essentially need to sell the two D-316M cases for the cost of our new cases in order for us to ditch them. That would be $200 each plus shipping (free if you pick up - zip 35242). Is there anyone interested in these two cases at that price? I'm offering these because a while back we were in need of D-316M cases before going the custom route and I would have gladly paid $200 for them. If no one has expressed interest by Monday (Feb 1st), we'll just go ahead and rebuild using the D-316M cases. If anyone finds this post out of place here, please feel free to remove it. Edit: For those wondering what the case cost new: http://lime-technology.com/d-316m-server-case/
  20. There's no ETA from what I was told. If you need something in the very near future, I'd go with another case. If you need more details I'd contact Tom.
  21. Thanks for the offer. However, I'm looking for the particular item I mentioned.