Necrotic

Members

  • Posts: 150
  • Gender: Male
  • Location: East Coast, USA


  1. So I was wondering, does this work if you haven't started the array? (i.e., things like Docker containers don't work until then.) I have it set to restart on power after a power loss, but I need to log in to start the array; would this let me do that remotely?
  2. Just wanted to chime in and say great work. Thank you so much for what you do!
  3. I mean, a simple stubby male-to-female one would be fine. We already have SATA extensions to daisy-chain things, and this wouldn't even need a cable...
  4. I think I have 4+ shucked drives, all of them running for multiple years. No issues on my end. Most are either WD Red or the white-label ones that are basically Red drives. I don't think shucking changes the lifetime; for that you may want to look at the Backblaze report on their failure rates. Some models are just more failure-prone than others (apparently WD were higher than average), but keep in mind they use those in datacenters, so usage is more intense. Personally, for the price I am OK with the slightly higher risk and haven't had any issues.
  5. Background: the new SATA power spec changed the way the 3rd pin works; it now disables the drive if it receives a 3.3 V signal. This was intended for datacenter use and usually isn't relevant for consumer drives at this time. However, a lot of people have been buying external drives and taking them out of the enclosure (shucking), as the prices are much better. For years the usual solutions have been to add tape over the pin, remove the pin, modify the 3.3 V power from the cable, use a Molex-to-SATA adapter (since Molex doesn't carry 3.3 V), etc. They each have downsides. My main point: it's driving me nuts that no one has just made an adapter that goes on the end of the SATA power cable and simply omits that 3rd pin connection. Has anyone found something like this, from a reliable source? It's the simplest solution out there and so obvious, but I can't seem to find anyone who has done it.
  6. I guess as an extension of this: my parity drive is bigger (12TB) than the other drives (<10TB). During a parity check, once it gets past the maximum size of the other disks, they spin down, but I think I saw it still continue the check with just the parity drive for those last 2TB. I don't usually watch parity checks, so I haven't confirmed that's absolutely the case, but it seemed odd that it would waste time rechecking the parity area that isn't protecting a data disk.
  7. This is what I have found. I guess the only relevant ones would be the improved routing and reduced overhead, but I am unsure how that translates into real-world use: https://www.networkcomputing.com/networking/six-benefits-ipv6 Mostly I am trying to prepare for the future; eventually I may have to enable this stuff, and I want to better understand what I need to do now to prepare for then.
  8. So I have been looking at enabling IPv6 on my network. My main concerns are that you don't get the incidental isolation we currently get with NAT, and whether my Unraid server is prepared for it. I have tried searching for tips/suggestions but with no success. Is there a particular set of steps to ensure that my Unraid server is secured against intrusion? (e.g., can someone access my GUI/dockers remotely? Do I need to disable telnet or change the Unraid password?)
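For anyone landing here with the same question: a minimal, hypothetical `ip6tables` sketch of the kind of default-deny inbound policy that replaces the incidental isolation NAT gives you on IPv4 (this rule set is an assumption for illustration, not Unraid's stock configuration):

```shell
# Hypothetical default-deny inbound IPv6 policy (not Unraid's stock config).
# Drop unsolicited inbound traffic; keep replies, loopback, and ICMPv6 working.
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT ACCEPT

# Allow return traffic for connections the server itself initiated.
ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Loopback is required by many local services.
ip6tables -A INPUT -i lo -j ACCEPT

# ICMPv6 is mandatory for IPv6 to function (neighbor discovery, path MTU).
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
```

Ideally the same default-deny stance is also enforced at the router's IPv6 firewall, so the server rules are a second layer rather than the only one.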
  9. @SlrG Thanks! I have updated the original post back to the .cnf file.
  10. Right, but I am just wondering if there would be a way to set up something like a master and a slave Unraid server, which would integrate it all under a single SMB share, or something similar.
  11. I am not sure how to make this work with Unraid, but Linus just talked about using GlusterFS to make multiple separate machines show up as a single share.
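For reference, the GlusterFS approach mentioned above looks roughly like this; the hostnames (`server1`, `server2`), brick paths, and volume name `media` are made-up placeholders, not anything from the video:

```shell
# Sketch: combine storage from two machines into one distributed volume.
# All names and paths below are placeholders.
gluster peer probe server2                      # add server2 to the trusted pool
gluster volume create media transport tcp \
    server1:/data/brick1 server2:/data/brick1   # one brick per machine
gluster volume start media

# Any client can then mount the combined namespace as a single share point:
mount -t glusterfs server1:/media /mnt/media
```

With a plain distributed volume like this, each file lives on one brick but all files appear under a single mount, which is the "single share" effect being described.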
  12. Not sure what you are trying to do, but here is a guide on how to set up a standardized docker that works well.
  13. This can't be right; I had a G2020 with those apps (Emby instead of Plex) and ran up to 2 simultaneous transcodes without issues. The Ryzen 5 1600 is way more powerful... Could there be something else going on? Something that causes pauses? Have you tried isolating the Docker apps to only some of the cores and reserving a single core that is always free to handle Unraid?
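The core-isolation idea above can be tried per container with Docker's `--cpuset-cpus` flag; the container name and image here are illustrative, not from the thread:

```shell
# Illustrative: pin the media-server container to cores 1-5 of a 6-core CPU,
# leaving core 0 free so Unraid itself is never starved during transcodes.
docker run -d --name emby --cpuset-cpus="1-5" emby/embyserver

# Verify which cores a running container is pinned to:
docker inspect --format '{{.HostConfig.CpusetCpus}}' emby
```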
  14. Thanks. I didn't have issues with 6.6.6 and it ran for like 4 months. I went ahead and updated; I'm hoping it fixes it. Update 5/23/19: So far so good after the update to 6.7.0. Update 6/1/19: Still working well, though I'm unsure what caused it in the end.
  15. Ok, this seems to be a recurring issue as my shares and whole system went haywire again just 1 day later. Attached is the new log. unraid-diagnostics-20190519-1016.zip