  • Location
    East Coast, USA

Necrotic's Achievements

Apprentice (3/14)

  1. You need to look more into how MythTV is doing its encoding (settings, options, etc.). With that much CPU consumption I would think it's doing software encoding, probably with high-bitrate or high-quality settings. Check whether you need to do anything to get it to use QuickSync to offload the encoding; then I think it would be OK. Still, this is only the unRAID side; more research is needed on the MythTV side.
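The first thing to verify for the QuickSync suggestion above is whether a GPU render node is even visible to the system. A minimal sketch, assuming a Linux host with the Intel i915 driver; the device path is a common default, not something taken from the post:

```shell
#!/bin/sh
# Minimal sketch: report whether hardware (QuickSync/VAAPI) encoding is even
# possible by checking for a GPU render node at the given path.
qsv_status() {
    if [ -e "$1" ]; then
        echo "hardware"    # render node exists; QuickSync offload is possible
    else
        echo "software"    # no render node; encoding will fall back to CPU
    fi
}

qsv_status /dev/dri/renderD128
```

On unRAID you would also need to pass the `/dev/dri` device into the container before the encoder becomes visible to MythTV.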
  2. You seem to have other stuff going on, so I'm not sure this will help. I had 2 drives with ReiserFS and overall everything was working, but I wanted to standardize on XFS for everything. When I added a new, bigger drive, I used unBalance, an app/plugin you can get through CA. I moved all the files off the first old drive onto the new one, wiped the old one and reformatted it to XFS, then moved the files from the second old drive onto the reformatted one and reformatted the second.
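The move-wipe-reformat cycle in the post above can be sketched as a dry-run script. This is only an illustration with hypothetical mount points (`/mnt/disk2` etc.); it echoes the copy commands instead of running them, since the actual reformat is done through the unRAID GUI (or unBalance handles the move for you):

```shell
#!/bin/sh
# Dry-run sketch of the disk-by-disk ReiserFS -> XFS migration described
# above. Mount points are hypothetical; nothing touches real disks.
set -eu

migrate() {
    src="$1"    # old ReiserFS disk to be emptied
    dst="$2"    # empty XFS-formatted disk that receives the files
    echo "rsync -aHAX --progress ${src}/ ${dst}/"
    echo "# verify the copy, then stop the array and reformat ${src} to XFS via the GUI"
}

migrate /mnt/disk2 /mnt/disk4   # first old disk -> new disk
migrate /mnt/disk3 /mnt/disk2   # second old disk -> freshly reformatted disk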
  3. Here is my oldest one, plus 2 other drives from around the same time. WD Red 3TB for all 3. I have another one about a year newer and then a series of drives after that.
  4. A @SpaceInvaderOne video on this would be awesome.
  5. So I am not entirely sure what was going on. I must have restarted the container like 20 times. Sometimes it looked as if it was updating SteamCMD, other times not (I am not sure if the update was failing or something). In any case, I simply enabled validation, ran it once until it loaded right, then disabled validation. Not running experimental as far as I know (I see no flags for it).
  6. So for Satisfactory I am getting a version mismatch. I have tried restarting the Docker container repeatedly; several times it looked as if it was updating SteamCMD, but it doesn't do that anymore. Still, my game version is 177909 while the server's is 176027. I have no idea how to fix this or force an update. Edit: Never mind, I edited the container's parameters and set validate to true. That seems to have forced the update.
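The "validate" fix in posts 5 and 6 maps onto SteamCMD's `app_update` command, which container images typically expose as an environment variable. A sketch of the equivalent manual invocation, built as a dry-run string; the app ID and install directory here are placeholders, not values from the posts:

```shell
#!/bin/sh
# Sketch: build the SteamCMD command that re-checks and repairs server files.
# `validate` forces a file-by-file check against Steam's manifests, which is
# what resolves a client/server version mismatch after a botched update.
build_cmd() {
    # $1 = Steam app ID (placeholder), $2 = install directory (placeholder)
    echo "steamcmd +force_install_dir $2 +login anonymous +app_update $1 validate +quit"
}

build_cmd 000000 /serverdata/serverfiles
```

Leaving validation on permanently just slows every restart down, which is why the post re-disables it once the versions match.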
  7. So I was wondering, does this work if you haven't started the array? (i.e. things like Docker containers don't work until then.) I ask because I have the server set to restart after a power loss, but I need to log in to start the array; would this let me do that remotely?
  8. Just wanted to chime in and say great work. Thank you so much for what you do!
  9. I mean, a simple male-to-female stubby one would be fine; we already have SATA extensions to daisy-chain stuff, and this one wouldn't even have a cable...
  10. I think I have 4+ shucked drives, all of them running for multiple years, and no issues on my end. Most are either WD Red or the white-label ones that are basically Red drives. I don't think shucking changes a drive's lifetime; for that you may want to look at the Backblaze report on their failure rates. Some models are just more prone than others (apparently WD was higher than average), but keep in mind they use those in datacenters, so usage is more intense. Personally, for the price I am OK with the slightly higher risk, and I haven't had any issues.
  11. Background: the new SATA spec changed the way the 3rd pin works; it now disables the drive if it receives a 3.3V signal. This was intended for datacenter use and usually isn't relevant for consumer drives at this time. However, a lot of people have been buying external drives and taking them out of the enclosure (shucking), since the prices are much better. For years the solutions have been to put tape over the pin, remove the pin, modify the 3.3V line in the cable, use a Molex-to-SATA adapter (since Molex doesn't carry 3.3V), etc. They each have downsides. My main point: it's driving me nuts that no one has just made an adapter that goes on the end of the SATA power cable and simply omits that 3rd-pin connection. Has anyone found something like this, from a reliable source? It's the simplest solution out there and so obvious, but I can't find anyone who has done it.
  12. I guess as an extension of this: my parity drive is bigger (12TB) than the other drives (<10TB). During a parity check, once it gets past the max size of the other disks they spin down, but I think I saw it continue the check with just the parity drive for those last 2TB. I don't usually watch parity checks, so I haven't confirmed this is absolutely the case, but it seemed odd that it would waste time checking the part of the parity drive that isn't protecting a data disk.
  13. This is what I have found. I guess the only relevant ones would be the improved routing and reduced overhead, but I am unsure how that will translate into real life. Mostly I am trying to prepare for the future; eventually I may have to enable this stuff, and I want to better understand what I need to do now to prepare for then.
  14. So I have been looking at enabling IPv6 on my network. My main concerns are that you don't get the natural isolation we currently get with NAT, and whether my unRAID is prepared for it. I have tried searching for tips/suggestions but with no success. Is there a particular set of steps to ensure that my unRAID is secured against intrusion? (i.e. can someone access my GUI/Dockers remotely? Do I need to disable telnet or change the unRAID password, etc.?)
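On the isolation worry in the post above: with IPv6 the safety net has to come from a stateful firewall on the router rather than from NAT. A minimal sketch of that policy in nftables syntax, as a config fragment only; the interface name is hypothetical and this is not a vetted unRAID or router configuration:

```
# Sketch: default-deny inbound policy that replicates NAT-like isolation.
# "wan" handling is implicit here; adjust interface names for your router.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # allow replies to outbound traffic
        iifname "lo" accept                   # always allow loopback
        # IPv6 breaks without ICMPv6; permit the neighbor/router discovery set
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert,
                      nd-router-solicit, nd-router-advert } accept
    }
}
```

With a policy like this in place, the unRAID GUI and Docker containers are unreachable from outside unless a rule explicitly opens them, which is the same practical posture NAT gave you.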
  15. @SlrG Thanks! I have updated the original post back to the .cnf file.