Everything posted by JonathanM

  1. What systems besides Unraid need to use files on that external drive?
  2. @JorgeB will correct me if I'm wrong, but I'm fairly sure the default settings for BTRFS automatically detect file corruption on all read requests, making the file integrity plugin redundant. A BTRFS scrub manually runs that same checksum verification across the entire volume.
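     For reference, a scrub can be started and checked from the command line, assuming the pool is mounted at /mnt/cache (adjust the mount point to match your setup):
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache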
  3. Possible if you don't assign a parity disk, like JorgeB said. Correct, no point in copying the data twice. Cache is good for small dumps that can get flushed overnight while the array is otherwise idle.
  4. Yes. However, unless you manually mount the drives read only, parity will need to be corrected when you put the drives back. Also, I doubt you would see a huge speed difference, and since you won't be sitting in front of the server waiting for it to finish, the difference between 45 hours and 40 hours isn't going to be meaningful. In my opinion the marginal gains aren't enough to justify all the risk of physically moving the drives around.
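     If you do go that route, mounting read only on the other machine looks roughly like this, where sdX1 is only a placeholder for whatever the partition shows up as there:
        mount -o ro /dev/sdX1 /mnt/temp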
  5. Parity doesn't work with data, it works with raw bits, which are typically a jumbled mess on used drives that haven't had a lengthy process done to set every bit to a known value.
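     As a quick illustration, single parity is just XOR across the same bit position on every data drive: if three data drives hold 1, 0 and 1 at a given position, parity stores 1 XOR 0 XOR 1 = 0. A drive that has been cleared to all zeroes contributes nothing to that sum, which is why the lengthy clearing step matters.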
  6. This has been requested multiple times before, and it's definitely in the "too hard to implement properly and safely" bucket. There are so many different ways for people to set up and manage their servers that it's almost impossible to make foolproof. If Unraid removed the ability for people to use individual drives directly it would be doable, but that's not happening any time soon. For now, you can use the Unbalance plugin to clear off a drive, possibly while needing to shut down the entire VM and Docker services, not just the containers, and dealing with any directly referenced drives in your configurations manually.
     BTW, the way you worded the statement about parity is misleading. Parity is maintained during all normal file operations, including formatting and other file system changes. Parity has no concept of files, only bits across the entire capacity of the drive. So in order to remove a drive and keep parity valid, you have to write zeroes to every location on that drive, including the parts that define the file system. Moving the files off the drive doesn't actually do anything with the bits that made up the files, it just changes the file system's table of contents to show the space as available.
     TL;DR: Removing a drive is WAY more complex than adding one. Automating the process isn't going to happen any time soon, if at all.
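     For a feel of what the manual route involves, the clear-the-drive step boils down to something along these lines, assuming md1 happens to be the array device for the disk being emptied; treat this purely as a sketch and follow the current forum write-up rather than running it blind:
        dd bs=1M if=/dev/zero of=/dev/md1 status=progress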
  7. Sure, but last I heard the software to do it runs into the tens of thousands of USD.
  8. That syntax will corrupt parity; md1 is the correct device. That is definitely NOT the same command the GUI uses. If you had no parity, you could have used sdd1, assuming the drive designation didn't change; sd designations are subject to change each boot depending on multiple factors. Regardless, the diagnostics zip file obtained with the array started normally will hopefully have more info about how to proceed.
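     Assuming the command in question was an XFS repair (the post doesn't say, so this is purely illustrative), the distinction looks like this:
        xfs_repair -v /dev/md1    (goes through the md device, so parity stays in sync)
        xfs_repair -v /dev/sdd1   (writes bypass the parity calculation and invalidate it)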
  9. Recharging typically takes 10 to 20 times longer than the discharge, so if it's running on battery for 10 minutes, it probably won't be back to full capacity for 3 or so hours.
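     In other words, at the 20x end of that range, 10 minutes on battery works out to roughly 10 x 20 = 200 minutes of recharging, a bit over three hours, before it's back to full.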
  10. You should try not to discharge below 50% if possible for best overall battery life, so you will probably need to set Unraid to shut down after the power has been out for at most 5 minutes, giving everything time to shut down completely before you drop below 50% battery. Keep in mind that consumer-type battery backups are only meant to provide for a clean shutdown; if you are trying to keep your rig running through an outage, you need a different setup.
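     If you end up tweaking this below the GUI level, Unraid's UPS support is apcupsd-based, and the relevant knobs in apcupsd.conf look roughly like this; the values are just an example of the 5-minute / 50% idea above:
        TIMEOUT 300       # shut down after 300 seconds on battery
        BATTERYLEVEL 50   # or once the charge drops to 50%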
  11. Google that phrase, and read the very first search result.
  12. Will you be releasing step-by-step directions for the migration procedure, highlighting what needs to be done to keep the end result identical to what is currently working? I poked around Docker Hub and GitHub, but I didn't see any mention of it in the documentation yet.
  13. Maybe I missed it, but are you still running any of the original 12-year-old drives? What file system and capacities are your current array drives? Working with encrypted drives is much riskier; any slip-ups and you are likely to lose data. Do you already have full backups physically separate from the server?
  14. Opinions only, no good thoughts.
     1. Intel; possibly research 10th gen vs 11th gen, something about a removed feature? IDK, just something I read. Once you have enough cores, frequency determines responsiveness to a large degree, so higher GHz rules. Plex transcoding is a minor consideration; my opinion is that it's better to upgrade the client side than chase transcoding.
     2. Wait until the world gets less upside down, unless you like spending more for the GPU than you did for your first car.
     3. Personal preference is workstation grade or server grade; I don't like using consumer grade 24/7/365 for years on end.
     4. Water cooling is fine if you will be present whenever the machine is on. Otherwise the risks are too high.
     5. 3 drives are easily covered by the Basic license.
     VM considerations are largely determined by the amount of resources passed through. Server grade boards tend to have better ability to subdivide resources cleanly.
  15. Always use 64 unless you have a specific reason not to. 32 limits you in so many ways, not the least of which is RAM management. 32 hasn't been the norm since XP.
  16. Why are you mapping a UNC path to a drive letter at all? It's much preferable to use the share directly, as \\servername\sharename
  17. Unless you are ok with no parity protection, USB is not recommended for Unraid hard drive connections.
  18. It's designed to accept a three-pin fan on a four-pin connector, so yes. Plenty of example circuits if you Google it.