Everything posted by JonathanM

  1. Do you still have the "failed" disk as it was when removed?
  2. Verify your username and password for the VPN. As a test, use only letters and numbers; sometimes special characters don't work. There could be other issues, but that's the first one I see.
  3. As long as you leave the ground leg of the UPS connected, sure. Don't just yank the cord out of the wall, as that can have unexpected and sometimes disastrous consequences. A large percentage of the time people get away with it, but if it bites you, it bites big. Remove power to the circuit without unplugging, that way all the connected equipment still has a common ground reference. Tripping the breaker, turning off the outlet, whatever, as long as the cord is still plugged in.
  4. Since you only have 8TB data drives, the correct contents of the 10TB after the 8TB boundary IS all zeros. So, you would want to fill the 10TB with random bits, not preclear it.
  5. Yes, but it's not as easy as bare metal. SpaceInvader One has a video on youtube covering that, I'm pretty sure. I recommend spending some time watching his videos.
  6. If your UPS is configured properly, Unraid should shut down automatically before the battery dies so you don't have an unclean shutdown. Killing the power on a running server can cause all sorts of issues, including data corruption. If you don't have a UPS, you need to get one ASAP. It's a requirement that the server has clean constant power. If your utility doesn't provide uninterrupted power, it's on you to fill that gap.
  7. If you have an 8TB in either parity slot, you will not be able to assign a larger drive to a data slot.
  8. RAID or Unraid, doesn't matter. Neither protects against deleted, corrupted, or overwritten data. Regardless of the drive failure protection scheme you choose, backups are a requirement if you value your data.
  9. Yes, in the beginning the intention was to have GUI elements available for all the commonly changed items, and editing the raw xml was to be done only in edge cases. There turned out to be a LOT of edge cases. Maybe officially baking in virt-manager would be a good answer.
  10. I think this is where the disconnect is. The parity drive isn't doing the data rebuilding, it's ALL the rest of the remaining data drives, the parity drive is just filling in the last missing piece. There is a very big gap between backups and parity being able to reconstruct a missing drive. Parity can't recover from corrupted data, deleted data, overwritten data, or anything like that. Parity is meant to allow the replacement of a failed drive, but that's only one way to lose data.
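The reconstruction described in item 10 can be illustrated with a toy sketch. This is ordinary single-drive XOR parity over byte strings, a simplified stand-in rather than Unraid's actual implementation, but it shows the key point: the surviving data drives do most of the work, and parity just fills in the last missing piece.

```python
# Toy single-parity reconstruction: parity is the XOR of every data
# drive, so a missing drive is recovered by XOR-ing the parity block
# with ALL the remaining data drives. Every surviving drive must be
# read perfectly for the rebuild to succeed.

def xor_bytes(blocks):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "data drives" holding arbitrary bytes.
drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_bytes(drives)

# Simulate losing drive 1 and rebuilding it from parity + survivors.
survivors = [drives[0], drives[2]]
rebuilt = xor_bytes(survivors + [parity])
assert rebuilt == drives[1]  # the missing drive is fully recovered
```

Note that the rebuild only reproduces whatever bytes were on the drive when it failed; if those bytes were already corrupted or the file was already deleted, parity faithfully reconstructs the corruption or the deletion.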
 11. That statement is true, but the nuance is that Plex does not require access to the GPU. So if you configure Plex to use the GPU, then Plex won't run while the VM is running. However, most people with only one GPU wouldn't put themselves in that corner; they'd just let the server CPU handle Plex, in which case both run quite happily at the same time.
 12. https://www.kingston.com/us/usb-flash-drives/datatraveler-se9-usb-flash-drive Yes, I know, they are a brand you don't like. Also, I don't recommend ANY USB 3.0 sticks. The SE9s are rugged, have been reliable for me, have good heatsinking (all metal case), and are still available in USB 2.0, which is important to me. I think (personal opinion here) the extra speed of the 3.0 sticks causes pinpoint heat buildup in the on-stick controller chips and contributes to their demise. I personally have 4 for my servers, and a handful more scattered around on keychains and elsewhere.
  13. Depends on how much you enjoy learning and playing with technology.
  14. No, but no data drive may be larger than either parity drive.
 15. In your specific use case you need to set domains to cache: no instead of prefer. When you get a larger cache drive and want new VMs to live on the cache, change it to cache: only. I know it's confusing, but those two settings are the only ones that tell the mover not to touch your domains share; prefer and yes both engage the mover.
  16. Possibly corrupt filesystem on cache, no way to tell without the diagnostics zip file.
 17. There are many things you have slightly incorrect or totally wrong, so I suggest spending some quality time watching informative videos that should clear things up for you. Way too much for me to address one point at a time. https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA
 18. They probably are real files, left behind. A move operation is a copy followed by a delete; if you interrupt it before it finishes, it will have copied but not deleted. I suspect you have created quite a mess. Before you start cleaning it up, I suggest learning how the Unraid share system works: each share is a root directory on one or more disks. If the same file exists at the same path on multiple disks, the user share system only shows the first one it finds. If you rename or delete that one, the copy on the second disk becomes visible.
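The shadowing behaviour in item 18 can be sketched as a small helper that reports which relative paths exist on more than one disk root. This is illustrative only; on a real server the roots would be /mnt/disk1, /mnt/disk2, and so on, and you would review the duplicates by hand before deleting anything.

```python
# Sketch of how per-disk directories merge into a user share: the same
# relative path can exist on several disks, but the merged view only
# shows the first copy found. This lists every path present on two or
# more disk roots, so leftovers from an interrupted move stand out.
import os
from collections import defaultdict

def find_duplicate_paths(disk_roots):
    """Map each relative file path to the disk roots containing it,
    returning only paths that appear on two or more disks."""
    seen = defaultdict(list)
    for root in disk_roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                seen[rel].append(root)
    return {rel: roots for rel, roots in seen.items() if len(roots) > 1}
```

Any path this reports exists physically on multiple disks, even though the user share shows it only once.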
  19. Maybe because you have a container using openvpn?
  20. As shown, that's a definite NO to using it in the parity protected array. It could possibly be ok to use as a scratch drive for transitory data you don't care about if it passes an extended smart test and reallocates or verifies those current pending sectors. Never use a questionable drive in the parity array, all drives must be read perfectly from end to end to reconstruct a failed drive.
  21. Have you migrated all drives to XFS or BTRFS at some point? If you are still using ReiserFS then basically you are experiencing normal behaviour. Deleting large numbers of files is slow in the best of circumstances, with ReiserFS it's basically impossible.
  22. Depends. Well written apps will detect that a dependency isn't available and either retry or wait, then error out gracefully after a period of time. Poorly written apps may lose or corrupt data. I've never had an issue with my specific set of containers, but ymmv.
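The "well written app" behaviour in item 22 amounts to a simple retry-until-deadline loop. A minimal sketch, where check_dependency is a hypothetical callable the app would supply (e.g. a database ping):

```python
# Retry a dependency with a delay, then give up cleanly after a
# deadline instead of crashing mid-operation. Returns True when the
# dependency came up, False when we timed out and should error out
# gracefully without touching data.
import time

def wait_for_dependency(check_dependency, retry_delay=5.0, deadline=60.0):
    """Poll check_dependency() until it returns True or the deadline
    passes. check_dependency is a zero-argument callable."""
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        if check_dependency():
            return True
        time.sleep(retry_delay)
    return False
```

A poorly written app is one that skips this step and assumes the dependency is always there, which is where lost or corrupted data comes from.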
 23. Seems like you have a handle on it. Most people who say they want to sync their media want it done automatically. You have laid out a method that uses sync to manually create a backup, instead of using a backup utility, which should accomplish the same thing. Which is cool, but not what is commonly done. By making the process manual, it should be as safe as a backup, with the benefit of keeping only the copies you really need vs. versioning and storing every change. That benefit is offset by the time it takes you to manually manage it. As long as you are happy with the end result, sounds good to me.🙂