JonathanM

Posts posted by JonathanM

  1. 24 minutes ago, Milvus said:

    what exactly does it mean?

    It means recovery from corruption can be impossible with encryption in the way.

     

    Corruption can happen from hardware errors like bad RAM, cables, or power issues. The problem is, you don't know it's going to happen until it does, and RAID (of any sort, not just Unraid) can't always compensate, so unless you have complete backups you will lose data.

     

    Neither Unraid nor any other RAID can help with file deletion or good data being overwritten with bad, so backups are always needed; with encryption the recovery options are even more limited, which makes backups even more important.

     

    If the data is important enough to encrypt, it's important enough to keep multiple copies in multiple locations.

  2. 2 hours ago, sannitig said:

    I thought the wiping of the disk was the same as formatting.

    Nope. Wiping the disk writes all zeroes, removing any trace of files or filesystem structures. Think of a format as a filing cabinet with drawers and folders: it lets files be stored in an organized fashion so they can be easily retrieved, as opposed to just tossing them on the floor of an empty room. A filesystem takes up space even when there aren't any files stored in it.
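
    A quick way to see the difference in practice: a wiped device reads back as all zeroes from start to finish, while a freshly formatted one already contains filesystem metadata. Here's a minimal Python sketch of that check; the path is a hypothetical example and you'd need read access to the raw device or image.

    ```python
    # Sketch: a wiped disk/image reads back as all zeroes; a formatted one
    # does not, because mkfs wrote superblocks and allocation structures
    # even though no files have been stored yet.
    def is_all_zero(path: str, block: int = 1024 * 1024) -> bool:
        """Return True if every byte of the device/image reads as zero."""
        with open(path, "rb") as f:
            while chunk := f.read(block):
                if chunk.count(0) != len(chunk):
                    return False
        return True

    # Hypothetical example path, not a recommendation to run as-is.
    print(is_all_zero("/path/to/disk.img"))
    ```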

  3. 1 hour ago, cosmickatamari said:

    I've sent Unraid support an email for assistance but wondering how fast are they typically on resolving these issues?

    Did you get an auto reply? If not, try sending again.

    1 hour ago, cosmickatamari said:

    the dockers were stored in a /mnt in the array

    Generally, containers keep their executables and other common program parts inside the docker.img file, their customizable parts live in ./appdata/*, and the templates are on the flash drive. Do you have any backups of the flash drive?

    1 hour ago, cosmickatamari said:

    (the parity drive has NOT changed). But do the other drives need to be listed on the array in that order?

    If you only had a single parity drive in Parity1, then data drive order doesn't matter. If for some reason you had that parity drive in Parity2, then order does matter. The two parity slots use different calculations.
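
    To illustrate why the two slots behave differently, here's a toy Python sketch in the style of the usual P/Q (RAID-6 style) parity scheme. It's not Unraid's actual implementation, just the general idea: Parity1 is a plain XOR, so slot order is irrelevant, while Parity2 weights each data slot differently, so order matters.

    ```python
    # Toy sketch, not Unraid's code: P parity (plain XOR) vs a Q-style
    # parity that gives each data slot a different multiplier in GF(2^8).
    from functools import reduce

    def gf_mul(a: int, b: int) -> int:
        """Multiply two bytes in GF(2^8), polynomial 0x1D."""
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1D
            b >>= 1
        return r

    def p_parity(disks: list[int]) -> int:
        # Plain XOR of every data byte: same result in any disk order.
        return reduce(lambda x, y: x ^ y, disks, 0)

    def q_parity(disks: list[int]) -> int:
        # Each slot gets a different weight, so order changes the result.
        q, weight = 0, 1
        for byte in disks:
            q ^= gf_mul(byte, weight)
            weight = gf_mul(weight, 2)
        return q

    data = [0x11, 0x22, 0x33]                        # one byte per data drive
    assert p_parity(data) == p_parity(data[::-1])    # order irrelevant
    assert q_parity(data) != q_parity(data[::-1])    # order matters
    ```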

     

  4. 2 hours ago, Getting Goin said:

    zero response

    Pick one app to work on, read the first post in that app's thread, and follow the troubleshooting steps. When you have the appropriate log file, read through it and XXXX out any credentials, then post it in that app's thread with a description of what you have done so far and where you are getting stuck. Since you say you have the same issue on multiple containers, I'm betting that if you fix one, you can easily figure out how to deal with the rest.
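
    If it helps, here's a rough Python sketch of one way to XXXX out credential-looking values before posting a log. The pattern is only an example, so always eyeball the result before sharing, since every container logs secrets differently.

    ```python
    # Sketch: mask credential-looking key/value pairs in a log file.
    # Example pattern only; verify the output before posting it anywhere.
    import re
    import sys

    CRED = re.compile(
        r"(?i)\b(password|passwd|token|api[_-]?key|secret|authorization)\b(\s*[=:]\s*)\S+"
    )

    def redact(line: str) -> str:
        return CRED.sub(r"\1\2XXXX", line)

    # usage: python redact.py container.log > redacted.log
    with open(sys.argv[1]) as src:
        for line in src:
            print(redact(line), end="")
    ```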

  5. 6 hours ago, stainless_steve said:

    Where is the maximum size of 128GB coming from?

    That's what the motherboard reports; it may or may not be accurate.

     

    6 hours ago, stainless_steve said:

    what determines the maximum size of the Log ?

    It's a fixed allocation set by Unraid.

     

    6 hours ago, stainless_steve said:

    and most importantly: why is the docker at 79% (732GB)? I checked and the docker.img file is just 21.5GB

    It's the percentage of space used INSIDE the docker.img file, not the free space on the drive holding the image file.
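
    If you want to see the two numbers side by side, here's a small Python sketch. The paths are common Unraid defaults (docker.img is normally loop-mounted at /var/lib/docker) and may differ on your system.

    ```python
    # Sketch: utilisation INSIDE docker.img (the loop-mounted filesystem)
    # vs utilisation of the drive that holds the image file.
    import shutil

    inside = shutil.disk_usage("/var/lib/docker")   # loop-mounted docker.img
    holder = shutil.disk_usage("/mnt/cache")        # drive holding the image file

    print(f"inside docker.img: {inside.used / inside.total:.0%} used")
    print(f"holding drive:     {holder.used / holder.total:.0%} used")
    ```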

  6. 39 minutes ago, asbath said:

    So I figured, easy enough, I'll just use cp to manually bring the files over from /mnt/user/appdata/* to /mnt/cache/appdata/*. That went swimmingly.

    Pretty sure that didn't work like you thought it did, because those locations are the same.

     

    /mnt/user paths are the combined view of the top-level folders on all the array disks and pools. User shares and disk shares should never be mixed in a single file operation.
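
    If you want to check where a file physically lives before moving anything, something like this Python sketch works. The relative path is just a hypothetical example.

    ```python
    # Sketch: list which physical disk/pool top-level paths contain a file,
    # as opposed to the fused /mnt/user view of it.
    import os

    def physical_locations(relpath: str) -> list[str]:
        hits = []
        for root in sorted(os.listdir("/mnt")):
            if root.startswith("user"):      # skip the fused user-share views
                continue
            candidate = os.path.join("/mnt", root, relpath)
            if os.path.exists(candidate):
                hits.append(candidate)
        return hits

    print(physical_locations("appdata/someapp/config.xml"))   # hypothetical file
    ```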

  7. On 4/14/2024 at 8:50 AM, ConnerVT said:

    apcupsd may have this functionality as well,

    It does, and I use it extensively. Each of my VMs runs apcupsd in slave mode and is set to begin shutting down a minute or two after the host server reports a loss of power. Server shutdown is much smoother for me if the VMs are all shut down before the server itself starts its shutdown sequence.
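
    For reference, the slave side of that setup only needs a few apcupsd.conf directives pointed at the host's network information server. The address and timing below are made-up examples, not my actual values.

    ```
    # /etc/apcupsd/apcupsd.conf on a guest VM running in slave mode.
    UPSCABLE ether
    UPSTYPE net
    # host:port of the Unraid server running the master apcupsd instance
    DEVICE 192.168.1.10:3551
    # begin shutdown after this many seconds on battery
    TIMEOUT 120
    ```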

    • Thanks 1
  8. I just thought of another VERY good reason I want the parity drive to be read all the way to the end.

     

    Hidden errors. Unless you run a complete SMART test, the portion of the parity drive beyond the last data drive is NEVER read or written during normal use. The last thing I want is a bad sector lurking in the last bit of the parity drive, just waiting for me to add a data drive and start exercising it. At least with a parity check, that portion of the drive is read regularly.

     

    Count me as 1 vote to keep the current behaviour, for the above reason alone.

  9. 22 hours ago, thatdude78 said:

    The issue is my back up is a year old. During this period i replaced a failed 3TB HDD with an 18TB HDD plus I also had two parities when this backup was created which my current config did not.

    Just be glad that one of your current data drives wasn't assigned as a parity drive in that old backup. That sort of error is generally fatal to the data that was on said drive.

     

    Lesson here: keep current backups, and delete any old backups made before a drive change. You really don't want to accidentally restore an outdated one.

  10. 16 hours ago, Gragorg said:

    I agree it should skip the portion after the largest data drive. 

    If that were implemented, it would require a change in the drive addition and replacement code, because it would no longer be a given that the portion of the parity drive beyond the largest data drive is indeed all zeroes. I don't see that kind of code rewrite happening, mainly because what is there now works, and mucking around with such important, currently working code would require a VERY strong reason, given the amount of work needed to test for edge cases and general bugs that could be introduced.
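
    To spell out the assumption in that code, using the simple XOR (Parity1) case: parity at each offset is the XOR of all data drives at that offset, so beyond the largest data drive a consistent parity byte is zero, and a pre-cleared (all-zero) new drive leaves it unchanged. A tiny Python sketch of that arithmetic:

    ```python
    # Sketch: why a pre-cleared data drive can be added without a parity
    # rebuild -- but only if the parity region beyond the existing data
    # really is still all zeroes.
    from functools import reduce

    def parity(data_bytes: list[int]) -> int:
        return reduce(lambda a, b: a ^ b, data_bytes, 0)

    beyond_data = []                 # no data drives extend this far
    assert parity(beyond_data) == 0  # consistent parity here must be zero

    new_cleared_byte = 0x00          # freshly cleared drive reads as zero
    assert parity(beyond_data + [new_cleared_byte]) == 0   # still consistent
    ```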

    • Upvote 1