Everything posted by JonathanM

  1. Unraid uses KVM, so try searching the general internet for exporting kvm machine to esxi or virtualbox.
  2. Since your array is currently problematic, I wouldn't break it until you have copies of your data. I'll make some assumptions here based on what you said and lay out how I would proceed.
     1. Obtain 3 8TB drives.
     2. Attach one to your local PC. Don't touch the server yet. Format the drive for your local PC, copy all the array data over the network, and verify the copy. This is your failsafe; don't touch it until the rest of the procedures go well.
     3. Attach one to your server, use the Unassigned Devices plugin to format it as XFS, and copy all the array drive data to it using rsync. Verify the copy. Now that you have 2 copies of your data, I would feel safe breaking down the current array.
     4. Set a new config, don't keep any assignments. DON'T physically remove any drives yet. Assign the drive that was in UD as Data 1. Start the array and see if things appear unchanged to the network clients. If you did everything right, there should be no difference over the network; all shares should still be there, etc.
     5. Remove all the old drives, add the remaining 8TB as parity, let it build, then do a non-correcting check. If you get zero errors, you are done for now.
     Your 3rd 8TB drive is still untouched with a full backup of all your files; let it stay a backup while you work with the server for a while. When you run out of space, you can then decide to wipe it and add it to the array for more space, or just purchase another 8TB at that point so you keep a full backup. If you already have a full backup of your important data, some of this is unnecessary, but you didn't say you had a backup, so I assumed you didn't.
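     The copy-and-verify steps above can be sketched with rsync. The paths here are placeholders: on the server the source would be an array disk such as /mnt/disk1 and the destination the Unassigned Devices mount point; temp directories stand in so the sketch is safe to run anywhere.

     ```shell
     # Hypothetical paths: on the server the source would be /mnt/disk1 and
     # the destination the UD-mounted 8TB drive (e.g. /mnt/disks/backup8tb).
     # Temp directories stand in here so the sketch can run safely.
     SRC=$(mktemp -d)
     DEST=$(mktemp -d)
     echo "important data" > "$SRC/file.txt"

     # -a preserves permissions and timestamps, -X preserves extended attributes.
     rsync -aX "$SRC/" "$DEST/"

     # Verify: a recursive compare should report zero differences.
     diff -r "$SRC" "$DEST" && echo "copy verified"
     ```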
  3. Depends. You need to reevaluate your includes and excludes depending on which disk #s end up where and which shares you want on them. Never use both include and exclude, only one or the other.
  4. Parity has no concept of files or folders, empty or full. If you remove a drive, parity must be rebuilt, unless the removed drive has had zeroes written to the whole drive. Deleting a file doesn't zero the drive; it simply marks the space as available. That's why data recovery software can sometimes get your files back after they have been deleted. The shrink-array procedure is just trying to enforce the notion that you need to manually back up the files on the drive you want to remove, as any content on removed drives will be gone. Some people have thought in the past that Unraid should magically reconstruct data from a missing drive onto the other data drives in the array, but that just doesn't happen. If the rsync command that was spawned was allowed to run to completion without error, then yes, I believe the folders would be automatically removed. Be sure you examine the content of those folders before you manually delete them, to be sure the copy part of the move operation succeeded without error.
  5. Why? If the disk is to be removed, what does it matter?
  6. Can you elaborate what you mean by this?
  7. I'm not following your logic. Until parity is once again valid, another drive failure will result in data loss. Whether the data is on drive slot X or drive slot Y doesn't factor into the equation. The only scenario where you get protected again without rebuilding is to get the data from the emulated slot placed elsewhere and rebuild parity from the remaining drives. If you don't plan on rebuilding parity before getting a replacement disk, there's no point in taking the extra risk of moving data around. How healthy are the rest of your disks? Was the failure anticipated? Are you sure the "failed" disk is actually bad? A majority of the time a red-balled disk is the result of something other than the drive itself. I recommend attaching the current diagnostic zip file to your next post in this thread.
  8. Yes, array stopped. If you want to reorder your data slots, now's the time to do it. As long as you don't assign a drive with data to the parity slot(s), you can put the drives in any slot you wish.
  9. Yep. Your wording leads me to believe you haven't used the new config tool before, so to be clear, you go to new config, set preserve all, then go back to the main GUI page and select none for the two drive slots you are removing. If you don't preserve all, you will need to refer back to a saved list of your current drive assignments so you don't accidentally put the wrong drive in a parity slot. Do NOT select parity is already valid unless you go through the very long process of dd'ing zeroes to the drives to be removed.
  10. You will either have to write all zeroes to the 2 drives you wish to remove, or rebuild parity. You can set a new config and rebuild parity after removing both drives at once; you don't have to do them individually. Not sure what constitutes unusable to you; the array files can still be read while parity is building, albeit slower than normal. Writing all zeroes to the two drives will keep parity busy much longer than simply rebuilding it once.
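     For reference, the zeroing approach boils down to a dd run like the one below. This is a sketch only: on a real server the target would be the raw device of the drive being removed (e.g. /dev/sdX), which destroys everything on it, so a scratch file stands in here.

     ```shell
     # DANGER: on a real server the of= target would be the raw device of
     # the drive being removed (e.g. /dev/sdX), wiping it completely.
     # A scratch file stands in here so the sketch is safe to run.
     TARGET=$(mktemp)
     dd if=/dev/zero of="$TARGET" bs=1M count=8 status=none

     # Verify the target now reads back as all zeroes.
     cmp "$TARGET" <(head -c "$(stat -c%s "$TARGET")" /dev/zero) && echo "all zeroes"
     ```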
  11. Parity wouldn't have helped; it tracks changes in real time, and it only operates on the array, not the cache. I would make a copy of the backup to a different location and see if it contains your work. Whether or not it's even possible to attempt a recovery on the current shrunken vdisk file is unknown; you haven't posted nearly enough info about the current state of the file systems, the disks involved, etc. Attaching the diagnostics zip file to your next post, along with the path of the vdisk file, would be a good start.
  12. Paths don't match. Both the host and container parts of the mapping must be identical for both radarr and the download client.
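     To illustrate the matching-paths rule (shown docker-compose style; in the Unraid Docker template the same host/container pairs go in the path fields). The share path and the qbittorrent container name are placeholders, not taken from the original post:

     ```yaml
     # Both containers must see the SAME host folder at the SAME container
     # path, otherwise the path the download client reports for a finished
     # download is meaningless to Radarr.
     services:
       radarr:
         volumes:
           - /mnt/user/downloads:/downloads   # host:container
       qbittorrent:                           # hypothetical download client
         volumes:
           - /mnt/user/downloads:/downloads   # identical mapping
     ```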
  13. That symptom sounds like a BIOS limitation on the number of bootable devices. Make sure the HBAs aren't bootable.
  14. Somewhat. It's a base amount plus a percent of the total. That reserved space is used for housekeeping and enhanced file system features. If you are that close to capacity, you really need to do 1 of 2 things: either jettison files you don't really need, or add capacity by upgrading or adding drives. It's not a good idea to fill an active array to full capacity, for just the reasons you have been experiencing. I personally tend to keep total array free space at approximately the size of a single array drive. That may not be necessary for some people, but it's worked well for me. When it drops below that margin, I have enough time to source my next upgrade before I run out of space.
  15. No. Parity as implemented by unraid only recreates a single (or double if using 2 parity disks) missing disk regardless of content. It has no concept of files or file corruption. Checksum is a function of the file system, and since unraid uses single member file systems, corruption can only be detected, not corrected.
  16. And, if the memory corruption is irreparable, it halts the computer immediately so you can fix the issue before it corrupts all the data written to the drives. Non-ECC can merrily go on its way, silently putting bad bits into all the data you are trying so hard to keep safe. Granted, memory failure is rare, but it does happen; on this forum we sometimes see 1 or 2 instances a week where a server is acting strangely and a memtest reveals bad RAM. Of those cases, many are found because of unexplained file system corruption. ECC is an insurance policy. It's just an extra unnecessary cost, until it saves your bacon. If Unraid only ever holds true third-tier backups, and those backups have a means to verify their validity through some checksum function, then no, ECC probably isn't a good investment. You can always recover corrupted data from your other backups in the unlikely event you have bad RAM.
  17. You can approximate a reasonable cable tester with a high quality ethernet card and a known good port on a managed switch. There is a surprising (to me at least) number of statistics available with some cards.
  18. Try reformatting with a more thorough tool, like the HP formatter or Rufus.
  19. Can you link to official documentation that walks through the live expansion? All I can find are work arounds and hacks that come with data loss warnings. I guess my google-fu is failing me.
  20. Since the array drives and many other Unraid assets are not available until the array is started, I don't think this will work with Unraid. You can already set the array to autostart, and VMs to start with the array without logging in, so I don't see what you would be gaining.
  21. As long as the container name is accurate, yes, but without the ? at the end.
  22. docker exec -it <container name> /bin/bash
  23. Where are you seeing that it's even possible to expand a KVM guest vdisk without stopping it? The googling I've done suggests it's required to stop the guest. https://computingforgeeks.com/how-to-extend-increase-kvm-virtual-machine-disk-size/ When the guest is stopped, it's easy to expand the disk from the GUI, on the VM page click on the NAME of the VM, and click on the disk capacity. Enter the new larger number with G modifier and hit enter. NEVER EVER EVER set a smaller size. I know that's not what you asked, I'm just posting as a reference if someone searches and finds this answer thinking they can shrink a vdisk just as easily. You can, but you will break the vdisk permanently, and probably not be able to recover your data.
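      The grow operation the GUI performs can be sketched from the command line. This assumes the VM is stopped and the vdisk is a raw-format image (for qcow2 images, `qemu-img resize vdisk1.img +20G` does the equivalent job); a temp file stands in for the real vdisk path, which is not taken from the original post.

      ```shell
      # Assumes the VM is stopped and the vdisk is raw format. A temp file
      # stands in for something like /mnt/user/domains/myvm/vdisk1.img.
      VDISK=$(mktemp)
      truncate -s 1G "$VDISK"   # existing 1G vdisk
      truncate -s 2G "$VDISK"   # grow to the new size; NEVER pass a smaller size
      stat -c%s "$VDISK"        # prints 2147483648
      ```

      After growing the image, the partition and file system inside the guest still need to be expanded with the guest OS's own tools.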