Everything posted by JonathanM

  1. I believe the issue is that in 6.9.2 and earlier the mover only operates between a pool and the main array. So, to move from one pool to another, you will need to set cache:yes on the share, make sure there are no open files, and run the mover. Then, once all the files are on the main array, you can set cache:prefer with the pool you want to use and run the mover again. Alternatively, you could move the files manually using mc or something similar; a rough sketch follows.
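     If you go the manual route, here's a minimal sketch, assuming hypothetical pool names "oldpool" and "newpool" and a share called "appdata" (adjust the paths to your system):

         rsync -avX /mnt/oldpool/appdata/ /mnt/newpool/appdata/   # copy, preserving attributes
         # verify the copy looks complete, then remove the source
         rm -r /mnt/oldpool/appdata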
  2. The biggest advantage my method has is that each step is discrete per drive, so there's much less risk: you can take your time and verify that everything is still where it should be after each step. Once you are done, the only difference you should see is more free space in each replaced drive slot and a totally empty 4TB in slot 7. All data should remain available throughout, with no major interruptions. I'd treat each line as a separate "job", ask questions and get help as needed, and check each one off before starting the next.

     Looking back over what I wrote, step 2 may require additional parts; I can't remember if Unraid will allow you to drop a parity drive without setting a new config (you definitely can't remove a data slot). If not, then it's a simple matter of setting the new config preserving all assignments, unassigning parity2, checking the parity-is-valid box, starting the array, and letting it check. After that's done, you would add the old parity2 disk to slot 7 and let it clear. The array would still be available throughout. If you don't care about keeping parity valid, you could combine steps 1 and 2 for considerable time savings: do the new config preserving the data and cache assignments, assign the new single 8TB parity and the old parity2 to slot 7, DON'T check the parity-is-valid box, and let it build. Proceed from there. @JorgeB would know for sure; he plays around with test arrays much more than I do.

     It's fine, but it will be slow, and it will slow things down by more than the sum of the two tasks. Just to throw out some random figures: if a parity check with no data access takes 12 hours, and copying a block of data would normally take 1 hour without the check running, it might take 2 hours to copy and extend the parity check to 16 hours. That's an example for effect; I haven't done hard benchmarks, but it feels pretty close based on observation.
  3. Good request, I agree. Until it's implemented, I suggest using other means to access your VMs. I consider the built-in web-based VNC remote a last resort; I would never use it on a regular basis. You get much better options and performance using something like NoMachine, or if you need direct console-type access you can use a real VNC client like UltraVNC or https://remmina.org/ If you are using Windows as your desktop and Windows in the VM, RDP is a very good option as well.
  4. Try a temporary peer-to-peer connection: static IPs on both ends and a single cable from Unraid to a client machine. NO switches, no reusing current cables, just Unraid and one client machine with a single ethernet cable connecting them. Make sure you set proper static IPs on both Unraid and the client machine before disconnecting them from your main network; an example follows.
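     Purely as an illustration, with made-up addresses on an unused subnet: set Unraid to 192.168.77.1 / 255.255.255.0 in Settings > Network Settings, then on a Linux client something like:

         ip addr add 192.168.77.2/24 dev eth0   # eth0 is a placeholder interface name
         ping -c 3 192.168.77.1                 # replies prove the cable and NICs are good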
  5. License is only validated at array start, so as long as you don't stop the array you should be fine to wait until normal USA west coast business hours. @SpencerJ
  6. Yes. The two parity drives have no connection; totally separate math equations are used. The only rule is that no data drive can be larger than either parity slot. The data is continuously available through the rebuild process; you don't have to wait during a rebuild. The array is also fully available during parity checks, but some systems don't have enough resources to keep playback seamless while a check runs. Any access is shared, so using the array while things are processing will slow everything down.
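     For what it's worth, parity 1 is a plain XOR across the data drives, while parity 2 uses a different equation entirely, which is why they're independent. A toy shell illustration of XOR reconstruction, with two made-up data bytes:

         d1=$(( 0xA5 )); d2=$(( 0x3C ))   # two hypothetical data bytes
         p=$(( d1 ^ d2 ))                 # the parity byte covering them
         echo $(( p ^ d2 ))               # prints 165 (0xA5): d1 rebuilt from parity plus the survivor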
  7. Can you please clarify? In step 1 you talk about removing 2 drives, but it seems you are actually replacing (not removing) disk1 and disk4, and adding disk7? If I've got that correct, then I think the more proper and safe way to go is:
     1. Physically replace parity1 with the new 8TB, set the 4TB aside for now, let parity build on the new 8TB, then run a correcting check to make sure everything is still happy.
     2. Unassign parity2, start the array, stop the array, assign the old parity2 disk to data slot 7, let Unraid clear the disk, then format it.
     3. Physically replace the 2TB disk1 with a new 8TB, let Unraid rebuild it, then do a non-correcting check.
     4. Physically replace the 2TB disk4 with a new 8TB, let Unraid rebuild it, then do a non-correcting check.
     5. Optionally replace the 2TB disk2 with the original 4TB parity1, etc.
     I see no reason to move files around; my process will take many, many more hours, but it keeps parity valid the entire time. There is no reason to empty drives you are replacing if you let Unraid rebuild them with the content intact. The only time you need to empty files off a drive is if you are permanently reducing the number of occupied drive slots, and you aren't doing that.
  8. I'm being lazy and asking without searching first, so feel free to shame me. Is there a transcript somewhere of these? I'd rather read than listen.
  9. If the drive is in the parity array, it's not just what's on that drive, but what's on any drive in the array. If any drive dies, ALL other drives must read perfectly from end to end to reconstruct the failed drive. If a drive with all your most important data dies, you are relying on a drive with potential errors to rebuild it.
  10. Easiest is probably to install and use NoMachine as the remote interface. You install it on both the VM and the local machine.
  11. As mentioned by arturovf, google "reset Windows 10 password" and find a method that works for you. Temporarily setting up a new VM with Windows 10 installation media and pointing it at the existing vdisk.img is a good way to use the rescue boot methods.
  12. Depends. Remember the cardinal rule that the *.key file is paired with a specific USB stick. So, if you get the new box up and running exactly the way you like it, all things included, on a trial key, the process is: copy the config folder from the trial stick; delete the *.key file that goes with the trial stick from that copy; delete the config folder on your current licensed stick, making sure to keep a backup; restore the new server's config folder to your licensed stick; copy the *.key file that goes with the licensed stick back into its config folder; done. A rough sketch of the sequence follows. The config folder holds all your customizations, and 99% of the files in there are text files, so browse around and get familiar; you might figure some stuff out.
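     A rough sketch of that sequence, assuming the two sticks are mounted at the hypothetical paths /mnt/trialstick and /mnt/licensedstick (back up first, as above):

         cp -r /mnt/licensedstick/config ~/licensed-config-backup      # keep a backup of the licensed config
         cp -r /mnt/trialstick/config ~/newserver-config               # grab the new server's config
         rm ~/newserver-config/*.key                                   # the trial key stays with the trial stick
         rm -r /mnt/licensedstick/config
         cp -r ~/newserver-config /mnt/licensedstick/config            # restore the new config to the licensed stick
         cp ~/licensed-config-backup/*.key /mnt/licensedstick/config/  # put the licensed key back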
  13. No, if the subsequent write after the read error succeeds, the disk is not disabled, but the disk's error column is incremented. It is ONLY disabled if a write fails. As itimpi said, read errors are corrected by writing the parity-emulated data back to that sector.
  14. Now that you have your feet wet, take a look at enabling disk shares, which will expose all the drives as individual shares. Warning!! Disk shares are disabled by default for a reason. The user shares are a combination of all root folders on all the drives, so they are two different views of exactly the same files. If you mix user shares and disk shares in a file operation, you can corrupt data; see the example below.
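     A concrete illustration of the danger (do NOT run this). If file.mkv physically lives on disk2, the source and destination below are the same file reached through two different paths, and the copy can truncate it to zero bytes ("Movies" is just an example share name):

         cp /mnt/user/Movies/file.mkv /mnt/disk2/Movies/file.mkv   # same underlying file -> data loss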
  15. There are very few instances where this is true. Unraid disables a disk AFTER a write to it has failed. That means data that was sent to the disk didn't get written to the disabled disk; it only exists on the emulated copy. That write could be something inconsequential, an overwrite of existing data, or a critical write that, if discarded, would mean a corrupted file or worse, a corrupted file system. The "safest" thing to do would be a full binary compare of the disabled disk against the emulated content, displaying the differences and letting the user choose which copy is most accurate (a sketch of what that might look like follows). That would not be a trivial process, and it would have very little benefit over what's currently available, where you can browse the emulated disk and, if it looks good, rebuild that content to the physical disk. The shortcut of just "marking the drive good" means you need a full correcting parity check to be sure all the bits that were written to the emulated disk you just discarded are updated to keep parity in sync. It typically takes just as long to do a full parity check as it does to rebuild a disk, so you aren't saving any time.
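     For the curious, a hedged sketch of what such a compare might look like, using placeholder device names (the emulated disk shows up as an md device in Unraid; your numbering and partition layout will differ):

         cmp /dev/md1 /dev/sdX1   # emulated disk vs. the physical partition; no output means identical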
  16. Run Memtest for at least a full pass, preferably 24 hours. You can try the one built into the Unraid boot menu, but you would probably be better off making a separate boot stick with the latest version from https://www.memtest86.com/
  17. Sounds like a solid plan (set up the new server with a trial license temporarily). Do you have a way to physically connect the new drives, one or more at a time, to the current box? If so, you could mount and format the new data drives in UD, rsync each drive (a minimal sketch follows), then physically install them in the new server fully populated. Run it up, assign and build parity, and that takes care of the parity-protected data array. Migrating the containers might be a tad more complex; where do you have container appdata set right now?
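     The per-drive copy could be as simple as the sketch below, assuming the new drive is mounted by Unassigned Devices at the hypothetical path /mnt/disks/newdisk1:

         rsync -avX /mnt/disk1/ /mnt/disks/newdisk1/   # trailing slashes copy the contents, not the folder itself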
  18. Truth. I have yet to see a server-type system "feel" faster with that sort of overclocking. Synthetic benchmarks may show small improvements, but nothing that actually affects real-world loads significantly. I HAVE seen timing issues with XMP cause micro-stutters and brief freezes even when it didn't outright crash.
  19. Are you talking about the Windows login? Or Unraid? For Windows, it depends on whether it's a local or MS account.
  20. Quite possibly yes, since Unbalance typically copies then deletes, so you will likely have duplicate files until you rerun the Unbalance job.
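     For reference, a rough shell equivalent of that copy-then-delete pattern (not necessarily Unbalance's actual invocation), with example share and disk names:

         rsync -avX --remove-source-files /mnt/disk1/Share/ /mnt/disk2/Share/   # deletes each source file after it's copied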
  21. That can be done manually with scripting, but it's not a supported feature, and could have some unexpected or unwanted results if you don't know what you are doing.
  22. Yes, but unless things have changed, performance was abysmal using that method. I'd love to see some updated benchmarks though.