Everything posted by JonathanM

  1. That's going to be an issue. I'm not aware of a Windows method of mounting XFS or BTRFS for writing. Theoretically, as in I've not tried it, you should be able to mount the individual Windows drives as Unassigned Devices in Unraid. So it's possible you could build out the Unraid array with your empty drives and do your migration in Unraid instead of in Windows. Wouldn't hurt to see if you can read the Windows drives in Unraid.
  2. If all you needed was file access, you could have booted a live Linux distribution and mounted the data drive in question read only (a mount sketch follows this post). You also could have set up a trial Unraid install, reassigned all your disks to the correct locations, and told it parity was already valid. Getting VMs and containers running is obviously a little more complex.
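     In case it helps anyone finding this later, a minimal sketch of that read-only mount from a live distro. The device name /dev/sdX1 is a placeholder; check yours with lsblk first:

        # Identify the data drive; device names below are placeholders.
        lsblk -f

        # Mount an XFS array disk read only. norecovery lets the mount
        # succeed even if the filesystem log wasn't cleanly flushed.
        mkdir -p /mnt/recovery
        mount -t xfs -o ro,norecovery /dev/sdX1 /mnt/recovery

        # For a BTRFS disk the equivalent would be:
        # mount -t btrfs -o ro /dev/sdX1 /mnt/recovery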
  3. That's not going to work. Closing the containers is not enough; you need to disable the docker service in Settings, so the word "Docker" no longer appears in the GUI navigation bar.
  4. Half right (maybe just a quarter). No need to remove drives and rebuild the array, but you do need to copy the data elsewhere, either to another data drive in the array or wherever. There is a whole thread stickied on doing the transition, but in a nutshell: accumulate enough free space to copy the content of your largest ReiserFS drive, format that drive to a newer filesystem, copy the next victim to the freshly formatted drive, format it, and lather, rinse, repeat until the last drive is formatted. Notice there was no drive removal, no rebuilding, nothing like that. Everything is done with the drives assigned where they are.

     Now, since you said... my plan of attack would be, assuming you have at least one vacant physical drive slot open and your parity drive(s) are currently smaller than 14TB:

     1. Purchase two 14TB drives.
     2. Run a non-correcting parity check; proceed only if there are zero errors and healthy SMART reports on all existing drives. If either is false, my whole strategy changes.
     3. Replace the parity drive with one of the 14TB drives, or if your risk tolerance is low, add it as parity2. After parity is done building, run another parity check.
     4. If you kept parity1 and added parity2, remove the parity1 drive now. Parity2 has to stay as parity2 to be valid, so you'll just have to deal with the OCD of not having a parity1.
     5. Add the second 14TB as a new data drive. Run another parity check to be sure everything is still stable after all the horsing around.
     6. Copy the content of as many ReiserFS drives as you wish to the new drive. NOTE: I said COPY, on purpose. Four 3TB drives would be a good batch if they were all full; if not, maybe more would fit. I'd shoot to keep 2 or 3TB free on the 14TB for now. (A copy-and-verify sketch follows this post.)
     7. Once you are satisfied the copy went well, verifying by comparing content to whatever degree your risk tolerance and OCD require, format any of the source drives you want to keep in the array, and use those as targets to copy the rest of your ReiserFS drives.
     8. When the only ReiserFS drives left in the array are slated for removal because their content has already been copied to other array drives in the new format, set a new config with only the desired drives and rebuild parity. If you were working with parity2, you can move it to parity1 for the rebuild.

     Caveat for this method: while you are copying from drive to drive, there will be duplicate files that the user share file system will hide, because only the first copy of an identically named file in the same share path can be used. That means writes and updates to user shares could be lost if the changes are made to the source of the copy instead of the destination, and in any case attempting to verify the copy will fail on the changed file. Reads can proceed as normal. If you must continue to write to the array while you have duplicates in place, there are ways to exclude the source disks, but it's easier just not to write if possible.

     The reason I stressed copying vs. moving is speed. Moving means a copy followed by a deletion, which is also a write, but on the source disk. Deleting a file from a well-used ReiserFS volume is slow to begin with; add in the parity calculation, plus the write to the destination disk which is also updating parity in a different sector, and a move can easily be 10x slower than a copy. It's much faster to copy the content so only the destination disk and parity disk are writing. Plus you get to verify the copy, because the source is still there for secondary checks and a recopy if there is an issue, and once you are sure all your data is copied, the format to the new filesystem only takes a minute or two, vs. hours to delete the files one by one.
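     For steps 6 and 7, a minimal sketch of one way to copy and verify disk to disk, assuming disk3 is the ReiserFS source and disk5 is the new drive (disk numbers are placeholders). Going disk to disk instead of through /mnt/user sidesteps the duplicate-file behavior described above:

        # Copy disk to disk, preserving permissions and timestamps.
        rsync -avh --progress /mnt/disk3/ /mnt/disk5/

        # Verify: -c forces a full checksum comparison, and --dry-run
        # with -i lists any file that differs without changing anything.
        rsync -avc --dry-run -i /mnt/disk3/ /mnt/disk5/

        # Only after a clean verify: format the source from the GUI by
        # changing its filesystem, then use it as the next target.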
  5. This is the important bit. The agent is installed in the VM itself, not in Unraid.
  6. Attach diagnostics to your next post. FORMATTING is NEVER expected on a rebuild. Was the data slot emulated with all the data when you removed the old drive? The rebuild should be identical to what was showing when the drive was removed.
  7. Simply writing and verifying all zeros with the standard preclear routine is secure enough for most cases. You would need special equipment and rather expensive software to recover anything meaningful. If you think you could be the target of some government action, you probably should physically destroy the disk instead of selling or disposing of it intact. (A manual zeroing sketch follows this post.)
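     If you'd rather not use the preclear plugin, a single zeroing pass can be done by hand. A minimal sketch, with /dev/sdX as a placeholder; this is destructive, so triple-check the device name with lsblk first:

        # DESTRUCTIVE: overwrites the entire disk with zeros.
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress

        # Spot-check the wipe: cmp reads both until they differ, so a
        # fully zeroed disk ends with "cmp: EOF on /dev/sdX".
        cmp /dev/sdX /dev/zero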
  8. Not recommended to put SSDs in the parity array, but you currently must still have one drive assigned to a data disk position to start the array, so... For now, assign any old USB stick to the disk1 array position, and set up your SSDs in whatever pool configuration you're comfortable with: three single volumes, a RAID1 with two of the drives plus a single, or all three in one volume with whatever level of redundancy balances protection against capacity for you. Set all your shares to either pool only or pool prefer, so you don't actually put anything on your placeholder USB stick. If it were me in your position, I'd set up three individual pools: one for media files, one for system files like the docker image and such, and one for container appdata. Then when you add actual HDDs to the array, you can set the media file share to pool yes, and the mover will empty the media to the parity array.
  9. The templates for the containers are stored on the flash drive. If they aren't corrupted, you can copy them from your old flash drive or a recent flash drive backup (see the sketch below).
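     The user templates normally live under config/plugins/dockerMan/templates-user on the flash. A minimal sketch of the restore, assuming your backup is at /path/to/flash-backup (a placeholder path):

        # Restore container templates from a flash backup onto the live
        # flash drive mounted at /boot. Backup path is a placeholder.
        cp /path/to/flash-backup/config/plugins/dockerMan/templates-user/*.xml \
           /boot/config/plugins/dockerMan/templates-user/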
  10. Try NoMachine instead of the host-based VNC.
  11. That is OLD wording, and it implies the array is unavailable while the clearing is done. That is no longer the case; the clearing is done in the background while the array is still online and available, and when the clearing is done Unraid adds the drive and makes it available to format.
  12. Are you keeping a backup of that data elsewhere?
  13. Are you positive you aren't connected to a "guest" wifi? Many devices can be set up with a main wifi that has direct access to the LAN, and a guest wifi that only has WAN access.
  14. I'm doing something similar with 3 network cards: 2 are isolated to a pfSense VM, and the third is allocated to the server. Both the server NIC and the pfSense LAN NIC are connected to a switch that serves the rest of the LAN. HOWEVER... unless you are willing to maintain a capable router on standby to service the LAN when your server is down for whatever reason, AND you are an IT expert with time to tinker, I would strongly recommend NOT doing this. Another caveat: a trial license will NOT work, as the trial must have internet access to start the array, and without the array started, you won't have internet. It works quite well for me with a paid license; I've had my setup running this way for several years with no major issues, and I've only had to revert to my backup router twice that I can remember in all that time.
  15. How do you handle user shares that have part of their storage go away when that pool is down? For instance, I keep some VM vdisks on the parity array and some in a pool. They are all seamlessly accessible via /mnt/user/domains, I move them as needed for speed or space, and they all just work no matter which disk actually holds the file.
  16. I've seen someone use that type of setup, and I think they were eventually successful, but personally I gave Unraid its own physical 10Gb card and passed the 2 motherboard 1Gb ports to pfSense. The 10Gb card connects to the same switch as the pfSense LAN card, so there are 2 cables to the switch and one to the modem. I like giving 10Gb of server bandwidth to all my LAN clients, and my internet is FAR from gigabit.
  17. How do you accomplish that when the containers need the array to function? And before you say "all my containers can run without the array," remember that the vast majority of the containers and VMs people use with Unraid use the array for bulk or working storage. This is not a simple ask; it would require major rewrites of almost all of Unraid and would change how things work at a fundamental level.
  18. Also keep in mind that many (if not all) of the really essential packages are now part of the base OS. It was not clear that anyone would support a one-stop shop like NerdPack, so a serious effort was made to make it unneeded. You may not even need NerdTools.
  19. https://forums.unraid.net/topic/129200-plug-in-nerdtools/?do=findComment&comment=1177720
  20. The last time I tried it (years ago) I gave up after a week. There are very few scenarios where that method makes the most sense; normally it's better to just rebuild parity and get back to being protected sooner rather than waiting. What is your situation? Are all your drives proven healthy? Zero errors on all recent parity checks?
  21. Use whatever method you are most comfortable with. There are definitely fast ways, but given that you probably also have data scattered around that definitely SHOULDN'T be deleted, I'd err on the side of caution and pick through things methodically. Since you obviously stepped into a poorly managed IT situation, I also feel compelled to ask about your backup situation. Some people wrongly assume that Unraid's ability to recover a failed drive is the same thing as a backup. IT IS NOT. Unraid by itself can't restore corrupt or deleted data; you need versioned backups in a second physical location. (A snapshot-style sketch follows this post.)
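     For anyone wondering what "versioned" looks like in practice, a minimal sketch using rsync hard-link snapshots. The share and destination paths are placeholders, and a real backup should land on a second machine or location:

        #!/bin/bash
        # Versioned snapshots via rsync hard links. Paths are examples.
        SRC=/mnt/user/important
        DEST=/mnt/backup/important
        DATE=$(date +%Y-%m-%d)

        # Unchanged files are hard-linked against the previous snapshot,
        # so each snapshot looks complete but only changes use new space.
        # The first run warns that "latest" doesn't exist yet; that's fine.
        rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$DATE/"

        # Repoint "latest" at the snapshot just taken.
        rm -f "$DEST/latest"
        ln -s "$DEST/$DATE" "$DEST/latest"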
  22. Yes, if you set up a new flash drive with a trial, purchase the license, then copy the entire config folder to the newly licensed flash drive without the old *.key file. Since you say you are having issues with the flash drive, I'm assuming you are working with backup files from it, so just make sure you keep each key file with the physical key it was issued to, and overwrite the rest of the config folder's content with the active server config files. You probably should just remove the old dead *.key file from the backup you are working with, and make a separate copy of the newly issued key file, labeled to correspond with the new physical USB stick (a sketch follows this post). I'm not sure how clear I was, so if you have questions just ask. P.S. Are you using a USB 2.0 connection to the motherboard? Typically that results in fewer issues. My favorite USB drives are all metal and relatively large, for good heat dissipation.
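     A minimal sketch of that config copy, assuming the backup lives at /path/to/flash-backup and the new licensed stick is mounted at /boot; both paths are placeholders:

        # Copy the old config onto the new stick, keeping the new stick's
        # freshly issued key by excluding all *.key files from the copy.
        rsync -av --exclude='*.key' /path/to/flash-backup/config/ /boot/config/

        # Keep a labeled copy of the new key with its matching USB stick.
        cp /boot/config/*.key /path/to/safe-place/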