Everything posted by JonathanM

  1. There are benefits to keeping BTRFS if your system works well with it. The OP has a real ongoing issue that may be solved by changing to XFS.
  2. In a nutshell: change any shares that currently have data on the cache to cache:yes, stop the Docker and VM services (not just the containers and VMs), run the mover, and make sure the cache pool is empty. Then stop the array, change the visible cache slots to one, change the cache's desired format to XFS, start the array, check that only the cache drive shows as unformatted, select the box to format it, and press the button. Finally, set the desired shares back to cache:prefer, run the mover, and re-enable the Docker and VM services. This is all covered in the wiki article about replacing a cache drive.
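     A minimal command-line sketch for the "make sure the cache pool is empty" step, assuming the pool is mounted at the usual /mnt/cache location and that lsof is available; anything still showing up here means the mover isn't finished or a service is still holding files:
        # list everything remaining on the cache pool
        ls -la /mnt/cache/
        # show how much space is still in use on the pool
        du -sh /mnt/cache/
        # check whether any process still has files open on the pool
        lsof +D /mnt/cache 2>/dev/null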
  3. Try collecting them using the command line.
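     If this refers to collecting diagnostics (that's an assumption on my part), Unraid has a diagnostics command that can be run from a terminal or SSH session even when the web GUI is misbehaving:
        # writes a dated diagnostics zip to /boot/logs on the flash drive
        diagnostics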
  4. I believe you answered your own question here. Honestly though, if you keep tabs on your equipment, a random, out-of-the-blue single device failure of the sort a redundant cache pool would save you from is exceedingly rare. Most failures nowadays come with advance warning.
  5. Possible performance decrease over time. BTW, I believe you are putting too much weight on the redundancy factor. Redundancy only protects against a very limited set of risks; you are much more likely to lose data from something NOT covered by a redundant drive. You really need a good backup strategy: for container data there is CA Backup, and for VMs you have the option of either backing them up like you would any desktop, with client software installed in the VM, or backing up the vdisks and xml with tools inside Unraid. Any of those backup options can be targeted at a location on the parity protected array if you wish. Once you have a solid backup strategy in place, this kind of stress over the integrity of the cache pool is greatly reduced. RAID or Unraid of any flavor is NEVER a replacement for backups. It's only there to keep things up and running in case of a disk failure. It can't protect against data corruption or deletion by users or bad hardware; only backups can do that.
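     For the "back up the vdisks and xml with tools inside Unraid" option, here is a rough sketch of what that can look like from the command line. The VM name Win10, the default /mnt/user/domains vdisk location, and the /mnt/user/backups destination are all placeholders, not anything specific from this thread:
        # shut the VM down cleanly first, or the vdisk copy may be inconsistent
        virsh shutdown Win10
        # export the VM definition (xml)
        virsh dumpxml Win10 > /mnt/user/backups/Win10.xml
        # copy the vdisk; --sparse keeps the copy from ballooning to the full allocated size
        cp --sparse=always /mnt/user/domains/Win10/vdisk1.img /mnt/user/backups/Win10-vdisk1.img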
  6. Since the config folder should be pretty small you may be able to get by with the free version of this https://www.easeus.com/data-recovery/free-recover-data-from-fat32-sd-card.html
  7. What type of NTFS? A simple volume on a basic disk? One partition per drive that covers the entire space? Look in Windows disk management and see. Also try doing a disk check in Windows; if the NTFS filesystem isn't perfectly healthy, it probably won't mount in Linux.
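     A hedged example of poking at it from the Linux side, assuming the ntfs-3g driver is present (the device name sdX1 is a placeholder for whatever the drive shows up as); a manual read-only mount will at least show the actual error instead of a silent failure:
        # try a manual read-only mount to see what ntfs-3g actually complains about
        mkdir -p /mnt/ntfs_test
        mount -t ntfs-3g -o ro /dev/sdX1 /mnt/ntfs_test
        # clean up when finished
        umount /mnt/ntfs_test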
  8. See here; this tutorial is a little dated, so the specifics might look different, but the principles are the same.
  9. There is no point in troubleshooting an old version of Unraid. Please update to 6.8.3 or 6.9.0-rc1 and see if the same thing happens.
  10. Would offering 30-day trial extensions for $20 be an acceptable option? Keep everything as it currently stands, just add paid extensions. It would still be a trial version with phone home on array start, but the expiration date could be extended indefinitely in 30-day increments. Note, I have no idea how much work this would be to implement, I'm just floating an idea from an end user's perspective.
  11. Too many ambiguities. Can you give a specific example? For instance, I personally use Remmina or NoMachine on my desktop daily driver to access both Windows VMs and Linux VMs. For the Windows VMs through Remmina I use the RDP protocol; for Linux through Remmina I use VNC hosted in the VM for some things, and VNC hosted on Unraid for things requiring lower level access at the expense of performance. NoMachine works universally. If I'm on a Windows machine, I use RDP for Windows VMs, unless I need VNC through Unraid, in which case I use UltraVNC Viewer, which also works for the VM-hosted VNC server on the Linux VMs. I can also use ConnectWise Control if I'm not local and don't feel like, or can't, establish a VPN link. ConnectWise is expensive for purely personal use; I use it for business, so it's a write-off. I NEVER use noVNC, it's been too much of a pain for me, and it's easy to set up alternatives for the few times you really need a "local" console for the VM. Hosting the remote access server portion in the VM itself is much better than relying on the KVM VNC server on Unraid.
  12. Nope. A folder is just another directory entry like a file, so as long as there is free space, it goes to the first disk that meets the allocation settings. Since folders take up virtually zero space, ALL the empty folders will end up on a single drive.
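     An easy way to see this for yourself, since a user share is just the union of the same-named folder on each disk. The share and folder names here are placeholders:
        # show which array disks actually hold a physical copy of the folder
        ls -d /mnt/disk*/Media/NewFolder 2>/dev/null
        # compare with the merged user share view
        ls -d /mnt/user/Media/NewFolder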
  13. Temp display isn't part of the stock unraid, so this would be an issue for the temperature plugin.
  14. Not sure if this is intentional or what, but I just noticed that when the array is stopped, CA complains about the docker service not being enabled, so no container results are available. That's all well and good, but the docker service IS enabled, just not available with the array stopped. I'm just picky, but wouldn't it be more accurate to indicate that you can't configure containers until the array is started?
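     For what it's worth, the distinction is visible from the console too: the service can be enabled in the config while the docker daemon isn't running because the array is stopped. A rough check, with the caveat that the DOCKER_ENABLED key name is my assumption of how the stock /boot/config/docker.cfg stores it:
        # is the service enabled in the Unraid config?
        grep DOCKER_ENABLED /boot/config/docker.cfg
        # is the docker daemon actually running right now?
        pidof dockerd || echo "docker daemon not running (array stopped?)"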
  15. Eventually, but not currently. SAS spindown is still a work in progress, hopefully there will be a fully fleshed out solution when 6.9 is final.
  16. Simplest way is to just replace the cache drive using the method in the wiki. In short: 1. disable the Docker and VM services in settings; that should remove those items from the GUI menu. 2. set ALL user shares to cache:yes, then invoke the mover. If you did those two steps properly, and nothing is writing to the server for the duration, the cache drive should now be empty except for the docker.img, which will get recreated when you re-enable the service in step 7. 3. shut down and physically replace the cache drive. 4. start the array and format the new cache drive. 5. set the shares that should live on the cache to cache:prefer. 6. run the mover. 7. enable the Docker and VM services. 8. done. You can change /mnt/cache/appdata to /mnt/user/appdata if you wish, but I'd wait until you get the cache drive swapped. If you have any other files besides the docker.img on the root of the cache drive, those will need to be dealt with after step 2.
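     A small sketch for the check between steps 2 and 3, assuming the pool is mounted at /mnt/cache: after the mover finishes, the only thing left at the top level should be the docker.img (plus anything you knowingly keep on the cache root):
        # list only the top level of the cache drive
        find /mnt/cache -maxdepth 1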
  17. At the moment the most hands-off method is to do it in 2 steps: first to the array, then back to the new pool. Be sure you have disabled both the Docker and VM services; if you still have the VM and Docker menu items in the GUI, they aren't disabled. Set the shares you want to move to cache:yes, then run the mover. After that completes, set them to cache:prefer with the new pool selected, and run the mover again. Alternatively you could manually move them; again, be sure the services are disabled.
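     If you go the manual route instead of the double mover pass, something along these lines works; the old pool at /mnt/cache, a new pool named nvme, and the appdata share are all placeholders, and both services must stay disabled for the duration:
        # copy from the old pool to the new one, preserving permissions and extended attributes
        rsync -avX /mnt/cache/appdata/ /mnt/nvme/appdata/
        # only after verifying the copy is complete, remove the original
        rm -rf /mnt/cache/appdata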
  18. That sounds exactly like what I was expecting, pausing and resuming as a separate operation once the array was fully started. Having a paused operation of any sort should disable and hide the check button that would start a new run, instead I would expect a "resume" and "cancel" button to take that space. Keeping the current state of a paused operation would be a logical way of operating, having to fight the timing to get an operation paused again while the array starts seems illogical to me. If the operation is paused automatically for a shutdown or reboot, having it resume automatically feels logical as well, with the caveat that starting in maintenance or safe mode probably should leave the operation paused regardless of shutdown state. I suppose another way of approaching it would be having the start array button always resume, and add a button beside the start button with the option to start the array but not resume.
  19. How about a compromise? Type mc at the CLI and see how you get on.
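     For anyone who hasn't run into it, mc is Midnight Commander, a two-pane console file manager included with Unraid; you can point each pane at a directory when launching it:
        # open Midnight Commander with the user shares on one side and the cache pool on the other
        mc /mnt/user /mnt/cache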
  20. Inquiring minds want to know: can you leverage this for other VMs as well, by specifying a template of some sort to fix the edits the GUI mangles?