Everything posted by JonathanM

  1. Here is a link to a post with some of the parts you need. I use it to start a VM after my gateway IP is pingable. https://forums.unraid.net/topic/78454-boot-orderpriority-for-vms-and-dockers/?do=findComment&comment=727416 You could reuse some of that code to stop and start containers with docker pause/unpause or docker stop/start. It would need a major rewrite to accomplish exactly what you want cleanly, but I think all the parts are there.
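     If you want to adapt that idea, here is a minimal sketch of the pattern (the gateway IP, VM name, and container name are placeholders, not values from the linked post):

         #!/bin/bash
         # Wait until the gateway answers a ping before doing anything else
         GATEWAY=192.168.1.1              # placeholder, substitute your gateway IP
         until ping -c 1 -W 2 "$GATEWAY" >/dev/null 2>&1; do
             sleep 5
         done
         # Network is up: start the VM, then resume a paused container
         virsh start "MyVM"               # placeholder VM name
         docker unpause my-container      # placeholder container name

     Swap unpause for pause (or stop/start) depending on which behavior you're after.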
  2. Are you sure the correct cabling is in place? Going from a SAS controller to SATA disks uses the same connectors as going from SATA ports to a SAS backplane, but the cables are wired differently: forward vs. reverse breakout.
  3. Maybe try separating the download and unpack destinations into different pools?
  4. No idea if this is relevant or helpful since I personally don't use plex, but I have seen several mentions of deleting a codec folder in the plex installation and allowing it to be rebuilt. Maybe investigate that angle and see if it gets you anywhere?
  5. My plan would be to go ahead and add the new NVME as a pool. Name it cache if you wish; it doesn't matter much as long as you keep track of the name. Then disable the docker service (not just the containers), and the VM service if the VM image is on disk6, and set up the mover to transfer all the shares currently on disk6 to the new pool. That way you don't have to do the backup and restore. All this hinges on being able to have both NVME drives installed at the same time; if that's not possible, then your only safe option is to copy the files elsewhere off of disk6 like you were planning. If you are ABSOLUTELY SURE that all the rest of your disks are 100% healthy, you could always remove the current NVME, allow the rest of the disks with parity to emulate the files that were on disk6, and copy them to the new NVME assigned to a pool. Nothing "special", but there are so many ways of accomplishing what you want to do that it's tough to pick one way as best. The only thing I would warn is to make sure the Docker and VM tabs aren't present in the GUI when you start copying files around, so you can be sure that there won't be any in-use system files that don't get copied. Disabling the services will remove the tabs from the GUI until they are enabled again in Settings. At some point in the process you will have to follow the shrink array procedure as mentioned by JorgeB to remove a data disk slot, unless you replace and rebuild it with another 4TB drive. Given that you have 10TB free, you really don't need another 4TB of space right now. Sorry if I confused the issue with more options; be sure to ask more questions and lay out your plan with more specifics before you actually do anything, so we can look over your steps and verify whether it sounds safe.
  6. Take a look in the support thread for the specific container that you are running; there should be a FAQ or documentation explaining it, and if not, ask the question in that thread.
  7. 30 lashes with a wet noodle!🤣 Thanks for getting us moved to the next step in the seemingly explosive growth of our little community!
  8. sleep <number of seconds>
     virsh start <name of VM>
     sleep X
     virsh start X
     etc, etc, for however many VMs and however long you want to wait between starting them.
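     For example, a concrete version of the same pattern (the VM names and delays are just placeholders):

         sleep 30
         virsh start "Windows10"
         sleep 30
         virsh start "UbuntuServer"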
  9. Each disk is a separate single-member ZFS filesystem. Parity array disks are not pooled at a bit level; each one can have a different filesystem if desired, they are independent of each other, and each can be read separately if needed.
  10. Unraid or any RAID isn't backup, it's high availability, where a failed drive can be emulated and replaced while your data is still accessible. Backup implies the ability to recover from a corrupted or deleted file, which requires a second copy somewhere besides the array. You need to have a second copy away from the array of any data you can't afford to lose.
  11. The multiple pools thing is a recent (relatively) addition that hasn't been fully integrated into mover actions. Hopefully this will all be taken care of in 6.13, at least that's been the stated intent. There aren't any public previews of 6.13 yet, as 6.12 hasn't totally settled out, but it's looking hopeful.
  12. Looks fixed now. When I posted that, the text editor buttons were missing in the dark theme, and the light theme was completely blown out with text overlapping, icons full screen, nothing working.
  13. Still massively messed up here. Only marginally usable in dark mode; light mode is completely screwed. @SpencerJ, any timeline for getting this fixed?
  14. They are in the docker image, which is not normally browsable. If you don't care about the files, since you said they can be deleted, just delete and recreate the docker image and they will be gone. You can reinstall all your containers quickly by using the Previous Apps section. After you get that sorted, before you start uploading more files into the image again, examine the container config path mappings. The host side is where the files will go on the array; the container side is where you point the application. So if the mapping shows host = /mnt/user/share/files and container = /home/nobody, then files synced to /home/nobody in the container will appear in the \\share\files folder.
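     To illustrate with the example mapping above, the equivalent docker run volume flag would be:

         docker run -v /mnt/user/share/files:/home/nobody ...

     Anything the application writes to /home/nobody inside the container lands in /mnt/user/share/files on the host, which is what shows up over the network as \\share\files.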
  15. The diskspeed container might be able to pinpoint which disk is causing the slowdown; diagnostics taken after an event may also be helpful.
  16. Yes. After the array has been started once and the config has been committed, you can later put a new drive into the disk1 slot and Unraid will clear it to keep parity valid. But if you plan to replace the drive, why bother zeroing it out? Just do a normal drive replacement.
  17. You don't, an admin must do it. Reply back with the username you want, and hopefully @SpencerJ or one of the other admins will change it.
  18. Docker containers can only see the paths internal to the container unless you map a host path to a corresponding container path. Those paths are mapped in the container template. Sometimes containers require host paths pointing to unassigned device locations to be set to slave r/w instead of just r/w.
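     As a sketch at the docker CLI level (the host and container paths here are placeholders), slave propagation is just an extra option on the volume mount:

         docker run -v /mnt/disks/mydisk:/data:rw,slave ...

     With slave propagation, a filesystem mounted under the host path after the container has started is still visible inside the container, which is why it matters for unassigned devices.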
  19. Not likely. One of the features of Unraid is that the boot stick is easily readable and editable in virtually any OS, FAT32 is still the lowest common denominator for file systems. I'm not saying it will never happen, but until Microsoft Windows can natively read and write a ZFS mirror I don't think it's going to be considered.
  20. Any BTRFS RAID level other than 0 and single has redundancy, so if the pool were RAID1, RAID10, or RAID5 you could recover from a single drive failure. Note that BTRFS RAID5 is not a recommended profile.
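     If you aren't sure which profile a pool is using, btrfs can report it; for example (the mount point is a placeholder):

         btrfs filesystem df /mnt/cache

     The profile shown next to Data (e.g. RAID1) is what determines whether a single drive failure is survivable.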
  21. Just keep in mind that if any one of the 4 disks fails you will lose all the data on the whole pool, not just the disk that failed.