Everything posted by JonathanM

  1. Yes, as long as you copy preserving the "sparseness", it's likely that the OP wouldn't need to do anything. Until everything crashes from low space. Both qcow2 and raw are thin provisioned by default. The worst part about sparse files for VM's is the difficulty of recovering space on the host drive: even when you delete files in the VM, they still take up space in the vdisk until you force the issue, using a utility to zero out free space and possibly defragmenting. Better to deal with the issue head on now, when it's easier and you are working from a known quantity, instead of trying to recover from a crash. As a rule of thumb, you should never overprovision a single vdisk to more than the total space available to the volume. Now, multiple vdisks? Sure. I probably have 1TB provisioned on a 480GB cache drive, but that's spread over 10+ VM's, most of which are experimental and subject to being blown away whenever needed. As long as I keep an eye on free space and keep at least 10% open, I've never run into an issue. Normally I keep my VM's lean and mean, as all the juicy bits stay on the shared array, shared out to any and all VM's that need them.
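    To see how much space a sparse vdisk is really using versus what it is provisioned for, something like the following works from the console (the path and VM name here are only examples, adjust to your own shares):

        qemu-img info /mnt/user/domains/Win10/vdisk1.img            # virtual size vs. actual disk size
        du -h --apparent-size /mnt/user/domains/Win10/vdisk1.img    # provisioned size
        du -h /mnt/user/domains/Win10/vdisk1.img                    # space actually consumed on the host

    And if you do copy a vdisk around, cp --sparse=always (or rsync --sparse) keeps the holes intact instead of ballooning the file out to its full provisioned size.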
  2. Yes, assuming you want that specific vdisk image to live on the cache pool. You can leave it where it is, you just wouldn't benefit from the speed. That's a tough question. In a nutshell, if the actual data allocation inside the vdisk file can be forced to move to a smaller space, then yes, it's possible. However... the mechanics of doing that vary by the VM OS and how that's handled. If you can shrink the partition inside the vdisk using tools in the VM OS, then it's relatively straightforward to copy that partition to a new smaller vdisk file. I wouldn't actually attempt to move the vdisk image, I'd back up and restore the OS to the new vdisk, clone the xml to a new VM, and change the vdisk pointer in the xml to the new location. That way if something messes up you can keep using the old VM while you try a new approach. Depending on your technical skill level, it might be less frustrating to just keep the old VM where it is, and set up a new one using the old as a reference for how you would do things differently in the future. 😀 However, as long as you keep a copy of the original vdisk image intact, there's no harm in playing.
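    Purely as a sketch of the shrink route, and only after the partition inside the guest has already been shrunk well below the target size (the filename and size here are made up, and you should only ever work on a copy, never your only vdisk):

        cp --sparse=always vdisk1.img vdisk1-small.img    # keep the original untouched
        qemu-img resize --shrink vdisk1-small.img 100G    # shrink the copy down to the new size

    Point the cloned VM xml at the smaller copy and boot it; if anything goes wrong, the original vdisk and VM are still there.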
  3. If there is data that you want to keep on the 500GB drives, then you have 2 options.

    Rebuild 1 for 1 sequentially. That will keep the data in the same slots, and nothing will change except you will have more free space.

    Or, rebuild the first drive, copy the data from the other three 500GB drives to the free space on the drive you just rebuilt, and after that is complete set a new config, remove the 3 remaining 500GB drives and replace them with the new drives.

    Setting a new config doesn't erase any data slots, but if you put new drives in with a new config they will be blank and need to be formatted. If you want to keep data from drives you remove with a new config, you have to copy it at some point. Parity doesn't hold any data, it only works with all your existing data drives to emulate 1 missing drive per parity slot. If you remove 4 data drives, nothing can be emulated. If you set a new config, parity must be built with the data drives that you assign. I'm not sure what you mean by minimizing downtime, the array is up and usable during rebuilds. It's not as fast, and continued use will slow down the rebuild considerably, but it's still available.
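    For the copy step, something along these lines from the console would do it (the disk numbers are just an example, use whatever slots your 500GB drives actually occupy):

        # copy the contents of disk2 onto the freshly rebuilt disk1, preserving attributes
        rsync -avX /mnt/disk2/ /mnt/disk1/
        # repeat for the other two 500GB drives, then spot check the data before setting the new config

    Copying disk to disk like this bypasses the user shares, so the files land exactly where you put them.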
  4. No. You would need to find the appropriate Slackware package.
  5. ReiserFS has not been updated or actively supported for many years now. It's going to cause more and more issues as time passes, not fewer. You need to migrate to XFS or BTRFS as soon as you can. There is a sticky thread from 5 years ago on conversion. https://forums.unraid.net/topic/35815-re-format-xfs-on-replacement-drive-convert-from-rfs-to-xfs-discussion-only/
  6. That's it in a nutshell. Depending on your financial situation and desire to recover the files, you may be better off sending it to a data recovery firm, but that bill would end with 3 zeroes before the decimal. If it's highly valued data, perhaps waiting until everything (and I do mean everything) stabilizes and settles down before attempting anything would be more prudent.
  7. Preclear is only 1 of many different ways to accomplish this goal, so no, you don't need to preclear. The drive manufacturer has its own test tools, there are the built-in SMART extended tests, and there are other third party utilities available. What is best and easiest depends on the other PC hardware you have available to you.
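    If you want to stay entirely inside Unraid, the SMART extended test can be kicked off from the console (replace sdX with the actual device, and expect it to take several hours on a large drive):

        smartctl -t long /dev/sdX    # start the extended self-test in the background
        smartctl -a /dev/sdX         # check progress and, later, the results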
  8. Schedule a user script at the desired interval.

    #!/bin/bash
    docker restart <container name>
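    If the built-in intervals don't fit, the custom schedule option in the User Scripts plugin takes standard cron syntax, for example:

        0 */6 * * *    # every six hours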
  9. Squid is 50!

    Yep! Makes you wonder what else she got into. 🤣 Happy Birthday!
  10. They utilize the cache pool or Unassigned Devices which doesn't use super.dat
  11. The LSI part layout is different on your board, looks like it has slightly more clearance.
  12. Since you passed the drive through to the VM, it is, for all intents and purposes, in a separate computer from Unraid. If you want to access its content from Unraid, you will need to set it up as a network share IN THE VM, and connect to that share using Unassigned Devices. Normal best practice for a VM would be to mount it in UD as a data drive, formatted to XFS, and if you want to write to it using another computer, be that a VM or a physical machine, you would set UD to share it and connect it as a network folder in that computer. The only exception I can think of is a gaming VM that refuses to allow content to reside on a network share. I second trurl though, keeping a windows VM active for downloading seems like a waste of resources. A docker container can share CPU, RAM and disks with Unraid, giving and taking as needed. A VM holds whatever resources it's allocated hostage, as you've found with that disk.
  13. This sounds suspiciously like you passed the entire device to the VM, which now controls the content. If so, then unraid has no access to it. If this device shows up in the VM's disk manager, it's not going to work like you want. It should have been mapped as a network drive in the windows VM.
  14. I don't think unraid's built in noVNC client for VM's can be hijacked like that. Binhex built a noVNC client inside the container, which is what I suspect you need for the Selenium container for it to work the same. It's just a coincidence (because there aren't any other web based VNC viewers that I'm aware of) that unraid uses noVNC to access VM's and binhex uses it to access his container.
  15. Which platforms charge and which don't? Can you provide links to their policies where it specifically states their rates? Many people here are using google drive, which as far as I know doesn't charge to give you your data back.
  16. His post was telling you to follow directions, which you declined to do; instead you replied with snark. If you follow the directions, you should get more help. This thread is outdated, and the symptoms and fixes are many times hardware specific, so generic random ideas are the best you can get with no real info to go on.
  17. You may want to amend your report to disclose that it concerns a docker container, not a full install. I don't believe the issue occurs on a normal install, but I could be wrong. Please read through this thread to get a sense of the history and what has already been tried to troubleshoot the issue.
  18. virsh start / shutdown / suspend / resume <vmname>

    Google virsh commands.
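    A minimal sketch of how those fit into a user script, assuming a VM that virsh list --all shows as "Win10" (substitute your own VM name):

        #!/bin/bash
        virsh shutdown Win10    # ask the guest for a clean shutdown
        virsh list --all        # confirm the state afterwards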
  19. As long as your current pool is healthy and properly RAID1, I think that is correct. However... I think the chances are pretty high your current pool may need some work before you proceed. Attach your diagnostics zip file to your next post in this thread, and wait for @johnnie.black to weigh in on your next steps. However you proceed, I recommend being sure your backups for significant files in the pool are current before moving on.
  20. Have you looked through this thread? https://forums.unraid.net/topic/39981-multiple-gpus-in-a-single-vm/
  21. Interesting. Honestly until people bring it up, I forget I have the restart script in place. What was so annoying to you? Or is it just the principle of the matter?
  22. Oh, a fix would be much better, but I don't like SAB, and this workaround hasn't caused me any issues, so I'm ok with it.
  23. Zero intervention required. I've had the following as a user scripts entry running hourly for probably a year.

    #!/bin/bash
    docker restart binhex-nzbget

    Never have to touch it.