Everything posted by JonathanM

  1. Don't do that. You need to leave resources available for the host (Unraid) to emulate the motherboard and other I/O. At the very least, leave CPU 0 available for Unraid. Since you only have 4 threads, I'd only use CPU 2 and CPU 3 for the VM; maybe try with only the last thread for the VM and leave the other three for the host. That may also be too much, depending on how much RAM the system has. If the physical box has 32GB, 8GB for the VM should be fine. If it has 16GB or less, reduce the VM to 4096MB. The more resources you tie up in the VM, the slower the host is going to run, which in turn slows the VM way down. Give the VM the absolute minimum and add a little at a time until performance stops improving.
  2. Just the reverse: array -> cache, or cache only. The advantage of having the cache as primary and the array as secondary with a "move to cache" setting is that if you ever accidentally fill the cache, and the minimum free space is set correctly, the excess data will go to the array; then, when the cache has room again, the mover will put the data back on the cache. Cache only will give an out-of-space error when the cache gets below the minimum free space setting.
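     To make the overflow behavior concrete, here's a toy model in Python of where a new write lands (an illustration of the behavior described above, not Unraid's actual code; the numbers are made up):

     ```python
     # Illustration only: models the write-placement behavior described above.
     # Unraid compares free space to the minimum at file creation, which is why
     # minimum free space should be set larger than the biggest file you expect to write.
     def choose_target(cache_free_gb: float, min_free_gb: float, overflow_to_array: bool) -> str:
         if cache_free_gb > min_free_gb:
             return "cache"
         return "array" if overflow_to_array else "error: out of space"

     print(choose_target(100, 50, True))   # room to spare -> "cache"
     print(choose_target(40, 50, True))    # spills to "array"; mover brings it back later
     print(choose_target(40, 50, False))   # cache only -> "error: out of space"
     ```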
  3. Yes, the parity array is great for mass storage, but very bad for random I/O, especially random writes. SSD or NVMe is a must for vdisks.
  4. Yep, that would be why the VM is dog slow. vdisks should be on fast pools, not parity-protected array disks.
  5. All support questions for specific containers should be posted in their thread, not spread out across the forum. That way people can easily see what others have asked, and the answers they received. Many problems have already been asked and answered.
  6. At any point did you format a drive? If so, you erased all the existing files.
  7. 1. What drive is the VM using for its vdisk or passthrough? 2. Try changing the RAM to 8GB.
  8. Perhaps post in the support thread specific to your container. In the Unraid GUI, click on the container icon, and select support.
  9. Not using bridge network.
  10. Obviously, before messing with it, make a backup. Stop your HA VM and click on the 32GB under CAPACITY. Change it to 42G, or whatever floats your boat, and apply the change. Set up a new VM with your favorite live utility OS as the ISO; https://gparted.org/livecd.php is a good option. Add the existing haos vmdk vdisk file as a disk to the new VM. Boot the new VM; it should start the utility OS, where you can use gparted to expand the partition to fill the expanded vdisk image.
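      If you'd rather grow the image from the command line than through the GUI, a minimal sketch (the path is a made-up example; back the file up first, and the gparted step above is still needed to expand the partition inside):

      ```python
      # A sketch, not what the Unraid GUI runs: grow a vdisk image with qemu-img.
      import subprocess

      vdisk = "/mnt/user/domains/haos/vdisk1.img"  # hypothetical path to the vdisk
      subprocess.run(["qemu-img", "resize", vdisk, "42G"], check=True)
      ```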
  11. Which is why the regular Unraid container startup has customizable delays between containers. A black start from nothing is easier; a partially running start during a backup sequence is more complex and needs even better customization. Shutdown and startup conditionals and/or delays would be ideal. As an example, for my nextcloud stack I'd like nextcloud to be stopped and allowed to close completely, then collabora stopped, then mariadb. Back up all three. Then start mariadb, start collabora, wait for those to be ready to accept connections, and start nextcloud, roughly as sketched below. The arr stack is even more complex: the arrs and yt-dl need to be stopped, then the nzb client, then the torrent client and VPN. Startup should be exactly the reverse, with ping conditionals ideal and blind delays acceptable.
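      A rough sketch of that ordered stop/backup/start driven from the docker CLI (this is wishlist scripting, not an existing Unraid feature; the container names, server IP, and ports are assumptions for illustration):

      ```python
      import socket
      import subprocess
      import time

      def docker(cmd: str, name: str) -> None:
          subprocess.run(["docker", cmd, name], check=True)  # 'docker stop' blocks until exit

      def wait_for_port(host: str, port: int, timeout: float = 120.0) -> None:
          """Crude 'ready to accept connections' check: retry TCP connects until one succeeds."""
          deadline = time.time() + timeout
          while time.time() < deadline:
              try:
                  with socket.create_connection((host, port), timeout=2):
                      return
              except OSError:
                  time.sleep(2)
          raise TimeoutError(f"{host}:{port} never came up")

      # Stop in dependency order: the app first, then its helpers, then the database.
      for name in ["nextcloud", "collabora", "mariadb"]:
          docker("stop", name)

      # ... back up all three appdata folders here ...

      # Start in reverse, gating each step on readiness instead of a blind delay.
      docker("start", "mariadb")
      wait_for_port("192.168.1.10", 3306)   # hypothetical host IP, MariaDB port
      docker("start", "collabora")
      wait_for_port("192.168.1.10", 9980)   # Collabora's usual port
      docker("start", "nextcloud")
      ```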
  12. I think that is backwards. emhttp was the only web engine in the past; currently nginx is the web server, and emhttp takes care of the background tasks.
  13. Sorry, I didn't mean to imply that there are properly working boards that don't run with all slots full. If the manufacturer says their board will run with model XXXX RAM, it should run it fine, but that doesn't mean boards don't fail. I just wanted to let you know that this could be a failure symptom: you can have a board where all the slots are fine and all the DIMMs are fine, but running all 4 at once isn't. I personally had a board that ran fine with all 4 DIMMs for years, until it didn't. The only failure mode was random errors when all 4 slots were full; it ran perfectly on any 2 of the DIMMs, but put all 4 in and memtest would fail every time.
  14. Are you positive nothing else was trying to access the drive during the test?
  15. Some motherboards just won't run with all slots filled.
  16. Yeah, but they hawk the ability to easily daisy chain them in the same system, and even have pinouts and diagrams to show how. I can see how stacking these as you add drives could be a good way to go, assuming they work as promised.
  17. It's more correct to think of the USB stick as firmware with space for storing changed settings. Unraid loads into and runs from RAM; it only touches the USB stick when you change settings. Container appdata and executables should live on an SSD, or multiple SSDs for redundancy, separate from the main Unraid storage array. Legacy documentation and videos will refer to that storage space as "cache"; now it's more properly referred to as a "pool", of which you can create as many as make sense for the desired speed and redundancy.
  18. Hoopster summed it up quite well, but I wanted to stick my .02 into the discussion to hopefully clear this up a little more. Parity doesn't hold any data. Period. It's not a backup. Period. It contains the missing bit in the equation formed by adding up the bits in an address row. Pick any arbitrary data offset: say drive1 has a 0, drive2 has a 1, drive3 has a 1, and drive4 has a 1, so parity would need to be a 1 to make the column add up to 0 (mod 2). Remove any SINGLE drive, do the math to make the equation 0 again, and you know what bit belongs in that column of the missing drive. So you can protect ANY number of drives, and as long as you only lose 1 drive, the rest of the drives PLUS PARITY can recreate that ONE missing drive. Lose 2 drives and you lose the content of both, but since Unraid doesn't stripe across drives, you only lose the failed drives. Unraid also has the capability to use two parity drives, so you can recover from 2 simultaneous failures. However, the second parity is a much more complex math equation that takes into account which position the drives are in, so it's a little more computationally intensive. The extra math is trivial for just about any modern processor.
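      The single-parity math fits in a few lines of Python, as a toy model of the idea (not Unraid's implementation):

      ```python
      # Parity is the XOR (bitwise sum mod 2) across every data drive, so any
      # ONE missing drive can be rebuilt from the survivors plus parity.
      from functools import reduce

      drives = {
          "drive1": bytes([0b0]),
          "drive2": bytes([0b1]),
          "drive3": bytes([0b1]),
          "drive4": bytes([0b1]),
      }

      def xor_all(blocks):
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

      parity = xor_all(drives.values())  # 0b1 here, so each column sums to 0 mod 2

      # "Lose" drive3, then recreate it from the remaining drives PLUS PARITY.
      lost = drives.pop("drive3")
      rebuilt = xor_all(list(drives.values()) + [parity])
      assert rebuilt == lost
      ```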
  19. New, unproven, expensive? The advertising looks great; do you have any links to real third-party tests?
  20. That's not a thing. Unraid will quite happily continue to use a disk slot even if the drive fails a write and is disabled.
  21. Strange. I'm out of things to try at this point. Maybe someone else will have some ideas.