Posts posted by JonathanM

  1. 7 hours ago, luisp23 said:

    VM Settings -

    Logical CPUs: all ticked (CPU 0 - CPU 3)

    Don't do that. You need to leave resources available for the host (Unraid) to emulate the motherboard and other I/O. At the very least, leave CPU 0 available for Unraid. Since you only have 4 threads, I'd only use CPU 2 and CPU 3 for the VM; maybe try with only the last thread for the VM and leave the other three for the host.

    7 hours ago, luisp23 said:

    Initial Memory & Max Memory - 8192 MB

    That may also be too much, depending on how much RAM the system has. If the physical box has 32GB, 8GB for the VM should be fine. If it has 16GB or less, reduce the VM to 4096 MB.

     

    The more resources you tie up in the VM, the slower the host is going to run, which in turn slows the VM way down. Give the VM the absolute minimum and add a little at a time until performance stops improving.
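
    Under the hood, Unraid's "Logical CPUs" checkboxes become libvirt CPU pinning. As a minimal sketch of the same idea using the libvirt Python bindings, assuming a single-vCPU VM named "Win10" on a 4-thread host (both names and counts are hypothetical, adjust to your box):

    ```python
    import libvirt

    # Connect to the local hypervisor (Unraid drives its VMs through libvirt/QEMU).
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('Win10')  # hypothetical VM name

    # Pin the VM's only vCPU (index 0) to host CPU 3, leaving CPUs 0-2 free for
    # Unraid itself. cpumap holds one boolean per host CPU: (CPU0, CPU1, CPU2, CPU3).
    dom.pinVcpu(0, (False, False, False, True))
    conn.close()
    ```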

  2. 7 hours ago, Lien1454 said:

    Am I better off doing cache --> array

    Just the reverse: array -> cache, or cache only.

    The advantage of having the cache as primary and the array as secondary, with a "move to cache" setting, is that if you ever accidentally fill the cache, and the minimum free space is set correctly, the excess data will go to the array; then, when the cache has room, the mover will put the data back on the cache.

     

    Cache only will give an out-of-space error when free space drops below the set minimum.
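
    As a toy illustration of that overflow behavior (the threshold and function are hypothetical, not Unraid's actual code):

    ```python
    # Minimum free space configured for the cache pool, in GB (hypothetical value).
    CACHE_MIN_FREE = 50

    def choose_target(file_size_gb, cache_free_gb):
        """Toy model of 'cache primary, array secondary' placement."""
        # New writes land on the cache unless that would drop it below the
        # minimum free space; then they overflow to the array. The mover later
        # migrates overflow back to the cache once it has room.
        if cache_free_gb - file_size_gb >= CACHE_MIN_FREE:
            return "cache"
        return "array"

    print(choose_target(10, 200))  # -> cache
    print(choose_target(10, 55))   # -> array (would breach the minimum free space)
    ```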

  3. 12 minutes ago, ijuarez said:

     

    I just learned something. So are you saying it would be best to have the VM on the cache or an unassigned disk for better performance?

    Yes, the parity array is great for mass storage but very bad for random I/O, especially random writes. An SSD or NVMe drive is a must for vdisks.

  4. 8 hours ago, peterbata said:

    Are you referring to the vDisk location? It appears to be on Disk 1, which is the 8TB SAS drive in my array.

    Yep, that would be why the VM is dog slow. vdisks should be on fast pools, not parity-protected array disks.

  5. Obviously, before messing with it, make a backup.

     

    Stop your HA VM, and click on the 32GB under CAPACITY. Change it to 42G, or whatever floats your boat, and apply the change.

     

    Set up a new VM with your favorite live utility OS as the ISO; https://gparted.org/livecd.php is a good option. Add the existing haos vmdk vdisk file as a disk to the new VM. Boot the new VM; it should start the utility OS, where you can use GParted to expand the partition to fill the enlarged vdisk image.
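
    For what it's worth, the GUI capacity change boils down to a qemu-img resize on the image file. If you'd rather script it, a minimal sketch (the path is hypothetical; assumes a raw or qcow2 image, the VM stopped, and a backup already taken):

    ```python
    import subprocess

    # Grow the vdisk image to 42G. Only run with the VM stopped, after a backup.
    # The path is hypothetical -- substitute your actual vdisk location.
    subprocess.run(
        ["qemu-img", "resize", "/mnt/user/domains/haos/haos.img", "42G"],
        check=True,
    )
    ```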


     

  6. 35 minutes ago, Kilrah said:

    Problem with a VPN container is it'll likely take some time before it's ready, but since there's no delay setting it'll be "started" in a second or two, then the next is started; but actually it'll need maybe 30 seconds to connect to the VPN and actually be available for others to use...

    Which is why Unraid's regular container startup has customizable delays between containers.

     

    A black start from nothing is easier; a partially-running start during a backup sequence is more complex and needs even better customization. Shutdown and startup conditionals and/or delays would be ideal.

     

    As an example, for my nextcloud stack I'd like nc to be stopped, wait for it to close completely, stop collabora, then stop mariadb. Back up all three. Start mariadb, start collabora, wait for those to be ready to accept connections, then start nextcloud.

     

    The arr stack is even more complex. The arrs and yt-dl need to be stopped, then the nzb client, then the torrent client and VPN. Startup should be exactly the reverse, with ping conditionals being ideal and blind delays acceptable; a rough sketch of that ordering follows below.
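
    If you script it yourself in the meantime, the ordering plus readiness checks might look something like this (container names and ports are assumptions based on common defaults, not anything Unraid ships):

    ```python
    import socket
    import subprocess
    import time

    def wait_for_port(host, port, timeout=60):
        # A simple "ping conditional": poll until the service accepts TCP connections.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return
            except OSError:
                time.sleep(2)
        raise TimeoutError(f"{host}:{port} never became ready")

    # Stop in dependency order, then back up.
    for name in ["nextcloud", "collabora", "mariadb"]:
        subprocess.run(["docker", "stop", name], check=True)

    # ... back up all three appdata shares here ...

    # Start in reverse order, waiting for each dependency to be ready.
    subprocess.run(["docker", "start", "mariadb"], check=True)
    wait_for_port("mariadb", 3306)    # MariaDB default port
    subprocess.run(["docker", "start", "collabora"], check=True)
    wait_for_port("collabora", 9980)  # Collabora default port
    subprocess.run(["docker", "start", "nextcloud"], check=True)
    ```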

  7. 2 minutes ago, Harblar said:

    Any recommendations on boards that WILL run with all slots filled?

    Sorry, I didn't mean to imply that there are properly working boards that won't run with all slots full. If the manufacturer says their board will run with model XXXX RAM, it should run it fine, but that doesn't mean boards can't fail.

     

    I just wanted to let you know that this could be a failure symptom: you can have a board where all the slots are fine and all the DIMMs are fine, but running all 4 at once isn't.

     

    I personally had a board that ran fine with all 4 DIMMs for years, until it didn't. The only failure mode was random errors when all 4 slots were full; it ran perfectly on any 2 of the DIMMs, but put all 4 in and memtest would fail every time.

  8. 48 minutes ago, Harblar said:

    Memtest86 errors/failed with all 4 sticks installed on test 3-4 of a single pass. So far I've tested each individual stick in the A2 slot and all have successfully completed a single pass with no errors. So... the memory itself might not be bad.

    Some motherboards just won't run with all slots filled.

  9. 2 hours ago, Kilrah said:

    hdplex stuff is usually good. Note only 10A on the 5V rail and single 4-pin for SATA power so you probably don't want more than maybe 5 drives on there.

    Yeah, but they hawk the ability to easily daisy-chain them in the same system, and even have pinouts and diagrams to show how. I can see how stacking these as you add drives could be a good way to go, assuming they work as promised.

  10. 2 minutes ago, csimpson said:

    I have Unraid running off a 16GB USB stick currently. 

    It's more correct to think of the USB stick as firmware with space for storing changed settings. Unraid loads into and runs from RAM; it only touches the USB stick when you change settings.

     

    Container appdata and executables should live on an SSD (or multiple SSDs for redundancy), separate from the main Unraid storage array. Legacy documentation and videos refer to that storage space as "cache"; now it's more properly referred to as a "pool", of which you can create as many as make sense for the desired speed and redundancy.

  11. 2 hours ago, csimpson said:

    However, if I add another 4TB drive, (not parity) would I then have 12TB of protected data?

    The math doesn't add up for me.

    Hoopster summed it up quite well, but I wanted to stick my .02 into the discussion to hopefully clear this up a little more.

     

    Parity doesn't hold any data. Period. It's not a backup. Period.

     

    It contains the missing bit in the equation formed by adding up the bits at each address across the drives. Pick any arbitrary data offset: say drive1 has a 0, drive2 has a 1, drive3 has a 1, and drive4 has a 1, so parity would need to be a 1 to make the column add up to 0 (even). Remove any SINGLE drive, do the math to make the equation 0 again, and you know what bit belongs in that column for the missing drive.

     

    So, you can protect ANY number of drives, and as long as you only lose 1 drive, the rest of the drives PLUS PARITY can recreate that ONE missing drive. Lose 2 drives, and you lose the content of both, but since Unraid doesn't stripe across drives, you only lose the failed drives.
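
    Since it's just XOR bit for bit, here's a toy sketch (one bit per drive at a single offset; real parity runs across every bit position on every disk):

    ```python
    # One bit per drive at some arbitrary offset: drive1..drive4.
    drives = [0, 1, 1, 1]

    # Parity is the XOR of the column, making it "add up to 0" (even parity).
    parity = 0
    for d in drives:
        parity ^= d          # parity ends up 1 here

    # Lose any ONE drive: XOR of the survivors plus parity recreates it.
    lost = drives.pop(2)     # simulate a failed drive
    rebuilt = parity
    for d in drives:
        rebuilt ^= d
    assert rebuilt == lost   # the missing bit is recovered
    ```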

     

    Unraid has the capability to use two parity drives, so you can recover from 2 simultaneous failures. However, the second parity is a much more complex math equation that takes into account which position the drives are in, so it's a little more computationally intensive. The extra math is trivial for almost all modern processors.

  12. 21 hours ago, tejasgadhia said:

    it doesn't appear any new files are being written to the drive, so I assume Unraid knows that the drive is not healthy.

    That's not a thing. Unraid will quite happily continue to use a disk slot (emulated from parity) even if the drive fails a write and is disabled.

  13. 1 minute ago, Inch said:

    I've tried plain bridge mode and custom network with the same results. I've just switched back to bridge, re-added the sonarr port (specified 8989 in both) and it still doesn't work, yet Radarr still does.

    Strange. I'm out of things to try at this point. Maybe someone else will have some ideas.
