
Version 6.8.3 Having trouble with my VM and cache setup


Recommended Posts

Hi, this is my first Unraid build and I'm having trouble understanding some basic configuration issues.

I've added a 10TB parity, 6 drives, and 2 SSD parity drives.

 

I got a VM installed and thought I set it to only use the M.2 drive for performance. But I've been playing around with it for a couple of weeks and it looks like everything is saving to the cache drives. That's problem 1.

 

The second problem is that files on the cache drives aren't moving to the array. I tried invoking the mover but it didn't seem to do anything. I suspect it's just something I'm not understanding about the configuration.

 

There are a few more things I want to get set up, but these are the first two big issues. I've attached diagnostics captured right after invoking the mover. Hopefully that helps?

alcatraz-diagnostics-20200904-1424.zip

Edited by What's_a_Computer?
Added note about attachment.
25 minutes ago, What's_a_Computer? said:

2 SSD parity drives

Looks like you mean 2 SSD cache drives.

 

25 minutes ago, What's_a_Computer? said:

I got a VM installed and thought I set it to only use the M.2 drive for performance.

You're not going to get SSD performance out of a drive that's in the parity array, because every array write also has to update parity, so writes are always limited by the speed of the HDD parity disk.

 

If you really want to treat that disk separately, you should take advantage of the multiple pools feature in the latest beta.

 

28 minutes ago, What's_a_Computer? said:

The second problem is the cache drives aren't moving to the array. I tried invoking the mover but it didn't seem to do anything.

The appdata, domains, and system shares are all on cache and set to stay on cache. That is the way you want them, so your dockers / VMs don't have their performance impacted by slower parity, and so dockers / VMs won't keep array disks spinning.

 

And, none of your other shares have any files on cache.

 

So, I would say you actually have all that just like it needs to be.
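If it helps to see where that setting lives, each share's "Use cache" choice is stored in a small .cfg file in the config folder on your flash drive. Something like this, if I remember the key right (values are a sketch, not pulled from your diagnostics, and "downloads" is just a hypothetical share):

# /boot/config/shares/appdata.cfg
shareUseCache="prefer"   # "prefer" / "only" keep the share on the cache pool

# /boot/config/shares/downloads.cfg
shareUseCache="yes"      # "yes" = new files land on cache, then mover moves them to the array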

 

 

Quote

Looks like you mean 2 SSD cache drives.

I sure did.

 

Quote

If you really want to treat that disk separately, you should take advantage of the multiple pools feature in the latest beta.

That makes sense now. Since my M.2 is part of the array, everything written to it is held back by parity, so I won't be getting any advantage from the M.2 speeds. I initially planned on having only the OS of the VM run off the M.2 and having everything else on the array; I figured this would give me the best possible performance for the VM. Is this only possible by creating multiple pools, or is there a different configuration you'd recommend?


If I do need to upgrade to the newest beta, is that something I can do without reinstalling Unraid? I ask because I don't want to re-create the parity drive and all of that. It was a struggle getting everything running in the first place and I'd rather not go through all of that again.

 

Quote

The appdata, domains, and system shares are all on cache and set to stay on cache. That is the way you want them, so your dockers / VMs don't have their performance impacted by slower parity, and so dockers / VMs won't keep array disks spinning.

So if I'm understanding that correctly, all the things the VM and the Docker containers need to run will stay on the cache drives. That doesn't mean, for instance, that the files downloaded by the SABnzbd Docker will always stay on the cache drives, right?

 

I think that must also mean that Steam downloads from within the VM are saved on the cache permanently as well. I should create a share for my Steam games then, since from my understanding of the Linus Tech Tips video I won't see much performance gain from having the game files on the M.2. Does that logic check out?

 

 

Thanks a lot for the help! I have very little experience with networks and Linux.

1 hour ago, What's_a_Computer? said:

Is this only possible by creating multiple pools or is there a different configuration you'd recommend?

Before the 6.9 betas, people usually handled this with the Unassigned Devices plugin, and that will still work for now, though in some ways multiple pools are simpler and more flexible. I have a 2x500GB cache pool which I use for caching user share writes, and a 1x256GB NVMe "fast" pool for my dockers/VMs (the appdata, domains, and system shares).

 

Here is what those shares are normally used for:

  • appdata - the "working storage" for each docker container, for example, the plex library (database).
  • domains - the vdisks for VMs. You can put some of this on the array if you want, or you can just put the OS vdisk in domains and use the virtual network to access Unraid storage for other purposes.
  • system - docker.img and libvirt.img; docker.img is the executable code for the containers, and libvirt.img holds the XML definitions for your VMs.

Dockers and VMs will also typically use other user shares on Unraid, which may be on the array or cache (or cached and then moved to the array). For example, your media for plex or downloads from whatever app would be on one or more user shares.
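To make that concrete, here is roughly how the paths end up laid out with pools like mine ("cache" and "fast" are my pool names; the share and vdisk names below are just made-up examples). Each pool mounts at /mnt/<poolname>, and user shares are the combined view at /mnt/user:

/mnt/cache/                              <- 2x500GB pool, catches new writes to cached user shares
/mnt/fast/appdata/                       <- docker working storage on the NVMe pool
/mnt/fast/domains/Windows10/vdisk1.img   <- the VM's OS vdisk
/mnt/fast/system/docker.img
/mnt/user/media/                         <- user shares, spread across the array and/or a pool
/mnt/user/downloads/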

 

1 hour ago, What's_a_Computer? said:

If I do need to upgrade to the newest beta, is that something I can do without reinstalling Unraid?

Yes, upgrading just replaces the Unraid OS archives on flash. Those archives are unpacked fresh from flash into RAM at each boot and the OS runs completely in RAM (think of it as firmware, except easier to work with). Your configuration (the config folder on flash) is independent of that upgrade. Sometimes new features might be a reason to change some things about your configuration, though, such as multiple pools.
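For reference, the flash drive layout is roughly this (from memory, so treat the exact file list loosely); the upgrade only swaps the bz* archives and leaves config/ alone:

/boot/bzimage                <- kernel, replaced by the upgrade
/boot/bzroot                 <- root filesystem archive, replaced by the upgrade
/boot/bzmodules, bzfirmware  <- also replaced by the upgrade
/boot/config/                <- your settings: super.dat (disk assignments), shares/, docker and VM config - not touched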


I updated to 6.9.0-beta25 and created a new pool with the M.2. But this leaves a blank drive in the array (disk 4). Then when I try to start the VM, it crashes Unraid and I need to manually reboot my machine.

 

Could it be crashing because of the stability of the release or is this more likely an issue with a blank drive in the array? I suspect it's the latter because even after reverting to the latest stable release it crashes the same way when I try to start the VM.

 

I also want to say it's great being able to talk to an expert; the support I've received from you has been top notch.

alcatraz-diagnostics-20200907-1849.zip


There is no requirement to have a disk4, but there is a requirement to have all the disks that parity was built with. You can't just remove a disk without rebuilding parity. Your former disk4 is being emulated by reading parity plus all the other disks and calculating its data. If that disk had no data you are interested in, you should just rebuild parity without it. As it is, you have no redundancy, because you have a missing disk and only single parity.
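If it helps to picture the emulation: single parity is just the XOR of all the data disks, so any one missing disk can be recalculated on the fly from parity plus the disks that are still present. A toy sketch of the idea in Python (one byte standing in for a whole disk; obviously not how the actual md driver is written):

# Parity is computed as the XOR of every data disk.
disk1, disk2, disk3, disk4 = 0x3A, 0x91, 0x5C, 0xE7
parity = disk1 ^ disk2 ^ disk3 ^ disk4

# With disk4 missing, every read of it is served by recomputing its data
# from parity plus the remaining disks - that is the "emulated" disk.
emulated_disk4 = parity ^ disk1 ^ disk2 ^ disk3
assert emulated_disk4 == disk4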

 

Not sure about the crashes, but have you seen this?

 

  • 2 weeks later...

I tried updating the BIOS and reverting back to the latest stable release (I'm back on 6.9.0-beta25 now) but I can't get the VM to start. I think what happened is the ISOs were on the M.2 drive I removed from the array, and that messed something up.

 

I wanted to try to re-add a Windows 10 ISO to my isos folder, but now when I try to connect to my network share from a separate Windows 10 machine I get this error (screenshot attached).
 

 

Do you think I'm on the right path at all? I've attached my diagnostic file if that helps.

alcatraz-diagnostics-20200917-1651.zip
