Posts posted by can4d
-
Holy sh****t, this fixed me. Thank you for the post.
-
Trying now. Thank you for that. I will report back if successful.
-
Ok, let me just start with the fact that I have read as much as I could for the past three days. I attempted to verify the issue is not with my hardware, modified the UEFI config for the proper settings, and modified the syslinux configuration, yet still no love. Not sure what I am doing wrong, or whether I am totally off base with my plan.
END GOAL
- Assign my new Radeon RX 6600 to my vms to run advanced graphics software
SPEC
- MB: ASRock B550 Phantom Gaming-ITX/ax
- Processor: Ryzen 7 3700
- RAM: 32GB
- GPU: AMD RADEON RX 6600
- UNRAID: Version: 6.11.5
I only have a single PCIe x16 slot for the card. The MB has onboard video outputs. I only remote into my VMs; no display is physically connected.
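Before touching the VM settings, it may help to confirm the kernel actually populated any IOMMU groups. A minimal sketch from the Unraid terminal (the sysfs path is standard on Linux; the count will vary per machine):

```shell
# Count the IOMMU groups the kernel has exposed. Zero usually means the IOMMU
# is disabled in firmware, or it was not enabled on the kernel command line.
groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
echo "IOMMU groups found: $groups"
```

If this prints zero, the fix is usually in the BIOS: on many AMD boards, IOMMU is a separate toggle from SVM (often under AMD CBS or NBIO settings), and enabling SVM alone is not enough.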
NOTICEABLE FLAGS
- All VMs show "Virtual" as the graphics card
- Any new VM config only offers "Virtual"
- No IOMMU groupings actually show.
BIOS/UEFI
- SVM Mode: Enabled (AMD-V)
- AMD fTPM Switch: Enabled
What am I missing here? Any help or pearls of wisdom for this Unraid grasshopper would be greatly appreciated.
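For reference, the kind of syslinux change usually suggested for AMD passthrough looks like the following. This is only a sketch: the vfio-pci IDs shown are examples for an RX 6600 and its HDMI audio function, so verify the actual IDs on your system with `lspci -nn` before using them.

```
# /boot/syslinux/syslinux.cfg -- example append line only.
# amd_iommu=on iommu=pt enables the IOMMU in passthrough mode;
# vfio-pci.ids binds the GPU and its HDMI audio function to vfio at boot.
# 1002:73ff / 1002:ab28 are example IDs -- confirm with: lspci -nn
label Unraid OS
  kernel /bzimage
  append amd_iommu=on iommu=pt vfio-pci.ids=1002:73ff,1002:ab28 initrd=/bzroot
```

On recent Unraid versions the same binding can also be done from Tools > System Devices instead of editing syslinux by hand.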
-
On 2/11/2023 at 12:28 PM, trurl said:
To summarize what you need to fix
Set appdata and system shares to prefer a pool named cache_ssd, set domains share to prefer a pool named vms.
Nothing can move open files, so you have to disable Docker and VM Manager in Settings and then run Mover to get these moved to their designated pools.
All done. Everything seems like it was. One VM didn't require recreating/reinstalling; I just added it, referenced the existing disk image, and it picked up where it left off. The others I had to re-add without issue.
One question: on my pools, do I need to add a minimum of 3 drives so one can act as a backup?
Thank you again for all of your help. I won't post off-topic questions here anymore. Very much appreciated.
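On the 3-drive question above: a two-device btrfs raid1 pool already keeps two copies of every block, so a third drive is not required for redundancy; the trade-off is that usable capacity is roughly half the raw total. A rough sketch of the arithmetic, assuming two 500GB members:

```shell
# btrfs raid1 writes every block to two devices, so with two equal members the
# usable capacity is about half the raw total (metadata overhead ignored).
d1=500; d2=500                     # GB, assumed pool member sizes
raw=$(( d1 + d2 ))
usable=$(( raw / 2 ))
echo "raw: ${raw} GB, usable: ${usable} GB"
```

A third device would add capacity, not an extra layer of backup; redundancy stays at two copies unless the raid profile is changed.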
-
8 hours ago, JorgeB said:
Last diags show VM service disabled, enable it and post new diags if the VMs don't come back.
VMs are enabled but they still do not show.
Parity Check completed with zero errors.
Diagnostics are posted.
I definitely think my issues stem from improper use of cache/pools.
-
Only thing I noticed is I lost my previous virtual machines, but I can live with that as I wasn't attached to them anyway. I can redo those. If there is a way to re-add them, great, but no loss otherwise.
Started a new parity check.
-
4 minutes ago, trurl said:
It was mounting without the other drive before. And you can remove a disk from raid1 pool and it will still work. Not sure why the other didn't mount before though. That's why I'm asking for help
ok.
-
If both 500GB drives were originally a pool, don't they both have to be added to re-make the pool? And is that why just one of them added is not mounting?
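As background to the question above: a btrfs raid1 pool with a missing member generally refuses a normal mount but can be mounted once with the degraded option. A hedged sketch (the device path /dev/sdX1 is a placeholder; on a machine without that device, blkid simply reports nothing):

```shell
# Probe the filesystem type of the (placeholder) pool member. If it is btrfs,
# a one-off degraded mount is possible when the second member is absent:
#   mount -o degraded /dev/sdX1 /mnt/cache_ssd    # illustration only
fstype=$(blkid -o value -s TYPE /dev/sdX1 2>/dev/null)
echo "detected filesystem: ${fstype:-none}"
```

In Unraid itself this is normally handled through the pool assignment in the GUI rather than by mounting manually.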
-
19 minutes ago, trurl said:
Stop the array. Add a pool named cache_ssd. Assign that disk as the first disk in that pool. Start the array.
DO NOT format anything.
Then post new diagnostics.
Done.
-
14 minutes ago, trurl said:
"Stop the array. Add a pool named cache_ssd. Assign that disk as the first disk in that pool. Start the array."
Is it important to set the number of slots > 1 for the pool at creation, or can I add to the pool as needed?
-
I will wait for how to proceed when you're available. I will not make any changes.
-
Thank you very much for your guidance and support. I am starting to feel much calmer, with glimpses of the end of a dark tunnel.
-
Running now. Roughly 0.2% done, with an estimated completion of 8 hours. Ugh.
-
I have the menacing alert "all existing data will be overwritten when I start" for the parity drive (sdc).
-
6 minutes ago, trurl said:
OK. New Config, assign other spinner as parity.
"Spinner" meaning the other 6TB drive that was not assigned as data, correct?
-
5 minutes ago, trurl said:
Leave Docker and VM Manager disabled until further notice.
New Config, assign only disk1, leave all others unassigned, start the array and post new diagnostics.
Done.
-
16 minutes ago, trurl said:
The only user share on the nvme was isos. isos was also on disk1 (and its mirror). No idea what that nvme pool was named unless it was referred to directly by one of your VMs. Do you know?
I do not. My original thinking, I believe, was to pool the two 500GB drives and use the NVMe as parity. I really wanted to run VMs on SSDs with backup protection; I suppose I didn't achieve that. As far as pools, I only remember cache_ssd.
No gpu passthrough despite hardware
in VM Engine (KVM)
Posted · Edited by can4d
YOU'RE A FREAKING GENIUS. THANK YOU. THANK YOU. WORKED.