thaoggamer

Members
  • Posts: 5
  • Joined
  • Last visited

thaoggamer's Achievements

Noob (1/14)

Reputation: 0

Community Answers: 2

  1. OK, so I finally got it working... not exactly sure what was wrong. I deleted all the existing files/templates (again), removed Docker, removed all my existing VMs, and then disabled both Docker and the Virtual Machine Manager. I then rebooted the unRAID server and re-installed everything (Docker/VM Manager). I got all my other VMs up and running with no issue, then ran Macinabox and everything worked as per spaceinvaderone's video guide. I'd love to figure out why it wasn't working, but... the system is up, so... I've cancelled that idea.
  2. I have been trying to get a Mac VM running using Macinabox. Initially, when I booted into the Mac VM, it would crash with an error. I subsequently tried multiple re-installs of Macinabox and the VM custom icons, with the VM straight up failing to boot and giving the error: "operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd". Today I spent time deleting all "docker" references in the various mount locations, then rebooted the server and re-installed Macinabox and the VM icons. The VM is now created, but when I start it up it presents the boot options; after I select the boot drive, the VM gets stuck on the page displaying the Apple logo. I don't know what is wrong. I modified the CPU and RAM allocation and then re-ran the helper script, which gave me this output... The VM is still stuck on the Apple logo screen after selecting the boot disk at start-up. I run my VMs and dockers off my cache drive. When I edited the docker settings to point to /mnt/cache/appdata, the VM wouldn't start at all and displayed the error "operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd". Any help figuring this out would be appreciated.
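In case it helps anyone hitting the same "master var store" error: that message usually means libvirt found the CODE (loader) file but could not pair it with a matching VARS (NVRAM) file. A quick sketch to check that both files exist, using the directory from the error message; the VARS filename is my assumption based on Macinabox's usual CODE/VARS naming, so adjust it if yours differs:

```shell
#!/bin/sh
# Directory taken from the error message; the VARS filename is assumed
# from Macinabox's usual CODE/VARS pairing - adjust if yours differs.
OVMF_DIR="/mnt/user/system/custom_ovmf"

for f in "$OVMF_DIR/Macinabox_CODE-pure-efi.fd" \
         "$OVMF_DIR/Macinabox_VARS-pure-efi.fd"; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "MISSING: $f"
  fi
done
```

If the VARS file is the one missing, re-running the Macinabox helper script (or restoring the custom_ovmf folder) is usually enough to recreate it.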
  3. Just migrated my unRAID server from an Intel-based 8700K to an AMD-based 5900X + nVidia GTX 970. No issues with the actual migration (I disabled Docker and the VM Manager, disabled auto-start, and then just re-enabled everything afterwards). The only thing that is "odd" is that I keep getting this error: "...kernel: NVRM: GPU 0000:0b:00.0: request_irq() failed (-22) error". I'm assuming it's some kind of IRQ error (an interrupt?), but I have no idea. The GTX 970 and its corresponding HDMI audio component are in their own dedicated IOMMU group, so I don't believe that is the issue (but that is pure speculation). In the nVidia driver package (under Settings) I have Production Branch v535.129.03 selected, but on the left-hand side of that same window, under "Installed GPU(s)", it says "No devices were found"? Another thing I noticed is that when the server reboots, it seems to install the nVidia driver every time. Is this how it's supposed to happen? From my other Linux experiences, the install only happens once and the driver is simply loaded on restart/power-on, but that does not appear to be the case here. Any idea what is happening? I've linked a diagnostics file if it helps at all. Thanks in advance. galactica-diagnostics-20231119-1512.zip
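For anyone who wants to double-check the isolation mentioned above, this is the common sysfs-walking snippet to print every IOMMU group and the devices in it (standard Linux paths; lspci comes from pciutils and may need installing on a bare system):

```shell
#!/bin/sh
# Walk sysfs and print each IOMMU group with the PCI devices in it.
# For clean passthrough, the GPU (0b:00.0) and its HDMI audio (0b:00.1)
# should share a group containing nothing else.
if [ -d /sys/kernel/iommu_groups ]; then
  for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
      addr="${dev##*/}"
      if command -v lspci >/dev/null 2>&1; then
        echo "  $(lspci -nns "$addr")"   # show vendor/device IDs too
      else
        echo "  $addr"                   # fall back to the bare PCI address
      fi
    done
  done
else
  echo "kernel did not expose IOMMU groups (IOMMU disabled or unsupported)"
fi
```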
  4. OK, so after a little more tinkering I found remnants of the appdata/domains/system folders on a number of the array disks. I deleted these and the power management settings now work fine. Only disks actively being read from or written to are now powered on.
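For anyone chasing the same thing, the leftovers can be located without clicking through each disk in the GUI; a small sketch, assuming unRAID's standard per-disk mounts at /mnt/disk1, /mnt/disk2, and so on:

```shell
#!/bin/sh
# Look on each individual array disk for copies of the shares that
# should live only on cache; any path printed here is a leftover that
# can keep that disk spun up.
for share in appdata domains system; do
  find /mnt/disk[0-9]* -maxdepth 1 -name "$share" -print 2>/dev/null || true
done
```

Anything the loop prints is a candidate for cleanup (after verifying it holds no data you need).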
  5. I've been trying to figure out why my disks don't spin down after the set time period of 1 hour. After some research and investigation, I discovered that I had appdata/domains/system set up to run off cache as primary and the array as secondary, so it seemed that's why the disks never spun down (admittedly, that still didn't explain why the 4 disks with nothing on them never spun down either). So I moved appdata/domains/system to cache and then set them to ONLY use cache - I saw a number of posts about setting "prefer", but I don't have this option anywhere (running v6.12.4). Following this, I was able to manually spin down the disks and they would stay that way, and it seemed they also spun down after the 1-hour period I set. Today, however, I noticed that the drives are still spun up, and when I manually spin them down, they do so initially but then spin up again a few minutes later despite no data being accessed on the "storage" drives. I have 2 dockers and 1 VM running and, as per the above, these are (or should be, as far as I can tell) running off my cache. I'm stumped, though; I can't figure out why it worked and then stopped working within the space of 24 hours. I've attached my diagnostics file if it's of any use. Any help would be greatly appreciated!!! galactica-diagnostics-20231114-2224.zip
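One way to narrow this down is to see what is actually holding files open on the array disks when they spin back up. A rough sketch, assuming the standard /mnt/diskN mounts and that lsof is installed (e.g. via the NerdTools plugin; fuser is the usual fallback):

```shell
#!/bin/sh
# Print open files on each array disk; a Docker layer or VM image
# showing up here would explain the disks spinning back up.
for disk in /mnt/disk[0-9]*; do
  [ -d "$disk" ] || continue                   # skip if the glob matched nothing
  echo "== $disk =="
  if command -v lsof >/dev/null 2>&1; then
    lsof +D "$disk" 2>/dev/null | head -n 20   # +D scans recursively (can be slow)
  else
    echo "lsof not installed; try: fuser -vm $disk"
  fi
done
```

Running this right after a disk unexpectedly spins up tends to be the most telling moment to catch the culprit process.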