SpaceInvaderOne
Everything posted by SpaceInvaderOne

  1. Was working in Unraid 6.10 rc2, but sadly this no longer works in Unraid 6.10 rc3, due to the update of either libvirt to 7.10.0 or QEMU to 6.2.0. This error now happens. It is also reported here https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1838418.html
  2. This video shows two methods to shrink the Unraid array: one where you remove a drive and then rebuild parity, and a second where the drive is first zeroed and then removed, preserving parity.
  3. Ok, I think what happened for you is this. You uninstalled Macinabox and its appdata, getting rid of the container and related files. However, the VM template was still there. What the helper script does is first check whether a folder called autoinstall exists in the appdata. This contains the newly generated VM XML. If it is present, the script attempts to define the VM from the template in that folder, then deletes the autoinstall folder and exits. So, as it said the VM was already present, it couldn't define the VM and just exited. It didn't replace your existing template, and neither did it run the fixes on it. The reason I think it stops and goes no further on the Apple logo is that your existing template was missing this at the bottom. Running the helper script a second time would then fix this XML, adding it back, as the autoinstall folder wouldn't be there now. I hope this makes sense
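The first-run check described above can be sketched like this. This is a minimal sketch, not the actual helper script: the folder layout and file name are assumptions, and `define` stands in for `virsh define`:

```python
import shutil
from pathlib import Path

def helper_first_pass(appdata: Path, define) -> bool:
    """Sketch of the helper script's first check (names are assumptions).

    `define` stands in for `virsh define <xml>`; on a real system it fails
    harmlessly when a VM with the same name already exists, leaving the
    old template untouched. Returns True if the autoinstall folder was
    found and handled, False if the script should fall through to fixing
    an existing VM's XML.
    """
    autoinstall = appdata / "autoinstall"
    if not autoinstall.is_dir():
        return False
    # Attempt to define the freshly generated VM from the template folder.
    define(autoinstall / "macinabox.xml")
    shutil.rmtree(autoinstall)  # the folder is removed either way
    return True
```

The point is that once the autoinstall folder is gone, a second run falls through and patches the existing template instead of exiting early.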
  4. Catalina XML is now fixed and I have pushed an update. If you update the container and run the Catalina install, it should be okay
  5. Um, yeah, thanks for pointing that out. The problem is that the Unraid VM manager, on an update made from the GUI (not an XML edit), will automatically change the NICs to be on bus 0x01, hence the address becomes <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> I will take a look at this and perhaps add something to the Macinabox helper script to fix it.
  6. Finally the new Macinabox is ready. Sorry for the delay; work has been a F******G B*****D lately, taking all my time. It also has a new template, so please make sure your template is updated too (but it will work with the old template). A few new things have been added.

Now has support for Monterey, Big Sur, Catalina, Mojave and High Sierra. You will see more options in the new template. As well as being able to choose the vdisk size for the install, you can also choose whether the VM is created with a raw or qcow2 (my favourite!) vdisk.

The latest version of OpenCore (0.7.7) is in this release. I will try to update the container regularly with new versions as they come. However, you will notice a new option in the template where you can choose to install with the stock OpenCore (in the container) or use a custom one. You can add this in the custom_opencore folder in the Macinabox appdata folder. You can download versions to put here from https://github.com/thenickdude/KVM-Opencore/releases Choose the .gz version from there, place it in the above folder and set the template to custom, and it will use that (useful if I am slow in updating!! 🤣). Note: if set to custom but Macinabox can't find a custom OpenCore to unzip in this folder, it will use the stock one.

There is also another option to delete and replace the existing OpenCore image that your VM is using. Set this to yes and run the container, and it will remove the OpenCore image from the macOS version selected in the template and replace it with a fresh one, stock or custom.

By default the NICs are virtio for Monterey and Big Sur, and the vDisk bus is virtio for these too. High Sierra, Mojave and Catalina use a SATA vDisk bus, and they use e1000-82545em for their NICs. The correct NIC type for the "flavour" of OS you choose will automatically be added. However, if for any macOS you want to override the NIC type, you can change the default NIC type in the template between virtio, virtio-net, e1000-82545em and vmxnet3.

By default the NIC for all VMs is on <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> This will make the network adapter seem built in and should help with Apple services.

Make sure to delete the Macinabox helper script before running the new Macinabox so the new script is put in place, as there are some changes in that script too. I should be making some other changes in the next few weeks, but that's all for now
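As a rough illustration of what pinning the NIC to that PCI address involves, here is a minimal sketch that patches a libvirt domain XML with Python's `xml.etree`. The XML fragment is a cut-down example, not a full Macinabox template, and this is not the actual helper-script code:

```python
import xml.etree.ElementTree as ET

def pin_nic_address(domain_xml: str) -> str:
    """Set every <interface> PCI address to bus 0x01 so macOS sees the
    NIC as built in. Sketch only; a real template needs each device on
    a unique address."""
    root = ET.fromstring(domain_xml)
    for iface in root.findall("./devices/interface"):
        addr = iface.find("address")
        if addr is None:
            addr = ET.SubElement(iface, "address")
        # The address post 6 above says is used by default:
        addr.set("type", "pci")
        addr.set("domain", "0x0000")
        addr.set("bus", "0x01")
        addr.set("slot", "0x00")
        addr.set("function", "0x0")
    return ET.tostring(root, encoding="unicode")

# Cut-down example domain with the NIC on the wrong bus:
example = """<domain>
  <devices>
    <interface type='bridge'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </interface>
  </devices>
</domain>"""
patched = pin_nic_address(example)
```

On Unraid you would normally make this change by hand in the VM's XML view rather than with a script.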
  7. Should be ready, I think, for Wednesday this week
  8. I am in the process of remaking Macinabox and adding some new features, and I hope to have it finished by next weekend. I am sorry for the lack of updates recently on this container. Thank you @ghost82 for all you have done in answering questions here and on GitHub, and sorry I haven't reached out to you before.
  9. A container to easily download the Windows 11 preview ISO directly from Microsoft and put it in the Unraid ISO share
  10. Hi @cbeitel, I have read your PM but thought I would reply here in the thread. So you have an ESXi box running with VMs on it, and you want to set up a VM on Unraid running ESXi with all the VMs from the original ESXi running as nested virtualised VMs, so you don't have to set everything up again. This would be very inefficient. You should migrate the ESXi VMs to Unraid without using a virtual ESXi on Unraid. The vdisks will work as-is on Unraid, so you need only copy them over and then set up a VM pointing to the vdisks copied from the ESXi. Make sure to choose a similar config to the ESXi VMs, i.e. if you are using legacy boot for the VM on ESXi then choose SeaBIOS on Unraid; if using UEFI then choose OVMF. I know you are worried because the Windows VMs are activated. To be sure it goes smoothly, you will need to get the UUID from the VM and make it the same on Unraid. You can get that from ESXi, or just boot the VM on ESXi, open a command prompt and type wmic csproduct get UUID This will give you the UUID. Then on Unraid you will need to edit the VM template's XML to change the UUID to match the original, so you would just change line 4 here. As for your Windows gaming VM, my advice is to set this VM up not using the default settings in the template. Choose the Q35 chipset, not i440fx, and make sure to use a vbios for your GPU; you can try my vbios dump script to get one, or edit one from TechPowerUp. https://youtu.be/FWn6OCWl63o Also make sure to put all parts of the GPU on the same bus/slot (i.e. the graphics, sound and USB, if the GPU has those) https://youtu.be/QlTVANDndpM
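The UUID swap described above can be sketched like this. The UUID value and the example XML are made up for illustration; on Unraid you would normally just edit the <uuid> line in the VM's XML view:

```python
import xml.etree.ElementTree as ET

def set_vm_uuid(domain_xml: str, uuid: str) -> str:
    """Replace the <uuid> in a libvirt domain XML so Windows activation
    sees the same machine it was activated on. Sketch only."""
    root = ET.fromstring(domain_xml)
    node = root.find("uuid")
    if node is None:
        node = ET.SubElement(root, "uuid")
    node.text = uuid  # the value reported by: wmic csproduct get UUID
    return ET.tostring(root, encoding="unicode")

# Cut-down example template; both UUIDs here are made up:
example = """<domain type='kvm'>
  <name>Windows10</name>
  <uuid>11111111-2222-3333-4444-555555555555</uuid>
</domain>"""
patched = set_vm_uuid(example, "564d9f11-a6a2-4bb8-8f66-123456789abc")
```

The same edit by hand is just replacing the text between `<uuid>` and `</uuid>` near the top of the template.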
  11. I don't think that with an SSD a reallocated sector is as bad as on a mechanical drive; I believe it is a block that has failed to be erased and has been replaced by one from the reserve, of which there are many (but @JorgeB will know better than me). Even so, if it were me I would probably replace the cache drive, because of this reallocated sector and the fact it is quite old anyway. Power-on hours are 27,627, so about 3 years old, and it has written a lot of data: 383,850 GB, or about 384 TB.
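For reference, the age and write figures above work out like this (plain unit arithmetic, decimal units assumed):

```python
# Convert the SMART figures quoted above (decimal units assumed).
power_on_hours = 27627
years = power_on_hours / (24 * 365)  # ≈ 3.15 years powered on

written_gb = 383850
written_tb = written_gb / 1000       # ≈ 383.85 TB written
```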
  12. I would guess that you removed the disk and, when you started the array, maybe you had the 'Parity is valid' box checked, and then afterwards ran a manual parity check. If so, the parity check will come up with many errors, as the original parity would be incorrect. I would just stop the parity check. Stop the array, then go to Tools > New Config. Select preserve all, then click apply. This will keep all the disks in the same place. Then go back to the Main tab and double-check that the disks there are correct. Then start the array, making sure not to check the 'Parity is valid' box. A new parity sync should then start automatically.
  13. Containers like Sonarr, Radarr, Lidarr etc. tend to work better using a custom network to communicate with each other. However, as the test connection passes when you click it, this may not fix your problem, but it's worth a try anyway. Some people find that after Sonarr and Radarr have been working fine connecting through the server IP address and port number, this stops working. So here are the steps to put them on a custom network so they can talk through name resolution. Make sure that in Settings > Docker you have 'Preserve user defined networks' enabled (you will need to stop the Docker service to change this setting). You will need a custom Docker network. To create this, go to the web terminal and type docker network create proxynet (the network can be called whatever you like; it doesn't need to be called proxynet, that is just what I call mine). Now you must change NZBGet, Sonarr, Radarr and any other containers that connect to NZBGet to use this network. So go to each Docker template and change the network type to the one you created above. Once all relevant containers are on this network, they can communicate with each other using the name of the container rather than an IP address. So in Sonarr, Radarr etc., go to Download Clients and change the host to the name of the NZBGet container (mine is called nzbgetvpn). You still need the port number as before. Now click test and it should come back all good
  14. I really like the Qotom mini PCs with 4 Intel NICs; there are i3, i5 and i7 versions https://www.aliexpress.com/item/32864883139.html
  15. The log says that the SSL certs are not being created because it can't verify the subdomains. You are using HTTP verification, so it checks that the subdomain, i.e. radoncloud.yourdomain.com, resolves back to SWAG through port 80, and this is not happening. I can see you have set up a port forwarding rule, but some ISPs block port 80 on home connections. I would suggest moving your domain to Cloudflare and then using DNS verification rather than HTTP. See this video for how to add your domain to Cloudflare https://youtu.be/y4UdsDULZDg And here for how to set up DNS verification with Cloudflare and Let's Encrypt (SWAG) https://youtu.be/AS0HydTEuA4
  16. Glad you have it working. By the way, SI1 and TAFKA Gridrunner are the same person. My original username on the forums many years ago was gridrunner, hence the tag TAFKA Gridrunner!! 😉 If you want to be able to have no monitor plugged in, you will need an HDMI or DisplayPort dummy plug like here https://amzn.to/3mH5e3I
  17. Um, I am not sure if the Nvidia drivers would be a problem with the one GPU. As they are loaded before the GPU is passed through, maybe they could be. Try uninstalling the drivers and trying again. If it still doesn't work, go to Tools > System Devices, stub the P2000 (both its graphics and sound parts) and reboot. You mention Splashtop showing a black screen. Can you test with a monitor directly connected to the GPU whilst testing?
  18. Are you booting the server by legacy boot or UEFI? You can check this on Main > Flash at the bottom of the page. I find that booting legacy works better for me with passthrough, so if you are not booting legacy you could try that. Also you can try adding video=efifb:off to your syslinux config file. So, for example,
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
with it added would be
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
Hope this may help
  19. Ah okay, ZenStates in the go file isn't necessary for your CPU
  20. What CPU is in your server? If Ryzen, do you have the ZenStates script in your go file?
  21. It's very hard to say without more info. Please post the XML of the VM here. Try a different machine type, i.e. if using i440fx try Q35, and vice versa
  22. I thought I had posted here about the update a few days ago, but it seems I didn't! Well, I pushed out a fix for Macinabox over Easter and now it will pull Big Sur correctly, so if you update the container all should be good. Now both download methods in the Docker template will pull Big Sur. Method 1 is quicker, as it downloads the base image, as opposed to method 2, which pulls the InstallAssistant.pkg and extracts that.