SpaceInvaderOne


Posts posted by SpaceInvaderOne

1. To anyone using EmbyServerBeta: there was an update about 8 hours ago which, for me, broke Emby with this error:

     

    Cannot get required symbol ENGINE_by_id from libssl
    Aborted
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
    
    ** Press ANY KEY to close this window ** 


Temporary fix: change the repository from

     

    emby/embyserver:beta


    to

    emby/embyserver:4.9.0.5


Emby will then start and run. In a few days, once a new version is up and the issue is fixed, we can just set the tag back to beta.
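For anyone running Emby outside of the Unraid template, the equivalent pin with plain Docker would look roughly like this (the container name and config path are just examples):

# pull the pinned tag and recreate the container on it
docker pull emby/embyserver:4.9.0.5
docker stop EmbyServerBeta && docker rm EmbyServerBeta
docker run -d --name EmbyServerBeta -p 8096:8096 -v /path/to/config:/config emby/embyserver:4.9.0.5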

     

  2. Hi
Yes, we would need you to install the mcelog tool and then post your diagnostics. mcelog is a Linux tool (which you can install from the Nerd Pack plugin on CA) used for logging and interpreting MCEs (Machine Check Exceptions). These exceptions are hardware errors reported by the CPU. Modern CPUs have built-in error detection; when they detect a problem, they generate an MCE. These errors include things like problems with the CPU itself, memory errors, bus errors, cache errors, and so on.
They can be:

Temporary errors - these are often corrected by the system (OS).

Intermittent errors - these occur every so often rather than all the time, so they are harder to diagnose.

Fatal errors - obviously more serious, because they can cause server crashes or even data corruption.

So when these errors are reported, it is good to find out what they are, as they can indicate potential hardware faults early, before they cause too much trouble.
However, not all errors mean something is bad. Some can be down to quirks of the CPU. If I remember correctly, certain AMD CPUs have been known to generate MCEs that can be considered harmless because they are just part of the processor's normal way of working, so they raise an MCE under normal conditions.

Also, the way the CPU's firmware or microcode is designed can lead to harmless MCEs being reported. Various motherboard BIOS settings can too, especially those related to power management and overclocking. So you may want to see if there are BIOS updates for your motherboard.

But yes, without the mcelog output it will not be possible to know what the errors are, as the tool makes them human readable (example commands below).
    I hope this helps
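Once it is installed from Nerd Pack, viewing the decoded errors from the terminal is roughly this (run as root; exact behaviour varies a little by kernel and setup):

# decode any machine check exceptions logged since boot
mcelog
# or, if the mcelog daemon is running, query it instead
mcelog --client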

  3. 12 hours ago, dopeytree said:

It was moving files within a share. So initially, sometimes Radarr sets films to download to /data/ rather than /data/media/movies.

     

/data is the dataset, right? It's not the root name of a drive. It's the hard links style of working.

     

     

Zpools in Unraid can get their names in two different ways:
1. When you create an independent zpool in Unraid, you name the pool and add the disks. The pool gets its name from whatever you name it.
2. However, if you format a drive that is part of your array as ZFS, then that drive, although part of the array, is also its own zpool. When done this way, the zpool name is taken from the disk number.

So assuming your disk6 is ZFS formatted, it is therefore a zpool, the pool name being "disk6".
A zpool can contain both regular folders and datasets, so the /data in disk6 could be either a dataset or just a regular folder.
But if it is a dataset, then yes, the dataset name would be data
(a dataset path in ZFS is poolname/dataset, so your ZFS path would be disk6/data).

    To see what datasets are in your disk6 (or any other Zpools) install the ZFS master plugin. Then you can see the datasets clearly on the main tab.

So if I understand correctly, you say you are using hard links. Hard links in ZFS do work, but with some limitations: they can only be created within a single dataset, not across datasets.
For example, within your disk6/data dataset, hard links can be made between files, functioning just like they would in a traditional filesystem. However, hard links cannot span different datasets in ZFS. This means you cannot create a hard link between a file in disk6/data and another in disk6/media.
This limitation is part of the ZFS design, which emphasizes data integrity and clear boundaries between datasets. Each dataset is basically an isolated filesystem in itself, which has advantages for management, snapshots, and data integrity. But a downside is that traditional filesystem features like hard links have these constraints. I hope this helps.
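To illustrate (the file names here are made up; the paths assume the disk6/data example above):

# list the datasets inside the disk6 pool
zfs list -r disk6
# a hard link inside one dataset works as normal
ln /mnt/disk6/data/movie.mkv /mnt/disk6/data/movie-link.mkv
# but across datasets it fails with "Invalid cross-device link"
ln /mnt/disk6/data/movie.mkv /mnt/disk6/media/movie-link.mkv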

  4. 5 minutes ago, PicPoc said:

    Nice ! Again a good & hard work !!!

What about the ability to install from an image on a USB stick?

    ;)

Hi there. Unfortunately, integrating the ability to install from a USB stick directly into Macinabox is not feasible due to the existing mechanisms and how the VM templates are generated. To utilise USB media for installation, a manual edit of a VM template to enable USB passthrough would be necessary. However, a simpler alternative, if you need to install the OS from USB, would be to convert your USB media to an image file. This would let you use the image from your USB without requiring direct passthrough of the physical device in the VM template.
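A rough sketch of that conversion from the Unraid terminal (sdX is a placeholder; check the device with lsblk first, as dd written to the wrong device is destructive):

# identify the usb stick
lsblk
# copy the whole device to an image file (replace sdX with your device)
dd if=/dev/sdX of=/mnt/user/isos/usb-install.img bs=4M status=progress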

5. New Macinabox is almost complete and should be out soon, hopefully by the end of next week or shortly thereafter.
It will have a few new features, such as Ventura and Sonoma support.
Also, the companion User Scripts will no longer be necessary; the container will do everything itself.
I also plan to add checks so the container can see whether your CPU has the correct features to run macOS, i.e. checking for AVX2 etc. (a quick manual check is sketched below).
    And a few other new things :)
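In the meantime, a quick manual check for AVX2 from the Unraid terminal would be something like:

# prints avx2 if the cpu advertises it, nothing if it doesn't
grep -o -m 1 avx2 /proc/cpuinfo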

     

6. Hi. I decided to install Huginn today and was pleasantly surprised to find it readily available in CA in the new apps; it had just been added today :) Thanks so much for this addition, I can't wait to start using it.

A quick note for anyone installing the Huginn container: you'll need to modify the permissions on the Huginn appdata folder so that Huginn can write to this location for its database etc. The easiest way I find is to use the Unraid file manager plugin to do this.
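If you'd rather do it from the terminal, Unraid containers usually run as nobody:users (99:100); adjust the path if your appdata share differs:

# give the container's user ownership of the huginn appdata folder
chown -R 99:100 /mnt/user/appdata/huginn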

  7. 21 hours ago, kftX said:

@SpaceInvaderOne Okay, so I'm unsure if I'm correct, but I thought I might as well report this:

     

The helper script to fix XMLs doesn't seem to be working? I initially got this to work, but it crashed my server because it was using the cores unRAID seems to like more. But everything was showing up: the OpenCore boot menu, and after the Apple logo the progress bar did show up.

So then I rebooted, edited the XML, and reran the script after changing RAM and CPU core allocations, and then all I got was a black screen with the REL date of OpenCore at the bottom. Pressing enter does bring up the Apple logo, but nothing else happened.

Thinking the server crash may have corrupted something, I uninstalled everything, removed all the Macinabox related files including the ones in usr/system, and did a clean reinstall of everything. This time, before booting up the VM, I did the RAM and CPU core allocations first, then ran the script to fix the XML, and still nothing.

     

    The only thing the script said was something about the VM already existing but I assume that's normal.

Ok, I think what happened for you is this:

You uninstalled Macinabox and its appdata, getting rid of the container and related files.

However, the VM template was still there.

     

What the helper script does is first check whether a folder called autoinstall exists in the appdata. This folder contains the newly generated VM XML. If it is present, the script attempts to define the VM from the template in that folder, then deletes the autoinstall folder and exits.

So, as it said the VM was already present, it couldn't define the VM and just exited. It didn't replace your existing template, and neither did it run the fixes on it.

The reason I think it stops and goes no further than the Apple logo is that your existing template was missing this at the bottom:

[screenshot: the XML section missing from the bottom of the template]

Running the helper script a second time would then fix this XML, adding the missing section back, as the autoinstall folder wouldn't be there any more.
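This is not the actual script, just a rough sketch of the flow described above (paths are illustrative):

# rough sketch only, not the real helper script
AUTO=/mnt/user/appdata/macinabox/autoinstall
if [ -d "$AUTO" ]; then
  # a vm with the same name already defined makes this step fail
  virsh define "$AUTO"/*.xml
  rm -rf "$AUTO"
  exit 0
fi
# no autoinstall folder, so apply the xml fixes to the existing template instead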

     

    I hope this makes sense

     

     

  8. 1 hour ago, EG-Thundy said:

So, about the "master var store" issue. I deleted Macinabox with the templates and everything, but I still keep getting it. Catalina is the only one that does this so far for me. Any kind of suggestion would be appreciated.

What I tried, knowing it wouldn't work, was to copy the same path that the working Big Sur install has for var. Once you do that, it starts giving an error about the directory not existing for opencore.img.

    Do you see the files in your system share?

[screenshot: the Macinabox files in the system share]

  9. On 1/31/2022 at 5:08 PM, EG-Thundy said:

Are you trying to install Catalina? I'm having the same issue on Catalina. Big Sur installed nicely, but I'm having this issue with Catalina. Also, the ISO default location is not used in the XML (it's looking for the ISO under /isos/macinabox catalina/, which it shouldn't), so I'm guessing something is off with the template.

The Catalina XML is now fixed and I have pushed an update. If you update the container and run the Catalina install, it should be okay.

     

10. Um, yeah, thanks for pointing that out.

     

The problem is that the Unraid VM manager, after an update made from a change in the GUI (not an XML edit), will automatically move the NICs to bus 0x01, hence the address becomes

     <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>

     

I will take a look at this and perhaps add something to the Macinabox helper script to fix it.
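To see what the GUI has done to the NIC after an edit, you can dump the live XML (the VM name here is just an example):

# show the interface block, including its pci address line
virsh dumpxml "Macinabox Monterey" | grep -A 8 '<interface'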

11. Finally the new Macinabox is ready. Sorry for the delay; work has been a F******G B*****D lately, taking all my time.

     

It also has a new template, so please make sure your template is updated too (though it will work with the old template).

     

    A few new things added.

    Now has support for

    Monterey

    BigSur

    Catalina

    Mojave

    High Sierra.

     

    You will see more options in the new template.

Now, as well as being able to choose the vdisk size for the install, you can also choose whether the VM is created with a raw or qcow2 (my favorite!) vdisk.

The latest version of OpenCore (0.7.7) is in this release. I will try to update the container regularly as new versions come out.

However, you will notice a new option in the template where you can choose to install with the stock OpenCore (shipped in the container) or use a custom one.
You can add a custom one to the custom_opencore folder in the Macinabox appdata folder. You can download versions to put there from

    https://github.com/thenickdude/KVM-Opencore/releases

Choose the .gz version from there, place it in the above folder, and set the template to custom, and it will use that (useful if I am slow in updating !! 🤣). Note that if the template is set to custom but Macinabox can't find a custom OpenCore to unzip in this folder, it will use the stock one.
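For example, from the Unraid terminal (the URL is a placeholder; copy the real download link for the current .gz asset from the releases page above):

# fetch the .gz release straight into the custom folder
wget -P /mnt/user/appdata/macinabox/custom_opencore "PASTE_GZ_RELEASE_URL_HERE"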

There is also an option to delete and replace the existing OpenCore image that your VM is using. Set this to yes and run the container, and it will remove the OpenCore image from the macOS version selected in the template and replace it with a fresh one, stock or custom.

By default, the NICs for Monterey and Big Sur are virtio. The vDisk bus is virtio for these too.

High Sierra, Mojave and Catalina use a SATA vDisk bus. They also use e1000-82545em for their NICs.

    The correct NIC type for the "flavour" of OS you choose will automatically be added.

However, if for any macOS version you want to override the NIC type for the install, you can change the default NIC type in the template to Virtio, Virtio-net, e1000-82545em or vmxnet3.

By default, the NIC for all VMs is on

    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>

This makes the network adapter appear built in, which should help with Apple services.

     

Make sure to delete the old Macinabox helper script before running the new Macinabox so that the new script is put in place, as there are some changes in that script too.

     

I should be making some other changes in the next few weeks, but that's all for now :)

     

     

     

12. Hi @cbeitel, I have read your PM but thought I would reply here in the thread.

     

So you have an ESXi host running with VMs on it.

You want to set up a VM on Unraid running ESXi and have all the VMs from the original ESXi running as nested virtualized VMs, so you don't have to set everything up again.

Doing this would be very inefficient.

You should migrate the ESXi VMs to Unraid without using a virtual ESXi on Unraid. The vdisks will work as-is on Unraid, so you need only copy them over and then set up a VM pointing to the vdisks copied from ESXi.

Make sure to choose a similar config to the ESXi VMs, i.e. if you are using legacy boot for a VM on ESXi then choose SeaBIOS on Unraid; if using UEFI, then choose OVMF.

     

I know you are worried because the Windows VMs are activated. So, to be sure it goes smoothly, you will need to get the UUID from the VM and make it the same in Unraid. You can get that from ESXi, or just boot the VM on ESXi, open a command prompt, and type

    wmic csproduct get UUID

This will give you the UUID.

     

Then, in Unraid, you will need to edit the VM template's XML to change the UUID to match the original.

[screenshot: the VM's XML, with the <uuid> element on line 4]

     

So you would just change line 4 there.
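For reference, the element being changed sits near the top of the VM's XML (via the XML view or virsh edit); the UUID below is just a placeholder for the one you got from wmic:

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
  ...
</domain>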

     

     

Now, your Windows gaming VM: my advice is to set this VM up without using the default settings in the template.

Choose the Q35 chipset and not i440fx.

Make sure to use a vBIOS for your GPU; you can try my vBIOS dump script to get one, or edit one from TechPowerUp.

    https://youtu.be/FWn6OCWl63o

     

Also make sure to put all parts of the GPU on the same bus/slot (i.e. the graphics, sound and USB, if the GPU has those).

     

    https://youtu.be/QlTVANDndpM

     

     

  13.  

I don't think a reallocated sector on an SSD is as bad as on a mechanical drive, however. I believe it is a block that failed to be erased and has been replaced by one from the reserve, of which there are many (but @JorgeB will know better than me). Even so, if it were me, I would probably replace the cache drive because of this reallocated sector and the fact that it is quite old anyway. Power-on hours are 27627, so it's about 3 years old, and it has written a lot of data: 383850 gigs, or 363 TB.
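If you want to pull the same numbers for your own drive, they come from the SMART data; the device name here is just an example:

# reallocated sectors, power-on hours and total LBAs written for a drive
smartctl -a /dev/sdb | grep -i -E 'reallocated|power_on|lbas_written'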

14. I would guess that you removed the disk and, when you started the array, maybe you had the 'Parity is valid' box checked, and then afterwards ran a manual parity check.

If so, then the parity check will come up with many errors, as the original parity would be incorrect.

     

I would just stop the parity check. Stop the array, then go to Tools > New Config. Select 'Preserve all'.

[screenshots: the New Config tool with 'Preserve current assignments' set to all]

     

Then click Apply.

     

    This will keep all the disks in the same place.

     

     

Then go back to the Main tab and double-check that the disks there are correct.

     

Then start the array, making sure not to check the 'Parity is valid' box.

     

    Then a new parity sync should start automatically.

     

[screenshot: the new parity sync running]

  15. On 4/13/2021 at 10:46 PM, evaldez02 said:

    Hello Everyone

    My connection to NZBGET was fine until today.

It passes the connection test, but I can't grab anything else.

    Error says Pipeline error Request Failed. POST /api/v3/queue/grab/924571016: Unable to connect to NzbGet. An error occurred while sending the request.

Is there anything I can try to fix the situation?

     

Containers like Sonarr, Radarr, Lidarr, etc. tend to work better using a custom network to communicate with each other. That said, since the connection test passes when you click it, this may not fix your problem, but it's worth a try anyway.

     

Some people find that, after Sonarr and Radarr have been working fine connecting through the server IP address and port number, this stops working.

So here are the steps to put them on a custom network so they can talk through name resolution.

     

Make sure that in Settings > Docker you have 'Preserve user defined networks' enabled (you will need to stop the Docker service to change this setting).

     

[screenshot: Docker settings with 'Preserve user defined networks' set to yes]

     

     

You will need a custom Docker network. To create this, go to the web terminal and type

     

    docker network create proxynet

     

(the network can be called whatever you like; it doesn't need to be called proxynet, that is just what I call mine)

     

Now you must change NZBGet, Sonarr, Radarr and any other containers that connect to NZBGet to use this network.

So go to each container's Docker template and change the network type to the network you created above.

     

[screenshots: container templates with Network Type set to the custom network]

     

     

Once all relevant containers are changed to this network, they can communicate with each other using the name of the container rather than an IP address.
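A quick way to confirm the name resolution works (container names here are my examples; most images include ping, but not all):

# from the unraid terminal: ping the download client by name from inside sonarr
docker exec sonarr ping -c 1 nzbgetvpn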

     

So in Sonarr, Radarr, etc., go to Download Clients and change the host to the name of the NZBGet container (mine is called nzbgetvpn).

You still need the port number as before.

     

[screenshot: the download client settings with the host set to the container name]

     

     

    Now click test and it will come back all good :)

     

  16.  

The log says that the SSL certs are not being created because it can't verify the subdomains. You are using HTTP verification, so it checks that each subdomain, e.g. radoncloud.yourdomain.com, resolves back to SWAG through port 80.

This is not happening. I can see you have set up a port forwarding rule, but some ISPs block port 80 on home connections. I would suggest moving your domain to Cloudflare and then using DNS verification rather than HTTP (the rough template changes are sketched below).

See this video for how to add your domain to Cloudflare: https://youtu.be/y4UdsDULZDg

And this one for how to set up DNS verification with Cloudflare and Let's Encrypt (SWAG): https://youtu.be/AS0HydTEuA4
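For reference, the switch to DNS verification boils down to two variables in the SWAG template plus your Cloudflare credentials. This is a rough sketch per the linuxserver/swag docs; your appdata path may differ:

# in the swag container template set:
#   VALIDATION=dns
#   DNSPLUGIN=cloudflare
# then add your cloudflare api token / key to:
#   /mnt/user/appdata/swag/dns-conf/cloudflare.ini
# and restart the container so it re-requests the certs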