jjooee

Members
  • Posts: 13
Everything posted by jjooee

  1. Did you ever figure out how to get this working? I'm trying to do the same thing for a lab setup and running into the same issue.
  2. I used to have everything working perfectly with this, but an update must have broken things. When using netboot, the config seems to automatically launch the menu from https://boot.netboot.xyz/2.0.69/menu.ipxe rather than using my locally hosted menu. Before the menu loads I can also see "Attempting to retrieve latest upstream version number... https://boot.netboot.xyz/version.ipxe..." Any changes I make to the menu files via the web interface do not apply, as the configuration seems to be bypassing those settings entirely. This is driving me crazy, since this was working and now I'm not able to use any of my custom configs. Is anyone else experiencing this issue and know what to do? (See the iPXE chainload sketch after this list.)
  3. The issue was the Docker Folders plugin. After uninstalling the plugin all buttons under Docker and VM worked properly.
  4. I upgraded to 6.12.0-rc6 and everything is working fine except one odd GUI issue. Docker: when clicking on a container to start it, pressing the Start button does nothing. I can click the Logs, Edit, Remove, Project Page, Support, More Info, and Donate buttons. For a container that is already running, I cannot click Stop, Pause, or Restart; WebUI and Console work, though. VMs: when clicking Start on a VM, nothing happens. Clicking "Start with console (VNC)" does start the VM, and the console window comes up. Edit, Remove VM, and Remove VM & Disks all work. For a running VM, I am unable to click Start, Pause, Restart, Hibernate, or Force Stop, although VM Console (VNC) does work. Any ideas? I've tried this from another computer that has never accessed the Unraid GUI before. Thanks!
  5. This is a similar use case to why I would LOVE multiple arrays. I've maxed out my internal 3.5" storage and want to expand to a JBOD solution via SAS. With my setup, I would prefer to keep the current eight 3.5" disks in my main chassis, two of them being parity, and dedicate that storage to my personal/critical data. Then I would create a second array, with single (or dual) parity, for the disks in the DAS/JBOD holding less critical data. My concern with doing this on a single array is that if the JBOD disconnects or has a power issue, it could take my whole array down or throw it out of parity. It would also be nice to expand Unraid past 30 disks by creating multiple arrays; that would be ideal for stacking JBOD enclosures and keeping everything under Unraid. There seems to be a lot of confusion among newer Unraid users about how cache pools and individual user share settings can be used to control where data is stored in a typical array + cache pool setup. I agree a lot of people's problems could be solved with those settings, but there are definitely some great use cases for multiple arrays.
  6. By this you mean: before removing the disk, start in maintenance mode, go to the disk, and run the XFS repair?
  7. These are the results I get when running -vL against the disk. The disk status is now showing "Unmountable: wrong or no file system".
  8. Diagnostics attached: titan-diagnostics-20220909-1303.zip
  9. @JorgeB Here are the attributes for the disk if it helps
  10. After running the filesystem check with -nv, I got the attached results. When running the check again with no options, I get the following: Considering the disk is showing unmountable, I believe my only option is to run the check with the -vL flag, correct? (See the xfs_repair command sketch after this list.) Thanks again for the help! xfsrepair-nv.txt
  11. Hello - I noticed a few days ago that one of the data disks in my array was forced offline, I believe due to too many UDMA CRC errors. I replaced the cables going to that drive as a precaution. I then followed the instructions here to rebuild the drive onto itself, as the disk appeared to be fine, just out of sync with the array: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself I did the rebuild in maintenance mode and it completed successfully. I then stopped the array and started it again in normal mode. After the array started successfully, the rebuilt disk shows "Unmountable: No file system". Under these circumstances, should I choose to allow the disk to format or not? I just want to make double sure I'm not going to mess up the data on that disk or the array, since I thought it had already rebuilt the disk from parity by following the steps above. Thank you for the help!
  12. I think I finally got it working. I did try the newest Windows Insider Preview build, but that didn't help my specific issue. Today I updated from 6.9 beta 25 to beta 29, and while that created a few new issues I'll have to deal with (VNC broke for all VMs, Plex transcoding), it seems to have fixed nesting on Ryzen, or at least on Zen 2. I created a new VM using an existing Windows 10 image that was a fresh install; this instance does not have the Insider Preview build with the new Hyper-V nesting support. I was able to use host-passthrough for the CPU, OVMF, and i440fx-5.1 with no manual modifications to the XML, and it's NOW WORKING! I have VMware Workstation Pro 15.5 installed in the Windows VM, running an instance of ESXi 6.5 inside that. Going to install a few guest VMs inside ESXi now and see if it'll continue to nest without any issues. This is a GAME CHANGER for doing ESXi labs under Unraid. If this all works out in the Windows VM, I'll go back to trying a straight ESXi 6.5/7 guest directly on Unraid, which will end up being my real lab environment.
  13. I know this is an old thread, but I'm having major issues with this in 6.9 running on a Threadripper CPU. I've tried everything, and no combination of nested hypervisors will work under Unraid. I've tried VMware Workstation and VirtualBox running in a Windows 10 VM. I've tried ESXi 6, 6.5, and 7 running in an Unraid VM, and even tried running KVM inside an Ubuntu VM. No matter the combination or settings (CPU host-model / passthrough, etc.), it always crashes the second-level VM and those CPU cores shoot to 100%. I tried adding <feature policy='require' name='svm'/> between the <cpu> tags, and then I don't get the VMware Workstation error about AMD-V, but the guest still crashes the host. (See the libvirt <cpu> sketch after this list.) I really wish there were a solution to this. I know there are only a few of us trying to do this, but being able to spin up multiple ESXi instances for labbing VMware is really important to me.
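
Regarding post 2: one way to make sure clients load the locally hosted menu instead of the upstream one is to chainload it explicitly from iPXE. A minimal sketch, assuming the netboot.xyz container serves its menu files over HTTP at a hypothetical 192.168.1.10:8080 (substitute your own host and port):

    #!ipxe
    # Hypothetical address of the locally hosted netboot.xyz assets
    set local_url http://192.168.1.10:8080
    # Chainload the local menu rather than boot.netboot.xyz; drop to an
    # iPXE shell if the fetch fails so the error stays visible
    chain --autofree ${local_url}/menu.ipxe || shell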
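
For the filesystem-repair posts (items 6-11): a rough outline of the usual xfs_repair sequence on an Unraid data disk, assuming the array is started in maintenance mode and the affected drive is disk 1, so the parity-protected device is /dev/md1 on that era of Unraid (substitute your actual disk number):

    # Dry run first: -n makes no changes, -v is verbose
    xfs_repair -nv /dev/md1

    # If the disk is unmountable because of a dirty log, -L zeroes the log.
    # That can discard the most recent metadata updates, so it is a last
    # resort after reviewing the dry-run output.
    xfs_repair -vL /dev/md1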
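
For the nested-virtualization posts (items 12-13): a sketch of the <cpu> fragment of a libvirt domain XML that passes the host CPU through and explicitly requires AMD-V (svm) in the guest. This is only one piece of the full VM definition and assumes an AMD host, as described in those posts:

    <!-- Expose the host CPU model so the guest can see SVM -->
    <cpu mode='host-passthrough' check='none'>
      <!-- Explicitly require AMD-V in case it is masked by default -->
      <feature policy='require' name='svm'/>
    </cpu>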