About joshbgosh10592


  1. @jonp Just wondering if you were able to reproduce this issue in your lab or not. I know it's been crazy with the 6.8 version coming up, but I'm just curious.
  2. Thank you! I've submitted a bug report:
  3. As per the thread below, I'm submitting a bug report for the inability to host nested VMs. In my case, I have a Proxmox VM (PVE-Witness) running on unRAID; it's the third node of my Proxmox cluster. When I try to fire up a VM on PVE-Witness that was just running on PVE-1, I'm met with the error: "TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS." As requested, I attempted a similar task on a newly created Ubuntu 18.04 VM (Ubuntu). When creating the VM in Ubuntu, I'm met with: "Warning: KVM is not available." nas-diagnostics-20191021-0320.zip
  4. If by that you did mean try Ubuntu, that fails with "Your CPU does NOT support KVM." Should the BIOS type matter? I'm concerned because all the Proxmox VMs use SeaBIOS, while I see that unRAID's Proxmox VM shows OVMF. I'd change it as a test, but it seems you can't change it once the VM is created, and the Proxmox VM is the witness in a cluster (which means it's a PITA to reconfigure).
  5. I'm sorry, you mean I should create a new Ubuntu VM inside unRAID and attempt to build a nested VM inside that, right?
  6. Sorry to necro this thread, but I haven't found anything else anywhere that helps. I'm trying to do exactly this on 6.7.2 with a Proxmox VM. When I try to fire up a VM on that Proxmox VM, I receive an error saying that virtualization is configured but not enabled in the BIOS. When I append kvm_intel.nested=1 to the unRAID OS boot label, libvirt fails to start when I tell the VM manager to start back up.
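For anyone landing here from search: nested virtualization on an Intel host is controlled by the kvm_intel module's nested parameter, which on unRAID is typically set via the kernel append line in syslinux.cfg. A minimal sketch of the check plus a hypothetical config fragment, assuming an Intel CPU and the stock boot label (label name and paths may differ on your install):

```shell
# Check whether nested virtualization is enabled on the host.
# "Y" or "1" means on; the file is absent if kvm_intel isn't loaded.
cat /sys/module/kvm_intel/parameters/nested

# Inside the guest, the vCPU must advertise virt extensions for KVM to work:
grep -c -E '(vmx|svm)' /proc/cpuinfo   # >0 means extensions are exposed

# Hypothetical /boot/syslinux/syslinux.cfg fragment, with the module
# parameter appended to the kernel line of the default boot label:
# label unRAID OS
#   menu default
#   kernel /bzimage
#   append kvm_intel.nested=1 initrd=/bzroot
```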
  7. I just looked at the logging for the first time in quite a while and I'm being flooded with errors, about 3 every second: "Aug 27 23:31:24 NAS root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token". I've seen in this thread that this came from a security hardening of unRAID's webUI, but that was resolved in 6.3, I believe. Unassigned Devices is fully updated. Any ideas?
  8. True, I thought it would show raid5 for the system (I set mconvert to raid1, so I was expecting that to show raid1). Thank you! Is there a calculation for that? I was expecting it to be "Free space, minus 1TB"
  9. I was actually just editing the quoted post, sorry about that. I think I got it! Thank you!! Now my next and hopefully final question: shouldn't System and Metadata say RAID5? The UI shows sdg as having 3TB free, when it should have only 2. Here's what I ran, and the results of btrfs filesystem df:
     root@NAS:~# btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/disks/nas-ssd-pool/
     Done, had to relocate 4 out of 4 chunks
     root@NAS:~# btrfs filesystem df /mnt/disks/nas-ssd-pool/
     Data, RAID5: total=2.00GiB, used=1.00MiB
     System, RAID1: total=32.00MiB, used=16.00KiB
     Metadata, RAID1: total=1.00GiB, used=112.00KiB
     GlobalReserve, single: total=16.00MiB, used=0.00B
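On the free-space question above: btrfs tools tend to report raw space per device, and parity overhead depends on the data profile, so the UI's per-device "free" number can overstate what's usable. As a rough rule with equal-sized devices, a raid5 data profile yields (N-1) devices' worth of usable space, while raid1 yields half the raw total. A sketch of that arithmetic, with hypothetical device counts and sizes:

```shell
# Rough usable-capacity arithmetic for btrfs data profiles.
# Assumes equal-sized devices; pools with mixed sizes are messier.
devices=3     # number of devices in the pool (hypothetical)
size_tb=1     # size of each device, in TB (hypothetical)

# raid5: one device's worth of space goes to parity
raid5_usable=$(( (devices - 1) * size_tb ))

# raid1: every block is stored twice
raid1_usable=$(( devices * size_tb / 2 ))

echo "raid5 usable: ${raid5_usable} TB"
echo "raid1 usable: ${raid1_usable} TB"
```

`btrfs filesystem usage <mountpoint>` gives a per-profile breakdown including a "Free (estimated)" figure that already accounts for this overhead.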
  10. Right, I mounted only the first device with "nas-ssd-pool" as the mount name. When I mount it via UD's UI, /mnt/disks/nas-ssd-pool is there, but when it's unmounted, it vanishes (which is what I'd expect). I don't try to mount the other devices.
  11. Thank you! It seems easy enough; however, when I try this, btrfs dev add -f /dev/sdg1 /mnt/disks/nas-ssd-pool fails with "ERROR: /dev/sdg1 is mounted", which makes sense, because the directions said to mount it, but it also makes sense that the command can't change the configuration of a filesystem while it's mounted. Any idea on this one?
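For readers hitting the same "is mounted" error: btrfs device add wants the destination pool mounted but the *new* device itself unmounted. A sketch of the fix, reusing the device and mount names from this thread (adjust for your system; note that wipefs and the -f flag destroy any existing data on the new device):

```shell
# Unmount the new device first; the pool itself stays mounted.
umount /dev/sdg1

# Optional but destructive: clear stale filesystem signatures on the
# new device, only if nothing on it is still needed.
wipefs -a /dev/sdg1

# Add the device to the mounted pool, then rebalance data onto it.
btrfs device add -f /dev/sdg1 /mnt/disks/nas-ssd-pool
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/disks/nas-ssd-pool
```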
  12. Thank you! That seems easy enough. I was hoping for a GUI way (feels less hacky that way lol), but I'm not afraid of a Linux command line. Now to figure out how to power the additional drives.. I'm using the SuperMicro 12-drive chassis, with my 2-drive cache pool attached to the side of the case with Command Strip velcro and powered off one Molex-to-2-SATA adapter. Not sure if it's safe to run 5 SSDs off a single Molex plug lol..
  13. I'm running unRAID 6.7.2 Pro with a 10TB parity drive and 2x10TB and 1x2TB data drives (soon to add 2x2TB, 1x3TB, and 1x4TB), all spinning disks. However, I'd also like to add an array of 1TB SSDs to house VMs for a Proxmox cluster. I can't add these to the normal pool because the parity drive would slow everything down. I'm tossing around the idea of a hardware RAID 5 presented to unRAID's Unassigned Devices as a single virtual drive, but then I can't see if there's a failure (these would be internal drives). Is there a way to make this a software-controlled RAID 5, so unRAID can alert me of any failures, but keep these drives separate from the spinning disks?
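One way to keep the failure visibility of software RAID while keeping the SSDs out of the main array is a multi-device btrfs pool mounted outside it, which is the direction the btrfs posts elsewhere in this feed took. A sketch with hypothetical device names (mkfs.btrfs destroys existing data, and btrfs raid5 carries the well-known write-hole caveat, so treat this as an illustration rather than a recommendation):

```shell
# Create a multi-device btrfs pool: raid5 for data, raid1 for metadata.
# /dev/sdc1 ... /dev/sdg1 are placeholders for the five SSD partitions.
mkfs.btrfs -f -L nas-ssd-pool -d raid5 -m raid1 \
    /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# Mounting any one member device mounts the whole pool.
mkdir -p /mnt/disks/nas-ssd-pool
mount /dev/sdc1 /mnt/disks/nas-ssd-pool

# Check pool health; a missing or erroring device shows up here,
# which is the visibility a hardware RAID virtual disk would hide.
btrfs filesystem show /mnt/disks/nas-ssd-pool
btrfs device stats /mnt/disks/nas-ssd-pool
```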
  14. How do you do this (manage it using the Unassigned Devices plugin)? I have it installed, but it looks like I can only connect an NFS/SMB or ISO file share, rather than an internal virtual disk. I need to add a pair of 1TB SSDs in a RAID 0 that I don't want to be part of the normal array, but that I need to be able to share out.