About italeffect


  1. I have all these IPv6 "Multicast" entries after the upgrade to 6.9. I have IPv6 turned off, so am I safe to delete them all? I haven't been able to find anything that explains what they are. Thanks! (Perfectly smooth upgrade from Beta30, BTW.)
  2. For those following this thread like I am: Limetech posted in the new 6.9 beta thread that they are not aware of this issue. Perhaps someone more skilled than I am can provide a TL;DR of the issue and what has been worked out so far in that thread.
  3. Just as a point of reference, I'm preclearing four 10 TB drives right now. It's taken about 18 hrs for each of the first two cycles, and I'm about halfway through the final read now, so likely about 2 1/4 days in total. i7 8700k, LSI 9201-16e plus onboard SATA, 10 drives in the existing array, running all dockers and VMs while preclearing.
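That timing is roughly what simple arithmetic predicts. A back-of-the-envelope sketch, assuming an average sequential throughput of ~160 MB/s (a made-up but typical figure for large 7200 rpm drives; real speed varies across the platter):

```shell
# Back-of-the-envelope preclear timing (assumed ~160 MB/s average throughput)
SIZE_MB=$(( 10 * 1000 * 1000 ))   # 10 TB in MB (decimal, as drives are rated)
SPEED_MBS=160                     # assumed average sequential speed
PASS_H=$(( SIZE_MB / SPEED_MBS / 3600 ))
echo "~${PASS_H} hours per full pass over the disk"   # → ~17 hours
```

Each preclear cycle makes multiple full passes (pre-read, zero, post-read), which is why whole cycles land in the high-teens-of-hours range reported above.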
  4. Thanks for this. I wasn't clear on the exact syntax for the repo. It's working fine now.
  5. Thanks again for the help. For the sake of simplicity I just reformatted the 8TB disk and passed it directly to the VM. Problem solved.
  6. Thanks very much for the explanation. I'm surprised I haven't run into this earlier, since this is my video storage vdisk for Blue Iris and I've let it fill up (6-7TB) several times over the last several months before clearing it out. I removed the 2nd vdisk and the VM booted right away. I'm assuming I need to run qemu-img resize to shrink the vdisk, but since I can't boot into Windows with the disk first to shrink the file system, what are my options? Do I need to just delete the storage vdisk and start over? I have most of the data I need backed up off it, so it's not really a hug…
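For reference, shrinking the image container itself is mechanically simple, but it discards everything past the new end, so the filesystem inside must already be shrunk first (or its contents expendable, as here). A minimal sketch using a throwaway sparse file in place of the real vdisk (file name and sizes are made up; for a qcow2 vdisk you'd use `qemu-img resize --shrink` rather than `truncate`):

```shell
# Demo on a throwaway sparse file standing in for a raw vdisk.
truncate -s 1G demo.img     # stand-in for the oversized raw image
# In the real case, shrink the NTFS volume inside Windows FIRST;
# truncating past in-use data corrupts the guest filesystem.
truncate -s 512M demo.img   # shrink the raw container
stat -c %s demo.img         # → 536870912 (512 MiB)
rm demo.img
```

If the guest can't be booted to shrink its filesystem first, recreating the vdisk and restoring from backup is usually the safer route.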
  7. Thanks for getting back to me. Yes, the 2nd vdisk is on a 10TB HD and is the only item on the disk; I think the size is 9.9TB. Inside the Windows VM it shows 2TB used out of 9.9TB. Do I need to resize this disk smaller inside the Windows VM to leave some space on the disk in Unassigned Devices? FYI, the boot vdisk is on an SSD and has 50GB free inside the VM and ~300GB free looking at the drive in Unassigned Devices. As of now I can't get the VM to boot at all; it just hangs at some point during boot. Going to see if I can boot a backup copy from a couple of months ago.
  8. My Windows 10 VM that runs Blue Iris has been working for over a year. Recently it started running for only 5-10 minutes and then ending up in a paused state. None of my attempts to fix it have helped, and often I can't even get it to start. It's passed 2 cores, 12 GB of RAM, and the iGPU from my 8700k, and it has two vdisks, both stored on unassigned devices: one SSD and one HD. It has been a great setup and ran with no issues for a long time. In attempts to fix it, I have: recreated the VM; turned VMs on/off; updated Unraid from 6.8.0-rc7 (which I had run since it was posted with no issues)
  9. Is there a way to pull the pi-hole 5.0 beta with this docker? Thanks!
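In Unraid's Docker template editor, pulling a specific release generally means editing the Repository field to point at that release's tag instead of the default. A hypothetical example of the field's `user/image:tag` syntax (whether the pi-hole 5.0 beta is actually published under a `beta` tag is an assumption; check the image's tag list on Docker Hub first):

```
pihole/pihole:beta
```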
  10. I have something similar in my logs; it had been going on under 6.6.7 before I updated to 6.7.0-rc7, and it happens several times a day. I recently switched both the switch that Unraid is connected to and the ethernet card, but I'm seeing similar messages both before and after I swapped the hardware.
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered blocking state
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered disabled state
      Apr 23 10:42:24 unRAID kernel: device veth36c9b86 entered promiscuous mode
      Apr 23 10:42:24 unRAID kernel: