italeffect

Members
  • Posts: 15
  • Joined
  • Last visited

italeffect's Achievements

Noob (1/14)

0 Reputation

  1. Do I have to pass the SMTP server settings in the Docker template to get email notifications to work? (See the Docker SMTP sketch after this list.)
  2. I think I'm just going to bite the bullet and replace the drive and do a rebuild.
  3. Thanks for this, I had no idea this was a thing. Not the issue in this case but good to look out for!
  4. I have these increasing errors on a single HD over the last few days. Knowing that this is usually a cable or controller issue, I have swapped the SATA breakout cables from my LSI 16e, and I still get errors on the single drive. I have swapped the power cables (1st try) and then shuffled them (2nd try) between 3 other drives. Still I get CRC errors on the same HD and no others. Is it safe to assume at this point that it is an issue with the single disk? I've run a short SMART test on that HD, which was fine, and did a full parity check, which was also fine. (A smartctl sketch for re-checking this drive follows the list below.) Disk 12, WDC_WD140EDFZ-11A0VA0_X0HB8DYC (sdo). Diagnostics attached. Thanks! unraid-diagnostics-20221217-2154.zip
  5. Is this the correct syntax for Exclude Folders? Do I need to escape the spaces in the path or use quotes? Thanks! /mnt/user/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder/,/mnt/user/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Metadata/
  6. I have all these IPv6 "Multicast" entries after the upgrade to 6.9. I have IPv6 turned off. Am I safe to delete them all? I have been unable to find anything that explains what they are. Thanks! (Perfectly smooth upgrade from Beta30, BTW.)
  7. For those following this thread like I am: Limetech posted in the new 6.9 beta thread that they are not aware of this issue. Perhaps someone more skilled than I am can provide a TL;DR on the issue and what has been worked out so far in that thread.
  8. Just as a point of reference, I'm preclearing four 10 TB drives right now; it's taken about 18 hours for each of the first two cycles, and I'm about halfway through the final read now. So likely about 2 1/4 days in total. i7 8700k, LSI 9201-16e and SATA, 10 drives in the existing array, running all Dockers and VMs while preclearing.
  9. Thanks for this. I wasn't clear on the exact syntax for the repo. Working fine.
  10. Thanks again for the help. For the sake of simplicity I just reformatted the 8TB disk and passed it directly to the VM. Problem solved.
  11. Thanks very much for the explanation. I'm surprised I haven't run into this earlier, since this is my video storage vdisk for Blue Iris and I've let it fill up (6-7 TB) several times over the last several months before clearing it out. I removed the 2nd vdisk and the VM booted right away. I'm assuming I need to run qemu-img resize to shrink the vdisk, but since I can't boot into Windows with the disk attached first to shrink the file system, what are my options? Do I need to just delete the storage vdisk and start over? I have most of the data I need backed up off it, so it's not really a huge loss. Or can I just run the resize command anyway, since it's not the boot vdisk? Is it a more sane setup to just pass the whole drive to the VM and use it that way? (A qemu-img resize sketch follows the list below.) I found your directions on using the virtio-scsi controller together with discard='unmap', so thanks for that and I'll get that set up.
  12. Thanks for getting back to me. Yes, the 2nd vdisk is on a 10 TB HD and is the only item on the disk. I think the size is 9.9 TB. Inside the Windows VM it shows 2 TB used out of 9.9 TB. Do I need to resize this disk smaller inside the Windows VM to leave some space on the disk in Unassigned Devices? FYI, the boot vdisk is on an SSD and has 50 GB free inside the VM and ~300 GB free looking at the drive in Unassigned Devices. As of now I can't get the VM to boot at all; it just hangs at some point during boot. Going to see if I can boot a backup copy from a couple months ago.
  13. My Windows 10 VM that runs Blue Iris has been working for over a year. Recently it started only running for 5-10 minutes and then ending in a paused state. All my attempts to fix it have not helped, and often I can't even get it to start. It's passed 2 cores, 12 GB of RAM, and the iGPU from my 8700k, and it has two vdisks, both stored on Unassigned Devices: one SSD and one HD. It has been a great setup and has run with no issues for a long time. In attempts to fix it, I have:
      • recreated the VM
      • turned VMs on/off
      • updated Unraid from 6.8.0-rc7 (which I had run with no issues since it was posted) to 6.8.2
      • turned off "PCIe ACS override: Downstream" and "VFIO allow unsafe interrupts: Yes" (no change)
      • changed the machine type between i440fx-3.0 (original), i440fx-4.2, and Q35-4.2
      • changed from 2 cores to 1
      • removed the iGPU and booted with VNC only (freezes at login or won't connect at all)
      The one set of log messages I'm seeing a couple of minutes after launching the VM is:
      Feb 22 15:12:08 unRAID kernel: kvm [1007]: vcpu2, guest rIP: 0xfffff80108dc1a52 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1, nop
      (the same line is repeated ten times within the same second)
      But I'm not really seeing much else in terms of errors. I could really use any help on what direction to go in, please. (A virsh sketch for checking why the VM pauses follows this list.) Diagnostics attached. Thanks! unraid-diagnostics-20200222-0914.zip
  14. Is there a way to pull the pi-hole 5.0 beta with this docker? Thanks! https://pi-hole.net/2020/01/19/announcing-a-beta-test-of-pi-hole-5-0/
  15. I have something similar in my logs; it had been going on under 6.6.7 prior to my updating to 6.7.0-rc7. It happens several times a day. I recently switched both the switch that Unraid is connected to and the Ethernet card, but I'm seeing similar messages both before and after I switched the hardware.
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered blocking state
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered disabled state
      Apr 23 10:42:24 unRAID kernel: device veth36c9b86 entered promiscuous mode
      Apr 23 10:42:24 unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): veth36c9b86: link is not ready
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered blocking state
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered forwarding state
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered disabled state
      Apr 23 10:42:24 unRAID kernel: eth0: renamed from vethaf1be8e
      Apr 23 10:42:24 unRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth36c9b86: link becomes ready
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered blocking state
      Apr 23 10:42:24 unRAID kernel: docker0: port 5(veth36c9b86) entered forwarding state
      Apr 23 10:42:27 unRAID kernel: docker0: port 6(vethe876c38) entered blocking state
      Apr 23 10:42:27 unRAID kernel: docker0: port 6(vethe876c38) entered disabled state
      Apr 23 10:42:27 unRAID kernel: device vethe876c38 entered promiscuous mode
      Apr 23 10:42:27 unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): vethe876c38: link is not ready
      Apr 23 10:42:27 unRAID kernel: docker0: port 6(vethe876c38) entered blocking state
      Apr 23 10:42:27 unRAID kernel: docker0: port 6(vethe876c38) entered forwarding state
      Apr 23 10:42:27 unRAID kernel: docker0: port 6(vethe876c38) entered disabled state
      Apr 23 14:57:29 unRAID kernel: br0: port 3(vnet1) entered blocking state
      Apr 23 14:57:29 unRAID kernel: br0: port 3(vnet1) entered disabled state
      Apr 23 14:57:29 unRAID kernel: device vnet1 entered promiscuous mode
      Apr 23 14:57:29 unRAID kernel: br0: port 3(vnet1) entered blocking state
      Apr 23 14:57:29 unRAID kernel: br0: port 3(vnet1) entered forwarding state
      I was going to get around to asking at some point whether these messages matter. I have not had any noticeable performance or network issues, so it hasn't been a priority.
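
Note on post 1 above: in an Unraid Docker template, extra settings are generally passed to the container as environment variables, so supplying SMTP details usually amounts to something like the command-line sketch below. This is only an illustration; the variable names (SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASS) and the image name are hypothetical placeholders, and the real setting names depend entirely on which container sends the notifications.

    # Hypothetical sketch only: the -e variable names and the image name below are
    # placeholders; check the specific container's documentation for the real names.
    docker run -d --name notifier \
      -e SMTP_HOST=smtp.example.com \
      -e SMTP_PORT=587 \
      -e SMTP_USER=me@example.com \
      -e SMTP_PASS=changeme \
      someimage:latest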
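
Note on post 4: a minimal command-line sketch for re-checking the suspect drive with smartctl, assuming it is still assigned /dev/sdo as in the post (device letters can change between boots). UDMA_CRC_Error_Count is the SMART attribute that counts interface CRC errors, so a raw value that keeps climbing points at the link (cable, backplane, or controller port) rather than the platters, consistent with the cable swapping already tried.

    # Run a short SMART self-test on the suspect disk (device name assumed)
    smartctl -t short /dev/sdo
    # After a few minutes, review the attributes and the self-test log,
    # paying attention to UDMA_CRC_Error_Count
    smartctl -a /dev/sdo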
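
Note on posts 11 and 12: a hedged sketch of inspecting and shrinking a vdisk with qemu-img. The path and target size below are placeholders. Shrinking an image discards data beyond the new size, so it only makes sense after the filesystem inside the guest has been reduced first, or when the contents are expendable as described in the post.

    # Placeholder path and size; substitute the actual vdisk location and target
    qemu-img info /mnt/disks/storage/vdisk2.img
    # --shrink is required as a safety check when reducing an image's size
    qemu-img resize --shrink /mnt/disks/storage/vdisk2.img 2T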
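
Note on post 13: when libvirt pauses a guest it records a reason, which can be read from the command line. A small sketch, assuming the domain is named "Windows 10" in the VMs tab (the real name may differ) and that the standard libvirt log location is in use. A backing disk running out of free space is one common cause of a VM repeatedly pausing itself, which fits the full storage vdisk described in posts 10-12.

    # Ask libvirt for the domain's current state and the recorded reason for it
    virsh domstate --reason "Windows 10"
    # The per-domain QEMU log often shows the underlying error as well
    tail -n 50 "/var/log/libvirt/qemu/Windows 10.log"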