About cuutip

  1. Hmmm, I'm not sure that is all. I would add that when I started my Windows 10 / Windows Server VMs under Unraid 6.4.1, the response was amazing. To add some clarification (this is all on spinners, BTW, and they were 5400 rpm): I would start the VMs, they would take around 2 to maybe 5 minutes to fully load, and I did not hit 100% disk usage during the load. To continue beating the horse: if I were to restart/reboot or power off/power on a VM after a complete load, the VM would load in 10-15 seconds flat, and I noticed my Windows Server loaded in 7 seconds. It was beautiful. I did not start having the Windows VMs riding 100% disk usage until I upgraded to 6.8.3. Maybe this is a bug in KVM, or maybe in the implementation? That is one of the reasons for my late upgrade, and quite possibly a reason for me to roll back to 6.4.1. All my VMs (Windows/Linux/BSD, etc.) responded much quicker under the KVM version included in 6.4.1. I also notice much greater resource use by the VMs, and more 100% processor spikes while monitoring VM usage with htop, in the 6.8.3 distribution; in 6.4.1 the spikes are minimal and the resources do not seem to be taxed as much, or as often.
  2. This solution has definitely gotten me to the point where I need to use block storage rather than SMB shares (i.e. Steam, Epic, etc.). The packet errors from this method are rather numerous and are my major concern. I have used ethtool to tune the NIC and even set the MTU to 9000 for jumbo frames, yet the errors keep building up. I have changed the VM NIC driver in the XML setup file (e.g. rtl8139), but that one only does 10/100 and I get drops instead of errors. I imagine native iSCSI handling from Unraid may be a better way to go. The Unraid feature request post has a bunch of folks +1'ing it, so maybe using this solution until the devs implement a native one is the way to go.
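To show where I'm watching the errors build up: a quick sketch (plain sh; whether your kernel exposes these sysfs counters is an assumption) that dumps the same error/drop counters that ethtool -S summarizes:

```shell
#!/bin/sh
# Print RX/TX error and drop counters for every interface, so you can tell
# whether packets are dying on the host NIC, the bridge, or the VM's vNIC.
nic_err_counters() {
  for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    printf '%s rx_errors=%s rx_dropped=%s tx_errors=%s tx_dropped=%s\n' \
      "$name" \
      "$(cat "$dev/statistics/rx_errors")" \
      "$(cat "$dev/statistics/rx_dropped")" \
      "$(cat "$dev/statistics/tx_errors")" \
      "$(cat "$dev/statistics/tx_dropped")"
  done
}

nic_err_counters
```

Run it before and after a big download; if the host-side counters stay flat while the guest's climb, the problem is in the emulated NIC rather than on the wire.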
  3. I would like to post an additional solution to the board. I had zero problems with the Windows 10 SMB/CIFS homegroup sharing in v6.4.1; my issue was the upgrade. Going from 6.4.1 to 6.8.x broke my SMB shares completely. After some forum searching, lots of reading, some config gap-analysis checking, and running the command smbstatus, I discovered that the smb.conf line 'name resolve order = bcast lmhost host wins' was not working for me in Samba 4.11.4. *I removed lmhost from the line in my Samba 4.11.4 configs.* Now my shares work like they did in 6.4.1. Happy guy here.
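For anyone who wants to copy the fix, this is roughly what the edited override looks like (on Unraid the usual place for it is the Samba extra configuration box under Settings → SMB, though exactly where you put it depends on your setup):

```ini
[global]
    # Default was: name resolve order = bcast lmhost host wins
    # Dropping "lmhost" got name resolution working again under Samba 4.11.4.
    name resolve order = bcast host wins
```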
  4. I am not sure if this is a feature request, a bug report, or what. When I turn on destructive mode for UD+ and then attempt to format my 5 TB drive as NTFS, exFAT, or FAT32, it fails instantly. I checked into the issue and noticed that the format command used by UD+ writes an MS-DOS MBR partition table instead of letting you select MBR or GPT (and MBR tops out at 2 TiB, which would explain why a 5 TB drive balks). So my remedy was to open a web terminal, run fdisk /dev/sd(#), create a GPT partition table, and write it; then refresh my screen and format. UD+ keeps creating an MBR before the format, and does not allow for multi-partitioning in the format process to accommodate. However, XFS and Btrfs work fine out of the box. I'm not sure if there is something I am missing or something I can do to help out, but for now I will be turning off destructive mode, formatting the drive on another box, and reintroducing it to UD+.
  5. [Solved] Realtek apparently doesn't play well with Unraid, so I purchased another USB/Ethernet dongle off of Amazon: https://www.amazon.com/gp/product/B07MK6DJ6M/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1. This AX88179/178A NIC chip works great, and setup was plug and chug.
  6. I have been reading posts about how folks are moving their pfSense boxes over to an Unraid VM, and I have been shooting to do the same with my Sophos UTM. Well, the VM runs faster (clearly) and I have already seen some better performance from running it for a short amount of time. My main issue is that I cannot get the USB Ethernet dongle on my current Sophos box to be passed in successfully. The VM sees the dongle but errors out during the modprobe with:

r815x: probe of 1:1-2.0 failed with error -32

I have another USB dongle that passes through with no issue, but it is only 10/100 and I have noticed degraded performance using it, so I don't want to keep using it.

I have read the following community/forum posts for research:
- ***GUIDE*** Passthrough Entire PCI USB Controller
- ***GUIDE*** Passing Through Network Controllers to unRAID 6 Virtual Machines
- USB Ethernet Passthrough - Is it possible?
- How to pass through a physical Network Controller in unRAID

My failed troubleshooting includes:
- Adding the USB device directly into the 70-persistent-net rules file: didn't work.
- Probing r815x and attempting to force it to take: didn't work.
- Adding <driver name='r815x'/> into the VM's XML configuration: didn't work.
- Deleting 70-persistent-net-rules, crossing my fingers, and hoping the system would reconfigure it: didn't work. (For my 10/100 dongle this works just fine; Unraid lists it as an ICS Advent DM9601 (0fe6:9700).)
- Letting the system configure the 70-persistent-net rules file, then swapping in the USB information (0bda:8153) and MAC from the r815x chipset: this breaks it and I have to reload from snapshot.
- I thought the issue was using ehci instead of xhci, since it is a USB 3.0 dongle: didn't work. I also put the dongle on the other USB buses and tried both ehci and xhci configurations: didn't work.
- I even purchased 3 different dongles, and ALL of them, once plugged in, came up as 0bda:8153 (ppl should be forced to list their chipsets, smh): didn't work.

My Unraid config:
- unRAID version: 6.4.1
- Model: Custom
- M/B: Dell Inc. - 0GXM1W
- CPU: Intel® Core™ i7-3770 CPU @ 3.40GHz
- HVM: Enabled
- IOMMU: Enabled
- Cache: 256 kB, 1024 kB, 8192 kB
- Memory: 32 GB (max. installable capacity 32 GB)
- Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: 1000 Mb/s, full duplex, mtu 1500
- Kernel: Linux 4.14.16-unRAID x86_64
- OpenSSL: 1.0.2n

Not sure my VM config would be helpful, considering that it does work, just not with this ONE instance of an r815x passthrough. This is the SAME dongle that is being used right now on the hard box. I have tried upgrading to 6.8, but doing so killed my SMB shares; I did some initial reading/research on it and I am not using SMBv1/CIFS for my Windows 10. I downgraded because 6.4.1 has no issues with Windows 10 with SMBv1 turned off. So, I'm trying to pick my battles here. Please help, I've been pounding my head on this one for a while.
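For reference, the shape of the libvirt hostdev stanza I'm using to hand the dongle to the VM (the vendor/product IDs are what lsusb reports for these dongles; this passes the whole USB device so the guest's driver, not the host's, binds it):

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0bda'/>
    <product id='0x8153'/>
  </source>
</hostdev>
```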
  7. Can the moderators take these posts and move them into a new thread? If they cannot, I will start the new thread myself. Thanks.
  8. I had this exact same scenario happen: the buggy Marvell chipset along with the shutdown and data loss. I did switch the PCIe port and re-seat the card, but unfortunately I did lose data. The array and parity rebuilt, but when I rebooted there was data loss. I remember my SSD cache disk had data on it, but now, after the rebuild, Unraid doesn't recognize the cache drive anymore; the only option it gives me is to format. I am wondering if there is anything else I can try, or am I in an irrecoverable state? Thanks.