Hadrian_Aurelius

Members · 16 posts

Hadrian_Aurelius's Achievements: Noob (1/14) · Reputation: 0

  1. I've been following this thread for a while and haven't seen any recent updates. I was affected by the issues outlined in the OP but wasn't able to get rid of them without taking my LSI controller out of the equation and going back to running directly off the motherboard SATA controller. I'm planning on bringing the LSI card back to allow me to add more disks to the array, but I'm scared of running into this problem again. Has anyone tested whether the new version of Unraid itself somehow avoids the issues discussed in this thread?
  2. It's been a month since I posted about my issues. A parity check completed without issues for the first time in two months (as expected). Once again, this problem started suddenly and mysteriously for me after years of problem-free operation. There have also been no changes to the kernel on my system (at least not in many months), so I can't see that being the cause either. In the end, I simply removed my IBM M1015 9220-8i from the equation and am back to using SATA directly off the motherboard; so far, no issues.
  3. Add me to the seemingly relatively small list of folks this solution (much appreciated fix/write-up) didn't work for. Here's a summary of my experience:
     - Unraid 6.10.3
     - IBM M1015 9220-8i (IT mode with latest firmware)
     - Drive that keeps dropping off (as of the beginning of December 2022): ST10000DM0004 (2GR11L)
     - Two other ST10000DM0004 drives in the system (one is the same revision; the other, the parity drive, is revision 1ZC101)
     - 2 x ST10000NE0008 also in the array
     Only the specific drive listed above has started to drop off, and only after at least two years of no problems.
     The first drop-off occurred 01 Dec 2022 (scheduled parity check). I researched the issue, found this thread (and others), checked the firmware version for the drive that was getting dropped (despite not seeing why this should make any difference given trouble-free operation for the last few years) and found no updates. Did the same for the HBA card: already on the latest firmware. Ran through the changes outlined in the OP and executed all recommended changes on all of the above-mentioned drives (not just the one that dropped off). Realizing there could be other factors (cabling, power, etc.), I also moved the drive to a different slot on the enclosure I'm using (RSV-SATA-Cage-34 in a Rosewill 4U whitebox build) and swapped places with another drive; the theory was that if the issue happened again on the same slot, the cage or the associated breakout cable would be suspect. Removed the drive from the array and ran extended SMART tests (see the sketch below): everything normal. Reintroduced the drive to the array and performed a rebuild: no errors. Walked away and forgot about it.
     On 01 Jan 2023, the same drive dropped off again at the start of a parity check. This eliminates slot/cable as the culprit, as the drive was in a new slot with a different breakout cable. This time, I removed the IBM M1015 controller and reconnected all drives directly to the motherboard. Rebuilt the drive over the top of itself again and this time immediately followed up with a parity check. No errors.
     I don't really know what to make of all this. Obviously, it's possible I would immediately have had the same drop-off issue if I had run a parity check after the first incident on 01 Dec 2022... or would I? Also, when I ran the parity check this time around, I didn't spin the drive down before running it, so the problem might still show up again on 01 Feb 2023. Even if the issue doesn't occur again in Feb, I'm always going to wonder whether it will suddenly show up again at a later date. It still begs the question why this would suddenly start happening after years of problem-free operation and no changes to hardware/software/OS around the time of the first occurrence. This is an ugly problem and a huge time waster when it happens; rebuilding the array and re-checking parity is not fun.
     If this happens again to me, I'll follow up in this thread. If it doesn't, it seems either:
     - the SeaChest solution doesn't work for the ST10000DM0004 drive I started having problems with; or
     - my IBM M1015 went bad (which makes no sense given that this is only happening on one drive... so far); or
     - the IBM M1015 simply decided it doesn't like that drive anymore for some reason; or
     - we're still missing part of what the cause of this issue is.
     I'm going to go with that last guess... Possibly MTF.
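     A rough sketch of the extended SMART test step from the command line, in case it helps anyone repeat it (whether run this way or from the Unraid GUI, the result should be the same; /dev/sdX is a placeholder for the affected drive, and "-d sat" may be needed depending on how the drive is attached):
         smartctl -t long /dev/sdX   # start the extended (long) self-test
         smartctl -c /dev/sdX        # capabilities, including estimated test duration
         smartctl -a /dev/sdX        # full SMART report and self-test log once it completes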
  4. The container ran fine for me for a few months. Zero problems. Then one day it started constantly falling over: wouldn't start, crashed all the time, just unending issues. I spent weeks trying to get it working properly, deleted/wiped everything multiple times and tried tons of different settings, but in the end I just gave up and installed the container on an Ubuntu VM and, same as others have said, it works properly with none of these problems. If the Unraid container works fine for you, great, but don't be surprised if it just suddenly fails some day for no apparent reason. This thread is full of people describing the same unexplained problems.
  5. The Ubuntu VM thing was me. Still running with zero problems. Once again, sorry, but the container just doesn't work, or if it does, it won't for long.
  6. I have the same problem(s) that others have reported in this thread. After a month of constantly messing around with the container and then wiping it each time it once again inevitably crashed, I've given up and installed the controller in an Ubuntu VM. It works fine and doesn't have any of these problems. Something just isn't right with this container. I run lots of other containers in Unraid and had/have none of the issues this one has.
  7. The workaround I've come up with for the time being is to just use NFS instead. Initially, I thought that wasn't working properly either, but I've since noticed it actually is working on the affected Windows 10 box where SMB still isn't.
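     In case it helps anyone, a quick way to sanity-check that the NFS exports are actually being published, from the Unraid console or any Linux box with nfs-utils ("tower" and "share" are placeholders for your server name and share; the Windows client side is a separate step):
         showmount -e tower                              # list the NFS exports Unraid is publishing
         mount -t nfs tower:/mnt/user/share /mnt/test    # test-mount the export (create /mnt/test first)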
  8. I'm on Windows version 20H2, build 19042.746, and I have also spent hours on this. I've tried every single suggestion in this thread (and others in the forum) and nothing works. Like others have said, the registry settings aren't even there anymore, so that tip is no longer relevant. I even tried creating the DWORD entry myself and giving it the appropriate value, but still no go.
  9. UPDATE - For now, I have put "brctl setageing br1 0" in the Unraid go file, which is a way to make the setting persistent across boots, although I'm sure there's a better way (rough sketch below).
     One of the bridges in my server requires the setageing parameter to be set to "0". This relates to a workaround I'm trying to create for issues with capturing/processing network traffic inside VMs (using promiscuous mode). I'll be updating a number of my posts to help people with this once I've solved the persistence part of the issue.
     I can successfully set the parameter by manually running (br1 being the bridge I need this applied to):
         brctl setageing br1 0
     ...but this is not persistent across boots, so I've gone looking for where the default setting comes from on boot. The default is at least referenced in:
         /sys/devices/virtual/net/br1/bridge/ageing_time
     ...however, this setting gets overwritten on boot. It appears as though the following config file would need to be edited:
         /boot/config/network.cfg
     ...but when I added what I interpreted the correct config option to be, I may have used incorrect syntax. I added the following line:
         BRAGEING[1]="0"
     I came up with that line based on looking at what someone else had done in Proxmox. They used this in /etc/network/interfaces:
         <snip>
         auto vmbr1
         iface vmbr1 inet manual
             bridge_ports eth1
             bridge_ageing 0
             bridge_stp off
             bridge_fd 0
     ...whereas the syntax in Unraid looks like this and, as I mentioned, lives in /boot/config/network.cfg rather than /etc/network/interfaces:
         IFNAME[1]="br1"
         BRNAME[1]="br1"
         BRNICS[1]="eth1"
         BRSTP[1]="no"
         BRFD[1]="0"
     Can anyone help identify where this setting should reside? It would be much appreciated.
     This seems to be the best workaround to the feature request I posted at:
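     For anyone wanting to copy the stopgap, this is roughly what the go-file approach looks like (the wait loop is my own precaution in case br1 isn't up yet when the go file runs; adjust or drop it as needed):
         # appended to /boot/config/go (runs once at boot)
         # wait up to 60s for br1 to exist, then disable MAC ageing so the
         # bridge floods traffic to all ports and promiscuous-mode VMs can see it
         for i in $(seq 1 60); do
             [ -d /sys/class/net/br1 ] && break
             sleep 1
         done
         brctl setageing br1 0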
  10. Update - I have a workaround for this per my post at - ...but it would still be nice to have a GUI option to adjust some of these settings.
      Feature Request
      In order to capture traffic using VMs, it currently seems to be necessary to dedicate and pass through an entire NIC on a per-VM basis. My main use case for VMs is multiple VMs (Security Onion, SELKS, Snort, Suricata, etc.) that capture/process traffic from taps and/or mirrored ports. It's a waste of resources (NICs, PCIe slots and tap/mirror ports) to have to dedicate a NIC to each and every VM just to work around the traffic filtering that the VM Manager does by default. There has been previous discussion on trying to work around this as per:
      and I also attempted (unsuccessfully) to get around the current VM Manager traffic filtering by doing the following:
      https://vext.info/2018/09/03/cheat-sheet-port-mirroring-ids-data-into-a-proxmox-vm.html
      If I were able to configure promiscuous mode as easily as I can using ESXi, I could easily use Unraid as the primary hypervisor in my lab, and I believe there are many other NSM geeks out there who would benefit from this feature.
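      For context, the general Linux way to mirror traffic from a physical NIC into a VM's tap interface is tc-based port mirroring; a rough sketch (eth1 and vnet0 are placeholders for the capture NIC and the VM's tap device, and this isn't necessarily the exact recipe from the linked cheat sheet):
          # mirror everything arriving on eth1 into the VM's tap device vnet0
          tc qdisc add dev eth1 ingress
          tc filter add dev eth1 parent ffff: protocol all u32 match u8 0 0 action mirred egress mirror dev vnet0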
  11. Update 2 - I have now also come up with a way to do all of this without using an entire NIC passed through to each VM. See my post at:
      Update 1 - I also solved the issue by passing through a PCIe slot as well as "half" of a 4-port NIC. Everything below is kept just for history in case it helps other people.
      Would very much like to know how you managed to get this working. I'm not passing through a NIC in my case (because I wanted multiple VMs to capture from the same physical interface), but I don't think that should matter, since the problem I'm running into has been reported both with and without pass-through as per: https://forums.unraid.net/topic/79511-enable-promiscuous-mode/?_fromLogin=1
      I'm having the same issue as those folks: I have promiscuous mode up and running, yet all I see on any VM attached to the bridged interface is broadcast traffic. In your case, did you try without pass-through and then move on to pass-through because you had the same issue? Many thanks in advance for any insight/help.
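      If anyone else is stuck at the "only broadcast traffic" stage, a quick way to narrow down where frames are being dropped (br1 and vnet0 are placeholders for the bridge and the VM's tap interface):
          cat /sys/class/net/br1/bridge/ageing_time           # 0 means the bridge floods instead of learning MACs
          tcpdump -ni br1 not broadcast and not multicast     # is unicast/mirrored traffic reaching the bridge at all?
          tcpdump -ni vnet0 not broadcast and not multicast   # is it being forwarded to the VM's tap?
      If the bridge sees the traffic but the tap doesn't, the forwarding is being filtered between the two, which is where the ageing and promiscuous-mode settings come into play.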
  12. Update 2 - I have now also come up with a way to do all of this without using an entire NIC passed through to each VM. See my post at:
      Update 1 - I also solved the issue by passing through a PCIe slot as well as "half" of a 4-port NIC. Everything below is kept just for history in case it helps other people.
      I also have the same problem. The only way around it that I can think of might be to try passing through an entire dedicated capture NIC to the VM, but I'd rather not have to do that because I wanted multiple IDS/packet-capture VMs running, all capturing from a single physical interface. This is a huge setback for me, as I completely rebuilt/upgraded this box to take over running the 24/7 VMs I had hosted on ESXi, in order to reduce power consumption.
      Sorry, I've edited my post: I didn't read yours properly and realized you've actually already done what I had thought might be the next step. Looks like someone may have solved this issue via pass-through:
  13. That was it. I guess I'm an idiot. Mine doesn't seem to show "abc" as the password; instead, it shows only the MD5 hash, so I assumed I had to use that (which did seem weird to me). I had been using u:abc p:md5hash. Thanks!
  14. Thanks for responding so quickly. So far, I have tried:
      - clearing all browser data
      - deleting and re-installing the container
      - a separate browser
      - a blank username/password (instead of the ones that appear in the default config)
      None of the above has worked so far.
  15. After installing the container, I end up stuck at the Apache Guacamole login prompt. I've tried the username/password listed in the config and they don't work; neither does changing them.