Kaldek

Everything posted by Kaldek

  1. OK, yeah, I think I'm just lacking in caffeine this morning. It's no different in functionality from having a two-drive array (one parity, one data). It's still "parity" even though it might be conceptually similar to RAID1, in that only two disks are taking part in the resiliency for this particular data. I don't think I'd ever recommend that anyone visualise it that way; it's just helped me wrap my head around what I saw.
  2. It took me a while to wrap my head around that, but I think I get it. What I think you're saying is:
     • All disks are involved in parity calculations for the first 4TB.
     • Only the parity drive and the 8TB drive are involved in parity calculations for the last 4TB.
     Does that therefore mean that for the last 4TB of the 8TB drive, it's not technically parity but rather a bit mirror? And that if I added another 8TB drive to my array it would switch from a bit mirror to actual parity? I think I might be mentally barking up the wrong tree here.
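     The way I'm picturing it is plain XOR, where anything past the end of a smaller disk counts as zero. A toy sketch with made-up byte values (my own illustration, not anything from the actual Unraid code):
     #!/bin/bash
     # Pretend byte values at one offset on each data disk
     d1=5; d2=9; d3=3    # the three 4TB disks
     d4=200              # the 8TB disk
     # First 4TB: every data disk contributes to the parity byte
     echo "parity in the first 4TB region: $(( d1 ^ d2 ^ d3 ^ d4 ))"
     # Last 4TB: the 4TB disks are treated as zero, so the parity byte just equals disk 4's byte
     echo "parity in the last 4TB region:  $(( 0 ^ 0 ^ 0 ^ d4 ))"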
  3. Hi folks, I'm hoping someone can clarify the behaviour of my parity check. My setup is as follows:
     • Parity: WD 8TB CMR
     • Disks 1-3: WD 4TB CMR (most shares; these shares use Disks 1-3 only, and Disk 4 is excluded for them)
     • Disk 4: Seagate 8TB SMR (the Video share only, no other shares; Disks 1-3 are excluded for this share)
     I split Disk 4 off for videos because it's slower and I don't mind if this disk is the one that's mostly spun up (the other shares aren't used as much), since it reduces power consumption by only having one disk spinning. Anyway, when the parity check is running it starts out like the first image, where all disks are being checked. Then, for the last 40% or so, it changes to the behaviour in the second image, where only the 8TB disk is being checked. I'm sure this is normal, but why?
  4. Ohhhhhhh. Yeah that will mess stuff up. Ouch. I'm actually really curious what the polarity/voltage differences were between the two cables to find out whether it was a reverse polarity issue or a 12v into 5v issue.
  5. I'm sorry, I'm scratching my head here a bit. You say you replaced the PSU, and before turning it on you also used the new power cables that came with the new PSU, and then it nuked everything even when using the right cables?
  6. To update this topic for those using USB serial adapters (which I assume is realistically most of us): so far I have not managed to get kernel boot and shutdown messages to show, but you can at least get the ability to log in via the USB serial port. Essentially, you don't need the "SERIAL 0 115200" line, nor the changes to the "append" line, at all to get the ability to log in via the USB serial adapter. That only requires the following addition to the /boot/config/go file:
     sed -i -e "/^#s1/ i s1:12345:respawn:/sbin/agetty -L ttyUSB0 115200 vt102" /etc/inittab
     Note that my device is ttyUSB0 and it's set to a 115200 baud rate, as I'm connecting from a Mikrotik router. This will allow you to log in via serial. What it won't do is let you see kernel messages via the serial port. There are a couple of issues with even trying to do this, including the fact that the USB driver loads quite late in the boot order, so you won't see a whole bunch of boot messages anyway.
     I am still looking into whether I can get kernel events posted to the USB serial console, but for now this should help anyone who would like to restart their unRAID server even if network connectivity gets lost due to NIC module errors or macvlan call traces that kill the network stack. You may ask "what is the point, since you can just walk up to the monitor and log in via keyboard", but if I'm away from home and my unRAID server barfs, the ability to connect to it via serial through my Mikrotik router using Serial Terminal means I can kick it in the pants even without fancy stuff like an iLO card. My Mikrotik gear is the most reliable stuff in the house, so it makes sense to use it as the pivot point (I even use this for my VPN rather than a VPN container in unRAID).
     It may ultimately be that because USB is loaded so late in the kernel boot process, anything more than basic login capability is a lost cause. I haven't given up yet, but it's very possible this is the case. For those of us who are desperate, I recommend a dedicated serial port, as a hardware UART will be initialised very early by the Linux kernel. There are a bunch of PCIe serial port cards you can buy for this purpose.
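     For anyone who wants to sanity-check the result, the entry that the sed command drops into /etc/inittab should end up looking like the line below (standard id:runlevels:action:process fields; "respawn" just means init restarts agetty whenever you log out, so the login prompt comes back):
     s1:12345:respawn:/sbin/agetty -L ttyUSB0 115200 vt102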
  7. Closed as this issue is related to my other post about the same problem with RC2. This is all caused by using a different IP address for my Shinobi Pro docker container running on a different VLAN (br0.12). Again, I think Limetech need to solve this issue as a matter of priority since it's been around since 6.5.
  8. This issue is not occurring now that I have moved Shinobi Pro to the normal bridge and sharing the same IP address as the unRAID host. I can only ask the unRAID team to work on and solve this issue since it appears to have been around since 6.5.
  9. OK I've changed Shinobi to use the normal bridge, no VLANs. Traffic between Shinobi and the cameras is now being routed. Let's see how stable it is.
  10. I will add that Shinobi is running on a separate VLAN which is a subinterface of br0 (br0.12) and that br0 is a bridge on an Intel 10Gb/s NIC using the ixgbe module.
  11. These keep cropping up on RC2. I've had a couple of call traces related to netfilter, as well as some kernel panics. Two examples are posted, along with my diagnostics file; however, note that at the time of this diag file I had stopped my Shinobi Pro docker container, which I could swear is the catalyst. I am leaving Shinobi disabled to see if unRAID stays up longer. I mark this as "Urgent" only because system stability is really poor at the moment. mu-th-ur-diagnostics-20201222-1354.zip
  12. I for one would be happy to move to subscription licensing, even if that means free upgrades for XXX years in one purchase, with a further cost if you want to upgrade after that. I get people's concern, but devs have to eat. As for the statement about not liking "forcing people to update": as an InfoSec guy, that kind of thinking drives me mad. The amount of back doors and crap I have to deal with on a daily basis due to old software is ridiculous. I just spent the last 6 months fighting hard to ensure our desktop standard going forward is cloud-first, Azure AD joined, and up-to-the-minute patched for our supported software. Your app doesn't work with Windows 10 20H2? Get the vendor to fix it, as it's not on the supported software list. What has to change is not the need to update software but the resistance to change, which also means that systems and applications need to be built so that unexpected update-related downtime is eliminated.
  13. Got this rather odd Kernel Panic from unRAID 6.9.0-rc1 that appears to be caused by nf_nat_setup which is part of netfilter. mu-th-ur-diagnostics-20201214-0924.zip
  14. I'll add some experience here with Shinobi Pro. I recently downgraded my unRAID server from an i7-6950X to a Xeon E5-2630L and thought it would be a good idea to use the hardware decoding of my Quadro P400 GPU to save CPU cycles and therefore power consumption, plus leave more headroom for other containers and VMs. Anyway, it turns out that this actually increased my power consumption, and the stability of Shinobi was a bit all over the place. I suspect it was also the cause of Shinobi consuming all of my memory (prior to me setting an 8GB limit, as discussed by others previously). I've now allocated three CPU cores (including their hyperthreads, so six virtual cores) and it's handling 5 cameras at 2560x1920, 15 FPS, at 40% CPU across those cores.
  15. This issue is solved by disabling PCI express ASPM.
  16. Never mind, I just doubled the size of my docker image.
  17. Looks like this has been discovered before on the X99 chipset, and the solution is to apply the "pcie_aspm=off" boot flag. I have set this and will monitor and update this issue over the next couple of days if I no longer see these errors.
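      For anyone wanting to do the same, the flag goes on the "append" line of the boot entry in /boot/syslinux/syslinux.cfg (editable from the flash device's Syslinux Configuration page in the webGUI); roughly like this, assuming a stock boot entry:
      label Unraid OS
        menu default
        kernel /bzimage
        append pcie_aspm=off initrd=/bzroot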
  18. Hello folks, my Shinobi Pro container is consuming 5.5GB of disk space within docker.img:
      root@unraid:/var/lib/docker/btrfs/subvolumes# docker ps -s
      CONTAINER ID   IMAGE                                       COMMAND                  CREATED             STATUS         PORTS   NAMES        SIZE
      b01cdfb65ee9   spaceinvaderone/shinobi_pro_unraid:nvidia   "/opt/shinobi/run.sh…"   About an hour ago   Up 3 seconds           shinobipro   180MB (virtual 5.51GB)
      Any idea what's gone wrong here? This is causing my docker.img disk usage alert to keep tripping, as it's now over 75%.
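      In case anyone wants to help me dig, these are the kinds of commands I can run to narrow down where the space is going (the du path is only a guess at where Shinobi might be writing inside the container):
      # per-container usage including the writable layer
      docker system df -v
      # files added or changed in the container's writable layer
      docker diff shinobipro
      # guess: check whether recordings are landing inside the container instead of a mapped volume
      docker exec shinobipro du -sh /opt/shinobi/videos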
  19. My unRAID 6.9.0-beta35 server (the beta is required as I'm running Nvidia acceleration) keeps having issues with the Intel dual-port XFP card that's installed. This card uses the ixgbe module. The behaviour is that the machine loses network connectivity: link lights stay up but the OS unloads the module. I get these errors frequently. If they self-correct it keeps going, but if it fails with a fatal error it requires a reboot. Unfortunately I only have an image of the fatal error rather than text.
  20. I can confirm that the new Nvidia drivers work for me with a Quadro P400. I followed the previous guidance and:
      • Stopped Docker services
      • Installed unRAID 6.9.0-beta35 (upgrading from 6.8.3)
      • Rebooted
      • Removed the unraid-nvidia plugin
      • Installed the Nvidia-Driver app from Community Applications
      • Waited until completely downloaded and installed. It took about 3 minutes for me and I have a 1Gb/s internet service. Make sure you WAIT and read the status window carefully.
      • Rebooted
      • Restarted the Docker services
      In my case I did not need to edit my Plex Docker container as it already had all of the necessary settings in place.
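      For anyone whose container doesn't already have them, the settings in question are roughly these in the Docker template (going from memory of the standard guidance, so double-check against the plugin's instructions):
      Extra Parameters: --runtime=nvidia
      NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   (your GPU's UUID, shown on the Nvidia Driver plugin page)
      NVIDIA_DRIVER_CAPABILITIES=all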
  21. Hi, I have the same requirement, and I've been using a Raspberry Pi running Raspbian with the avahi daemon connected to all my VLANs for this purpose. I only noticed today, during some unRAID maintenance, that the mDNS/avahi daemon seems to be part of the operating system. So I am *also* curious whether I can get this working using unRAID rather than having to maintain my Raspberry Pi.
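      For what it's worth, on the Pi the part that does the cross-VLAN work is the reflector section of /etc/avahi/avahi-daemon.conf. If unRAID's bundled avahi can be pointed at the same kind of config (I haven't confirmed that yet), it would look something like this, with the interface names being just examples:
      [server]
      allow-interfaces=br0,br0.12
      [reflector]
      enable-reflector=yes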
  22. I suppose it depends on how busy your server is, but I personally don't pin Docker containers to CPUs. My CPU is an i7-6950X though, so I also don't know its per-core processing capabilities and whether the audio transcoding impacts this CPU much. TL;DR: I haven't noticed any CPU consumption even with hardware decoding using a GPU. Also, most of my streaming is to devices that support direct audio and don't transcode, so I should probably monitor next time I'm streaming to a device like an Android phone. Having said all that, the cores on my server are mostly idle unless I'm doing actual work on the server using VMs; even 50% CPU on a few cores wouldn't get my attention.
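      If you did want to pin the container, the usual way is the CPU Pinning settings page or an Extra Parameters entry on the container; just as an example (the core numbers here are arbitrary, pick cores plus their hyperthread siblings for your own CPU):
      Extra Parameters: --cpuset-cpus=2,3,12,13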