N4TH4N

Everything posted by N4TH4N

  1. @Maticks did you get networking working on 6.7.2 in ESXi? I'm having the same issue. Older versions work fine.
  2. Wow, one year later and this is still happening. How time flies. I replaced the SAS cables, cache drive and all my array drives over the last year, and also got a new UPS. I was able to remove my RAID card 2 days ago, as I'm now down to 3x 8TB disks. It has not crashed yet since removing the RAID card, but I also removed a 4TB at exactly the same time and it's only been 2 days. Sometimes it stays up 20-30 days without a crash.
  3. Hey all, I've been struggling with what appears to be random reboots for a long time now, and I have already done quite a lot of troubleshooting, memory tests, etc. I thought it was bad drives, but I just decommissioned 6x old drives that all had errors and replaced them with 2x 8TB IronWolfs. I also replaced my 120GB cache with a 500GB 860 EVO and bought a decent 900W UPS. I took out a now-unused Ethernet card and just have onboard. I also removed a RAID expander I had, as I now have 7 drives, which can fit directly on my RAID card. All the parts I have left "should" work reliably, but I'm still getting reboots and hangs. Sometimes it will stay up 30+ days, other times I get 24 hours. I really thought once I replaced my SSD and removed all the drives that were reporting errors things might change, but nope. I have attached the latest diagnostics and FCPsyslog. Any help would be greatly appreciated. I'm afraid I'll have to keep changing parts out until some point when it stays up. If I get one more 8TB drive I can remove 3 more older disks, and at that point I can also fall back to the motherboard's SATA ports, meaning I can remove the RAID card and SAS-to-SATA cables. Then it's the motherboard/RAM/CPU, and if it still crashes after that the only thing left is the power supply, but that's a 1000W Corsair that I highly doubt is faulty. Thanks, Nathan lemon-diagnostics-20190105-2121.zip FCPsyslog_tail.txt
  4. Although it's not necessary, I would like to use different ranges for different zones in my house. I'm converting my home to a smart home and have a lot of devices including lights, cameras, power points, blinds, etc., and I like to static-assign these devices. The 10.0.0.x range I was keeping for network infrastructure: 10.0.0.1 is my pfSense, 10.0.0.2 is my smart switch, 10.0.0.10-15 are my 6 APs, etc. Then 10.0.1.x are devices in my server cupboard, 10.0.2.x are devices outside, 10.0.3.x are devices in my lounge, 10.0.4.x are devices in my studio, 10.0.5.x are devices in my kitchen, and so on. I have 12 areas (rooms) in my house and each is assigned its own range. Then I use 10.1.1.x for DHCP dynamic leases. In total I would have more than 100 devices but fewer than 254, so I get that I don't need a /8 subnet and could go about it a different way. Anyway, I ended up just manually editing the network.cfg file and rebooting (roughly as sketched below), which works fine; it would just be nice to have an unrestricted interface. If I want to use /8, I should be able to do so via the interface.
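     A minimal sketch of the /boot/config/network.cfg entries involved, assuming a static assignment and that the key names below match your Unraid version (treat the exact keys and the example address as assumptions, not a copy of the actual file):

       # /boot/config/network.cfg (assumed key names; example values only)
       USE_DHCP="no"
       IPADDR="10.0.1.5"        # example static address, following the zone scheme above
       NETMASK="255.0.0.0"      # the /8 mask the web UI does not offer
       GATEWAY="10.0.0.1"       # the pfSense box
       DNS_SERVER1="10.0.0.1"

     A reboot after editing, as described in the post, applies the new mask.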
  5. Why is there no option for a /8 subnet? I run pfSense and use a /8 subnet so I can use 10.0.0.1 - 10.255.255.254. With a /16 subnet I can only use 10.0.0.1 - 10.0.255.254. Obviously I don't need that many IP addresses; however, I want to group different zones into different ranges. Not having a /8 option limits me to C and D only, when I would really like to use B as well. Thanks.
  6. Hello, I have been getting random crashes quite a lot recently. I had an SSD cache drive that was bad, which I thought was causing the issues; after removing it the crashes became less frequent, so maybe it was part of the problem. I haven't been able to capture proper diagnostics as the WebUI, SSH and console are all unavailable. I turned on "Troubleshooting Mode" in "Fix Common Problems" and grabbed the attached items. My hardware:
     Intel i5 3470 CPU (ran Prime95 on it for a few hours; it never broke a sweat as I have water cooling)
     16GB DDR3 RAM (2x 8GB; ran Memtest86 for 24 hours without a single error)
     ASUS P8H77-M LE motherboard (updated the BIOS to the latest version)
     Corsair 1000W PSU (more than enough juice to go around)
     IBM ServeRAID M1015 SAS RAID card (flashed with LSI 9211-8i IT firmware; it has been in this configuration since before I started using unRAID bare metal, back when I ran unRAID in an ESXi VM)
     Intel RAID SAS Expander RES2SV240
     Orico PVU3-4P USB3 card (used only for a Windows VM; my boot line is "append iommu=pt vfio-pci.ids=1106:3483 initrd=/bzroot")
     I'm just not sure what to do next. I seem to wake up every couple of days to a crashed server. This morning I could ping it and got an NGINX error trying to access the WebUI; dockers would not load, SSH would not connect, and the unBALANCE plugin loaded the header but that's it. Thanks. unraid-diagnostics-20180220-2112.zip FCPsyslog_tail.txt
  7. Can report that this has solved my VM issues.
  8. It crashed whilst doing the reiserfsck on the corrupt disk. I was unable to access the WebUI, SSH, or the physical console (keyboard not responding), and there was nothing on screen except the normal login, so I was unable to capture diagnostics. I powered off the machine and removed the disk from the server. It's now powered back on, and I'll use my disk dock, which is connected to a USB card passed through to a VM, to have a look at the disk. During the crash that corrupted disk 11 I was putting quite a bit of load on that disk. It's also my oldest disk. Now that it's removed I'm hoping not to see a crash again, maybe (fingers crossed).
  9. To get it up and running I removed disk 11 from the config; there's nothing I need on that disk currently. I restarted the array and it's up and running. The cache disk is fine. I'll run some checks on that disk and add it back if I can. I had some bad crashes a couple of months ago that stopped right after I removed a bad 120GB SSD that was the second disk in the cache pool. During those crashes it corrupted 1 XFS disk, and I was in the middle of a disk recovery onto a new disk when my power went out, which corrupted 2 more XFS disks. So I ended up with 3x corrupted XFS disks. Currently I have no parity drive, as I needed the space temporarily and have not had a chance to purchase a new drive yet. I have 3 disks sitting out of the machine that I have been running various tools like UFS Explorer against to recover as many files as possible. On the 3 corrupt XFS disks I did initially run xfs_repair, and it required me to run xfs_repair -L (wish I never did), after which the disks reported much more free space than they should have and I had a huge lost+found folder on each disk. It was then I decided to pull the disks from the system and tend to them one at a time with UFS Explorer. I haven't done a single write to them since, in the hope that I'll find a method to get as much data back as possible (a read-only way to poke around is sketched below). Any other suggestions? Once I get a new drive for parity and a new UPS I'll work on getting the system stable and start replacing old disks with newer ones every month or so, when I can afford to, until I have no old disks.
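     One low-risk way to inspect a pulled XFS disk without writing to it is to mount it read-only on a Linux machine; the device name below is an assumption (check with lsblk first), and the norecovery option skips log replay so the disk stays untouched.

       # device name /dev/sdX1 is an assumption; identify the disk with lsblk first
       mkdir -p /mnt/recovery
       mount -o ro,norecovery /dev/sdX1 /mnt/recovery
       ls /mnt/recovery/lost+found                          # browse what xfs_repair -L salvaged
       cp -a /mnt/recovery/lost+found /path/to/another/disk/   # hypothetical destination; copy off before any further repairs
       umount /mnt/recovery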
  10. Hey, I'm having an issue getting my server started. Last night I upgraded from 6.3.5 to 6.4.1, and everything seemed to be going well and working smoothly until today when I woke up and noticed my server was inaccessible. I had put a new (second-hand) motherboard and CPU in my server months ago and have had a few crashes since then. I only just updated the BIOS to the latest version (it was a 2012 board still on its release version). Previously I had been running an ASUS P6T and i7 920 from 2008 (24/7 for 8+ years). The reason for the crash aside, it's now not mounting disks and starting the array. So far I have updated the BIOS to the latest version (in an attempt to help with the crashes, maybe) and booted into safe mode to ensure it's not plugins causing the issues. But I'm unsure of what's next. Any help to point me in the right direction would be appreciated. unraid-diagnostics-20180218-2344.zip
  11. Really like the 6.4.1 update. There's even a balance button for the BTRFS-formatted drive.
  12. Much appreciated, my workflow can resume. I use the VM daily to do disk backups on client machines before formatting drives and reinstalling the OS. I have a few disk docks connected to a USB3 PCIe card that's passed through to the VM. I had to revert back to a spare bare-metal machine when the issue started.
  13. root@unRAID:~# btrfs balance start -dusage=75 /mnt/cache
      Done, had to relocate 50 out of 114 chunks
      devid 1 size 111.79GiB used 85.79GiB path /dev/sdi1
      I have downloaded and installed 6.4.1, and I'm just waiting 20 minutes for a copy to finish before I reboot. Thanks for the help; I'll report back when I know the result. I do recall seeing devid at 111.79GiB used, but I just thought it meant the partition was using the whole disk. I had, and still have, no idea how a BTRFS-formatted drive works. Is there anything else I'll need to do?
  14. It paused again, sometime since my last post. unraid-diagnostics-20180217-2217.zip
  15. Thanks, I'll try to get it to do it again and note the time so it's easier to pinpoint in the diagnostics. I have 47GB free on the host disk.
  16. [Solved]: The issue was that my BTRFS-formatted cache drive was fully allocated even though it was reporting 47GB of free space (checking for this is sketched below). The solution was to run a balance on the cache drive and then update to version 6.4 or newer; I was on 6.3.5, which had problems with BTRFS drives.
      btrfs balance start -dusage=75 /mnt/cache
      Hey, I got sick of Windows 10, which I had previously been running as a VM, so I removed it and created a new Windows 7 VM from scratch. Since then I've had these issues: sometimes it will stay running for 12+ hours, sometimes 5 minutes, and then it will pause. I have been running VMs on unRAID for a LONG time and have never come across this issue before. Before I spend time reinstalling the VM from scratch, is there anything else I can try? I'm only running 1 VM (Windows 7) plus some dockers. I have allocated 2 of my 4 cores to the VM. I have allocated 8GB memory (16GB total) to the VM (79% used with everything running under load). I have the VM on a 120GB cache drive (47GB free). I have configured Windows to never sleep or turn off the screen. I have disabled hibernation. Thanks in advance for your help.
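     A quick way to spot the fully-allocated condition, assuming the cache pool is mounted at /mnt/cache as above: compare how much space is allocated to chunks with how full those chunks actually are.

       # how much of each device is allocated to chunks (size vs used per devid)
       btrfs filesystem show /mnt/cache
       # how full the allocated chunks are; lots of allocated-but-unused data space means a balance will help
       btrfs filesystem df /mnt/cache
       # compact data chunks that are less than 75% full, as in the fix above
       btrfs balance start -dusage=75 /mnt/cache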
  17. Hey, I had a power failure, which meant I needed to do a parity check. For some reason my system crashed a few hours into the parity check, and upon restarting, 3 of my 13 array drives were un-mountable. All the corrupt drives were XFS; 9 drives are ReiserFS (RFS) and 4 are XFS, so I'm just wondering if XFS is more fragile than RFS. I was missing around 1.5TB of data over the 3 drives after doing an xfs_repair (roughly the sequence sketched below) and am now running a recovery through UFS Explorer to try to get the data back. I think it has found most of it, but the file structure/names are never coming back. Although this loss is devastating, as I may not get everything back, it's my own fault for trusting parity. After recovering what I can and organising what was saved, I'm going to back up everything that's irreplaceable offsite. I think I'll also add a UPS to my setup.
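     For reference, a minimal sketch of the usual xfs_repair sequence on an array disk in maintenance mode; the /dev/md3 device is an assumed example, and -L is the destructive last resort that produces the lost+found situation described in the other posts.

       # with the array started in maintenance mode, dry-run first (no changes made)
       xfs_repair -n /dev/md3
       # normal repair attempt
       xfs_repair /dev/md3
       # only if it refuses to run without zeroing the log, and you accept possible data loss:
       # xfs_repair -L /dev/md3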
  18. Thanks, I'll just have to swap the parity to the 4TB and then again later on to the 6TB.
  19. Hey, I'm just wondering if it's possible to use a 4TB HDD with only a 3TB parity drive. I'm happy to have only 3TB of the 4TB drive available until I upgrade the parity drive to a 6TB. I plan on swapping the 3TB parity to a 6TB but have a 4TB sitting here, and I don't really want to use the 4TB as the parity right now as I'll be swapping to a 6TB soon. So I was just wondering what will happen if I simply put the 4TB into the array with only a 3TB parity. Thanks.
  20. Just wondering if it's possible to change the theme for Dolphin.
  21. Hey, since upgrading to unRAID 6 I have had various lockups when managing Docker containers and VMs on 3 separate machines, which have caused the web UI to stop functioning. I still have SSH access to the servers and was wondering if there is a script or command available that will reboot the server without having to physically press the power button (something like the sketch below). One server is at my house, one is 1200km away and the other is 10km away. When I'm doing things to them at 3am and one stops responding, I can't ring my friends to reboot them until the morning, which is pretty inconvenient. Thanks.
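     A minimal sketch of what can be run over SSH when the web UI is hung but the shell still responds; the powerdown helper path and flag are assumptions (they come from the Powerdown plugin and vary by version), and the plain reboot is the fallback.

       # try a clean shutdown/reboot first if the powerdown helper is installed (path/flag are assumptions)
       /usr/local/sbin/powerdown -r
       # otherwise fall back to a standard reboot; the array will not be stopped cleanly,
       # so expect a parity check on the next boot
       reboot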
  22. Hey guys, I just pulled the card out of my unRAID system and put it into another machine to do a driver test, which went well, so the card is in fact functional and works correctly with the drivers I was supplying it with. The problem only happens when passing it through via KVM. It's a "device cannot start" error. Any suggestions as to what I'm doing wrong?
  23. After checking dmesg and finding this error present:
      vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
      I added this to my /boot/syslinux.cfg file, after "append" and before "bzroot" (the resulting boot stanza is sketched below):
      intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1
      Then rebooted. With the following code in my XML and allow_unsafe_interrupts in the syslinux.cfg, I managed to get my VM to boot with my PCIe card.
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev0'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
        </source>
        <alias name='hostdev1'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
      </hostdev>
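      For context, a sketch of what the full boot stanza might look like after that change; the surrounding lines are the stock ones from a default syslinux.cfg, and the exact contents will differ per system, so treat this as an assumed example rather than the actual file.

        label unRAID OS
          menu default
          kernel /bzimage
          append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot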
  24. After physically swapping the ports I got past the other error about grouping, but now get the following error.
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
      </hostdev>
      internal error: early end of file from monitor: possible problem:
      2016-02-16T04:40:43.996264Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x7: vfio: failed to set iommu for container: Operation not permitted
      2016-02-16T04:40:43.996304Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x7: vfio: failed to setup container for group 10
      2016-02-16T04:40:43.996312Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x7: vfio: failed to get group 10
      2016-02-16T04:40:43.996324Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x7: Device initialization failed
      2016-02-16T04:40:43.996335Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x7: Device 'vfio-pci' could not be initialized
  25. I have a 2x USB 3.0 card in a PCIe slot that's assigned to a different group. I was hoping to forward this card to a VM as well, but if it means I can use my eSATA card instead, I'll sacrifice it (the group layout can be dumped as sketched below).
      01:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
      /sys/kernel/iommu_groups/10/devices/0000:01:00.0
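      A small generic shell loop (not from the original posts) for listing every IOMMU group and the PCI devices inside it, which makes it easy to see what a card shares a group with before passing it through.

        #!/bin/bash
        # list each IOMMU group and the PCI devices it contains
        for grp in /sys/kernel/iommu_groups/*/; do
          echo "IOMMU group $(basename "$grp")"
          for dev in "$grp"devices/*; do
            lspci -nns "$(basename "$dev")"
          done
        done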