N4TH4N

Members
  • Posts: 58

  • Gender: Male
  • Location: Gold Coast, Australia

N4TH4N's Achievements

Rookie (2/14)

Reputation: 2

  1. @Maticks did you get networking working on 6.7.2 in ESXi? I'm having the same issue. Older versions work fine.
  2. WOW! One year later and this is still happening. How time flies. I replaced the SAS cables, cache drive, and all my array drives over the last year. Also got a new UPS. I was able to remove my RAID card 2 days ago as I'm now down to 3x 8TB disks. It has not crashed yet since removing the RAID card, but I also removed a 4TB at exactly the same time and it's only been 2 days. Sometimes it stays up 20-30 days without a crash.
  3. Hey all, I've been struggling with what appear to be random reboots for a long time now. I have already done quite a lot of troubleshooting, memory tests, etc.; it's been ongoing for a long time. I thought it was bad drives, but I just decommissioned 6x old drives that all had errors and replaced them with 2x 8TB IronWolfs. I also replaced my 120GB cache with a 500GB 860 EVO and bought a decent 900W UPS. I took out a now-unused ethernet card and just have onboard. I also removed a RAID expander I had, as I have 7 drives now, which can fit directly on my RAID card. All the parts I have left "should" work reliably.

     But I'm still getting reboots and hangs. Sometimes it will stay up 30+ days, other times I get 24 hours. I really thought once I replaced my SSD and removed all the drives that were reporting errors things might change, but nope. I have attached the latest diagnostics and FCPsyslog. Any help would be greatly appreciated. I'm afraid I'll have to keep swapping parts out until some point when it stays up. If I get one more 8TB drive I can remove 3 more older disks and at that point fall back to the motherboard's SATA ports, meaning I can remove the RAID card and SAS-to-SATA cables. Then it's the motherboard/RAM/CPU. If it still crashes after that, the only thing left is the power supply, but that's a 1000W Corsair that I highly doubt is faulty.

     Thanks, Nathan

     lemon-diagnostics-20190105-2121.zip FCPsyslog_tail.txt
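     One way to make sure the next crash leaves evidence behind is to keep the live syslog mirrored onto the flash drive, since the in-RAM log is lost on a hard reset. A rough sketch, assuming the usual /boot flash mount; the directory and filename are only examples:

       mkdir -p /boot/logs
       # keep appending the live syslog to flash; the file survives a hard reset
       nohup tail -f /var/log/syslog >> /boot/logs/syslog_persist.txt &

     (Constant writes wear the flash, so this is for a debugging window rather than a permanent setup.)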
  4. Although it's not necessary, I would like to use different ranges for different zones in my house. I'm converting my home to a smart home and have a lot of devices, including lights, cameras, power points, blinds, etc. I like to statically assign these devices:

     10.0.0.x - network infrastructure (10.0.0.1 is my pfSense, 10.0.0.2 is my smart switch, 10.0.0.10-15 are my 6 APs, etc.)
     10.0.1.x - devices in my server cupboard
     10.0.2.x - devices outside
     10.0.3.x - devices in my lounge
     10.0.4.x - devices in my studio
     10.0.5.x - devices in my kitchen

     ...and so on. I have 12 areas (rooms) in my house and each is assigned its own range. Then I use 10.1.1.x for DHCP dynamic leases. In total I would have more than 100 devices but fewer than 254, so I get that I don't need a /8 subnet and could go about it a different way. Anyway, I ended up just manually editing the network.cfg file and rebooting (sketch below). Works fine; it just would be nice to have an unrestricted interface. If I want to use /8, I should be able to via the interface.
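     The edit was along these lines. A sketch only: the exact keys in /boot/config/network.cfg can differ between unRAID versions, and the address shown is just an example from the server-cupboard range:

       # /boot/config/network.cfg -- the static /8 the GUI wouldn't accept
       USE_DHCP="no"
       IPADDR="10.0.1.10"     # example host in the 10.0.1.x range
       NETMASK="255.0.0.0"    # /8
       GATEWAY="10.0.0.1"     # pfSense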
  5. Why is there no option for a /8 subnet? I run pfSense and use a /8 subnet so I can use 10.0.0.1 - 10.255.255.254. With a /16 subnet I can only use 10.0.0.1 - 10.0.255.254. Obviously I don't need that many IP addresses; however, I want to group different zones into different ranges. Not having a /8 option limits me to varying the C and D octets only, when I would really like to use the B octet as well. Thanks.
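     For reference, the usable host ranges on the 10.x network at each prefix length:

       /8   netmask 255.0.0.0      10.0.0.1 - 10.255.255.254   (16,777,214 hosts)
       /16  netmask 255.255.0.0    10.0.0.1 - 10.0.255.254     (65,534 hosts)
       /24  netmask 255.255.255.0  10.0.0.1 - 10.0.0.254       (254 hosts)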
  6. Hello, I have been getting random crashes quite a lot recently. I had an SSD cache drive that was bad, which I thought was causing the issues; after removing it the crashes became less frequent, so maybe it was part of the problem. I haven't been able to capture proper diagnostics, as the WebUI, SSH, and console are all unavailable. I turned on "Troubleshooting Mode" in "Fix Common Problems" and grabbed the attached items.

     Intel i5 3470 CPU (ran prime95 on the CPU for a few hours; it never broke a sweat, as I have water cooling)
     16GB DDR3 RAM (2x 8GB; ran Memtest86 for 24 hours without a single error)
     ASUS P8H77-M LE motherboard (updated the BIOS to the latest version)
     Corsair 1000W PSU (more than enough juice to go around)
     IBM ServeRAID M1015 SAS RAID card (flashed with LSI 9211-8i IT firmware; this has been in this configuration since before I started using unRAID bare metal, back when I ran unRAID in an ESXi VM)
     Intel RAID SAS Expander RES2SV240
     Orico PVU3-4P USB3 card (used only for a Windows VM; my boot line is "append iommu=pt vfio-pci.ids=1106:3483 initrd=/bzroot")

     Just not sure what to do next. I seem to wake up every couple of days to a crashed server. This morning I could ping it but got an NGINX error trying to access the WebUI; dockers would not load, SSH would not connect, and the unBALANCE plugin loaded the header but that's it. Thanks. unraid-diagnostics-20180220-2112.zip FCPsyslog_tail.txt
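     One thing worth ruling out is the passthrough card itself: confirming the Orico card is actually bound to vfio-pci rather than the host's xhci_hcd driver. A sketch, using the device ID from the boot line above:

       # show which kernel driver currently claims the card (ID 1106:3483)
       lspci -nnk | grep -A 3 '1106:3483'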
  7. Can report that this has solved my VM issues.
  8. Crashed whilst doing the reiserfsck on the corrupt disk. I was unable to access the WebUI, SSH, or the physical console (keyboard not responding), and there was nothing onscreen except the normal login, so I was unable to capture diagnostics. I powered off the machine and removed the disk from the server. It's now powered back on, and I'll use my disk dock (connected to a USB card that's passed through to a VM) to have a look at the disk. During the crash that corrupted disk 11, I was putting quite a bit of load on that disk. It's also my oldest disk. Now that it's removed I'm hoping not to see a crash again, maybe (fingers crossed).
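     The plan for the dock is a read-only pass first, only escalating if it asks for it. A sketch; the device name is a placeholder, since it depends on where the dock enumerates:

       # read-only check; reiserfsck reports whether --fix-fixable (or worse) is needed
       reiserfsck --check /dev/sdX1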
  9. To get it up and running I removed disk 11 from the config; there's nothing I need on that disk currently. I restarted the array and it's up and running. The cache disk is fine. I'll run some checks on that disk and add it back if I can.

     I had some bad crashes a couple of months ago that stopped right after I removed a bad 120GB SSD that was the second disk in the cache pool. During those crashes it corrupted 1 XFS disk, and I was in the middle of a disk recovery onto a new disk when my power went out, which corrupted 2 more XFS disks. So I ended up with 3x corrupted XFS disks. Currently I have no parity drive, as I needed the space temporarily and have not had a chance to purchase a new drive yet. I have 3 disks sitting out of the machine on which I have been trying various tools like UFS Explorer to recover as many files as possible. On the 3 corrupt XFS disks I did initially run xfs_repair, and it required me to run xfs_repair -L (wish I never did), after which the disks reported much more free space than they should have and I had a huge lost+found folder on each disk. It was then I decided to pull the disks from the system and tend to them one at a time with UFS Explorer. I haven't done a single write to them since, in the hope that I'll find a method to get as much data back as possible. Any other suggestions?

     Once I get a new drive for parity and a new UPS, I'll work on getting the system stable and start replacing old disks with newer ones every month or so, as I can afford to, until I have no old disks.
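     For anyone who lands in the same spot: the safer xfs_repair sequence is a dry run before letting it write anything. A sketch, using unRAID's /dev/mdN array devices with disk 11 as the example:

       # -n reports problems without changing anything on disk
       xfs_repair -n /dev/md11
       # if it complains about a dirty log, mounting and cleanly unmounting the
       # disk once replays the journal; -L (zeroing the log) is the last resort
       # and is what produced the huge lost+found folders here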
  10. Hey, I'm having an issue getting my server started. Last night I upgraded from 6.3.5 to 6.4.1; everything seemed to be going well and working smoothly until today, when I woke up and noticed my server was inaccessible. I had put a new (second-hand) motherboard and CPU in my server months ago and have had a few crashes since then. I only just updated the BIOS to the latest version (it was a 2012 board still on its release BIOS). Previously I had been running an ASUS P6T and i7 920 from 2008 (24/7 for 8+ years). The reason it crashed aside, it's now not mounting disks or starting the array. So far I have updated the BIOS to the latest version (in an attempt to help with the crashes, maybe) and booted into safe mode to ensure it's not plugins causing the issues. But I'm unsure what's next. Any help to point me in the right direction would be appreciated. unraid-diagnostics-20180218-2344.zip
  11. Really like the 6.4.1 update. There's even a balance button for the BTRFS-formatted drive.
  12. Much appreciated. My workflow can resume. I use the VM daily to do disk backups on client machines before formatting drives and reinstalling the OS. I have a few disk docks connected to a USB3 PCIe card that's passed through to the VM. I had to revert to a spare bare-metal machine when the issue started.
  13. root@unRAID:~# btrfs balance start -dusage=75 /mnt/cache
      Done, had to relocate 50 out of 114 chunks

      devid 1 size 111.79GiB used 85.79GiB path /dev/sdi1

      I have downloaded and installed 6.4.1 and am just waiting 20 mins for a copy to finish, then I'll reboot. Thanks for the help; I'll report back when I know the result. I do recall seeing devid at 111.79GiB used but just thought it meant the partition was using the whole disk. I had, and still have, no idea how a BTRFS-formatted drive works. Is there anything else I'll need to do?
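      A quick way to keep an eye on the allocation afterwards, without running another balance; a sketch:

        # shows space allocated to data/metadata chunks vs space used within them
        btrfs filesystem df /mnt/cache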
  14. It paused again, sometime since my last post. unraid-diagnostics-20180217-2217.zip
  15. Thanks, I'll try to get it to do it again and note the time so it's easier to pinpoint in the diagnostics. I have 47GB free on the host disk.