klamath

Everything posted by klamath

  1. nfs4

     There is no way to enable an NFSv4 server in unraid; I'm almost positive that is a typo or misprint.
  2. I submitted a feature request here for an alternative NFS server in unraid.
  3. The NFS server in unraid is still using v3 and has bugs in the recent release. I think offering an alternative to the built-in NFS server would be good for people seeking an NFS server with more options enabled. https://github.com/nfs-ganesha/nfs-ganesha/wiki Thanks, Tim
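     For reference, an export in Ganesha is just a small config block; a rough sketch of an NFSv4-only export (the path and ID are placeholders, nothing unraid ships with):
       EXPORT {
           Export_Id = 1;               # must be unique per export
           Path = /mnt/user/media;      # hypothetical share path
           Pseudo = /media;             # where it shows up in the NFSv4 pseudo-filesystem
           Access_Type = RW;
           Protocols = 4;               # NFSv4 only
           FSAL { Name = VFS; }         # plain VFS backend
       }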
  4. I have not; once this thread reports back A-OK I will upgrade.
  5. Having the same issue with NFS on my Dell T130 server; it forced me to downgrade after the first NFS crash. Seems like core functionality testing was skipped in the 6.6 release, as NFS/SMB are core components of a NAS, not fluff like docker and kvm or a GUI that looks pretty on a cell phone.
  6. I want my array drives for WORM media; the freenas server is a better fit for a random IO workload. I also want redundancy and don't want to waste drive bays on btrfs RAID 1s. The cache drive I already have installed is only 1TB, and my plex server is almost 600GB with all the media indexing it has been doing. I used NFS before with mixed results; iSCSI would be best, but I'm pretty sure that isn't built into the kernel at all.
  7. I converted my Norco case into a JBOD; I had a Supermicro motherboard with an X5570 CPU inside the Norco case to begin with. I just bought a Dell T130 with a Xeon 1270 v5 and went from 65MB/s parity check speeds to 100-150MB/s with the default tune, no modifications to the config at all. The system did a parity check start to finish without any hiccups on the network with NFS; no client reported any timeouts, and plex (a major subscriber to NFS) didn't register any issues during the parity check. @Drewster727 In my case I installed nmon and saw the unraidd process logged most of its time in system wait during a parity check. Another interesting thing I noticed between systems is that my SAS card's interrupts are now spread across all cores, versus the old system having all interrupts assigned to core 0 (a quick way to check that is below). Hope this helps a little bit! Tim
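     Something along these lines shows the spread; the driver name pattern (mpt2sas/mpt3sas here) depends on your card:
       # Show per-CPU interrupt counts for the SAS HBA
       grep -E 'CPU|mpt2sas|mpt3sas' /proc/interrupts
       # Watch the distribution change live during a parity check
       watch -n 5 "grep -E 'mpt2sas|mpt3sas' /proc/interrupts"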
  8. Not sure I follow; I'm looking to have the OS/Plex DB off the array on an external freenas server, with CPU and memory used on the unraid server.
  9. Howdy, Thinking of migrating my vmware instance of Plex over to KVM on unraid, wondering if there are any backend storage options besides NFS v3. Thanks, Tim
  10. Like night and day; under parity check the CPU is at 40% utilization at 600GB checked, steady so far.
  11. Still no dice; the server goes unresponsive once I hit the 4TB barrier. A T130 with a Xeon 1270 has been ordered.
  12. So, thinking out loud: if CPU load from the number of drives were the issue, NFS should stop responding while checking all 28 drives, not just the 5 HE8 drives. So I'm thinking the tunable testing script might not be factoring the entire run into its recommendation. Speeds jump to 100+MB/s once the system is only checking the HE8 drives. Think returning the values to default will help? The only system that looks like it would work is the Dell T130; it fits the price point well and gives good CPU numbers. Tim
  13. If I can evacuate all 4 drives at once, can I use the "Clear Drive Then Remove Drive" method, or do I need to remove each drive one by one? Tim
  14. You think reducing the array by 4 drives might get me out of having to buy a new head?
  15. Using a Xeon 5560; I'm not sure of its PassMark score, but googling shows around 5400. Is there a reason why everything is responsive when the check is running across all 28 drives, but things become unresponsive when checking the 8TB drives?
  16. Howdy, I have been battling this for a while. Over the holidays I accidentally fubared my super.dat while rolling back from 6.3 to 6.2 (don't computer before coffee). I was in the process of upgrading parity and a few data disks to new HE8 drives. After getting everything resettled and working I started to notice weird link speeds with the HP SAS expander; randomly it would show link speeds of 1.5 on some drives. After replacing the SAS expander with a new Intel one, all drive speeds now report correctly. This last week I added another Intel SAS expander and replaced all SAS cables and the 9211-8i card, and my speeds doubled. So far so good: when running parity checks I am averaging 81MB/s with 28 drives. Everything works, however I'm seeing the unraidd process using 100% of CPU0, seemingly all in system wait. Things appear to be working fine, but when the parity check crosses over to the 8TB drives, NFS and SMB stop responding, and I see errors on NFS clients reporting timeouts waiting for the server to respond. I have been trying to figure out why the 8TB drives are causing issues.
     - All interrupts for the 9211 are on CPU0. I changed the affinity of that IRQ to allow scheduling on all cores; it doesn't seem to help according to /proc/interrupts.
     - IRQ issues? lspci -v shows my USB controller, 9211, and 10Gb NIC all sharing IRQ 10, however /proc/interrupts shows them on different interrupts.
     - Same issues seen in the 6.2 and 6.4 releases.
     - nr_requests is set at the default 128, however increasing it to 512 seems to give a speed increase. The max queue depth for the SAS2008 is 3200; should those match per port?
     Drives all check out, no pending sectors or other indicators of an issue, and nothing shows up in syslog during these slowdowns. Running out of ideas at this point as to why unraidd is taking up so much CPU during a parity check and whether that has anything to do with NFS going unresponsive. Once the parity check is canceled, all file sharing services start responding as normal. Any help would be appreciated, Tim orion-diagnostics-20180128-1331.zip
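     For anyone curious, these are roughly the knobs described above; the IRQ number and cpumask are placeholders, check /proc/interrupts for your own HBA:
       # Allow the HBA IRQ on every CPU (hex cpumask; "ff" = CPUs 0-7). IRQ 16 is only an example.
       echo ff > /proc/irq/16/smp_affinity
       # Bump nr_requests from the default 128 to 512 for each array disk
       for q in /sys/block/sd*/queue/nr_requests; do echo 512 > "$q"; done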
  17. Howdy Yall, I ran into an interesting issue today. I started seeing some parity errors with my unraid server out of the blue over the weekend:
     [Sat Jan 13 16:28:10 2018] md: recovery thread: P incorrect, sector=1953547096
     [Sat Jan 13 16:28:14 2018] md: recovery thread: PQ incorrect, sector=1954021656
     [Sat Jan 13 16:28:17 2018] md: recovery thread: Q incorrect, sector=1954407368
     [Sat Jan 13 16:28:18 2018] md: recovery thread: PQ incorrect, sector=1954528768
     [Sat Jan 13 16:29:32 2018] md: recovery thread: Q incorrect, sector=1963910384
     I started looking into the issue today and noticed that my link speeds are all over the place. I did some diagnosis, and on some reboots I get the expected 3GB link speeds, but most times I see a mix of speeds. I have replaced all SAS cables going to the backplane and from the SAS expander to the SAS2008 card. I have bypassed the SAS expander and plugged the backplane directly into the SAS cards, and they all link up at 6GB. I ordered a new Intel SAS expander since that is the last component I haven't replaced yet. An interesting thing I noticed is that on the same backplane, 3 drives link up at 3GB while the last one links up at 1.5GB. Anyone run into this before?
     * All SAS2008 and HP expander running current FW
     * Reset BIOS settings
     * Drained "flea power"
     Thanks!
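     In case it saves anyone a reboot: a rough way to read the negotiated link rate per phy straight from sysfs, assuming the kernel's SAS transport class exposes it:
       # Dump the negotiated link rate for every SAS phy the kernel knows about
       grep -H . /sys/class/sas_phy/phy-*/negotiated_linkrate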
  18. Nope, you can try, but there are a lot of libraries missing because unraid is a stripped-down version of linux.
  19. I'm going to assume you're using Windows: download PuTTY and SSH to the IP of the unraid server; the username is 'root'. Edit the go file: nano /boot/config/go, do the needful, and save (ctrl+x).
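     If nano isn't your thing, a line can also be appended from the shell; for example, using the ethtool line from my go file further down (adjust for your NIC):
       # Append a tuning line to the go file so it runs on every boot
       echo 'ethtool -G eth0 rx 8192 tx 8192' >> /boot/config/go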
  20. I'd recommend running iperf to test line speed; Samba and unraid add some overhead. https://elkano.org/blog/testing-10g-network-iperf/ Tim
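     For example, with iperf3 (the server IP below is just a placeholder):
       # On the unraid box, start the server side
       iperf3 -s
       # On the client, run 4 parallel streams for 30 seconds
       iperf3 -c 192.168.1.10 -P 4 -t 30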
  21. Try some of this; I have this in my go file, edit as needed!
     ethtool -G eth0 rx 8192 tx 8192;ifconfig eth0 down;sleep 1;ifconfig eth0 up
     route add default gw 192.168.1.1
     sysctl -w net.nf_conntrack_max=700000
     sysctl -w net.ipv4.tcp_timestamps=0
     sysctl -w net.core.netdev_max_backlog=250000
     sysctl -w net.ipv4.tcp_sack=1
     sysctl -w net.core.rmem_max=4194304
     sysctl -w net.core.wmem_max=4194304
     sysctl -w net.core.rmem_default=4194304
     sysctl -w net.core.wmem_default=4194304
     sysctl -w net.core.optmem_max=4194304
     sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
     sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
     sysctl -w net.ipv4.tcp_low_latency=1
     sysctl -w net.ipv4.tcp_adv_win_scale=1
     BTW, transferring to the cache drive I assume? Tim
  22. not sure, docker/containers are for losers, real IT people run VMs.
  23. shouldn't make a difference either way.
  24. The shares should be visible during a parity check....