lbosley

Members
  • Posts: 39
  • Gender: Male

  1. Probably just a typo, but your /21 mask needs to be 255.255.248.0, not 255.255.248.9. Did you reboot the server after the network.cfg change? I'm not sure a file edit applies your change without a network restart. Or try ifconfig to set the IP and mask on the fly. Obviously, make sure your VPN has a route to the unRaid network. And although the ping itself will probably fail, does a ping of the unRaid host at least resolve to its correct IP (DNS)?
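     For example, something along these lines (eth0 and the address are placeholders for your actual NIC and a free IP on that /21) would apply the settings on the fly until the next reboot:
       ifconfig eth0 192.168.8.10 netmask 255.255.248.0 up   # temporary; network.cfg still controls the next boot
       ping -c 4 tower                                       # 'tower' is the default unRAID hostname; check which IP it resolves to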
  2. One thought about testing without the router connected: if the PC uses DHCP for addressing and the router is the DHCP server, you may not be able to talk to anything once the router is out of the picture. You might have to assign a static address to test like this. If it's not too hard to connect the PC to the switch in the basement, it might be best to do so and leave everything else in place.
  3. I assume the router in use is a WiFi router? If so, try connecting your PC directly to the wall jack (where the WiFi router is connected). It might be a quick and easy way to test around the router device.
  4. I have a friend who complained about similar speeds with his gigabit network. Last week he finally replaced the switch and is now seeing gigabit speeds (100MB/sec+). Drops are not uncommon in congested networks, but your counters are pretty high - especially if you've been restarting while troubleshooting this issue. Yes, if you connect directly to the switch you will remove the router and most of the cabling from your test. Good idea.
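     If you want to measure the raw network path separately from the disks, an iperf3 run is an easy way (this assumes iperf3 is available on both ends; it may need to be installed, and the IP below is a placeholder):
       iperf3 -s                      # on the unRAID console
       iperf3 -c 192.168.1.100 -t 30  # on the Windows client, pointed at the unRAID IP
     Anything well under ~940 Mbits/sec on a gigabit link points at the network rather than the array.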
  5. Did you notice the receive errors and drops in your Ethernet stats? I think I would start with that issue. Local speed to your cache drive looks fine. Maybe you are having problems with the switch connecting unRAID and your Windows system? Bad NIC?
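     You can watch whether those counters are still climbing with something like this (eth0 is a placeholder for the actual interface or bond member):
       ip -s link show eth0                    # RX errors / dropped counters
       ethtool -S eth0 | grep -iE 'err|drop'   # per-NIC statistics, if the driver exposes them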
  6. And regarding your testing of two client PCs, this is another part of the trunking issue. The trunk decides which physical port to use, based on a hash of the source and destination addresses. So very likely both of your test PCs have landed on the same physical connection. You might be able to change an IP address to see if it changes the link usage behavior.
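     As a rough illustration, a typical layer-2 LAG hash boils down to roughly (src MAC XOR dst MAC) modulo the number of member links, so with two links:
       echo $(( (0x5e ^ 0x3a) % 2 ))   # example MAC last bytes -> link 0; two clients can easily land on the same link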
  7. To achieve a faster point-to-point connection you need a faster pipe -> 10Gb. Just keep in mind that you will need this connectivity all the way through the client-to-host chain. Otherwise, any slower links in between (including your switch's capabilities) could easily make your 10Gb investment a significant waste of hardware dollars.
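     A quick sanity check of what each hop actually negotiated (the interface name is a placeholder):
       ethtool eth0 | grep -E 'Speed|Duplex'   # look for 10000Mb/s at every link in the chain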
  8. My configuration was pretty simple: unRaid host set to Bonding: yes, mode balance-rr (0); network switch ports configured for trunking, admin mode static (I don't believe dynamic mode would negotiate). The link comes up bonded, and the network switch shows one interface primarily in use. From what I read, your configuration sounds like it is working. 50% of a 2Gb link = 1Gb throughput; you will not exceed the speed of a member link (1Gb) between these hosts. Your unRAID network statistics are showing this as well. Google 2Gb trunk throughput for more information. The bonded interface is functioning properly.
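     If you want to confirm what the bond actually came up as, this shows the mode and each member link (bond0 is the usual default name):
       cat /proc/net/bonding/bond0 | grep -E 'Bonding Mode|Slave Interface|Speed'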
  9. I was able to get my unRAID host trunked (LACP round robin) but also gave it up. First of all, if you are expecting to see 2Gb speeds end-to-end, forget it. The LACP algorithm decides which physical port to use based on MAC or IP addressing, and transmissions will pin to a single gigabit link and stay there. If you have multiple hosts connecting, this might be advantageous (think more lanes on the highway rather than a faster speed limit). From my experience, in a single client-to-server connection you will not see an increase in throughput. In fact, my testing showed file copy performance slightly degraded in a trunking configuration. I was using an HP switch and trunking on both the client and server ends of the connection. Trunking provides link redundancy, but not much more IMO.
  10. Fantastic tool!!! Thank you for your efforts, John. One thought came to mind when I was running this benchmark. I wonder if somehow the tool could add a function to identify drives on a particular controller and test the associated disks simultaneously to show the overall throughput of a given controller/bus? If the card and slot are not limiting factors one would expect the graph to show the exact same arc from outer to inner tracks - indicating that the drives were the only limiting factor. But just like in the tunables and parity checks you might find a different ceiling in the combined testing. I ran into this when I moved one of my PCIe 3.0 x8 controllers into a v2.0 slot (4-lanes) and noticed the parity check speed cut by more than a third. I'm betting a single drive performance test like this wouldn't have shown this difference. Might be interesting to see. Anyway, very well done, and thank you. I was just talking to a friend last week about "wouldn't it be cool if someone had a disk benchmark tool..."
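      Something along these lines (device names are placeholders, and hdparm -t is only a crude sequential read) approximates the combined test I have in mind, hitting every disk on one controller at once:
        for d in sdb sdc sdd sde; do hdparm -t /dev/$d & done; wait
      If the combined results fall well short of the sum of the individual runs, the controller or slot is the ceiling.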
  11. I guess I figured you were primarily just looking at unRAID storage. From my research, the SuperMicro X10SRA-F MB is about as good as it gets for connecting a large storage subsystem. I know there are some X99 enthusiasts out there also, but the SM boards are a better fit for servers, IMO. I only have a PCIe 2.0 slot available in my current system, and I noticed that when I move one of my SAS controllers into this slot the performance drops off noticeably. Good excuse to upgrade. I bought 16GB (2x 8GB) of MEM-DR480L-SL02-ER21 DDR4-2400 ECC RAM from an Amazon seller. My needs don't call for much RAM; I run Plex and some plug-ins, and not much else is planned right now. Hopefully it arrives before the weekend, but I kind of doubt it. The CPU will more than double the power of my current i3 chip.
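      To see what link a controller actually negotiates in a given slot (the bus address is a placeholder; pull it from plain lspci first):
        lspci | grep -i 'sas\|raid'                       # find the controller's bus address
        lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'    # capable vs. negotiated PCIe speed and width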
  12. Adrian, I just ordered the same motherboard with the E5-2620 v4 chip. Even though the v3 clock speed is higher, Passmark scores the v4 (8-core) processor about 14% higher than the v3, and it costs the same. I purchased ECC buffered memory from the SuperMicro approved list - Samsung DDR4-2400. With plenty of PCIe lanes, ports, and slots, your system should be a beast for unRAID expansion possibilities. Good luck on your build.
  13. I was one of the two people from the earlier thread. I think you recommended the vm.dirty settings. I only recently enabled VMs and Dockers - within the past couple of weeks. And yes, Cache_Dirs is only set to cache the Movies directory. I also now believe this is not actually related to directory caching. In my mind, the test I ran this evening points in another direction. I could repeatedly complete a find command of the entire Movies directory in a matter of seconds without any spin-ups, which tells me the directory is cached in memory. Yet when I started the Mover process, an extra disk spun up. As I stated, I repeated this test with cache_dirs running, and with it (and pretty much everything else) uninstalled. Maybe I have a weird hardware problem with one of my HBAs where activity on one is triggering something on the other? It was interesting that I was finally able to reproduce this behavior. I tried it again just now and got the same result - disk 1 spins up with a handful of reads and writes as the Mover sends a couple of folders to disk 18. But this time, when I initially wrote the files to the cache drive, a couple of array disks also spun up.
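      For reference, the test amounts to something like this (the device name is just an example; check each array disk the same way):
        time find /mnt/user/Movies > /dev/null   # ~4 seconds once the directory entries are cached
        hdparm -C /dev/sdb                       # reports standby vs. active/idle for a drive
      Then start the Mover and watch which drives leave standby.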
  14. I'm guessing everyone is just shrugging their shoulders on this one. Can't blame you, but I think there is a bug in the software, or I have something behaving oddly in my hardware. This evening I updated the firmware on my HBAs. The LSI controller firmware was fairly old; the SuperMicro controller was up to date. I tested again by starting a find command for the entire shared folder to build up the directory cache. Then I spun down all disks and created a couple of folders and files. I continued to repeat the find command just to see if anything would spin up. Only the cache disk spun up during the write operations, and they finished fine. Then I started the Mover from the GUI. I watched the expected drive (disk 18) start up. Seconds later the parity disk spun up and all appeared normal. But then a few seconds later disk 1 spun up. Statistics showed the system had 1 read and 3 writes to disk 1. I then shut down my only Docker (Plex) and removed every installed plug-in to repeat this test. Once again the system spun up disk 1 when the Mover rsync'ed the files from my cache disk to disk 18. There is no new file or any indication of any folder or file being modified on disk 1. This is not how this system should function. Can anyone explain what I am seeing? Looks like a bug to me, folks.
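      One way to catch whatever is touching disk 1 would be an inotify watch on its mount (this assumes inotify-tools is available on the box; dlandon's File Activity plugin does essentially the same thing):
        inotifywait -m -r -e open,access,modify,create /mnt/disk1
        # a tree this size may need fs.inotify.max_user_watches raised before the recursive watch will stick
      Then kick off the Mover and see whether any events show up on /mnt/disk1 at all.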
  15. For several months now I have experienced problems with drive spin-up on my unRaid array. I am hoping that I can get some support from other users who may be experiencing some of these same symptoms. Considering the intermittent nature of this problem, it is difficult to pinpoint exactly when these crazy spin-ups started for me, but I believe it goes back to the 6.2.x releases or earlier. Apologies for the length of this report.
      My config:
      - unRaid 6.3.2 w/ 20 WD Red drives (12x 4TB, 6x 6TB, 1x 8TB parity, 1x 2TB Hitachi cache drive)
      - SuperMicro X10SLH-F motherboard w/ 8GB Kingston memory (upgraded to 16GB yesterday)
      - On-board SATA for 6 drives
      - SuperMicro SAS-2LP SAS/SATA controller (8 drives)
      - LSI 9207-8e SAS controller (6 drives in an external enclosure)
      - CyberPower CP1500 UPS
      Software: I run a handful of basic plug-ins (all of which were removed during my extensive troubleshooting of this problem): Unassigned Devices, Cache_Dirs, Recycle Bin, Tips and Tweaks, Preclear, Active Streams, System Stats, and UPS. I also installed the Lime Tech Plex docker only in the past couple of weeks. I have one share in play - Movies. The share spans all of the disks and is cached. I have around 3,100 folders containing approximately 160,000 files. My allocation method is high-water, and all but one disk is full. The disks are set for 30-minute spin-down and I do not use spin-up groups.
      The issue: I am randomly experiencing delays associated with disk spin-up when performing rather straightforward operations - such as creating a file/folder, playing a movie from the array, or running the Mover. Probably 90% of the time the system works as I expect. With the exception of opening a file, browsing operations and file creation work without delay. Yet the very next operation may cause me to wait while disks are sequentially spun up. This happens not just from a Windows SMB operation, but also when doing little more than poking around the GUI, where I might see a sudden locking of the interface as disks spin up. This GUI locking and spin-up would ALWAYS happen when checking dlandon's File Activity monitor - which was first introduced for this issue. The logs routinely show most or all of my disks being spun up in the middle of the night when the Mover is running. I primarily work from a Windows 7 SP1 workstation to upload content to unRaid. My other client machine is a Windows 10 Kodi workstation. I should also mention that my cache disk is set to never spin down (location of the Plex docker files).
      I admit it is possible that, prior to my knowledge, my system was unexpectedly spinning up drives. But I can say with conviction that it never caused much of a delay in my operations. I was accustomed to browsing into a folder that wasn't cached and waiting as the associated disk spun up. Now these delays are frustratingly longer as multiple disks are brought up one at a time.
      I opened an SSH session to my array last night and issued a basic find command to enumerate my Movies directory (basically, doing what Cache_Dirs does). This is the only folder set to cache in Cache_Dirs. When the array was first started last night, the find command initially took 20-30 seconds to complete. Subsequent (cached) runs would finish in about 4 seconds - even with all disks manually spun down. I re-ran the command several times as I created folders and fired the Mover to see what would happen. Most times the find operation would zip right through, but occasionally it would stall as a disk spun up. On two of the tests the Mover spun up an extra drive. Later I fired up a movie from my Kodi machine. It spun for the usual 5 seconds as the associated disk was spun up. Then I watched the movie sputter for the next couple of minutes, realizing that my array was busy spinning up several other disks. This morning I checked the find command and watched it run several times, completing in 4 seconds without a delay and without spinning up a drive (directory cached). Literally one minute later I tried to create a new folder in the Movies share from my Windows machine and waited for several minutes as all but one disk spun up. This is a perfect example of how my system has been performing.
      I rebuilt my array configuration from scratch several weeks ago. I also rebuilt my Windows machine, just to say that it too is clean. I do not believe this is related to SMB or client access. I also do not blame cache_dirs per se, but I believe the directory cache is being prematurely flushed for no apparent reason. My array has certainly grown in the past year with more files and more disks. Maybe I've hit a threshold of some kind? I have made changes to the cache pressure and vm.dirty settings recommended by others in this forum, to no avail (see the sketch below). Although I had no expectations for success, I finally gave in and purchased additional RAM.
      I am not only looking for suggestions; I would also like to hear from others with similar configurations - a large array with 15+ disks (w/ spin-down enabled) and running a cache drive. Take a look at your logs and see if your Mover operation needs to spin up more than the target disk that will be written to. If someone thinks the behavior I am seeing is normal, please explain it to me. Also, let me know if you are experiencing similar spin-up frustration that was previously not an issue. Thanks for your input.
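      For reference, the tuning knobs I mean are the standard Linux sysctls (the value shown is only one commonly suggested setting, not a recommendation):
        sysctl vm.vfs_cache_pressure vm.dirty_background_ratio vm.dirty_ratio   # show the current values
        sysctl -w vm.vfs_cache_pressure=10                                      # tell the kernel to hold dentry/inode cache longer
      In my case none of this tuning changed the behavior, which is part of why I suspect something other than cache settings.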