tyranuus

  1. Yes, hoping the 2.5GbE will last a few years at the very least. The remaining switches to upgrade from gigabit to 2.5GbE should arrive tomorrow.
  2. Card arrived and I'm now seeing throughput to the SSD mounted standalone in the server of just shy of 300MB/s sequential read/write, around 260-270MB/s read / 220-260MB/s write on layered 4K, and around 19MB/s read on raw 4K Q1T1, albeit write is around 9MB/s; so below what I'd get connected locally, but nothing to shake a stick at. I can only imagine this improving when the Samba performance regressions are fixed. It also shows that if you get proper switches and at least a half-decent NIC for your server, you can get good results out of 2.5GbE. It also means that once the final switches arrive, we will be able to max out the 1Gbit/s internet connection and still have 150MB/s+ capacity available for other traffic from the NAS or between devices (see the capacity sketch after this list). Win!
  3. I managed to pick up a last-remaining generic 10GbE NIC based on an Aquantia AQC107 chip for £45 from Amazon, which seemed like a bit of a bargain. Doing a quick check, the TXA074 looks like it's a legit Chinese card based on that chipset, and it usually goes for a lot more. Assuming it arrives working correctly, based on everything I've read it is ideal as a server card, and if there are any issues I can at least go via Amazon. That should be the server 10GbE card, 2x 8-port and 2x 5-port switches for sub-£400, with arrival times across the next couple of weeks.
  4. Hi guys, I've been wanting to overhaul my network to 2.5GbE for some time, for a bit of 'futureproofing' and also as my laptops and desktops for the last 4-5 years have all been 2.5GbE enabled. My router is also 2.5GbE on LAN and WAN for when I get FTTP faster than 1Gbit/s. What had been putting me off was the cost of switching equipment, which was typically still £200+, especially for 8-port switches. Anyway, Amazon UK is finally doing an offer on 2.5GbE Zyxel switches (5-port for 69.99 / 8-port for 99.99), so I am planning to make a move; 5/10GbE is still way too expensive in the UK and it'll never fly past the wife. Most of my network is ready, it just needs switches and my servers moved over (I'll need 4 switches, so it adds up). I had been digging through posts, and most information seemed to be either quite outdated or using the Realtek NICs, which seem not particularly great on Linux. If I am reading correctly, the Intels aren't great either, but it's either them or Marvell/Aquantia to get acceptable performance. For what it's worth, the Zyxel kit is meant to support flow control. I can find Intel 2.5GbE NICs on Amazon for around £20-30, so pretty acceptable. Is my understanding above correct? (There's a quick driver/link-speed check sketched after this list.)
  5. Thank you, that is great to know. I assume that realistically, once Samba 4.18 is incorporated into a beta/RC, it should just require an Unraid OS update to unlock the benefits, and shouldn't require any additional configuration changes?
  6. Thank you, that is good to know. Is there any rough roadmap for how long it is likely to be before 6.13 hits? I suspect it won't be inside the trial period.
  7. I am trying to find out. The server was running OMV 4.1.36, which apparently was based on Debian 9 and released in 2018, if that helps. Edit: if I'm reading the logs correctly, it's using Samba 4.5.1, so a decent chunk older than the Samba 4.17 that Unraid is currently using, but still within the Samba 4.x family. Edit 2: I've just realised the other server only has 4GB of RAM, so it's even more anaemic relative to the new box than I had thought, which makes it even stranger that, even with caching involved, an i3-2110 and 4GB of RAM are producing a higher 4KB Q1T1 result than this Unraid server running an i5-3570K and 24GB of RAM!
  8. Unfortunately disabling the CPU kernel mitigations did nothing for the 4KB performance; if anything, speed seemed to decrease very slightly, so it does not seem to be CPU overhead causing this issue. I mean, I'm not devastated by close to 10MB/s 4KB single-thread reads and writes, which is still WAY higher than any single HDD can achieve even locally, so the SSD should still achieve its goals. But knowing how much performance is being left on the table, not just compared to having the drive installed locally but also compared to OMV4 returning double those figures (even if it's some sort of cache read), seems a bit wrong, especially knowing the older server has a weaker CPU, both in terms of cores and clock speeds, and also considerably less (and I believe slower) RAM. As it is, Unraid is returning very similar figures from the HDD array and the standalone SSD, at least for 4K Q1T1 reads/writes, which suggests a bottleneck elsewhere in Unraid, and as the old server is returning higher figures with weaker hardware, something seems to be going on. I also checked the dashboard whilst the 4K test was running, and none of the individual CPU cores are getting anywhere near 100%, so it's not a single-core bottleneck either. I just took a look at the Samba configuration on the old server, and it looks like it's configured with: use sendfile (on); asynchronous IO enabled; socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536; read raw = yes; write raw = yes; max xmit = 65535; getwd cache = yes. Doing a quick read on Unraid's default Samba config, it sounds like it's not far off these already, so I'm not sure it's anything in the configuration but something else going on, almost like something somewhere has a hard limit written in which is only allowing 9.5-10MB/s? (See the rough IOPS/latency arithmetic after this list.)
  9. OK, thanks, I will try disabling the kernel mitigations and see if that impacts performance. It just caught me out that OMV4 (whether it's cached data or not) was reporting roughly double the 4KB single-thread performance of Unraid via Samba, using considerably slower/less capable hardware, on the same network.
  10. Apologies, I will have to dig later; it is mounted by the Unassigned Devices plugin, and I am not sure if that is using /mnt/user or not.
  11. Would FUSE still be an issue, given I am testing using an SSD which is not part of the array and is just an unassigned drive being mounted/shared separately?
  12. Hi guys, I have just set up a 'new' NAS using the specs below, running an Unraid trial to get the feel for the platform, and in most ways I really like it. Overall functionality seems good whilst not being overwhelming, the array functionality seems to work well, as does the modularity of it compared to ZFS via TrueNAS for example, and the parity on the main array is really reassuring: provided more than two discs don't go at once, there is still parity/redundancy to keep moving and rebuild without losing data. All this being said, whilst general read/write on larger files is fine, I've noticed that 4KB/small-file performance seems distinctly lacking or capped compared to the prior OMV server, and I'm hoping someone could help me figure out why?
      NAS specs:
        • i5-3570K, 24GB DDR3-1600
        • Array: 7x WD Red 3TB via IBM M1015 HBA SAS card (1 being used as parity)
        • 1x Toshiba 7200RPM 4TB drive via IBM M1015 HBA SAS (also being used as parity)
        • Unassigned drive: 1x Samsung 860 Evo 1TB connected directly to Intel 6Gb/s SATA
        • 1Gbit/s LAN
      Now, if I run a 1GB CrystalDiskMark test on the array network share, I see around 9.5MB/s 4KB Q1T1 reads, which seems incredibly high for HDDs, so some caching is going on. However, if I run it on the SSD in the system, which is not part of the array and is just mounted separately, I see more or less exactly the same read speed, which should definitely not be the case unless there is a bottleneck; the SSD should have a substantial lead over HDDs, 5400RPM ones at that. The SSD is mildly higher, but we're talking a few hundred KB higher, closer to 10MB/s. If the drive were local, I know it'd be hitting around 40MB/s read. Now, initially I'd just put that down to the overhead of Samba; frankly, the read and write speeds for anything except 4KB Q1T1 are basically maxing gigabit. The bit that concerned me, however, is that when I run the same test on a HDD in the old server, which was running OMV 4, it sees around 20MB/s on 4KB reads in the same Q1T1 setting. Now again, there's no way a HDD could produce those sorts of 4KB results, even a 7200RPM drive (usually they'd be less than 1MB/s), so there is definitely some cache being used or similar, but the old server is running an i3-2110 and 8GB RAM, notably inferior hardware, so it seems off that it is returning higher 4KB single-thread read results full stop, even if they're not reflective of true drive performance. Any thoughts? (There's also a quick 4K Q1T1 read timing sketch after this list.)
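
Capacity sketch referenced from post 2: a rough calculation of the headroom left on a 2.5GbE link once a 1Gbit/s internet connection is saturated. The 0.94 efficiency factor is an assumed allowance for Ethernet/IP/TCP overhead, not a measured value, so treat the outputs as ballpark figures only.

```python
# Rough usable-throughput figures for the headroom estimate in post 2.
# The 0.94 efficiency factor is an assumption for protocol overhead.

def usable_mb_per_s(link_gbps: float, efficiency: float = 0.94) -> float:
    """Approximate usable MB/s for a nominal link rate in Gbit/s."""
    return link_gbps * 1000 / 8 * efficiency

lan = usable_mb_per_s(2.5)   # ~294 MB/s on the 2.5GbE LAN
wan = usable_mb_per_s(1.0)   # ~118 MB/s for a 1Gbit/s internet connection

print(f"2.5GbE usable:      ~{lan:.0f} MB/s")
print(f"1Gbit/s WAN usable: ~{wan:.0f} MB/s")
print(f"Headroom left:      ~{lan - wan:.0f} MB/s")  # roughly the 150MB/s+ mentioned above
```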
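
Driver/link-speed check referenced from post 4: a minimal sketch, assuming a Linux box and that the interface name ("eth0" below) is substituted for your own, for confirming which kernel driver a NIC is bound to and the speed it has negotiated. It only reads the standard sysfs entries, so it is a quick sanity check rather than anything authoritative.

```python
# Report the kernel driver and negotiated link speed for a NIC via sysfs.
# "eth0" is a placeholder interface name.
import os

def nic_info(iface: str = "eth0") -> dict:
    base = f"/sys/class/net/{iface}"
    # The driver symlink resolves to e.g. .../drivers/igc (Intel 2.5GbE),
    # .../drivers/r8169 (Realtek) or .../drivers/atlantic (Aquantia AQC1xx).
    driver = os.path.basename(os.path.realpath(os.path.join(base, "device", "driver")))
    try:
        with open(os.path.join(base, "speed")) as f:  # negotiated speed in Mbit/s
            speed = f.read().strip()
    except OSError:                                   # link down / speed not reported
        speed = "unknown"
    return {"interface": iface, "driver": driver, "speed_mbps": speed}

if __name__ == "__main__":
    print(nic_info("eth0"))
```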
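
Rough IOPS/latency arithmetic referenced from post 8: what a given 4KB queue-depth-1 throughput implies in requests per second and time per request. At ~9.5-10MB/s each 4KB request is taking roughly 0.4ms, which is in the ballpark of a LAN round trip plus SMB processing, so per-request latency rather than a hard bandwidth cap could be what produces the plateau. The figures below are purely illustrative.

```python
# Convert a 4 KiB queue-depth-1 throughput into IOPS and per-request latency.
BLOCK = 4 * 1024  # bytes per request

def q1t1(throughput_mb_s: float) -> tuple[float, float]:
    """Return (IOPS, milliseconds per request) for a 4 KiB QD1 stream."""
    iops = throughput_mb_s * 1_000_000 / BLOCK
    return iops, 1000.0 / iops

# 9.5 MB/s ~ Unraid share, 20 MB/s ~ old OMV box, 40 MB/s ~ local SSD figure above
for mb_s in (9.5, 20, 40):
    iops, lat = q1t1(mb_s)
    print(f"{mb_s:>4} MB/s at 4K Q1T1 ~= {iops:,.0f} IOPS ~= {lat:.2f} ms per request")
```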
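
4K Q1T1 read timing sketch referenced from post 12: a minimal single-threaded random 4KB read timer to run from a client against a file on the share, as a rough cross-check alongside CrystalDiskMark. The path, file and request count below are placeholders/assumptions; client-side caching will flatter the result unless the test file is larger than the client's RAM.

```python
# Rough single-threaded random 4 KiB read timer (queue depth 1) against a
# pre-created file on the SMB share. PATH is a placeholder.
import os
import random
import time

PATH = "/mnt/remotes/nas/testfile.bin"   # hypothetical mount point and file
BLOCK = 4096                             # 4 KiB per request
OPS = 20_000                             # ~80 MB of reads in total

def rnd4k_q1t1_read(path: str) -> float:
    """Return MB/s for 4 KiB random reads issued one at a time."""
    size = os.path.getsize(path)
    blocks = (size - BLOCK) // BLOCK     # number of aligned offsets available
    rng = random.Random(0)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered: one request per read()
        for _ in range(OPS):
            f.seek(rng.randrange(blocks) * BLOCK)
            f.read(BLOCK)
    elapsed = time.perf_counter() - start
    return OPS * BLOCK / 1_000_000 / elapsed

if __name__ == "__main__":
    print(f"4K Q1T1 random read: {rnd4k_q1t1_read(PATH):.1f} MB/s")
```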