aminorjourney

Members
  • Posts: 9
  • URL: https://www.transportevolved.com


  1. Well, THIS is fun. I've got an unRAID server that acts as our video vault at work. It's where we store our B-roll, all previous YouTube episodes, etc. It's got a 2 TB NVMe SSD acting as a cache drive. It's connected to our network switch, a 10GbE-enabled Netgear XS712Tv2. Also connected to the switch are our iMac Pro edit machine with 10GbE networking and, when our part-timer is in the office, a MacBook Pro with a Promise SANLink3 T1 providing its 10GbE connection. We've made all of the usual tweaks to SMB networking on macOS 10.13.6 so that SMB signing is off, and we're accessing the data on the server over SMB. Our SMB server settings are:

        [global]
        ea support = yes
        vfs objects = catia fruit streams_xattr
        fruit:resource = file
        fruit:metadata = netatalk
        fruit:locking = none
        fruit:encoding = native
        max protocol = SMB2_02

     With Direct IO off, the Blackmagic speed test gives us around 750-820 MB/s read and 750-820 MB/s write. If I turn Direct IO on, write speed goes up to 1.1 GB/s, but read drops way down to 320 MB/s and doesn't go any higher.

     iPerf is solid both ways. It gives us full saturation (about 9.4 to 9.9 Gbit/s) to the iMac Pro and to the unRAID system (I've run iPerf as both client and server on both machines to test the connection in both directions). iPerf obviously doesn't care whether Direct IO is enabled or not, since it isn't writing to the unRAID system; it's just checking the physical network speed.

     iPerf running as server on the iMac Pro gives:

        [ ID] Interval           Transfer     Bandwidth
        [  5]   0.00-1.00   sec  1.09 GBytes  9.38 Gbits/sec
        [  5]   1.00-2.00   sec  1.15 GBytes  9.86 Gbits/sec
        [  5]   2.00-3.00   sec  1.15 GBytes  9.85 Gbits/sec
        [  5]   3.00-4.00   sec  1.15 GBytes  9.85 Gbits/sec
        [  5]   4.00-5.00   sec  1.15 GBytes  9.86 Gbits/sec
        [  5]   5.00-6.00   sec  1.15 GBytes  9.85 Gbits/sec
        [  5]   6.00-7.00   sec  1.15 GBytes  9.85 Gbits/sec
        [  5]   7.00-8.00   sec  1.15 GBytes  9.85 Gbits/sec
        [  5]   8.00-9.00   sec  1.15 GBytes  9.86 Gbits/sec
        [  5]   9.00-10.00  sec  1.15 GBytes  9.86 Gbits/sec
        [  5]  10.00-10.04  sec  47.5 MBytes  9.88 Gbits/sec
        - - - - - - - - - - - - - - - - - - - - - - - - -
        [ ID] Interval           Transfer     Bandwidth
        [  5]   0.00-10.04  sec  0.00 Bytes   0.00 bits/sec   sender
        [  5]   0.00-10.04  sec  11.5 GBytes  9.81 Gbits/sec  receiver

     unRAID running as iPerf client:

        [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
        [  4]   0.00-1.00   sec  1.14 GBytes  9.80 Gbits/sec    0   1.04 MBytes
        [  4]   1.00-2.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   2.00-3.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   3.00-4.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   4.00-5.00   sec  1.15 GBytes  9.86 Gbits/sec    0   1.04 MBytes
        [  4]   5.00-6.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   6.00-7.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   7.00-8.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        [  4]   8.00-9.00   sec  1.15 GBytes  9.86 Gbits/sec    0   1.04 MBytes
        [  4]   9.00-10.00  sec  1.15 GBytes  9.85 Gbits/sec    0   1.04 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        [ ID] Interval           Transfer     Bandwidth       Retr
        [  4]   0.00-10.00  sec  11.5 GBytes  9.85 Gbits/sec    0   sender
        [  4]   0.00-10.00  sec  11.5 GBytes  9.85 Gbits/sec        receiver

     unRAID running as iPerf server:

        [ ID] Interval           Transfer     Bandwidth
        [  5]   0.00-1.00   sec  1.15 GBytes  9.88 Gbits/sec
        [  5]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  5]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  5]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  5]   5.00-6.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  5]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  5]   8.00-9.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
        [  5]  10.00-10.00  sec  1.35 MBytes  9.34 Gbits/sec
        - - - - - - - - - - - - - - - - - - - - - - - - -
        [ ID] Interval           Transfer     Bandwidth
        [  5]   0.00-10.00  sec  0.00 Bytes   0.00 bits/sec   sender
        [  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec  receiver

     iMac Pro running iPerf as client:

        [  4]   0.00-1.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  4]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  4]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  4]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  4]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  4]   5.00-6.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  4]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec
        [  4]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  4]   8.00-9.00   sec  1.15 GBytes  9.89 Gbits/sec
        [  4]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
        - - - - - - - - - - - - - - - - - - - - - - - - -
        [ ID] Interval           Transfer     Bandwidth
        [  4]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec  sender
        [  4]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec  receiver

     In other words, everything is just peachy with the network interface. So why in the blazes is read so slow when Direct IO is switched on? I wouldn't mind, but we use the 2 TB NVMe SSD as our 'work' drive when editing video; when we're done working on a video, we write the data out to the spinners using the mover. If it's just me in the studio, I'll happily settle for 750-850 MB/s read and write, but I'd like to get that extra few hundred MB/s if I can. I rarely use that bandwidth, but with two people accessing the SSD at the same time, I'd like maximum performance to avoid lag. Attached: my system info, and thanks in advance ;) Nikki (currently perplexed of Portland) kranz-diagnostics-20180903-1051.zip
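     For reference, the iPerf runs above can be reproduced with invocations along these lines; this is a minimal sketch, the IP address is a placeholder, and the -R flag assumes iperf3 is in use:

        # On one end (unRAID or the iMac Pro), start an iperf3 server
        iperf3 -s

        # On the other end, run the client against the server's 10GbE address
        # (10.0.1.10 is a placeholder)
        iperf3 -c 10.0.1.10 -t 10

        # Reverse the direction (server transmits, client receives)
        # without swapping which machine runs the server
        iperf3 -c 10.0.1.10 -t 10 -R

     iPerf only pushes TCP over the wire, so it confirms the link and switch are healthy without ever touching the SSD or the Samba layer.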
  2. The NVMe is a 512 GB Samsung 960 Pro, formatted with Btrfs. And yeah, I may try that with the RAM -- I'll need to look up how to set that up, as my Linux skills are a little rusty! I should also add that I've been watching the network connection, and things look okay for now. I'll wait for the current work to finish (transcoding large video files, slowly) and then tweak the network settings for jumbo frames.
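     A minimal sketch of the RAM-based test mentioned above, assuming a tmpfs mount is what's wanted; the mount point and size are placeholders:

        # Create an 8 GB RAM-backed filesystem for throughput testing
        mkdir -p /mnt/ramtest
        mount -t tmpfs -o size=8g tmpfs /mnt/ramtest

        # ...point the speed test or a test share at /mnt/ramtest and re-run...

        # Tear it down afterwards (contents are lost on unmount or reboot)
        umount /mnt/ramtest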
  3. Yes, that's what I thought. I did try an MTU of 9,000 last week (so jumbo frames) and had the same fall-over, which brings me back to suspecting an issue with the actual cable. I have to work from home today as I'm looking after a kid, but I'm going to try to drop by the local cable store later and see if I can get a brand-new Cat 6e cable to use. Update: I've been watching the server remotely via our VPN... and interestingly it's the built-in gigabit NIC that's having issues, not the 10GbE one.
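     One way to confirm which interface is misbehaving is to watch the per-NIC error and drop counters; a sketch, assuming eth0 is the onboard gigabit port and eth1 the 10GbE card as in the profile further down:

        # Kernel-level RX/TX errors and drops per interface
        ip -s link show eth0
        ip -s link show eth1

        # Negotiated speed/duplex and link state, plus driver statistics
        # (filtering for anything that looks like an error counter)
        ethtool eth0
        ethtool -S eth0 | grep -iE 'err|drop|crc'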
  4. Yes, that's what I thought. I did try an MTU of 9,000 last week (so jumbo frames) and had the same fall-over, which brings me back to suspecting an issue with the actual cable. I have to work from home today as I'm looking after a kid, but I'm going to try to drop by the local cable store later and see if I can get a brand-new Cat 6e cable to use.
  5. I can't see that particular setting -- I can see the network and system status... but that's it. And I think that doesn't help me much anyway, since I just reset the server and those counters only cover the period since the last reboot... so I'll run some more tests and see if the errors reappear. I've turned the cache disk off to see if that helps things a little. I'm also wondering if the MTU setting needs changing on eth1. I did a ping test earlier and got no packet loss with ping -D -s 1472 <server>, so... I don't *think* that's the issue. Like I said, I'm going to get a nice Cat 6e cable later, as I suspect my cabling isn't up to par. I've got 10 days left to sort this out, so fingers crossed I can do it before I have to decide whether to buy this.
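     For the MTU question, the same don't-fragment ping can be extended to check whether a 9000-byte path actually works end to end; <server> stays a placeholder, and the payload sizes leave 28 bytes for the IP and ICMP headers:

        # From macOS: -D sets the don't-fragment bit, -s is the ICMP payload size
        ping -D -s 1472 <server>    # fits in a standard 1500-byte MTU
        ping -D -s 8972 <server>    # only gets through if jumbo frames work everywhere

        # Equivalent from the unRAID (Linux) side
        ping -M do -s 1472 <server>
        ping -M do -s 8972 <server>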
  6. Hi all! So I've got my unRAID 6.3.5 system up and running, and it's configured thusly (short profile below, full profile attached):

        Model:   Custom
        M/B:     Gigabyte Technology Co., Ltd. - Z77X-UP5 TH-CF
        CPU:     Intel® Core™ i7-3770K CPU @ 3.50GHz
        HVM:     Disabled
        IOMMU:   Disabled
        Cache:   128 kB, 1024 kB, 8192 kB
        Memory:  32 GB (max. installable capacity 32 GB)
        Network: eth0: 1000 Mb/s, full duplex, mtu 1500
                 eth1: 10000 Mb/s, full duplex, mtu 1500
        Kernel:  Linux 4.9.30-unRAID x86_64
        OpenSSL: 1.0.2k

     I've currently got two NICs, as you can see. The first is set up for general network access and is connected to the outside world through the standard gigabit switch in my office. The second NIC (10GbE) is configured with a static IP and is connected directly to my new iMac Pro, which has its own static address on the same network. This makes it possible to transfer files directly between the two computers at speeds in excess of 250 MB/s.

     I've tried using AFP, SMB, and NFS, but as soon as I push the network hard, the shares disappear. In the case of SMB and AFP shares, it causes the computer to crash. NFS doesn't cause a crash, but it does make things a little unhappy. If I turn off my cache drive (a 512 GB NVMe drive attached to the PCIe bus with an x4 adaptor), the shares aren't ejected and don't suffer any issues, presumably because I'm not pushing the NIC too hard. If I turn the cache drive on (or set it to "prefer" or "only", for example), the system gets all upset and eventually the share ejects.

     I'm making this post because I suspect the Occam's razor answer here is that I'm using a bad Ethernet cable and there's significant packet loss at high transfer speeds, BUT I want to make sure I'm not missing anything else. I'm not in the office for a few days, so it's going to take a while to get an answer, but if anyone has any suggestions/ideas/experience I haven't got, please share. Thanks in advance! Nikki. Profile.xml
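     For what it's worth, a direct 10GbE link like this is normally given its own small subnet so traffic can't spill onto the gigabit side; a sketch under that assumption, with placeholder addresses (in unRAID this would usually be set in Settings > Network Settings rather than by hand):

        # unRAID side: address the 10GbE NIC (eth1) on a dedicated subnet
        ip addr add 10.10.10.1/24 dev eth1
        ip link set eth1 up

        # iMac Pro side: configure the 10GbE interface manually as
        # 10.10.10.2 / 255.255.255.0 with no router, then mount the share
        # over the fast link explicitly, e.g. smb://10.10.10.1/<sharename>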
  7. So, just looping back here with a bit of an update. The 2720SGL is unusable, as feared. I'm going to run the system without it for a few days to see if that solves some of my problems, but it will be a micro unRAID system until I can get those extra four drives back online. I'm looking into the suggested card, and also a couple of other options. Nikki.
  8. Hi Frank, Thanks for this -- it's exactly what I wanted. I'm weighing a pro build against a DIY build, and I'm really not sure which is best. Obviously, the pro build (45 Drives Q30) would require a few months of saving, but I'm going to look at the LSI boards you suggested too, to see if I can use existing equipment to do the same thing. Nikki.
  9. Hi there! *waves* I'm considering a new unRAID setup for my studio, transforming my current editing machine (a pretty powerful Hackintosh) into an unRAID server that I can use to store Final Cut Pro video work on. My company is growing, and I'd like to build a NAS rather than have attached storage, so that other people can access my files if required. The computer I'm considering using as a base is a five-year-old Gigabyte Z77X-UP5-TH with 32 GB of RAM and an i7-3770K processor. I currently have 40 TB of internal attached storage, and once I've offloaded some of that footage to other drives, I'm hoping to use some of it in the unRAID system (formatted first, of course). I'm using a HighPoint 2720SGL card in my machine under Mac OS X, and it works fine providing a RAID 0 configuration for the SAS drives currently in my system. I'd love to keep using this RAID HBA in my unRAID build. Checking the compatibility charts, it says the HighPoint 2720 causes parity issues on 6.1.x and later, which I assume covers all releases beyond 6.1.x. BUT I know I can use the 2720SGL in plain old single-drive mode rather than relying on its onboard RAID chipset. Does anyone know whether THAT works with unRAID? Cheers, Nikki.
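     If the card does get tried in single-drive mode, a quick sanity check from the unRAID console is whether each disk shows up individually with its real model and serial and whether SMART data passes through, since unRAID relies on per-disk serials and SMART for assignment and monitoring; a sketch, with /dev/sdX as a placeholder:

        # Each physical drive should appear on its own, with its real model
        # and serial number, rather than as one large RAID volume
        lsblk -o NAME,SIZE,MODEL,SERIAL

        # SMART identity and full attribute dump through the controller
        smartctl -i /dev/sdX
        smartctl -a /dev/sdX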