Everything posted by MatzeHali

  1. Yes, I thought I would be able to make easy use of option 2, but since I'm accessing from macOS and talking about multiple user accounts on Nextcloud, and SMB on macOS only ever allows one authenticated user per server at a time, I thought I could connect via WebDAV instead, which would let me connect to multiple Nextcloud user folders with the correct authentication (rough mount sketch below). I do like the idea of a second instance of the Nextcloud Docker and will try this. I guess there is a small risk that I would lose some settings if both instances wrote to the config file at the same time? Or is that not possible at all, because file locks would prevent it? Thanks, M
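     What I mean by the WebDAV mounts, as a rough sketch only (the host name, user names and mount points are placeholders, not my real setup):

     # mount each Nextcloud user's files over WebDAV from macOS (placeholders throughout)
     mkdir -p /Volumes/nc-alice /Volumes/nc-bob
     mount_webdav -i https://cloud.example.com/remote.php/dav/files/alice/ /Volumes/nc-alice
     mount_webdav -i https://cloud.example.com/remote.php/dav/files/bob/ /Volumes/nc-bob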
  2. Is the question so incomparably stupid and easy to fix that nobody even wants to bother enlightening me, or is it so complicated that nobody has a clue?
  3. Hi there, I have set up my UnRAID box so it uses its onboard Gigabit LAN for connectivity to the internet and to streaming devices in the household (using the main IP 192.168.0.100), and I also have a dual 10GBit NIC installed, which connects to a second physical network that is MTU 9000 only and on a different subnet (192.168.1.100), since both workstations that mainly use the storage are also on both networks with separate NICs, and I want to avoid traffic taking the wrong route. So fast stuff always goes through 192.168.1.x, slow stuff and internet through 192.168.0.x. Now I have installed the Nextcloud Docker, and it gets correctly bridged through the onboard NIC and its own address 192.168.0.101, so it is reachable from the internet. Is there a possibility to also bridge it to the 10GBit NIC and give it the additional address 192.168.1.101, so I can reach it over the 10Gigabit NIC from my main workstations if I want to copy a big deliverable file to it before sharing it? (Something like the sketch below is what I have in mind.) Thanks, M
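     What I'm imagining, as a sketch only (the interface name eth1 and the container name nextcloud are assumptions, not my actual values):

     # create a second Docker network on the 10GbE interface
     docker network create -d macvlan \
       --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
       -o parent=eth1 br10g
     # attach the running Nextcloud container to it with a fixed address
     docker network connect --ip 192.168.1.101 br10g nextcloud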
  4. Hey Supacon, did you find a solution to your problems already? I'm having somewhat similar issues: shares I can write to very fast, but reading is capped with no chance of getting faster, and when I start multiple copies in Finder to the UnRAID server, it really craps out sometimes. Would be good to know if you already found a solution. thx, M
  5. Hey guys, OK, after ruling out storage speeds by tuning the server so that locally everything is set up for sequential read speeds well beyond 1200MiB/s, the main purpose of the UnRAID build, I ran into the following problem: when accessing the SMB share from macOS Catalina over 10GbE, with SMB signing and directory caching switched off, I get write speeds of about 800 to 900 MiB/s, which is totally fine even though it's roughly 66% of the theoretical 10GbE throughput, but read speeds are capped at 600MiB/s at most, meaning I'm seeing small spikes to 600MiB/s every minute or so for a few seconds and otherwise very constant 570MiB/s (plus/minus 10). I have tried to tune sysctl.conf on the Catalina machine, but since Catalina needs SIP disabled for this and the sysctl.conf had to be created from scratch, I don't even know how to check whether those settings are active (a sketch of how they could be read back is below). This is my sysctl.conf at the moment:

     # OSX default of 3 is not big enough
     net.inet.tcp.win_scale_factor=8
     # increase OSX TCP autotuning maximums
     net.inet.tcp.autorcvbufmax=33554432
     net.inet.tcp.autosndbufmax=33554432
     kern.ipc.maxsockbuf=67108864
     net.inet.tcp.sendspace=2097152
     net.inet.tcp.recvspace=2097152
     net.inet.tcp.delayed_ack=0

     Sadly, this didn't bring any performance gains I could measure. Any other Mac users out there using 10GbE with fast enough storage who can confirm faster read speeds over SMB? What are your settings? Thanks, M
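     For reference, roughly how the live values could be read back to see whether the file is being applied at all (a sketch, assuming the standard macOS sysctl tool):

     # compare the running kernel values against what sysctl.conf sets
     sysctl net.inet.tcp.win_scale_factor net.inet.tcp.autorcvbufmax net.inet.tcp.autosndbufmax
     sysctl kern.ipc.maxsockbuf net.inet.tcp.sendspace net.inet.tcp.recvspace net.inet.tcp.delayed_ack
     # a value can also be set on the running system without the file, e.g.
     sudo sysctl -w net.inet.tcp.delayed_ack=0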
  6. Hey hey, so, after some decent tuning of the ZFS parameters and adding a cache drive to the UnRAID array, I'm quite happy with the performance of the pool and the UnRAID array on the box: running FIO there, I'm getting anywhere from 1200MiB/s to 1900MiB/s sequential writes and up to 2200MiB/s sequential reads on the ZFS pool, and between 800MiB/s and 1200MiB/s reads on the UnRAID cache drive. Since the box is mainly for video editing and VFX work and this fully saturates a 10GbE connection, now on to the main problem: Samba. I'm playing around with sysctl.conf tunings on macOS at the moment, but since sysctl.conf is officially not supported anymore, even after disabling SIP on Catalina I'm not sure it actually takes those values and uses them. So, should I open a thread addressing the Samba/macOS problem on its own, or is someone here still reading this who would be able to share advice? Thx, M
  7. I'll try some stuff with the NVMe tomorrow. Seeing those FIO results directly on the server with the spinning rust gives me hope that there's probably no need for any cache disks or similar, so I can use the NVMe as passthrough for a VM and instead try NFS to see if that sets me up better on the networking side, since apparently SMB is the limiting factor here, be it on the UnRAID or on the macOS side. The problem will be getting ZFS to share over NFS consistently, but that's one step further (roughly along the lines of the sketch below). First I'll check whether it actually goes faster at all. Cheers and thanks for your input so far, M
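     What I have in mind for the NFS side, as a rough sketch only (the pool and dataset names and the subnet are placeholders, not a tested setup):

     # export a ZFS dataset over NFS via the sharenfs property
     zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/projects
     # verify what is actually exported
     zfs get sharenfs tank/projects
     showmount -e localhost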
  8. Do you mean assign it as a cache drive for the UnRAID array? Just to make sure I'm understanding correctly. At the moment I'm running some FIO benchmarks on the box (the kind of command used is sketched after the results) and getting these read speeds from the ZFS array doing sequential reads bigger than the available RAM, without any L2ARC cache drive attached to the pool:

     seqread: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
     ...
     fio-3.15
     Starting 32 processes
     seqread: Laying out IO file (1 file / 262144MiB)
     Jobs: 32 (f=32): [R(32)][100.0%][r=16.4GiB/s][r=16.8k IOPS][eta 00m:00s]
     seqread: (groupid=0, jobs=32): err= 0: pid=29109: Mon Jun 29 15:55:39 2020
       read: IOPS=17.2k, BW=16.8GiB/s (18.1GB/s)(8192GiB/487301msec)
         slat (usec): min=58, max=139996, avg=1848.09, stdev=1446.29
         clat (nsec): min=348, max=4571.0k, avg=3394.21, stdev=7243.75
          lat (usec): min=59, max=140000, avg=1853.75, stdev=1446.70
         clat percentiles (nsec):
          |  1.00th=[   956],  5.00th=[  1336], 10.00th=[  1608], 20.00th=[  2024],
          | 30.00th=[  2352], 40.00th=[  2640], 50.00th=[  2928], 60.00th=[  3184],
          | 70.00th=[  3440], 80.00th=[  3792], 90.00th=[  4448], 95.00th=[  5856],
          | 99.00th=[ 17280], 99.50th=[ 20352], 99.90th=[ 26752], 99.95th=[ 37120],
          | 99.99th=[111104]
        bw (  MiB/s): min= 3007, max=18110, per=99.90%, avg=17196.46, stdev=33.55, samples=31168
        iops        : min= 3005, max=18110, avg=17183.94, stdev=33.48, samples=31168
       lat (nsec)   : 500=0.01%, 750=0.19%, 1000=1.09%
       lat (usec)   : 2=17.99%, 4=64.96%, 10=12.76%, 20=2.45%, 50=0.51%
       lat (usec)   : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
       lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
       cpu          : usr=0.66%, sys=80.82%, ctx=9014952, majf=0, minf=8636
       IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
          submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          issued rwts: total=8388608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
          latency   : target=0, window=0, percentile=100.00%, depth=1

     Run status group 0 (all jobs):
        READ: bw=16.8GiB/s (18.1GB/s), 16.8GiB/s-16.8GiB/s (18.1GB/s-18.1GB/s), io=8192GiB (8796GB), run=487301-487301msec

     And here is the same size config as sequential writes:

     seqwrite: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
     ...
     fio-3.15
     Starting 32 processes
     seqwrite: Laying out IO file (1 file / 262144MiB)   [repeated for all 32 jobs]
     Jobs: 32 (f=32): [W(32)][100.0%][w=11.9GiB/s][w=12.2k IOPS][eta 00m:00s]
     seqwrite: (groupid=0, jobs=32): err= 0: pid=22114: Mon Jun 29 16:14:14 2020
       write: IOPS=8024, BW=8025MiB/s (8414MB/s)(4702GiB/600003msec); 0 zone resets
         slat (usec): min=77, max=158140, avg=3949.10, stdev=4442.43
         clat (nsec): min=558, max=63118k, avg=14186.33, stdev=71735.56
          lat (usec): min=78, max=158155, avg=3970.60, stdev=4445.31
         clat percentiles (usec):
          |  1.00th=[    3],  5.00th=[    4], 10.00th=[    5], 20.00th=[    7],
          | 30.00th=[    8], 40.00th=[   10], 50.00th=[   14], 60.00th=[   16],
          | 70.00th=[   18], 80.00th=[   20], 90.00th=[   23], 95.00th=[   25],
          | 99.00th=[   31], 99.50th=[   36], 99.90th=[   56], 99.95th=[  141],
          | 99.99th=[ 2737]
        bw (  MiB/s): min= 2346, max=14115, per=99.87%, avg=8014.13, stdev=101.56, samples=38383
        iops        : min= 2339, max=14114, avg=8008.87, stdev=101.61, samples=38383
       lat (nsec)   : 750=0.01%, 1000=0.01%
       lat (usec)   : 2=0.23%, 4=6.22%, 10=33.92%, 20=41.43%, 50=18.09%
       lat (usec)   : 100=0.05%, 250=0.03%, 500=0.01%, 750=0.01%, 1000=0.01%
       lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
       lat (msec)   : 100=0.01%
       cpu          : usr=1.07%, sys=55.61%, ctx=4682973, majf=0, minf=442
       IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
          submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          issued rwts: total=0,4814770,0,0 short=0,0,0,0 dropped=0,0,0,0
          latency   : target=0, window=0, percentile=100.00%, depth=1

     Run status group 0 (all jobs):
       WRITE: bw=8025MiB/s (8414MB/s), 8025MiB/s-8025MiB/s (8414MB/s-8414MB/s), io=4702GiB (5049GB), run=600003-600003msec
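     For reference, the kind of fio invocation behind a run like the sequential-read one above, as a sketch only (the directory path is a placeholder and the flags are an assumption, not the exact command used):

     fio --name=seqread --directory=/mnt/tank/fiotest --rw=read \
         --bs=1M --size=256G --numjobs=32 --ioengine=libaio --iodepth=1 \
         --group_reporting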
  9. So, here's the result of the iperf run, which was done with the -d switch, so this speed goes in both directions:

     Accepted connection from 192.168.0.51, port 55664
     [  5] local 192.168.0.111 port 5201 connected to 192.168.0.51 port 55665
     [ ID] Interval           Transfer     Bandwidth
     [  5]   0.00-1.00   sec  1.15 GBytes  1174 MBytes/sec
     [  5]   1.00-2.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   2.00-3.00   sec  1.15 GBytes  1177 MBytes/sec
     [  5]   3.00-4.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   4.00-5.00   sec  1.15 GBytes  1177 MBytes/sec
     [  5]   5.00-6.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   6.00-7.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   7.00-8.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   8.00-9.00   sec  1.15 GBytes  1176 MBytes/sec
     [  5]   9.00-10.00  sec  1.15 GBytes  1176 MBytes/sec
     [  5]  10.00-10.00  sec  1.83 MBytes  1255 MBytes/sec

     I understand that I have to live with some overhead in SMB, but consistently reading at less than half of that seems odd. Thanks for any ideas.
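     For anyone wanting to reproduce, the basic invocation pattern is roughly this (server on the UnRAID box, client on the Mac; the exact flags are an assumption and the bidirectional switch is left out here):

     # on the UnRAID box
     iperf3 -s
     # on the Mac, reporting in MBytes/sec for ten seconds
     iperf3 -c 192.168.0.111 -f M -t 10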
  10. Hi Johnnie, I have not. I'll try to find out how to do that from a Mac to an UnRAID box and report back. Thanks for pointing me in that direction. While I'm reasonably comfortable with this technical stuff and can learn quickly, I'm more the creative mind and lack a lot of background, so forgive me if it takes me a day or two to get this going. Cheers, M
  11. Hi Johnnie, since this is not a ZFS-specific question but rather an SMB-specific one, I thought I'd go with the general forum: from the box itself, the read and write speeds are much higher than that, so it seems to be a network rather than a ZFS problem. Thanks, M edit: I have now also tested with a share on the UnRAID array, and it's the same pattern: write speeds at about 840MB/s, reads at 570-580MB/s.
  12. Hi there, I'm trying to find a sweet spot for a ZFS configuration with 15 disks and potentially an L2ARC, and am therefore benchmarking around a bit. The configuration is a 16-core XEON with 144GB of RAM, 15 WD RED 14TB drives, a 1TB EVO 970 NVMe and 10GbE ethernet connectivity. I access it via Samba from macOS. I'm trying different RAID-Z configurations with a 16GB write-once, read-multiple-times scenario: writing directly to the NVMe as a single-drive pool (for testing only) I can reach 880MB/s, and reading from that same place multiple times I get 574MB/s. Writing to a 2x 7-disk raidz1 pool (layout roughly as sketched below) I'm getting write speeds of about 760MB/s, but the exact same read speed of 574MB/s. Since the file is easily small enough to fit in the ARC, I would assume that at the latest with the second read it comes from RAM, so I doubt the read speed on the box is the problem. So my question is: why is my read speed capped at 574MB/s? MTU is set to 9000 and SMB signing is switched off; I don't have any other ideas to try. Thanks, M
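     The pool layout I'm referring to, as a sketch (pool and device names are placeholders for my actual drives):

     # two 7-disk raidz1 vdevs in one pool; the 15th disk stays out as a spare or L2ARC candidate
     zpool create tank \
       raidz1 sda sdb sdc sdd sde sdf sdg \
       raidz1 sdh sdi sdj sdk sdl sdm sdn
     zpool status tank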
  13. To clarify, I just used my MacBook with an older OS. I was not able to get it working under Catalina. Cheers, M
  14. Awesome, thanks. I'll try to put together a fairly extensive testing scenario before I go into any kind of production state, so I'll report back if I hit any reliability problems, hopefully with an extensive performance comparison between raidz2 configurations and a corresponding dRAID vdev, not only in rebuild times, which will definitely be much faster, but also in IO performance. Cheers, M
  15. Dear Steini84, thanks for enabling ZFS support on UnRAID, which makes this by far the best system for storage solutions out there, having the chance of creating ZFS pools of any flavour plus the possibility of an UnRAID pool on the same machine. I'm just starting to do testing on my machine, stumbled over the dRAID vdev driver documentation, and was wondering whether you could include that option in your build? I know that by ZFS standards this is far from production-ready software, but since I'm testing around, I'd be really interested in what performance gains I'd get with a 15+3 distributed spares draid1 setup compared to a 3x 4+1 raidz1 vdev pool, for example (something along the lines of the sketch below). Thanks. M
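     For illustration, roughly what such a dRAID layout could look like, assuming the draid syntax as documented for OpenZFS (parity : data disks per group : children : distributed spares); all names and numbers here are assumptions and this is not a tested command:

     # single-parity dRAID over 15 drives with 3 distributed spares (one possible reading of "15+3")
     zpool create tank draid1:3d:15c:3s sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo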
  16. Hi there, I'm running USB Creator 1.6 on macOS 10.14.6 Mojave on a MacBook Pro, and when trying to customize the install, in the Server name field I can't type with the native keymapping macOS is using (I'm on a Dvorak layout); it apparently defaults to US-QWERTY. This is the first time I've seen that in an application, so I thought I'd let you know that this is definitely not expected behaviour of any software under macOS. Cheers, M
  17. Hi guys, I wanted to create my first test install of UnRAID today, got the USB Creator for macOS, started it from the Applications folder with admin privileges, and got the following error message:

     PasteBoard: Error creating pasteboard: com.apple.pasteboard.clipboard [-4960]
     PasteBoard: Error creating pasteboard: com.apple.pasteboard.find [-4960]
     2020-05-24 15:24:19.491 Unraid USB Creator[84675:30215356] The application with bundle ID com.limetech.UnraidUC is running setugid(), which is not allowed. Exiting. (1)

     I'll try with a different machine; I just thought you might want to know that there's a problem with the tool. Cheers, M
  18. I thought I would internally number the ports and route the cables so that each port number maps to the correct bay, but that plugin looks much easier, so no sweat there. Seems a perfect solution. Thanks for the answer. M
  19. Hi there, I'm planning a storage server based on UnRAID, and the only central question I have before I can plan the hardware is: how does UnRAID identify SATA disks? If my build includes an HBA card with 4 ports, will it list the drives by port? I'm asking because, while there are of course server-grade hard drive enclosures out there that can identify a failed drive so you could just pull it, if the build is a bit more budget, how will I be able to identify the failed drive? Thanks and have a great day, Matthias Halibrand