MatzeHali

Members

  • Content Count: 28
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About MatzeHali

  • Rank: Member

  1. Solved this by enabling bridging for the second bond, too; that had slipped my attention before. With bridging enabled, I was able to select br1 within the VM settings (see the bridge sketch after this list).
  2. If I share my pool to NFS via zfs set sharenfs=on poolname, I can't access it from macOS via nfs://servername/poolname, which is the path I would expect. Where is it actually shared to? (A quick way to check is sketched after this list.) Thanks, M
  3. In the end, the easiest way to get the Nextcloud docker to behave normally on both subnets was to install a second container for the second interface (sketched after this list). Easy as pie, and so far no hiccups. The reason was that I wanted Nextcloud to behave like a "real" server with its own IP on both subnets, not just ports on the main IP, so that's the route I took. Thanks for all the help and suggestions. M
  4. Hi. The share is on the UnRAID box, but I have now solved it by connecting via WebDAV in the Finder, which works well (see the WebDAV sketch after this list). Cheers, M
  5. Hi there, I'm set up with a primary network bond (bond1) configured from the onboard eth0 and eth1, and a secondary bond2 from an additional 10GbE card (eth2 and eth3). How do I configure a VM to bridge to the second bond? With the normal br0 it bridges to bond1, and I can't get it off of that. If I put br1 into the XML, it throws an error that it's not available; same with bond2. Thanks, M
  6. Hi there, I've been running Nextcloud with your provided docker for a few weeks now and everything is smooth; there's just one question: when connecting to the appdata share via SMB, to directly up- or download files to the server without using the Nextcloud web interface, I can't access the data folder. It has a big one-way-street sign on it in Finder on macOS Catalina, and it makes no difference which user I log into the SMB share with. Is there a way to change that (a quick permissions check is sketched after this list), or is that by design, so I can only put files on the Nextcloud shares via the web interface? thx, M
  7. OK. But my server is reachable at, for example, 192.168.0.100 and 192.168.1.100, so I can reach the web interface and everything via the .100 addresses. The Nextcloud docker gets assigned its own IP (in my case 192.168.0.101), so what you are describing doesn't work, because it's only reachable at that one specific IP, and I would love it to also be reachable at 192.168.1.101 on the additional NIC. I'll try a few things over the next days and report back on what turned out to be the best solution. Thanks for your suggestions so far. M
  8. Since the slow and the fast networks have different IP addresses, I guess that doesn't work? I explicitly want different IPs on the different NICs, which are also different from the main server IP. Cheers, M
  9. Yes, I thought I would be able to make easy use of option 2, but since I'm accessing from macOS and talking about multiple user accounts on Nextcloud, and SMB on macOS only ever allows one user per share at a time, I thought I could connect via WebDAV instead, which would let me connect to multiple Nextcloud user folders with correct authentication. I do like the idea of a second instance of the Nextcloud docker and will try this. I guess there is a small risk that I would lose some settings if both instances wrote to the config file at the same time? Or is that not possible at all, because file locks would prevent it? Thanks, M
  10. Is the question so incomparably stupid and easy to fix that nobody even wants to bother enlightening me, or is it so complicated that nobody has a clue?
  11. Hi there, I have set up my UnRAID box so that it uses its onboard Gigabit LAN for internet connectivity and the streaming devices in the household (on the main IP 192.168.0.100), and it also has a dual 10GBit NIC installed, which connects to a second physical network that is MTU 9000 only and on a different subnet (192.168.1.100). Both of my workstations, which mostly use the storage, are likewise on both networks with separate NICs, to avoid traffic taking the wrong route. So: fast stuff always goes over 192.168.1.x, slow stuff and internet over 192.168.0.x. Now I have installed the Nextcloud docker, and it is correctly bridged through the onboard NIC and gets its own address, 192.168.0.101, so it is reachable from the internet. Is there a way to also bridge it to the 10GBit NIC and give it the additional address 192.168.1.101, so I can reach it over the 10Gigabit NIC from my main workstations when I want to copy a big deliverable file to it before sharing? Thanks, M
  12. Hey Supacon, did you ever find a solution to your problems? I'm having somewhat similar issues: shares I can write to very fast, but reads are capped with no chance of getting faster, and when I start multiple copies in Finder to the UnRAID server, it really craps out sometimes. It would be good to know if you've found a solution already. thx, M
  13. Hey guys, OK, after ruling out storage speed by tuning the server so that locally everything reads sequentially well beyond 1200 MiB/s (the main purpose of the UnRAID build), I ran into the following problem: when accessing the SMB share from macOS Catalina over 10GbE, with SMB signing and directory caching switched off, I get write speeds of about 800 to 900 MiB/s, which is totally fine even though it's roughly 66% of the theoretical 10GbE throughput, but read speeds are capped at 600 MiB/s max. That is, I see small spikes to 600 MiB/s every minute or so for a few seconds, and otherwise a very constant 570 MiB/s (plus/minus 10). I have tried tuning sysctl.conf on the Catalina machine, but since Catalina needs SIP disabled for this and the sysctl.conf had to be created from scratch, I don't even know how to check whether those settings are active (a quick check is sketched after this list). This is my sysctl.conf at the moment:

        # OSX default of 3 is not big enough
        net.inet.tcp.win_scale_factor=8
        # increase OSX TCP autotuning maximums
        net.inet.tcp.autorcvbufmax=33554432
        net.inet.tcp.autosndbufmax=33554432
        kern.ipc.maxsockbuf=67108864
        net.inet.tcp.sendspace=2097152
        net.inet.tcp.recvspace=2097152
        net.inet.tcp.delayed_ack=0

      Sadly, this didn't bring any performance gains I could measure. Any other Mac users out there on 10GbE with fast enough storage to confirm faster read speeds over SMB? What are your settings? Thanks, M
  14. Hey hey, so, after some decent tuning of the ZFS parameters and adding a cache drive to the UnRAID array, I'm quite happy with the performance of the pool and the array on the box. Running FIO locally (a command along these lines is sketched after this list), I'm getting anywhere from 1200 MiB/s to 1900 MiB/s sequential writes and up to 2200 MiB/s sequential reads on the ZFS pool, and between 800 MiB/s and 1200 MiB/s reads on the UnRAID cache drive. Since the box is mainly for video editing and VFX work, and this fully saturates a 10GbE connection, now on to the main problem: Samba. I'm playing around with sysctl.conf tunings on macOS at the moment, but since sysctl.conf is officially not supported anymore, even after deactivating SIP on Catalina I'm not sure it actually picks up those values. So, should I open a thread addressing the Samba/macOS problem on its own, or is someone here still reading this who could share advice? Thx, M
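
For posts 1 and 5, a minimal sketch, assuming an Unraid box where bridging has been enabled for both bonds and the second bridge comes up as br1 (names are taken from the posts; adjust to what your network settings actually show):

    # list the bridges libvirt can hand to a VM
    ip -br link show type bridge        # should show br0 (on bond1) and br1 (on bond2)

    # point the VM's network interface at the second bridge,
    # either via the VM form in the Unraid web UI or by editing the XML:
    virsh edit <vmname>
    #   <interface type='bridge'>
    #     <source bridge='br1'/>
    #   </interface>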
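
For post 2, a quick way to see where ZFS actually exported the pool; sharenfs=on exports the dataset at its mountpoint, which is not necessarily /poolname (server and pool names below are placeholders from the post):

    zfs set sharenfs=on poolname
    zfs get sharenfs,mountpoint poolname   # confirm the property and see the mountpoint
    showmount -e servername                # lists the paths the NFS server is exporting

    # from macOS, connect to whatever showmount reports, e.g.
    #   nfs://servername/mnt/poolname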
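
For posts 3, 7 and 11, a sketch of the second-instance route using a macvlan network on the 10GbE bridge; the network name, gateway, image and volume paths are assumptions, not taken from the posts:

    # a container network that hands out addresses on the 192.168.1.x (10GbE) side;
    # drop --gateway if that network has no router
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br1 fastlan

    # second Nextcloud instance with its own IP on that network
    docker run -d --name=nextcloud-fast --net=fastlan --ip=192.168.1.101 \
      -v /mnt/user/appdata/nextcloud-fast:/config \
      -v /mnt/user/nextcloud:/data \
      linuxserver/nextcloud

An alternative to a full second instance would be attaching the existing container to the second network with docker network connect fastlan nextcloud, but the posts settled on two instances.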
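
For posts 4 and 9, Nextcloud's per-user WebDAV endpoint is /remote.php/dav/files/<username>/; a sketch for mounting it from macOS, with the server address and username as placeholders:

    # Finder: Cmd-K (Connect to Server), then
    #   https://192.168.0.101/remote.php/dav/files/alice/
    # or from Terminal, using the stock macOS WebDAV mounter:
    mkdir -p ~/nextcloud-alice
    mount_webdav -i https://192.168.0.101/remote.php/dav/files/alice/ ~/nextcloud-alice

Each Nextcloud user can be mounted this way in parallel, which sidesteps the one-SMB-user-at-a-time limitation mentioned in post 9.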
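
For post 6, a purely diagnostic sketch: the one-way-street icon in Finder means the SMB user has no read permission on that directory, which can be confirmed from the Unraid shell (the path is an assumed appdata mapping; adjust to wherever the container's /data is mounted):

    ls -ld /mnt/user/appdata/nextcloud/data        # owner, group and mode of the data dir
    ls -l  /mnt/user/appdata/nextcloud/data | head # ownership of the files inside

Nextcloud deliberately keeps its data directory readable only by the account the container runs as, so whether loosening that is safe depends on the container's PUID/PGID setup; the posts ended up using WebDAV instead (post 4).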
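
For post 13, a way to check whether the sysctl.conf values are actually live, and to set them at runtime for testing; the OID names are the ones from the posted file:

    # read the current values (compare against what sysctl.conf sets)
    sysctl net.inet.tcp.win_scale_factor net.inet.tcp.autorcvbufmax \
           net.inet.tcp.autosndbufmax kern.ipc.maxsockbuf net.inet.tcp.delayed_ack

    # apply a value immediately, without a reboot, to see if it changes anything
    sudo sysctl -w net.inet.tcp.autorcvbufmax=33554432
    sudo sysctl -w net.inet.tcp.delayed_ack=0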
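
For post 14, the kind of FIO run that produces such sequential numbers, sketched with assumed paths and job sizes (run it directly on the server so the network is out of the picture):

    mkdir -p /mnt/zfspool/bench
    # sequential write, then sequential read, 1 MiB blocks
    fio --name=seqwrite --directory=/mnt/zfspool/bench --rw=write --bs=1M \
        --size=10G --numjobs=1 --ioengine=psync --group_reporting
    fio --name=seqread  --directory=/mnt/zfspool/bench --rw=read  --bs=1M \
        --size=10G --numjobs=1 --ioengine=psync --group_reporting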